

# Replicating objects within and across Regions

You can use replication to enable automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can replicate objects to a single destination bucket or to multiple destination buckets. The destination buckets can be in different AWS Regions or within the same Region as the source bucket.

There are two types of replication: *live replication* and *on-demand replication*.
+ **Live replication** – **To automatically replicate new and updated objects** as they are written to the source bucket, use live replication. Live replication doesn't replicate any objects that existed in the bucket before you set up replication. To replicate objects that existed before you set up replication, use on-demand replication.
+ **On-demand replication** – **To replicate existing objects** from the source bucket to one or more destination buckets on demand, use S3 Batch Replication. For more information about replicating existing objects, see [When to use S3 Batch Replication](#batch-replication-scenario).

There are two forms of live replication: *Cross-Region Replication (CRR)* and *Same-Region Replication (SRR)*.
+ **Cross-Region Replication (CRR)** – You can use CRR to replicate objects across Amazon S3 buckets in different AWS Regions. For more information about CRR, see [When to use Cross-Region Replication](#crr-scenario).
+ **Same-Region Replication (SRR)** – You can use SRR to copy objects across Amazon S3 buckets in the same AWS Region. For more information about SRR, see [When to use Same-Region Replication](#srr-scenario).

**Topics**
+ [Why use replication?](#replication-scenario)
+ [When to use Cross-Region Replication](#crr-scenario)
+ [When to use Same-Region Replication](#srr-scenario)
+ [When to use two-way replication (bi-directional replication)](#two-way-replication-scenario)
+ [When to use S3 Batch Replication](#batch-replication-scenario)
+ [Workload requirements and live replication](#replication-workload-requirements)
+ [What does Amazon S3 replicate?](replication-what-is-isnot-replicated.md)
+ [Requirements and considerations for replication](replication-requirements.md)
+ [Setting up live replication overview](replication-how-setup.md)
+ [Managing or pausing live replication](disable-replication.md)
+ [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md)
+ [Troubleshooting replication](replication-troubleshoot.md)
+ [Monitoring replication with metrics, event notifications, and statuses](replication-metrics.md)

## Why use replication?


Replication can help you do the following:
+ **Replicate objects while retaining metadata** – You can use replication to make copies of your objects that retain all metadata, such as the original object creation times and version IDs. This capability is important if you must ensure that your replica is identical to the source object.
+ **Replicate objects into different storage classes** – You can use replication to directly put objects into S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, or another storage class in the destination buckets. You can also replicate your data to the same storage class and use lifecycle configurations on the destination buckets to move your objects to a colder storage class as they age.
+ **Maintain object copies under different ownership** – Regardless of who owns the source object, you can tell Amazon S3 to change replica ownership to the AWS account that owns the destination bucket. This is referred to as the *owner override* option. You can use this option to restrict access to object replicas.
+ **Keep objects stored over multiple AWS Regions** – To ensure geographic differences in where your data is kept, you can set multiple destination buckets across different AWS Regions. This feature might help you meet certain compliance requirements. 
+ **Replicate objects within 15 minutes** – To replicate your data in the same AWS Region or across different Regions within a predictable time frame, you can use S3 Replication Time Control (S3 RTC). S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes (backed by a service-level agreement). For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).
**Note**  
S3 RTC does not apply to Batch Replication. Batch Replication is an on-demand replication job, and can be tracked with S3 Batch Operations. For more information, see [Tracking job status and completion reports](batch-ops-job-status.md).
+ **Sync buckets, replicate existing objects, and replicate previously failed or replicated objects** – To sync buckets and replicate existing objects, use Batch Replication as an on-demand replication action. For more information about when to use Batch Replication, see [When to use S3 Batch Replication](#batch-replication-scenario).
+ **Replicate objects and fail over to a bucket in another AWS Region** – To keep all metadata and objects in sync across buckets during data replication, use two-way replication (also known as bi-directional replication) rules before configuring Amazon S3 Multi-Region Access Point failover controls. Two-way replication rules help ensure that when data is written to the S3 bucket that traffic fails over to, that data is then replicated back to the source bucket.

## When to use Cross-Region Replication


S3 Cross-Region Replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions. CRR can help you do the following:
+ **Meet compliance requirements** – Although Amazon S3 stores your data across multiple geographically distant Availability Zones by default, compliance requirements might dictate that you store data at even greater distances. To satisfy these requirements, use Cross-Region Replication to replicate data between distant AWS Regions.
+ **Minimize latency** – If your customers are in two geographic locations, you can minimize latency in accessing objects by maintaining object copies in AWS Regions that are geographically closer to your users.
+ **Increase operational efficiency** – If you have compute clusters in two different AWS Regions that analyze the same set of objects, you might choose to maintain object copies in those Regions.

## When to use Same-Region Replication


Same-Region Replication (SRR) is used to copy objects across Amazon S3 buckets in the same AWS Region. SRR can help you do the following:
+ **Aggregate logs into a single bucket** – If you store logs in multiple buckets or across multiple accounts, you can easily replicate logs into a single, in-Region bucket. Doing so allows for simpler processing of logs in a single location.
+ **Configure live replication between production and test accounts** – If you or your customers have production and test accounts that use the same data, you can replicate objects between those multiple accounts, while maintaining object metadata.
+ **Abide by data sovereignty laws** – You might be required to store multiple copies of your data in separate AWS accounts within a certain Region. Same-Region Replication can help you automatically replicate critical data when compliance regulations don't allow the data to leave your country.

## When to use two-way replication (bi-directional replication)

+ **Build shared datasets across multiple AWS Regions** – With replica modification sync, you can easily replicate metadata changes, such as object access control lists (ACLs), object tags, or object locks, on replicated objects. This two-way replication is important if you want to keep all objects and object metadata changes in sync. You can [enable replica modification sync](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-for-metadata-changes.html#enabling-replication-for-metadata-changes) on a new or existing replication rule when performing two-way replication between two or more buckets in the same or different AWS Regions.
+ **Keep data synchronized across Regions during failover** – You can synchronize data in buckets between AWS Regions by configuring two-way replication rules with S3 Cross-Region Replication (CRR) directly from a Multi-Region Access Point. To make an informed decision on when to initiate failover, you can also enable S3 replication metrics so that you can monitor the replication in Amazon CloudWatch, in S3 Replication Time Control (S3 RTC), or from the Multi-Region Access Point.
+ **Make your application highly available** – Even in the event of a Regional traffic disruption, you can use two-way replication rules to keep all metadata and objects in sync across buckets during data replication.

## When to use S3 Batch Replication


Batch Replication replicates existing objects to different buckets as an on-demand option. Unlike live replication, these jobs can be run as needed. Batch Replication can help you do the following:
+ **Replicate existing objects** – You can use Batch Replication to replicate objects that were added to the bucket before Same-Region Replication or Cross-Region Replication was configured.
+ **Replicate objects that previously failed to replicate** – You can filter a Batch Replication job to attempt to replicate objects with a replication status of **FAILED**.
+ **Replicate objects that were already replicated** – You might be required to store multiple copies of your data in separate AWS accounts or AWS Regions. Batch Replication can replicate existing objects to newly added destinations.
+ **Replicate replicas of objects that were created from a replication rule** – Replication configurations create replicas of objects in destination buckets. Replicas of objects can be replicated only with Batch Replication.

## Workload requirements and live replication


Depending on your workload requirements, some types of live replication will be better suited to your use case than others. Use the following table to determine which type of replication to use for your situation, and whether to use S3 Replication Time Control (S3 RTC) for your workload. S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes (backed by a service-level agreement, or SLA). For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).


| Workload requirement | S3 RTC (15-minute SLA) | Cross-Region Replication (CRR) | Same-Region Replication (SRR) | 
| --- | --- | --- | --- | 
| Replicate objects between different AWS accounts | Yes | Yes | Yes | 
| Replicate objects within the same AWS Region within 24-48 hours (not SLA backed) | No | No | Yes | 
| Replicate objects between different AWS Regions within 24-48 hours (not SLA backed) | No | Yes | No | 
| Predictable replication time: Backed by SLA to replicate 99.99 percent of objects within 15 minutes | Yes | No | No | 

# What does Amazon S3 replicate?

Amazon S3 replicates only specific items in buckets that are configured for replication. 

**Topics**
+ [What is replicated with replication configurations?](#replication-what-is-replicated)
+ [What isn't replicated with replication configurations?](#replication-what-is-not-replicated)

## What is replicated with replication configurations?


By default, Amazon S3 replicates the following:
+ Objects created after you add a replication configuration.
+ Unencrypted objects. 
+ Objects encrypted with customer-provided keys (SSE-C), objects encrypted at rest with an Amazon S3 managed key (SSE-S3), and objects encrypted with a KMS key stored in AWS Key Management Service (SSE-KMS). For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md). 
+ Object metadata from the source objects to the replicas. For information about replicating metadata from the replicas to the source objects, see [Replicating metadata changes with replica modification sync](replication-for-metadata-changes.md).
+ Only objects in the source bucket for which the bucket owner has permissions to read objects and access control lists (ACLs). 

  For more information about resource ownership, see [Amazon S3 bucket and object ownership](access-policy-language-overview.md#about-resource-owner).
+ Object ACL updates, unless you direct Amazon S3 to change the replica ownership when source and destination buckets aren't owned by the same accounts. 

  For more information, see [Changing the replica owner](replication-change-owner.md). 

  It can take some time for Amazon S3 to bring the two ACLs into sync. This change in ownership applies only to objects created after you add a replication configuration to the bucket.
+  Object tags, if there are any.
+ S3 Object Lock retention information, if there is any. 

  When Amazon S3 replicates objects that have retention information applied, it applies those same retention controls to your replicas, overriding the default retention period configured on your destination buckets. If you don't have retention controls applied to the objects in your source bucket, and you replicate into destination buckets that have a default retention period set, the destination bucket's default retention period is applied to your object replicas. For more information, see [Locking objects with Object Lock](object-lock.md).
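
As a concrete illustration of the encrypted-object replication noted above, the following Python sketch builds the kind of JSON replication configuration that the AWS CLI and SDKs accept, with a rule that opts SSE-KMS objects into replication and re-encrypts replicas under a destination-Region KMS key. All ARNs and bucket names are hypothetical placeholders.

```python
import json

# Hypothetical ARNs -- replace with your own resources.
ROLE_ARN = "arn:aws:iam::111122223333:role/s3-replication-role"
DEST_BUCKET_ARN = "arn:aws:s3:::amzn-s3-demo-destination-bucket"
REPLICA_KMS_KEY_ARN = "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID"

# A single V2 rule that replicates SSE-KMS objects and re-encrypts
# the replicas with a destination-Region KMS key.
replication_config = {
    "Role": ROLE_ARN,
    "Rules": [
        {
            "ID": "ReplicateEncryptedObjects",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: the rule applies to all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "SourceSelectionCriteria": {
                "SseKmsEncryptedObjects": {"Status": "Enabled"}
            },
            "Destination": {
                "Bucket": DEST_BUCKET_ARN,
                "EncryptionConfiguration": {
                    "ReplicaKmsKeyID": REPLICA_KMS_KEY_ARN
                },
            },
        }
    ],
}

print(json.dumps(replication_config, indent=2))
```

You would pass a configuration like this to the `PutBucketReplication` API operation (for example, through `put-bucket-replication` in the AWS CLI).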

### How delete operations affect replication


If you delete an object from the source bucket, the following actions occur by default:
+ If you make a DELETE request without specifying an object version ID, Amazon S3 adds a delete marker. Amazon S3 deals with the delete marker as follows:
  + If you are using the latest version of the replication configuration (that is, you specify the `Filter` element in a replication configuration rule), Amazon S3 does not replicate the delete marker by default. However, you can add *delete marker replication* to non-tag-based rules. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md).
  + If you don't specify the `Filter` element, Amazon S3 assumes that the replication configuration is version V1, and it replicates delete markers that resulted from user actions. However, if Amazon S3 deletes an object due to a lifecycle action, the delete marker is not replicated to the destination buckets.
+ If you specify an object version ID to delete in a `DELETE` request, Amazon S3 deletes that object version in the source bucket. But it doesn't replicate the deletion in the destination buckets. In other words, it doesn't delete the same object version from the destination buckets. This protects data from malicious deletions. 
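
The delete marker behavior described above is controlled per rule by the `DeleteMarkerReplication` element. The following is a minimal hedged sketch of one such rule (the bucket ARN and prefix are hypothetical); note that delete marker replication can be enabled only on non-tag-based rules:

```python
import json

# Hypothetical rule: replicate delete markers for objects under "logs/".
# Delete marker replication is valid only for non-tag-based (prefix) filters.
rule = {
    "ID": "ReplicateLogDeleteMarkers",
    "Status": "Enabled",
    "Priority": 1,
    "Filter": {"Prefix": "logs/"},
    "DeleteMarkerReplication": {"Status": "Enabled"},
    "Destination": {"Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"},
}
print(json.dumps(rule, indent=2))
```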

## What isn't replicated with replication configurations?


By default, Amazon S3 doesn't replicate the following:
+ Objects in the source bucket that are replicas that were created by another replication rule. For example, suppose you configure replication where bucket A is the source and bucket B is the destination. Now suppose that you add another replication configuration where bucket B is the source and bucket C is the destination. In this case, objects in bucket B that are replicas of objects in bucket A are not replicated to bucket C. 

  To replicate objects that are replicas, use Batch Replication. Learn more about configuring Batch Replication at [Replicating existing objects](s3-batch-replication-batch.md).
+ Objects in the source bucket that have already been replicated to a different destination. For example, if you change the destination bucket in an existing replication configuration, Amazon S3 won't replicate the objects again.

  To replicate previously replicated objects, use Batch Replication. Learn more about configuring Batch Replication at [Replicating existing objects](s3-batch-replication-batch.md).
+ Objects that were deleted from the destination bucket by specifying a version ID in the delete request. Batch Replication doesn't support re-replicating these objects. Instead, you can copy the source objects in place with a Batch Copy job. Copying those objects in place creates new versions of the objects in the source bucket and automatically initiates replication to the destination. For more information about how to use Batch Copy, see [Examples that use Batch Operations to copy objects](batch-ops-examples-copy.md).
+ By default, when replicating from a different AWS account, delete markers added to the source bucket are not replicated.

  For information about how to replicate delete markers, see [Replicating delete markers between buckets](delete-marker-replication.md).
+ Objects that are stored in the S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 Intelligent-Tiering Archive Access, or S3 Intelligent-Tiering Deep Archive Access storage classes or tiers. You cannot replicate these objects until you restore them and copy them to a different storage class. 

  To learn more about S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive, see [Storage classes for rarely accessed objects](storage-class-intro.md#sc-glacier).

  To learn more about S3 Intelligent-Tiering, see [Managing storage costs with Amazon S3 Intelligent-Tiering](intelligent-tiering.md).
+ Objects in the source bucket that the bucket owner doesn't have sufficient permissions to replicate. 

  For information about how an object owner can grant permissions to a bucket owner, see [Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-acl-2).
+ Updates to bucket-level subresources. 

  For example, if you change the lifecycle configuration or add a notification configuration to your source bucket, these changes are not applied to the destination bucket. This feature makes it possible to have different configurations on source and destination buckets. 
+ Actions performed by lifecycle configuration. 

  For example, if lifecycle configuration is enabled only on your source bucket, Amazon S3 creates delete markers for expired objects but doesn't replicate those markers. If you want the same lifecycle configuration applied to both the source and destination buckets, enable the same lifecycle configuration on both. For more information about lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).
+ When you're using tag-based replication rules with live replication, new objects must be tagged with the matching replication rule tag in the `PutObject` operation. Otherwise, the objects won't be replicated. If objects are tagged after the `PutObject` operation, those objects also won't be replicated. 

  To replicate objects that have been tagged after the `PutObject` operation, you must use S3 Batch Replication. For more information about Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).
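
Because the tags must already be present on the `PutObject` request for a tag-based rule to match, they are typically passed as a URL-encoded string at upload time. The following is a hedged sketch (the tag keys, values, and the commented-out boto3 call are illustrative assumptions, not values from this guide):

```python
from urllib.parse import urlencode

# Tags must accompany the PutObject call itself; tagging the object
# afterward does not trigger live replication for tag-based rules.
tags = {"replicate": "true", "department": "finance"}
tagging = urlencode(tags)
print(tagging)  # replicate=true&department=finance

# Sketch of the upload (client setup, bucket, and key are hypothetical):
# s3.put_object(Bucket="amzn-s3-demo-source-bucket", Key="Tax/doc1",
#               Body=b"...", Tagging=tagging)
```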

# Requirements and considerations for replication


Amazon S3 replication requires the following:
+ The source bucket owner must have the source and destination AWS Regions enabled for their account. The destination bucket owner must have the destination Region enabled for their account. 

  For more information about enabling or disabling an AWS Region, see [Specify which AWS Regions your account can use](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html) in the *AWS Account Management Reference Guide*.
+ Both source and destination buckets must have versioning enabled. For more information about versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).
+ Amazon S3 must have permissions to replicate objects from the source bucket to the destination bucket or buckets on your behalf. For more information about these permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).
+ If the owner of the source bucket doesn't own the object in the bucket, the object owner must grant the bucket owner `READ` and `READ_ACP` permissions with the object access control list (ACL). For more information, see [Access control list (ACL) overview](acl-overview.md). 
+ If the source bucket has S3 Object Lock enabled, the destination buckets must also have S3 Object Lock enabled. 

  To enable replication on a bucket that has Object Lock enabled, you must use the AWS Command Line Interface, REST API, or AWS SDKs. For more general information, see [Locking objects with Object Lock](object-lock.md).
**Note**  
You must grant two new permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two new permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission, it satisfies the requirement. For more information, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

For more information, see [Setting up live replication overview](replication-how-setup.md). 

If you are setting the replication configuration in a *cross-account scenario*, where the source and destination buckets are owned by different AWS accounts, the following additional requirement applies:
+ The owner of the destination buckets must grant the owner of the source bucket permissions to replicate objects with a bucket policy. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).
+ The destination buckets cannot be configured as Requester Pays buckets. For more information, see [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md).
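
For the cross-account scenario, the destination bucket policy must allow the source account's replication role to write replicas. The following Python sketch assembles one plausible shape of such a policy; the role ARN and bucket name are hypothetical, and the exact statements you need depend on your configuration (see the linked permissions topic).

```python
import json

# Hypothetical source-account replication role and destination bucket.
SOURCE_REPLICATION_ROLE = "arn:aws:iam::111122223333:role/s3-replication-role"
DEST_BUCKET = "amzn-s3-demo-destination-bucket"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReplicationFromSourceAccountRole",
            "Effect": "Allow",
            "Principal": {"AWS": SOURCE_REPLICATION_ROLE},
            # Replication-specific actions on objects in the destination bucket
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags",
            ],
            "Resource": f"arn:aws:s3:::{DEST_BUCKET}/*",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```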

## Considerations for replication


Before you create a replication configuration, be aware of the following considerations. 

**Topics**
+ [Lifecycle configuration and object replicas](#replica-and-lifecycle)
+ [Versioning configuration and replication configuration](#replication-and-versioning)
+ [Using S3 Replication with S3 Intelligent-Tiering](#replication-and-intelligent-tiering)
+ [Logging configuration and replication configuration](#replication-and-logging)
+ [CRR and the destination Region](#replication-and-dest-region)
+ [S3 Batch Replication](#considerations-batch-replication)
+ [S3 Replication Time Control](#considerations-RTC)

### Lifecycle configuration and object replicas


The time that it takes Amazon S3 to replicate an object depends on the size of the object. For large objects, replication can take several hours. Although it might take a while before a replica is available in the destination bucket, the replica retains the creation time of the corresponding source object. If a lifecycle configuration is enabled on a destination bucket, the lifecycle rules honor the original creation time of the object, not the time when the replica became available in the destination bucket. 

Replication configuration requires the bucket to be versioning-enabled. When you enable versioning on a bucket, keep the following in mind:
+ If you have an object expiration lifecycle configuration, after you enable versioning, add a `NoncurrentVersionExpiration` policy to maintain the same permanent delete behavior as before you enabled versioning.
+ If you have a transition lifecycle configuration, after you enable versioning, consider adding a `NoncurrentVersionTransition` policy.
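
The lifecycle adjustment described above can be sketched in the JSON form that `put-bucket-lifecycle-configuration` accepts. This is a hedged example: the 365-day window and rule ID are arbitrary illustrations, and note that the API element name uses the `NoncurrentVersionExpiration` casing.

```python
import json

lifecycle_config = {
    "Rules": [
        {
            "ID": "ExpireAndCleanUpNoncurrent",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # applies to the whole bucket
            # Before versioning: objects were permanently deleted after 365 days.
            "Expiration": {"Days": 365},
            # After versioning: expiration only adds a delete marker, so this
            # clause permanently deletes noncurrent versions on the same schedule.
            # (API element name uses "Noncurrent" casing.)
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }
    ]
}
print(json.dumps(lifecycle_config, indent=2))
```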

### Versioning configuration and replication configuration


Both the source and destination buckets must be versioning-enabled when you configure replication on a bucket. After you enable versioning on both the source and destination buckets and configure replication on the source bucket, be aware of the following behavior:
+ If you attempt to disable versioning on the source bucket, Amazon S3 returns an error. You must remove the replication configuration before you can disable versioning on the source bucket.
+ If you disable versioning on the destination bucket, replication fails. The source object has the replication status `FAILED`.

### Using S3 Replication with S3 Intelligent-Tiering


S3 Intelligent-Tiering is a storage class that is designed to optimize storage costs by automatically moving data to the most cost-effective access tier. For a small monthly object monitoring and automation charge, S3 Intelligent-Tiering monitors access patterns and automatically moves objects that have not been accessed to lower-cost access tiers.

Replicating objects stored in S3 Intelligent-Tiering with S3 Batch Replication or invoking [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) or [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) constitutes access. In these cases, the source objects of the copy or replication operations are tiered up.

For more information about S3 Intelligent-Tiering, see [Managing storage costs with Amazon S3 Intelligent-Tiering](intelligent-tiering.md).

### Logging configuration and replication configuration


If Amazon S3 delivers logs to a bucket that has replication enabled, it replicates the log objects.

If [server access logs](ServerLogs.md) or [AWS CloudTrail logs](cloudtrail-logging.md) are enabled on your source or destination bucket, Amazon S3 includes replication-related requests in the logs. For example, Amazon S3 logs each object that it replicates. 

### CRR and the destination Region


Amazon S3 Cross-Region Replication (CRR) is used to copy objects across S3 buckets in different AWS Regions. You might choose the Region for your destination bucket based on either your business needs or cost considerations. For example, inter-Region data transfer charges vary depending on the Regions that you choose. 

Suppose that you choose US East (N. Virginia) (`us-east-1`) as the Region for your source bucket. If you choose US West (Oregon) (`us-west-2`) as the Region for your destination buckets, you pay more than if you choose the US East (Ohio) (`us-east-2`) Region. For pricing information, see "Data Transfer Pricing" in [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

There are no data transfer charges associated with Same-Region Replication (SRR).

### S3 Batch Replication


For information about considerations for Batch Replication, see [S3 Batch Replication considerations](s3-batch-replication-batch.md#batch-replication-considerations).

### S3 Replication Time Control


For information about best practices and considerations for S3 Replication Time Control (S3 RTC), see [Best practices and guidelines for S3 RTC](replication-time-control.md#rtc-best-practices).

# Setting up live replication overview

**Note**  
Objects that existed before you set up replication aren't replicated automatically. In other words, Amazon S3 doesn't replicate objects retroactively. To replicate objects that were created before your replication configuration, use S3 Batch Replication. For more information about configuring Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).

To enable live replication—Same-Region Replication (SRR) or Cross-Region Replication (CRR)—add a replication configuration to your source bucket. This configuration tells Amazon S3 to replicate objects as specified. In the replication configuration, you must provide the following:
+ **The destination buckets** – The bucket or buckets where you want Amazon S3 to replicate the objects.
+ **The objects that you want to replicate** – You can replicate all objects in the source bucket or a subset of objects. You identify a subset by providing a [key name prefix](https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#keyprefix), one or more object tags, or both in the configuration.

  For example, if you configure a replication rule to replicate only objects with the key name prefix `Tax/`, Amazon S3 replicates objects with keys such as `Tax/doc1` or `Tax/doc2`. But it doesn't replicate objects with the key `Legal/doc3`. If you specify both a prefix and one or more tags, Amazon S3 replicates only objects that have the specific key prefix and tags.
+ **An AWS Identity and Access Management (IAM) role** – Amazon S3 assumes this IAM role to replicate objects on your behalf. For more information about creating this IAM role and managing permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

In addition to these minimum requirements, you can choose the following options: 
+ **Replica storage class** – By default, Amazon S3 stores object replicas using the same storage class as the source object. You can specify a different storage class for the replicas.
+ **Replica ownership** – Amazon S3 assumes that an object replica continues to be owned by the owner of the source object. So when it replicates objects, it also replicates the corresponding object access control list (ACL) or S3 Object Ownership setting. If the source and destination buckets are owned by different AWS accounts, you can configure replication to change the owner of a replica to the AWS account that owns the destination bucket. For more information, see [Changing the replica owner](replication-change-owner.md).
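
Putting the required and optional pieces together, a minimal configuration might look like the following Python sketch of the JSON that the AWS CLI and SDKs accept. The role ARN, bucket ARN, and prefix are hypothetical placeholders.

```python
import json

replication_config = {
    # Required: the IAM role that Amazon S3 assumes to replicate on your behalf
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    "Rules": [
        {
            "ID": "ReplicateTaxDocuments",
            "Status": "Enabled",
            "Priority": 1,
            # Required: which objects to replicate (here, a key name prefix)
            "Filter": {"Prefix": "Tax/"},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                # Required: where to replicate
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
                # Optional: store replicas in a different storage class
                "StorageClass": "STANDARD_IA",
            },
        }
    ],
}
print(json.dumps(replication_config, indent=2))
```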

You can configure replication by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or the Amazon S3 REST API. For detailed walkthroughs of how to set up replication, see [Examples for configuring live replication](replication-example-walkthroughs.md).

 Amazon S3 provides REST API operations to support setting up replication rules. For more information, see the following topics in the *Amazon Simple Storage Service API Reference*:
+ [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html)
+ [GetBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html)
+ [DeleteBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html)

**Topics**
+ [Replication configuration file elements](replication-add-config.md)
+ [Setting up permissions for live replication](setting-repl-config-perm-overview.md)
+ [Examples for configuring live replication](replication-example-walkthroughs.md)

# Replication configuration file elements

Amazon S3 stores a replication configuration as XML. If you're configuring replication programmatically through the Amazon S3 REST API, you specify the various elements of your replication configuration in this XML file. If you're configuring replication through the AWS Command Line Interface (AWS CLI), you specify your replication configuration using JSON format. For JSON examples, see the walkthroughs in [Examples for configuring live replication](replication-example-walkthroughs.md).

**Note**  
The latest version of the replication configuration XML format is V2. XML V2 replication configurations are those that contain the `<Filter>` element for rules, and rules that specify S3 Replication Time Control (S3 RTC).  
To see your replication configuration version, you can use the `GetBucketReplication` API operation. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html) in the *Amazon Simple Storage Service API Reference*.   
For backward compatibility, Amazon S3 continues to support the XML V1 replication configuration format. If you've used the XML V1 replication configuration format, see [Backward compatibility considerations](#replication-backward-compat-considerations) for backward compatibility considerations.

In the replication configuration XML file, you must specify an AWS Identity and Access Management (IAM) role and one or more rules, as shown in the following example:

```
<ReplicationConfiguration>
    <Role>IAM-role-ARN</Role>
    <Rule>
        ...
    </Rule>
    <Rule>
         ... 
    </Rule>
     ...
</ReplicationConfiguration>
```

Amazon S3 can't replicate objects without your permission. You grant permissions to Amazon S3 with the IAM role that you specify in the replication configuration. Amazon S3 assumes this IAM role to replicate objects on your behalf. You must grant the required permissions to the IAM role first. For more information about managing permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

You add only one rule in a replication configuration in the following scenarios:
+ You want to replicate all objects.
+ You want to replicate only a subset of objects. You identify the subset by adding a filter to the rule. In the filter, you specify an object key prefix, tags, or a combination of both to identify the subset of objects that the rule applies to. The filters target objects that match the exact values that you specify.

If you want to replicate different subsets of objects, you add multiple rules in a replication configuration. In each rule, you specify a filter that selects a different subset. For example, you might choose to replicate objects that have either `tax/` or `document/` key prefixes. To do this, you add two rules, one that specifies the `tax/` key prefix filter and another that specifies the `document/` key prefix. For more information about object key prefixes, see [Organizing objects using prefixes](using-prefixes.md).
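If you configure replication through the AWS CLI instead of the REST API, the same rules are expressed in JSON. The following sketch builds a two-rule configuration like the one just described (the role ARN, account ID, and bucket names are placeholders) and prints it in the shape that a CLI `--replication-configuration` parameter expects:

```python
import json

def prefix_rule(rule_id, prefix, destination_arn, priority):
    """Build one V2 replication rule that filters on a key prefix."""
    return {
        "ID": rule_id,
        "Status": "Enabled",
        "Priority": priority,
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Filter": {"Prefix": prefix},
        "Destination": {"Bucket": destination_arn},
    }

config = {
    # Placeholder IAM role ARN; use the role you created for replication.
    "Role": "arn:aws:iam::111122223333:role/replication-role",
    "Rules": [
        prefix_rule("tax-docs", "tax/",
                    "arn:aws:s3:::amzn-s3-demo-destination-bucket", 1),
        prefix_rule("project-docs", "document/",
                    "arn:aws:s3:::amzn-s3-demo-destination-bucket", 2),
    ],
}

print(json.dumps(config, indent=2))
```

Each JSON key mirrors the XML element of the same name, so the walkthroughs' JSON examples and the XML elements described in this topic map one to one.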

The following sections provide additional information.

**Topics**
+ [

## Basic rule configuration
](#replication-config-min-rule-config)
+ [

## Optional: Specifying a filter
](#replication-config-optional-filter)
+ [

## Additional destination configurations
](#replication-config-optional-dest-config)
+ [

## Example replication configurations
](#replication-config-example-configs)
+ [

## Backward compatibility considerations
](#replication-backward-compat-considerations)

## Basic rule configuration


Each rule must include the rule's status and priority. The rule must also indicate whether to replicate delete markers. 
+ The `<Status>` element indicates whether the rule is enabled or disabled by using the values `Enabled` or `Disabled`. If a rule is disabled, Amazon S3 doesn't perform the actions specified in the rule. 
+ The `<Priority>` element indicates which rule has precedence whenever two or more replication rules conflict. Amazon S3 attempts to replicate objects according to all replication rules. However, if there are two or more rules with the same destination bucket, then objects are replicated according to the rule with the highest priority. The higher the number, the higher the priority.
+ The `<DeleteMarkerReplication>` element indicates whether to replicate delete markers by using the values `Enabled` or `Disabled`.

In the `<Destination>` element configuration, you must provide the name of the destination bucket or buckets where you want Amazon S3 to replicate objects. 

The following example shows the minimum requirements for a V2 rule. For backward compatibility, Amazon S3 continues to support the XML V1 format. For more information, see [Backward compatibility considerations](#replication-backward-compat-considerations).

```
...
    <Rule>
        <ID>Rule-1</ID>
        <Status>Enabled-or-Disabled</Status>
        <Filter>
            <Prefix></Prefix>   
        </Filter>
        <Priority>integer</Priority>
        <DeleteMarkerReplication>
           <Status>Enabled-or-Disabled</Status>
        </DeleteMarkerReplication>
        <Destination>        
           <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket> 
        </Destination>    
    </Rule>
    <Rule>
         ...
    </Rule>
     ...
...
```

You can also specify other configuration options. For example, you might choose to use a storage class for object replicas that differs from the class for the source object. 

## Optional: Specifying a filter


To choose a subset of objects that the rule applies to, add an optional filter. You can filter by object key prefix, object tags, or a combination of both. If you filter on both a key prefix and object tags, Amazon S3 combines the filters by using a logical `AND` operator. In other words, the rule applies to a subset of objects with both a specific key prefix and specific tags. 
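The combined-filter semantics can be illustrated with a short local sketch (an approximation for illustration, not an S3 API call): an object must match the key prefix *and* carry every listed tag for the rule to apply.

```python
def rule_matches(key, object_tags, prefix=None, required_tags=None):
    """Approximate replication filter semantics: prefix AND all tags."""
    if prefix is not None and not key.startswith(prefix):
        return False
    # Every required tag must be present with the exact specified value.
    for tag_key, tag_value in (required_tags or {}).items():
        if object_tags.get(tag_key) != tag_value:
            return False
    return True

tags = {"department": "tax", "year": "2024"}
print(rule_matches("tax/doc1.pdf", tags,
                   prefix="tax/", required_tags={"department": "tax"}))    # True
print(rule_matches("tax/doc1.pdf", tags,
                   prefix="tax/", required_tags={"department": "legal"}))  # False
```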

**Filter based on object key prefix**  
To specify a rule with a filter based on an object key prefix, use the following XML. You can specify only one prefix per rule.

```
<Rule>
    ...
    <Filter>
        <Prefix>key-prefix</Prefix>   
    </Filter>
    ...
</Rule>
...
```

**Filter based on object tags**  
To specify a rule with a filter based on object tags, use the following XML. You can specify one or more object tags. When you specify more than one tag, wrap the `<Tag>` elements in an `<And>` parent element, as shown.

```
<Rule>
    ...
    <Filter>
        <And>
            <Tag>
                <Key>key1</Key>
                <Value>value1</Value>
            </Tag>
            <Tag>
                <Key>key2</Key>
                <Value>value2</Value>
            </Tag>
             ...
        </And>
    </Filter>
    ...
</Rule>
...
```

**Filter with a key prefix and object tags**  
To specify a rule filter with a combination of a key prefix and object tags, use the following XML. You wrap these filters in an `<And>` parent element. Amazon S3 performs a logical `AND` operation to combine these filters. In other words, the rule applies to a subset of objects with both a specific key prefix and specific tags. 

```
<Rule>
    ...
    <Filter>
        <And>
            <Prefix>key-prefix</Prefix>
            <Tag>
                <Key>key1</Key>
                <Value>value1</Value>
            </Tag>
            <Tag>
                <Key>key2</Key>
                <Value>value2</Value>
            </Tag>
             ...
        </And>
    </Filter>
    ...
</Rule>
...
```

**Note**  
If you specify a rule with an empty `<Filter>` element, your rule applies to all objects in your bucket.
When you're using tag-based replication rules with live replication, new objects must be tagged with the matching replication rule tag in the `PutObject` operation. Otherwise, the objects won't be replicated. If objects are tagged after the `PutObject` operation, those objects also won't be replicated.   
To replicate objects that have been tagged after the `PutObject` operation, you must use S3 Batch Replication. For more information about Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).

## Additional destination configurations


In the destination configuration, you specify the bucket or buckets where you want Amazon S3 to replicate objects. You can set configurations to replicate objects from one source bucket to one or more destination buckets. 

```
...
<Destination>        
    <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
</Destination>
...
```

You can add the following options in the `<Destination>` element.

**Topics**
+ [

### Specify storage class
](#storage-class-configuration)
+ [

### Add multiple destination buckets
](#multiple-destination-buckets-configuration)
+ [

### Specify different parameters for each replication rule with multiple destination buckets
](#replication-rule-configuration)
+ [

### Change replica ownership
](#replica-ownership-configuration)
+ [

### Enable S3 Replication Time Control
](#rtc-configuration)
+ [

### Replicate objects created with server-side encryption by using AWS KMS
](#sse-kms-configuration)

### Specify storage class


You can specify the storage class for object replicas. By default, Amazon S3 uses the storage class of the source object to create object replicas. To use a different storage class for the replicas, add the `<StorageClass>` element, as in the following example.

```
...
<Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
       <StorageClass>storage-class</StorageClass>
</Destination>
...
```

### Add multiple destination buckets


You can add multiple destination buckets in a single replication configuration, as follows.

```
...
<Rule>
    <ID>Rule-1</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket1</Bucket> 
    </Destination>    
</Rule>
<Rule>
    <ID>Rule-2</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket2</Bucket> 
    </Destination>    
</Rule>
...
```

### Specify different parameters for each replication rule with multiple destination buckets


When adding multiple destination buckets in a single replication configuration, you can specify different parameters for each replication rule, as follows.

```
...
<Rule>
    <ID>Rule-1</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Disabled</Status>
    </DeleteMarkerReplication>
    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket1</Bucket>
       <Metrics>
          <Status>Enabled</Status>
          <EventThreshold>
             <Minutes>15</Minutes>
          </EventThreshold>
       </Metrics>
    </Destination>
</Rule>
<Rule>
    <ID>Rule-2</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled</Status>
    </DeleteMarkerReplication>
    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket2</Bucket>
       <Metrics>
          <Status>Enabled</Status>
          <EventThreshold>
             <Minutes>15</Minutes>
          </EventThreshold>
       </Metrics>
       <ReplicationTime>
          <Status>Enabled</Status>
          <Time>
             <Minutes>15</Minutes>
          </Time>
       </ReplicationTime>
    </Destination>
</Rule>
...
```

### Change replica ownership


When the source and destination buckets aren't owned by the same account, you can change the ownership of the replica to the AWS account that owns the destination bucket. To do so, add the `<AccessControlTranslation>` element. Its `<Owner>` child element takes the value `Destination`.

```
...
<Destination>
   <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
   <Account>destination-bucket-owner-account-id</Account>
   <AccessControlTranslation>
       <Owner>Destination</Owner>
   </AccessControlTranslation>
</Destination>
...
```

If you don't add the `<AccessControlTranslation>` element to the replication configuration, the replicas are owned by the same AWS account that owns the source object. For more information, see [Changing the replica owner](replication-change-owner.md).
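In the CLI's JSON format, the same destination settings look like the following sketch (the account ID is a placeholder for the destination bucket owner's account):

```python
import json

destination = {
    "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
    "Account": "111122223333",  # placeholder destination-owner account ID
    # Owner must be the literal value "Destination".
    "AccessControlTranslation": {"Owner": "Destination"},
}
print(json.dumps(destination, indent=2))
```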

### Enable S3 Replication Time Control


You can enable S3 Replication Time Control (S3 RTC) in your replication configuration. S3 RTC replicates most objects in seconds and 99.99 percent of objects within 15 minutes (backed by a service-level agreement). 

**Note**  
Only a value of `<Minutes>15</Minutes>` is accepted for the `<EventThreshold>` and `<Time>` elements.

```
...
<Destination>
  <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
  <Metrics>
    <Status>Enabled</Status>
    <EventThreshold>
      <Minutes>15</Minutes> 
    </EventThreshold>
  </Metrics>
  <ReplicationTime>
    <Status>Enabled</Status>
    <Time>
      <Minutes>15</Minutes>
    </Time>
  </ReplicationTime>
</Destination>
...
```

For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md). For API examples, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html) in the *Amazon Simple Storage Service API Reference*.
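Because `15` is the only value that S3 accepts for both `<Minutes>` elements, a configuration helper can validate this before submitting the configuration. The following sketch (a local check, not part of any AWS SDK) builds an RTC-enabled destination block and rejects any other value:

```python
def rtc_destination(bucket_arn, minutes=15):
    """Build an S3 RTC destination block; S3 accepts only 15 minutes."""
    if minutes != 15:
        raise ValueError("S3 Replication Time Control accepts only 15 minutes")
    return {
        "Bucket": bucket_arn,
        "Metrics": {"Status": "Enabled",
                    "EventThreshold": {"Minutes": minutes}},
        "ReplicationTime": {"Status": "Enabled",
                            "Time": {"Minutes": minutes}},
    }

dest = rtc_destination("arn:aws:s3:::amzn-s3-demo-destination-bucket")
print(dest["ReplicationTime"]["Time"]["Minutes"])  # 15
```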

### Replicate objects created with server-side encryption by using AWS KMS


Your source bucket might contain objects that were created with server-side encryption by using AWS Key Management Service (AWS KMS) keys (SSE-KMS). By default, Amazon S3 doesn't replicate these objects. You can optionally direct Amazon S3 to replicate these objects. To do so, first explicitly opt into this feature by adding the `<SourceSelectionCriteria>` element. Then provide the AWS KMS key (for the AWS Region of the destination bucket) to use for encrypting object replicas. The following example shows how to specify these elements.

```
...
<SourceSelectionCriteria>
  <SseKmsEncryptedObjects>
    <Status>Enabled</Status>
  </SseKmsEncryptedObjects>
</SourceSelectionCriteria>
<Destination>
  <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
  <EncryptionConfiguration>
    <ReplicaKmsKeyID>AWS KMS key ID to use for encrypting object replicas</ReplicaKmsKeyID>
  </EncryptionConfiguration>
</Destination>
...
```

For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).

## Example replication configurations


To get started, you can add the following example replication configurations to your bucket, as appropriate.

**Important**  
To add a replication configuration to a bucket, you must have the `iam:PassRole` permission. This permission allows you to pass the IAM role that grants Amazon S3 replication permissions. You specify the IAM role by providing the Amazon Resource Name (ARN) that is used in the `<Role>` element in the replication configuration XML. For more information, see [Granting a User Permissions to Pass a Role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

**Example 1: Replication configuration with one rule**  
The following basic replication configuration specifies one rule. The rule specifies an IAM role that Amazon S3 can assume and a single destination bucket for object replicas. The `<Status>` element value of `Enabled` indicates that the rule is in effect.  

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>

    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
To choose a subset of objects to replicate, you can add a filter. In the following configuration, the filter specifies an object key prefix. This rule applies to objects that have the prefix `Tax/` in their key names.   

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>

    <Filter>
       <Prefix>Tax/</Prefix>
    </Filter>

    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>

  </Rule>
</ReplicationConfiguration>
```
If you specify the `<Filter>` element, you must also include the `<Priority>` and `<DeleteMarkerReplication>` elements. In this example, the value that you set for the `<Priority>` element is irrelevant because there is only one rule.  
In the following configuration, the filter specifies one prefix and two tags. The rule applies to the subset of objects that have the specified key prefix and tags. Specifically, it applies to objects that have the `Tax/` prefix in their key names and the two specified object tags. In this example, the value that you set for the `<Priority>` element is irrelevant because there is only one rule.  

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>

    <Filter>
        <And>
           <Prefix>Tax/</Prefix>
           <Tag>
              <Key>tagA</Key>
              <Value>valueA</Value>
           </Tag>
           <Tag>
              <Key>tagB</Key>
              <Value>valueB</Value>
           </Tag>
        </And>
    </Filter>

    <Destination>
        <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>

  </Rule>
</ReplicationConfiguration>
```
You can specify a storage class for the object replicas as follows:  

```
<?xml version="1.0" encoding="UTF-8"?>

<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
       <StorageClass>storage-class</StorageClass>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
You can specify any storage class that Amazon S3 supports.
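As a guard against typos, you can check the `<StorageClass>` value against a known set before submitting the configuration. The set in this sketch is illustrative and may lag behind newly added classes, so confirm it against the current *Amazon S3 API Reference*:

```python
# Storage classes commonly accepted for replication destinations.
# Treat this set as illustrative; S3 adds classes over time.
REPLICATION_STORAGE_CLASSES = {
    "STANDARD", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING",
    "GLACIER", "GLACIER_IR", "DEEP_ARCHIVE", "REDUCED_REDUNDANCY",
}

def check_storage_class(storage_class):
    """Reject values that aren't in the known storage-class set."""
    if storage_class not in REPLICATION_STORAGE_CLASSES:
        raise ValueError(f"Unknown storage class: {storage_class}")
    return storage_class

print(check_storage_class("STANDARD_IA"))  # STANDARD_IA
```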

**Example 2: Replication configuration with two rules**  

**Example**  
In the following replication configuration, the rules specify the following:  
+ Each rule filters on a different key prefix so that each rule applies to a distinct subset of objects. In this example, Amazon S3 replicates objects with the key names *`Tax/doc1.pdf`* and *`Project/project1.txt`*, but it doesn't replicate objects with the key name *`PersonalDoc/documentA`*. 
+ Although both rules specify a value for the `<Priority>` element, the rule priority is irrelevant because the rules apply to two distinct sets of objects. The next example shows what happens when rule priority is applied. 
+ The second rule specifies the S3 Standard-IA storage class for object replicas. Amazon S3 uses the specified storage class for those object replicas.
   

```
<?xml version="1.0" encoding="UTF-8"?>

<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>Tax</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
     ...
  </Rule>
  <Rule>
    <Status>Enabled</Status>
    <Priority>2</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>Project</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
      <StorageClass>STANDARD_IA</StorageClass>
    </Destination>
     ...
  </Rule>


</ReplicationConfiguration>
```

**Example 3: Replication configuration with two rules with overlapping prefixes**  <a name="overlap-rule-example"></a>
In this configuration, the two rules specify filters with overlapping key prefixes, *`star`* and *`starship`*. Both rules apply to objects with the key name *`starship-x`*. In this case, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority, so the rule with the *`starship`* prefix (priority 2) applies to *`starship-x`*.  

```
<ReplicationConfiguration>

  <Role>arn:aws:iam::account-id:role/role-name</Role>

  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>star</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
  <Rule>
    <Status>Enabled</Status>
    <Priority>2</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>starship</Prefix>
    </Filter>    
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
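The precedence behavior can be sketched locally: among the rules whose filters match an object's key, S3 applies the rule with the highest `<Priority>` value. For the key *`starship-x`*, both rules above match, and the priority-2 rule wins. (This is an illustration of the selection logic, not an S3 API call.)

```python
def applicable_rule(key, rules):
    """Among rules whose prefix matches, pick the highest Priority."""
    matches = [r for r in rules if key.startswith(r["Prefix"])]
    if not matches:
        return None
    return max(matches, key=lambda r: r["Priority"])

rules = [
    {"ID": "rule-star", "Prefix": "star", "Priority": 1},
    {"ID": "rule-starship", "Prefix": "starship", "Priority": 2},
]
print(applicable_rule("starship-x", rules)["ID"])  # rule-starship
print(applicable_rule("star-wars", rules)["ID"])   # rule-star
```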

**Example 4: Example walkthroughs**  
For example walkthroughs, see [Examples for configuring live replication](replication-example-walkthroughs.md).

For more information about the XML structure of replication configuration, see [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) in the *Amazon Simple Storage Service API Reference*. 

## Backward compatibility considerations


The latest version of the replication configuration XML format is V2. XML V2 replication configurations are those that contain the `<Filter>` element for rules, and rules that specify S3 Replication Time Control (S3 RTC).

To see your replication configuration version, you can use the `GetBucketReplication` API operation. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html) in the *Amazon Simple Storage Service API Reference*. 

For backward compatibility, Amazon S3 continues to support the XML V1 replication configuration format. If you've used the XML V1 replication configuration format, consider the following issues that affect backward compatibility:
+ The replication configuration XML V2 format includes the `<Filter>` element for rules. With the `<Filter>` element, you can specify object filters based on the object key prefix, tags, or both to scope the objects that the rule applies to. The replication configuration XML V1 format supports filtering based only on the key prefix. In that case, you add the `<Prefix>` element directly as a child element of the `<Rule>` element, as in the following example:

  ```
  <?xml version="1.0" encoding="UTF-8"?>
  <ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Role>arn:aws:iam::account-id:role/role-name</Role>
    <Rule>
      <Status>Enabled</Status>
      <Prefix>key-prefix</Prefix>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
      </Destination>
  
    </Rule>
  </ReplicationConfiguration>
  ```
+ When you delete an object from your source bucket without specifying an object version ID, Amazon S3 adds a delete marker. If you use the replication configuration XML V1 format, Amazon S3 replicates only delete markers that result from user actions. In other words, Amazon S3 replicates the delete marker only if a user deletes an object. If an expired object is removed by Amazon S3 (as part of a lifecycle action), Amazon S3 doesn't replicate the delete marker. 

  In the replication configuration XML V2 format, you can enable delete marker replication for non-tag-based rules. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md). 

 

# Setting up permissions for live replication

When setting up live replication in Amazon S3, you must acquire the necessary permissions as follows:
+ You must grant a specific set of permissions to the AWS Identity and Access Management (IAM) principal (user or role) that will create the replication rules.
+ Amazon S3 needs permissions to replicate objects on your behalf. You grant these permissions by creating an IAM role and then specifying that role in your replication configuration.
+ When the source and destination buckets aren't owned by the same account, the owner of the destination bucket must also grant the source bucket owner permissions to store the replicas.

**Note**  
If you're using S3 Batch Operations to replicate objects on demand instead of setting up live replication, a different IAM role and different policies are required for S3 Batch Replication. For Batch Replication IAM role and policy examples, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

**Topics**
+ [

## Step 1: Granting permissions to the IAM principal who's creating replication rules
](#setting-repl-config-role)
+ [

## Step 2: Creating an IAM role for Amazon S3 to assume
](#setting-repl-config-same-acctowner)
+ [

## (Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts
](#setting-repl-config-crossacct)
+ [

## (Optional) Step 4: Granting permissions to change replica ownership
](#change-replica-ownership)

## Step 1: Granting permissions to the IAM principal who's creating replication rules


The IAM user or role that you use to create replication rules needs permissions to create replication rules for one-way or two-way replication. If the user or role doesn't have these permissions, you can't create replication rules. For more information, see [IAM Identities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html) in the *IAM User Guide*.

The user or role needs permissions for the following actions:
+ `iam:AttachRolePolicy`
+ `iam:CreatePolicy`
+ `iam:CreateServiceLinkedRole`
+ `iam:PassRole`
+ `iam:PutRolePolicy`
+ `s3:GetBucketVersioning`
+ `s3:GetObjectVersionAcl`
+ `s3:GetObjectVersionForReplication`
+ `s3:GetReplicationConfiguration`
+ `s3:PutReplicationConfiguration`

Following is a sample IAM policy that includes these actions.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetAccessPoint",
                "s3:GetAccountPublicAccessBlock",
                "s3:GetBucketAcl",
                "s3:GetBucketLocation",
                "s3:GetBucketPolicyStatus",
                "s3:GetBucketPublicAccessBlock",
                "s3:ListAccessPoints",
                "s3:ListAllMyBuckets",
                "s3:PutReplicationConfiguration",
                "s3:GetReplicationConfiguration",
                "s3:GetBucketVersioning",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetObjectVersion",
                "s3:GetBucketOwnershipControls",
                "s3:PutBucketOwnershipControls",
                "s3:GetObjectLegalHold",
                "s3:GetObjectRetention",
                "s3:GetBucketObjectLockConfiguration"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1-*",
                "arn:aws:s3:::amzn-s3-demo-bucket2-*/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:List*AccessPoint*",
                "s3:GetMultiRegion*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:Get*",
                "iam:CreateServiceLinkedRole",
                "iam:CreateRole",
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::*:role/service-role/s3*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:List*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:CreatePolicy"
              ],
            "Resource": [
                "arn:aws:iam::*:policy/service-role/s3*",
                "arn:aws:iam::*:role/service-role/s3*"
            ]
        }
    ]
}
```

------

## Step 2: Creating an IAM role for Amazon S3 to assume




By default, all Amazon S3 resources—buckets, objects, and related subresources—are private, and only the resource owner can access the resource. Amazon S3 needs permissions to read and replicate objects from the source bucket. You grant these permissions by creating an IAM role and specifying that role in your replication configuration. 

This section explains the trust policy and the minimum required permissions policy that are attached to this IAM role. The example walkthroughs provide step-by-step instructions to create an IAM role. For more information, see [Examples for configuring live replication](replication-example-walkthroughs.md).

**Note**  
If you're using the console to create your replication configuration, we recommend that you skip this section and instead have the console create this IAM role and the necessary trust and permission policies for you.

The *trust policy* identifies which principal identities can assume the IAM role. The *permissions policy* specifies which actions the IAM role can perform, on which resources, and under what conditions. 
+ The following example shows a *trust policy* where you identify Amazon S3 as the AWS service principal that can assume the role:

------
#### [ JSON ]

****  

  ```
  {
     "Version":"2012-10-17",
     "Statement":[
        {
           "Effect":"Allow",
           "Principal":{
              "Service":"s3.amazonaws.com"
           },
           "Action":"sts:AssumeRole"
        }
     ]
  }
  ```

------
+ The following example shows a *trust policy* where you identify Amazon S3 and S3 Batch Operations as service principals that can assume the role. Use this approach if you're creating a Batch Replication job. For more information, see [Create a Batch Replication job for new replication rules or destinations](s3-batch-replication-new-config.md).

------
#### [ JSON ]

****  

  ```
  {
     "Version":"2012-10-17",
     "Statement":[ 
        {
           "Effect":"Allow",
           "Principal":{
              "Service": [
                "s3.amazonaws.com",
                "batchoperations.s3.amazonaws.com"
             ]
           },
           "Action":"sts:AssumeRole"
        }
     ]
  }
  ```

------

  For more information about IAM roles, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*.
+ The following example shows the *permissions policy*, where you grant the IAM role permissions to perform replication tasks on your behalf. When Amazon S3 assumes the role, it has the permissions that you specify in this policy. In this policy, `amzn-s3-demo-source-bucket` is the source bucket, and `amzn-s3-demo-destination-bucket` is the destination bucket.

------
#### [ JSON ]

****  

  ```
  {
     "Version":"2012-10-17",
     "Statement": [
        {
           "Effect": "Allow",
           "Action": [
              "s3:GetReplicationConfiguration",
              "s3:ListBucket"
           ],
           "Resource": [
              "arn:aws:s3:::amzn-s3-demo-source-bucket"
           ]
        },
        {
           "Effect": "Allow",
           "Action": [
              "s3:GetObjectVersionForReplication",
              "s3:GetObjectVersionAcl",
              "s3:GetObjectVersionTagging"
           ],
           "Resource": [
              "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
           ]
        },
        {
           "Effect": "Allow",
           "Action": [
              "s3:ReplicateObject",
              "s3:ReplicateDelete",
              "s3:ReplicateTags"
           ],
           "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        }
     ]
  }
  ```

------

  The permissions policy grants permissions for the following actions:
  +  `s3:GetReplicationConfiguration` and `s3:ListBucket` – Permissions for these actions on the `amzn-s3-demo-source-bucket` bucket allow Amazon S3 to retrieve the replication configuration and list the bucket content. (The current permissions model requires the `s3:ListBucket` permission for accessing delete markers.)
  + `s3:GetObjectVersionForReplication` and `s3:GetObjectVersionAcl` – Permissions for these actions are granted on all objects to allow Amazon S3 to get a specific object version and the access control list (ACL) associated with the objects.
  + `s3:ReplicateObject` and `s3:ReplicateDelete` – Permissions for these actions on all objects in the `amzn-s3-demo-destination-bucket` bucket allow Amazon S3 to replicate objects or delete markers to the destination bucket. For information about delete markers, see [How delete operations affect replication](replication-what-is-isnot-replicated.md#replication-delete-op). 
**Note**  
Permissions for the `s3:ReplicateObject` action on the `amzn-s3-demo-destination-bucket` bucket also allow replication of metadata such as object tags and ACLs. Therefore, you don't need to explicitly grant permission for the `s3:ReplicateTags` action.
  + `s3:GetObjectVersionTagging` – Permissions for this action on objects in the `amzn-s3-demo-source-bucket` bucket allow Amazon S3 to read object tags for replication. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md). If Amazon S3 doesn't have the `s3:GetObjectVersionTagging` permission, it replicates the objects, but not the object tags.

  For a list of Amazon S3 actions, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#list_amazons3-actions-as-permissions) in the *Service Authorization Reference*.

  For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).
**Important**  
The AWS account that owns the IAM role must have permissions for the actions that it grants to the IAM role.   
For example, suppose that the source bucket contains objects owned by another AWS account. The owner of the objects must explicitly grant the AWS account that owns the IAM role the required permissions through the objects' access control lists (ACLs). Otherwise, Amazon S3 can't access the objects, and replication of the objects fails. For information about ACL permissions, see [Access control list (ACL) overview](acl-overview.md).  
  
The permissions described here are related to the minimum replication configuration. If you choose to add optional replication configurations, you must grant additional permissions to Amazon S3:   
To replicate encrypted objects, you also need to grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication).
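
For reference, the role can be created and its permissions policy attached with the AWS CLI. This is a sketch: the file names (`s3-role-trust-policy.json`, `s3-role-permissions-policy.json`), role name (`replicationRole`), and policy name (`replicationRolePolicy`) are illustrative placeholders, and the commands assume that the trust policy and permissions policy shown earlier are saved as JSON files in the current directory.

```
aws iam create-role \
--role-name replicationRole \
--assume-role-policy-document file://s3-role-trust-policy.json

aws iam put-role-policy \
--role-name replicationRole \
--policy-name replicationRolePolicy \
--policy-document file://s3-role-permissions-policy.json
```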

## (Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts


When the source and destination buckets aren't owned by the same account, the owner of the destination bucket must also add a bucket policy to grant the owner of the source bucket permissions to perform replication actions, as shown in the following example. In this example policy, `amzn-s3-demo-destination-bucket` is the destination bucket.

You can also use the Amazon S3 console to automatically generate this bucket policy for you. For more information, see [Enable receiving replicated objects from a source bucket](#receiving-replicated-objects).

**Note**  
The ARN format of the role might appear different. If the role was created by using the console, the ARN format is `arn:aws:iam::account-ID:role/service-role/role-name`. If the role was created by using the AWS CLI, the ARN format is `arn:aws:iam::account-ID:role/role-name`. For more information, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) in the *IAM User Guide*.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "PolicyForDestinationBucket",
    "Statement": [
        {
            "Sid": "Permissions on objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
            },
            "Action": [
                "s3:ReplicateDelete",
                "s3:ReplicateObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        },
        {
            "Sid": "Permissions on bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
        }
    ]
}
```

------

For an example, see [Configuring replication for buckets in different accounts](replication-walkthrough-2.md).
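
The destination bucket owner can apply a policy like the preceding example with the AWS CLI. In this sketch, the policy is assumed to be saved as `destination-bucket-policy.json`, and `destAcct` is a placeholder profile with credentials for the destination account:

```
aws s3api put-bucket-policy \
--bucket amzn-s3-demo-destination-bucket \
--policy file://destination-bucket-policy.json \
--profile destAcct
```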

If objects in the source bucket are tagged, note the following:
+ If the source bucket owner grants Amazon S3 permission for the `s3:GetObjectVersionTagging` and `s3:ReplicateTags` actions to replicate object tags (through the IAM role), Amazon S3 replicates the tags along with the objects. For information about the IAM role, see [Step 2: Creating an IAM role for Amazon S3 to assume](#setting-repl-config-same-acctowner).
+ If the owner of the destination bucket doesn't want to replicate the tags, they can add the following statement to the destination bucket policy to explicitly deny permission for the `s3:ReplicateTags` action. In this policy, `amzn-s3-demo-destination-bucket` is the destination bucket.

  ```
  ...
     "Statement":[
        {
           "Effect":"Deny",
           "Principal":{
              "AWS":"arn:aws:iam::source-bucket-account-id:role/service-role/source-account-IAM-role"
           },
           "Action":"s3:ReplicateTags",
           "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        }
     ]
  ...
  ```

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

**Enable receiving replicated objects from a source bucket**  
Instead of manually adding the preceding policy to your destination bucket, you can quickly generate the policies needed to enable receiving replicated objects from a source bucket through the Amazon S3 console. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the bucket that you want to use as a destination bucket.

1. Choose the **Management** tab, and scroll down to **Replication rules**.

1. For **Actions**, choose **Receive replicated objects**. 

   Follow the prompts and enter the AWS account ID of the source bucket account, and then choose **Generate policies**. The console generates an Amazon S3 bucket policy and a KMS key policy.

1. To add this policy to your existing bucket policy, either choose **Apply settings** or choose **Copy** to manually copy the changes. 

1. (Optional) Copy the AWS KMS policy to your desired KMS key policy in the AWS Key Management Service console. 

## (Optional) Step 4: Granting permissions to change replica ownership


When different AWS accounts own the source and destination buckets, you can tell Amazon S3 to change the ownership of the replica to the AWS account that owns the destination bucket. To override the ownership of replicas, you must either grant some additional permissions or adjust the S3 Object Ownership settings for the destination bucket. For more information about owner override, see [Changing the replica owner](replication-change-owner.md).

# Examples for configuring live replication
Replication walkthroughs

The following examples provide step-by-step walkthroughs that show how to configure live replication for common use cases. 

**Note**  
Live replication refers to Same-Region Replication (SRR) and Cross-Region Replication (CRR). Live replication doesn't replicate any objects that existed in the bucket before you set up replication. To replicate objects that existed before you set up replication, use on-demand replication. To sync buckets and replicate existing objects on demand, see [Replicating existing objects](s3-batch-replication-batch.md).

These examples demonstrate how to create a replication configuration by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDKs (AWS SDK for Java and AWS SDK for .NET examples are shown). 

For information about installing and configuring the AWS CLI, see the following topics in the *AWS Command Line Interface User Guide*:
+  [Get started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) 
+  [Configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) – You must set up at least one profile. If you are exploring cross-account scenarios, set up two profiles.

For information about the AWS SDKs, see [AWS SDK for Java](https://aws.amazon.com/sdk-for-java/) and [AWS SDK for .NET](https://aws.amazon.com/sdk-for-net/).

**Tip**  
For a step-by-step tutorial that demonstrates how to use live replication to replicate data, see [Tutorial: Replicating data within and between AWS Regions using S3 Replication](https://aws.amazon.com/getting-started/hands-on/replicate-data-using-amazon-s3-replication/?ref=docs_gateway/amazons3/replication-example-walkthroughs.html).

**Topics**
+ [Configuring for buckets in the same account](replication-walkthrough1.md)
+ [Configuring for buckets in different accounts](replication-walkthrough-2.md)
+ [Using S3 Replication Time Control](replication-time-control.md)
+ [Replicating encrypted objects](replication-config-for-kms-objects.md)
+ [Replicating metadata changes](replication-for-metadata-changes.md)
+ [Replicating delete markers](delete-marker-replication.md)

# Configuring replication for buckets in the same account
Configuring for buckets in the same account

Live replication is the automatic, asynchronous copying of objects across general purpose buckets in the same or different AWS Regions. Live replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. For more information, see [Replicating objects within and across Regions](replication.md).

When you configure replication, you add replication rules to the source bucket. Replication rules define which source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. You can create a rule to replicate all the objects in a bucket or a subset of objects with a specific key name prefix, one or more object tags, or both. A destination bucket can be in the same AWS account as the source bucket, or it can be in a different account.

If you specify an object version ID to delete, Amazon S3 deletes that object version in the source bucket, but it doesn't replicate the deletion. In other words, it doesn't delete the same object version from the destination bucket. This behavior protects data from malicious deletions.
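
As an illustration of this behavior, the following sketch deletes one version from the source bucket with the AWS CLI. The key and version ID are placeholders. The specified version is removed from the source bucket only; the corresponding replica version in the destination bucket is unaffected.

```
aws s3api delete-object \
--bucket amzn-s3-demo-source-bucket \
--key pictures/photo.jpg \
--version-id EXAMPLEVERSIONID1234567890
```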

When you add a replication rule to a bucket, the rule is enabled by default, so it starts working as soon as you save it. 

In this example, you set up live replication for source and destination buckets that are owned by the same AWS account. Examples are provided for using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the AWS SDK for Java and AWS SDK for .NET.

## Prerequisites


Before you use the following procedures, make sure that you've set up the necessary permissions for replication, depending on whether the source and destination buckets are owned by the same or different accounts. For more information, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

## Using the S3 console


To configure a replication rule when the destination bucket is in the same AWS account as the source bucket, follow these steps.

If the destination bucket is in a different account from the source bucket, you must add a bucket policy to the destination bucket to grant the owner of the source bucket account permission to replicate objects in the destination bucket. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want.

1. Choose the **Management** tab, scroll down to **Replication rules**, and then choose **Create replication rule**.

    

1. In the **Replication rule configuration** section, under **Replication rule name**, enter a name for your rule to help identify the rule later. The name is required and must be unique within the bucket.

1. Under **Status**, **Enabled** is selected by default. An enabled rule starts to work as soon as you save it. If you want to enable the rule later, choose **Disabled**.

1. If the bucket has existing replication rules, you are instructed to set a priority for the rule. You must set a priority for the rule to avoid conflicts caused by objects that are included in the scope of more than one rule. In the case of overlapping rules, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority. For more information about rule priority, see [Replication configuration file elements](replication-add-config.md).

1. Under **Source bucket**, you have the following options for setting the replication source:
   + To replicate the whole bucket, choose **Apply to all objects in the bucket**. 
   + To replicate all objects that have the same prefix, choose **Limit the scope of this rule using one or more filters**. This limits replication to all objects that have names that begin with the prefix that you specify (for example, `pictures`). Enter a prefix in the **Prefix** box. 
**Note**  
If you enter a prefix that is the name of a folder, you must use **/** (forward slash) as the last character (for example, `pictures/`).
   + To replicate all objects with one or more object tags, choose **Add tag** and enter the key-value pair in the boxes. Repeat the procedure to add another tag. You can combine a prefix and tags. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md).

   The new replication configuration XML schema supports prefix and tag filtering and the prioritization of rules. For more information about the new schema, see [Backward compatibility considerations](replication-add-config.md#replication-backward-compat-considerations). For more information about the XML used with the Amazon S3 API that works behind the user interface, see [Replication configuration file elements](replication-add-config.md). The new schema is described as *replication configuration XML V2*.

1. Under **Destination**, choose the bucket where you want Amazon S3 to replicate objects.
**Note**  
The number of destination buckets is limited to the number of AWS Regions in a given partition. A partition is a grouping of Regions. AWS currently has three partitions: `aws` (Standard Regions), `aws-cn` (China Regions), and `aws-us-gov` (AWS GovCloud (US) Regions). To request an increase in your destination bucket quota, you can use [service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html).
   + To replicate to a bucket or buckets in your account, choose **Choose a bucket in this account**, and enter or browse for the destination bucket names. 
   + To replicate to a bucket or buckets in a different AWS account, choose **Specify a bucket in another account**, and enter the destination bucket account ID and bucket name. 

     If the destination is in a different account from the source bucket, you must add a bucket policy to the destination buckets to grant the owner of the source bucket account permission to replicate objects. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).

     Optionally, if you want to help standardize ownership of new objects in the destination bucket, choose **Change object ownership to the destination bucket owner**. For more information about this option, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
**Note**  
If versioning is not enabled on the destination bucket, you get a warning that contains an **Enable versioning** button. Choose this button to enable versioning on the bucket.

1. Set up an AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate objects on your behalf.

   To set up an IAM role, in the **IAM role** section, select one of the following from the **IAM role** dropdown list:
   + We highly recommend that you choose **Create new role** to have Amazon S3 create a new IAM role for you. When you save the rule, a new policy is generated for the IAM role that matches the source and destination buckets that you choose.
   + You can choose to use an existing IAM role. If you do, you must choose a role that grants Amazon S3 the necessary permissions for replication. Replication fails if this role does not grant Amazon S3 sufficient permissions to follow your replication rule.
**Important**  
When you add a replication rule to a bucket, you must have the `iam:PassRole` permission to be able to pass the IAM role that grants Amazon S3 replication permissions. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.
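
A minimal sketch of an identity-based policy statement that grants this permission follows; the account ID and role name are placeholders:

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": "iam:PassRole",
         "Resource": "arn:aws:iam::account-ID:role/replication-role-name"
      }
   ]
}
```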

1. To replicate objects in the source bucket that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), under **Encryption**, select **Replicate objects encrypted with AWS KMS**. The **AWS KMS keys for encrypting destination objects** list shows the source keys that you allow replication to use. All source KMS keys are included by default. To narrow the KMS key selection, you can choose an alias or key ID. 

   Objects encrypted by AWS KMS keys that you don't select aren't replicated. All KMS keys are selected for you by default, but you can narrow the selection to specific keys. For information about using AWS KMS with replication, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
**Important**  
When you replicate objects that are encrypted with AWS KMS, the AWS KMS request rate doubles in the source Region and increases in the destination Region by the same amount. These increased call rates to AWS KMS are due to the way that data is re-encrypted by using the KMS key that you define for the replication destination Region. AWS KMS has a request rate quota that is per calling account per Region. For information about the quota defaults, see [AWS KMS Quotas - Requests per Second: Varies](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second) in the *AWS Key Management Service Developer Guide*.   
If your current Amazon S3 `PUT` object request rate during replication is more than half the default AWS KMS rate limit for your account, we recommend that you request an increase to your AWS KMS request rate quota. To request an increase, create a case in the Support Center at [Contact Us](https://aws.amazon.com/contact-us/). For example, suppose that your current `PUT` object request rate is 1,000 requests per second and you use AWS KMS to encrypt your objects. In this case, we recommend that you ask Support to increase your AWS KMS rate limit to 2,500 requests per second, in both your source and destination Regions (if different), to ensure that there is no throttling by AWS KMS.   
To see your `PUT` object request rate in the source bucket, view `PutRequests` in the Amazon CloudWatch request metrics for Amazon S3. For information about viewing CloudWatch metrics, see [Using the S3 console](configure-request-metrics-bucket.md#configure-metrics).

   If you chose to replicate objects encrypted with AWS KMS, do the following: 

   1. Under **AWS KMS key for encrypting destination objects**, specify your KMS key in one of the following ways:
     + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from the list of available keys.

        Both the AWS managed key (`aws/s3`) and your customer managed keys appear in this list. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
      + To enter the KMS key Amazon Resource Name (ARN), choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears. This encrypts the replicas in the destination bucket. You can find the ARN for your KMS key in the [IAM console](https://console.aws.amazon.com/iam/), under **Encryption keys**. 
     + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

        For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
You can only use KMS keys that are enabled in the same AWS Region as the bucket. When you choose **Choose from your KMS keys**, the S3 console lists only 100 KMS keys per Region. If you have more than 100 KMS keys in the same Region, you can see only the first 100 KMS keys in the S3 console. To use a KMS key that is not listed in the console, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN.  
When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

     For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*. For more information about using AWS KMS with Amazon S3, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).

1. Under **Destination storage class**, if you want to replicate your data into a specific storage class in the destination, choose **Change the storage class for the replicated objects**. Then choose the storage class that you want to use for the replicated objects in the destination. If you don't choose this option, the storage class for replicated objects is the same class as the original objects.

1. Under **Additional replication options**, you can select the following options:
   + If you want to enable S3 Replication Time Control (S3 RTC) in your replication configuration, select **Replication Time Control (RTC)**. For more information about this option, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).
   + If you want to enable S3 Replication metrics in your replication configuration, select **Replication metrics and events**. For more information, see [Monitoring replication with metrics, event notifications, and statuses](replication-metrics.md).
   + If you want to enable delete marker replication in your replication configuration, select **Delete marker replication**. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md).
   + If you want to enable Amazon S3 replica modification sync in your replication configuration, select **Replica modification sync**. For more information, see [Replicating metadata changes with replica modification sync](replication-for-metadata-changes.md).
**Note**  
When you use S3 RTC or S3 Replication metrics, additional fees apply.

1. To finish, choose **Save**.

1. After you save your rule, you can edit, enable, disable, or delete your rule by selecting your rule and choosing **Edit rule**. 
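
To confirm the saved configuration, you can also retrieve it with the AWS CLI. This sketch assumes that your CLI credentials can read the source bucket's configuration and that the bucket is named `amzn-s3-demo-source-bucket`; the response includes the IAM role ARN and each rule's status.

```
aws s3api get-bucket-replication \
--bucket amzn-s3-demo-source-bucket
```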

## Using the AWS CLI


To use the AWS CLI to set up replication when the source and destination buckets are owned by the same AWS account, you do the following:
+ Create source and destination buckets.
+ Enable versioning on the buckets.
+ Create an AWS Identity and Access Management (IAM) role that gives Amazon S3 permission to replicate objects.
+ Add the replication configuration to the source bucket.

Finally, you test the setup to verify that replication works.

**To set up replication when the source and destination buckets are owned by the same AWS account**

1. Set a credentials profile for the AWS CLI. This example uses the profile name `acctA`. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 
**Important**  
The profile that you use for this example must have the necessary permissions. For example, in the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if the profile that you use has the `iam:PassRole` permission. For more information, see [Grant a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*. If you use administrator credentials to create a named profile, you can perform all the tasks. 

1. Create a source bucket and enable versioning on it by using the following AWS CLI commands. To use these commands, replace the *`user input placeholders`* with your own information. 

   The following `create-bucket` command creates a source bucket named `amzn-s3-demo-source-bucket` in the US East (N. Virginia) (`us-east-1`) Region:

   

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   The following `put-bucket-versioning` command enables S3 Versioning on the `amzn-s3-demo-source-bucket` bucket: 

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create a destination bucket and enable versioning on it by using the following AWS CLI commands. To use these commands, replace the *`user input placeholders`* with your own information. 
**Note**  
To set up a replication configuration when both source and destination buckets are in the same AWS account, you use the same profile for the source and destination buckets. This example uses `acctA`.   
To test a replication configuration when the buckets are owned by different AWS accounts, specify different profiles for each account. For example, use an `acctB` profile for the destination bucket.

   

   The following `create-bucket` command creates a destination bucket named `amzn-s3-demo-destination-bucket` in the US West (Oregon) (`us-west-2`) Region:

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctA
   ```

   The following `put-bucket-versioning` command enables S3 Versioning on the `amzn-s3-demo-destination-bucket` bucket: 

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create an IAM role. You specify this role in the replication configuration that you add to the source bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create a role.
   + Attach a permissions policy to the role.

   1. Create the IAM role.

      1. Copy the following trust policy and save it to a file named `s3-role-trust-policy.json` in the current directory on your local computer. This policy grants the Amazon S3 service principal permissions to assume the role.

------
#### [ JSON ]


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```

------

      1. Run the following command to create a role.

         ```
         $ aws iam create-role \
         --role-name replicationRole \
         --assume-role-policy-document file://s3-role-trust-policy.json  \
         --profile acctA
         ```

   1. Attach a permissions policy to the role.

      1. Copy the following permissions policy and save it to a file named `s3-role-permissions-policy.json` in the current directory on your local computer. This policy grants permissions for various Amazon S3 bucket and object actions. 

------
#### [ JSON ]


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ReplicateTags"
                  ],
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               }
            ]
         }
         ```

------
**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

      1. Run the following command to create a policy and attach it to the role. Replace the *`user input placeholders`* with your own information.

         ```
         $ aws iam put-role-policy \
         --role-name replicationRole \
         --policy-document file://s3-role-permissions-policy.json \
         --policy-name replicationRolePolicy \
         --profile acctA
         ```

1. Add a replication configuration to the source bucket. 

   1. Although the Amazon S3 API requires that you specify the replication configuration as XML, the AWS CLI requires that you specify the replication configuration as JSON. Save the following JSON in a file called `replication.json` to the local directory on your computer.

      ```
      {
        "Role": "IAM-role-ARN",
        "Rules": [
          {
            "Status": "Enabled",
            "Priority": 1,
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Filter" : { "Prefix": "Tax"},
            "Destination": {
              "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            }
          }
        ]
      }
      ```

   1. Update the JSON by replacing the values for the `amzn-s3-demo-destination-bucket` and `IAM-role-ARN` with your own information. Save the changes.

   1. Run the following `put-bucket-replication` command to add the replication configuration to your source bucket. Be sure to provide the source bucket name:

      ```
      $ aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

   To retrieve the replication configuration, use the `get-bucket-replication` command:

   ```
   $ aws s3api get-bucket-replication \
   --bucket amzn-s3-demo-source-bucket \
   --profile acctA
   ```

1. Test the setup in the Amazon S3 console by doing the following:

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. In the left navigation pane, choose **Buckets**. In the **General purpose buckets** list, choose the source bucket.

   1. In the source bucket, create a folder named `Tax`. 

   1. Add sample objects to the `Tax` folder in the source bucket. 
**Note**  
The amount of time that it takes for Amazon S3 to replicate an object depends on the size of the object. For information about how to see the status of replication, see [Getting replication status information](replication-status.md).

      In the destination bucket, verify the following:
      + That Amazon S3 replicated the objects.
      + That the objects are replicas. On the **Properties** tab for your objects, scroll down to the **Object management overview** section. Under **Management configurations**, see the value under **Replication status**. Make sure that this value is set to `REPLICA`.
      + That the replicas are owned by the source bucket account. You can verify the object ownership on the **Permissions** tab for your objects. 

        If the source and destination buckets are owned by different accounts, you can add an optional configuration to tell Amazon S3 to change the replica ownership to the destination account. For an example, see [How to change the replica owner](replication-change-owner.md#replication-walkthrough-3). 

## Using the AWS SDKs


Use the following code examples to add a replication configuration to a bucket with the AWS SDK for Java or the AWS SDK for .NET.

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

------
#### [ Java ]

To add a replication configuration to a bucket and then retrieve and verify the configuration using the AWS SDK for Java, you can use the S3Client to manage replication settings programmatically.

For examples of how to configure replication with the AWS SDK for Java, see [Set replication configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutBucketReplication_section.html) in the *Amazon S3 API Reference*.

------
#### [ .NET ]

The following AWS SDK for .NET code example adds a replication configuration to a bucket and then retrieves it. To use this code, provide the names for your buckets and the Amazon Resource Name (ARN) for your IAM role. For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class CrossRegionReplicationTest
    {
        private const string sourceBucket = "*** source bucket ***";
        // Bucket ARN example - arn:aws:s3:::destinationbucket
        private const string destinationBucketArn = "*** destination bucket ARN ***";
        private const string roleArn = "*** IAM Role ARN ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint sourceBucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;
        public static void Main()
        {
            s3Client = new AmazonS3Client(sourceBucketRegion);
            EnableReplicationAsync().Wait();
        }
        static async Task EnableReplicationAsync()
        {
            try
            {
                ReplicationConfiguration replConfig = new ReplicationConfiguration
                {
                    Role = roleArn,
                    Rules =
                        {
                            new ReplicationRule
                            {
                                Prefix = "Tax",
                                Status = ReplicationRuleStatus.Enabled,
                                Destination = new ReplicationDestination
                                {
                                    BucketArn = destinationBucketArn
                                }
                            }
                        }
                };

                PutBucketReplicationRequest putRequest = new PutBucketReplicationRequest
                {
                    BucketName = sourceBucket,
                    Configuration = replConfig
                };

                PutBucketReplicationResponse putResponse = await s3Client.PutBucketReplicationAsync(putRequest);

                // Verify configuration by retrieving it.
                await RetrieveReplicationConfigurationAsync(s3Client);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
        private static async Task RetrieveReplicationConfigurationAsync(IAmazonS3 client)
        {
            // Retrieve the configuration.
            GetBucketReplicationRequest getRequest = new GetBucketReplicationRequest
            {
                BucketName = sourceBucket
            };
            GetBucketReplicationResponse getResponse = await client.GetBucketReplicationAsync(getRequest);
            // Print.
            Console.WriteLine("Printing replication configuration information...");
            Console.WriteLine("Role ARN: {0}", getResponse.Configuration.Role);
            foreach (var rule in getResponse.Configuration.Rules)
            {
                Console.WriteLine("ID: {0}", rule.Id);
                Console.WriteLine("Prefix: {0}", rule.Prefix);
                Console.WriteLine("Status: {0}", rule.Status);
            }
        }
    }
}
```

------

# Configuring replication for buckets in different accounts

Live replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. Live replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. For more information, see [Replicating objects within and across Regions](replication.md).

When you configure replication, you add replication rules to the source bucket. Replication rules define which source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. You can create a rule to replicate all the objects in a bucket or a subset of objects with a specific key name prefix, one or more object tags, or both. A destination bucket can be in the same AWS account as the source bucket, or it can be in a different account.
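When a rule matches on both a key name prefix and object tags, the replication configuration combines them in an `And` element inside the rule's `Filter`. The following Python sketch builds such a rule; the prefix, tag values, and bucket ARN are illustrative placeholders, not resources from this walkthrough:

```python
import json

# Hypothetical rule: replicate only objects under the "Tax/" prefix that are
# also tagged project=alpha. Prefix and tags are combined in an "And" element.
rule = {
    "Status": "Enabled",
    "Priority": 1,
    # Delete marker replication must be disabled for tag-based filters.
    "DeleteMarkerReplication": {"Status": "Disabled"},
    "Filter": {
        "And": {
            "Prefix": "Tax/",
            "Tags": [{"Key": "project", "Value": "alpha"}],
        }
    },
    "Destination": {"Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"},
}

print(json.dumps(rule, indent=2))
```

A rule that filters only on a prefix or only on tags omits the `And` element and specifies `Prefix` or `Tags` directly in the `Filter`.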

If you specify an object version ID to delete, Amazon S3 deletes that object version in the source bucket, but it doesn't replicate the deletion to the destination bucket. In other words, it doesn't delete the same object version from the destination bucket. This behavior protects data from malicious deletions.

When you add a replication rule to a bucket, the rule is enabled by default, so it starts working as soon as you save it. 

Setting up live replication when the source and destination buckets are owned by different AWS accounts is similar to setting up replication when both buckets are owned by the same account. However, there are several differences when you're configuring replication in a cross-account scenario: 
+ The destination bucket owner must grant the source bucket owner permission to replicate objects in the destination bucket policy. 
+ If you're replicating objects that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) in a cross-account scenario, the owner of the KMS key must grant the source bucket owner permission to use the KMS key. For more information, see [Granting additional permissions for cross-account scenarios](replication-config-for-kms-objects.md#replication-kms-cross-acct-scenario). 
+ By default, replicated objects are owned by the source bucket owner. In a cross-account scenario, you might want to configure replication to change the ownership of the replicated objects to the owner of the destination bucket. For more information, see [Changing the replica owner](replication-change-owner.md).

**To configure replication when the source and destination buckets are owned by different AWS accounts**

1. In this example, you create source and destination buckets in two different AWS accounts. You must have two credential profiles set for the AWS CLI. This example uses `acctA` and `acctB` for those profile names. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 

1. Follow the step-by-step instructions in [Configuring replication for buckets in the same account](replication-walkthrough1.md) with the following changes:
   + For all AWS CLI commands related to source bucket activities (such as creating the source bucket, enabling versioning, and creating the IAM role), use the `acctA` profile. Use the `acctB` profile to create the destination bucket. 
   + Make sure that the permissions policy for the IAM role specifies the source and destination buckets that you created for this example.

1. In the console, add the following bucket policy on the destination bucket to allow the owner of the source bucket to replicate objects. For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md). Be sure to edit the policy by providing the AWS account ID of the source bucket owner, the IAM role name, and the destination bucket name. 
**Note**  
To use the following example, replace the `user input placeholders` with your own information. Replace `amzn-s3-demo-destination-bucket` with your destination bucket name. Replace `source-bucket-account-ID:role/service-role/source-account-IAM-role` in the IAM Amazon Resource Name (ARN) with the IAM role that you're using for this replication configuration.  
If you created the IAM service role manually, set the role path in the IAM ARN as `role/service-role/`, as shown in the following policy example. For more information, see [IAM ARNs](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) in the *IAM User Guide*. 

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Id": "",
       "Statement": [
           {
               "Sid": "Set-permissions-for-objects",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
               },
               "Action": [
                   "s3:ReplicateObject",
                   "s3:ReplicateDelete"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
           },
           {
               "Sid": "Set-permissions-on-bucket",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
               },
               "Action": [
                   "s3:GetBucketVersioning",
                   "s3:PutBucketVersioning"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
           }
       ]
   }
   ```

------

1. (Optional) If you're replicating objects that are encrypted with SSE-KMS, the owner of the KMS key must grant the source bucket owner permission to use the KMS key. For more information, see [Granting additional permissions for cross-account scenarios](replication-config-for-kms-objects.md#replication-kms-cross-acct-scenario).

1. (Optional) In replication, the owner of the source object owns the replica by default. When the source and destination buckets are owned by different AWS accounts, you can add optional configuration settings to change replica ownership to the AWS account that owns the destination buckets. This includes granting the `ObjectOwnerOverrideToBucketOwner` permission. For more information, see [Changing the replica owner](replication-change-owner.md).

# Changing the replica owner

In replication, the owner of the source object also owns the replica by default. However, when the source and destination buckets are owned by different AWS accounts, you might want to change the replica ownership. For example, you might want to change the ownership to restrict access to object replicas. In your replication configuration, you can add optional configuration settings to change replica ownership to the AWS account that owns the destination buckets. 

To change the replica owner, you do the following:
+ Add the *owner override* option to the replication configuration to tell Amazon S3 to change replica ownership. 
+ Grant Amazon S3 the `s3:ObjectOwnerOverrideToBucketOwner` permission to change replica ownership. 
+ Add the `s3:ObjectOwnerOverrideToBucketOwner` permission in the destination bucket policy to allow changing replica ownership. The `s3:ObjectOwnerOverrideToBucketOwner` permission allows the owner of the destination buckets to accept the ownership of object replicas.

For more information, see [Considerations for the ownership override option](#repl-ownership-considerations) and [Adding the owner override option to the replication configuration](#repl-ownership-owneroverride-option). For a working example with step-by-step instructions, see [How to change the replica owner](#replication-walkthrough-3).

**Important**  
Instead of using the owner override option, you can use the bucket owner enforced setting for Object Ownership. When you use replication and the source and destination buckets are owned by different AWS accounts, the bucket owner of the destination bucket can use the bucket owner enforced setting for Object Ownership to change replica ownership to the AWS account that owns the destination bucket. This setting disables object access control lists (ACLs).   
The bucket owner enforced setting mimics the existing owner override behavior without the need of the `s3:ObjectOwnerOverrideToBucketOwner` permission. All objects that are replicated to the destination bucket with the bucket owner enforced setting are owned by the destination bucket owner. For more information about Object Ownership, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
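As a sketch, the bucket owner enforced setting can be applied by writing an `OwnershipControls` payload to a file and passing it to the `put-bucket-ownership-controls` command; the file name here is just an example:

```python
import json

# OwnershipControls payload that sets the bucket owner enforced setting,
# which disables ACLs and makes the bucket owner own all objects.
ownership_controls = {
    "Rules": [
        {"ObjectOwnership": "BucketOwnerEnforced"}
    ]
}

# Write the file that the AWS CLI command consumes.
with open("ownership-controls.json", "w") as f:
    json.dump(ownership_controls, f, indent=2)
```

You could then apply the setting with, for example, `aws s3api put-bucket-ownership-controls --bucket amzn-s3-demo-destination-bucket --ownership-controls file://ownership-controls.json --profile acctB`.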

## Considerations for the ownership override option


When you configure the ownership override option, the following considerations apply:
+ By default, the owner of the source object also owns the replica. Amazon S3 replicates the object version and the ACL associated with it.

  If you add the owner override option to your replication configuration, Amazon S3 replicates only the object version, not the ACL. In addition, Amazon S3 doesn't replicate subsequent changes to the source object ACL. Amazon S3 sets the ACL on the replica that grants full control to the destination bucket owner. 
+  When you update a replication configuration to enable or disable the owner override, the following behavior occurs:
  + If you add the owner override option to the replication configuration:

    When Amazon S3 replicates an object version, it discards the ACL that's associated with the source object. Instead, Amazon S3 sets the ACL on the replica, giving full control to the owner of the destination bucket. Amazon S3 doesn't replicate subsequent changes to the source object ACL. However, this ACL change doesn't apply to object versions that were replicated before you set the owner override option. ACL updates on source objects that were replicated before the owner override was set continue to be replicated (because the object and its replicas continue to have the same owner).
  + If you remove the owner override option from the replication configuration:

    Amazon S3 replicates new objects that appear in the source bucket, along with their associated ACLs, to the destination buckets. For objects that were replicated before you removed the owner override, Amazon S3 doesn't replicate the ACLs, because the object ownership change that Amazon S3 made remains in effect. That is, ACLs put on object versions that were replicated while the owner override was set are still not replicated.

## Adding the owner override option to the replication configuration


**Warning**  
Add the owner override option only when the source and destination buckets are owned by different AWS accounts. Amazon S3 doesn't check whether the buckets are owned by the same or different accounts. If you add the owner override when both buckets are owned by the same AWS account, Amazon S3 applies the owner override anyway. This option grants full permissions to the owner of the destination bucket and doesn't replicate subsequent updates to the source objects' access control lists (ACLs). The replica owner can directly change the ACL that's associated with a replica by using a `PutObjectAcl` request, but not through replication.

To specify the owner override option, add the following to each `Destination` element: 
+ The `AccessControlTranslation` element, which tells Amazon S3 to change replica ownership
+ The `Account` element, which specifies the AWS account of the destination bucket owner 

```
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  ...
  <Rule>
    ...
    <Destination>
      ...
      <AccessControlTranslation>
        <Owner>Destination</Owner>
      </AccessControlTranslation>
      <Account>destination-bucket-owner-account-id</Account>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```

The following example replication configuration tells Amazon S3 to replicate objects that have the *`Tax`* key prefix to the `amzn-s3-demo-destination-bucket` destination bucket and change ownership of the replicas. To use this example, replace the `user input placeholders` with your own information.

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   <Role>arn:aws:iam::account-id:role/role-name</Role>
   <Rule>
      <ID>Rule-1</ID>
      <Priority>1</Priority>
      <Status>Enabled</Status>
      <DeleteMarkerReplication>
         <Status>Disabled</Status>
      </DeleteMarkerReplication>
      <Filter>
         <Prefix>Tax</Prefix>
      </Filter>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <Account>destination-bucket-owner-account-id</Account>
         <AccessControlTranslation>
            <Owner>Destination</Owner>
         </AccessControlTranslation>
      </Destination>
   </Rule>
</ReplicationConfiguration>
```
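Because the AWS CLI takes the replication configuration as JSON rather than XML, an owner-override configuration like the preceding example might be written for the CLI as in the following sketch; the role ARN and the destination account ID are placeholders:

```python
import json

# JSON form of an owner-override replication configuration, suitable for
# `aws s3api put-bucket-replication`. All ARNs and account IDs are placeholders.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/replicationRole",
    "Rules": [
        {
            "ID": "Rule-1",
            "Priority": 1,
            "Status": "Enabled",
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Filter": {"Prefix": "Tax"},
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
                "Account": "444455556666",
                # Tells Amazon S3 to change replica ownership to the
                # destination bucket owner.
                "AccessControlTranslation": {"Owner": "Destination"},
            },
        }
    ],
}

print(json.dumps(replication_config, indent=2))
```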

## Granting Amazon S3 permission to change replica ownership


Grant Amazon S3 permissions to change replica ownership by adding permission for the `s3:ObjectOwnerOverrideToBucketOwner` action in the permissions policy that's associated with the AWS Identity and Access Management (IAM) role. This role is the IAM role that you specified in the replication configuration that allows Amazon S3 to assume and replicate objects on your behalf. To use the following example, replace `amzn-s3-demo-destination-bucket` with the name of the destination bucket.

```
...
{
    "Effect":"Allow",
    "Action":[
        "s3:ObjectOwnerOverrideToBucketOwner"
    ],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}
...
```

## Adding permission in the destination bucket policy to allow changing replica ownership


The owner of the destination bucket must grant the owner of the source bucket permission to change replica ownership. The owner of the destination bucket grants the owner of the source bucket permission for the `s3:ObjectOwnerOverrideToBucketOwner` action. This permission allows the destination bucket owner to accept ownership of the object replicas. The following example bucket policy statement shows how to do this. To use this example, replace the `user input placeholders` with your own information.

```
...
{
    "Sid":"1",
    "Effect":"Allow",
    "Principal":{"AWS":"source-bucket-account-id"},
    "Action":["s3:ObjectOwnerOverrideToBucketOwner"],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}
...
```

## How to change the replica owner

When the source and destination buckets in a replication configuration are owned by different AWS accounts, you can tell Amazon S3 to change replica ownership to the AWS account that owns the destination bucket. The following examples show how to use the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the AWS SDKs to change replica ownership. 

### Using the S3 console


For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). That topic provides instructions for setting up a replication configuration when the source and destination buckets are owned by the same AWS account and when they're owned by different accounts.

### Using the AWS CLI


The following procedure shows how to change replica ownership by using the AWS CLI. In this procedure, you do the following: 
+ Create the source and destination buckets.
+ Enable versioning on the buckets.
+ Create an AWS Identity and Access Management (IAM) role that gives Amazon S3 permission to replicate objects.
+ Add the replication configuration to the source bucket, directing Amazon S3 to change the replica ownership.
+ Test your replication configuration.

**To change replica ownership when the source and destination buckets are owned by different AWS accounts (AWS CLI)**

To use the example AWS CLI commands in this procedure, replace the `user input placeholders` with your own information. 

1. In this example, you create the source and destination buckets in two different AWS accounts. To work with these two accounts, configure the AWS CLI with two named profiles. This example uses profiles named *`acctA`* and *`acctB`*, respectively. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 
**Important**  
The profiles that you use for this procedure must have the necessary permissions. For example, in the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if the profile that you use has the `iam:PassRole` permission. If you use administrator user credentials to create a named profile, then you can perform all of the tasks in this procedure. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*. 
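As a sketch, an identity-based policy statement that grants the `iam:PassRole` permission for the replication role might look like the following; the role ARN is a placeholder, and the `iam:PassedToService` condition optionally restricts the pass to Amazon S3:

```python
import json

# Hypothetical identity-based policy that lets a user pass the replication
# role to Amazon S3. The role ARN is a placeholder.
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/replicationRole",
            # Optional: only allow passing the role to the S3 service.
            "Condition": {
                "StringEquals": {"iam:PassedToService": "s3.amazonaws.com"}
            },
        }
    ],
}

print(json.dumps(pass_role_policy, indent=2))
```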

1. Create the source bucket and enable versioning. This example creates a source bucket named `amzn-s3-demo-source-bucket` in the US East (N. Virginia) (`us-east-1`) Region. 

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create a destination bucket and enable versioning. This example creates a destination bucket named `amzn-s3-demo-destination-bucket` in the US West (Oregon) (`us-west-2`) Region. Use an AWS account profile that's different from the one that you used for the source bucket.

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctB
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctB
   ```

1. You must add permissions to your destination bucket policy to allow changing the replica ownership.

   1.  Save the following policy to a file named `destination-bucket-policy.json`. Make sure to replace the *`user input placeholders`* with your own information.

------
#### [ JSON ]


      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Sid": "destination_bucket_policy_sid",
                  "Principal": {
                      "AWS": "source-bucket-owner-123456789012"
                  },
                  "Action": [
                      "s3:ReplicateObject",
                      "s3:ReplicateDelete",
                      "s3:ObjectOwnerOverrideToBucketOwner",
                      "s3:ReplicateTags",
                      "s3:GetObjectVersionTagging"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
                  ]
              }
          ]
      }
      ```

------

   1. Add the preceding policy to the destination bucket by using the following `put-bucket-policy` command:

      ```
      aws s3api put-bucket-policy \
      --region us-west-2 \
      --bucket amzn-s3-demo-destination-bucket \
      --policy file://destination-bucket-policy.json \
      --profile acctB
      ```

1. Create an IAM role. You specify this role in the replication configuration that you add to the source bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create the role.
   + Attach a permissions policy to the role.

   1. Create the IAM role.

      1. Copy the following trust policy and save it to a file named `s3-role-trust-policy.json` in the current directory on your local computer. This policy grants Amazon S3 permissions to assume the role.

------
#### [ JSON ]

****  

         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```

------

      1. Run the following AWS CLI `create-role` command to create the IAM role:

         ```
         $ aws iam create-role \
         --role-name replicationRole \
         --assume-role-policy-document file://s3-role-trust-policy.json  \
         --profile acctA
         ```

         Make note of the Amazon Resource Name (ARN) of the IAM role that you created. You will need this ARN in a later step.

   1. Attach a permissions policy to the role.

      1. Copy the following permissions policy and save it to a file named `s3-role-perm-pol-changeowner.json` in the current directory on your local computer. This policy grants permissions for various Amazon S3 bucket and object actions. In the following steps, you attach this policy to the IAM role that you created earlier. 

------
#### [ JSON ]

****  

         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ObjectOwnerOverrideToBucketOwner",
                     "s3:ReplicateTags",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               }
            ]
         }
         ```

------

      1. To attach the preceding permissions policy to the role, run the following `put-role-policy` command:

         ```
         $ aws iam put-role-policy \
         --role-name replicationRole \
         --policy-document file://s3-role-perm-pol-changeowner.json \
         --policy-name replicationRolechangeownerPolicy \
         --profile acctA
         ```

1. Add a replication configuration to your source bucket.

   1. The AWS CLI requires specifying the replication configuration as JSON. Save the following JSON in a file named `replication.json` in the current directory on your local computer. In the configuration, the `AccessControlTranslation` specifies the change in replica ownership from the source bucket owner to the destination bucket owner. 

      ```
      {
         "Role":"IAM-role-ARN",
         "Rules":[
            {
               "Status":"Enabled",
               "Priority":1,
               "DeleteMarkerReplication":{
                  "Status":"Disabled"
               },
               "Filter":{
               },
               "Status":"Enabled",
               "Destination":{
                  "Bucket":"arn:aws:s3:::amzn-s3-demo-destination-bucket",
                  "Account":"destination-bucket-owner-account-id",
                  "AccessControlTranslation":{
                     "Owner":"Destination"
                  }
               }
            }
         ]
      }
      ```

   1. Edit the JSON by providing values for the destination bucket name, the destination bucket owner account ID, and the `IAM-role-ARN`. Replace *`IAM-role-ARN`* with the ARN of the IAM role that you created earlier. Save the changes.

   1. To add the replication configuration to the source bucket, run the following command:

      ```
      $ aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

1. Test your replication configuration by checking replica ownership in the Amazon S3 console.

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. Add objects to the source bucket. Verify that the destination bucket contains the object replicas and that the ownership of the replicas has changed to the AWS account that owns the destination bucket.
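
The replication configuration from the preceding walkthrough can also be assembled programmatically. The following Python sketch (the function and variable names are illustrative, not part of the walkthrough) builds the same ownership-override configuration and verifies the constraint that `AccessControlTranslation` requires a destination `Account`:

```python
import json

def build_change_owner_config(role_arn, destination_bucket_arn, destination_account_id):
    """Assemble a replication configuration that transfers replica
    ownership to the destination bucket owner."""
    config = {
        "Role": role_arn,
        "Rules": [
            {
                "Status": "Enabled",
                "Priority": 1,
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Filter": {},  # an empty filter applies the rule to all objects
                "Destination": {
                    "Bucket": destination_bucket_arn,
                    "Account": destination_account_id,
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    }
    for rule in config["Rules"]:
        dest = rule["Destination"]
        # AccessControlTranslation is valid only when Account is also set.
        if "AccessControlTranslation" in dest and "Account" not in dest:
            raise ValueError("Account is required with AccessControlTranslation")
    return json.dumps(config, indent=3)
```

The serialized output is the same shape of document that you pass to `put-bucket-replication` with `file://replication.json`.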

### Using the AWS SDKs


 For a code example to add a replication configuration, see [Using the AWS SDKs](replication-walkthrough1.md#replication-ex1-sdk). You must modify the replication configuration appropriately. For conceptual information, see [Changing the replica owner](#replication-change-owner). 

# Meeting compliance requirements with S3 Replication Time Control

S3 Replication Time Control (S3 RTC) helps you meet compliance or business requirements for data replication and provides visibility into Amazon S3 replication times. S3 RTC replicates most objects that you upload to Amazon S3 in seconds, and 99.9 percent of those objects within 15 minutes. 

By default, S3 RTC includes two ways to track the progress of replication: 
+ **S3 Replication metrics** – You can use S3 Replication metrics to monitor the total number of S3 API operations that are pending replication, the total size of objects pending replication, the maximum replication time to the destination Region, and the total number of operations that failed replication. You can then monitor each dataset that you replicate separately. You can also enable S3 Replication metrics independently of S3 RTC. For more information, see [Using S3 Replication metrics](repl-metrics.md).

  Replication rules with S3 Replication Time Control (S3 RTC) enabled publish S3 Replication metrics. Replication metrics are available within 15 minutes of enabling S3 RTC. Replication metrics are available through the Amazon S3 console, the Amazon S3 API, the AWS SDKs, the AWS Command Line Interface (AWS CLI), and Amazon CloudWatch. For more information about CloudWatch metrics, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md). For more information about viewing replication metrics through the Amazon S3 console, see [Viewing replication metrics](repl-metrics.md#viewing-replication-metrics).

  S3 Replication metrics are billed at the same rate as Amazon CloudWatch custom metrics. For information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 
+ **Amazon S3 Event Notifications** – S3 RTC provides `OperationMissedThreshold` and `OperationReplicatedAfterThreshold` events that notify the bucket owner if object replication exceeds or occurs after the 15-minute threshold. With S3 RTC, Amazon S3 Event Notifications can notify you in the rare instance when objects don't replicate within 15 minutes and when those objects replicate after the 15-minute threshold. 

  Replication events are available within 15 minutes of enabling S3 RTC. Amazon S3 Event Notifications are available through Amazon SQS, Amazon SNS, or AWS Lambda. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).

 

## Best practices and guidelines for S3 RTC

When replicating data in Amazon S3 with S3 Replication Time Control (S3 RTC) enabled, follow these best practice guidelines to optimize replication performance for your workloads. 

**Topics**
+ [

### Amazon S3 Replication and request rate performance guidelines
](#rtc-request-rate-performance)
+ [

### Estimating your replication request rates
](#estimating-replication-request-rates)
+ [

### Exceeding S3 RTC data transfer rate quotas
](#exceed-rtc-data-transfer-limits)
+ [

### AWS KMS encrypted object replication request rates
](#kms-object-replication-request-rates)

### Amazon S3 Replication and request rate performance guidelines


When uploading and retrieving storage from Amazon S3, your applications can achieve thousands of transactions per second in request performance. For example, an application can achieve at least 3,500 `PUT`/`COPY`/`POST`/`DELETE` or 5,500 `GET`/`HEAD` requests per second per prefix in an S3 bucket, including the requests that S3 Replication makes on your behalf. There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing reads. For example, if you create 10 prefixes in an S3 bucket to parallelize reads, you can scale your read performance to 55,000 read requests per second. 

Amazon S3 automatically scales in response to sustained request rates above these guidelines, or to sustained request rates concurrent with `LIST` requests. While Amazon S3 is internally optimizing for the new request rate, you might temporarily receive HTTP 503 responses until the optimization is complete. This behavior might occur when your requests per second increase, or when you first enable S3 RTC. During these periods, your replication latency might increase. The S3 RTC service level agreement (SLA) doesn't apply to time periods when the Amazon S3 performance guidelines on requests per second are exceeded. 

The S3 RTC SLA also doesn't apply during time periods where your replication data transfer rate exceeds the default 1 gigabit per second (Gbps) quota. If you expect your replication transfer rate to exceed 1 Gbps, you can contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) to request an increase in your replication transfer rate quota. 

### Estimating your replication request rates


Your total request rate including the requests that Amazon S3 replication makes on your behalf must be within the Amazon S3 request rate guidelines for both the replication source and destination buckets. For each object replicated, Amazon S3 replication makes up to five `GET`/`HEAD` requests and one `PUT` request to the source bucket, and one `PUT` request to each destination bucket.

For example, if you expect to replicate 100 objects per second, Amazon S3 replication might perform an additional 100 `PUT` requests on your behalf, for a total of 200 `PUT` requests per second to the source S3 bucket. Amazon S3 replication might also perform up to 500 `GET`/`HEAD` requests (5 `GET`/`HEAD` requests for each object that's replicated). 

**Note**  
You incur costs for only one `PUT` request per object replicated. For more information, see the pricing information in the [Amazon S3 FAQs about replication](https://aws.amazon.com/s3/faqs/#Replication). 
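
The arithmetic above can be wrapped in a small helper. The following Python sketch (an illustration, not an official AWS tool) estimates the extra requests that replication adds per second, so that you can compare the totals against the per-prefix request rate guidelines:

```python
def estimate_replication_requests(objects_per_second, destination_buckets=1):
    """Estimate the requests that S3 Replication can make on your behalf:
    up to 5 GET/HEAD requests and 1 PUT request per object to the source
    bucket, and 1 PUT request per object to each destination bucket."""
    return {
        "source_put": objects_per_second,
        "source_get_head": 5 * objects_per_second,
        "destination_put": objects_per_second * destination_buckets,
    }

# Replicating 100 objects per second adds up to 100 PUT and 500 GET/HEAD
# requests per second on the source bucket, on top of your own uploads.
rates = estimate_replication_requests(100)
```

Remember that these replication requests count toward the same per-prefix guidelines as the requests that your own applications make.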

### Exceeding S3 RTC data transfer rate quotas


If you expect your S3 RTC data transfer rate to exceed the default 1 Gbps quota, contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) to request an increase in your replication transfer rate quota. 

### AWS KMS encrypted object replication request rates


When you replicate objects that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), AWS KMS requests per second quotas apply. AWS KMS might reject an otherwise valid request because your request rate exceeds the quota for the number of requests per second. When a request is throttled, AWS KMS returns a `ThrottlingException` error. The AWS KMS request rate quota applies to requests that you make directly and to requests made by Amazon S3 replication on your behalf. 

For example, if you expect to replicate 1,000 objects per second, you can subtract 2,000 requests from your AWS KMS request rate quota. The resulting request rate per second is available for your AWS KMS workloads excluding replication. You can use [AWS KMS request metrics in Amazon CloudWatch](https://docs.aws.amazon.com/kms/latest/developerguide/monitoring-cloudwatch.html) to monitor the total AWS KMS request rate on your AWS account.
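
Following the example above (roughly two AWS KMS requests per replicated object), a quick calculation shows how much of your AWS KMS quota remains for other workloads. This Python sketch is illustrative; the quota value is a hypothetical input, not a real account limit:

```python
def kms_headroom(kms_quota_per_second, replicated_objects_per_second):
    """Subtract the ~2 KMS requests per replicated SSE-KMS object
    (one Decrypt for the source object, one Encrypt for the replica)
    from the account's KMS request rate quota."""
    used_by_replication = 2 * replicated_objects_per_second
    return kms_quota_per_second - used_by_replication

# With a hypothetical quota of 5,500 requests per second, replicating
# 1,000 SSE-KMS objects per second leaves 3,500 requests per second
# for the rest of your AWS KMS workloads.
```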

To request an increase to your AWS KMS requests per second quota, contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html). 

## Enabling S3 Replication Time Control

You can start using S3 Replication Time Control (S3 RTC) with a new or existing replication rule. You can choose to apply your replication rule to an entire bucket, or to objects with a specific prefix or tag. When you enable S3 RTC, S3 Replication metrics are also enabled on your replication rule. 

You can configure S3 RTC by using the Amazon S3 console, the Amazon S3 API, the AWS SDKs, and the AWS Command Line Interface (AWS CLI).

### Using the S3 console


For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for enabling S3 RTC in your replication configuration when the source and destination buckets are owned by the same AWS account or by different accounts.

### Using the AWS CLI


To use the AWS CLI to replicate objects with S3 RTC enabled, you create buckets, enable versioning on the buckets, create an IAM role that gives Amazon S3 permission to replicate objects, and add the replication configuration to the source bucket. The replication configuration must have S3 RTC enabled, as shown in the following example. 

For step-by-step instructions for setting up your replication configuration by using the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

The following example replication configuration enables and sets the `ReplicationTime` and `EventThreshold` values for a replication rule. Enabling and setting these values enables S3 RTC on the rule.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "DeleteMarkerReplication": {
                "Status": "Disabled"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
                "Metrics": {
                    "Status": "Enabled",
                    "EventThreshold": {
                        "Minutes": 15
                    }
                },
                "ReplicationTime": {
                    "Status": "Enabled",
                    "Time": {
                        "Minutes": 15
                    }
                }
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```

**Important**  
 `Metrics:EventThreshold:Minutes` and `ReplicationTime:Time:Minutes` can only have `15` as a valid value. 
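
A quick client-side check can catch an inconsistent threshold before you call `put-bucket-replication`. The following Python sketch (an illustrative helper, not part of the AWS SDK) enforces the constraints from the preceding note:

```python
def validate_rtc_rule(rule):
    """Verify that a replication rule enables S3 RTC consistently:
    ReplicationTime and Metrics must both be enabled, and 15 minutes
    is the only valid value for both thresholds."""
    dest = rule["Destination"]
    rt = dest.get("ReplicationTime", {})
    metrics = dest.get("Metrics", {})
    if rt.get("Status") != "Enabled" or metrics.get("Status") != "Enabled":
        raise ValueError("S3 RTC needs ReplicationTime and Metrics enabled")
    if rt["Time"]["Minutes"] != 15:
        raise ValueError("ReplicationTime:Time:Minutes must be 15")
    if metrics["EventThreshold"]["Minutes"] != 15:
        raise ValueError("Metrics:EventThreshold:Minutes must be 15")
    return True
```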

### Using the AWS SDK for Java


The following Java example adds a replication configuration with S3 Replication Time Control (S3 RTC) enabled.

```
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteMarkerReplication;
import software.amazon.awssdk.services.s3.model.Destination;
import software.amazon.awssdk.services.s3.model.Metrics;
import software.amazon.awssdk.services.s3.model.MetricsStatus;
import software.amazon.awssdk.services.s3.model.PutBucketReplicationRequest;
import software.amazon.awssdk.services.s3.model.ReplicationConfiguration;
import software.amazon.awssdk.services.s3.model.ReplicationRule;
import software.amazon.awssdk.services.s3.model.ReplicationRuleFilter;
import software.amazon.awssdk.services.s3.model.ReplicationTime;
import software.amazon.awssdk.services.s3.model.ReplicationTimeStatus;
import software.amazon.awssdk.services.s3.model.ReplicationTimeValue;

public class Main {

  public static void main(String[] args) {
    S3Client s3 = S3Client.builder()
      .region(Region.US_EAST_1)
      .credentialsProvider(() -> AwsBasicCredentials.create(
          "AWS_ACCESS_KEY_ID",
          "AWS_SECRET_ACCESS_KEY")
      )
      .build();

    ReplicationConfiguration replicationConfig = ReplicationConfiguration
      .builder()
      .rules(
          ReplicationRule
            .builder()
            .status("Enabled")
            .priority(1)
            .deleteMarkerReplication(
                DeleteMarkerReplication
                    .builder()
                    .status("Disabled")
                    .build()
            )
            .destination(
                Destination
                    .builder()
                    .bucket("destination_bucket_arn")
                    .replicationTime(
                        ReplicationTime.builder().time(
                            ReplicationTimeValue.builder().minutes(15).build()
                        ).status(
                            ReplicationTimeStatus.ENABLED
                        ).build()
                    )
                    .metrics(
                        Metrics.builder().eventThreshold(
                            ReplicationTimeValue.builder().minutes(15).build()
                        ).status(
                            MetricsStatus.ENABLED
                        ).build()
                    )
                    .build()
            )
            .filter(
                ReplicationRuleFilter
                    .builder()
                    .prefix("testtest")
                    .build()
            )
        .build())
        .role("role_arn")
        .build();

    // Put replication configuration
    PutBucketReplicationRequest putBucketReplicationRequest = PutBucketReplicationRequest
      .builder()
      .bucket("source_bucket")
      .replicationConfiguration(replicationConfig)
      .build();

    s3.putBucketReplication(putBucketReplicationRequest);
  }
}
```

# Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)

**Important**  
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS CLI and AWS SDKs. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html).

There are some special considerations when you're replicating objects that have been encrypted by using server-side encryption. Amazon S3 supports the following types of server-side encryption:
+ Server-side encryption with Amazon S3 managed keys (SSE-S3)
+ Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
+ Dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)
+ Server-side encryption with customer-provided keys (SSE-C)

For more information about server-side encryption, see [Protecting data with server-side encryption](serv-side-encryption.md).

This topic explains the permissions that you need to direct Amazon S3 to replicate objects that have been encrypted by using server-side encryption. This topic also provides additional configuration elements that you can add and example AWS Identity and Access Management (IAM) policies that grant the necessary permissions for replicating encrypted objects. 

For an example with step-by-step instructions, see [Enabling replication for encrypted objects](#replication-walkthrough-4). For information about creating a replication configuration, see [Replicating objects within and across Regions](replication.md). 

**Note**  
You can use multi-Region AWS KMS keys in Amazon S3. However, Amazon S3 currently treats multi-Region keys as though they were single-Region keys, and does not use the multi-Region features of the key. For more information, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

**Topics**
+ [

## How default bucket encryption affects replication
](#replication-default-encryption)
+ [

## Replicating objects encrypted with SSE-C
](#replicationSSEC)
+ [

## Replicating objects encrypted with SSE-S3, SSE-KMS, or DSSE-KMS
](#replications)
+ [

## Enabling replication for encrypted objects
](#replication-walkthrough-4)

## How default bucket encryption affects replication


When you enable default encryption for a replication destination bucket, the following encryption behavior applies:
+ If objects in the source bucket are not encrypted, the replica objects in the destination bucket are encrypted by using the default encryption settings of the destination bucket. As a result, the entity tags (ETags) of the source objects differ from the ETags of the replica objects. If you have applications that use ETags, you must update those applications to account for this difference.
+ If objects in the source bucket are encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), the replica objects in the destination bucket use the same type of encryption as the source objects. The default encryption settings of the destination bucket are not used.

## Replicating objects encrypted with SSE-C


By using server-side encryption with customer-provided keys (SSE-C), you can manage your own proprietary encryption keys. With SSE-C, you manage the keys while Amazon S3 manages the encryption and decryption process. You must provide an encryption key as part of your request, but you don't need to write any code to perform object encryption or decryption. When you upload an object, Amazon S3 encrypts the object by using the key that you provided. Amazon S3 then purges that key from memory. When you retrieve an object, you must provide the same encryption key as part of your request. For more information, see [Using server-side encryption with customer-provided keys (SSE-C)](ServerSideEncryptionCustomerKeys.md).

S3 Replication supports objects that are encrypted with SSE-C. You can configure SSE-C object replication in the Amazon S3 console or with the AWS SDKs in the same way that you configure replication for unencrypted objects. There are no additional SSE-C permissions beyond those that are currently required for replication. 

S3 Replication automatically replicates newly uploaded SSE-C encrypted objects if they are eligible, as specified in your S3 Replication configuration. To replicate existing objects in your buckets, use S3 Batch Replication. For more information about replicating objects, see [Setting up live replication overview](replication-how-setup.md) and [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).

There are no additional charges for replicating SSE-C objects. For details about replication pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

## Replicating objects encrypted with SSE-S3, SSE-KMS, or DSSE-KMS


By default, Amazon S3 doesn't replicate objects that are encrypted with SSE-KMS or DSSE-KMS. This section explains the additional configuration elements that you can add to direct Amazon S3 to replicate these objects. 

For an example with step-by-step instructions, see [Enabling replication for encrypted objects](#replication-walkthrough-4). For information about creating a replication configuration, see [Replicating objects within and across Regions](replication.md). 

### Specifying additional information in the replication configuration


In the replication configuration, you do the following:
+ In the `Destination` element in your replication configuration, add the ID of the symmetric AWS KMS customer managed key that you want Amazon S3 to use to encrypt object replicas, as shown in the following example replication configuration. 
+ Explicitly opt in by enabling replication of objects encrypted by using KMS keys (SSE-KMS or DSSE-KMS). To opt in, add the `SourceSelectionCriteria` element, as shown in the following example replication configuration.

 

```
<ReplicationConfiguration>
   <Rule>
      ...
      <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
           <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
      </SourceSelectionCriteria>

      <Destination>
          ...
          <EncryptionConfiguration>
             <ReplicaKmsKeyID>AWS KMS key ARN or Key Alias ARN that's in the same AWS Region as the destination bucket.</ReplicaKmsKeyID>
          </EncryptionConfiguration>
       </Destination>
      ...
   </Rule>
</ReplicationConfiguration>
```

**Important**  
The KMS key must have been created in the same AWS Region as the destination bucket. 
The KMS key *must* be valid. The `PutBucketReplication` API operation doesn't check the validity of KMS keys. If you use a KMS key that isn't valid, you will receive the HTTP `200 OK` status code in response, but replication fails.
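
Because `PutBucketReplication` doesn't validate the key, a client-side pre-check can catch the most common mistake, a key in the wrong Region. The following Python sketch only parses the key ARN (format `arn:aws:kms:region:account-id:key/key-id`); it can't detect a disabled or deleted key:

```python
def kms_key_region(kms_key_arn):
    """Return the Region segment of a KMS key ARN
    (arn:aws:kms:region:account-id:key/key-id)."""
    parts = kms_key_arn.split(":")
    if len(parts) < 6 or parts[2] != "kms":
        raise ValueError("not a KMS key ARN: " + kms_key_arn)
    return parts[3]

def replica_key_matches_destination(kms_key_arn, destination_region):
    # A mismatch here would otherwise surface only as silent
    # replication failures after a 200 OK response.
    return kms_key_region(kms_key_arn) == destination_region
```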

The following example shows a replication configuration that includes optional configuration elements. This replication configuration has one rule. The rule applies to objects with the `Tax` key prefix. Amazon S3 uses the specified AWS KMS key ID to encrypt these object replicas.

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration>
   <Role>arn:aws:iam::account-id:role/role-name</Role>
   <Rule>
      <ID>Rule-1</ID>
      <Priority>1</Priority>
      <Status>Enabled</Status>
      <DeleteMarkerReplication>
         <Status>Disabled</Status>
      </DeleteMarkerReplication>
      <Filter>
         <Prefix>Tax</Prefix>
      </Filter>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <EncryptionConfiguration>
            <ReplicaKmsKeyID>AWS KMS key ARN or Key Alias ARN that's in the same AWS Region as the destination bucket.</ReplicaKmsKeyID>
         </EncryptionConfiguration>
      </Destination>
      <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
            <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
      </SourceSelectionCriteria>
   </Rule>
</ReplicationConfiguration>
```

### Granting additional permissions for the IAM role


To replicate objects that are encrypted at rest by using SSE-S3, SSE-KMS, or DSSE-KMS, grant the following additional permissions to the AWS Identity and Access Management (IAM) role that you specify in the replication configuration. You grant these permissions by updating the permissions policy that's associated with the IAM role. 
+ **`s3:GetObjectVersionForReplication` action for source objects** – This action allows Amazon S3 to replicate both unencrypted objects and objects created with server-side encryption by using SSE-S3, SSE-KMS, or DSSE-KMS.
**Note**  
We recommend that you use the `s3:GetObjectVersionForReplication` action instead of the `s3:GetObjectVersion` action because `s3:GetObjectVersionForReplication` provides Amazon S3 with only the minimum permissions necessary for replication. In addition, the `s3:GetObjectVersion` action allows replication of unencrypted and SSE-S3-encrypted objects, but not of objects that are encrypted by using KMS keys (SSE-KMS or DSSE-KMS). 
+ **`kms:Decrypt` and `kms:Encrypt` AWS KMS actions for the KMS keys**
  + You must grant `kms:Decrypt` permissions for the AWS KMS key that's used to decrypt the source object.
  + You must grant `kms:Encrypt` permissions for the AWS KMS key that's used to encrypt the object replica.
+ **`kms:GenerateDataKey` action for replicating plaintext objects** – If you're replicating plaintext objects to a bucket with SSE-KMS or DSSE-KMS encryption enabled by default, you must include the `kms:GenerateDataKey` permission for the destination encryption context and the KMS key in the IAM policy.

**Important**  
If you use S3 Batch Replication to replicate datasets across Regions and your objects previously had their server-side encryption type updated from SSE-S3 to SSE-KMS, you might need additional permissions. You must have the `kms:Decrypt` permission for the bucket in the source Region, and both the `kms:Decrypt` and `kms:Encrypt` permissions for the bucket in the destination Region. 

We recommend that you restrict these permissions only to the destination buckets and objects by using AWS KMS condition keys. The AWS account that owns the IAM role must have permissions for the `kms:Encrypt` and `kms:Decrypt` actions for the KMS keys that are listed in the policy. If the KMS keys are owned by another AWS account, the owner of the KMS keys must grant these permissions to the AWS account that owns the IAM role. For more information about managing access to these KMS keys, see [Using IAM policies with AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html) in the *AWS Key Management Service Developer Guide*.

### S3 Bucket Keys and replication


To use replication with an S3 Bucket Key, the AWS KMS key policy for the KMS key that's used to encrypt the object replica must include the `kms:Decrypt` permission for the calling principal. The call to `kms:Decrypt` verifies the integrity of the S3 Bucket Key before using it. For more information, see [Using an S3 Bucket Key with replication](bucket-key.md#bucket-key-replication).

When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context will be the bucket's Amazon Resource Name (ARN), not the object's ARN (for example, `arn:aws:s3:::bucket_ARN`). You must update your IAM policies to use the bucket ARN for the encryption context:

```
"kms:EncryptionContext:aws:s3:arn": [
"arn:aws:s3:::bucket_ARN"
]
```

For more information, see [Encryption context (`x-amz-server-side-encryption-context`)](specifying-kms-encryption.md#s3-kms-encryption-context) (in the "Using the REST API" section) and [Changes to note before enabling an S3 Bucket Key](bucket-key.md#bucket-key-changes).
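This difference matters if you generate IAM policies programmatically. The following Python sketch (an illustrative helper, not an AWS SDK function) picks the encryption-context ARN for the condition based on whether an S3 Bucket Key is enabled:

```python
def s3_kms_encryption_context(bucket_name, bucket_key_enabled):
    """Build the kms:EncryptionContext:aws:s3:arn condition value.
    With an S3 Bucket Key, S3 uses the bucket ARN as the encryption
    context; without one, it uses the object ARN (here, all objects)."""
    bucket_arn = "arn:aws:s3:::" + bucket_name
    context_arn = bucket_arn if bucket_key_enabled else bucket_arn + "/*"
    return {"kms:EncryptionContext:aws:s3:arn": [context_arn]}
```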

### Example policies: Using SSE-S3 and SSE-KMS with replication


The following example IAM policies show statements for using SSE-S3 and SSE-KMS with replication.

**Example – Using SSE-KMS with separate destination buckets**  
The following example policy shows statements for using SSE-KMS with separate destination buckets. 

**Example – Replicating objects created with SSE-S3 and SSE-KMS**  
The following is a complete IAM policy that grants the necessary permissions to replicate unencrypted objects, objects created with SSE-S3, and objects created with SSE-KMS.

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetReplicationConfiguration",
            "s3:ListBucket"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObjectVersionForReplication",
            "s3:GetObjectVersionAcl"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ReplicateObject",
            "s3:ReplicateDelete"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/key-prefix1*"
      },
      {
         "Action":[
            "kms:Decrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
               ]
            }
         },
         "Resource":[
           "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      },
      {
         "Action":[
            "kms:Encrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket/key-prefix1*"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      }
   ]
}
```

**Example – Replicating objects with S3 Bucket Keys**  
The following is a complete IAM policy that grants the necessary permissions to replicate objects with S3 Bucket Keys.

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetReplicationConfiguration",
            "s3:ListBucket"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObjectVersionForReplication",
            "s3:GetObjectVersionAcl"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ReplicateObject",
            "s3:ReplicateDelete"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/key-prefix1*"
      },
      {
         "Action":[
            "kms:Decrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-source-bucket"
               ]
            }
         },
         "Resource":[
           "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      },
      {
         "Action":[
            "kms:Encrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      }
   ]
}
```
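Policies like the preceding ones take effect after they're attached to the replication role. As a sketch, assuming you save the policy as `replication-permissions.json` (the role, policy, and file names here are placeholders), you can attach it as an inline policy:

```
# Attach the saved policy to the replication role as an inline policy.
# Role, policy, and file names are placeholders.
aws iam put-role-policy \
    --role-name replicationRole \
    --policy-name replicationRolePolicy \
    --policy-document file://replication-permissions.json
```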

### Granting additional permissions for cross-account scenarios


In a cross-account scenario, where the source and destination buckets are owned by different AWS accounts, you can use a KMS key to encrypt object replicas. However, the KMS key owner must grant the source bucket owner permission to use the KMS key. 

**Note**  
If you need to replicate SSE-KMS data cross-account, then your replication rule must specify a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) from AWS KMS for the destination account. [AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) don't allow cross-account use, and therefore can't be used to perform cross-account replication.<a name="cross-acct-kms-key-permission"></a>

**To grant the source bucket owner permission to use the KMS key (AWS KMS console)**

1. Sign in to the AWS Management Console and open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. To view the keys in your account that you create and manage, in the navigation pane choose **Customer managed keys**.

1. Choose the KMS key.

1. Under the **General configuration** section, choose the **Key policy** tab.

1. Scroll down to **Other AWS accounts**.

1. Choose **Add other AWS accounts**. 

   The **Other AWS accounts** dialog box appears. 

1. In the dialog box, choose **Add another AWS account**. For **arn:aws:iam::**, enter the source bucket account ID.

1. Choose **Save changes**.

**To grant the source bucket owner permission to use the KMS key (AWS CLI)**
+ For information about the `put-key-policy` AWS Command Line Interface (AWS CLI) command, see [https://docs.aws.amazon.com/cli/latest/reference/kms/put-key-policy.html](https://docs.aws.amazon.com/cli/latest/reference/kms/put-key-policy.html) in the *AWS CLI Command Reference*. For information about the underlying `PutKeyPolicy` API operation, see [https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html](https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html) in the [AWS Key Management Service API Reference](https://docs.aws.amazon.com/kms/latest/APIReference/).
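As a sketch, the grant can be expressed as a key policy statement and then applied with `put-key-policy`. The account ID, key ID, and file names below are placeholders, and the statement shown is an assumption about the minimal permissions needed; merge it into your existing key policy rather than replacing the policy wholesale:

```
# Sketch of a key policy statement that lets the source bucket owner's
# account (111111111111, a placeholder) use this key for replication.
cat > cross-account-key-statement.json << 'EOF'
{
    "Sid": "AllowSourceAccountUseOfTheKey",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
    },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt"
    ],
    "Resource": "*"
}
EOF

# Validate the JSON before merging it into the key policy's Statement array:
python3 -m json.tool cross-account-key-statement.json > /dev/null && echo "valid JSON"

# After merging, apply the full key policy (key ID and file name are
# placeholders):
# aws kms put-key-policy \
#     --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
#     --policy-name default \
#     --policy file://key-policy.json
```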

### AWS KMS transaction quota considerations


When you add many new objects with AWS KMS encryption after enabling Cross-Region Replication (CRR), you might experience throttling (HTTP `503 Service Unavailable` errors). Throttling occurs when the number of AWS KMS transactions per second exceeds the current quota. For more information, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.

To request a quota increase, use Service Quotas. For more information, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html). If Service Quotas isn't supported in your Region, [open an AWS Support case](https://console.aws.amazon.com/support/home#/). 
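Before requesting an increase, you can list your account's current KMS quotas in a Region with the Service Quotas CLI. The filter on quota names containing "request rate" is an assumption; adjust it to match your Region's quota naming:

```
# List AWS KMS quotas whose names mention "request rate" in this Region:
aws service-quotas list-service-quotas \
    --service-code kms \
    --query "Quotas[?contains(QuotaName, 'request rate')].[QuotaName, Value]" \
    --output table
```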

## Enabling replication for encrypted objects


By default, Amazon S3 doesn't replicate objects that are encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS). To replicate objects encrypted with SSE-KMS or DSSE-KMS, you must modify the bucket replication configuration to tell Amazon S3 to replicate these objects. This section explains how to use the Amazon S3 console and the AWS Command Line Interface (AWS CLI) to change the bucket replication configuration to enable replication of encrypted objects.

**Note**  
When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context will be the bucket's Amazon Resource Name (ARN), not the object's ARN. You must update your IAM policies to use the bucket ARN for the encryption context. For more information, see [S3 Bucket Keys and replication](#bk-replication).

**Note**  
You can use multi-Region AWS KMS keys in Amazon S3. However, Amazon S3 currently treats multi-Region keys as though they were single-Region keys and doesn't use the multi-Region features of the key. For more information, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

### Using the S3 console


For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for setting a replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

### Using the AWS CLI


To replicate encrypted objects with the AWS CLI, you do the following: 
+ Create source and destination buckets and enable versioning on these buckets. 
+ Create an AWS Identity and Access Management (IAM) service role that gives Amazon S3 permission to replicate objects. The IAM role's permissions include the necessary permissions to replicate the encrypted objects.
+ Add a replication configuration to the source bucket. The replication configuration provides information related to replicating objects that are encrypted by using KMS keys.
+ Add encrypted objects to the source bucket. 
+ Test the setup to confirm that your encrypted objects are being replicated to the destination bucket.

The following procedures walk you through this process. 

**To replicate server-side encrypted objects (AWS CLI)**

To use the examples in this procedure, replace the `user input placeholders` with your own information.

1. In this example, you create both the source (*`amzn-s3-demo-source-bucket`*) and destination (*`amzn-s3-demo-destination-bucket`*) buckets in the same AWS account. You also set a credentials profile for the AWS CLI. This example uses the profile name `acctA`. 

   For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 

1. Use the following commands to create the `amzn-s3-demo-source-bucket` bucket and enable versioning on it. The following example commands create the `amzn-s3-demo-source-bucket` bucket in the US East (N. Virginia) (`us-east-1`) Region.

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Use the following commands to create the `amzn-s3-demo-destination-bucket` bucket and enable versioning on it. The following example commands create the `amzn-s3-demo-destination-bucket` bucket in the US West (Oregon) (`us-west-2`) Region. 
**Note**  
To set up a replication configuration when both `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets are in the same AWS account, you use the same profile. This example uses `acctA`. To configure replication when the buckets are owned by different AWS accounts, you specify different profiles for each. 

   

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Next, you create an IAM service role. You will specify this role in the replication configuration that you add to the `amzn-s3-demo-source-bucket` bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create a service role.
   + Attach a permissions policy to the role.

   1. To create an IAM service role, do the following:

      1. Copy the following trust policy and save it to a file called `s3-role-trust-policy-kmsobj.json` in the current directory on your local computer. This policy grants the Amazon S3 service principal permissions to assume the role so that Amazon S3 can perform tasks on your behalf.


         ```
         {
            "Version":"2012-10-17",
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```


      1. Use the following command to create the role:

         ```
         $ aws iam create-role \
         --role-name replicationRolekmsobj \
         --assume-role-policy-document file://s3-role-trust-policy-kmsobj.json  \
         --profile acctA
         ```

   1. Next, you attach a permissions policy to the role. This policy grants permissions for various Amazon S3 bucket and object actions. 

      1. Copy the following permissions policy and save it to a file named `s3-role-permissions-policykmsobj.json` in the current directory on your local computer. You will create an IAM role and attach the policy to it later. 
**Important**  
In the permissions policy, you specify the AWS KMS key IDs that will be used for encryption of the `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets. You must create two separate KMS keys for the `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets. AWS KMS keys aren't shared outside the AWS Region in which they were created. 


         ```
         {
            "Version":"2012-10-17",
            "Statement":[
               {
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration",
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Effect":"Allow",
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket",
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ReplicateTags"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLikeIfExists":{
                        "s3:x-amz-server-side-encryption":[
                           "aws:kms",
                           "AES256",
                           "aws:kms:dsse"
                        ],
                        "s3:x-amz-server-side-encryption-aws-kms-key-id":[
                           "AWS KMS key IDs (in ARN format) to use for encrypting object replicas"
                        ]
                     }
                  },
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               },
               {
                  "Action":[
                     "kms:Decrypt"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLike":{
                        "kms:ViaService":"s3.us-east-1.amazonaws.com",
                        "kms:EncryptionContext:aws:s3:arn":[
                           "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                        ]
                     }
                  },
                  "Resource":[
                     "arn:aws:kms:us-east-1:111122223333:key/key-id" 
                  ]
               },
               {
                  "Action":[
                     "kms:Encrypt"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLike":{
                        "kms:ViaService":"s3.us-west-2.amazonaws.com",
                        "kms:EncryptionContext:aws:s3:arn":[
                           "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
                        ]
                     }
                  },
                  "Resource":[
                     "arn:aws:kms:us-west-2:111122223333:key/key-id" 
                  ]
               }
            ]
         }
         ```


      1. Create a policy and attach it to the role.

         ```
         $ aws iam put-role-policy \
         --role-name replicationRolekmsobj \
         --policy-document file://s3-role-permissions-policykmsobj.json \
         --policy-name replicationRolechangeownerPolicy \
         --profile acctA
         ```

1. Next, add the following replication configuration to the `amzn-s3-demo-source-bucket` bucket. It tells Amazon S3 to replicate objects with the `Tax/` prefix to the `amzn-s3-demo-destination-bucket` bucket. 
**Important**  
In the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if you have the `iam:PassRole` permission. The profile that you specify in the CLI command must have this permission. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

   ```
    <ReplicationConfiguration>
     <Role>IAM-Role-ARN</Role>
     <Rule>
       <Priority>1</Priority>
       <DeleteMarkerReplication>
          <Status>Disabled</Status>
       </DeleteMarkerReplication>
       <Filter>
          <Prefix>Tax</Prefix>
       </Filter>
       <Status>Enabled</Status>
       <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
           <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
       </SourceSelectionCriteria>
       <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <EncryptionConfiguration>
           <ReplicaKmsKeyID>AWS KMS key IDs to use for encrypting object replicas</ReplicaKmsKeyID>
         </EncryptionConfiguration>
       </Destination>
     </Rule>
   </ReplicationConfiguration>
   ```

   To add a replication configuration to the `amzn-s3-demo-source-bucket` bucket, do the following:

   1. The AWS CLI requires you to specify the replication configuration as JSON. Save the following JSON in a file (`replication.json`) in the current directory on your local computer. 

      ```
      {
         "Role":"IAM-Role-ARN",
         "Rules":[
            {
               "Status":"Enabled",
               "Priority":1,
               "DeleteMarkerReplication":{
                  "Status":"Disabled"
               },
               "Filter":{
                  "Prefix":"Tax"
               },
               "Destination":{
                  "Bucket":"arn:aws:s3:::amzn-s3-demo-destination-bucket",
                  "EncryptionConfiguration":{
                     "ReplicaKmsKeyID":"AWS KMS key IDs (in ARN format) to use for encrypting object replicas"
                  }
               },
               "SourceSelectionCriteria":{
                  "SseKmsEncryptedObjects":{
                     "Status":"Enabled"
                  }
               }
            }
         ]
      }
      ```

   1. Edit the JSON to provide values for the `amzn-s3-demo-destination-bucket` bucket, `AWS KMS key IDs (in ARN format)`, and `IAM-role-ARN`. Save the changes.

   1. Use the following command to add the replication configuration to your `amzn-s3-demo-source-bucket` bucket. Be sure to provide the `amzn-s3-demo-source-bucket` bucket name.

      ```
      $ aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

1. Test the configuration to verify that encrypted objects are replicated. In the Amazon S3 console, do the following:

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. In the `amzn-s3-demo-source-bucket` bucket, create a folder named `Tax`. 

   1. Add sample objects to the folder. Be sure to choose the encryption option and specify your KMS key to encrypt the objects. 

   1. Verify that the `amzn-s3-demo-destination-bucket` bucket contains the object replicas and that they are encrypted by using the KMS key that you specified in the configuration. For more information, see [Getting replication status information](replication-status.md).
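Alternatively, you can check an object's replication status from the AWS CLI. A status of `COMPLETED` indicates that the object replicated successfully. The key name here is a placeholder:

```
# Check the replication status of an object in the source bucket.
# Key name is a placeholder; a status of COMPLETED means the object
# was replicated to the destination bucket.
aws s3api head-object \
    --bucket amzn-s3-demo-source-bucket \
    --key Tax/example-object.pdf \
    --query ReplicationStatus \
    --profile acctA
```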

### Using the AWS SDKs


For a code example that shows how to add a replication configuration, see [Using the AWS SDKs](replication-walkthrough1.md#replication-ex1-sdk). You must modify the replication configuration appropriately. 

 

# Replicating metadata changes with replica modification sync

Amazon S3 replica modification sync can help you keep object metadata such as tags, access control lists (ACLs), and Object Lock settings replicated between replicas and source objects. By default, Amazon S3 replicates metadata from the source objects to the replicas only. When replica modification sync is enabled, Amazon S3 replicates metadata changes made to the replica copies back to the source object, making the replication bidirectional (two-way replication).

## Enabling replica modification sync


You can use Amazon S3 replica modification sync with new or existing replication rules. You can apply it to an entire bucket or to objects that have a specific prefix.

To enable replica modification sync by using the Amazon S3 console, see [Examples for configuring live replication](replication-example-walkthroughs.md). This topic provides instructions for enabling replica modification sync in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable replica modification sync by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the bucket containing the replicas with `ReplicaModifications` enabled. To set up two-way replication, create a replication rule from the source bucket (`amzn-s3-demo-source-bucket`) to the bucket containing the replicas (`amzn-s3-demo-destination-bucket`). Then, create a second replication rule from the bucket containing the replicas (`amzn-s3-demo-destination-bucket`) to the source bucket (`amzn-s3-demo-source-bucket`). The source and destination buckets can be in the same or different AWS Regions.

**Note**  
You must enable replica modification sync on both the source and destination buckets to replicate replica metadata changes like object access control lists (ACLs), object tags, or Object Lock settings on the replicated objects. Like all replication rules, you can apply these rules to the entire bucket or to a subset of objects filtered by prefix or object tags.

In the following example configuration, Amazon S3 replicates metadata changes under the prefix `Tax` to the bucket `amzn-s3-demo-source-bucket`, which contains the source objects.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "SourceSelectionCriteria": {
                "ReplicaModifications":{
                    "Status": "Enabled"
                }
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-source-bucket"
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```
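Assuming the preceding configuration is saved as `replica-sync.json` (a placeholder file name) with a valid `Role` ARN filled in, you can apply it to the bucket that contains the replicas:

```
# Apply the replica modification sync rule to the bucket that holds the
# replicas (file name is a placeholder):
aws s3api put-bucket-replication \
    --bucket amzn-s3-demo-destination-bucket \
    --replication-configuration file://replica-sync.json
```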

For full instructions on creating replication rules by using the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

# Replicating delete markers between buckets

By default, when S3 Replication is enabled and an object is deleted in the source bucket, Amazon S3 adds a delete marker in the source bucket only. This action helps protect data in the destination buckets from accidental or malicious deletions. If you have *delete marker replication* enabled, these markers are copied to the destination buckets, and Amazon S3 behaves as if the object was deleted in both the source and destination buckets. For more information about how delete markers work, see [Working with delete markers](DeleteMarker.md).

**Note**  
Delete marker replication isn't supported for tag-based replication rules. Delete marker replication also doesn't adhere to the 15-minute service-level agreement (SLA) that's granted when you're using S3 Replication Time Control (S3 RTC).
If you're not using the latest replication configuration XML version, delete operations affect replication differently. For more information, see [How delete operations affect replication](replication-what-is-isnot-replicated.md#replication-delete-op).
If you enable delete marker replication and your source bucket has an S3 Lifecycle expiration rule, the delete markers added by the S3 Lifecycle expiration rule won't be replicated to the destination bucket.

## Enabling delete marker replication


You can start using delete marker replication with a new or existing replication rule. You can apply delete marker replication to an entire bucket or to objects that have a specific prefix.

To enable delete marker replication by using the Amazon S3 console, see [Using the S3 console](replication-walkthrough1.md#enable-replication). This topic provides instructions for enabling delete marker replication in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable delete marker replication by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the source bucket with `DeleteMarkerReplication` enabled, as shown in the following example configuration. 

In the following example replication configuration, delete markers are replicated to the destination bucket `amzn-s3-demo-destination-bucket` for objects under the prefix `Tax`.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "DeleteMarkerReplication": {
                "Status": "Enabled"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```
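After applying a configuration like the one above, you can confirm that delete marker replication is active by querying the bucket's replication configuration:

```
# Confirm that delete marker replication is enabled in the bucket's
# active replication configuration:
aws s3api get-bucket-replication \
    --bucket amzn-s3-demo-source-bucket \
    --query "ReplicationConfiguration.Rules[].DeleteMarkerReplication.Status"
```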

For full instructions on creating replication rules through the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

# Managing or pausing live replication


Live replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. After you set up your replication configuration, Amazon S3 replicates newly created objects and object updates from a source bucket to one or more specified destination buckets. 

You use the Amazon S3 console to add replication rules to the source bucket. Replication rules define the source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. For more information about replication, see [Replicating objects within and across Regions](replication.md).

You can manage replication rules on the **Replication** page in the Amazon S3 console. You can add, view, edit, enable, disable, or delete replication rules. You can also change the priority of your replication rules. For information about adding replication rules to a bucket, see [Using the S3 console](replication-walkthrough1.md#enable-replication).

**To manage the replication rules for a bucket by using the Amazon S3 console**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**. 

1. On the **General purpose buckets** tab, choose the name of the bucket that you want.

1. Choose the **Management** tab, and then scroll down to **Replication rules**.

1. You can change your replication rules in the following ways:
   + To enable or disable a replication rule, choose the option button to the left of the rule. On the **Actions** menu, choose **Enable rule** or **Disable rule**. You can also disable, enable, or delete all the rules in the bucket from the **Actions** menu.
**Note**  
If you disable a replication rule and then later re-enable the rule, any new or changed objects that weren't replicated while the rule was disabled are *not* automatically replicated when the rule is re-enabled. To replicate those objects, you must use S3 Batch Replication. For more information, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).
   + To change the priority of a rule, choose the option button to the left of the rule, and then choose **Edit rule**.

     You set rule priorities to avoid conflicts caused by objects that are included in the scope of more than one rule. In the case of overlapping rules, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority. For more information about rule priority, see [Replication configuration file elements](replication-add-config.md).

## Pausing or stopping replication


To temporarily pause replication and have it automatically resume later, you can use the `aws:s3:bucket-pause-replication` action in AWS Fault Injection Service. For more information, see [https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#bucket-pause-replication](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#bucket-pause-replication) and [Pause S3 Replication](https://docs.aws.amazon.com/fis/latest/userguide/cross-region-scenario.html#cross-region-scenario-actions-pause-s3-replication) in the *AWS Fault Injection Service User Guide*.

To stop replication in Amazon S3, we recommend disabling your replication rules. If you disable a replication rule and then later re-enable the rule, any new or changed objects that weren't replicated while the rule was disabled are *not* automatically replicated when the rule is re-enabled. To replicate those objects, you must use S3 Batch Replication. For more information, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).

Replication will also stop if you remove the AWS Identity and Access Management (IAM) role, the AWS Key Management Service (AWS KMS) permissions, or the bucket policy permissions that grant Amazon S3 the required permissions. However, we don't recommend these approaches because they cause replication to fail. Amazon S3 reports the replication status for affected objects as `FAILED`. If permissions are later restored, objects marked as `FAILED` are *not* automatically replicated. To replicate those objects, you must use S3 Batch Replication.
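If you manage replication from the AWS CLI, disabling rules follows the same download, edit, and upload pattern used elsewhere in this guide. The following is a sketch that disables every rule in a bucket's configuration; it assumes `python3` is available for the in-place edit, and the bucket name is a placeholder:

```
# Download the current replication configuration:
aws s3api get-bucket-replication \
    --bucket amzn-s3-demo-source-bucket \
    --query ReplicationConfiguration > replication.json

# Set every rule's Status to Disabled:
python3 - << 'EOF'
import json

with open("replication.json") as f:
    config = json.load(f)

for rule in config["Rules"]:
    rule["Status"] = "Disabled"

with open("replication.json", "w") as f:
    json.dump(config, f, indent=4)
EOF

# Upload the modified configuration; replication pauses until the rules
# are re-enabled:
aws s3api put-bucket-replication \
    --bucket amzn-s3-demo-source-bucket \
    --replication-configuration file://replication.json
```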

# Replicating existing objects with Batch Replication

S3 Batch Replication differs from live replication, which continuously and automatically replicates new objects across Amazon S3 buckets. Instead, S3 Batch Replication replicates existing objects on demand. You can use S3 Batch Replication to replicate the following types of objects:
+ Objects that existed before a replication configuration was in place
+ Objects that have previously been replicated
+ Objects that have failed replication

You can replicate these objects on demand by using a Batch Operations job.

To get started with Batch Replication, you can:
+ **Initiate Batch Replication for a new replication rule or destination** – You can create a one-time Batch Replication job when you're creating the first rule in a new replication configuration or when you're adding a new destination bucket to an existing configuration through the Amazon S3 console. 
+ **Initiate Batch Replication for an existing replication configuration** – You can create a new Batch Replication job by using S3 Batch Operations through the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

When the Batch Replication job finishes, you receive a completion report. For more information about how to use this report to examine the job, see [Tracking job status and completion reports](batch-ops-job-status.md).

## S3 Batch Replication considerations


Before using S3 Batch Replication, review the following list of considerations: 
+ Your source bucket must have an existing replication configuration. To enable replication, see [Setting up live replication overview](replication-how-setup.md) and [Examples for configuring live replication](replication-example-walkthroughs.md).
+ If you have S3 Lifecycle configured for your bucket, we recommend disabling your lifecycle rules while the Batch Replication job is active. Doing so helps ensure parity between the source and destination buckets. Otherwise, these buckets could diverge, and the destination bucket won't be an exact replica of the source bucket. For example, consider the following scenario:
  + Your source bucket has multiple versions of an object and a delete marker on that object.
  + Your source and destination buckets have a lifecycle configuration to remove expired delete markers.

  In this scenario, Batch Replication might replicate the delete marker to the destination bucket before replicating the object versions. This behavior could result in your lifecycle configuration marking the delete marker as expired and the delete marker being removed from the destination bucket before the object versions are replicated.
+ The AWS Identity and Access Management (IAM) role that you specify to run the Batch Operations job must have the necessary permissions to perform the underlying Batch Replication operation. For more information about creating IAM roles, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).
+ Batch Replication requires a manifest, which can be generated by Amazon S3. The generated manifest must be stored in the same AWS Region as the source bucket. If you choose not to generate the manifest, you can supply an Amazon S3 Inventory report or CSV file that contains the objects that you want to replicate. For more information, see [Specifying a manifest for a Batch Replication job](#batch-replication-manifest). 
+ Batch Replication doesn't support re-replicating objects that were deleted from the destination bucket by a delete request that specified the object's version ID. To re-replicate these objects, you can copy the source objects in place with a Batch Copy job. Copying those objects in place creates new versions of the objects in the source bucket and automatically initiates replication to the destination bucket. Deleting and recreating the destination bucket doesn't initiate replication.

  For more information about Batch Copy, see [Examples that use Batch Operations to copy objects](batch-ops-examples-copy.md).
+ If you're using a replication rule on the source bucket, make sure to [update your replication configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-2.html) by granting the IAM role that's attached to the replication rule the proper permissions to replicate objects. This IAM role must have the necessary permissions to perform replication on both the source and destination buckets.
+ If you submit multiple Batch Replication jobs for the same bucket within a short time frame, Amazon S3 runs those jobs concurrently.
+ If you submit multiple Batch Replication jobs for two different buckets, be aware that Amazon S3 might not run all jobs concurrently. If you exceed the number of Batch Replication jobs that can run at one time on your account, Amazon S3 pauses the lower priority jobs to work on the higher priority ones. After the higher priority jobs are completed, any paused jobs become active again.
+ Batch Replication isn't supported for objects that are stored in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes.
+ To batch replicate S3 Intelligent-Tiering objects that are stored in the Archive Access or Deep Archive Access storage tiers, you must first initiate a [restore](https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-managing.html#restore-data-from-int-tier-archive) request and wait until the objects are moved to the Frequent Access tier. 
+ A single Batch Replication job can support a manifest with up to 20 billion objects.
+ If you use S3 Batch Replication to replicate datasets across Regions and your objects previously had their server-side encryption type updated from SSE-S3 to SSE-KMS, you might need additional AWS KMS permissions. You must have the `kms:Decrypt` permission for the KMS key in the source Region, and both the `kms:Decrypt` and `kms:Encrypt` permissions for the KMS key in the destination Region. For more information, see [Replicating encrypted objects](replication-config-for-kms-objects.md).
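
As a sketch, the additional AWS KMS permissions described in the last consideration might look like the following IAM policy statements. The key ARNs, Regions, and account ID are placeholders; replace them with the ARNs of the KMS keys that encrypt objects in your source and destination Regions.

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "kms:Decrypt"
         ],
         "Resource": "arn:aws:kms:us-east-1:111122223333:key/source-region-key-id"
      },
      {
         "Effect": "Allow",
         "Action": [
            "kms:Decrypt",
            "kms:Encrypt"
         ],
         "Resource": "arn:aws:kms:us-west-2:111122223333:key/destination-region-key-id"
      }
   ]
}
```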

## Specifying a manifest for a Batch Replication job


A manifest is an Amazon S3 object that contains the object keys that you want Amazon S3 to act upon. If you want to create a Batch Replication job, you must supply either a user-generated manifest or have Amazon S3 generate a manifest based on your replication configuration.

If you supply a user-generated manifest, it must be in the form of an Amazon S3 Inventory report or a CSV file. If the objects in your manifest are in a versioned bucket, you must specify the version IDs for the objects. Only the object with the version ID that's specified in the manifest will be replicated. To learn more about specifying a manifest, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
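
For example, a user-supplied CSV manifest for a versioned bucket might look like the following. Each row lists the bucket name, object key, and version ID; the bucket name, keys, and version IDs shown here are illustrative.

```
amzn-s3-demo-source-bucket,object-key-1.pdf,PZ9ibn9D5lP6p298B7S9_ceqx1n5EJ0p
amzn-s3-demo-source-bucket,object-key-2.pdf,YY_ouuAJByNW1LRBfFMfxMge7XQWxMBF
amzn-s3-demo-source-bucket,photos/image.jpg,jKRKLpWWQ12eFc3R8sLOrbXG9QH6Nz5c
```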

If you choose to have Amazon S3 generate a manifest file on your behalf, the objects listed use the same source bucket, prefix, and tags as your replication configurations on the source bucket. With a generated manifest, Amazon S3 replicates all eligible versions of your objects.

**Note**  
If you choose to have Amazon S3 generate the manifest, the manifest must be stored in the same AWS Region as the source bucket.

## Filters for a Batch Replication job


When creating your Batch Replication job, you can optionally specify additional filters, such as the object creation date and replication status, to reduce the scope of the job.

You can filter which objects to replicate based on their replication status by providing one or more of the following `ObjectReplicationStatuses` values:
+ `"NONE"` – Indicates that Amazon S3 has never attempted to replicate the object before.
+ `"FAILED"` – Indicates that Amazon S3 has attempted, but failed, to replicate the object before.
+ `"COMPLETED"` – Indicates that Amazon S3 has successfully replicated the object before.
+ `"REPLICA"` – Indicates that this object is a replica that Amazon S3 has replicated from another source bucket.

For more information about replication statuses, see [Getting replication status information](replication-status.md).

If you don't filter your Batch Replication job, Batch Operations attempts to replicate all objects (no matter their `ObjectReplicationStatus`) in your manifest that match the rules in your replication configuration, except for certain objects that aren't replicated by default. For more information, see [What isn't replicated with replication configurations?](replication-what-is-isnot-replicated.md#replication-what-is-not-replicated).

Depending on your goal, you might set `ObjectReplicationStatuses` to one or more of the following values:
+ To replicate only existing objects that have never been replicated, only include `"NONE"`.
+ To retry replicating only objects that previously failed to replicate, only include `"FAILED"`.
+ To both replicate existing objects and retry replicating objects that previously failed to replicate, include both `"NONE"` and `"FAILED"`.
+ To backfill a destination bucket with objects that have been replicated to another destination, include `"COMPLETED"`.
+ To re-replicate objects that are themselves replicas from another source bucket, include `"REPLICA"`.
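
In a `create-job` request that uses an S3 generated manifest, these values appear in the `Filter` element of the `S3JobManifestGenerator` configuration. The following fragment is a sketch that selects existing objects that have never been replicated together with objects that previously failed to replicate:

```
"Filter": {
   "EligibleForReplication": true,
   "ObjectReplicationStatuses": ["NONE", "FAILED"]
}
```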

## Batch Replication completion report
Batch Replication completion report

When you create a Batch Replication job, you can request a CSV completion report. This report shows the objects, replication success or failure codes, outputs, and descriptions. For more information about job tracking and completion reports, see [Completion reports](batch-ops-job-status.md#batch-ops-completion-report). 

For a list of replication failure codes and descriptions, see [Amazon S3 replication failure reasons](replication-metrics-events.md#replication-failure-codes).

For information about troubleshooting Batch Replication, see [Batch Replication errors](replication-troubleshoot.md#troubleshoot-batch-replication-errors).

## Getting started with Batch Replication


To learn more about how to use Batch Replication, see [Tutorial: Replicating existing objects in your Amazon S3 buckets with S3 Batch Replication](https://aws.amazon.com/getting-started/hands-on/replicate-existing-objects-with-amazon-s3-batch-replication/).

# Configuring an IAM role for S3 Batch Replication
Configuring IAM role and policy

Because Amazon S3 Batch Replication is a type of Batch Operations job, you must create an AWS Identity and Access Management (IAM) role to grant Batch Operations permissions to perform actions on your behalf. You also must attach a Batch Replication IAM policy to the Batch Operations IAM role. 

Use the following procedures to create a policy and an IAM role that give Batch Operations permission to initiate a Batch Replication job.

**To create a policy for Batch Replication**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Under **Access management**, choose **Policies**.

1. Choose **Create policy**.

1. On the **Specify permissions** page, choose **JSON**.

1. Insert one of the following policies, depending on whether your manifest is generated by Amazon S3 or whether you are supplying your own manifest. For more information about manifests, see [Specifying a manifest for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-manifest). 

   Before using these policies, replace the `user input placeholders` in the following policies with the names of your replication source bucket, manifest bucket, and completion report bucket. 
**Note**  
Your IAM role for Batch Replication needs different permissions, depending on whether you are generating a manifest or supplying one, so make sure that you choose the appropriate policy from the following examples.

**Policy if using and storing an Amazon S3 generated manifest**

------
#### [ JSON ]

****  

   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
         {
            "Action": [
               "s3:InitiateReplication"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
            ]
         },
         {
            "Action": [
               "s3:GetReplicationConfiguration",
               "s3:PutInventoryConfiguration"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-source-bucket"
            ]
         },
         {
            "Action": [
               "s3:GetObject",
               "s3:GetObjectVersion"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
            ]
         },
         {
            "Effect": "Allow",
            "Action": [
               "s3:PutObject"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-completion-report-bucket/*",
               "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"    
            ]
         }
      ]
   }
   ```

------

**Policy if using a user-supplied manifest**

------
#### [ JSON ]

****  

   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
         {
            "Action": [
               "s3:InitiateReplication"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
            ]
         },
         {
            "Action": [
               "s3:GetObject",
               "s3:GetObjectVersion"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
            ]
         },
         {
            "Effect": "Allow",
            "Action": [
               "s3:PutObject"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-completion-report-bucket/*"    
            ]
         }
      ]
   }
   ```

------

1. Choose **Next**.

1. Specify a name for the policy, and then choose **Create policy**.

**To create an IAM role for Batch Replication**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Under **Access management**, choose **Roles**.

1. Choose **Create role**.

1. Choose **AWS service** as the type of trusted entity. In the **Use case** section, choose **S3** as the service, and **S3 Batch Operations** as the use case.

1. Choose **Next**. The **Add permissions** page appears. In the search box, search for the policy that you created in the preceding procedure. Select the checkbox next to the policy name, then choose **Next**. 

1. On the **Name, review, and create** page, specify a name for your IAM role.

1. In the **Step 1: Trust identities** section, verify that your IAM role is using the following trust policy:

------
#### [ JSON ]

****  

   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement":[
         {
            "Effect":"Allow",
            "Principal":{
               "Service":"batchoperations.s3.amazonaws.com"
            },
            "Action":"sts:AssumeRole"
         }
      ]
   }
   ```

------

1. In the **Step 2: Add permissions** section, verify that your IAM role is using the policy that you created earlier. 

1. Choose **Create role**. 

# Create a Batch Replication job for new replication rules or destinations
Batch Replication for a first replication rule or new destination

In Amazon S3, live replication doesn't replicate any objects that already existed in your source bucket before you created a replication configuration. Live replication automatically replicates only new and updated objects that are written to the bucket after the replication configuration is created. To replicate already existing objects, you can use S3 Batch Replication to replicate these objects on demand. 

When you create the first rule in a new live replication configuration or add a new destination bucket to an existing replication configuration through the Amazon S3 console, you can optionally create a Batch Replication job. You can use this Batch Replication job to replicate existing objects in the source bucket to the destination bucket. 

To use Batch Replication for an existing configuration without adding a new destination bucket, see [Create a Batch Replication job for existing replication rules](s3-batch-replication-existing-config.md).

**Prerequisites**  
Before creating your Batch Replication job, you must create a Batch Operations AWS Identity and Access Management (IAM) role to grant Amazon S3 permissions to perform actions on your behalf. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

## Using Batch Replication for a new replication rule or destination through the Amazon S3 console


When you create the first rule in a new replication configuration or add a new destination bucket to an existing configuration through the Amazon S3 console, you can choose to create a Batch Replication job to replicate existing objects in the source bucket.

**To create a Batch Replication job when creating or updating a replication configuration**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**. 

1. In the **General purpose buckets** list, choose the name of the bucket that contains the objects that you want to replicate.

1. To create a new replication rule or edit an existing rule, choose the **Management** tab, and scroll down to **Replication rules**:
   + To create a new replication rule, choose **Create replication rule**. For examples of how to set up a basic replication rule, see [Examples for configuring live replication](replication-example-walkthroughs.md).
   + To edit an existing replication rule, select the option button next to the rule name, and then choose **Edit rule**.

1. Create your new replication rule or edit the destination for your existing replication rule, and choose **Save**.

   After you create the first rule in a new replication configuration or edit an existing configuration to add a new destination, a **Replicate existing objects?** dialog appears, giving you the option to create a Batch Replication job.

1. If you want to create and run this job now, choose **Yes, replicate existing objects**.

   If you want to create a Batch Replication job at a later time, choose **No, do not replicate existing objects**.

1. If you chose **Yes, replicate existing objects**, the **Create Batch Operations job** page appears. The S3 Batch Replication job has the following settings:   
**Job run options**  
If you want the S3 Batch Replication job to run immediately, choose **Automatically run the job when it's ready**. If you want to run the job at a later time, choose **Wait to run the job when it's ready**.  
If you choose **Automatically run the job when it's ready**, you won't be able to create and save a Batch Operations manifest. To save the Batch Operations manifest, choose **Wait to run the job when it's ready**.  
**Batch Operations manifest**  
If you chose **Wait to run the job when it's ready**, the **Batch Operations manifest** section appears. The manifest is a list of all of the objects that you want to run the specified action on. You can choose to save the manifest. Similar to S3 Inventory files, the manifest will be saved as a CSV file and stored in a bucket. To learn more about Batch Operations manifests, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).  
**Completion report**  
S3 Batch Operations executes one task for each object specified in the manifest. Completion reports provide an easy way to view the results of your tasks in a consolidated format with no additional setup required. You can request a completion report for all tasks or only for failed tasks. To learn more about completion reports, see [Completion reports](batch-ops-job-status.md#batch-ops-completion-report).  
**Permissions**  
One of the most common causes of replication failures is insufficient permissions in the provided AWS Identity and Access Management (IAM) role. For information about creating this role, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md). Make sure that you create or choose an IAM role that has the required permissions for Batch Replication. 

1. Choose **Save**.

# Create a Batch Replication job for existing replication rules
Batch Replication for existing replication rules

In Amazon S3, live replication doesn't replicate any objects that already existed in your source bucket before you created a replication configuration. Live replication automatically replicates only new and updated objects that are written to the bucket after the replication configuration is created. To replicate already existing objects, you can use S3 Batch Replication to replicate these objects on demand. 

You can configure S3 Batch Replication for an existing replication configuration by using the AWS SDKs, AWS Command Line Interface (AWS CLI), or the Amazon S3 console. For an overview of Batch Replication, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).

When the Batch Replication job finishes, you receive a completion report. For more information about how to use the report to examine the job, see [Tracking job status and completion reports](batch-ops-job-status.md).

**Prerequisites**  
Before creating your Batch Replication job, you must create a Batch Operations AWS Identity and Access Management (IAM) role to grant Amazon S3 permissions to perform actions on your behalf. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

## Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Batch Operations**.

1. Choose **Create job**.

1. Verify that the **AWS Region** section shows the Region where you want to create your job. 

1. In the **Manifest** section, specify the manifest format that you want to use. The manifest is a list of all of the objects that you want to run the specified action on. To learn more about Batch Operations manifests, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
   + If you have a manifest prepared, choose **S3 inventory report (manifest.json)** or **CSV**. If your manifest is in a versioned bucket, you can specify the version ID for the manifest. If you don't specify a version ID, Batch Operations uses the current version of your manifest. For more information about creating a manifest, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
**Note**  
If the objects in your manifest are in a versioned bucket, you must specify the version IDs for the objects. For more information, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
   + To create a manifest based on your replication configuration, choose **Create manifest using S3 Replication configuration**. Then choose the source bucket of your replication configuration.

1. (Optional) If you chose **Create manifest using S3 Replication configuration**, you can include additional filters, such as the object creation date and replication status. For examples of how to filter by replication status, see [Specifying a manifest for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-manifest). 

1. (Optional) If you chose **Create manifest using S3 Replication configuration**, you can save the generated manifest. To save this manifest, select **Save Batch Operations manifest**. Then specify the destination bucket for the manifest and choose whether to encrypt the manifest. 
**Note**  
The generated manifest must be stored in the same AWS Region as the source bucket.

1. Choose **Next**.

1. On the **Operations** page, choose **Replicate**, then choose **Next**. 

1. (Optional) Provide a **Description**. 

1. Adjust the **Priority** of the job if needed. Higher numbers indicate higher priority. Amazon S3 attempts to run higher priority jobs before lower priority jobs. For more information about job priority, see [Assigning job priority](batch-ops-job-priority.md).

1. (Optional) Generate a completion report. To generate this report, select **Generate completion report**.

   If you choose to generate a completion report, you must choose whether to report **All tasks** or **Failed tasks only**, and you must provide a destination bucket for the report.

1. In the **Permissions** section, make sure that you choose an IAM role that has the required permissions for Batch Replication. One of the most common causes of replication failures is insufficient permissions in the provided IAM role. For information about creating this role, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md). 

1. (Optional) Add job tags to the Batch Replication job.

1. Choose **Next**.

1. Review your job configuration, and then choose **Create job**.

## Using the AWS CLI with an S3 manifest


The following example `create-job` command creates an S3 Batch Replication job by using an S3 generated manifest for the AWS account `111122223333`. This example replicates existing objects and objects that previously failed to replicate. For information about filtering by replication status, see [Specifying a manifest for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-manifest). 

To use this command, replace the *`user input placeholders`* with your own information. Replace the IAM role `role/batch-Replication-IAM-policy` with the IAM role that you previously created. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

```
aws s3control create-job --account-id 111122223333 \
--operation '{"S3ReplicateObject":{}}' \
--report '{"Bucket":"arn:aws:s3:::amzn-s3-demo-completion-report-bucket",
"Prefix":"batch-replication-report",
"Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks"}' \
--manifest-generator '{"S3JobManifestGenerator": {"ExpectedBucketOwner": "111122223333",
"SourceBucket": "arn:aws:s3:::amzn-s3-demo-source-bucket",
"EnableManifestOutput": false, "Filter": {"EligibleForReplication": true,
"ObjectReplicationStatuses": ["NONE","FAILED"]}}}' \
--priority 1 \
--role-arn arn:aws:iam::111122223333:role/batch-Replication-IAM-policy \
--no-confirmation-required \
--region source-bucket-region
```

**Note**  
You must initiate the job from the same AWS Region as the replication source bucket. 

After you have successfully initiated a Batch Replication job, you receive the job ID as the response. You can monitor this job by using the following `describe-job` command. To use this command, replace the *`user input placeholders`* with your own information. 

```
aws s3control describe-job --account-id 111122223333 --job-id job-id --region source-bucket-region
```

## Using the AWS CLI with a user-provided manifest


The following example creates an S3 Batch Replication job by using a user-defined manifest for AWS account `111122223333`. If the objects in your manifest are in a versioned bucket, you must specify the version IDs for the objects. Only the object with the version ID specified in the manifest will be replicated. For more information about creating a manifest, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest). 

To use this command, replace the *`user input placeholders`* with your own information. Replace the IAM role `role/batch-Replication-IAM-policy` with the IAM role that you previously created. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

```
aws s3control create-job --account-id 111122223333 \
--operation '{"S3ReplicateObject":{}}' \
--report '{"Bucket":"arn:aws:s3:::amzn-s3-demo-completion-report-bucket",
"Prefix":"batch-replication-report",
"Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks"}' \
--manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820",
"Fields":["Bucket","Key","VersionId"]},
"Location":{"ObjectArn":"arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest.csv",
"ETag":"Manifest Etag"}}' \
--priority 1 \
--role-arn arn:aws:iam::111122223333:role/batch-Replication-IAM-policy \
--no-confirmation-required \
--region source-bucket-region
```

**Note**  
You must initiate the job from the same AWS Region as the replication source bucket. 

After you have successfully initiated a Batch Replication job, you receive the job ID as the response. You can monitor this job by using the following `describe-job` command.

```
aws s3control describe-job --account-id 111122223333 --job-id job-id --region source-bucket-region
```

# Troubleshooting replication
Troubleshooting replication

This section lists troubleshooting tips for Amazon S3 Replication and information about S3 Batch Replication errors.

**Topics**
+ [

## Troubleshooting tips for S3 Replication
](#troubleshoot-replication-tips)
+ [

## Batch Replication errors
](#troubleshoot-batch-replication-errors)

## Troubleshooting tips for S3 Replication


If object replicas don't appear in the destination bucket after you configure replication, use these troubleshooting tips to identify and fix issues.
+ The majority of objects replicate within 15 minutes. The time that it takes Amazon S3 to replicate an object depends on several factors, including the source and destination Region pair, and the size of the object. For large objects, replication can take up to several hours. For visibility into replication times, you can use [S3 Replication Time Control (S3 RTC)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html#enabling-replication-time-control).

  If the object that is being replicated is large, wait a while before checking to see whether it appears in the destination. You can also check the replication status of the source object. If the object replication status is `PENDING`, Amazon S3 hasn't completed the replication. If the object replication status is `FAILED`, check the replication configuration that's set on the source bucket. 

  Additionally, to receive information about failures during replication, you can set up Amazon S3 Event Notifications replication to receive failure events. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-metrics.html).
+ To check the replication status of an object, you can call the `HeadObject` API operation. The `HeadObject` API operation returns the `PENDING`, `COMPLETED`, or `FAILED` replication status of an object. In a response to a `HeadObject` API call, the replication status is returned in the `x-amz-replication-status` header.
**Note**  
To run `HeadObject`, you must have read access to the object that you're requesting. A `HEAD` request has the same options as a `GET` request, without performing a `GET` operation. For example, to run a `HeadObject` request by using the AWS Command Line Interface (AWS CLI), you can run the following command. Replace the `user input placeholders` with your own information.   

  ```
  aws s3api head-object --bucket amzn-s3-demo-source-bucket --key index.html
  ```
+ If `HeadObject` returns objects with a `FAILED` replication status, you can use S3 Batch Replication to replicate those failed objects. For more information, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md). Alternatively, you can re-upload the failed objects to the source bucket, which will initiate replication for the new objects. 
+ In the replication configuration on the source bucket, verify the following:
  + The Amazon Resource Name (ARN) of the destination bucket is correct.
  + The key name prefix is correct. For example, if you set the configuration to replicate objects with the prefix `Tax`, then only objects with key names such as `Tax/document1` or `Tax/document2` are replicated. An object with the key name `document3` is not replicated.
  + The status of the replication rule is `Enabled`.
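
  To check these values, you can retrieve the active configuration with `get-bucket-replication`. The following sketch shows the call (hypothetical bucket name; requires credentials) and a local check of a saved sample response:

  ```
  # Retrieve the configuration that's set on the source bucket:
  # aws s3api get-bucket-replication --bucket amzn-s3-demo-source-bucket > /tmp/repl.json

  # A saved sample response, checked locally for the destination ARN and rule status:
  printf '%s' '{"ReplicationConfiguration": {"Role": "arn:aws:iam::123456789101:role/s3-replication-role", "Rules": [{"Status": "Enabled", "Filter": {"Prefix": "Tax"}, "Destination": {"Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"}}]}}' > /tmp/repl.json
  python3 -c 'import json; r = json.load(open("/tmp/repl.json"))["ReplicationConfiguration"]["Rules"][0]; print("OK" if r["Status"] == "Enabled" and r["Destination"]["Bucket"].startswith("arn:aws:s3:::") else "check the rule")'
  # prints "OK"
  ```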
+ Verify that versioning hasn't been suspended on any bucket in the replication configuration. Both the source and destination buckets must have versioning enabled.
+ If a replication rule is set to **Change object ownership to the destination bucket owner**, then the AWS Identity and Access Management (IAM) role that's used for replication must have the `s3:ObjectOwnerOverrideToBucketOwner` permission. This permission is granted on the resource (in this case, the destination bucket). For example, the following `Resource` statement shows how to grant this permission on the destination bucket:

  ```
  {
    "Effect":"Allow",
    "Action":[
      "s3:ObjectOwnerOverrideToBucketOwner"
    ],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
  }
  ```
+ If the destination bucket is owned by another account, the owner of the destination bucket must also grant the `s3:ObjectOwnerOverrideToBucketOwner` permission to the source bucket owner through the destination bucket policy. To use the following example bucket policy, replace the `user input placeholders` with your own information: 

  ```
  {
    "Version": "2012-10-17",
    "Id": "Policy1644945280205",
    "Statement": [
      {
        "Sid": "Stmt1644945277847",
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::123456789101:role/s3-replication-role"
        },
        "Action": [
          "s3:ReplicateObject",
          "s3:ReplicateTags",
          "s3:ObjectOwnerOverrideToBucketOwner"
        ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
      }
    ]
  }
  ```

**Note**  
If the destination bucket's object ownership settings include **Bucket owner enforced**, then you don't need to update the setting to **Change object ownership to the destination bucket owner** in the replication rule. The object ownership change will occur by default. For more information about changing replica ownership, see [Changing the replica owner](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-change-owner.html).
+ If you're setting the replication configuration in a cross-account scenario, where the source and destination buckets are owned by different AWS accounts, the destination buckets can't be configured as a Requester Pays bucket. For more information, see [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md).
+ If a bucket's source objects are encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), then the replication rule must be configured to include AWS KMS-encrypted objects. Make sure to select **Replicate objects encrypted with AWS KMS** under your **Encryption** settings in the Amazon S3 console. Then, select an AWS KMS key for encrypting the destination objects.
**Note**  
If the destination bucket is in a different account, specify an AWS KMS customer managed key that is owned by the destination account. Don't use the default Amazon S3 managed key (`aws/s3`). Using the default key encrypts the objects with the Amazon S3 managed key that's owned by the source account, preventing the object from being shared with another account. As a result, the destination account won't be able to access the objects in the destination bucket.

  To use an AWS KMS key that belongs to the destination account to encrypt the destination objects, the destination account must grant the `kms:GenerateDataKey` and `kms:Encrypt` permissions to the replication role in the KMS key policy. To use the following example statement in your KMS key policy, replace the `user input placeholders` with your own information:

  ```
  {    
      "Sid": "AllowS3ReplicationSourceRoleToUseTheKey",
      "Effect": "Allow",
      "Principal": {
          "AWS": "arn:aws:iam::123456789101:role/s3-replication-role"
      },
      "Action": ["kms:GenerateDataKey", "kms:Encrypt"],
      "Resource": "*"
  }
  ```

  If you use an asterisk (`*`) for the `Resource` statement in the AWS KMS key policy, the policy grants permission to use the KMS key only to the replication role. The policy doesn't allow the replication role to elevate its permissions. 

  By default, the KMS key policy grants the root user full permissions to the key. These permissions can be delegated to other users in the same account. Unless there are `Deny` statements in the source KMS key policy, using an IAM policy to grant the replication role permissions to the source KMS key is sufficient.
**Note**  
KMS key policies that restrict access to specific CIDR ranges, virtual private cloud (VPC) endpoints, or S3 access points can cause replication to fail.

  If either the source or destination KMS keys grant permissions based on the encryption context, confirm that Amazon S3 Bucket Keys are turned on for the buckets. If the buckets have S3 Bucket Keys turned on, the encryption context must be the bucket-level resource, like this:

  ```
  "kms:EncryptionContext:aws:s3:arn": [
      "arn:aws:s3:::amzn-s3-demo-source-bucket"
  ]
  "kms:EncryptionContext:aws:s3:arn": [
      "arn:aws:s3:::amzn-s3-demo-destination-bucket"
  ]
  ```

  In addition to the permissions granted by the KMS key policy, the source account must add the following minimum permissions to the replication role's IAM policy:

  ```
  {
      "Effect": "Allow",
      "Action": [
          "kms:Decrypt",
          "kms:GenerateDataKey"
      ],
      "Resource": [
          "Source-KMS-Key-ARN"
      ]
  },
  {
      "Effect": "Allow",
      "Action": [
          "kms:GenerateDataKey",
          "kms:Encrypt"
      ],
      "Resource": [
          "Destination-KMS-Key-ARN"
      ]
  }
  ```
**Important**  
If you use S3 Batch Replication to replicate datasets across Regions and your objects previously had their server-side encryption type updated from SSE-S3 to SSE-KMS, you might need additional permissions. For the bucket in the source Region, you must have the `kms:Decrypt` permission. For the bucket in the destination Region, you need both the `kms:Decrypt` and `kms:Encrypt` permissions.

  For more information about how to replicate objects that are encrypted with AWS KMS, see [Replicating encrypted objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-4.html).
+ If the destination bucket is owned by another AWS account, verify that the bucket owner has a bucket policy on the destination bucket that allows the source bucket owner to replicate objects. For an example, see [Configuring replication for buckets in different accounts](replication-walkthrough-2.md).
+ To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication).
+ If your objects still aren't replicating after you've validated the permissions, check for any explicit `Deny` statements in the following locations:
  + `Deny` statements in the source or destination bucket policies. Replication fails if the bucket policy denies access to the replication role for any of the following actions:

    Source bucket:

    ```
    "s3:GetReplicationConfiguration",
    "s3:ListBucket",
    "s3:GetObjectVersionForReplication",
    "s3:GetObjectVersionAcl",
    "s3:GetObjectVersionTagging"
    ```

    Destination buckets:

    ```
    "s3:ReplicateObject",
    "s3:ReplicateDelete",
    "s3:ReplicateTags"
    ```
  + `Deny` statements or permissions boundaries attached to the IAM role can cause replication to fail.
  + `Deny` statements in AWS Organizations service control policies (SCPs) that are attached to either the source or destination accounts can cause replication to fail.
  + `Deny` statements in AWS Organizations resource control policies (RCPs) that are attached to either the source or destination buckets can cause replication to fail.
+ If an object replica doesn't appear in the destination bucket, the following issues might have prevented replication:
  + Amazon S3 doesn't replicate an object in a source bucket that is a replica created by another replication configuration. For example, if you set a replication configuration from bucket A to bucket B to bucket C, Amazon S3 doesn't replicate object replicas in bucket B to bucket C.
  + A source bucket owner can grant other AWS accounts permission to upload objects. By default, the source bucket owner doesn't have permissions for the objects created by other accounts. The replication configuration replicates only the objects for which the source bucket owner has access permissions. To avoid this problem, the source bucket owner can grant other AWS accounts permissions to create objects conditionally, requiring explicit access permissions on those objects. For an example policy, see [Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-acl-2).
+ Suppose that in the replication configuration, you add a rule to replicate a subset of objects that have a specific tag. In this case, you must assign the specific tag key and value at the time the object is created in order for Amazon S3 to replicate the object. If you first create an object and then add the tag to the existing object, Amazon S3 doesn't replicate the object. 
+ Use Amazon S3 Event Notifications to notify you of instances when objects don't replicate to their destination AWS Region. Amazon S3 Event Notifications are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).

  You can also view replication failure reasons by using Amazon S3 Event Notifications. To review the list of failure reasons, see [Amazon S3 replication failure reasons](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-failure-codes.html).

## Batch Replication errors

To troubleshoot objects that aren't replicating to the destination bucket, check the different types of permissions for your buckets, replication role, and IAM role that's used to create the Batch Replication job. Also, make sure to check the Block Public Access settings and S3 Object Ownership settings for your buckets.

For additional troubleshooting tips for working with Batch Operations, see [Troubleshooting S3 Batch Operations](troubleshooting-batch-operations.md). 

If you've set up replication and objects aren't replicating, see [Why aren't my Amazon S3 objects replicating when I set up replication between my buckets?](https://repost.aws/knowledge-center/s3-troubleshoot-replication) in the AWS re:Post Knowledge Center.

While using Batch Replication, you might encounter one of these errors:
+ Manifest generation found no keys matching the filter criteria.

  This error occurs for one of the following reasons:
  + When objects in the source bucket are stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes.

    To use Batch Replication on these objects, first restore them to the S3 Standard storage class by using a **Restore** (`S3InitiateRestoreObjectOperation`) operation in a Batch Operations job. For more information, see [Restoring an archived object](restoring-objects.md) and [Restore objects (Batch Operations)](batch-ops-initiate-restore-object.md). After you've restored the objects, you can replicate them by using a Batch Replication job.
  + When the provided filter criteria don't match any valid objects in the source bucket.

    Verify and correct the filter criteria. For example, suppose that the filter in the Batch Replication rule is intended to match all objects in the source bucket with the prefix `Tax/`. If the prefix is entered incorrectly as `/Tax/`, with a slash at the beginning as well as at the end, no objects match, because Amazon S3 key names don't begin with a slash. To resolve the error, correct the prefix in the replication rule from `/Tax/` to `Tax/`.
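
    Because key names don't begin with a slash, a prefix with a leading `/` can never match. The following is a minimal local sketch of how prefix matching behaves:

    ```
    # Keys with the "Tax/" prefix match; a key without it doesn't:
    for key in Tax/document1 Tax/document2 document3; do
      case "$key" in
        Tax/*) echo "matched: $key" ;;
        *)     echo "skipped: $key" ;;
      esac
    done
    ```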
+ Batch operation status is failed with reason: The job report could not be written to your report bucket.

  This error occurs if the IAM role that's used for the Batch Operations job is unable to put the completion report into the location that was specified when you created the job. To resolve this error, check that the IAM role has the `s3:PutObject` permission for the bucket where you want to save the Batch Operations completion report. We recommend delivering the report to a bucket different from the source bucket.
+ Batch operation is completed with failures and Total failed is not 0.

  This error occurs if there are insufficient object permissions issues with the Batch Replication job that is running. If you're using a replication rule for your Batch Replication job, make sure that the IAM role that's used for replication has the proper permissions to access objects from either the source or destination bucket. You can also check the [Batch Replication completion report](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-batch-replication-batch.html#batch-replication-completion-report) to review the specific [Amazon S3 replication failure reason](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-failure-codes.html).
+ Batch job ran successfully, but the number of objects in the destination bucket is not the same as expected.

  This error occurs when there's a mismatch between the objects listed in the manifest that's supplied in the Batch Replication job and the filters that you selected when you created the job. You might also receive this message when the objects in your source bucket don't match any replication rules and aren't included in the generated manifest.

### Batch Operations failures occur after adding a new replication rule to an existing replication configuration

Batch Operations attempts to perform existing object replication for every rule in the source bucket's replication configuration. If there are problems with any of the existing replication rules, failures might occur. 

The Batch Operations job's completion report explains the job failure reasons. For a list of common errors, see [Amazon S3 replication failure reasons](replication-metrics-events.md#replication-failure-codes).

# Monitoring replication with metrics, event notifications, and statuses

You can monitor your live replication configurations and your S3 Batch Replication jobs through the following mechanisms: 
+ **S3 Replication metrics** – When you enable S3 Replication metrics, Amazon CloudWatch emits metrics that you can use to track bytes pending, operations pending, and replication latency at the replication rule level. You can view S3 Replication metrics through the Amazon S3 console and the Amazon CloudWatch console. In the Amazon S3 console, you can view these metrics in the source bucket's **Metrics** tab. For more information about S3 Replication metrics, see [Using S3 Replication metrics](repl-metrics.md). 
+ **S3 Storage Lens metrics** – In addition to S3 Replication metrics, you can use the replication-related Data Protection metrics provided by S3 Storage Lens dashboards. For example, if you use the free metrics in S3 Storage Lens, you can see metrics such as the total number of bytes that are replicated from the source bucket or the count of replicated objects from the source bucket. 

  To audit your overall replication stance, you can enable advanced metrics in S3 Storage Lens. With advanced metrics in S3 Storage Lens, you can see how many replication rules you have of various types, including the count of replication rules with a replication destination that's not valid. 

  For more information about working with replication metrics in S3 Storage Lens, see [Viewing replication metrics in S3 Storage Lens dashboards](viewing-replication-metrics-storage-lens.md).
+ **S3 Event Notifications** – S3 Event Notifications can notify you at the object level in instances when objects don't replicate to their destination AWS Region or when objects aren't replicated within certain thresholds. S3 Event Notifications provides the following replication event types: `s3:Replication:OperationFailedReplication`, `s3:Replication:OperationMissedThreshold`, `s3:Replication:OperationReplicatedAfterThreshold`, and `s3:Replication:OperationNotTracked`. 

  Amazon S3 events are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).
+ **Replication status values** – You can also retrieve the replication status of your objects. The replication status can help you determine the current state of an object that's being replicated. The replication status of a source object will return either `PENDING`, `COMPLETED`, or `FAILED`. The replication status of a replica will return `REPLICA`. 

  You can also use replication status values when you're creating S3 Batch Replication jobs. For example, you can use these status values to replicate objects that have either never been replicated or that have failed replication. 

  For more information about retrieving the replication status of your objects, see [Getting replication status information](replication-status.md). For more information about using these values with Batch Replication, see [Filters for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-filters).

**Topics**
+ [Using S3 Replication metrics](repl-metrics.md)
+ [Viewing replication metrics in S3 Storage Lens dashboards](viewing-replication-metrics-storage-lens.md)
+ [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md)
+ [Getting replication status information](replication-status.md)

# Using S3 Replication metrics


S3 Replication metrics provide detailed metrics for the replication rules in your replication configuration. With replication metrics, you can monitor minute-by-minute progress by tracking bytes pending, operations pending, operations that failed replication, and replication latency.

**Note**  
S3 Replication metrics are billed at the same rate as Amazon CloudWatch custom metrics. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).
If you're using S3 Replication Time Control, Amazon CloudWatch begins reporting replication metrics 15 minutes after you enable S3 RTC on the respective replication rule. 

S3 Replication metrics are turned on automatically when you enable S3 Replication Time Control (S3 RTC). You can also enable S3 Replication metrics independently of S3 RTC while [creating or editing a rule](replication-walkthrough1.md). S3 RTC includes other features, such as a service level agreement (SLA) and notifications for missed thresholds. For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).

When S3 Replication metrics are enabled, Amazon S3 publishes the following metrics to Amazon CloudWatch. CloudWatch metrics are delivered on a best-effort basis.


| Metric name | Metric description | Which objects does this metric apply to? | Which Region is this metric published in? | Is this metric still published if the destination bucket is deleted? | Is this metric still published if replication doesn't occur? | 
| --- | --- | --- | --- | --- | --- | 
| **Bytes Pending Replication** |  The total number of bytes of objects that are pending replication for a given replication rule.  | This metric applies only to new objects that are replicated with S3 Cross-Region Replication (S3 CRR) or S3 Same-Region Replication (S3 SRR). | This metric is published in the Region of the destination bucket. | No | Yes | 
| **Replication Latency** |  The maximum number of seconds by which the replication destination bucket is behind the source bucket for a given replication rule.  | This metric applies only to new objects that are replicated with S3 CRR or S3 SRR. | This metric is published in the Region of the destination bucket. | No | Yes | 
| **Operations Pending Replication** |  The number of operations that are pending replication for a given replication rule. This metric tracks operations related to objects, delete markers, tags, access control lists (ACLs), and S3 Object Lock.  | This metric applies only to new objects that are replicated with S3 CRR or S3 SRR. | This metric is published in the Region of the destination bucket. | No | Yes | 
| **Operations Failed Replication** |  The number of operations that failed replication for a given replication rule. This metric tracks operations related to objects, delete markers, tags, access control lists (ACLs), and Object Lock. **Operations Failed Replication** tracks S3 Replication failures aggregated at a per-minute interval. To identify the specific objects that have failed replication and their failure reasons, subscribe to the `OperationFailedReplication` event in Amazon S3 Event Notifications. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).  |  This metric applies both to new objects that are replicated with S3 CRR or S3 SRR and also to existing objects that are replicated with S3 Batch Replication.  If an S3 Batch Replication job fails to run at all, metrics aren't sent to Amazon CloudWatch. For example, your job won't run if you don't have the necessary permissions to run an S3 Batch Replication job, or if the tags or prefix in your replication configuration don't match.   | This metric is published in the Region of the source bucket. | Yes | No | 

For information about working with these metrics in CloudWatch, see [S3 Replication metrics in CloudWatch](metrics-dimensions.md#s3-cloudwatch-replication-metrics).
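
These metrics are published in the `AWS/S3` namespace, keyed by the `SourceBucket`, `DestinationBucket`, and `RuleId` dimensions. The following sketch shows how you might retrieve replication latency with the CLI (hypothetical bucket and rule names; adjust the time window, run in the Region where the metric is published, and supply credentials):

```
# aws cloudwatch get-metric-statistics \
#     --namespace AWS/S3 \
#     --metric-name ReplicationLatency \
#     --dimensions Name=SourceBucket,Value=amzn-s3-demo-source-bucket \
#                  Name=DestinationBucket,Value=amzn-s3-demo-destination-bucket \
#                  Name=RuleId,Value=replication-rule-1 \
#     --start-time 2024-09-05T00:00:00Z \
#     --end-time 2024-09-05T01:00:00Z \
#     --period 60 \
#     --statistics Maximum

# The same pattern applies to the other metric names in the table:
for metric in ReplicationLatency BytesPendingReplication OperationsPendingReplication OperationsFailedReplication; do
    echo "AWS/S3 metric: $metric"
done
```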

## Enabling S3 Replication metrics

You can start using S3 Replication metrics with a new or existing replication rule. For full instructions on creating replication rules, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). You can choose to apply your replication rule to an entire S3 bucket, or to Amazon S3 objects with a specific prefix or tag.

This topic provides instructions for enabling S3 Replication metrics in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable replication metrics by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the source bucket with `Metrics` enabled. In this example configuration, objects under the prefix `Tax` are replicated to the destination bucket `amzn-s3-demo-bucket`, and metrics are generated for those objects.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-bucket",
                "Metrics": {
                    "Status": "Enabled"
                }
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```
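
To apply a configuration like this with the CLI, save it to a file and call `put-bucket-replication`. The following sketch writes the same configuration with a hypothetical role ARN and validates it locally first; the `aws` call (shown commented) requires credentials and versioning-enabled source and destination buckets:

```
# Write the example configuration to a file (hypothetical role ARN):
cat > /tmp/replication.json <<'EOF'
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": { "Prefix": "Tax" },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-bucket",
                "Metrics": { "Status": "Enabled" }
            },
            "Priority": 1
        }
    ],
    "Role": "arn:aws:iam::123456789101:role/s3-replication-role"
}
EOF

# Confirm that the file is valid JSON before applying it:
python3 -m json.tool /tmp/replication.json > /dev/null && echo "configuration is valid JSON"

# Apply the configuration to the source bucket (requires credentials):
# aws s3api put-bucket-replication \
#     --bucket amzn-s3-demo-source-bucket \
#     --replication-configuration file:///tmp/replication.json
```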

## Viewing replication metrics


You can view S3 Replication metrics in the source general purpose bucket's **Metrics** tab in the Amazon S3 console. These Amazon CloudWatch metrics are also available in the Amazon CloudWatch console. When you enable S3 Replication metrics, Amazon CloudWatch emits metrics that you can use to track bytes pending, operations pending, and replication latency at the replication rule level. 

S3 Replication metrics are turned on automatically when you enable replication with S3 Replication Time Control (S3 RTC) by using the Amazon S3 console or the Amazon S3 REST API. You can also enable S3 Replication metrics independently of S3 RTC while [creating or editing a rule](replication-walkthrough1.md).

If you're using S3 Replication Time Control, Amazon CloudWatch begins reporting replication metrics 15 minutes after you enable S3 RTC on the respective replication rule. For more information, see [Using S3 Replication metrics](#repl-metrics).

Replication metrics track the rule IDs of the replication configuration. A replication rule ID can be specific to a prefix, a tag, or a combination of both.

 For more information about CloudWatch metrics for Amazon S3, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).

**Prerequisites**  
Create a replication rule that has S3 Replication metrics enabled. For more information, see [Enabling S3 Replication metrics](#enabling-replication-metrics).

**To view S3 Replication metrics through the source bucket's Metrics tab**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**. 

1. In the buckets list, choose the name of the source bucket that contains the objects that you want replication metrics for.

1. Choose the **Metrics** tab.

1. Under **Replication metrics**, choose the replication rules that you want to see metrics for.

1. Choose **Display charts**.

   Amazon S3 displays **Replication latency**, **Bytes pending replication**, **Operations pending replication**, and **Operations failed replication** charts for the rules that you selected.

# Viewing replication metrics in S3 Storage Lens dashboards


In addition to [S3 Replication metrics](repl-metrics.md), you can use the replication-related Data Protection metrics provided by S3 Storage Lens. S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. For more information, see [Using S3 Storage Lens to protect your data](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-data-protection.html#storage-lens-data-protection-replication-rule). 

S3 Storage Lens has two tiers of metrics: free metrics, and advanced metrics and recommendations, which you can upgrade to for an additional charge. With advanced metrics and recommendations, you can access additional metrics and features for gaining insight into your storage. For information about S3 Storage Lens pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing). 

If you use the free metrics in S3 Storage Lens, you can see metrics such as the total number of bytes that are replicated from the source bucket or the count of replicated objects from the source bucket. 

To audit your overall replication stance, you can enable advanced metrics in S3 Storage Lens. With advanced metrics in S3 Storage Lens, you can see how many replication rules you have of various types, including the count of replication rules with a replication destination that's not valid. 

For a complete list of S3 Storage Lens metrics, including which replication metrics are in each tier, see the [S3 Storage Lens metrics glossary](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_metrics_glossary.html?icmpid=docs_s3_user_guide_replication.html). 

**Prerequisites**  
Create a [live replication configuration](replication-how-setup.md) or an [S3 Batch Replication job](s3-batch-replication-batch.md). 

**To view replication metrics in Amazon S3 Storage Lens**

1. Create an S3 Storage Lens dashboard. For step-by-step instructions, see [Using the S3 console](storage_lens_creating_dashboard.md#storage_lens_console_creating).

1. (Optional) During your dashboard setup, if you want to see all S3 Storage Lens replication metrics, select **Advanced metrics and recommendations** and then select **Advanced data protection metrics**. For a complete list of metrics, see the [S3 Storage Lens metrics glossary](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_metrics_glossary.html?icmpid=docs_s3_user_guide_replication.html).

   If you enable advanced metrics and recommendations, you can gain further insights into your replication configurations. For example, you can use S3 Storage Lens replication rule count metrics to get detailed information about your buckets that are configured for replication. This information includes replication rules within and across buckets and Regions. For more information, see [Count the total number of replication rules for each bucket](storage-lens-data-protection.md#storage-lens-data-protection-replication-rule).

1. After you've created your dashboard, open the dashboard, and choose the **Buckets** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, choose **Data protection**. Then clear **Summary**.

1. To filter the **Buckets** list to display only replication metrics, choose the preferences icon (![\[The preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all data-protection metrics until only the replication metrics remain selected.

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Continue**.

# Receiving replication failure events with Amazon S3 Event Notifications

If you've enabled S3 Replication metrics on your replication configuration, you can set up Amazon S3 Event Notifications to notify you when objects don't replicate to their destination AWS Region. If you've enabled S3 Replication Time Control (S3 RTC) on your replication configuration, you can also be notified when objects don't replicate within the 15-minute S3 RTC threshold for replication. 

By using the following `Replication` event types, you can monitor the minute-by-minute progress of replication events by tracking bytes pending, operations pending, and replication latency. For more information about S3 Replication metrics, see [Using S3 Replication metrics](repl-metrics.md).
+ The `s3:Replication:OperationFailedReplication` event type notifies you when an object that was eligible for replication failed to replicate. 
+ The `s3:Replication:OperationMissedThreshold` event type notifies you when an object that was eligible for replication that uses S3 RTC exceeds the 15-minute threshold for replication.
+ The `s3:Replication:OperationReplicatedAfterThreshold` event type notifies you when an object that was eligible for replication that uses S3 RTC replicates after the 15-minute threshold.
+ The `s3:Replication:OperationNotTracked` event type notifies you when an object that was eligible for live replication (either Same-Region Replication [SRR] or Cross-Region Replication [CRR]) is no longer being tracked by replication metrics.

For full descriptions of all the supported replication event types, see [Supported event types for SQS, SNS, and Lambda](notification-how-to-event-types-and-destinations.md#supported-notification-event-types).

For a list of the failure codes captured by S3 Event Notifications, see [Amazon S3 replication failure reasons](#replication-failure-codes).

You can receive S3 Event Notifications through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

For instructions on how to configure Amazon S3 Event Notifications, see [Enabling event notifications](how-to-enable-disable-notification-intro.md).

**Note**  
In addition to enabling event notifications, make sure that you also enable S3 Replication metrics. For more information, see [Enabling S3 Replication metrics](repl-metrics.md#enabling-replication-metrics).

The following is an example of a message that Amazon S3 sends to publish an `s3:Replication:OperationFailedReplication` event. For more information, see [Event message structure](notification-content-structure.md).

```
{
  "Records": [
    {
      "eventVersion": "2.2",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "2024-09-05T21:04:32.527Z",
      "eventName": "Replication:OperationFailedReplication",
      "userIdentity": {
        "principalId": "s3.amazonaws.com"
      },
      "requestParameters": {
        "sourceIPAddress": "s3.amazonaws.com"
      },
      "responseElements": {
        "x-amz-request-id": "123bf045-2b4b-4ca8-a211-c34a63c59426",
        "x-amz-id-2": "12VAWNDIHnwJsRhTccqQTeAPoXQmRt22KkewMV8G3XZihAuf9CLDdmkApgZzudaIe2KlLfDqGS0="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "ReplicationEventName",
        "bucket": {
          "name": "amzn-s3-demo-bucket1",
          "ownerIdentity": {
            "principalId": "111122223333"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket1"
        },
        "object": {
          "key": "replication-object-put-test.png",
          "size": 520080,
          "eTag": "e12345ca7e88a38428305d3ff7fcb99f",
          "versionId": "abcdeH0Xp66ep__QDjR76LK7Gc9X4wKO",
          "sequencer": "0066DA1CBF104C0D51"
        }
      },
      "replicationEventData": {
        "replicationRuleId": "notification-test-replication-rule",
        "destinationBucket": "arn:aws:s3:::amzn-s3-demo-bucket2",
        "s3Operation": "OBJECT_PUT",
        "requestTime": "2024-09-05T21:03:59.168Z",
        "failureReason": "AssumeRoleNotPermitted"
      }
    }
  ]
}
```
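
When you consume these events (for example, in an AWS Lambda function subscribed to the notification destination), you can pull the failure details out of each record. The following is a minimal sketch of a hypothetical helper (not part of any AWS SDK) that extracts the object key, source bucket, destination, and `failureReason` from a message shaped like the example above:

```python
import json


def extract_replication_failures(message: str) -> list:
    """Return one summary dict per OperationFailedReplication record."""
    failures = []
    for record in json.loads(message).get("Records", []):
        if record.get("eventName") != "Replication:OperationFailedReplication":
            continue
        event_data = record.get("replicationEventData", {})
        failures.append({
            "key": record["s3"]["object"]["key"],
            "source_bucket": record["s3"]["bucket"]["name"],
            "destination": event_data.get("destinationBucket"),
            "failure_reason": event_data.get("failureReason"),
        })
    return failures
```

Passing the sample message above would yield a single entry whose `failure_reason` is `AssumeRoleNotPermitted`, which you could then log or forward to an alerting system.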

## Amazon S3 replication failure reasons


The following table lists Amazon S3 Replication failure reasons. You can view these reasons by receiving the `s3:Replication:OperationFailedReplication` event with Amazon S3 Event Notifications and then looking at the `failureReason` value. 

You can also view these failure reasons in an S3 Batch Replication completion report. For more information, see [Batch Replication completion report](s3-batch-replication-batch.md#batch-replication-completion-report).


| Replication failure reason | Description | 
| --- | --- | 
| `AssumeRoleNotPermitted` | Amazon S3 can't assume the AWS Identity and Access Management (IAM) role that's specified in the replication configuration or in the Batch Operations job. | 
| `DstBucketInvalidRegion` | The destination bucket isn't in the AWS Region that's specified in the Batch Operations job. This error is specific to Batch Replication. | 
| `DstBucketNotFound` | Amazon S3 is unable to find the destination bucket that's specified in the replication configuration. | 
| `DstBucketObjectLockConfigMissing` | To replicate objects from a source bucket with Object Lock enabled, the destination bucket must also have Object Lock enabled. This error indicates that Object Lock might not be enabled in the destination bucket. For more information, see [Object Lock considerations](object-lock-managing.md). | 
| `DstBucketUnversioned` | Versioning is not enabled for the S3 destination bucket. To replicate objects with S3 Replication, enable versioning for the destination bucket. | 
| `DstDelObjNotPermitted` | Amazon S3 is unable to replicate delete markers to the destination bucket. The `s3:ReplicateDelete` permission might be missing for the destination bucket. | 
| `DstKmsKeyInvalidState` | The AWS Key Management Service (AWS KMS) key for the destination bucket isn't in a valid state. Review and enable the required AWS KMS key. For more information about managing AWS KMS keys, see [Key states of AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) in the *AWS Key Management Service Developer Guide*. | 
| `DstKmsKeyNotFound` | The AWS KMS key that's configured for the destination bucket in the replication configuration doesn't exist. | 
| `DstMultipartCompleteNotPermitted` | Amazon S3 is unable to complete multipart uploads of objects in the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket. | 
| `DstMultipartInitNotPermitted` | Amazon S3 is unable to initiate multipart uploads of objects to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket.  | 
| `DstMultipartUploadNotPermitted` | Amazon S3 is unable to upload multipart upload objects to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket.  | 
| `DstObjectHardDeleted` | S3 Batch Replication doesn't support re-replicating objects that were permanently deleted (deleted by specifying their version ID) from the destination bucket. This error is specific to Batch Replication. | 
| `DstPutAclNotPermitted` | Amazon S3 is unable to replicate object access control lists (ACLs) to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket. | 
| `DstPutLegalHoldNotPermitted` | Amazon S3 is unable to put an Object Lock legal hold on the destination objects when it's replicating immutable objects. The `s3:PutObjectLegalHold` permission might be missing for the destination bucket. For more information, see [Legal holds](object-lock.md#object-lock-legal-holds). | 
|  `DstPutObjectNotPermitted` | Amazon S3 is unable to replicate objects to the destination bucket. This can occur when required permissions (`s3:ReplicateObject` or `s3:ObjectOwnerOverrideToBucketOwner` permissions) are missing for the destination bucket or when the AWS KMS key policy doesn't allow the source bucket's replication role to use the AWS KMS key (`kms:Decrypt` and `kms:GenerateDataKey*` actions) at the destination bucket.  | 
|  `DstPutRetentionNotPermitted` | Amazon S3 is unable to put a retention period on the destination objects when it's replicating immutable objects. The `s3:PutObjectRetention` permission might be missing for the destination bucket. | 
| `DstPutTaggingNotPermitted` | Amazon S3 is unable to replicate object tags to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket. | 
| `DstVersionNotFound` | Amazon S3 is unable to find the required object version in the destination bucket for which metadata needs to be replicated. | 
| `InitiateReplicationNotPermitted` | Amazon S3 is unable to initiate replication on objects. The `s3:InitiateReplication` permission might be missing for the Batch Operations job. This error is specific to Batch Replication. | 
| `SrcBucketInvalidRegion` | The source bucket isn't in the AWS Region that's specified in the Batch Operations job. This error is specific to Batch Replication. | 
| `SrcBucketNotFound` | Amazon S3 is unable to find the source bucket. | 
| `SrcBucketReplicationConfigMissing` | Amazon S3 couldn't find a replication configuration for the source bucket. | 
| `SrcGetAclNotPermitted` |  Amazon S3 is unable to access the object in the source bucket for replication. The `s3:GetObjectVersionAcl` permission might be missing for the source bucket object. The objects in the source bucket must be owned by the bucket owner. If ACLs are enabled, verify whether Object Ownership is set to Bucket owner preferred or Object writer. If Object Ownership is set to Bucket owner preferred, the source bucket objects must have the `bucket-owner-full-control` ACL for the bucket owner to become the object owner. The source account can take ownership of all objects in its bucket by setting Object Ownership to Bucket owner enforced, which disables ACLs.  | 
| `SrcGetLegalHoldNotPermitted` | Amazon S3 is unable to access the S3 Object Lock legal hold information. | 
| `SrcGetObjectNotPermitted` | Amazon S3 is unable to access the object in the source bucket for replication. The `s3:GetObjectVersionForReplication` permission might be missing for the source bucket.  | 
| `SrcGetRetentionNotPermitted` | Amazon S3 is unable to access the S3 Object Lock retention period information. | 
| `SrcGetTaggingNotPermitted` | Amazon S3 is unable to access object tag information from the source bucket. The `s3:GetObjectVersionTagging` permission might be missing for the source bucket. | 
| `SrcHeadObjectNotPermitted` | Amazon S3 is unable to retrieve object metadata from the source bucket. The `s3:GetObjectVersionForReplication` permission might be missing for the source bucket.  | 
| `SrcKeyNotFound` | Amazon S3 is unable to find the source object key to replicate. The source object might have been deleted before replication was complete. | 
| `SrcKmsKeyInvalidState` | The AWS KMS key for the source bucket isn't in a valid state. Review and enable the required AWS KMS key. For more information about managing AWS KMS keys, see [Key states of AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) in the *AWS Key Management Service Developer Guide*. | 
| `SrcObjectNotEligible` | Some objects aren't eligible for replication. This might be because of the object's storage class or because the object's tags don't match the replication configuration. | 
| `SrcObjectNotFound` | The source object doesn't exist. | 
| `SrcReplicationNotPending` | Amazon S3 has already replicated this object. This object is no longer pending replication. | 
| `SrcVersionNotFound` | Amazon S3 is unable to find the source object version to replicate. The source object version might have been deleted before replication was complete. | 
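
Many of these failure reasons point to one specific missing permission. As a triage aid, a small lookup table (assembled from the permission-related rows above; this is an illustrative mapping, not an AWS-provided one) can turn a `failureReason` into the permission to check first:

```python
from typing import Optional

# Likely-missing permission for common permission-related failure reasons,
# derived from the failure-reason table above (not exhaustive).
LIKELY_MISSING_PERMISSION = {
    "DstDelObjNotPermitted": "s3:ReplicateDelete",
    "DstMultipartCompleteNotPermitted": "s3:ReplicateObject",
    "DstMultipartInitNotPermitted": "s3:ReplicateObject",
    "DstMultipartUploadNotPermitted": "s3:ReplicateObject",
    "DstPutAclNotPermitted": "s3:ReplicateObject",
    "DstPutLegalHoldNotPermitted": "s3:PutObjectLegalHold",
    "DstPutRetentionNotPermitted": "s3:PutObjectRetention",
    "DstPutTaggingNotPermitted": "s3:ReplicateObject",
    "SrcGetAclNotPermitted": "s3:GetObjectVersionAcl",
    "SrcGetObjectNotPermitted": "s3:GetObjectVersionForReplication",
    "SrcGetTaggingNotPermitted": "s3:GetObjectVersionTagging",
    "SrcHeadObjectNotPermitted": "s3:GetObjectVersionForReplication",
    "InitiateReplicationNotPermitted": "s3:InitiateReplication",
}


def permission_to_check(failure_reason: str) -> Optional[str]:
    """Return the permission to verify first, or None for non-permission failures."""
    return LIKELY_MISSING_PERMISSION.get(failure_reason)
```

For failure reasons that aren't permission-related (such as `DstBucketNotFound` or `SrcKmsKeyInvalidState`), the lookup returns `None`, and you would consult the table above directly.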

### Related topics


[Setting up permissions for live replication](setting-repl-config-perm-overview.md)

[Troubleshooting replication](replication-troubleshoot.md)

# Getting replication status information

Replication status can help you determine the current state of an object being replicated. The replication status of a source object is `PENDING`, `COMPLETED`, or `FAILED`. The replication status of a replica is `REPLICA`.

You can also use replication status values when you're creating S3 Batch Replication jobs. For example, you can use these status values to replicate objects that have either never been replicated or that have failed replication. For more information about using these values with Batch Replication, see [Using replication status information with Batch Replication jobs](#replication-status-batch-replication).

**Topics**
+ [

## Replication status overview
](#replication-status-overview)
+ [

## Replication status if replicating to multiple destination buckets
](#replication-status-multiple-destinations)
+ [

## Replication status if Amazon S3 replica modification sync is enabled
](#replication-status-replica-mod-syn)
+ [

## Using replication status information with Batch Replication jobs
](#replication-status-batch-replication)
+ [

## Finding replication status
](#replication-status-usage)

## Replication status overview


In replication, you have a source bucket on which you configure replication and one or more destination buckets where Amazon S3 replicates objects. When you request an object (by using `GetObject`) or object metadata (by using `HeadObject`) from these buckets, Amazon S3 returns the `x-amz-replication-status` header in the response: 
+ When you request an object from the source bucket, Amazon S3 returns the `x-amz-replication-status` header if the object in your request is eligible for replication. 

  For example, suppose that you specify the object prefix `TaxDocs` in your replication configuration to tell Amazon S3 to replicate only objects with the key name prefix `TaxDocs`. Any objects that you upload that have this key name prefix—for example, `TaxDocs/document1.pdf`—will be replicated. For object requests with this key name prefix, Amazon S3 returns the `x-amz-replication-status` header with one of the following values for the object's replication status: `PENDING`, `COMPLETED`, or `FAILED`.
**Note**  
If object replication fails after you upload an object, you can't retry replication. You must upload the object again, or you must use S3 Batch Replication to replicate any failed objects. S3 Lifecycle blocks expiration and transition actions on objects with `PENDING` or `FAILED` replication status. For more information about using Batch Replication, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).   
Objects transition to a `FAILED` state for issues such as missing replication role permissions, AWS Key Management Service (AWS KMS) permissions, or bucket permissions. For temporary failures, such as if a bucket or Region is unavailable, replication status doesn't transition to `FAILED`, but remains `PENDING`. After the resource is back online, Amazon S3 resumes replicating those objects.
+ When you request an object from a destination bucket, if the object in your request is a replica that Amazon S3 created, Amazon S3 returns the `x-amz-replication-status` header with the value `REPLICA`.

**Note**  
Before deleting an object from a source bucket that has replication enabled, check the object's replication status to make sure that the object has been replicated.   
If an S3 Lifecycle configuration is enabled on the source bucket, Amazon S3 suspends lifecycle actions until it marks the object's replication status as `COMPLETED`. If replication status is `FAILED`, S3 Lifecycle continues to block expiration and transition actions on the object until you resolve the underlying replication issue. For more information, see [S3 Lifecycle and replication](lifecycle-and-other-bucket-config.md#lifecycle-and-replication).

## Replication status if replicating to multiple destination buckets


When you replicate objects to multiple destination buckets, the `x-amz-replication-status` header behaves differently. The header of the source object returns `COMPLETED` only when replication succeeds to all destinations, and it remains `PENDING` until replication has completed for all destinations. If replication to one or more destinations fails, the header returns `FAILED`.

## Replication status if Amazon S3 replica modification sync is enabled


When your replication rules enable Amazon S3 replica modification sync, replicas can report statuses other than `REPLICA`. If metadata changes are in the process of replicating, the `x-amz-replication-status` header returns `PENDING`. If replica modification sync fails to replicate metadata, the header returns `FAILED`. If metadata is replicated correctly, the replicas return the header `REPLICA`.

## Using replication status information with Batch Replication jobs


When creating a Batch Replication job, you can optionally specify additional filters, such as the object creation date and replication status, to reduce the scope of the job.

You can filter objects to replicate based on the `ObjectReplicationStatuses` value by providing one or more of the following values:
+ `"NONE"` – Indicates that Amazon S3 has never attempted to replicate the object before.
+ `"FAILED"` – Indicates that Amazon S3 has attempted, but failed, to replicate the object before.
+ `"COMPLETED"` – Indicates that Amazon S3 has successfully replicated the object before.
+ `"REPLICA"` – Indicates that this is a replica object that Amazon S3 has replicated from another source.

For more information about using these replication status values with Batch Replication, see [Filters for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-filters).
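
To illustrate how such a filter behaves, the following sketch selects objects from an inventory-style listing whose replication status matches the requested values. This is a local simulation of the filter semantics, not a call to the Batch Operations API; the field names are illustrative:

```python
def filter_by_replication_status(objects, statuses):
    """Keep only objects whose replication status is in the requested set.

    `objects` is an iterable of dicts with a "ReplicationStatus" key, where
    an empty value means the object has never been replicated ("NONE").
    """
    wanted = set(statuses)
    for obj in objects:
        status = obj.get("ReplicationStatus") or "NONE"
        if status in wanted:
            yield obj


# Example: select only objects that were never replicated or that failed,
# mirroring a job scoped to ["NONE", "FAILED"].
inventory = [
    {"Key": "a.txt", "ReplicationStatus": "COMPLETED"},
    {"Key": "b.txt", "ReplicationStatus": "FAILED"},
    {"Key": "c.txt", "ReplicationStatus": ""},  # never attempted
]
to_replicate = list(filter_by_replication_status(inventory, ["NONE", "FAILED"]))
# b.txt and c.txt are selected; a.txt is skipped.
```

Scoping a job this way avoids re-copying objects that have already replicated successfully, which reduces both job runtime and request costs.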

## Finding replication status


To get the replication status of the objects in a bucket, you can use the Amazon S3 Inventory tool. Amazon S3 sends a CSV file to the destination bucket that you specify in the inventory configuration. You can also use Amazon Athena to query the replication status in the inventory report. For more information about Amazon S3 Inventory, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

You can also find the object replication status by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), or the AWS SDK. 

### Using the S3 console


In the Amazon S3 console, you can view the replication status for an object on the object's details page.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **General purpose buckets** list, choose the name of the replication source bucket.

1. In the **Objects** list, choose the object name. The object's details page appears. 

1. On the **Properties** tab, scroll down to the **Object management overview** section. Under **Management configurations**, see the value under **Replication status**.

### Using the AWS CLI


Use the AWS Command Line Interface (AWS CLI) `head-object` command to retrieve object metadata, as shown in the following example. Replace `amzn-s3-demo-source-bucket1` with the name of your replication source bucket, and replace the other `user input placeholders` with your own information.

```
aws s3api head-object --bucket amzn-s3-demo-source-bucket1 --key object-key --version-id object-version-id           
```

The command returns object metadata, including the `ReplicationStatus`, as shown in the following example response.

```
{
   "AcceptRanges":"bytes",
   "ContentType":"image/jpeg",
   "LastModified":"Mon, 23 Mar 2015 21:02:29 GMT",
   "ContentLength":3191,
   "ReplicationStatus":"COMPLETED",
   "VersionId":"jfnW.HIMOfYiD_9rGbSkmroXsFj3fqZ.",
   "ETag":"\"6805f2cfc46c0f04559748bb039d69ae\"",
   "Metadata":{

   }
}
```
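
You can also script this check, for example as a guard before deleting a source object (as recommended in the note earlier in this topic). The following sketch uses a hypothetical helper; the field name follows the `head-object` response shown above:

```python
import json


def is_replicated(head_object_response: dict) -> bool:
    """True only when the source object's replication status is COMPLETED."""
    return head_object_response.get("ReplicationStatus") == "COMPLETED"


# Example: parse the JSON that `aws s3api head-object` prints.
response = json.loads('{"ReplicationStatus": "COMPLETED", "ContentLength": 3191}')
if is_replicated(response):
    print("Safe to proceed: the object has replicated.")
```

A `PENDING` or `FAILED` status, or a response with no `ReplicationStatus` field at all (an object not eligible for replication), returns `False`, so the caller can hold off until replication completes or is fixed.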

### Using the AWS SDKs


The following code fragments get an object's replication status by using the AWS SDK for Java and the AWS SDK for .NET, respectively. 

------
#### [ Java ]

```
import com.amazonaws.services.s3.Headers;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

// Retrieve the object's metadata, then read the x-amz-replication-status header.
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest(bucketName, key);
ObjectMetadata metadata = s3Client.getObjectMetadata(metadataRequest);

System.out.println("Replication Status : " + metadata.getRawMetadataValue(Headers.OBJECT_REPLICATION_STATUS));
```

------
#### [ .NET ]

```
// Retrieve the object's metadata, then read the replication status.
GetObjectMetadataRequest getmetadataRequest = new GetObjectMetadataRequest
    {
         BucketName = sourceBucket,
         Key        = objectKey
    };

GetObjectMetadataResponse getmetadataResponse = client.GetObjectMetadata(getmetadataRequest);
Console.WriteLine("Object replication status: {0}", getmetadataResponse.ReplicationStatus);
```

------