

# Copy objects


You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The Batch Operations **Copy** operation copies each object that is specified in the manifest. You can copy objects to a bucket in the same AWS Region or to a bucket in a different Region. S3 Batch Operations supports most options available through Amazon S3 for copying objects. These options include setting object metadata, setting permissions, and changing an object's storage class. 

You can also use the **Copy** operation to copy existing unencrypted objects and write them back to the same bucket as encrypted objects. For more information, see [Encrypting objects with Amazon S3 Batch Operations](https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/).

When you copy objects, you can change the checksum algorithm used to calculate the checksum of the object. If objects don't have an additional checksum calculated, you can also add one by specifying the checksum algorithm for Amazon S3 to use. For more information, see [Checking object integrity in Amazon S3](checking-object-integrity.md).

For more information about copying objects in Amazon S3 and the required and optional parameters, see [Copying, moving, and renaming objects](copy-object.md) in this guide and [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.
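
The per-object options that a Batch Operations **Copy** job applies mirror the `CopyObject` parameters. The following sketch (bucket and key names are hypothetical placeholders) builds the parameter set for a single boto3 `copy_object` call, showing the metadata, storage class, and checksum algorithm options mentioned above; a Batch Operations job applies the equivalent settings to every object in the manifest.

```python
# Sketch: CopyObject options that a Batch Operations Copy job can also apply.
# All bucket and key names are hypothetical placeholders.
def build_copy_params(src_bucket, src_key, dest_bucket, dest_key):
    return {
        "Bucket": dest_bucket,
        "Key": dest_key,
        "CopySource": {"Bucket": src_bucket, "Key": src_key},
        # Replace metadata instead of copying it from the source object.
        "MetadataDirective": "REPLACE",
        "Metadata": {"project": "demo"},
        # Change the storage class of the copy.
        "StorageClass": "STANDARD_IA",
        # Add (or change) an additional checksum on the copy.
        "ChecksumAlgorithm": "SHA256",
    }

params = build_copy_params(
    "amzn-s3-demo-source-bucket", "example.jpg",
    "amzn-s3-demo-destination-bucket", "example.jpg",
)
# With credentials configured, pass the parameters to boto3:
# import boto3
# boto3.client("s3").copy_object(**params)
```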

## Restrictions and limitations


When you're using the Batch Operations **Copy** operation, the following restrictions and limitations apply:
+ All source objects must be in one bucket.
+ All destination objects must be in one bucket.
+ You must have read permissions for the source bucket and write permissions for the destination bucket.
+ Objects to be copied can be up to 5 GB in size.
+ If you try to copy objects from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive classes to the S3 Standard storage class, you must first restore these objects. For more information, see [Restoring an archived object](restoring-objects.md).
+ You must create your Batch Operations **Copy** jobs in the destination Region, which is the Region that you intend to copy the objects to.
+ All `CopyObject` options are supported except for conditional checks on entity tags (ETags) and server-side encryption with customer-provided encryption keys (SSE-C).
+ If the destination bucket is unversioned, you will overwrite any objects that have the same key names.
+ Objects aren't necessarily copied in the same order as they appear in the manifest. For versioned buckets, if preserving the current or noncurrent version order is important, copy all noncurrent versions first. Then, after the first job is complete, copy the current versions in a subsequent job. 
+ Copying objects to the Reduced Redundancy Storage (RRS) class isn't supported.
+ A single Batch Operations Copy job can support a manifest with up to 20 billion objects.

# Copying objects using S3 Batch Operations
Examples that use Batch Operations to copy objects

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use S3 Batch Operations to create a **Copy** (`CopyObject`) job to copy objects within the same account or to a different destination account. 

The following examples show how to store and use a manifest that is in a different account. The first example shows how you can use Amazon S3 Inventory to deliver the inventory report to the destination account for use during job creation. The second example shows how to use a comma-separated values (CSV) manifest in the source or destination account. The third example shows how to use the **Copy** operation to enable S3 Bucket Keys for existing objects that have been encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).

**Topics**
+ [Using an inventory report to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-inventory.md)
+ [Using a CSV manifest to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-csv.md)
+ [Using Batch Operations to enable S3 Bucket Keys for SSE-KMS](batch-ops-copy-example-bucket-key.md)

# Using an inventory report to copy objects across AWS accounts

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use S3 Batch Operations to create a **Copy** (`CopyObject`) job to copy objects within the same account or to a different destination account.

You can use Amazon S3 Inventory to create an inventory report and use the report to create a list (manifest) of objects to copy with S3 Batch Operations. For more information about using a CSV manifest in the source or destination account, see [Using a CSV manifest to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-csv.md).

Amazon S3 Inventory generates inventories of the objects in a bucket. The resulting list is published to an output file. The bucket that is inventoried is called the source bucket, and the bucket where the inventory report file is stored is called the destination bucket. 

The Amazon S3 Inventory report can be configured to be delivered to another AWS account. Doing so allows S3 Batch Operations to read the inventory report when the job is created in the destination account.

For more information about Amazon S3 Inventory source and destination buckets, see [Source and destination buckets](storage-inventory.md#storage-inventory-buckets).

The easiest way to set up an inventory is by using the Amazon S3 console, but you can also use the Amazon S3 REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs.

The following console procedure contains the high-level steps for setting up permissions for an S3 Batch Operations job. In this procedure, you copy objects from a source account to a destination account, with the inventory report stored in the destination account.

**To set up Amazon S3 Inventory for source and destination buckets owned by different accounts**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. Decide on (or create) a destination manifest bucket to store the inventory report in. In this procedure, the *destination account* is the account that owns both the destination manifest bucket and the bucket that the objects are copied to.

1. Configure an inventory report for a source bucket. For information about how to use the console to configure an inventory or how to encrypt an inventory list file, see [Configuring Amazon S3 Inventory](configure-inventory.md). 

   When you configure the inventory report, you specify the destination bucket where you want the list to be stored. The inventory report for the source bucket is published to the destination bucket. In this procedure, the *source account* is the account that owns the source bucket.

   Choose **CSV** for the output format.

   When you enter information for the destination bucket, choose **Buckets in another account**. Then enter the name of the destination manifest bucket. Optionally, you can enter the account ID of the destination account.

   After the inventory configuration is saved, the console displays a message similar to the following: 

   Amazon S3 could not create a bucket policy on the destination bucket. Ask the destination bucket owner to add the following bucket policy to allow Amazon S3 to place data in that bucket.

   The console then displays a bucket policy that you can use for the destination bucket.

1. Copy the destination bucket policy that appears on the console.

1. In the destination account, add the copied bucket policy to the destination manifest bucket where the inventory report is stored.

1. Create a role in the destination account that is based on the S3 Batch Operations trust policy. For more information about this trust policy, see [Trust policy](batch-ops-iam-role-policies.md#batch-ops-iam-role-policies-trust).

   For more information about creating a role, see [ Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*.

   Enter a name for the role (the following example role uses the name *`BatchOperationsDestinationRoleCOPY`*). Choose the **S3** service, and then choose the **S3 Batch Operations** use case, which applies the trust policy to the role. 

   Then choose **Create policy** to attach the following policy to the role. To use this policy, replace the *`user input placeholders`* with your own information. 

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsDestinationObjectCOPY",
         "Effect": "Allow",
         "Action": [
           "s3:PutObject",
           "s3:PutObjectVersionAcl",
           "s3:PutObjectAcl",
           "s3:PutObjectVersionTagging",
           "s3:PutObjectTagging",
           "s3:GetObject",
           "s3:GetObjectVersion",
           "s3:GetObjectAcl",
           "s3:GetObjectTagging",
           "s3:GetObjectVersionAcl",
           "s3:GetObjectVersionTagging"
         ],
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
         ]
       }
     ]
   }
   ```


   The role uses the policy to grant `batchoperations.s3.amazonaws.com` permission to read the manifest in the destination bucket. It also grants permissions to `GET` objects, access control lists (ACLs), tags, and versions in the source object bucket, and to `PUT` objects, ACLs, tags, and versions into the destination object bucket.

1. In the source account, create a bucket policy for the source bucket that grants the role that you created in the previous step permissions to `GET` objects, ACLs, tags, and versions in the source bucket. This step allows S3 Batch Operations to get objects from the source bucket through the trusted role.

   The following is an example of the bucket policy for the source account. To use this policy, replace the *`user input placeholders`* with your own information.

   ```json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowBatchOperationsSourceObjectCOPY",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY"
               },
               "Action": [
                   "s3:GetObject",
                   "s3:GetObjectVersion",
                   "s3:GetObjectAcl",
                   "s3:GetObjectTagging",
                   "s3:GetObjectVersionAcl",
                   "s3:GetObjectVersionTagging"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
           }
       ]
   }
   ```


1. After the inventory report is available, create an S3 Batch Operations **Copy** (`CopyObject`) job in the destination account, and choose the inventory report from the destination manifest bucket. You need the ARN for the IAM role that you created in the destination account.

   For information about creating a job, including by using the console, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).
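
As a sketch of this final step, the following builds a `CreateJob` request in the destination account that points at the delivered inventory manifest. The role ARN, bucket names, and ETag are hypothetical placeholders, and only a minimal subset of the real `CreateJob` parameters is shown.

```python
# Sketch: a CreateJob request for a cross-account Copy job that uses an
# S3 Inventory report as the manifest. All ARNs and the ETag are placeholders.
inventory_copy_job = {
    "AccountId": "111122223333",  # the destination account
    "Operation": {
        "S3PutObjectCopy": {
            "TargetResource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
        }
    },
    "Manifest": {
        "Spec": {
            # Inventory-report manifests use this format instead of plain CSV.
            "Format": "S3InventoryReport_CSV_20161130"
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::amzn-s3-demo-manifest-bucket/inventory/manifest.json",
            "ETag": "example-etag",
        },
    },
    "Priority": 10,
    "RoleArn": "arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY",
    "Report": {"Enabled": False},
    "ConfirmationRequired": True,
}
# With credentials configured in the destination account:
# import boto3
# boto3.client("s3control").create_job(**inventory_copy_job)
```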

# Using a CSV manifest to copy objects across AWS accounts

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use S3 Batch Operations to create a **Copy** (`CopyObject`) job to copy objects within the same account or to a different destination account.

You can use a CSV manifest that's stored in the source account to copy objects across AWS accounts with S3 Batch Operations. To use an S3 Inventory report as a manifest, see [Using an inventory report to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-inventory.md).

For an example of the CSV format for manifest files, see [Creating a manifest file](batch-ops-create-job.md#create-manifest-file).

The following procedure shows how to set up permissions when using an S3 Batch Operations job to copy objects from a source account to a destination account with a CSV manifest file that's stored in the source account.

**To use a CSV manifest to copy objects across AWS accounts**

1. Create an AWS Identity and Access Management (IAM) role in the destination account that's based on the S3 Batch Operations trust policy. In this procedure, the *destination account* is the account that the objects are being copied to.

   For more information about the trust policy, see [Trust policy](batch-ops-iam-role-policies.md#batch-ops-iam-role-policies-trust).

   For more information about creating a role, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*.

   If you create the role by using the console, enter a name for the role (the following example role uses the name `BatchOperationsDestinationRoleCOPY`). Choose the **S3** service, and then choose the **S3 Batch Operations** use case, which applies the trust policy to the role.

   Then choose **Create policy** to attach the following policy to the role. To use this policy, replace the *`user input placeholders`* with your own information.

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsDestinationObjectCOPY",
         "Effect": "Allow",
         "Action": [
           "s3:PutObject",
           "s3:PutObjectVersionAcl",
           "s3:PutObjectAcl",
           "s3:PutObjectVersionTagging",
           "s3:PutObjectTagging",
           "s3:GetObject",
           "s3:GetObjectVersion",
           "s3:GetObjectAcl",
           "s3:GetObjectTagging",
           "s3:GetObjectVersionAcl",
           "s3:GetObjectVersionTagging"
         ],
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
         ]
       }
     ]
   }
   ```


   Using the policy, the role grants `batchoperations.s3.amazonaws.com` permission to read the manifest in the source manifest bucket. It grants permissions to `GET` objects, access control lists (ACLs), tags, and versions in the source object bucket, and to `PUT` objects, ACLs, tags, and versions into the destination object bucket.

1. In the source account, create a bucket policy for the bucket that contains the manifest to grant the role that you created in the previous step permissions to `GET` objects and versions in the source manifest bucket.

   This step allows S3 Batch Operations to read the manifest by using the trusted role. Apply the bucket policy to the bucket that contains the manifest.

   The following is an example of the bucket policy to apply to the source manifest bucket. To use this policy, replace the *`user input placeholders`* with your own information.

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsSourceManifestRead",
         "Effect": "Allow",
         "Principal": {
           "AWS": [
             "arn:aws:iam::111122223333:user/ConsoleUserCreatingJob",
             "arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY"
           ]
         },
         "Action": [
           "s3:GetObject",
           "s3:GetObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
       }
     ]
   }
   ```


   This policy also grants permissions to allow a console user who is creating a job in the destination account the same permissions in the source manifest bucket through the same bucket policy.

1. In the source account, create a bucket policy for the source bucket that grants the role that you created permissions to `GET` objects, ACLs, tags, and versions in the source object bucket. S3 Batch Operations can then get objects from the source bucket through the trusted role.

   The following is an example of the bucket policy for the bucket that contains the source objects. To use this policy, replace the *`user input placeholders`* with your own information.

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsSourceObjectCOPY",
         "Effect": "Allow",
         "Principal": {
           "AWS": "arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY"
         },
         "Action": [
           "s3:GetObject",
           "s3:GetObjectVersion",
           "s3:GetObjectAcl",
           "s3:GetObjectTagging",
           "s3:GetObjectVersionAcl",
           "s3:GetObjectVersionTagging"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
       }
     ]
   }
   ```


1. Create an S3 Batch Operations job in the destination account. You need the Amazon Resource Name (ARN) for the role that you created in the destination account. For more information about creating a job, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).
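
When the manifest is a plain CSV file rather than an inventory report, the `Manifest` section of the `CreateJob` request declares the CSV format and the columns that each line contains. A minimal sketch (bucket name and ETag are hypothetical placeholders):

```python
# Sketch: the Manifest section of a CreateJob request for a CSV manifest
# with lines of the form Bucket,Key,VersionId. Names and ETag are placeholders.
csv_manifest = {
    "Spec": {
        "Format": "S3BatchOperations_CSV_20180820",
        # Declare which columns each CSV line contains, in order.
        "Fields": ["Bucket", "Key", "VersionId"],
    },
    "Location": {
        "ObjectArn": "arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest.csv",
        "ETag": "example-etag",
    },
}
```

Omit `"VersionId"` from `Fields` if the manifest lists only bucket and key.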

# Using Batch Operations to enable S3 Bucket Keys for SSE-KMS
Using Batch Operations to encrypt objects with S3 Bucket Keys

S3 Bucket Keys reduce the cost of server-side encryption with AWS Key Management Service (AWS KMS) (SSE-KMS) by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md) and [Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects](configuring-bucket-key.md). When you perform a `CopyObject` operation by using the REST API, AWS SDKs, or AWS CLI, you can enable or disable an S3 Bucket Key at the object level by adding the `x-amz-server-side-encryption-bucket-key-enabled` request header with a `true` or `false` value. 

When you configure an S3 Bucket Key for an object by using a `CopyObject` operation, Amazon S3 updates only the settings for that object. The S3 Bucket Key settings for the destination bucket don't change. If you submit a `CopyObject` request for an object that's encrypted with AWS KMS to a bucket that has S3 Bucket Keys enabled, your object-level operation automatically uses S3 Bucket Keys unless you disable them in the request header. If you don't specify an S3 Bucket Key for your object, Amazon S3 applies the S3 Bucket Key settings of the destination bucket to the object.
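
In boto3, that request header maps to the `BucketKeyEnabled` parameter of `copy_object`. A minimal sketch (bucket, key, and KMS key names are hypothetical placeholders):

```python
# Sketch: per-object S3 Bucket Key control on a CopyObject request.
# boto3 maps BucketKeyEnabled to the
# x-amz-server-side-encryption-bucket-key-enabled header.
copy_with_bucket_key = {
    "Bucket": "amzn-s3-demo-destination-bucket",
    "Key": "example.jpg",
    "CopySource": {"Bucket": "amzn-s3-demo-destination-bucket", "Key": "example.jpg"},
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/example-kms-key",  # placeholder key alias
    "BucketKeyEnabled": True,  # set False to disable the S3 Bucket Key
}
# With credentials configured:
# import boto3
# boto3.client("s3").copy_object(**copy_with_bucket_key)
```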

To encrypt your existing Amazon S3 objects, you can use S3 Batch Operations. You can use the Batch Operations **Copy** operation to copy existing unencrypted objects and write them back to the same bucket as encrypted objects. For more information, see [Encrypting objects with Amazon S3 Batch Operations](https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/) on the AWS Storage Blog.

In the following example, you use the Batch Operations **Copy** operation to enable S3 Bucket Keys on existing objects. For more information, see [Configuring an S3 Bucket Key at the object level](configuring-bucket-key-object.md).

**Topics**
+ [Considerations for using S3 Batch Operations to encrypt objects with S3 Bucket Keys enabled](#bucket-key-ex-things-to-note)
+ [Prerequisites](#bucket-key-ex-prerequisites)
+ [Step 1: Get your list of objects using Amazon S3 Inventory](#bucket-key-ex-get-list-of-objects)
+ [Step 2: Filter your object list with S3 Select](#bucket-key-ex-filter-object-list-with-s3-select)
+ [Step 3: Set up and run your S3 Batch Operations job](#bucket-key-ex-setup-and-run-job)

## Considerations for using S3 Batch Operations to encrypt objects with S3 Bucket Keys enabled


Consider the following issues when you use S3 Batch Operations to encrypt objects with S3 Bucket Keys enabled:
+ You will be charged for S3 Batch Operations jobs, objects, and requests in addition to any charges associated with the operation that S3 Batch Operations performs on your behalf, including data transfers, requests, and other charges. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).
+ If you use a versioned bucket, each S3 Batch Operations job performed creates new encrypted versions of your objects. It also maintains the previous versions without an S3 Bucket Key configured. To delete the old versions, set up an S3 Lifecycle expiration policy for noncurrent versions as described in [Lifecycle configuration elements](intro-lifecycle-rules.md).
+ The copy operation creates new objects with new creation dates, which can affect lifecycle actions like archiving. If you copy all objects in your bucket, all the new copies have identical or similar creation dates. To further identify these objects and create different lifecycle rules for various data subsets, consider using object tags. 

## Prerequisites


Before you configure your objects to use an S3 Bucket Key, review [Changes to note before enabling an S3 Bucket Key](bucket-key.md#bucket-key-changes). 

To use this example, you must have an AWS account and at least one S3 bucket to hold your working files and encrypted results. You might also find much of the existing S3 Batch Operations documentation useful, including the following topics:
+ [S3 Batch Operations basics](batch-ops.md#batch-ops-basics)
+ [Creating an S3 Batch Operations job](batch-ops-create-job.md)
+ [Operations supported by S3 Batch Operations](batch-ops-operations.md)
+ [Managing S3 Batch Operations jobs](batch-ops-managing-jobs.md)

## Step 1: Get your list of objects using Amazon S3 Inventory


To get started, identify the S3 bucket that contains the objects to encrypt, and get a list of its contents. An Amazon S3 Inventory report is the most convenient and affordable way to do this. The report provides the list of the objects in a bucket along with their associated metadata. In this step, the source bucket is the inventoried bucket, and the destination bucket is the bucket where you store the inventory report file. For more information about Amazon S3 Inventory source and destination buckets, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

The easiest way to set up an inventory is by using the AWS Management Console. But you can also use the REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs. Before following these steps, be sure to sign in to the console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/). If you encounter permission denied errors, add a bucket policy to your destination bucket. For more information, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1).

**To get a list of objects using S3 Inventory**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**, and choose a bucket that contains objects to encrypt.

1. On the **Management** tab, navigate to the **Inventory configurations** section, and choose **Create inventory configuration**.

1. Give your new inventory a name, enter the name of the destination S3 bucket, and optionally create a destination prefix for Amazon S3 to assign objects in that bucket.

1. For **Output format**, choose **CSV**.

1. (Optional) In the **Additional fields – *optional*** section, choose **Encryption** and any other report fields that interest you. Set the frequency for report deliveries to **Daily** so that the first report is delivered to your bucket sooner.

1. Choose **Create** to save your configuration.

Amazon S3 can take up to 48 hours to deliver the first report, so check back until it arrives. After you receive your first report, proceed to the next step to filter your S3 Inventory report's contents. If you no longer want to receive inventory reports for this bucket, delete your S3 Inventory configuration. Otherwise, Amazon S3 continues to deliver reports on a daily or weekly schedule.

An inventory list isn't a single point-in-time view of all objects. Inventory lists are a rolling snapshot of bucket items, which are eventually consistent (for example, the list might not include recently added or deleted objects). Combining S3 Inventory and S3 Batch Operations works best when you work with static objects, or with an object set that you created two or more days ago. To work with more recent data, use the [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) (`GET` bucket) API operation to build your list of objects manually. If needed, repeat the process for the next few days or until your inventory report shows the desired status for all objects.
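
One way to build such a list manually is to page through `ListObjectsV2` results and write the keys out as CSV manifest lines. The following sketch keeps the CSV generation runnable on its own and leaves the API call commented out; the bucket name is a hypothetical placeholder.

```python
import csv
import io

# Sketch: turn a list of object keys (for example, from a ListObjectsV2
# paginator) into CSV manifest lines for an S3 Batch Operations job.
def keys_to_manifest_csv(bucket, keys):
    buf = io.StringIO()
    writer = csv.writer(buf)
    for key in keys:
        writer.writerow([bucket, key])  # one Bucket,Key line per object
    return buf.getvalue()

# With credentials configured, the keys would come from the API:
# import boto3
# paginator = boto3.client("s3").get_paginator("list_objects_v2")
# keys = [obj["Key"]
#         for page in paginator.paginate(Bucket="amzn-s3-demo-source-bucket")
#         for obj in page.get("Contents", [])]

manifest = keys_to_manifest_csv("amzn-s3-demo-source-bucket", ["a.jpg", "b.jpg"])
```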

## Step 2: Filter your object list with S3 Select


After you receive your S3 Inventory report, you can filter the report’s contents to list only the objects that aren't encrypted with S3 Bucket Keys enabled. If you want all your bucket’s objects encrypted with S3 Bucket Keys enabled, you can ignore this step. However, filtering your S3 Inventory report at this stage saves you the time and expense of re-encrypting objects that you previously encrypted with S3 Bucket Keys enabled.

Although the following steps show how to filter by using [Amazon S3 Select](https://aws.amazon.com/blogs/aws/s3-glacier-select/), you can also use [Amazon Athena](https://aws.amazon.com/athena). To decide which tool to use, look at your S3 Inventory report’s `manifest.json` file. This file lists the number of data files that are associated with that report. If the number is large, use Amazon Athena because it runs across multiple S3 objects, whereas S3 Select works on one object at a time. For more information about using Amazon S3 and Athena together, see [Querying Amazon S3 Inventory with Amazon Athena](storage-inventory-athena-query.md) and "Using Athena" in the AWS Storage Blog post [Encrypting objects with Amazon S3 Batch Operations](https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations).

**To filter your S3 Inventory report by using S3 Select**

1. Open the `manifest.json` file from your inventory report and look at the `fileSchema` section of the JSON. The `fileSchema` determines the query that you run on the data. 

   The following JSON is an example `manifest.json` file for a CSV-formatted inventory on a bucket with versioning enabled. Depending on how you configured your inventory report, your manifest might look different.

   ```json
     {
       "sourceBucket": "batchoperationsdemo",
       "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
       "version": "2021-05-22",
       "creationTimestamp": "1558656000000",
       "fileFormat": "CSV",
       "fileSchema": "Bucket, Key, VersionId, IsLatest, IsDeleteMarker, BucketKeyStatus",
       "files": [
         {
           "key": "demoinv/batchoperationsdemo/DemoInventory/data/009a40e4-f053-4c16-8c75-6100f8892202.csv.gz",
           "size": 72691,
           "MD5checksum": "c24c831717a099f0ebe4a9d1c5d3935c"
         }
       ]
     }
   ```

   If versioning isn't activated on the bucket, or if you choose to run the report for the latest versions, the `fileSchema` is `Bucket`, `Key`, and `BucketKeyStatus`. 

   If versioning *is* activated, depending on how you set up the inventory report, the `fileSchema` might include the following: `Bucket`, `Key`, `VersionId`, `IsLatest`, `IsDeleteMarker`, `BucketKeyStatus`. So pay attention to columns 1, 2, 3, and 6 when you run your query. 

   S3 Batch Operations needs the bucket, key, and version ID as inputs to perform the job, in addition to the field to search by, which is `BucketKeyStatus`. The `VersionId` field isn't required, but specifying it helps when you operate on a versioned bucket. For more information, see [Working with objects in a versioning-enabled bucket](manage-objects-versioned-bucket.md).

1. Locate the data files for the inventory report. The `manifest.json` object lists the data files under **files**.

1. After you locate and select the data file in the S3 console, choose **Actions**, and then choose **Query with S3 Select**.

1. Keep the preset **CSV**, **Comma**, and **GZIP** fields selected, and choose **Next**.

1. To review your inventory report’s format before proceeding, choose **Show file preview**.

1. Enter the columns to reference in the SQL expression field, and choose **Run SQL**. The following expression returns columns 1–3 for all objects without an S3 Bucket Key configured.

   `select s._1, s._2, s._3 from s3object s where s._6 = 'DISABLED'`

   The following are example results.

   ```
         batchoperationsdemo,0100059%7Ethumb.jpg,lsrtIxksLu0R0ZkYPL.LhgD5caTYn6vu
         batchoperationsdemo,0100074%7Ethumb.jpg,sd2M60g6Fdazoi6D5kNARIE7KzUibmHR
         batchoperationsdemo,0100075%7Ethumb.jpg,TLYESLnl1mXD5c4BwiOIinqFrktddkoL
         batchoperationsdemo,0200147%7Ethumb.jpg,amufzfMi_fEw0Rs99rxR_HrDFlE.l3Y0
         batchoperationsdemo,0301420%7Ethumb.jpg,9qGU2SEscL.C.c_sK89trmXYIwooABSh
         batchoperationsdemo,0401524%7Ethumb.jpg,ORnEWNuB1QhHrrYAGFsZhbyvEYJ3DUor
         batchoperationsdemo,200907200065HQ%7Ethumb.jpg,d8LgvIVjbDR5mUVwW6pu9ahTfReyn5V4
         batchoperationsdemo,200907200076HQ%7Ethumb.jpg,XUT25d7.gK40u_GmnupdaZg3BVx2jN40
         batchoperationsdemo,201103190002HQ%7Ethumb.jpg,z.2sVRh0myqVi0BuIrngWlsRPQdb7qOS
   ```

1. Download the results, save them into a CSV format, and upload them to Amazon S3 as your list of objects for the S3 Batch Operations job.

1. If you have multiple manifest files, run **Query with S3 Select** on those also. Depending on the size of the results, you can combine the lists and run a single S3 Batch Operations job, or run each list as a separate job. To decide how many jobs to run, consider the [price](https://aws.amazon.com/s3/pricing/) of running each S3 Batch Operations job.
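
The S3 Select query in this procedure can also be reproduced locally, which is handy for spot-checking results. The following sketch applies the same filter (columns 1–3 where column 6 is `DISABLED`) with Python's `csv` module; the sample rows are hypothetical.

```python
import csv
import io

# Sample inventory rows with the schema:
# Bucket, Key, VersionId, IsLatest, IsDeleteMarker, BucketKeyStatus
rows = io.StringIO(
    "batchoperationsdemo,0100059%7Ethumb.jpg,lsrtIxksLu0R0ZkYPL,true,false,DISABLED\n"
    "batchoperationsdemo,0100074%7Ethumb.jpg,sd2M60g6Fdazoi6D5k,true,false,ENABLED\n"
)

# Equivalent of:
#   select s._1, s._2, s._3 from s3object s where s._6 = 'DISABLED'
filtered = [row[:3] for row in csv.reader(rows) if row[5] == "DISABLED"]
```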

## Step 3: Set up and run your S3 Batch Operations job


Now that you have your filtered CSV lists of S3 objects, you can begin the S3 Batch Operations job to encrypt the objects with S3 Bucket Keys enabled.

A *job* refers collectively to the list (manifest) of objects provided, the operation performed, and the specified parameters. The easiest way to encrypt this set of objects with S3 Bucket Keys enabled is by using the **Copy** operation and specifying the same destination prefix as the objects listed in the manifest. In an unversioned bucket, this operation overwrites the existing objects. In a bucket with versioning turned on, this operation creates a newer, encrypted version of the objects.

As part of copying the objects, specify that Amazon S3 should encrypt the objects with SSE-KMS encryption. This job copies the objects, so all of your objects will show an updated creation date upon completion, regardless of when you originally added them to Amazon S3. Also specify the other properties for your set of objects as part of the S3 Batch Operations job, including object tags and storage class.
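If you later script this job instead of using the console, the same choices map onto the `CreateJob` request (for example, boto3's `s3control.create_job`). The following sketch builds that request body under stated assumptions: every ARN, account ID, and priority value is a placeholder, and `build_copy_job_request` is a hypothetical helper, not an AWS API.

```python
def build_copy_job_request(account_id, role_arn, manifest_arn, manifest_etag,
                           destination_arn, kms_key_arn, report_arn):
    """Assemble a CreateJob request for an SSE-KMS Copy job (sketch)."""
    return {
        "AccountId": account_id,
        "ConfirmationRequired": True,  # job waits in "Awaiting your confirmation"
        "RoleArn": role_arn,           # the Batch Operations IAM role
        "Priority": 10,
        "Operation": {
            "S3PutObjectCopy": {
                "TargetResource": destination_arn,  # destination bucket ARN
                "SSEAwsKmsKeyId": kms_key_arn,      # SSE-KMS key to apply
                "BucketKeyEnabled": True,           # apply an S3 Bucket Key
            }
        },
        "Manifest": {
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        "Report": {
            "Bucket": report_arn,
            "Format": "Report_CSV_20180820",
            "Enabled": True,
            "Prefix": "batch-op-reports",
            "ReportScope": "FailedTasksOnly",
        },
    }
```

Copying to the same keys that the manifest lists (the same destination prefix) is what makes this job an in-place re-encryption rather than a move.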

**Topics**
+ [Set up your IAM policy](#bucket-key-ex-set-up-iam-policy)
+ [Set up your Batch Operations IAM role](#bucket-key-ex-set-up-iam-role)
+ [Enable S3 Bucket Keys for an existing bucket](#bucket-key-ex-enable-s3-bucket-key-on-a-bucket)
+ [Create your Batch Operations job](#bucket-key-ex-create-job)
+ [Run your Batch Operations job](#bucket-key-ex-run-job)

### Set up your IAM policy


1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Policies**, and then choose **Create policy**.

1. Choose the **JSON** tab, and add the example IAM policy that appears in the following code block.

   After copying the policy example into your [IAM Console](https://console.aws.amazon.com/iam/), replace the following:

   1. Replace `amzn-s3-demo-source-bucket` with the name of your source bucket to copy objects from.

   1. Replace `amzn-s3-demo-destination-bucket` with the name of your destination bucket to copy objects to.

   1. Replace `amzn-s3-demo-manifest-bucket/manifest-key` with the name of your manifest object.

   1. Replace `amzn-s3-demo-completion-report-bucket` with the name of the bucket where you want to save your completion reports.

------
#### [ JSON ]

   ```
     {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
         {
           "Sid": "CopyObjectsToEncrypt",
           "Effect": "Allow",
           "Action": [
             "s3:PutObject",
             "s3:PutObjectTagging",
             "s3:PutObjectAcl",
             "s3:PutObjectVersionTagging",
             "s3:PutObjectVersionAcl",
             "s3:GetObject",
             "s3:GetObjectAcl",
             "s3:GetObjectTagging",
             "s3:GetObjectVersion",
             "s3:GetObjectVersionAcl",
             "s3:GetObjectVersionTagging"
           ],
           "Resource": [
             "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
             "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
           ]
         },
         {
           "Sid": "ReadManifest",
           "Effect": "Allow",
           "Action": [
             "s3:GetObject",
             "s3:GetObjectVersion"
           ],
           "Resource": "arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest-key"
         },
         {
           "Sid": "WriteReport",
           "Effect": "Allow",
           "Action": [
             "s3:PutObject"
           ],
           "Resource": "arn:aws:s3:::amzn-s3-demo-completion-report-bucket/*"
         }
       ]
     }
   ```

------
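Substituting the placeholder bucket names by hand is error-prone. As a convenience, you could script the substitution and validate the result before pasting it into the console; `fill_policy` below is a hypothetical helper, not an AWS tool.

```python
import json

def fill_policy(policy_text, replacements):
    """Replace documentation placeholders in a policy document.

    Returns the parsed policy, so json.loads also confirms the
    result is well-formed JSON before you paste it into the console.
    """
    for placeholder, value in replacements.items():
        policy_text = policy_text.replace(placeholder, value)
    return json.loads(policy_text)
```

For example, passing `{"amzn-s3-demo-source-bucket": "my-photos-bucket"}` rewrites every source-bucket ARN in one pass.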

1. Choose **Next: Tags**.

1. Add any tags that you want (optional), and choose **Next: Review**.

1. Add a policy name, optionally add a description, and choose **Create policy**.

1. With your S3 Batch Operations policy now complete, the console returns you to the IAM **Policies** page. Filter on the policy name, choose the button to the left of the policy name, choose **Policy actions**, and choose **Attach**. 

   To attach the newly created policy to an IAM role, select the appropriate users, groups, or roles in your account and choose **Attach policy**. This takes you back to the IAM console.

### Set up your Batch Operations IAM role


1. On the [IAM Console](https://console.aws.amazon.com/iam/), in the navigation pane, choose **Roles**, and then choose **Create role**.

1. Choose **AWS service**, **S3**, and **S3 Batch Operations**. Then choose **Next: Permissions**.

1. Start entering the name of the IAM **policy** that you just created. Select the check box by the policy name when it appears, and choose **Next: Tags**.

1. (Optional) Add tags or keep the key and value fields blank for this exercise. Choose **Next: Review**.

1. Enter a role name, and accept the default description or add your own. Choose **Create role**.

1. Ensure that the user creating the job has the permissions in the following example. 

   Replace `account-id` with your AWS account ID and `IAM-role-name` with the name of the Batch Operations IAM role that you created in the previous steps. For more information, see [Granting permissions for Batch Operations](batch-ops-iam-role-policies.md).

   ```
   {
     "Sid": "AddIamPermissions",
     "Effect": "Allow",
     "Action": [
       "iam:GetRole",
       "iam:PassRole"
     ],
     "Resource": "arn:aws:iam::account-id:role/IAM-role-name"
   }
   ```
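Choosing **AWS service**, **S3**, and **S3 Batch Operations** in step 2 also attaches a trust policy to the role so that the S3 Batch Operations service can assume it. For reference, that trust relationship looks like the following (shown here as a Python dict you could serialize; the structure is the standard one, with `batchoperations.s3.amazonaws.com` as the service principal).

```python
import json

# Trust policy that lets S3 Batch Operations assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "batchoperations.s3.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

You only need to create this trust policy yourself if you build the role outside the console (for example, with the AWS CLI or CloudFormation).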

### Enable S3 Bucket Keys for an existing bucket


1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the bucket that you want to turn on an S3 Bucket Key for.

1. Choose **Properties**.

1. Under **Default encryption**, choose **Edit**.

1. Under **Encryption type**, you can choose between **Amazon S3 managed keys (SSE-S3)** and **AWS Key Management Service key (SSE-KMS)**. 

1. If you chose **AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, you can specify the AWS KMS key through one of the following options.
   + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**. From the list of available keys, choose a symmetric encryption KMS key in the same Region as your bucket. Both the AWS managed key (`aws/s3`) and your customer managed keys appear in the list.
   + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and then enter your KMS key ARN in the field that appears.
   + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

1. Under **Bucket Key**, choose **Enable**, and then choose **Save changes**.

Now that an S3 Bucket Key is enabled at the bucket level, objects that are uploaded, modified, or copied into this bucket will inherit this encryption configuration by default. This includes objects that are copied by using Amazon S3 Batch Operations.
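The console steps above can equivalently be expressed as a `PutBucketEncryption` configuration (for example, the request body for boto3's `s3.put_bucket_encryption`). The KMS key ARN below is a placeholder; substitute your own key.

```python
# Default-encryption configuration with an S3 Bucket Key enabled (sketch).
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",  # SSE-KMS
                # Placeholder ARN; replace with your symmetric KMS key.
                "KMSMasterKeyID": "arn:aws:kms:us-west-2:111122223333:key/key-id",
            },
            "BucketKeyEnabled": True,  # turn on the S3 Bucket Key
        }
    ]
}
```

Applying this configuration with `s3.put_bucket_encryption(Bucket="amzn-s3-demo-destination-bucket", ServerSideEncryptionConfiguration=encryption_config)` has the same effect as the console edits above.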

### Create your Batch Operations job


1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Batch Operations**, and then choose **Create Job**.

1. Choose the **Region** where you store your objects, and choose **CSV** as the manifest type.

1. Enter the path or navigate to the CSV manifest file that you created earlier from S3 Select (or Athena) results. If your manifest contains version IDs, select that box. Choose **Next**.

1. Choose the **Copy** operation, and choose the copy destination bucket. You can keep server-side encryption disabled. As long as the bucket destination has S3 Bucket Keys enabled, the copy operation applies S3 Bucket Keys at the destination bucket.

1. (Optional) Choose a storage class and the other parameters as desired. The parameters that you specify in this step apply to all operations performed on the objects that are listed in the manifest. Choose **Next**.

1. To configure server-side encryption, follow these steps: 

   1. Under **Server-side encryption**, choose one of the following:
      + To keep the bucket settings for default server-side encryption of objects when storing them in Amazon S3, choose **Do not specify an encryption key**. As long as the bucket destination has S3 Bucket Keys enabled, the copy operation applies an S3 Bucket Key at the destination bucket.
**Note**  
If the bucket policy for the specified destination requires objects to be encrypted before storing them in Amazon S3, you must specify an encryption key. Otherwise, copying objects to the destination will fail.
      + To encrypt objects before storing them in Amazon S3, choose **Specify an encryption key**.

   1. Under **Encryption settings**, if you choose **Specify an encryption key**, you must choose either **Use destination bucket settings for default encryption** or **Override destination bucket settings for default encryption**.

   1. If you choose **Override destination bucket settings for default encryption**, you must configure the following encryption settings.

      1. Under **Encryption type**, you must choose either **Amazon S3 managed keys (SSE-S3)** or **AWS Key Management Service key (SSE-KMS)**. SSE-S3 uses one of the strongest block ciphers—256-bit Advanced Encryption Standard (AES-256) to encrypt each object. SSE-KMS provides you with more control over your key. For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md) and [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).

      1. If you choose **AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, you can specify your AWS KMS key through one of the following options.
         + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and then choose a symmetric encryption KMS key in the same Region as your bucket. Both the AWS managed key (`aws/s3`) and your customer managed keys appear in the list.
         + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears.
         + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

      1. Under **Bucket Key**, choose **Enable**. The copy operation applies an S3 Bucket Key at the destination bucket.

1. Give your job a description (or keep the default), set its priority level, choose a report type, and specify the **Path to completion report destination**.

1. In the **Permissions** section, be sure to choose the Batch Operations IAM role that you defined earlier. Choose **Next**.

1. Under **Review**, verify the settings. If you want to make changes, choose **Previous**. After confirming the Batch Operations settings, choose **Create job**. 

   For more information, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).

### Run your Batch Operations job


The setup wizard automatically returns you to the S3 Batch Operations section of the Amazon S3 console. Your new job transitions from the **New** state to the **Preparing** state as S3 begins the process. During the Preparing state, S3 reads the job’s manifest, checks it for errors, and calculates the number of objects.

1. Choose the refresh button in the Amazon S3 console to check progress. Depending on the size of the manifest, reading can take minutes or hours.

1. After S3 finishes reading the job’s manifest, the job moves to the **Awaiting your confirmation** state. Choose the option button to the left of the Job ID, and choose **Run job**.

1. Check the settings for the job, and choose **Run job** in the bottom-right corner.

   After the job begins running, you can choose the refresh button to check progress through the console dashboard view or by selecting the specific job.

1. When the job is complete, you can view the **Successful** and **Failed** object counts to confirm that everything performed as expected. If you enabled job reports, check your job report for the exact cause of any failed operations.

   You can also perform these steps by using the AWS CLI, AWS SDKs, or Amazon S3 REST API. For more information about tracking job status and completion reports, see [Tracking job status and completion reports](batch-ops-job-status.md).
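If you track job status with the CLI or SDKs (for example, `DescribeJob`), a small helper can decide when to stop polling. The status strings below are the ones S3 Batch Operations reports; treating `Complete`, `Failed`, and `Cancelled` as terminal is a reasonable sketch, not an official API.

```python
# States after which an S3 Batch Operations job no longer changes.
TERMINAL_STATUSES = {"Complete", "Failed", "Cancelled"}

def is_finished(status: str) -> bool:
    """Return True when a polled job status is terminal."""
    return status in TERMINAL_STATUSES
```

In a polling loop, you would sleep between `DescribeJob` calls and exit once `is_finished` returns `True`, then read the completion report for any failures.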

For examples that show the copy operation with tags using the AWS CLI and AWS SDK for Java, see [Creating a Batch Operations job with job tags used for labeling](batch-ops-tags-create.md).