

# Operations supported by S3 Batch Operations
<a name="batch-ops-operations"></a>

You can use S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. S3 Batch Operations can perform a single operation on lists of Amazon S3 objects that you specify. A single job can perform a specified operation on billions of objects containing exabytes of data. Amazon S3 tracks progress, sends notifications, and stores a detailed completion report of all actions, providing a fully managed, auditable, and serverless experience. You can use S3 Batch Operations through the Amazon S3 console, AWS CLI, AWS SDKs, or Amazon S3 REST API.

S3 Batch Operations supports the following operations:

# Copy objects
<a name="batch-ops-copy-object"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The Batch Operations **Copy** operation copies each object that is specified in the manifest. You can copy objects to a bucket in the same AWS Region or to a bucket in a different Region. S3 Batch Operations supports most options available through Amazon S3 for copying objects. These options include setting object metadata, setting permissions, and changing an object's storage class. 

You can also use the **Copy** operation to copy existing unencrypted objects and write them back to the same bucket as encrypted objects. For more information, see [Encrypting objects with Amazon S3 Batch Operations](https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/).

When you copy objects, you can change the checksum algorithm used to calculate the checksum of the object. If objects don't have an additional checksum calculated, you can also add one by specifying the checksum algorithm for Amazon S3 to use. For more information, see [Checking object integrity in Amazon S3](checking-object-integrity.md).

For more information about copying objects in Amazon S3 and the required and optional parameters, see [Copying, moving, and renaming objects](copy-object.md) in this guide and [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.

## Restrictions and limitations
<a name="batch-ops-copy-object-restrictions"></a>

When you're using the Batch Operations **Copy** operation, the following restrictions and limitations apply:
+ All source objects must be in one bucket.
+ All destination objects must be in one bucket.
+ You must have read permissions for the source bucket and write permissions for the destination bucket.
+ Objects to be copied can be up to 5 GB in size.
+ If you try to copy objects from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive classes to the S3 Standard storage class, you must first restore these objects. For more information, see [Restoring an archived object](restoring-objects.md).
+ You must create your Batch Operations **Copy** jobs in the destination Region, which is the Region that you intend to copy the objects to.
+ All `CopyObject` options are supported except for conditional checks on entity tags (ETags) and server-side encryption with customer-provided encryption keys (SSE-C).
+ If the destination bucket is unversioned, you will overwrite any objects that have the same key names.
+ Objects aren't necessarily copied in the same order as they appear in the manifest. For versioned buckets, if preserving the current or noncurrent version order is important, copy all noncurrent versions first. Then, after the first job is complete, copy the current versions in a subsequent job. 
+ Copying objects to the Reduced Redundancy Storage (RRS) class isn't supported.
+ A single Batch Operations Copy job can support a manifest with up to 20 billion objects.
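A **Copy** job that satisfies these restrictions can be sketched with the AWS SDK for Python (boto3). This is a minimal sketch, not a definitive implementation: the account ID, role ARN, bucket ARNs, and manifest ETag below are hypothetical placeholders, and the actual `create_job` call is commented out so the sketch runs without AWS credentials.

```python
# Sketch: building the request for an S3 Batch Operations Copy job.
# All names, ARNs, and the ETag are hypothetical placeholders.

def build_copy_job_request(account_id, role_arn, manifest_arn, manifest_etag,
                           destination_bucket_arn, report_bucket_arn):
    """Return the keyword arguments for s3control.create_job()."""
    return {
        "AccountId": account_id,
        "ConfirmationRequired": True,
        "RoleArn": role_arn,
        "Priority": 1,
        "Operation": {
            "S3PutObjectCopy": {
                # Destination bucket; the job must be created in this
                # bucket's Region.
                "TargetResource": destination_bucket_arn,
            }
        },
        "Manifest": {
            "Spec": {
                # Plain CSV manifest listing bucket and key per row.
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {
                "ObjectArn": manifest_arn,
                "ETag": manifest_etag,
            },
        },
        "Report": {
            "Bucket": report_bucket_arn,
            "Format": "Report_CSV_20180820",
            "Enabled": True,
            "Prefix": "batch-op-reports",
            "ReportScope": "AllTasks",
        },
    }

request = build_copy_job_request(
    account_id="111122223333",
    role_arn="arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY",
    manifest_arn="arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest.csv",
    manifest_etag="60e460c9d1046e73f7dde5043ac3ae85",
    destination_bucket_arn="arn:aws:s3:::amzn-s3-demo-destination-bucket",
    report_bucket_arn="arn:aws:s3:::amzn-s3-demo-destination-bucket",
)
# With credentials configured, the job would be submitted with:
# import boto3
# boto3.client("s3control").create_job(**request)
```

The `ETag` value must match the manifest object exactly; if the manifest is re-uploaded, fetch the new ETag before creating the job.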

# Copying objects using S3 Batch Operations
<a name="batch-ops-examples-copy"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use S3 Batch Operations to create a **Copy** (`CopyObject`) job to copy objects within the same account or to a different destination account. 

The following examples show how to store and use a manifest that is in a different account. The first example shows how you can use Amazon S3 Inventory to deliver the inventory report to the destination account for use during job creation. The second example shows how to use a comma-separated values (CSV) manifest in the source or destination account. The third example shows how to use the **Copy** operation to enable S3 Bucket Keys for existing objects that have been encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).

**Topics**
+ [Using an inventory report to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-inventory.md)
+ [Using a CSV manifest to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-csv.md)
+ [Using Batch Operations to enable S3 Bucket Keys for SSE-KMS](batch-ops-copy-example-bucket-key.md)

# Using an inventory report to copy objects across AWS accounts
<a name="specify-batchjob-manifest-xaccount-inventory"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use S3 Batch Operations to create a **Copy** (`CopyObject`) job to copy objects within the same account or to a different destination account.

You can use Amazon S3 Inventory to create an inventory report and use the report to create a list (manifest) of objects to copy with S3 Batch Operations. For more information about using a CSV manifest in the source or destination account, see [Using a CSV manifest to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-csv.md).

Amazon S3 Inventory generates inventories of the objects in a bucket. The resulting list is published to an output file. The bucket that is inventoried is called the source bucket, and the bucket where the inventory report file is stored is called the destination bucket. 

The Amazon S3 Inventory report can be configured to be delivered to another AWS account. Doing so allows S3 Batch Operations to read the inventory report when the job is created in the destination account.

For more information about Amazon S3 Inventory source and destination buckets, see [Source and destination buckets](storage-inventory.md#storage-inventory-buckets).

The easiest way to set up an inventory is by using the Amazon S3 console, but you can also use the Amazon S3 REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs.

The following console procedure contains the high-level steps for setting up permissions for an S3 Batch Operations job. In this procedure, you copy objects from a source account to a destination account, with the inventory report stored in the destination account.

**To set up Amazon S3 Inventory for source and destination buckets owned by different accounts**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. Decide on (or create) a destination manifest bucket to store the inventory report in. In this procedure, the *destination account* is the account that owns both the destination manifest bucket and the bucket that the objects are copied to.

1. Configure an inventory report for a source bucket. For information about how to use the console to configure an inventory or how to encrypt an inventory list file, see [Configuring Amazon S3 Inventory](configure-inventory.md). 

   When you configure the inventory report, you specify the destination bucket where you want the list to be stored. The inventory report for the source bucket is published to the destination bucket. In this procedure, the *source account* is the account that owns the source bucket.

   Choose **CSV** for the output format.

   When you enter information for the destination bucket, choose **Buckets in another account**. Then enter the name of the destination manifest bucket. Optionally, you can enter the account ID of the destination account.

   After the inventory configuration is saved, the console displays a message similar to the following: 

   Amazon S3 could not create a bucket policy on the destination bucket. Ask the destination bucket owner to add the following bucket policy to allow Amazon S3 to place data in that bucket.

   The console then displays a bucket policy that you can use for the destination bucket.

1. Copy the destination bucket policy that appears on the console.

1. In the destination account, add the copied bucket policy to the destination manifest bucket where the inventory report is stored.

1. Create a role in the destination account that is based on the S3 Batch Operations trust policy. For more information about this trust policy, see [Trust policy](batch-ops-iam-role-policies.md#batch-ops-iam-role-policies-trust).

   For more information about creating a role, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*.

   Enter a name for the role (the following example role uses the name *`BatchOperationsDestinationRoleCOPY`*). Choose the **S3** service, and then choose the **S3 Batch Operations** use case, which applies the trust policy to the role. 

   Then choose **Create policy** to attach the following policy to the role. To use this policy, replace the *`user input placeholders`* with your own information. 

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsDestinationObjectCOPY",
         "Effect": "Allow",
         "Action": [
           "s3:PutObject",
           "s3:PutObjectVersionAcl",
           "s3:PutObjectAcl",
           "s3:PutObjectVersionTagging",
           "s3:PutObjectTagging",
           "s3:GetObject",
           "s3:GetObjectVersion",
           "s3:GetObjectAcl",
           "s3:GetObjectTagging",
           "s3:GetObjectVersionAcl",
           "s3:GetObjectVersionTagging"
         ],
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
         ]
       }
     ]
   }
   ```

   The role uses the policy to grant `batchoperations.s3.amazonaws.com` permission to read the manifest in the destination bucket. It also grants permissions to `GET` objects, access control lists (ACLs), tags, and versions in the source object bucket. And it grants permissions to `PUT` objects, ACLs, tags, and versions into the destination object bucket.

1. In the source account, create a bucket policy for the source bucket that grants the role that you created in the previous step permissions to `GET` objects, ACLs, tags, and versions in the source bucket. This step allows S3 Batch Operations to get objects from the source bucket through the trusted role.

   The following is an example of the bucket policy for the source account. To use this policy, replace the *`user input placeholders`* with your own information.

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsSourceObjectCOPY",
         "Effect": "Allow",
         "Principal": {
           "AWS": "arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY"
         },
         "Action": [
           "s3:GetObject",
           "s3:GetObjectVersion",
           "s3:GetObjectAcl",
           "s3:GetObjectTagging",
           "s3:GetObjectVersionAcl",
           "s3:GetObjectVersionTagging"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
       }
     ]
   }
   ```

1. After the inventory report is available, create an S3 Batch Operations **Copy** (`CopyObject`) job in the destination account, and choose the inventory report from the destination manifest bucket. You need the ARN for the IAM role that you created in the destination account.

   For information about creating a job, including by using the console, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).
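When the job's manifest is an S3 Inventory report, the manifest specification points at the report's `manifest.json` file rather than at a plain CSV list. The following is a hedged sketch of that manifest element as it might appear in a `create_job` request; the object ARN, path, and ETag are hypothetical placeholders.

```python
# Sketch: manifest specification for a Batch Operations job driven by an
# S3 Inventory report. The ARN path and ETag are hypothetical placeholders;
# the ETag must match the manifest.json object exactly.
inventory_manifest = {
    "Spec": {
        # Inventory-report manifests use this format instead of the plain
        # CSV manifest format, and they don't take a "Fields" list.
        "Format": "S3InventoryReport_CSV_20161130",
    },
    "Location": {
        "ObjectArn": ("arn:aws:s3:::amzn-s3-demo-manifest-bucket/"
                      "demoinv/amzn-s3-demo-source-bucket/DemoInventory/"
                      "2021-05-22T00-00Z/manifest.json"),
        "ETag": "69f52a4e9f797e987155d9c8f5880897",
    },
}
```

This dictionary would replace the `Manifest` element of a CSV-based job request; the rest of the request (operation, role, report) is unchanged.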

# Using a CSV manifest to copy objects across AWS accounts
<a name="specify-batchjob-manifest-xaccount-csv"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use S3 Batch Operations to create a **Copy** (`CopyObject`) job to copy objects within the same account or to a different destination account.

You can use a CSV manifest that's stored in the source account to copy objects across AWS accounts with S3 Batch Operations. To use an S3 Inventory report as a manifest, see [Using an inventory report to copy objects across AWS accounts](specify-batchjob-manifest-xaccount-inventory.md).

For an example of the CSV format for manifest files, see [Creating a manifest file](batch-ops-create-job.md#create-manifest-file).

The following procedure shows how to set up permissions when using an S3 Batch Operations job to copy objects from a source account to a destination account with a CSV manifest file that's stored in the source account.

**To use a CSV manifest to copy objects across AWS accounts**

1. Create an AWS Identity and Access Management (IAM) role in the destination account that's based on the S3 Batch Operations trust policy. In this procedure, the *destination account* is the account that the objects are being copied to.

   For more information about the trust policy, see [Trust policy](batch-ops-iam-role-policies.md#batch-ops-iam-role-policies-trust).

   For more information about creating a role, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*.

   If you create the role by using the console, enter a name for the role (the following example role uses the name `BatchOperationsDestinationRoleCOPY`). Choose the **S3** service, and then choose the **S3 Batch Operations** use case, which applies the trust policy to the role.

   Then choose **Create policy** to attach the following policy to the role. To use this policy, replace the *`user input placeholders`* with your own information.

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsDestinationObjectCOPY",
         "Effect": "Allow",
         "Action": [
           "s3:PutObject",
           "s3:PutObjectVersionAcl",
           "s3:PutObjectAcl",
           "s3:PutObjectVersionTagging",
           "s3:PutObjectTagging",
           "s3:GetObject",
           "s3:GetObjectVersion",
           "s3:GetObjectAcl",
           "s3:GetObjectTagging",
           "s3:GetObjectVersionAcl",
           "s3:GetObjectVersionTagging"
         ],
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
           "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
         ]
       }
     ]
   }
   ```

   Using the policy, the role grants `batchoperations.s3.amazonaws.com` permission to read the manifest in the source manifest bucket. It grants permissions to `GET` objects, access control lists (ACLs), tags, and versions in the source object bucket. It also grants permissions to `PUT` objects, ACLs, tags, and versions into the destination object bucket.

1. In the source account, create a bucket policy for the bucket that contains the manifest to grant the role that you created in the previous step permissions to `GET` objects and versions in the source manifest bucket.

   This step allows S3 Batch Operations to read the manifest by using the trusted role. Apply the bucket policy to the bucket that contains the manifest.

   The following is an example of the bucket policy to apply to the source manifest bucket. To use this policy, replace the *`user input placeholders`* with your own information.

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsSourceManifestRead",
         "Effect": "Allow",
         "Principal": {
           "AWS": [
             "arn:aws:iam::111122223333:user/ConsoleUserCreatingJob",
             "arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY"
           ]
         },
         "Action": [
           "s3:GetObject",
           "s3:GetObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
       }
     ]
   }
   ```

   This policy also grants a console user who creates the job in the destination account the same read permissions in the source manifest bucket.

1. In the source account, create a bucket policy for the source bucket that grants the role that you created permissions to `GET` objects, ACLs, tags, and versions in the source object bucket. S3 Batch Operations can then get objects from the source bucket through the trusted role.

   The following is an example of the bucket policy for the bucket that contains the source objects. To use this policy, replace the *`user input placeholders`* with your own information.

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowBatchOperationsSourceObjectCOPY",
         "Effect": "Allow",
         "Principal": {
           "AWS": "arn:aws:iam::111122223333:role/BatchOperationsDestinationRoleCOPY"
         },
         "Action": [
           "s3:GetObject",
           "s3:GetObjectVersion",
           "s3:GetObjectAcl",
           "s3:GetObjectTagging",
           "s3:GetObjectVersionAcl",
           "s3:GetObjectVersionTagging"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
       }
     ]
   }
   ```

1. Create an S3 Batch Operations job in the destination account. You need the Amazon Resource Name (ARN) for the role that you created in the destination account. For more information about creating a job, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).

# Using Batch Operations to enable S3 Bucket Keys for SSE-KMS
<a name="batch-ops-copy-example-bucket-key"></a>

S3 Bucket Keys reduce the cost of server-side encryption with AWS Key Management Service (AWS KMS) (SSE-KMS) by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md) and [Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects](configuring-bucket-key.md). When you perform a `CopyObject` operation by using the REST API, AWS SDKs, or AWS CLI, you can enable or disable an S3 Bucket Key at the object level by adding the `x-amz-server-side-encryption-bucket-key-enabled` request header with a `true` or `false` value. 

When you configure an S3 Bucket Key for an object by using a `CopyObject` operation, Amazon S3 updates only the settings for that object. The S3 Bucket Key settings for the destination bucket don't change. If you submit a `CopyObject` request for an AWS KMS-encrypted object to a bucket that has S3 Bucket Keys enabled, your object-level operation automatically uses S3 Bucket Keys unless you disable them in the request header. If you don't specify an S3 Bucket Key for your object, Amazon S3 applies the destination bucket's S3 Bucket Key settings to the object.
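In boto3, the `x-amz-server-side-encryption-bucket-key-enabled` header maps to the `BucketKeyEnabled` parameter of `copy_object`. The following is a minimal sketch of that object-level request; the bucket names, key, and KMS key ARN are hypothetical placeholders, and the call itself is commented out so the sketch runs without credentials.

```python
# Sketch: enabling an S3 Bucket Key on a single object during CopyObject.
# Bucket names, the object key, and the KMS key ARN are hypothetical.
copy_params = {
    "Bucket": "amzn-s3-demo-destination-bucket",
    "Key": "example-object",
    "CopySource": {"Bucket": "amzn-s3-demo-source-bucket",
                   "Key": "example-object"},
    "ServerSideEncryption": "aws:kms",   # encrypt the copy with SSE-KMS
    "SSEKMSKeyId": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID",
    "BucketKeyEnabled": True,            # object-level S3 Bucket Key
}
# With credentials configured:
# import boto3
# boto3.client("s3").copy_object(**copy_params)
```

Setting `BucketKeyEnabled=False` instead would opt this object out of S3 Bucket Keys even if the destination bucket has them enabled.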

To encrypt your existing Amazon S3 objects, you can use S3 Batch Operations. You can use the Batch Operations **Copy** operation to copy existing unencrypted objects and write them back to the same bucket as encrypted objects. For more information, see [Encrypting objects with Amazon S3 Batch Operations](https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/) on the AWS Storage Blog.

In the following example, you use the Batch Operations **Copy** operation to enable S3 Bucket Keys on existing objects. For more information, see [Configuring an S3 Bucket Key at the object level](configuring-bucket-key-object.md).

**Topics**
+ [Considerations for using S3 Batch Operations to encrypt objects with S3 Bucket Keys enabled](#bucket-key-ex-things-to-note)
+ [Prerequisites](#bucket-key-ex-prerequisites)
+ [Step 1: Get your list of objects using Amazon S3 Inventory](#bucket-key-ex-get-list-of-objects)
+ [Step 2: Filter your object list with S3 Select](#bucket-key-ex-filter-object-list-with-s3-select)
+ [Step 3: Set up and run your S3 Batch Operations job](#bucket-key-ex-setup-and-run-job)

## Considerations for using S3 Batch Operations to encrypt objects with S3 Bucket Keys enabled
<a name="bucket-key-ex-things-to-note"></a>

Consider the following issues when you use S3 Batch Operations to encrypt objects with S3 Bucket Keys enabled:
+ You are charged for S3 Batch Operations jobs, objects, and requests, in addition to any charges associated with the operation that S3 Batch Operations performs on your behalf, such as data transfer and request charges. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).
+ If you use a versioned bucket, each S3 Batch Operations job performed creates new encrypted versions of your objects. It also maintains the previous versions without an S3 Bucket Key configured. To delete the old versions, set up an S3 Lifecycle expiration policy for noncurrent versions as described in [Lifecycle configuration elements](intro-lifecycle-rules.md).
+ The copy operation creates new objects with new creation dates, which can affect lifecycle actions like archiving. If you copy all objects in your bucket, all the new copies have identical or similar creation dates. To further identify these objects and create different lifecycle rules for various data subsets, consider using object tags. 

## Prerequisites
<a name="bucket-key-ex-prerequisites"></a>

Before you configure your objects to use an S3 Bucket Key, review [Changes to note before enabling an S3 Bucket Key](bucket-key.md#bucket-key-changes). 

To use this example, you must have an AWS account and at least one S3 bucket to hold your working files and encrypted results. You might also find much of the existing S3 Batch Operations documentation useful, including the following topics:
+ [S3 Batch Operations basics](batch-ops.md#batch-ops-basics)
+ [Creating an S3 Batch Operations job](batch-ops-create-job.md)
+ [Operations supported by S3 Batch Operations](batch-ops-operations.md)
+ [Managing S3 Batch Operations jobs](batch-ops-managing-jobs.md)

## Step 1: Get your list of objects using Amazon S3 Inventory
<a name="bucket-key-ex-get-list-of-objects"></a>

To get started, identify the S3 bucket that contains the objects to encrypt, and get a list of its contents. An Amazon S3 Inventory report is the most convenient and affordable way to do this. The report provides the list of the objects in a bucket along with their associated metadata. In this step, the source bucket is the inventoried bucket, and the destination bucket is the bucket where you store the inventory report file. For more information about Amazon S3 Inventory source and destination buckets, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

The easiest way to set up an inventory is by using the AWS Management Console. But you can also use the REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs. Before following these steps, be sure to sign in to the console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/). If you encounter permission denied errors, add a bucket policy to your destination bucket. For more information, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1).

**To get a list of objects using S3 Inventory**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**, and choose a bucket that contains objects to encrypt.

1. On the **Management** tab, navigate to the **Inventory configurations** section, and choose **Create inventory configuration**.

1. Give your new inventory a name, enter the name of the destination S3 bucket, and optionally create a destination prefix for Amazon S3 to assign objects in that bucket.

1. For **Output format**, choose **CSV**.

1. (Optional) In the **Additional fields – *optional*** section, choose **Encryption** and any other report fields that interest you. Set the frequency for report deliveries to **Daily** so that the first report is delivered to your bucket sooner.

1. Choose **Create** to save your configuration.

Amazon S3 can take up to 48 hours to deliver the first report, so check back when the first report arrives. After you receive your first report, proceed to the next step to filter your S3 Inventory report's contents. If you no longer want to receive inventory reports for this bucket, delete your S3 Inventory configuration. Otherwise, Amazon S3 continues to deliver reports on a daily or weekly schedule.

An inventory list isn't a single point-in-time view of all objects. Inventory lists are a rolling snapshot of bucket items, which are eventually consistent (for example, the list might not include recently added or deleted objects). Combining S3 Inventory and S3 Batch Operations works best when you work with static objects, or with an object set that you created two or more days ago. To work with more recent data, use the [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) (`GET` Bucket) API operation to build your list of objects manually. If needed, repeat the process for the next few days or until your inventory report shows the desired status for all objects.
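The console configuration above can also be expressed programmatically. The following is a hedged sketch of an equivalent inventory configuration in the shape that boto3's `put_bucket_inventory_configuration` expects; the bucket names, account ID, configuration ID, and prefix are hypothetical placeholders, and the call is commented out so the sketch runs without credentials.

```python
# Sketch: an inventory configuration matching the console steps above
# (CSV output, daily delivery, BucketKeyStatus included). Bucket names,
# account ID, the configuration ID, and the prefix are hypothetical.
inventory_config = {
    "Id": "BucketKeyInventory",
    "IsEnabled": True,
    "IncludedObjectVersions": "Current",    # or "All" for versioned buckets
    "Schedule": {"Frequency": "Daily"},     # first report arrives sooner
    "OptionalFields": ["BucketKeyStatus"],  # needed to filter in Step 2
    "Destination": {
        "S3BucketDestination": {
            "AccountId": "111122223333",
            "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
            "Format": "CSV",
            "Prefix": "demoinv",
        }
    },
}
# With credentials configured:
# import boto3
# boto3.client("s3").put_bucket_inventory_configuration(
#     Bucket="amzn-s3-demo-source-bucket",
#     Id=inventory_config["Id"],
#     InventoryConfiguration=inventory_config,
# )
```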

## Step 2: Filter your object list with S3 Select
<a name="bucket-key-ex-filter-object-list-with-s3-select"></a>

After you receive your S3 Inventory report, you can filter the report’s contents to list only the objects that aren't encrypted with S3 Bucket Keys enabled. If you want all your bucket’s objects encrypted with S3 Bucket Keys enabled, you can ignore this step. However, filtering your S3 Inventory report at this stage saves you the time and expense of re-encrypting objects that you previously encrypted with S3 Bucket Keys enabled.

Although the following steps show how to filter by using [Amazon S3 Select](https://aws.amazon.com/blogs/aws/s3-glacier-select/), you can also use [Amazon Athena](https://aws.amazon.com/athena). To decide which tool to use, look at your S3 Inventory report's `manifest.json` file. This file lists the number of data files that are associated with that report. If the number is large, use Amazon Athena because it runs across multiple S3 objects, whereas S3 Select works on one object at a time. For more information about using Amazon S3 and Athena together, see [Querying Amazon S3 Inventory with Amazon Athena](storage-inventory-athena-query.md) and "Using Athena" in the AWS Storage Blog post [Encrypting objects with Amazon S3 Batch Operations](https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations).

**To filter your S3 Inventory report by using S3 Select**

1. Open the `manifest.json` file from your inventory report and look at the `fileSchema` section of the JSON. The `fileSchema` determines the query that you run on the data.

   The following JSON is an example `manifest.json` file for a CSV-formatted inventory on a bucket with versioning enabled. Depending on how you configured your inventory report, your manifest might look different.

   ```
     {
       "sourceBucket": "batchoperationsdemo",
       "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
       "version": "2021-05-22",
       "creationTimestamp": "1558656000000",
       "fileFormat": "CSV",
       "fileSchema": "Bucket, Key, VersionId, IsLatest, IsDeleteMarker, BucketKeyStatus",
       "files": [
         {
           "key": "demoinv/batchoperationsdemo/DemoInventory/data/009a40e4-f053-4c16-8c75-6100f8892202.csv.gz",
           "size": 72691,
           "MD5checksum": "c24c831717a099f0ebe4a9d1c5d3935c"
         }
       ]
     }
   ```

   If versioning isn't activated on the bucket, or if you choose to run the report for the latest versions, the `fileSchema` is `Bucket`, `Key`, and `BucketKeyStatus`. 

   If versioning *is* activated, depending on how you set up the inventory report, the `fileSchema` might include the following: `Bucket`, `Key`, `VersionId`, `IsLatest`, `IsDeleteMarker`, `BucketKeyStatus`. So pay attention to columns 1, 2, 3, and 6 when you run your query. 

   S3 Batch Operations needs the bucket, key, and version ID as inputs to perform the job, in addition to the field to search by, which is `BucketKeyStatus`. The `VersionId` field isn't required, but specifying it helps when you operate on a versioned bucket. For more information, see [Working with objects in a versioning-enabled bucket](manage-objects-versioned-bucket.md).

1. Locate the data files for the inventory report. The `manifest.json` object lists the data files under **files**.

1. After you locate and select the data file in the S3 console, choose **Actions**, and then choose **Query with S3 Select**.

1. Keep the preset **CSV**, **Comma**, and **GZIP** fields selected, and choose **Next**.

1. To review your inventory report’s format before proceeding, choose **Show file preview**.

1. Enter the columns to reference in the SQL expression field, and choose **Run SQL**. The following expression returns columns 1–3 for all objects without an S3 Bucket Key configured.

   `select s._1, s._2, s._3 from s3object s where s._6 = 'DISABLED'`

   The following are example results.

   ```
         batchoperationsdemo,0100059%7Ethumb.jpg,lsrtIxksLu0R0ZkYPL.LhgD5caTYn6vu
         batchoperationsdemo,0100074%7Ethumb.jpg,sd2M60g6Fdazoi6D5kNARIE7KzUibmHR
         batchoperationsdemo,0100075%7Ethumb.jpg,TLYESLnl1mXD5c4BwiOIinqFrktddkoL
         batchoperationsdemo,0200147%7Ethumb.jpg,amufzfMi_fEw0Rs99rxR_HrDFlE.l3Y0
         batchoperationsdemo,0301420%7Ethumb.jpg,9qGU2SEscL.C.c_sK89trmXYIwooABSh
         batchoperationsdemo,0401524%7Ethumb.jpg,ORnEWNuB1QhHrrYAGFsZhbyvEYJ3DUor
         batchoperationsdemo,200907200065HQ%7Ethumb.jpg,d8LgvIVjbDR5mUVwW6pu9ahTfReyn5V4
         batchoperationsdemo,200907200076HQ%7Ethumb.jpg,XUT25d7.gK40u_GmnupdaZg3BVx2jN40
         batchoperationsdemo,201103190002HQ%7Ethumb.jpg,z.2sVRh0myqVi0BuIrngWlsRPQdb7qOS
   ```

1. Download the results, save them into a CSV format, and upload them to Amazon S3 as your list of objects for the S3 Batch Operations job.

1. If you have multiple manifest files, run **Query with S3 Select** on those also. Depending on the size of the results, you can combine the lists and run a single S3 Batch Operations job or run each list as a separate job. To decide how many jobs to run, consider the [price](https://aws.amazon.com/s3/pricing/) of running each S3 Batch Operations job.
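The S3 Select query in the steps above can be mirrored locally, which is useful for spot-checking a downloaded data file. This sketch assumes the versioned `fileSchema` (`Bucket`, `Key`, `VersionId`, `IsLatest`, `IsDeleteMarker`, `BucketKeyStatus`); the sample rows are illustrative only.

```python
import csv
import io

# Sketch: a local equivalent of
#   select s._1, s._2, s._3 from s3object s where s._6 = 'DISABLED'
# Keep columns 1-3 (Bucket, Key, VersionId) for every row whose
# BucketKeyStatus (column 6) is DISABLED. Sample data is illustrative.
SAMPLE = """\
batchoperationsdemo,0100059%7Ethumb.jpg,lsrtIxksLu0R0ZkYPL.LhgD5caTYn6vu,true,false,DISABLED
batchoperationsdemo,0100074%7Ethumb.jpg,sd2M60g6Fdazoi6D5kNARIE7KzUibmHR,true,false,ENABLED
batchoperationsdemo,0100075%7Ethumb.jpg,TLYESLnl1mXD5c4BwiOIinqFrktddkoL,true,false,DISABLED
"""

def filter_unencrypted(csv_text):
    """Return (bucket, key, version_id) rows where column 6 is DISABLED."""
    rows = csv.reader(io.StringIO(csv_text))
    return [row[:3] for row in rows if len(row) >= 6 and row[5] == "DISABLED"]

manifest_rows = filter_unencrypted(SAMPLE)
out = io.StringIO()
csv.writer(out, lineterminator="\n").writerows(manifest_rows)
# out.getvalue() is the CSV manifest text for the Batch Operations job
```

For real inventory data files (which are gzip-compressed), decompress with `gzip.open` before filtering, then upload the resulting CSV to Amazon S3 as the job manifest.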

## Step 3: Set up and run your S3 Batch Operations job
<a name="bucket-key-ex-setup-and-run-job"></a>

Now that you have your filtered CSV lists of S3 objects, you can begin the S3 Batch Operations job to encrypt the objects with S3 Bucket Keys enabled.

A *job* refers collectively to the list (manifest) of objects provided, the operation performed, and the specified parameters. The easiest way to encrypt this set of objects with S3 Bucket Keys enabled is by using the **Copy** operation and specifying the same destination prefix as the objects listed in the manifest. In an unversioned bucket, this operation overwrites the existing objects. In a bucket with versioning turned on, this operation creates a newer, encrypted version of the objects.

As part of copying the objects, specify that Amazon S3 should encrypt the objects with SSE-KMS encryption. This job copies the objects, so all of your objects will show an updated creation date upon completion, regardless of when you originally added them to Amazon S3. Also specify the other properties for your set of objects as part of the S3 Batch Operations job, including object tags and storage class.

**Topics**
+ [Set up your IAM policy](#bucket-key-ex-set-up-iam-policy)
+ [Set up your Batch Operations IAM role](#bucket-key-ex-set-up-iam-role)
+ [Enable S3 Bucket Keys for an existing bucket](#bucket-key-ex-enable-s3-bucket-key-on-a-bucket)
+ [Create your Batch Operations job](#bucket-key-ex-create-job)
+ [Run your Batch Operations job](#bucket-key-ex-run-job)

### Set up your IAM policy
<a name="bucket-key-ex-set-up-iam-policy"></a>

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Policies**, and then choose **Create policy**.

1. Choose the **JSON** tab, and then add the example IAM policy that appears in the following code block. 

   After copying the policy example into your [IAM Console](https://console.aws.amazon.com/iam/), replace the following:

   1. Replace `amzn-s3-demo-source-bucket` with the name of your source bucket to copy objects from.

   1. Replace `amzn-s3-demo-destination-bucket` with the name of your destination bucket to copy objects to.

   1. Replace `amzn-s3-demo-manifest-bucket/manifest-key` with the name of your manifest object.

   1. Replace `amzn-s3-demo-completion-report-bucket` with the name of the bucket where you want to save your completion reports.

------
#### [ JSON ]


   ```
     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Sid": "CopyObjectsToEncrypt",
           "Effect": "Allow",
           "Action": [
             "s3:PutObject",
             "s3:PutObjectTagging",
             "s3:PutObjectAcl",
             "s3:PutObjectVersionTagging",
             "s3:PutObjectVersionAcl",
             "s3:GetObject",
             "s3:GetObjectAcl",
             "s3:GetObjectTagging",
             "s3:GetObjectVersion",
             "s3:GetObjectVersionAcl",
             "s3:GetObjectVersionTagging"
           ],
           "Resource": [
             "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
             "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
           ]
         },
         {
           "Sid": "ReadManifest",
           "Effect": "Allow",
           "Action": [
             "s3:GetObject",
             "s3:GetObjectVersion"
           ],
           "Resource": "arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest-key"
         },
         {
           "Sid": "WriteReport",
           "Effect": "Allow",
           "Action": [
             "s3:PutObject"
           ],
           "Resource": "arn:aws:s3:::amzn-s3-demo-completion-report-bucket/*"
         }
       ]
     }
   ```

------

1. Choose **Next: Tags**.

1. Add any tags that you want (optional), and choose **Next: Review**.

1. Add a policy name, optionally add a description, review the policy summary, and choose **Create policy**.

1. With your S3 Batch Operations policy now complete, the console returns you to the IAM **Policies** page. Filter on the policy name, choose the button to the left of the policy name, choose **Policy actions**, and choose **Attach**. 

   To attach the newly created policy to an IAM role, select the appropriate users, groups, or roles in your account and choose **Attach policy**. This takes you back to the IAM console.
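If you prefer to script the policy creation, you can substitute your bucket names into the example policy before creating it with the AWS CLI or an SDK. The following Python sketch (the replacement values are hypothetical) fills in the placeholders and verifies that the result is still valid JSON:

```python
import json

# Placeholder names used in the example policy, mapped to hypothetical
# replacement values. Substitute your own bucket and manifest names.
replacements = {
    "amzn-s3-demo-source-bucket": "my-source-bucket",
    "amzn-s3-demo-destination-bucket": "my-destination-bucket",
    "amzn-s3-demo-manifest-bucket/manifest-key": "my-manifest-bucket/batch/manifest.csv",
    "amzn-s3-demo-completion-report-bucket": "my-report-bucket",
}

def fill_policy(template_text, replacements):
    """Substitute placeholder names and confirm the result parses as JSON."""
    for placeholder, value in replacements.items():
        template_text = template_text.replace(placeholder, value)
    return json.loads(template_text)  # raises ValueError if malformed
```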

### Set up your Batch Operations IAM role
<a name="bucket-key-ex-set-up-iam-role"></a>

1. On the [IAM Console](https://console.aws.amazon.com/iam/), in the navigation pane, choose **Roles**, and then choose **Create role**.

1. Choose **AWS service**, **S3**, and **S3 Batch Operations**. Then choose **Next: Permissions**.

1. Start entering the name of the IAM **policy** that you just created. Select the check box by the policy name when it appears, and choose **Next: Tags**.

1. (Optional) Add tags or keep the key and value fields blank for this exercise. Choose **Next: Review**.

1. Enter a role name, and accept the default description or add your own. Choose **Create role**.

1. Ensure that the user creating the job has the permissions in the following example. 

   Replace `account-id` with your AWS account ID and `IAM-role-name` with the name of the Batch Operations IAM role that you created in the previous steps. For more information, see [Granting permissions for Batch Operations](batch-ops-iam-role-policies.md).

   ```
   {
       "Sid": "AddIamPermissions",
       "Effect": "Allow",
       "Action": [
           "iam:GetRole",
           "iam:PassRole"
       ],
       "Resource": "arn:aws:iam::account-id:role/IAM-role-name"
   }
   ```
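For illustration, a statement like the one above can be generated for your own account. A minimal Python sketch (the account ID and role name passed in are placeholders):

```python
def pass_role_statement(account_id, role_name):
    """Build the IAM statement that lets the job creator pass the
    Batch Operations role to Amazon S3."""
    return {
        "Sid": "AddIamPermissions",
        "Effect": "Allow",
        "Action": ["iam:GetRole", "iam:PassRole"],
        "Resource": f"arn:aws:iam::{account_id}:role/{role_name}",
    }
```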

### Enable S3 Bucket Keys for an existing bucket
<a name="bucket-key-ex-enable-s3-bucket-key-on-a-bucket"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the bucket that you want to turn on an S3 Bucket Key for.

1. Choose **Properties**.

1. Under **Default encryption**, choose **Edit**.

1. Under **Encryption type**, you can choose between **Amazon S3 managed keys (SSE-S3)** and **AWS Key Management Service key (SSE-KMS)**. 

1. If you chose **AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, you can specify the AWS KMS key through one of the following options.
   + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**. From the list of available keys, choose a symmetric encryption KMS key in the same Region as your bucket. Both the AWS managed key (`aws/s3`) and your customer managed keys appear in the list.
   + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and then enter your KMS key ARN in the field that appears.
   + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

1. Under **Bucket Key**, choose **Enable**, and then choose **Save changes**.

Now that an S3 Bucket Key is enabled at the bucket level, objects that are uploaded, modified, or copied into this bucket will inherit this encryption configuration by default. This includes objects that are copied by using Amazon S3 Batch Operations.
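The same default encryption setting can be applied with the `aws s3api put-bucket-encryption` command. The following Python sketch builds the server-side encryption configuration document that the command expects (the KMS key ARN and bucket name shown are placeholders):

```python
import json

def bucket_key_sse_config(kms_key_id):
    """Default-encryption rule that enables SSE-KMS with an S3 Bucket Key."""
    return {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_id,
                },
                "BucketKeyEnabled": True,
            }
        ]
    }

# Write the document to a file and pass it to the CLI, for example:
#   aws s3api put-bucket-encryption --bucket amzn-s3-demo-bucket \
#       --server-side-encryption-configuration file://sse.json
print(json.dumps(bucket_key_sse_config(
    "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID")))
```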

### Create your Batch Operations job
<a name="bucket-key-ex-create-job"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Batch Operations**, and then choose **Create Job**.

1. Choose the **Region** where you store your objects, and choose **CSV** as the manifest type.

1. Enter the path or navigate to the CSV manifest file that you created earlier from S3 Select (or Athena) results. If your manifest contains version IDs, select that box. Choose **Next**.

1. Choose the **Copy** operation, and choose the copy destination bucket. You can keep server-side encryption disabled. As long as the destination bucket has S3 Bucket Keys enabled, the copy operation applies S3 Bucket Keys at the destination bucket.

1. (Optional) Choose a storage class and the other parameters as desired. The parameters that you specify in this step apply to all operations performed on the objects that are listed in the manifest. Choose **Next**.

1. To configure server-side encryption, follow these steps: 

   1. Under **Server-side encryption**, choose one of the following:
      + To keep the bucket settings for default server-side encryption of objects when storing them in Amazon S3, choose **Do not specify an encryption key**. As long as the destination bucket has S3 Bucket Keys enabled, the copy operation applies an S3 Bucket Key at the destination bucket.
**Note**  
If the bucket policy for the specified destination requires objects to be encrypted before storing them in Amazon S3, you must specify an encryption key. Otherwise, copying objects to the destination will fail.
      + To encrypt objects before storing them in Amazon S3, choose **Specify an encryption key**.

   1. Under **Encryption settings**, if you choose **Specify an encryption key**, you must choose either **Use destination bucket settings for default encryption** or **Override destination bucket settings for default encryption**.

   1. If you choose **Override destination bucket settings for default encryption**, you must configure the following encryption settings.

      1. Under **Encryption type**, you must choose either **Amazon S3 managed keys (SSE-S3)** or **AWS Key Management Service key (SSE-KMS)**. SSE-S3 uses one of the strongest block ciphers—256-bit Advanced Encryption Standard (AES-256) to encrypt each object. SSE-KMS provides you with more control over your key. For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md) and [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).

      1. If you choose **AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, you can specify your AWS KMS key through one of the following options.
         + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and then choose a symmetric encryption KMS key in the same Region as your bucket. Both the AWS managed key (`aws/s3`) and your customer managed keys appear in the list.
         + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears.
         + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

      1. Under **Bucket Key**, choose **Enable**. The copy operation applies an S3 Bucket Key at the destination bucket.

1. Give your job a description (or keep the default), set its priority level, choose a report type, and specify the **Path to completion report destination**.

1. In the **Permissions** section, be sure to choose the Batch Operations IAM role that you defined earlier. Choose **Next**.

1. Under **Review**, verify the settings. If you want to make changes, choose **Previous**. After confirming the Batch Operations settings, choose **Create job**. 

   For more information, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).
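For reference, the same Copy job can be created programmatically through the S3 Control `CreateJob` API (for example, with the boto3 `s3control` client's `create_job` method). The following sketch builds the request body only; the account ID, ARNs, and ETag are placeholders, and the field names follow the S3 Control API as the author understands it, so verify them against the API reference before use:

```python
# Sketch of a CreateJob request body for a Copy operation that applies
# SSE-KMS with an S3 Bucket Key at the destination. Pass these keys as
# keyword arguments to the boto3 s3control client's create_job method.
create_job_request = {
    "AccountId": "111122223333",                        # placeholder
    "ConfirmationRequired": True,
    "RoleArn": "arn:aws:iam::111122223333:role/BatchOpsRole",
    "Priority": 10,
    "Operation": {
        "S3PutObjectCopy": {
            "TargetResource": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
            "SSEAwsKmsKeyId": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE",
            "BucketKeyEnabled": True,
        }
    },
    "Manifest": {
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest.csv",
            "ETag": "example-etag",                     # placeholder
        },
    },
    "Report": {
        "Bucket": "arn:aws:s3:::amzn-s3-demo-completion-report-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "ReportScope": "AllTasks",
    },
}
```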

### Run your Batch Operations job
<a name="bucket-key-ex-run-job"></a>

The setup wizard automatically returns you to the S3 Batch Operations section of the Amazon S3 console. Your new job transitions from the **New** state to the **Preparing** state as S3 begins the process. During the Preparing state, S3 reads the job’s manifest, checks it for errors, and calculates the number of objects.

1. Choose the refresh button in the Amazon S3 console to check progress. Depending on the size of the manifest, reading can take minutes or hours.

1. After S3 finishes reading the job’s manifest, the job moves to the **Awaiting your confirmation** state. Choose the option button to the left of the Job ID, and choose **Run job**.

1. Check the settings for the job, and choose **Run job** in the bottom-right corner.

   After the job begins running, you can choose the refresh button to check progress through the console dashboard view or by selecting the specific job.

1. When the job is complete, you can view the **Successful** and **Failed** object counts to confirm that everything performed as expected. If you enabled job reports, check your job report for the exact cause of any failed operations.

   You can also perform these steps by using the AWS CLI, AWS SDKs, or Amazon S3 REST API. For more information about tracking job status and completion reports, see [Tracking job status and completion reports](batch-ops-job-status.md).

For examples that show the copy operation with tags using the AWS CLI and AWS SDK for Java, see [Creating a Batch Operations job with job tags used for labeling](batch-ops-tags-create.md).

# Compute checksums
<a name="batch-ops-compute-checksums"></a>

You can use S3 Batch Operations with the **Compute checksum** operation to perform checksum calculations for objects stored at rest in Amazon S3. The **Compute checksum** operation calculates object checksums, which you can use to validate the integrity of stored data without downloading or restoring the objects. You can use the **Compute checksum** operation to calculate checksums for both composite and full object checksum types, for all supported checksum algorithms.

With the **Compute checksum** operation, you can process billions of objects through a single job request. This batch operation is compatible with all S3 storage classes, regardless of object size. To create a **Compute checksum** job, use the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

When you [enable server access logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html), you can also receive log entries about your **Compute checksum** job. The **Compute checksum** job operation emits separate server access log events after completing the checksum computations. These log entries follow the standard [S3 server access logging format](https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html) and include fields such as operation type, timestamp, [error codes](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList), and the associated **Compute checksum** job ID. This logging provides an audit trail of checksum verification activities performed on your objects, helping you track and verify data integrity operations. 

**Note**  
The **Compute checksum** operation doesn’t support objects that are encrypted with server-side encryption with customer-provided keys (SSE-C). However, you can use the **Compute checksum** operation with objects that are encrypted by using [server-side encryption with Amazon S3 managed keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html), [server-side encryption with AWS KMS keys (SSE-KMS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html), or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS). Make sure that you’ve [granted the proper AWS KMS permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html#require-sse-kms) to perform the **Compute checksum** operation.

To get started with the **Compute checksum** operation in Batch Operations, use one of the following methods:
+ Manually create a new manifest file.
+ Use an existing manifest.
+ Direct Batch Operations to automatically generate a manifest based on object filter criteria that you [specify when you create your job](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-create-job.html#specify-batchjob-manifest).

Then, submit your **Compute checksum** job request and monitor its status. After the **Compute checksum** job finishes, you automatically receive a completion report in the specified destination bucket. This completion report contains checksum information for every object in the bucket, allowing you to verify data consistency. For more information about how to use this report to examine the job, see [Tracking job status and completion reports](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-job-status.html).

For more information about **Compute checksum** capabilities and how to use **Compute checksum** in the console, see [Checking object integrity for data at rest in Amazon S3](checking-object-integrity-at-rest.md). For information about how to send REST requests to **Compute checksum**, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeJob.html) and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html) in the *Amazon S3 API Reference*.

The following sections explain how you can get started using the **Compute checksum** operation with S3 Batch Operations.

**Topics**
+ [S3 Batch Operations **Compute checksum** considerations](#batch-ops-compute-checksum-considerations)
+ [S3 Batch Operations completion report](#batch-ops-compute-checksum-completion-report)

## S3 Batch Operations **Compute checksum** considerations
<a name="batch-ops-compute-checksum-considerations"></a>

Before using the **Compute checksum** operation, review the following list of considerations:
+ If your manifest includes a version ID field, you must provide a version ID for all objects in the manifest. If the version ID isn’t specified, the **Compute checksum** request performs the operation on the latest version of the object.
+ To receive **Compute checksum** operation details in your [server access logs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html), you must first [enable server access logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html) on the source bucket and specify a destination bucket to store the logs. The destination bucket must also exist in the same AWS Region and AWS account as the source bucket. After you configure server access logging, the **Compute checksum** operation generates [log records](https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html#log-record-fields) that include standard fields such as operation type, HTTP status code, [S3 error codes](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList), timestamps, and the associated **Compute checksum** job ID. Because the **Compute checksum** operation runs asynchronously, the log entries use a **Compute checksum** job ID rather than a request ID.
+ For stored objects, report generation can take up to a few hours.
+ For the following S3 Glacier storage classes, the **Compute checksum** job can take up to a week to finish:
  + S3 Glacier Flexible Retrieval
  + S3 Glacier Deep Archive
+ For buckets where the completion report will be written, you must use the [bucket owner condition](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-owner-condition.html#bucket-owner-condition-when-to-use) when running the **Compute checksum** operation. If the actual bucket owner doesn't match the expected bucket owner for the submitted job request, then the job fails. For a list of S3 operations that don't support bucket owner condition, see [Restrictions and limitations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-owner-condition.html#bucket-owner-condition-restrictions-limitations).

## S3 Batch Operations completion report
<a name="batch-ops-compute-checksum-completion-report"></a>

When you create a **Compute checksum** job, you can request an S3 Batch Operations completion report. This CSV file shows the objects, success or failure codes, outputs, and descriptions. For more information about job tracking and completion reports, see [Completion reports](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-job-status.html#batch-ops-completion-report).

# Delete all object tags
<a name="batch-ops-delete-object-tagging"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The **Delete all object tags** operation removes all Amazon S3 object tag sets currently associated with the objects that are listed in the manifest. S3 Batch Operations doesn't support deleting tags from objects while keeping other tags in place. 

If the objects in your manifest are in a versioned bucket, you can remove the tag sets from a specific version of an object. To do so, you must specify a version ID for every object in the manifest. If you don't include a version ID for an object, S3 Batch Operations removes the tag set from the latest version of every object. For more information about Batch Operations manifests, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest). 

For more details about object tagging, see [Categorizing your objects using tags](object-tagging.md) in this guide, and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html), and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html) in the *Amazon Simple Storage Service API Reference*.

**Warning**  
Running this job removes all object tag sets on every object listed in the manifest. 

To use the console to create a **Delete all object tags** job, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).

## Restrictions and limitations
<a name="batch-ops-delete-object-tagging-restrictions"></a>

When you're using Batch Operations to delete object tags, the following restrictions and limitations apply:
+ The AWS Identity and Access Management (IAM) role that you specify to run the job must have permissions to perform the underlying Amazon S3 `DeleteObjectTagging` operation. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html) in the *Amazon Simple Storage Service API Reference*.
+ S3 Batch Operations uses the Amazon S3 [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html) operation to remove the tag sets from every object in the manifest. All restrictions and limitations that apply to the underlying operation also apply to S3 Batch Operations jobs. 
+ A single delete object tagging job can support a manifest with up to 20 billion objects.
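In a `CreateJob` request, the **Delete all object tags** operation is expressed as an `S3DeleteObjectTagging` element, which takes no parameters. A minimal sketch of the relevant request fragments (field names follow the S3 Control API as the author understands it; verify against the API reference before use):

```python
# The Operation element of a CreateJob request for a
# "Delete all object tags" job; the operation itself has no parameters.
operation = {"S3DeleteObjectTagging": {}}

# For versioned buckets, include a version ID field in the manifest so
# that tag sets are removed from the intended version of each object.
manifest_fields = ["Bucket", "Key", "VersionId"]
```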

# Invoke AWS Lambda function
<a name="batch-ops-invoke-lambda"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The **Invoke AWS Lambda function** Batch Operations operation initiates AWS Lambda functions to perform custom actions on objects that are listed in a manifest. This section describes how to create a Lambda function to use with S3 Batch Operations and how to create a job to invoke the function. The S3 Batch Operations job uses the `LambdaInvoke` operation to run a Lambda function on every object listed in a manifest.

You can work with S3 Batch Operations by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API. For more information about using Lambda, see [Getting Started with AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) in the *AWS Lambda Developer Guide*. 

The following sections explain how you can get started using S3 Batch Operations with Lambda.

**Topics**
+ [Using Lambda with Batch Operations](#batch-ops-invoke-lambda-using)
+ [Creating a Lambda function to use with S3 Batch Operations](#batch-ops-invoke-lambda-custom-functions)
+ [Creating an S3 Batch Operations job that invokes a Lambda function](#batch-ops-invoke-lambda-create-job)
+ [Providing task-level information in Lambda manifests](#storing-task-level-information-in-lambda)
+ [S3 Batch Operations tutorial](#batch-ops-tutorials-lambda)

## Using Lambda with Batch Operations
<a name="batch-ops-invoke-lambda-using"></a>

When using S3 Batch Operations with AWS Lambda, you must create new Lambda functions specifically for use with S3 Batch Operations. You can't reuse existing Amazon S3 event-based functions with S3 Batch Operations. Event functions can only receive messages; they don't return messages. The Lambda functions that are used with S3 Batch Operations must accept and return messages. For more information about using Lambda with Amazon S3 events, see [Using AWS Lambda with Amazon S3](https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html) in the *AWS Lambda Developer Guide*.

You create an S3 Batch Operations job that invokes your Lambda function. The job runs the same Lambda function on all of the objects listed in your manifest. You can control which versions of your Lambda function are used while processing the objects in your manifest. S3 Batch Operations supports unqualified Amazon Resource Names (ARNs), aliases, and specific versions. For more information, see [Introduction to AWS Lambda Versioning](https://docs.aws.amazon.com/lambda/latest/dg/versioning-intro.html) in the *AWS Lambda Developer Guide*.

If you provide the S3 Batch Operations job with a function ARN that uses an alias or the `$LATEST` qualifier, and you update the version that either of those points to, S3 Batch Operations starts calling the new version of your Lambda function. This can be useful when you want to update functionality partway through a large job. If you don't want S3 Batch Operations to change the version that's used, provide the specific version in the `FunctionARN` parameter when you create your job.

A single AWS Lambda job with S3 Batch Operations can support a manifest with up to 20 billion objects.

### Using Lambda and Batch Operations with directory buckets
<a name="batch-ops-invoke-lambda-directory-buckets"></a>

Directory buckets are a type of Amazon S3 bucket that's designed for workloads or performance-critical applications that require consistent single-digit millisecond latency. For more information, see [Directory buckets](https://docs.aws.amazon.com//AmazonS3/latest/userguide/directory-buckets-overview.html).

There are special requirements for using Batch Operations to invoke Lambda functions that act on directory buckets. For example, you must structure your Lambda request by using an updated JSON schema and specify an [`InvocationSchemaVersion`](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_LambdaInvokeOperation.html#AmazonS3-Type-control_LambdaInvokeOperation-InvocationSchemaVersion) of `2.0` (not `1.0`) when you create the job. This updated schema allows you to specify optional key-value pairs in [`UserArguments`](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_LambdaInvokeOperation.html#AmazonS3-Type-control_LambdaInvokeOperation-UserArguments), which you can use to modify certain parameters of existing Lambda functions. For more information, see [Automate object processing in Amazon S3 directory buckets with S3 Batch Operations and AWS Lambda](https://aws.amazon.com/blogs/storage/automate-object-processing-in-amazon-s3-directory-buckets-with-s3-batch-operations-and-aws-lambda/) in the AWS Storage Blog.

### Response and result codes
<a name="batch-ops-invoke-lambda-response-codes"></a>

S3 Batch Operations invokes the Lambda function with one or more keys, each of which has a `TaskID` associated with it. S3 Batch Operations expects a per-key result code from the Lambda function. Any task IDs that are sent in the request but not returned with a per-key result code are given the result code from the `treatMissingKeysAs` field. `treatMissingKeysAs` is an optional request field that defaults to `TemporaryFailure`. The following table contains the possible result codes, which are also the valid values for the `treatMissingKeysAs` field. 


| Response code | Description | 
| --- | --- | 
| Succeeded | The task completed normally. If you requested a job completion report, the task's result string is included in the report. | 
| TemporaryFailure | The task suffered a temporary failure and will be redriven before the job completes. The result string is ignored. If this is the final redrive, the error message is included in the final report. | 
| PermanentFailure | The task suffered a permanent failure. If you requested a job-completion report, the task is marked as Failed and includes the error message string. Result strings from failed tasks are ignored. | 
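A minimal Python handler that follows this request and response contract might look like the following sketch (invocation schema version 1.0; the per-object work is a placeholder):

```python
import urllib.parse

def lambda_handler(event, context):
    """Return a per-task result code for each key that S3 Batch
    Operations sends, using the version 1.0 invocation schema."""
    results = []
    for task in event["tasks"]:
        # Object keys in the event are URL-encoded.
        key = urllib.parse.unquote_plus(task["s3Key"])
        try:
            # Placeholder for your per-object work on `key`.
            result_code, result_string = "Succeeded", f"Processed {key}"
        except Exception as err:
            result_code, result_string = "PermanentFailure", str(err)
        results.append({
            "taskId": task["taskId"],
            "resultCode": result_code,
            "resultString": result_string,
        })
    return {
        "invocationSchemaVersion": "1.0",
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```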

## Creating a Lambda function to use with S3 Batch Operations
<a name="batch-ops-invoke-lambda-custom-functions"></a>

This section provides example AWS Identity and Access Management (IAM) permissions that you must use with your Lambda function. It also contains an example Lambda function to use with S3 Batch Operations. If you have never created a Lambda function before, see [Tutorial: Using AWS Lambda with Amazon S3](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) in the *AWS Lambda Developer Guide*.

You must create Lambda functions specifically for use with S3 Batch Operations. You can't reuse existing Amazon S3 event-based Lambda functions because Lambda functions that are used for S3 Batch Operations must accept and return special data fields. 

**Important**  
AWS Lambda functions written in Java accept either the [RequestHandler](https://github.com/aws/aws-lambda-java-libs/blob/master/aws-lambda-java-core/src/main/java/com/amazonaws/services/lambda/runtime/RequestHandler.java) or [RequestStreamHandler](https://github.com/aws/aws-lambda-java-libs/blob/master/aws-lambda-java-core/src/main/java/com/amazonaws/services/lambda/runtime/RequestStreamHandler.java) handler interface. However, to support the S3 Batch Operations request and response format, AWS Lambda requires the `RequestStreamHandler` interface for custom serialization and deserialization of a request and response. This interface allows Lambda to pass an `InputStream` and `OutputStream` to the Java `handleRequest` method.   
Be sure to use the `RequestStreamHandler` interface when using Lambda functions with S3 Batch Operations. If you use a `RequestHandler` interface, the batch job will fail with "Invalid JSON returned in Lambda payload" in the completion report.   
For more information, see [Handler interfaces](https://docs.aws.amazon.com//lambda/latest/dg/java-handler.html#java-handler-interfaces) in the *AWS Lambda Developer Guide*.

### Example IAM permissions
<a name="batch-ops-invoke-lambda-custom-functions-iam"></a>

The following are examples of the IAM permissions that are necessary to use a Lambda function with S3 Batch Operations. 

**Example — S3 Batch Operations trust policy**  
The following is an example of the trust policy that you can use for the Batch Operations IAM role. You specify this IAM role when you create the job, and the trust policy gives Batch Operations permission to assume the role.

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "batchoperations.s3.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

**Example — Lambda IAM policy**  
The following is an example of an IAM policy that gives S3 Batch Operations permission to invoke the Lambda function and read the input manifest.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "BatchOperationsLambdaPolicy",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:PutObject",
                "lambda:InvokeFunction"
            ],
            "Resource": "*"
        }
    ]
}
```

### Example request and response
<a name="batch-ops-invoke-lambda-custom-functions-request"></a>

This section provides request and response examples for the Lambda function.

**Example Request**  
The following is a JSON example of a request for the Lambda function.  

```
{
    "invocationSchemaVersion": "1.0",
    "invocationId": "YXNkbGZqYWRmaiBhc2RmdW9hZHNmZGpmaGFzbGtkaGZza2RmaAo",
    "job": {
        "id": "f3cc4f60-61f6-4a2b-8a21-d07600c373ce"
    },
    "tasks": [
        {
            "taskId": "dGFza2lkZ29lc2hlcmUK",
            "s3Key": "customerImage1.jpg",
            "s3VersionId": "1",
            "s3BucketArn": "arn:aws:s3:us-east-1:0123456788:amzn-s3-demo-bucket1"
        }
    ]
}
```

**Example Response**  
The following is a JSON example of a response for the Lambda function.  

```
{
  "invocationSchemaVersion": "1.0",
  "treatMissingKeysAs" : "PermanentFailure",
  "invocationId" : "YXNkbGZqYWRmaiBhc2RmdW9hZHNmZGpmaGFzbGtkaGZza2RmaAo",
  "results": [
    {
      "taskId": "dGFza2lkZ29lc2hlcmUK",
      "resultCode": "Succeeded",
      "resultString": "[\"Mary Major", \"John Stiles\"]"
    }
  ]
}
```

### Example Lambda function for S3 Batch Operations
<a name="batch-ops-invoke-lambda-custom-functions-example"></a>

The following example Lambda function, written in Python, removes a delete marker from a versioned object.

As the example shows, keys from S3 Batch Operations are URL encoded. To use Amazon S3 with other AWS services, it's important that you URL decode the key that is passed from S3 Batch Operations.
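This decoding can be done with `urllib.parse.unquote_plus` from the Python standard library. A minimal sketch (the key value is a hypothetical example):

```python
from urllib import parse

# Keys in the Batch Operations event are URL encoded; '+' represents a space
# and percent-escapes represent reserved characters.
encoded_key = "my+folder/report%2Bfinal.csv"  # hypothetical example key
decoded_key = parse.unquote_plus(encoded_key, encoding="utf-8")
print(decoded_key)  # my folder/report+final.csv
```

The full function that follows performs the same decoding on the key passed in the event.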

```
import logging
from urllib import parse
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logger.setLevel("INFO")

s3 = boto3.client("s3")


def lambda_handler(event, context):
    """
    Removes a delete marker from the specified versioned object.

    :param event: The S3 batch event that contains the ID of the delete marker
                  to remove.
    :param context: Context about the event.
    :return: A result structure that Amazon S3 uses to interpret the result of the
             operation. When the result code is TemporaryFailure, S3 retries the
             operation.
    """
    # Parse job parameters from Amazon S3 batch operations
    invocation_id = event["invocationId"]
    invocation_schema_version = event["invocationSchemaVersion"]

    results = []
    result_code = None
    result_string = None

    task = event["tasks"][0]
    task_id = task["taskId"]

    try:
        obj_key = parse.unquote_plus(task["s3Key"], encoding="utf-8")
        obj_version_id = task["s3VersionId"]
        bucket_name = task["s3BucketArn"].split(":")[-1]

        logger.info(
            "Got task: remove delete marker %s from object %s.", obj_version_id, obj_key
        )

        try:
            # If this call does not raise an error, the object version is not a delete
            # marker and should not be deleted.
            response = s3.head_object(
                Bucket=bucket_name, Key=obj_key, VersionId=obj_version_id
            )
            result_code = "PermanentFailure"
            result_string = (
                f"Object {obj_key}, ID {obj_version_id} is not " f"a delete marker."
            )

            logger.debug(response)
            logger.warning(result_string)
        except ClientError as error:
            delete_marker = error.response["ResponseMetadata"]["HTTPHeaders"].get(
                "x-amz-delete-marker", "false"
            )
            if delete_marker == "true":
                logger.info(
                    "Object %s, version %s is a delete marker.", obj_key, obj_version_id
                )
                try:
                    s3.delete_object(
                        Bucket=bucket_name, Key=obj_key, VersionId=obj_version_id
                    )
                    result_code = "Succeeded"
                    result_string = (
                        f"Successfully removed delete marker "
                        f"{obj_version_id} from object {obj_key}."
                    )
                    logger.info(result_string)
                except ClientError as error:
                    # Mark request timeout as a temporary failure so it will be retried.
                    if error.response["Error"]["Code"] == "RequestTimeout":
                        result_code = "TemporaryFailure"
                        result_string = (
                            f"Attempt to remove delete marker from  "
                            f"object {obj_key} timed out."
                        )
                        logger.info(result_string)
                    else:
                        raise
            else:
                raise ValueError(
                    "The x-amz-delete-marker header is either not "
                    "present or is not 'true'."
                )
    except Exception as error:
        # Mark all other exceptions as permanent failures.
        result_code = "PermanentFailure"
        result_string = str(error)
        logger.exception(error)
    finally:
        results.append(
            {
                "taskId": task_id,
                "resultCode": result_code,
                "resultString": result_string,
            }
        )
    return {
        "invocationSchemaVersion": invocation_schema_version,
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": invocation_id,
        "results": results,
    }
```

## Creating an S3 Batch Operations job that invokes a Lambda function
<a name="batch-ops-invoke-lambda-create-job"></a>

When creating an S3 Batch Operations job to invoke a Lambda function, you must provide the following:
+ The ARN of your Lambda function (which might include the function alias or a specific version number)
+ An IAM role with permission to invoke the function
+ The action parameter `LambdaInvokeFunction`

For more information about creating an S3 Batch Operations job, see [Creating an S3 Batch Operations job](batch-ops-create-job.md) and [Operations supported by S3 Batch Operations](batch-ops-operations.md).

The following example creates an S3 Batch Operations job that invokes a Lambda function by using the AWS CLI. To use this example, replace the *`user input placeholders`* with your own information.

```
aws s3control create-job \
    --account-id account-id \
    --operation '{"LambdaInvoke": { "FunctionArn": "arn:aws:lambda:region:account-id:function:LambdaFunctionName" } }' \
    --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::amzn-s3-demo-manifest-bucket","ETag":"ManifestETag"}}' \
    --report '{"Bucket":"arn:aws:s3:::amzn-s3-demo-bucket","Format":"Report_CSV_20180820","Enabled":true,"Prefix":"ReportPrefix","ReportScope":"AllTasks"}' \
    --priority 2 \
    --role-arn arn:aws:iam::account-id:role/BatchOperationsRole \
    --region region \
    --description "Lambda Function"
```
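You can build the same request from Python. The following sketch assembles the keyword arguments for the S3 Control `create_job` API call; the helper name and all argument values are illustrative placeholders, and in practice you would pass the resulting dictionary to `boto3.client("s3control").create_job(**params)`.

```python
def build_lambda_invoke_job_params(
    account_id, function_arn, manifest_arn, manifest_etag, report_bucket_arn, role_arn
):
    """Build keyword arguments for the S3 Control create_job API call.

    All argument values are placeholders that you replace with your own.
    """
    return {
        "AccountId": account_id,
        "Operation": {"LambdaInvoke": {"FunctionArn": function_arn}},
        "Manifest": {
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        "Report": {
            "Bucket": report_bucket_arn,
            "Format": "Report_CSV_20180820",
            "Enabled": True,
            "Prefix": "ReportPrefix",
            "ReportScope": "AllTasks",
        },
        "Priority": 2,
        "RoleArn": role_arn,
        "Description": "Lambda Function",
    }

# Pass the result to boto3, for example:
# boto3.client("s3control").create_job(**build_lambda_invoke_job_params(...))
```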

## Providing task-level information in Lambda manifests
<a name="storing-task-level-information-in-lambda"></a>

When you use AWS Lambda functions with S3 Batch Operations, you might want additional data to accompany each task or key that's operated on. For example, you might want to have both a source object key and a new object key provided. Your Lambda function could then copy the source key to a new S3 bucket under a new name. By default, Batch Operations lets you specify only the destination bucket and a list of source keys in the input manifest to your job. The following examples describe how you can include additional data in your manifest so that you can run more complex Lambda functions.

To specify per-key parameters in your S3 Batch Operations manifest to use in your Lambda function's code, use the following URL-encoded JSON format. The `key` field is passed to your Lambda function as if it were an Amazon S3 object key. But it can be interpreted by the Lambda function to contain other values or multiple keys, as shown in the following examples. 

**Note**  
The maximum number of characters for the `key` field in the manifest is 1,024.

**Example — Manifest substituting the "Amazon S3 keys" with JSON strings**  
This example shows the manifest content before URL encoding. You must URL encode the manifest before providing it to S3 Batch Operations, as shown in the next example.  

```
amzn-s3-demo-bucket,{"origKey": "object1key", "newKey": "newObject1Key"}
amzn-s3-demo-bucket,{"origKey": "object2key", "newKey": "newObject2Key"}
amzn-s3-demo-bucket,{"origKey": "object3key", "newKey": "newObject3Key"}
```

**Example — Manifest URL-encoded**  
This URL-encoded version must be provided to S3 Batch Operations. The non-URL-encoded version does not work.  

```
amzn-s3-demo-bucket,%7B%22origKey%22%3A%20%22object1key%22%2C%20%22newKey%22%3A%20%22newObject1Key%22%7D
amzn-s3-demo-bucket,%7B%22origKey%22%3A%20%22object2key%22%2C%20%22newKey%22%3A%20%22newObject2Key%22%7D
amzn-s3-demo-bucket,%7B%22origKey%22%3A%20%22object3key%22%2C%20%22newKey%22%3A%20%22newObject3Key%22%7D
```

**Example — Lambda function with manifest format writing results to the job report**  
This URL-encoded manifest example contains pipe-delimited object keys for the following Lambda function to parse.  

```
amzn-s3-demo-bucket,object1key%7Clower
amzn-s3-demo-bucket,object2key%7Cupper
amzn-s3-demo-bucket,object3key%7Creverse
amzn-s3-demo-bucket,object4key%7Cdelete
```
This Lambda function shows how to parse a pipe-delimited task that's encoded into the S3 Batch Operations manifest. The task indicates which revision operation is applied to the specified object.  

```
import logging
from urllib import parse
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logger.setLevel("INFO")

s3 = boto3.resource("s3")


def lambda_handler(event, context):
    """
    Applies the specified revision to the specified object.

    :param event: The Amazon S3 batch event that contains the ID of the object to
                  revise and the revision type to apply.
    :param context: Context about the event.
    :return: A result structure that Amazon S3 uses to interpret the result of the
             operation.
    """
    # Parse job parameters from Amazon S3 batch operations
    invocation_id = event["invocationId"]
    invocation_schema_version = event["invocationSchemaVersion"]

    results = []
    result_code = None
    result_string = None

    task = event["tasks"][0]
    task_id = task["taskId"]
    # The revision type is packed with the object key as a pipe-delimited string.
    obj_key, revision = parse.unquote_plus(task["s3Key"], encoding="utf-8").split("|")
    bucket_name = task["s3BucketArn"].split(":")[-1]

    logger.info("Got task: apply revision %s to %s.", revision, obj_key)

    try:
        stanza_obj = s3.Bucket(bucket_name).Object(obj_key)
        stanza = stanza_obj.get()["Body"].read().decode("utf-8")
        if revision == "lower":
            stanza = stanza.lower()
        elif revision == "upper":
            stanza = stanza.upper()
        elif revision == "reverse":
            stanza = stanza[::-1]
        elif revision == "delete":
            pass
        else:
            raise TypeError(f"Can't handle revision type '{revision}'.")

        if revision == "delete":
            stanza_obj.delete()
            result_string = f"Deleted stanza {stanza_obj.key}."
        else:
            stanza_obj.put(Body=bytes(stanza, "utf-8"))
            result_string = (
                f"Applied revision type '{revision}' to " f"stanza {stanza_obj.key}."
            )

        logger.info(result_string)
        result_code = "Succeeded"
    except ClientError as error:
        if error.response["Error"]["Code"] == "NoSuchKey":
            result_code = "Succeeded"
            result_string = (
                f"Stanza {obj_key} not found, assuming it was deleted "
                f"in an earlier revision."
            )
            logger.info(result_string)
        else:
            result_code = "PermanentFailure"
            result_string = (
                f"Got exception when applying revision type '{revision}' "
                f"to {obj_key}: {error}."
            )
            logger.exception(result_string)
    finally:
        results.append(
            {
                "taskId": task_id,
                "resultCode": result_code,
                "resultString": result_string,
            }
        )
    return {
        "invocationSchemaVersion": invocation_schema_version,
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": invocation_id,
        "results": results,
    }
```

## S3 Batch Operations tutorial
<a name="batch-ops-tutorials-lambda"></a>

The following tutorial presents complete end-to-end procedures for some Batch Operations tasks with Lambda. In this tutorial, you learn how to set up Batch Operations to invoke a Lambda function for batch-transcoding of videos stored in an S3 source bucket. The Lambda function calls AWS Elemental MediaConvert to transcode the videos. 
+ [Tutorial: Batch-transcoding videos with S3 Batch Operations](tutorial-s3-batchops-lambda-mediaconvert-video.md)

# Replace all object tags
<a name="batch-ops-put-object-tagging"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The **Replace all object tags** operation replaces the object tags on every object listed in the manifest. An object tag is a key-value pair of strings that you can use to store metadata about an object.

To create a **Replace all object tags** job, you provide a set of tags that you want to apply. S3 Batch Operations applies the same set of tags to every object. The tag set that you provide replaces whatever tag sets are already associated with the objects in the manifest. S3 Batch Operations doesn't support adding tags to objects while leaving the existing tags in place.

If the objects in your manifest are in a versioned bucket, you can apply the tag set to specific versions of every object. To do so, specify a version ID for every object in the manifest. If you don't include a version ID for any objects, S3 Batch Operations applies the tag set to the latest version of every object. For more information about Batch Operations manifests, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest). 

For more information about object tagging, see [Categorizing your objects using tags](object-tagging.md) in this guide, and see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html), and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html) in the *Amazon Simple Storage Service API Reference*.

To use the console to create a **Replace all object tags** job, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).

## Restrictions and limitations
<a name="batch-ops-set-tagging-restrictions"></a>

When you're using Batch Operations to replace object tags, the following restrictions and limitations apply:
+ The AWS Identity and Access Management (IAM) role that you specify to run the Batch Operations job must have permissions to perform the underlying `PutObjectTagging` operation. For more information about the permissions required, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html) in the *Amazon Simple Storage Service API Reference*.
+ S3 Batch Operations uses the Amazon S3 [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html) operation to apply tags to each object in the manifest. All restrictions and limitations that apply to the underlying operation also apply to S3 Batch Operations jobs.
+ A single **Replace all object tags** job can support a manifest with up to 20 billion objects.
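When you create such a job through the AWS SDKs, the operation is expressed as an `S3PutObjectTagging` structure in the S3 Control `create_job` request. A minimal Python sketch of building that structure (the helper name and tag values are illustrative):

```python
def tagging_operation(tags):
    """Build the Operation parameter for a Replace all object tags job.

    The tag set REPLACES any tags already on each object in the manifest.
    """
    return {
        "S3PutObjectTagging": {
            "TagSet": [{"Key": key, "Value": value} for key, value in tags.items()]
        }
    }

# Example: pass the result as the Operation argument to s3control create_job.
operation = tagging_operation({"project": "blue", "stage": "archive"})
```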

# Replace access control list (ACL)
<a name="batch-ops-put-object-acl"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The **Replace access control list (ACL)** operation replaces the access control lists (ACLs) for every object that's listed in the manifest. By using ACLs, you can define who can access an object and what actions they can perform.

**Note**  
A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

S3 Batch Operations supports both custom ACLs that you define and the canned ACLs that Amazon S3 provides, which have predefined sets of access permissions.

If the objects in your manifest are in a versioned bucket, you can apply the ACLs to specific versions of every object. To do so, specify a version ID for every object in the manifest. If you don't include a version ID for any object, S3 Batch Operations applies the ACL to the latest version of the object.

For more information about ACLs in Amazon S3, see [Access control list (ACL) overview](acl-overview.md).

**S3 Block Public Access**  
If you want to limit public access to all objects in a bucket, we recommend using Amazon S3 Block Public Access instead of using S3 Batch Operations to apply ACLs. Block Public Access can limit public access on a per-bucket or account-wide basis with a single, simple operation that takes effect quickly. This behavior makes Amazon S3 Block Public Access a better choice when your goal is to control public access to all objects in a bucket or account. Use S3 Batch Operations only when you need to apply a customized ACL to every object in the manifest. For more information about S3 Block Public Access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

**S3 Object Ownership**  
If the objects in the manifest are in a bucket that uses the **Bucket owner enforced** setting for Object Ownership, the **Replace access control list (ACL)** operation can only specify object ACLs that grant full control to the bucket owner. In this case, the **Replace access control list (ACL)** operation can't grant object ACL permissions to other AWS accounts or groups. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

## Restrictions and limitations
<a name="batch-ops-put-object-acl-restrictions"></a>

When you're using Batch Operations to replace ACLs, the following restrictions and limitations apply: 
+ The AWS Identity and Access Management (IAM) role that you specify to run the **Replace access control list (ACL)** job must have permissions to perform the underlying Amazon S3 `PutObjectAcl` operation. For more information about the permissions required, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) in the *Amazon Simple Storage Service API Reference*.
+ S3 Batch Operations uses the Amazon S3 `PutObjectAcl` operation to apply the specified ACL to every object in the manifest. Therefore, all restrictions and limitations that apply to the underlying `PutObjectAcl` operation also apply to S3 Batch Operations **Replace access control list (ACL)** jobs.
+ A single **Replace access control list (ACL)** job can support a manifest with up to 20 billion objects.
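In the AWS SDKs, this job's operation is expressed as an `S3PutObjectAcl` structure in the S3 Control `create_job` request. The following Python sketch builds one that applies a canned ACL (the helper name is illustrative):

```python
def canned_acl_operation(canned_acl="private"):
    """Build the Operation parameter for a Replace access control list (ACL) job.

    canned_acl is one of the Amazon S3 canned ACL names, such as "private"
    or "bucket-owner-full-control".
    """
    return {
        "S3PutObjectAcl": {
            "AccessControlPolicy": {"CannedAccessControlList": canned_acl}
        }
    }

# Example: grant the bucket owner full control of every object in the manifest.
operation = canned_acl_operation("bucket-owner-full-control")
```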

# Restore objects with Batch Operations
<a name="batch-ops-initiate-restore-object"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The **Restore** operation initiates restore requests for the archived Amazon S3 objects that are listed in your manifest. The following archived objects must be restored before they can be accessed in real time:
+ Objects archived in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes
+ Objects archived through the S3 Intelligent-Tiering storage class in the Archive Access or Deep Archive Access tiers

Using a **Restore** ([https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_S3InitiateRestoreObjectOperation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_S3InitiateRestoreObjectOperation.html)) operation in your S3 Batch Operations job results in a `RestoreObject` request for every object that's specified in the manifest.

**Important**  
The **Restore** job only *initiates* the request to restore objects. S3 Batch Operations reports the job as complete for each object after the request is initiated for that object. Amazon S3 doesn't update the job or otherwise notify you when the objects have been restored. However, you can use S3 Event Notifications to receive notifications when the objects are available in Amazon S3. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

When you create a **Restore** job, the following arguments are available:

**ExpirationInDays**  
This argument specifies how long the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive object remains available in Amazon S3. **Restore** jobs that target S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive objects require that you set `ExpirationInDays` to `1` or greater.  
Don't set `ExpirationInDays` when creating **Restore** operation jobs that target S3 Intelligent-Tiering Archive Access and Deep Archive Access tier objects. Objects in S3 Intelligent-Tiering archive access tiers aren't subject to restore expiration, so specifying `ExpirationInDays` results in a `RestoreObject` request failure.

**GlacierJobTier**  
Amazon S3 can restore objects by using one of three different retrieval tiers: `EXPEDITED`, `STANDARD`, and `BULK`. However, the S3 Batch Operations feature supports only the `STANDARD` and `BULK` retrieval tiers. For more information about the differences between the retrieval tiers, see [Understanding archive retrieval options](restoring-objects-retrieval-options.md).   
For more information about the pricing for each tier, see the **Requests & data retrievals** section on the [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/) page.
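These two arguments map onto the `S3InitiateRestoreObject` structure in the S3 Control `create_job` API. A minimal Python sketch of building that structure (the helper name is illustrative); note that `ExpirationInDays` is omitted for S3 Intelligent-Tiering targets:

```python
def restore_operation(tier="BULK", expiration_in_days=None):
    """Build the Operation parameter for a Batch Operations Restore job.

    Set expiration_in_days (1 or greater) for S3 Glacier Flexible Retrieval or
    S3 Glacier Deep Archive objects. Leave it as None for S3 Intelligent-Tiering
    Archive Access or Deep Archive Access objects, which don't expire.
    """
    if tier not in ("STANDARD", "BULK"):
        raise ValueError("Batch Operations supports only the STANDARD and BULK tiers.")
    op = {"GlacierJobTier": tier}
    if expiration_in_days is not None:
        op["ExpirationInDays"] = expiration_in_days
    return {"S3InitiateRestoreObject": op}

# Example: restore S3 Glacier Deep Archive objects for 10 days with the BULK tier.
operation = restore_operation(tier="BULK", expiration_in_days=10)
```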

## Differences when restoring from S3 Glacier and S3 Intelligent-Tiering
<a name="batch-ops-initiate-restore-diff"></a>

Restoring archived files from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes differs from restoring files from the S3 Intelligent-Tiering storage class in the Archive Access or Deep Archive Access tiers.
+ When you restore from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, a temporary *copy* of the object is created. Amazon S3 deletes this copy after the value that you specified in the `ExpirationInDays` argument has elapsed. After this temporary copy is deleted, you must submit an additional restore request to access the object.
+ When restoring archived S3 Intelligent-Tiering objects, do *not* specify the `ExpirationInDays` argument. When you restore an object from the S3 Intelligent-Tiering Archive Access or Deep Archive Access tiers, the object transitions back into the S3 Intelligent-Tiering Frequent Access tier. After a minimum of 90 consecutive days of no access, the object automatically transitions into the Archive Access tier. After a minimum of 180 consecutive days of no access, the object automatically moves into the Deep Archive Access tier. 
+ Batch Operations jobs can operate either on S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage class objects *or* on S3 Intelligent-Tiering Archive Access and Deep Archive Access storage tier objects. Batch Operations can't operate on both types of archived objects in the same job. To restore objects of both types, you *must* create separate Batch Operations jobs. 

## Overlapping restores
<a name="batch-ops-initiate-restore-object-in-progress"></a>

If your [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_S3InitiateRestoreObjectOperation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_S3InitiateRestoreObjectOperation.html) job tries to restore an object that's already in the process of being restored, S3 Batch Operations proceeds as follows.

The restore operation succeeds for the object if either of the following conditions is true:
+ Compared to the restoration request already in progress, this job's `ExpirationInDays` value is the same and its `GlacierJobTier` value is faster.
+ The previous restoration request has already been completed, and the object is currently available. In this case, Batch Operations updates the expiration date of the restored object to match the `ExpirationInDays` value that's specified in the in-progress restoration request.

The restore operation fails for the object if any of the following conditions are true:
+ The restoration request already in progress hasn't yet been completed, and the restoration duration for this job (specified by the `ExpirationInDays` value) is different from the restoration duration that's specified in the in-progress restoration request.
+ The restoration tier for this job (specified by the `GlacierJobTier` value) is the same or slower than the restoration tier that's specified in the in-progress restoration request.

## Limitations
<a name="batch-ops-initiate-restore-object-limitations"></a>

`S3InitiateRestoreObjectOperation` jobs have the following limitations:
+ You must create the job in the same Region as the archived objects.
+ S3 Batch Operations doesn't support the `EXPEDITED` retrieval tier.
+ A single Batch Operations Restore job can support a manifest with up to 4 billion objects.

For more information about restoring objects, see [Restoring an archived object](restoring-objects.md).

# Update object encryption
<a name="batch-ops-update-encryption"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. The Batch Operations [https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_UpdateObjectEncryptionOperation.html](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_UpdateObjectEncryptionOperation.html) operation updates the server-side encryption type of more than one Amazon S3 object with a single request. A single `UpdateObjectEncryption` operation job can support a manifest with up to 20 billion objects.

The `UpdateObjectEncryption` operation is supported for all Amazon S3 storage classes that are supported by general purpose buckets. You can use the `UpdateObjectEncryption` operation to change encrypted objects from [server-side encryption with Amazon S3 managed keys (SSE-S3)](https://docs.aws.amazon.com//AmazonS3/latest/userguide/UsingServerSideEncryption.html) to [AWS Key Management Service (AWS KMS) keys (SSE-KMS)](https://docs.aws.amazon.com//AmazonS3/latest/userguide/UsingKMSEncryption.html), or to apply S3 Bucket Keys. You can also use the `UpdateObjectEncryption` operation to change the customer managed KMS key used to encrypt your data so that you can comply with custom key-rotation standards.

When you create a Batch Operations job, you can generate an object list based on the source location and filter criteria that you specify. You can use the `MatchAnyObjectEncryption` filter to generate a list of objects from your bucket that you want to update and include in your manifest. The generated object list includes only source bucket objects with the indicated server-side encryption type. If you select SSE-KMS, you can optionally filter your results further by specifying a specific KMS key ARN and S3 Bucket Key enabled status. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_JobManifestGeneratorFilter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_JobManifestGeneratorFilter.html) and [`SSEKMSFilter`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_SSEKMSFilter.html) in the *Amazon S3 API Reference*.

## Restrictions and considerations
<a name="batch-ops-encrypt-object-restrictions"></a>

When you're using the Batch Operations `UpdateObjectEncryption` operation, the following restrictions and considerations apply:
+ The `UpdateObjectEncryption` operation doesn't support objects that are unencrypted or objects that are encrypted with either dual-layer server-side encryption with AWS KMS keys (DSSE-KMS) or customer-provided encryption keys (SSE-C). Additionally, you can't specify SSE-S3 as the target encryption type in an `UpdateObjectEncryption` request.
+ You can use the `UpdateObjectEncryption` operation to update objects in buckets that have S3 Versioning enabled. To update the encryption type of a particular version, you must specify a version ID in your `UpdateObjectEncryption` request. If you don't specify a version ID, the `UpdateObjectEncryption` request acts on the current version of the object. For more information about S3 Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).
+ The `UpdateObjectEncryption` operation fails on any object that has an S3 Object Lock retention mode or legal hold applied to it. If an object has a governance-mode retention period or a legal hold, you must first remove the Object Lock status on the object before you issue your `UpdateObjectEncryption` request. You can't use the `UpdateObjectEncryption` operation with objects that have an Object Lock compliance mode retention period applied to them. For more information about S3 Object Lock, see [Locking objects with Object Lock](object-lock.md).
+ `UpdateObjectEncryption` requests on source buckets with live replication enabled won't initiate replica events in the destination bucket. If you want to change the encryption type of objects in both your source and destination buckets, you must initiate separate `UpdateObjectEncryption` requests on the objects in the source and destination buckets.
+ By default, all `UpdateObjectEncryption` requests that specify a customer-managed KMS key are restricted to KMS keys that are owned by the bucket owner's AWS account. If you're using AWS Organizations, you can request the ability to use AWS KMS keys owned by other member accounts within your organization by contacting AWS Support.
+ If you use S3 Batch Replication to replicate datasets across Regions and your objects previously had their server-side encryption type updated from SSE-S3 to SSE-KMS, you might need additional permissions. You must have the `kms:Decrypt` permission for the bucket in the source Region, and the `kms:Decrypt` and `kms:Encrypt` permissions for the bucket in the destination Region.
+ Provide a full KMS key ARN in your `UpdateObjectEncryption` request. You can't use an alias name or alias ARN. You can find the full KMS key ARN in the AWS KMS console or by using the AWS KMS `DescribeKey` API.
+ To improve manifest generation performance when using the `KmsKeyArn` filter, use this filter in conjunction with other object metadata filters. For example, you can combine `KmsKeyArn` with `MatchAnyPrefix`, `CreatedAfter`, or `MatchAnyStorageClass` when you automatically generate a manifest in S3 Batch Operations.
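
For example, a combined filter that narrows the generated manifest by prefix and creation date as well as by KMS key might be sketched as follows. The field names follow the filters named above, and all values are placeholders:

```python
from datetime import datetime, timezone

# Sketch: combine the KmsKeyArn filter with a prefix filter and a
# creation-date filter so that manifest generation scans fewer objects.
# The prefix, date, and ARN are placeholders.
combined_filter = {
    "MatchAnyPrefix": ["logs/2024/"],
    "CreatedAfter": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "MatchAnyObjectEncryption": [
        {
            "SSEKMS": {
                "KmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
            }
        }
    ],
}
```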

For more information about `UpdateObjectEncryption`, see [Updating server-side encryption for existing data](update-sse-encryption.md).

## Required permissions
<a name="batch-ops-required-permissions"></a>

To perform the `UpdateObjectEncryption` operation, add the following AWS Identity and Access Management (IAM) policy to your IAM principal (user, role, or group). To use this policy, replace *`amzn-s3-demo-bucket-target`* with the name of the bucket that contains the objects whose encryption you want to update. Replace *`amzn-s3-demo-bucket-manifest`* with the name of the bucket that contains your manifest, and replace *`amzn-s3-demo-bucket-completion-report`* with the name of the bucket where you want to store your completion report.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3BatchOperationsUpdateEncryption",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:PutObject",
                "s3:UpdateObjectEncryption"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket-target",
                "arn:aws:s3:::amzn-s3-demo-bucket-target/*"
            ]
        },
        {
            "Sid": "S3BatchOperationsPolicyForManifestFile",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket-manifest/*"
            ]
        },
        {
            "Sid": "S3BatchOperationsPolicyForCompletionReport",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket-completion-report/*"
            ]
        },
        {
            "Sid": "S3BatchOperationsPolicyManifestGeneration",
            "Effect": "Allow",
            "Action": [
                "s3:PutInventoryConfiguration"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket-target"
            ]
        },
        {
            "Sid": "AllowKMSOperationsForS3BatchOperations",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey",
                "kms:Encrypt",
                "kms:ReEncrypt*"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
            ]
        }
    ]
}
```
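
Before attaching a policy like the one above, you can sanity-check that it parses as JSON and grants the actions that the operation needs. The following is a hedged sketch; the inline sample policy is a minimal placeholder standing in for the full policy shown above:

```python
import json

def allowed_actions(policy_json):
    """Collect all actions granted by Allow statements in an IAM policy document."""
    policy = json.loads(policy_json)
    actions = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            acts = stmt.get("Action", [])
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions

# Minimal placeholder policy; substitute the full policy document shown above.
sample = (
    '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", '
    '"Action": ["s3:GetObject", "s3:UpdateObjectEncryption"], "Resource": "*"}]}'
)
missing = {"s3:GetObject", "s3:UpdateObjectEncryption"} - allowed_actions(sample)
```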

For the trust policy and permissions policy that you must attach to the IAM role that the S3 Batch Operations service principal assumes to run Batch Operations jobs on your behalf, see [Granting permissions for Batch Operations](batch-ops-iam-role-policies.md) and [Update object encryption](batch-ops-iam-role-policies.md#batch-ops-update-encryption-policies).

# Creating a Batch Operations job to update object encryption
<a name="batch-ops-update"></a>

To update the server-side encryption type of more than one Amazon S3 object with a single request, you can use S3 Batch Operations. You can use S3 Batch Operations through the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or the Amazon S3 REST API.

## Using the AWS CLI
<a name="batch-ops-example-cli-update-job"></a>

To run the following commands, you must have the AWS CLI installed and configured. If you don’t have the AWS CLI installed, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

Alternatively, you can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com//cloudshell/latest/userguide/welcome.html) and [Getting started with AWS CloudShell](https://docs.aws.amazon.com//cloudshell/latest/userguide/getting-started.html) in the *AWS CloudShell User Guide*.

**Example 1 – Create a Batch Operations job that updates encrypted objects from one AWS KMS key to another KMS key**  
The following example shows how to create an S3 Batch Operations job that updates the encryption settings for multiple objects in your general purpose bucket. This command creates a job that changes objects encrypted with one AWS Key Management Service (AWS KMS) key to use a different KMS key. This job also generates and saves a manifest of the affected objects and creates a report of the results. To use this command, replace the `user input placeholders` with your own information.  

```
aws s3control create-job --account-id account-id \
--no-confirmation-required \
--operation '{"S3UpdateObjectEncryption": {  "ObjectEncryption": { "SSEKMS": { "KMSKeyArn": "KMS-key-ARN-to-apply", "BucketKeyEnabled": false  }  }  } }' \
--report '{ "Enabled": true, "Bucket": "report-bucket-ARN",  "Format": "Report_CSV_20180820", "Prefix": "report", "ReportScope": "AllTasks" }' \
--manifest-generator '{ "S3JobManifestGenerator": { "ExpectedBucketOwner": "account-id", "SourceBucket": "source-bucket-ARN", "EnableManifestOutput": true, "ManifestOutputLocation": { "Bucket": "manifest-bucket-ARN", "ManifestFormat": "S3InventoryReport_CSV_20211130", "ManifestPrefix": "manifest-prefix" }, "Filter": {   "MatchAnyObjectEncryption": [{ "SSEKMS": { "KmsKeyArn": "kms-key-ARN-to-match" } }] } } }' \
--priority 1 \
--role-arn batch-operations-role-ARN
```
For best performance, we recommend using the `KmsKeyArn` filter in conjunction with other object metadata filters, such as `MatchAnyPrefix`, `CreatedAfter`, or `MatchAnyStorageClass`.
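
If you prefer the AWS SDKs, the same job can be sketched with the AWS SDK for Python (Boto3). The payload below mirrors the CLI example; the account ID, ARNs, and prefixes are placeholders, and the `create_job` call itself is commented out so the snippet only assembles the request:

```python
# import boto3  # AWS SDK for Python; uncomment to actually submit the job

# Mirrors the CLI example above; all IDs and ARNs are placeholders.
job_request = {
    "AccountId": "111122223333",
    "ConfirmationRequired": False,
    "Operation": {
        "S3UpdateObjectEncryption": {
            "ObjectEncryption": {
                "SSEKMS": {
                    "KMSKeyArn": "arn:aws:kms:us-east-1:111122223333:key/KEY-ID-TO-APPLY",
                    "BucketKeyEnabled": False,
                }
            }
        }
    },
    "Report": {
        "Enabled": True,
        "Bucket": "arn:aws:s3:::amzn-s3-demo-bucket-completion-report",
        "Format": "Report_CSV_20180820",
        "Prefix": "report",
        "ReportScope": "AllTasks",
    },
    "ManifestGenerator": {
        "S3JobManifestGenerator": {
            "ExpectedBucketOwner": "111122223333",
            "SourceBucket": "arn:aws:s3:::amzn-s3-demo-bucket-target",
            "EnableManifestOutput": True,
            "ManifestOutputLocation": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-bucket-manifest",
                "ManifestFormat": "S3InventoryReport_CSV_20211130",
                "ManifestPrefix": "manifest-prefix",
            },
            "Filter": {
                "MatchAnyObjectEncryption": [
                    {"SSEKMS": {"KmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/KEY-ID-TO-MATCH"}}
                ]
            },
        }
    },
    "Priority": 1,
    "RoleArn": "arn:aws:iam::111122223333:role/batch-operations-role",
}

# client = boto3.client("s3control")
# response = client.create_job(**job_request)
```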

**Example 2 – Create a Batch Operations job that updates SSE-S3 encrypted objects to SSE-KMS**  
The following example shows how to create an S3 Batch Operations job that updates the encryption settings for multiple objects in your general purpose bucket. This command creates a job that changes objects encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3) to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) instead. This job also generates and saves a manifest of the affected objects and creates a report of the results. To use this command, replace the `user input placeholders` with your own information.  

```
aws s3control create-job --account-id account-id \
--no-confirmation-required \
--operation '{"S3UpdateObjectEncryption": {  "ObjectEncryption": { "SSEKMS": { "KMSKeyArn": "KMS-key-ARN-to-apply", "BucketKeyEnabled": false  }  }  } }' \
--report '{ "Enabled": true, "Bucket": "report-bucket-ARN",  "Format": "Report_CSV_20180820", "Prefix": "report", "ReportScope": "AllTasks" }' \
--manifest-generator '{ "S3JobManifestGenerator": { "ExpectedBucketOwner": "account-id", "SourceBucket": "source-bucket-ARN", "EnableManifestOutput": true, "ManifestOutputLocation": { "Bucket": "manifest-bucket-ARN", "ManifestFormat": "S3InventoryReport_CSV_20211130", "ManifestPrefix": "manifest-prefix" }, "Filter": {   "MatchAnyObjectEncryption": [{ "SSES3": {} }] } } }' \
--priority 1 \
--role-arn batch-operations-role-ARN
```
For best performance, we recommend using the `KmsKeyArn` filter in conjunction with other object metadata filters, such as `MatchAnyPrefix`, `CreatedAfter`, or `MatchAnyStorageClass`.

# S3 Object Lock retention
<a name="batch-ops-retention-date"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use the **Object Lock retention** operation to apply retention dates for your objects by using either *governance* mode or *compliance* mode. These retention modes apply different levels of protection. You can apply either retention mode to any object version. Retention dates, like legal holds, prevent an object from being overwritten or deleted. Amazon S3 stores the *retain until date* specified in the object's metadata and protects the specified object version until the retention period expires.

You can use S3 Batch Operations with Object Lock to manage the retention dates of many Amazon S3 objects at once. You specify the list of target objects in your manifest and submit the manifest to Batch Operations for completion. For more information, see S3 Object Lock [Retention periods](object-lock.md#object-lock-retention-periods). 

Your S3 Batch Operations job with retention dates runs until completion, until cancellation, or until a failure state is reached. We recommend using S3 Batch Operations and S3 Object Lock retention when you want to add, change, or remove the retention date for many objects with a single request. 

Batch Operations verifies that Object Lock is enabled on your bucket before processing any keys in the manifest. To perform the operations and validation, Batch Operations needs the `s3:GetBucketObjectLockConfiguration` and `s3:PutObjectRetention` permissions in an AWS Identity and Access Management (IAM) role to allow Batch Operations to call Object Lock on your behalf. For more information, see [Object Lock considerations](object-lock-managing.md).

For information about using this operation with the REST API, see `S3PutObjectRetention` in the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html) operation in the *Amazon Simple Storage Service API Reference*. 
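
As a sketch, the `Operation` element of a `CreateJob` request for a retention job might look like the following. The field names follow the `S3PutObjectRetention` member of the CreateJob API, and the date and mode are placeholders:

```python
from datetime import datetime, timezone

# Sketch of the Operation element for an Object Lock retention job.
# RetainUntilDate and Mode are placeholders; a GOVERNANCE-mode retention
# can be bypassed by users with the right permissions, COMPLIANCE cannot.
retention_operation = {
    "S3PutObjectRetention": {
        "Retention": {
            "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
            "Mode": "GOVERNANCE",
        }
    }
}
```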

For an AWS Command Line Interface (AWS CLI) example of using this operation, see [Using the AWS CLI](batch-ops-object-lock-retention.md#batch-ops-cli-object-lock-retention-example). For an AWS SDK for Java example, see [Using the AWS SDK for Java](batch-ops-object-lock-retention.md#batch-ops-examples-java-object-lock-retention). 

## Restrictions and limitations
<a name="batch-ops-retention-date-restrictions"></a>

When you're using Batch Operations to apply Object Lock retention periods, the following restrictions and limitations apply: 
+ S3 Batch Operations doesn't make any bucket-level changes.
+ Versioning and S3 Object Lock must be configured on the bucket where the job is performed.
+ All objects listed in the manifest must be in the same bucket.
+ The operation works on the latest version of the object unless a version is explicitly specified in the manifest.
+ You need `s3:PutObjectRetention` permission in your IAM role to use an **Object Lock retention** job.
+ The `s3:GetBucketObjectLockConfiguration` IAM permission is required to confirm that Object Lock is enabled for the S3 bucket that you're performing the job on. 
+ For objects with `COMPLIANCE`-mode retention dates applied, you can only extend the retention period; it can't be shortened.
+ A single S3 Object Lock retention job can support a manifest with up to 20 billion objects.

# S3 Object Lock legal hold
<a name="batch-ops-legal-hold"></a>

You can use Amazon S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. You can use the **Object Lock legal hold** operation to place a legal hold on an object version. Like setting a retention period, a legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until it's removed. 

You can use S3 Batch Operations with Object Lock to add legal holds to many Amazon S3 objects at once. To do so, specify a list of the target objects in your manifest and submit that list to Batch Operations. Your S3 Batch Operations **Object Lock legal hold** job runs until completion, until cancellation, or until a failure state is reached.

S3 Batch Operations verifies that Object Lock is enabled on your S3 bucket before processing any objects in the manifest. To perform the object operations and bucket-level validation, S3 Batch Operations needs the `s3:PutObjectLegalHold` and `s3:GetBucketObjectLockConfiguration` permissions in an AWS Identity and Access Management (IAM) role. These permissions allow S3 Batch Operations to call S3 Object Lock on your behalf. 

When you create an S3 Batch Operations job to remove a legal hold, you only need to specify `Off` as the legal hold status. For more information, see [Object Lock considerations](object-lock-managing.md).
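
For example, the `Operation` element of a `CreateJob` request that removes legal holds might be sketched as follows. The field names follow the `S3PutObjectLegalHold` member of the CreateJob API:

```python
# Sketch: removing a legal hold only requires setting the status to OFF;
# to place a legal hold instead, set the status to ON.
legal_hold_operation = {
    "S3PutObjectLegalHold": {
        "LegalHold": {"Status": "OFF"}
    }
}
```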

For information about how to use this operation with the Amazon S3 REST API, see `S3PutObjectLegalHold` in the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html) operation in the *Amazon Simple Storage Service API Reference*. 

For an example of using this operation, see [Using the AWS SDK for Java](batch-ops-legal-hold-off.md#batch-ops-examples-java-object-lock-legalhold). 

## Restrictions and limitations
<a name="batch-ops-legal-hold-restrictions"></a>

When you're using Batch Operations to apply or remove an Object Lock legal hold, the following restrictions and limitations apply: 
+ S3 Batch Operations doesn't make any bucket-level changes.
+ All objects listed in the manifest must be in the same bucket.
+ Versioning and S3 Object Lock must be configured on the bucket where the job is performed.
+ The **Object Lock legal hold** operation works on the latest version of the object unless a version is explicitly specified in the manifest.
+ The `s3:PutObjectLegalHold` permission is required in your IAM role to add or remove a legal hold from objects.
+ The `s3:GetBucketObjectLockConfiguration` IAM permission is required to confirm that S3 Object Lock is enabled for the S3 bucket where the job is performed. 
+ A single S3 Object Lock legal hold job can support a manifest with up to 20 billion objects.

**Topics**
+ [Copy objects](batch-ops-copy-object.md)
+ [Compute checksums](batch-ops-compute-checksums.md)
+ [Delete all object tags](batch-ops-delete-object-tagging.md)
+ [Invoke AWS Lambda function](batch-ops-invoke-lambda.md)
+ [Replace all object tags](batch-ops-put-object-tagging.md)
+ [Replace access control list (ACL)](batch-ops-put-object-acl.md)
+ [Restore objects with Batch Operations](batch-ops-initiate-restore-object.md)
+ [Update object encryption](batch-ops-update-encryption.md)
+ [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md)
+ [S3 Object Lock retention](batch-ops-retention-date.md)
+ [S3 Object Lock legal hold](batch-ops-legal-hold.md)