

# Access control in Amazon S3

In AWS, a resource is an entity that you can work with. In Amazon Simple Storage Service (S3), *buckets* and *objects* are the original Amazon S3 resources. Every S3 customer likely has buckets with objects in them. As new features were added to S3, additional resources were also added, but not every customer uses these feature-specific resources. For more information about Amazon S3 resources, see [S3 resources](#access-management-resources).

By default, all Amazon S3 resources are private. Also by default, the root user of the AWS account that created the resource (resource owner) and IAM users within that account with the necessary permissions can access a resource that they created. The resource owner decides who else can access the resource and the actions that others are allowed to perform on the resource. S3 has various access management tools that you can use to grant others access to your S3 resources.

The following sections provide you with an overview of S3 resources, the S3 access management tools available, and the best use cases for each access management tool. The lists in these sections aim to be comprehensive and include all S3 resources, access management tools, and common access management use cases. At the same time, these sections are designed to be directories that lead you to the technical details you want. If you have a good understanding of some of the following topics, you can skip to the section that applies to you. 

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Topics**
+ [S3 resources](#access-management-resources)
+ [Identities](#access-management-owners)
+ [Access management tools](#access-management-tools)
+ [Actions](#access-management-actions)
+ [Access management use cases](#access-management-usecases)
+ [Access management troubleshooting](#access-management-troubleshooting)

## S3 resources


The original Amazon S3 resources are buckets and the objects that they contain. As new features are added to S3, new resources are also added. The following sections describe S3 resources and the features that you can use to categorize them. 


**Buckets**  
There are two types of Amazon S3 buckets: *general purpose buckets* and *directory buckets*. 
+ **General purpose buckets** are the original S3 bucket type and are recommended for most use cases and access patterns. General purpose buckets can store objects in all storage classes except S3 Express One Zone. For more information about S3 storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md).
+ **Directory buckets** use the S3 Express One Zone storage class, which is recommended if your application is performance-sensitive and benefits from single-digit millisecond `PUT` and `GET` latencies. For more information, see [Working with directory buckets](directory-buckets-overview.md), [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone), and [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).

**Categorizing S3 resources**  
Amazon S3 provides features to categorize and organize your S3 resources. Categorizing your resources is not only useful for organizing them, but you can also set access management rules based on the resource categories. In particular, prefixes and tagging are two storage organization features that you can use when setting access management permissions. 

**Note**  
The following information applies to general purpose buckets. Directory buckets do not support tagging, and they have prefix limitations. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).
+ **Prefixes** — A prefix in Amazon S3 is a string of characters at the beginning of an object key name that's used to organize the objects stored in your S3 buckets. You can use a delimiter character, such as a forward slash (`/`), to indicate the end of the prefix within the object key name. For example, you might have object key names that start with the `engineering/` prefix or with the `marketing/campaigns/` prefix. Using a delimiter at the end of your prefix, such as a forward slash (`/`), emulates folder and file naming conventions. However, in S3, the prefix is part of the object key name. In general purpose S3 buckets, there is no actual folder hierarchy. 

  Amazon S3 supports organizing and grouping objects by using their prefixes. You can also manage access to objects by their prefixes. For example, you can limit access to only the objects with names that start with a specific prefix. 

  For more information, see [Organizing objects using prefixes](using-prefixes.md). The Amazon S3 console uses the concept of *folders*, which, in general purpose buckets, are essentially prefixes that are prepended to the object key name. For more information, see [Organizing objects in the Amazon S3 console by using folders](using-folders.md).
+ **Tags** — Each tag is a key-value pair that you assign to resources. For example, you can tag some resources with the tag `topicCategory=engineering`. You can use tagging to help with cost allocation, categorization and organization, and access control. Bucket tagging is used only for cost allocation. You can tag objects, S3 Storage Lens configurations, jobs, and S3 Access Grants for organization or for access control. In S3 Access Grants, you can also use tagging for cost allocation. As an example of controlling access to resources by using their tags, you can share only the objects that have a specific tag or combination of tags. 

  For more information, see [Controlling access to AWS resources by using resource tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) in the *IAM User Guide*.
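
As a sketch of both techniques, the following bucket policy statement combines a prefix-scoped `Resource` with an object-tag condition, so the role can read only objects under the `engineering/` prefix that also carry the `topicCategory=engineering` tag. The bucket name, account ID, and role name are hypothetical placeholders.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadEngineeringObjectsByPrefixAndTag",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/engineering-role" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/engineering/*",
      "Condition": {
        "StringEquals": { "s3:ExistingObjectTag/topicCategory": "engineering" }
      }
    }
  ]
}
```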

## Identities


In Amazon S3, the resource owner is the identity that created the resource, such as a bucket or an object. By default, only the root user of the account that created the resource and IAM identities within the account that have the required permission can access the S3 resource. Resource owners can give other identities access to their S3 resources. 

Identities that don't own a resource can request access to that resource. Requests to a resource are either authenticated or unauthenticated. Authenticated requests must include a signature value that authenticates the request sender, but unauthenticated requests do not require a signature. We recommend that you grant access only to authenticated users. For more information about request authentication, see [Making requests ](https://docs.aws.amazon.com/AmazonS3/latest/API/MakingRequests.html) in the *Amazon S3 API Reference*. 

**Important**  
We recommend that you don't use the AWS account root user credentials to make authenticated requests. Instead, create an IAM role and grant that role full access. We refer to users with this role as *administrator users*. You can use credentials assigned to the administrator role, instead of AWS account root user credentials, to interact with AWS and perform tasks, such as create a bucket, create users, and grant permissions. For more information, see [AWS account root user credentials and IAM user credentials](https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html) in the *AWS General Reference*, and see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

Identities accessing your data in Amazon S3 can be one of the following:

**AWS account owner**  
The AWS account that created the resource. For example, the account that created the bucket. This account owns the resource. For more information, see [AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html).

**IAM identities in the same account as the AWS account owner**  
When setting up accounts for new team members who require S3 access, the AWS account owner can use AWS Identity and Access Management (IAM) to create [users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html), [groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html), and [roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html). The AWS account owner can then share resources with these IAM identities. The account owner can also specify the permissions to give the IAM identities, which allow or deny the actions that can be performed on the shared resources. 

IAM identities provide increased capabilities, including the ability to require users to enter login credentials before accessing shared resources. By using IAM identities, you can implement a form of IAM multi-factor authentication (MFA) to support a strong identity foundation. An IAM best practice is to create roles for access management instead of granting permissions to each individual user. You assign individual users to the appropriate role. For more information, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). 

**Other AWS account owners and their IAM identities (cross-account access)**  
The AWS account owner can also give other AWS account owners, or IAM identities that belong to another AWS account, access to resources. 

**Note**  
**Permission delegation** — If an AWS account owns a resource, it can grant permissions for that resource to another AWS account. The receiving account can then delegate those permissions, or a subset of them, to users in its own account. This is referred to as permission delegation. However, an account that receives permissions from another account cannot delegate those permissions "cross-account" to a third AWS account. 

**Anonymous users (public access)**  
The AWS account owner can make resources public. Making a resource public technically shares the resource with *the anonymous user*. Buckets created since April 2023 block all public access by default, unless you change this setting. We recommend that you set your buckets to block public access, and that you only grant access to authenticated users. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md). 

**AWS services**  
The resource owner can grant another AWS service access to an Amazon S3 resource. For example, you can grant the AWS CloudTrail service `s3:PutObject` permission to write log files to your bucket. For more information, see [Providing access to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_services.html).
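
As a sketch of such a grant, a bucket policy statement for CloudTrail log delivery commonly names the service principal and requires the `bucket-owner-full-control` canned ACL. The bucket name, account ID, and log prefix below are hypothetical placeholders.

```
{
  "Sid": "AWSCloudTrailWrite",
  "Effect": "Allow",
  "Principal": { "Service": "cloudtrail.amazonaws.com" },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/AWSLogs/111122223333/*",
  "Condition": {
    "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
  }
}
```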

**Corporate directory identities**  
The resource owner can grant users or roles from your corporate directory access to an S3 resource by using [S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-get-started.html). For more information about adding your corporate directory to AWS IAM Identity Center, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html). 

### Bucket or resource owners


The AWS account that you use to create buckets and upload objects owns those resources. A bucket owner can grant cross-account permissions to another AWS account (or users in another account) to upload objects.

When a bucket owner permits another account to upload objects to a bucket, the bucket owner, by default, owns all objects uploaded to their bucket. However, if the bucket's Object Ownership setting is *Object writer* (that is, neither *Bucket owner enforced* nor *Bucket owner preferred* is in effect), the AWS account that uploads the objects owns those objects, and the bucket owner does not have permissions on the objects owned by another account, with the following exceptions:
+ The bucket owner pays the bills. The bucket owner can deny access to any objects, or delete any objects in the bucket, regardless of who owns them. 
+ The bucket owner can archive any objects or restore archived objects, regardless of who owns them. Archival refers to the storage class used to store the objects. For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

## Access management tools


Amazon S3 provides a variety of security features and tools. The following is a comprehensive list of these features and tools. You do not need all of these access management tools, but you must use one or more to grant access to your Amazon S3 resources. Proper application of these tools can help make sure that your resources are accessible only to the intended users. 

The most commonly used access management tool is an *access policy*. An access policy can be a *resource-based policy* that is attached to an AWS resource, such as a bucket policy for a bucket. An access policy can also be an *identity-based policy* that is attached to an AWS Identity and Access Management (IAM) identity, such as an IAM user, group, or role. Write an access policy to grant AWS accounts and IAM users, groups, and roles permission to perform operations on a resource. For example, you can grant `s3:PutObject` permission to another AWS account so that the other account can upload objects to your bucket.

An access policy describes who has access to what things. When Amazon S3 receives a request, it must evaluate all of the access policies to determine whether to authorize or deny the request. For more information about how Amazon S3 evaluates these policies, see [How Amazon S3 authorizes a request](how-s3-evaluates-access-control.md).
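
For instance, a cross-account upload grant in a bucket policy might look like the following sketch, where the account ID and bucket name are hypothetical placeholders:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossAccountUpload",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::444455556666:root" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```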

The following are the access management tools available in Amazon S3.

### Bucket policy


An Amazon S3 bucket policy is a JSON-formatted [AWS Identity and Access Management (IAM) resource-based policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) that is attached to a particular bucket. Use bucket policies to grant other AWS accounts or IAM identities permissions for the bucket and the objects in it. Many S3 access management use cases can be met by using a bucket policy. With bucket policies, you can personalize bucket access to help make sure that only the identities that you have approved can access resources and perform actions within them. For more information, see [Bucket policies for Amazon S3](bucket-policies.md). 

The following is an example bucket policy, expressed in JSON. This example policy grants an IAM role read permission on all objects in the bucket. It contains one statement, named `BucketLevelReadPermissions`, which allows the `s3:GetObject` action (read permission) on objects in a bucket named `amzn-s3-demo-bucket`. By specifying an IAM role as the `Principal`, this policy grants access to any identity that assumes that role. To use this example policy, replace the `user input placeholders` with your own information. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketLevelReadPermissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789101:role/s3-role"
      },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket/*"]
    }
  ]
}
```

**Note**  
When creating policies, avoid the use of wildcard characters (`*`) in the `Principal` element because using a wildcard character allows anyone to access your Amazon S3 resources. Instead, explicitly list users or groups that are allowed to access the bucket, or list conditions that must be met by using a condition clause in the policy. Also, rather than including a wildcard character for the actions of your users or groups, grant them specific permissions when applicable. 
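
For example, instead of a wildcard principal, a statement can name a specific role and add a condition that must also be satisfied. In this sketch, the account ID, role name, and IP range are hypothetical placeholders:

```
{
  "Sid": "AllowRoleFromCorporateNetwork",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/analytics-role" },
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
  "Condition": {
    "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
  }
}
```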

### Identity-based policy


An identity-based or IAM user policy is a type of [AWS Identity and Access Management (IAM) policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html). An identity-based policy is a JSON-formatted policy that is attached to IAM users, groups, or roles in your AWS account. You can use identity-based policies to grant an IAM identity access to your buckets or objects. You can create IAM users, groups, and roles in your account and attach access policies to them. You can then grant access to AWS resources, including Amazon S3 resources. For more information, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md). 

The following is an example of an identity-based policy. The example policy allows the associated IAM role to perform six different Amazon S3 actions (permissions) on a bucket and the objects in it. If you attach this policy to an IAM role in your account and assign the role to some of your IAM users, the users with this role will be able to perform these actions on the resources (buckets) specified in your policy. To use this example policy, replace the `user input placeholders` with your own information.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AssignARoleActions",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*",
        "arn:aws:s3:::amzn-s3-demo-bucket"
      ]
    },
    {
      "Sid": "AssignARoleActions2",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
```

### S3 Access Grants


Use S3 Access Grants to grant access to your Amazon S3 data both to identities in corporate identity directories, such as Active Directory, and to AWS Identity and Access Management (IAM) identities. S3 Access Grants helps you manage data permissions at scale. Additionally, S3 Access Grants logs the end-user identity and the application used to access the S3 data in AWS CloudTrail. This provides a detailed audit history, down to the end-user identity, for all access to the data in your S3 buckets. For more information, see [Managing access with S3 Access Grants](access-grants.md).

### Access Points


Amazon S3 Access Points simplify managing data access at scale for applications that use shared datasets on S3. Access points are named network endpoints that are attached to a bucket. You can use access points to perform S3 object operations at scale, such as uploading and retrieving objects. A bucket can have up to 10,000 access points attached, and for each access point, you can enforce distinct permissions and network controls to give you detailed control over access to your S3 objects. S3 Access Points can be associated with buckets in the same account or in another trusted account. Access point policies are resource-based policies that are evaluated in conjunction with the underlying bucket policy. For more information, see [Managing access to shared datasets with access points](access-points.md).
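
As a sketch, an access point policy looks much like a bucket policy, but its `Resource` uses the access point ARN. The Region, account ID, role name, and access point name below are hypothetical placeholders.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessPointReadOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/data-consumer" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:us-west-2:111122223333:accesspoint/my-access-point/object/*"
    }
  ]
}
```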

### Access control list (ACL)


An ACL is a list of grants identifying the grantee and the permission granted. ACLs grant basic read or write permissions to other AWS accounts, and they use an Amazon S3–specific XML schema rather than the JSON format used by IAM policies. An object ACL manages access to an object, and a bucket ACL manages access to a bucket. With bucket policies, there is a single policy for the entire bucket, but object ACLs are specified per object. We recommend that you keep ACLs turned off, except in circumstances where you must individually control access for each object. For more information about using ACLs, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**Warning**  
The majority of modern use cases in Amazon S3 do not require the use of ACLs. 

The following is an example bucket ACL. The grant in the ACL shows a bucket owner that has full control permission. 

```
<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>Owner-Canonical-User-ID</ID>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>Owner-Canonical-User-ID</ID>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
```

### Object Ownership


To manage access to your objects, you must be the owner of the object. You can use the Object Ownership bucket-level setting to control ownership of objects uploaded to your bucket. You also use Object Ownership to turn ACLs on or off. By default, Object Ownership is set to the *Bucket owner enforced* setting, and all ACLs are turned off. When ACLs are turned off, the bucket owner owns all of the objects in the bucket and exclusively manages access to data. To manage access, the bucket owner uses policies or another access management tool, excluding ACLs. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

Object Ownership has three settings that you can use both to control ownership of objects that are uploaded to your bucket and to turn on ACLs:

**ACLs turned off**
+ **Bucket owner enforced (default)** – ACLs are turned off, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs do not affect permissions to data in the S3 bucket. The bucket uses policies exclusively to define access control.

**ACLs turned on**
+ **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 
+ **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.
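
Under the *Bucket owner preferred* setting, bucket owners often pair the setting with a bucket policy that requires uploads to use the `bucket-owner-full-control` canned ACL, so that the bucket owner always ends up owning new objects. The following is a sketch of such a statement, with a hypothetical bucket name:

```
{
  "Sid": "RequireBucketOwnerFullControl",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
  "Condition": {
    "StringNotEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
  }
}
```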

**Additional best practices**  
Consider using the following bucket settings and tools to help protect data in transit and at rest, both of which are crucial in maintaining the integrity and accessibility of your data:
+ **Block Public Access** — Do not turn off the default bucket-level setting *Block Public Access*. This setting blocks public access to your data by default. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).
+ **S3 Versioning** — For data integrity, you can implement the S3 Versioning bucket setting, which versions your objects as you make updates, instead of overwriting them. You can use S3 Versioning to preserve, retrieve, and restore a previous version, if needed. For information about S3 Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). 
+ **S3 Object Lock** — S3 Object Lock is another setting that you can implement for achieving data integrity. This feature can implement a write-once-read-many (WORM) model to store objects immutably. For information about Object Lock, see [Locking objects with Object Lock](object-lock.md).
+ **Object encryption** — Amazon S3 offers several object encryption options that protect data in transit and at rest. *Server-side encryption* encrypts your object before saving it on disks in its data centers and then decrypts it when you download the objects. If you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. For more information, see [Protecting data with server-side encryption](serv-side-encryption.md). S3 encrypts newly uploaded objects by default. For more information, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md). *Client-side encryption* is the act of encrypting data before sending it to Amazon S3. For more information, see [Protecting data by using client-side encryption](UsingClientSideEncryption.md). 
+ **Signing methods** — Signature Version 4 is the process of adding authentication information to AWS requests sent by HTTP. For security, most requests to AWS must be signed with an access key, which consists of an access key ID and secret access key. These two keys are commonly referred to as your security credentials. For more information, see [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) and [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). 

## Actions


For a complete list of S3 permissions and condition keys, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Actions**  
The AWS Identity and Access Management (IAM) actions for Amazon S3 are the possible actions that can be performed on an S3 bucket or object. You grant these actions to identities so they can act on your S3 resources. Examples of S3 actions are `s3:GetObject` to read objects in a bucket, and `s3:PutObject` to write objects to a bucket. 

**Condition keys**  
In addition to actions, you can use IAM condition keys to grant access only when specified conditions are met. Condition keys are optional. 



**Note**  
In a resource-based access policy, such as a bucket policy, or in an identity-based policy, you can specify the following:  
+ An action, or an array of actions, in the `Action` element of the policy statement. 
+ `Allow` or `Deny` in the `Effect` element of the policy statement, to grant or block the listed actions. To further maintain the practice of least privilege, make `Deny` statements in the `Effect` element as broad as possible, and make `Allow` statements as narrow as possible. `Deny` effects paired with the `s3:*` action are another good way to implement opt-in best practices for the identities included in policy condition statements.
+ A condition key in the `Condition` element of the policy statement.
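
As a sketch of a broad `Deny` paired with a condition, the following statement blocks all S3 actions on a bucket over unencrypted connections. The bucket name is a hypothetical placeholder.

```
{
  "Sid": "DenyInsecureTransport",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::amzn-s3-demo-bucket",
    "arn:aws:s3:::amzn-s3-demo-bucket/*"
  ],
  "Condition": {
    "Bool": { "aws:SecureTransport": "false" }
  }
}
```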

## Access management use cases


Amazon S3 provides resource owners with a variety of tools for granting access. The S3 access management tool that you use depends on the S3 resources that you want to share, the identities that you are granting access to, and the actions that you want to allow or deny. You might want to use one or a combination of S3 access management tools to manage access to your S3 resources.

In most cases, you can use an access policy to manage permissions. An access policy can be a resource-based policy, which is attached to a resource, such as a bucket, or another Amazon S3 resource ([S3 resources](#access-management-resources)). An access policy can also be an identity-based policy, which is attached to an AWS Identity and Access Management (IAM) user, group, or role in your account. You might find that a bucket policy works better for your use case. For more information, see [Bucket policies for Amazon S3](bucket-policies.md). Alternatively, with AWS Identity and Access Management (IAM), you can create IAM users, groups, and roles within your AWS account and manage their access to buckets and objects through identity-based policies. For more information, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).

To help you navigate these access management options, the following are common Amazon S3 customer use cases and recommendations for each of the S3 access management tools. 

### The AWS account owner wants to share buckets only with users within the same account


All access management tools can fulfill this basic use case. We recommend the following access management tools for this use case:
+ **Bucket policy** – If you want to grant access to one bucket or a small number of buckets, or if your bucket access permissions are similar from bucket to bucket, use a bucket policy. With bucket policies, you manage one policy for each bucket. For more information, see [Bucket policies for Amazon S3](bucket-policies.md).
+ **Identity-based policy** – If you have a very large number of buckets with different access permissions for each bucket, and only a few user roles to manage, you can use an IAM policy for users, groups, or roles. IAM policies are also a good option if you are managing user access to other AWS resources, as well as Amazon S3 resources. For more information, see [Example 1: Bucket owner granting its users bucket permissions](example-walkthroughs-managing-access-example1.md).
+ **S3 Access Grants** – You can use S3 Access Grants to grant access to your S3 buckets, prefixes, or objects. With S3 Access Grants, you can specify varying object-level permissions at scale, whereas bucket policies are limited to 20 KB in size. For more information, see [Getting started with S3 Access Grants](access-grants-get-started.md).
+ **Access Points** – You can use Access Points, which are named network endpoints that are attached to a bucket. A bucket can have up to 10,000 access points attached, and for each access point you can enforce distinct permissions and network controls to give you detailed control over access to your S3 objects. For more information, see [Managing access to shared datasets with access points](access-points.md). 

### The AWS account owner wants to share buckets or objects with users from another AWS account (cross-account)


To grant permission to another AWS account, you must use a bucket policy or one of the following recommended access management tools. You cannot use an identity-based access policy for this use case. For more information about granting cross-account access, see [How do I provide cross-account access to objects that are in Amazon S3 buckets?](https://repost.aws/knowledge-center/cross-account-access-s3)

We recommend the following access management tools for this use case:
+ **Bucket policy** – With bucket policies, you manage one policy for each bucket. For more information, see [Bucket policies for Amazon S3](bucket-policies.md).
+ **S3 Access Grants** – You can use S3 Access Grants to grant cross-account permissions to your S3 buckets, prefixes, or objects. With S3 Access Grants, you can specify varying object-level permissions at scale, whereas bucket policies are limited to 20 KB in size. For more information, see [Getting started with S3 Access Grants](access-grants-get-started.md).
+ **Access Points** – You can use Access Points, which are named network endpoints that are attached to a bucket. A bucket can have up to 10,000 access points attached, and for each access point you can enforce distinct permissions and network controls to give you detailed control over access to your S3 objects. For more information, see [Managing access to shared datasets with access points](access-points.md). 

### The AWS account owner or bucket owner must grant permissions at the object-level or prefix-level, and these permissions vary from object to object or prefix to prefix


In a bucket policy, for example, you can grant access to the objects within a bucket that share a specific [key name prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) or have a specific tag. You can grant read permission on objects starting with the key name prefix `logs/`. However, if your access permissions vary by object, granting permissions to individual objects by using a bucket policy might not be practical, especially since bucket policies are limited to 20 KB in size.
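
For example, a bucket policy statement granting read access under the `logs/` prefix might look like the following sketch, where the bucket name, account ID, and role name are hypothetical placeholders:

```
{
  "Sid": "ReadLogsPrefix",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/log-reader" },
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/logs/*"
}
```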

We recommend the following access management tools for this use case:
+ **S3 Access Grants** – You can use S3 Access Grants to manage object-level or prefix-level permissions. Unlike bucket policies, you can use S3 Access Grants to specify varying object-level permissions at scale. Bucket policies are limited to 20 KB in size. For more information, see [Getting started with S3 Access Grants](access-grants-get-started.md).
+ **Access Points** – You can use access points to manage object-level or prefix-level permissions. Access Points are named network endpoints that are attached to a bucket. A bucket can have up to 10,000 access points attached, and for each access point you can enforce distinct permissions and network controls to give you detailed control over access to your S3 objects. For more information, see [Managing access to shared datasets with access points](access-points.md).
+ **ACLs** – We do not recommend using access control lists (ACLs), especially because ACLs are limited to 100 grants per object. However, if you choose to turn on ACLs, in your bucket settings, set *Object Ownership* to *Bucket owner preferred* with *ACLs enabled*. With this setting, new objects that are written with the `bucket-owner-full-control` canned ACL are automatically owned by the bucket owner rather than the object writer. You can then use an object ACL, which is an XML-formatted access policy, to grant other users access to the object. For more information, see [Access control list (ACL) overview](acl-overview.md).

### The AWS account owner or bucket owner wants to limit bucket access only to specific account IDs


We recommend the following access management tools for this use case:
+ **Bucket policy** – With bucket policies, you manage one policy for each bucket. For more information, see [Bucket policies for Amazon S3](bucket-policies.md).
+ **Access Points** – Access Points are named network endpoints that are attached to a bucket. A bucket can have up to 10,000 access points attached, and for each access point you can enforce distinct permissions and network controls to give you detailed control over access to your S3 objects. For more information, see [Managing access to shared datasets with access points](access-points.md).
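One way to sketch this in a bucket policy is a `Deny` statement with the `aws:PrincipalAccount` condition key, which rejects requests from any account not on an allow list. The account IDs and bucket name below are placeholders.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIfNotAllowedAccounts",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket",
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalAccount": ["111122223333", "444455556666"]
        }
      }
    }
  ]
}
```

Because an explicit `Deny` overrides any `Allow`, this pattern limits bucket access to the listed account IDs regardless of other grants; the listed accounts still need their own `Allow` statements or identity-based policies.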

### The AWS account owner or bucket owner wants distinct endpoints for every user or application that accesses their data


We recommend the following access management tool for this use case:
+ **Access Points** – Access Points are named network endpoints that are attached to a bucket. A bucket can have up to 10,000 access points attached, and for each access point you can enforce distinct permissions and network controls to give you detailed control over access to your S3 objects. Each access point enforces a customized access point policy that works in conjunction with the bucket policy that is attached to the underlying bucket. For more information, see [Managing access to shared datasets with access points](access-points.md).

### The AWS account owner or bucket owner must manage access from Virtual Private Cloud (VPC) endpoints for S3


Virtual Private Cloud (VPC) endpoints for Amazon S3 are logical entities within a VPC that allow connectivity only to S3. We recommend the following access management tools for this use case:
+ **Buckets in a VPC setting** – You can use a bucket policy to control who is allowed to access your buckets and which VPC endpoints they can access. For more information, see [Controlling access from VPC endpoints with bucket policies](example-bucket-policies-vpc-endpoint.md).
+ **Access Points** – If you choose to set up access points, you can use an access point policy. You can configure any access point to accept requests only from a virtual private cloud (VPC) to restrict Amazon S3 data access to a private network. You can also configure custom block public access settings for each access point. For more information, see [Managing access to shared datasets with access points](access-points.md).
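For the bucket policy approach, a common sketch is a `Deny` statement with the `aws:SourceVpce` condition key, which blocks any request that doesn't arrive through the specified VPC endpoint. The endpooint ID `vpce-1a2b3c4d` and bucket name are placeholders.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket",
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
```

Take care when testing a policy like this: it also blocks console and CLI access from outside the VPC, including your own.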

### The AWS account owner or bucket owner must make a static website publicly available


With S3, you can host a static website from an S3 bucket and allow anyone to view the website's content. 

We recommend the following access management tools for this use case:
+ **Amazon CloudFront** – This solution allows you to host an Amazon S3 static website publicly while continuing to block all public access to the bucket's content. If you want to keep all four S3 Block Public Access settings enabled and host an S3 static website, you can use Amazon CloudFront origin access control (OAC). Amazon CloudFront provides the capabilities required to set up a secure static website. Amazon S3 static websites that do not use this solution support only HTTP endpoints. CloudFront uses the durable storage of Amazon S3 while serving your site over HTTPS, which encrypts requests and protects against common cyberattacks.

  For more information, see [Getting started with a secure static website](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html) in the *Amazon CloudFront Developer Guide*.
+ **Making your Amazon S3 bucket publicly accessible** – You can configure a bucket to be used as a publicly accessed static website. 
**Warning**  
We do not recommend this method. Instead, we recommend you use Amazon S3 static websites as a part of Amazon CloudFront. For more information, see the previous option, or see [Getting started with a secure static website](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html).

  To create an Amazon S3 static website without Amazon CloudFront, you must first turn off all Block Public Access settings. When writing the bucket policy for your static website, make sure that you allow only the `s3:GetObject` action, not `s3:ListBucket` or `s3:PutObject` permissions. This helps make sure that users cannot list all the objects in your bucket or add their own content. For more information, see [Setting permissions for website access](WebsiteAccessPermissionsReqd.md).
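As a sketch, the public-read website bucket policy described above would look like the following; the bucket name is a placeholder.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```

Note that the `Resource` covers only objects (`/*`), not the bucket ARN itself, so anonymous users can fetch individual pages but cannot list the bucket's contents.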

### The AWS account owner or bucket owner wants to make the content of a bucket publicly available


When creating a new Amazon S3 bucket, the *Block Public Access* setting is enabled by default. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md). 

We do not recommend allowing public access to your bucket. However, if you must do so for a particular use case, we recommend the following access management tool for this use case:
+ **Disable the Block Public Access settings** – A bucket owner can allow unauthenticated requests to the bucket. For example, unauthenticated [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) requests are allowed when a bucket has a public bucket policy or when a bucket ACL grants public access. Unauthenticated requests are made by anonymous users, who are represented in ACLs by the specific canonical user ID `65a011a29cdf8ec533ec3d1ccaae921c`. If an ACL grants `WRITE` or `FULL_CONTROL` to the *All Users* group or to the anonymous user, the resource is publicly accessible. For more information about public bucket policies and public access control lists (ACLs), see [The meaning of "public"](access-control-block-public-access.md#access-control-block-public-access-policy-status).

### The AWS account owner or bucket owner has exceeded access policy size limits


Both bucket policies and identity-based policies have a 20 KB size limit. If your access permission requirements are complex, you might exceed this size limit. 

We recommend the following access management tools for this use case:
+ **Access Points** – Use access points if they work with your use case. With access points, a bucket can have multiple named network endpoints, each with its own access point policy that works with the underlying bucket policy. However, access points can act only on objects, not buckets, and do not support cross-Region replication. For more information, see [Managing access to shared datasets with access points](access-points.md).
+ **S3 Access Grants** – Use S3 Access Grants, which supports a very large number of grants that give access to buckets, prefixes, or objects. For more information, see [Getting started with S3 Access Grants](access-grants-get-started.md).

### The AWS account owner or admin role wants to grant bucket, prefix, or object access directly to users or groups in a corporate directory


Instead of managing users, groups, and roles through AWS Identity and Access Management (IAM), you can add your corporate directory to AWS IAM Identity Center. For more information, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html). 

After you add your corporate directory to AWS IAM Identity Center, we recommend that you use the following access management tool to grant corporate directory identities access to your S3 resources:
+ **S3 Access Grants** – Use S3 Access Grants, which supports granting access to users or roles in your corporate directory. For more information, see [Getting started with S3 Access Grants](access-grants-get-started.md).

### The AWS account owner or bucket owner wants to give the AWS CloudFront service access to write CloudFront logs to an S3 bucket


We recommend the following access management tool for this use case:
+ **Bucket ACL** – The only recommended use case for bucket ACLs is to grant permissions to certain AWS services, such as the Amazon CloudFront `awslogsdelivery` account. When you create or update a distribution and turn on CloudFront logging, CloudFront updates the bucket ACL to give the `awslogsdelivery` account `FULL_CONTROL` permissions to write logs to your bucket. For more information, see [Permissions required to configure standard logging and to access your log files](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#AccessLogsBucketAndFileOwnership) in the *Amazon CloudFront Developer Guide*. If the bucket that stores the logs uses the *Bucket owner enforced* setting for S3 Object Ownership to turn off ACLs, CloudFront cannot write logs to the bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

### You, as the bucket owner, want to maintain full control of objects that are added to the bucket by other users


You can grant other accounts access to upload objects to your bucket by using a bucket policy, access point, or S3 Access Grants. If you have granted cross-account access to your bucket, you can make sure that any objects uploaded to your bucket remain under your full control. 

We recommend the following access management tool for this use case:
+ **Object Ownership** – Keep the bucket-level setting *Object Ownership* at the default *Bucket owner enforced* setting.

## Access management troubleshooting


The following resources can help you troubleshoot any issues with S3 access management: 

**Troubleshooting Access Denied (403 Forbidden) errors**  
If you encounter access denial issues, check the account-level and bucket-level settings. Also, check the access management feature that you are using to grant access to make sure that the policy, setting, or configuration is correct. For more information about common causes of Access Denied (403 Forbidden) errors in Amazon S3, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

**IAM Access Analyzer for S3**  
If you do not want to make any of your resources publicly available, or if you want to limit public access to your resources, you can use IAM Access Analyzer for S3. On the Amazon S3 console, use IAM Access Analyzer for S3 to review all buckets that have bucket access control lists (ACLs), bucket policies, or access point policies that grant public or shared access. IAM Access Analyzer for S3 alerts you to buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings that report the source and level of public or shared access. 

In IAM Access Analyzer for S3, you can block all public access to a bucket with a single action. We recommend that you block all public access to your buckets, unless you require public access to support a specific use case. Before you block all public access, make sure that your applications will continue to work correctly without public access. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

You can also review your bucket-level permission settings to configure detailed levels of access. For specific and verified use cases that require public or shared access, you can acknowledge and record your intent for the bucket to remain public or shared by archiving the findings for the bucket. You can revisit and modify these bucket configurations at any time. You can also download your findings as a CSV report for auditing purposes.

IAM Access Analyzer for S3 is available at no extra cost on the Amazon S3 console. IAM Access Analyzer for S3 is powered by AWS Identity and Access Management Access Analyzer. To use IAM Access Analyzer for S3 on the Amazon S3 console, you must visit the [IAM console](https://console.aws.amazon.com/iam/) and create an account-level analyzer in IAM Access Analyzer for each individual Region. 

For more information about IAM Access Analyzer for S3, see [Reviewing bucket access using IAM Access Analyzer for S3](access-analyzer.md).

**Logging and monitoring**  
Monitoring is an important part of maintaining the reliability, availability, and performance of your Amazon S3 solutions so that you can more easily debug an access failure. Logging can provide insight into any errors that users are receiving, as well as what requests are made and when. AWS provides several tools for monitoring your Amazon S3 resources, such as the following: 
+ AWS CloudTrail
+ Amazon S3 Access Logs
+ AWS Trusted Advisor
+ Amazon CloudWatch

For more information, see [Logging and monitoring in Amazon S3](monitoring-overview.md).

# Identity and Access Management for Amazon S3
Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be *authenticated* (signed in) and *authorized* (have permissions) to use Amazon S3 resources. IAM is an AWS service that you can use with no additional charge.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**Topics**
+ [Audience](#security_iam_audience)
+ [Authenticating with identities](#security_iam_authentication)
+ [Managing access using policies](#security_iam_access-manage)
+ [How Amazon S3 works with IAM](security_iam_service-with-iam.md)
+ [How Amazon S3 authorizes a request](how-s3-evaluates-access-control.md)
+ [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md)
+ [Policies and permissions in Amazon S3](access-policy-language-overview.md)
+ [Bucket policies for Amazon S3](bucket-policies.md)
+ [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md)
+ [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md)
+ [Using service-linked roles for Amazon S3 Storage Lens](using-service-linked-roles.md)
+ [Troubleshooting Amazon S3 identity and access](security_iam_troubleshoot.md)
+ [AWS managed policies for Amazon S3](security-iam-awsmanpol.md)

## Audience


How you use AWS Identity and Access Management (IAM) differs based on your role:
+ **Service user** – Request permissions from your administrator if you cannot access features (see [Troubleshooting Amazon S3 identity and access](security_iam_troubleshoot.md))
+ **Service administrator** – Determine user access and submit permission requests (see [How Amazon S3 works with IAM](security_iam_service-with-iam.md))
+ **IAM administrator** – Write policies to manage access (see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md))

## Authenticating with identities


Authentication is how you sign in to AWS using your identity credentials. You must be authenticated as the AWS account root user, an IAM user, or by assuming an IAM role.

You can sign in as a federated identity using credentials from an identity source like AWS IAM Identity Center (IAM Identity Center), single sign-on authentication, or Google/Facebook credentials. For more information about signing in, see [How to sign in to your AWS account](https://docs.aws.amazon.com/signin/latest/userguide/how-to-sign-in.html) in the *AWS Sign-In User Guide*.

For programmatic access, AWS provides SDKs and the AWS CLI to cryptographically sign requests using your credentials. For more information, see [AWS Signature Version 4 for API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html) in the *IAM User Guide*.

### AWS account root user


 When you create an AWS account, you begin with one sign-in identity called the AWS account *root user* that has complete access to all AWS services and resources. We strongly recommend that you don't use the root user for everyday tasks. For tasks that require root user credentials, see [Tasks that require root user credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks) in the *IAM User Guide*. 

### Federated identity


As a best practice, require human users to use federation with an identity provider to access AWS services using temporary credentials.

A *federated identity* is a user from your enterprise directory, web identity provider, or Directory Service that accesses AWS services using credentials from an identity source. Federated identities assume roles that provide temporary credentials.

For centralized access management, we recommend AWS IAM Identity Center. For more information, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) in the *AWS IAM Identity Center User Guide*.

### IAM users and groups


An *[IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)* is an identity with specific permissions for a single person or application. We recommend using temporary credentials instead of IAM users with long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

An *[IAM group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html)* specifies a collection of IAM users and makes permissions easier to manage for large sets of users. For more information, see [Use cases for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/gs-identities-iam-users.html) in the *IAM User Guide*.

### IAM roles


An *[IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)* is an identity with specific permissions that provides temporary credentials. You can assume a role by [switching from a user to an IAM role (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html) or by calling an AWS CLI or AWS API operation. For more information, see [Methods to assume a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage-assume.html) in the *IAM User Guide*.

IAM roles are useful for federated user access, temporary IAM user permissions, cross-account access, cross-service access, and applications running on Amazon EC2. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

## Managing access using policies


You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy defines permissions when associated with an identity or resource. AWS evaluates these policies when a principal makes a request. Most policies are stored in AWS as JSON documents. For more information about JSON policy documents, see [Overview of JSON policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#access_policies-json) in the *IAM User Guide*.

Using policies, administrators specify who has access to what by defining which **principal** can perform **actions** on what **resources**, and under what **conditions**.

By default, users and roles have no permissions. An IAM administrator creates IAM policies and adds them to roles, which users can then assume. IAM policies define permissions regardless of the method used to perform the operation.

### Identity-based policies


Identity-based policies are JSON permissions policy documents that you attach to an identity (user, group, or role). These policies control what actions identities can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

Identity-based policies can be *inline policies* (embedded directly into a single identity) or *managed policies* (standalone policies attached to multiple identities). To learn how to choose between managed and inline policies, see [Choose between managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-choosing-managed-or-inline.html) in the *IAM User Guide*.

### Resource-based policies


Resource-based policies are JSON policy documents that you attach to a resource. Examples include IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy.

Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy.

### Other policy types


AWS supports additional policy types that can set the maximum permissions granted by more common policy types:
+ **Permissions boundaries** – Set the maximum permissions that an identity-based policy can grant to an IAM entity. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*.
+ **Service control policies (SCPs)** – Specify the maximum permissions for an organization or organizational unit in AWS Organizations. For more information, see [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide*.
+ **Resource control policies (RCPs)** – Set the maximum available permissions for resources in your accounts. For more information, see [Resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) in the *AWS Organizations User Guide*.
+ **Session policies** – Advanced policies passed as a parameter when creating a temporary session for a role or federated user. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) in the *IAM User Guide*.

### Multiple policy types


When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*.

# How Amazon S3 works with IAM
How Amazon S3 works with IAM

Before you use IAM to manage access to Amazon S3, learn what IAM features are available to use with Amazon S3.

**IAM features you can use with Amazon S3**  

| IAM feature | Amazon S3 support | 
| --- | --- | 
|  [Identity-based policies](#security_iam_service-with-iam-id-based-policies)  |   Yes  | 
|  [Resource-based policies](#security_iam_service-with-iam-resource-based-policies)  |   Yes  | 
|  [Policy actions](#security_iam_service-with-iam-id-based-policies-actions)  |   Yes  | 
|  [Policy resources](#security_iam_service-with-iam-id-based-policies-resources)  |   Yes  | 
|  [Policy condition keys (service-specific)](#security_iam_service-with-iam-id-based-policies-conditionkeys)  |   Yes  | 
|  [ACLs](#security_iam_service-with-iam-acls)  |   Yes  | 
|  [ABAC (tags in policies)](#security_iam_service-with-iam-tags)  |   Partial  | 
|  [Temporary credentials](#security_iam_service-with-iam-roles-tempcreds)  |   Yes  | 
|  [Forward access sessions (FAS)](#security_iam_service-with-iam-principal-permissions)  |   Yes  | 
|  [Service roles](#security_iam_service-with-iam-roles-service)  |   Yes  | 
|  [Service-linked roles](#security_iam_service-with-iam-roles-service-linked)  |   Partial  | 

To get a high-level view of how Amazon S3 and other AWS services work with most IAM features, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Identity-based policies for Amazon S3
Identity-based policies

**Supports identity-based policies:** Yes

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. To learn about all of the elements that you can use in a JSON policy, see [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*.

### Identity-based policy examples for Amazon S3


To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).

## Resource-based policies within Amazon S3
Resource-based policies

**Supports resource-based policies:** Yes

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

To enable cross-account access, you can specify an entire account or IAM entities in another account as the principal in a resource-based policy. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

The Amazon S3 service supports *bucket policies*, *access point policies*, and *access grants*:
+ Bucket policies are resource-based policies that are attached to an Amazon S3 bucket. A bucket policy defines which principals can perform actions on the bucket.
+ Access point policies are resource-based policies that are evaluated in conjunction with the underlying bucket policy.
+ Access grants are a simplified model for defining access permissions to data in Amazon S3 by prefix, bucket, or object. For information about S3 Access Grants, see [Managing access with S3 Access Grants](access-grants.md).

### Principals for bucket policies
Principals

The `Principal` element specifies the user, account, service, or other entity that is either allowed or denied access to a resource. The following are examples of specifying `Principal`. For more information, see [Principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in the *IAM User Guide*.

#### Grant permissions to an AWS account


To grant permissions to an AWS account, identify the account using the following format.

```
"AWS":"account-ARN"
```

The following are examples.

```
"Principal":{"AWS":"arn:aws:iam::AccountIDWithoutHyphens:root"}
```

```
"Principal":{"AWS":["arn:aws:iam::AccountID1WithoutHyphens:root","arn:aws:iam::AccountID2WithoutHyphens:root"]}
```

**Note**  
The preceding examples specify the account's root user ARN, which delegates permissions to the account level. However, identity-based IAM policies are still required to grant permissions to the specific roles and users in the account.

#### Grant permissions to an IAM user


To grant permission to an IAM user within your account, you must provide an `"AWS":"user-ARN"` name-value pair.

```
"Principal":{"AWS":"arn:aws:iam::account-number-without-hyphens:user/username"}
```

For detailed examples that provide step-by-step instructions, see [Example 1: Bucket owner granting its users bucket permissions](example-walkthroughs-managing-access-example1.md) and [Example 3: Bucket owner granting permissions to objects it does not own](example-walkthroughs-managing-access-example3.md).

**Note**  
If an IAM identity is deleted after you update your bucket policy, the bucket policy will show a unique identifier in the principal element instead of an ARN. These unique IDs are never reused, so you can safely remove principals with unique identifiers from all of your policy statements. For more information about unique identifiers, see [IAM identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) in the *IAM User Guide*.

#### Grant anonymous permissions


**Warning**  
Use caution when granting anonymous access to your Amazon S3 bucket. When you grant anonymous access, anyone in the world can access your bucket. We highly recommend that you never grant any kind of anonymous write access to your S3 bucket.

To grant permission to everyone, also referred to as granting anonymous access, you set the wildcard (`"*"`) as the `Principal` value. For example, if you configure your bucket as a static website, you typically want all of the objects in the bucket to be publicly accessible.

```
"Principal":"*"
```

```
"Principal":{"AWS":"*"}
```

Using `"Principal": "*"` with an `Allow` effect in a resource-based policy allows anyone, even if they’re not signed in to AWS, to access your resource. 

Using `"Principal" : { "AWS" : "*" }` with an `Allow` effect in a resource-based policy allows any root user, IAM user, assumed-role session, or federated user in any account in the same partition to access your resource.

For anonymous users, these two methods are equivalent. For more information, see [All principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-anonymous) in the *IAM User Guide*.
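As a sketch of the static-website use case, the following statement makes every object in the placeholder bucket `amzn-s3-demo-bucket` publicly readable:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

Before a policy like this can take effect, the bucket's Block Public Access settings must allow public bucket policies.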

You cannot use a wildcard to match part of a principal name or ARN.

**Important**  
In Amazon S3 bucket policies, the principals `"*"` and `{"AWS": "*"}` behave identically.

#### Restrict resource permissions


You can also use a resource-based policy to restrict access to resources that would otherwise be available to IAM principals. Use a `Deny` statement to prevent access.

The following example blocks access if a secure transport protocol isn’t used:

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBucketAccessIfSTPNotUsed",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
```

------

For this policy, it's a best practice to use `"Principal": "*"` so that the restriction applies to everyone, rather than attempting to deny access only to specific accounts or principals.

#### Require access through CloudFront URLs


You can require that your users access your Amazon S3 content only by using CloudFront URLs instead of Amazon S3 URLs. To do this, create a CloudFront origin access control (OAC), and then change the permissions on your S3 bucket. In your bucket policy, set the CloudFront service principal as follows:

```
"Principal":{"Service":"cloudfront.amazonaws.com"}
```

Use a `Condition` element in the policy to allow CloudFront to access the bucket only when the request is on behalf of the CloudFront distribution that contains the S3 origin.

```
        "Condition": {
           "StringEquals": {
              "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/CloudFront-distribution-ID"
           }
        }
```
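Put together, a complete statement for this pattern might look like the following sketch. The bucket name, account ID, and distribution ID are placeholders:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipalReadOnly",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/CloudFront-distribution-ID"
                }
            }
        }
    ]
}
```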

For more information about requiring S3 access through CloudFront URLs, see [Restricting access to an Amazon Simple Storage Service origin](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*. For more information about the security and privacy benefits of using Amazon CloudFront, see [Configuring secure access and restricting access to content](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecurityAndPrivateContent.html). 

### Resource-based policy examples for Amazon S3

+ To view policy examples for Amazon S3 buckets, see [Bucket policies for Amazon S3](bucket-policies.md).
+ To view policy examples for access points, see [Configuring IAM policies for using access points](access-points-policies.md).

## Policy actions for Amazon S3

**Supports policy actions:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Action` element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Include actions in a policy to grant permissions to perform the associated operation.

The following list shows the different types of mapping relationships between S3 API operations and the required policy actions.
+ One-to-one mapping with the same name. For example, to use the `PutBucketPolicy` API operation, the `s3:PutBucketPolicy` policy action is required.
+ One-to-one mapping with different names. For example, to use the `ListObjectsV2` API operation, the `s3:ListBucket` policy action is required.
+ One-to-many mapping. For example, to use the `HeadObject` API operation, the `s3:GetObject` policy action is required. In addition, when you use S3 Object Lock and want to get an object's legal hold status or retention settings, the corresponding `s3:GetObjectLegalHold` or `s3:GetObjectRetention` policy action is also required before you can use the `HeadObject` API operation.
+ Many-to-one mapping. For example, to use either the `ListObjectsV2` or `HeadBucket` API operation, the `s3:ListBucket` policy action is required.
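To illustrate the one-to-many case, the following identity-based policy sketch combines the actions needed to call `HeadObject` on objects protected by S3 Object Lock, including retrieving their legal hold and retention settings. The bucket name is a placeholder:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowHeadObjectWithObjectLockMetadata",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectLegalHold",
                "s3:GetObjectRetention"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```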



To see a list of Amazon S3 actions for use in policies, see [Actions defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-actions-as-permissions) in the *Service Authorization Reference*. For a complete list of Amazon S3 API operations, see [Amazon S3 API Actions](https://docs.aws.amazon.com//AmazonS3/latest/API/API_Operations.html) in the *Amazon Simple Storage Service API Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

Policy actions in Amazon S3 use the following prefix before the action:

```
s3
```

To specify multiple actions in a single statement, separate them with commas.

```
"Action": [
    "s3:action1",
    "s3:action2"
]
```





### Bucket operations


Bucket operations are S3 API operations that operate on the bucket resource type. For example, `CreateBucket`, `ListObjectsV2`, and `PutBucketPolicy`. S3 policy actions for bucket operations require the `Resource` element in bucket policies or IAM identity-based policies to be the S3 bucket type Amazon Resource Name (ARN) identifier in the following example format. 

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
```

The following bucket policy grants the user `Akua` in account `111122223333` the `s3:ListBucket` permission to perform the [ListObjectsV2](https://docs.aws.amazon.com//AmazonS3/latest/API/API_ListObjectsV2.html) API operation and list objects in an S3 bucket.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAkuaToListObjectsInTheBucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        }
    ]
}
```

------
<a name="bucket-operations-ap"></a>
**Bucket operations in policies for access points for general purpose buckets**  
Permissions granted in an access point for general purpose buckets policy are effective only if the underlying bucket allows the same permissions. When you use S3 Access Points, you must delegate access control from the bucket to the access point or add the same permissions in the access point policies to the underlying bucket's policy. For more information, see [Configuring IAM policies for using access points](access-points-policies.md). In access point policies, S3 policy actions for bucket operations require you to use the access point ARN for the `Resource` element in the following format. 

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point"
```

The following access point policy grants the user `Akua` in account `111122223333` the `s3:ListBucket` permission to perform the [ListObjectsV2](https://docs.aws.amazon.com//AmazonS3/latest/API/API_ListObjectsV2.html) API operation through the S3 access point named `example-access-point`. This permission allows `Akua` to list the objects in the bucket that's associated with `example-access-point`. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAkuaToListObjectsInBucketThroughAccessPoint",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:us-west-2:111122223333:accesspoint/example-access-point"
        }
    ]
}
```

------

**Note**  
Not all bucket operations are supported by access points for general purpose buckets. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
<a name="bucket-operations-ap-directory-buckets"></a>
**Bucket operations in policies for access points for directory buckets**  
Permissions granted in an access point for directory buckets policy are effective only if the underlying bucket allows the same permissions. When you use S3 Access Points, you must delegate access control from the bucket to the access point or add the same permissions in the access point policies to the underlying bucket's policy. For more information, see [Configuring IAM policies for using access points for directory buckets](access-points-directory-buckets-policies.md). In access point policies, S3 policy actions for bucket operations require you to use the access point ARN for the `Resource` element in the following format. 

```
"Resource": "arn:aws:s3express:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3"
```

The following access point policy grants the user `Akua` in account `111122223333` the `s3:ListBucket` permission to perform the [ListObjectsV2](https://docs.aws.amazon.com//AmazonS3/latest/API/API_ListObjectsV2.html) API operation through the access point named `example-access-point--usw2-az1--xa-s3`. This permission allows `Akua` to list the objects in the bucket that's associated with `example-access-point--usw2-az1--xa-s3`. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAkuaToListObjectsInTheBucketThroughAccessPoint",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3express:us-west-2:111122223333:accesspoint/example-access-point--usw2-az1--xa-s3"
        }
    ]
}
```

------

**Note**  
Not all bucket operations are supported by access points for directory buckets. For more information, see [Object operations for access points for directory buckets](access-points-directory-buckets-service-api-support.md).

### Object operations


Object operations are S3 API operations that act upon the object resource type. For example, `GetObject`, `PutObject`, and `DeleteObject`. S3 policy actions for object operations require the `Resource` element in policies to be the S3 object ARN in the following example formats. 

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
```

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket/prefix/*"
```

**Note**  
The object ARN must contain a forward slash after the bucket name, as seen in the previous examples.

The following bucket policy grants the user `Akua` in account `111122223333` the `s3:PutObject` permission. This permission allows `Akua` to use the [PutObject](https://docs.aws.amazon.com//AmazonS3/latest/API/API_PutObject.html) API operation to upload objects to the S3 bucket named `amzn-s3-demo-bucket`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAkuaToUploadObjects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

------
<a name="object-operations-ap"></a>
**Object operations in access point policies**  
When you use S3 Access Points to control access to object operations, you can use access point policies. When you use access point policies, S3 policy actions for object operations require you to use the access point ARN for the `Resource` element in the following format: `arn:aws:s3:region:account-id:accesspoint/access-point-name/object/resource`. For object operations that use access points, you must include the `/object/` value after the whole access point ARN in the `Resource` element. Here are some examples.

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point/object/*"
```

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point/object/prefix/*"
```

The following access point policy grants the user `Akua` in account `111122223333` the `s3:GetObject` permission. This permission allows `Akua` to perform the [GetObject](https://docs.aws.amazon.com//AmazonS3/latest/API/API_GetObject.html) API operation through the access point named `example-access-point` on all objects in the bucket that's associated with the access point. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAkuaToGetObjectsThroughAccessPoint",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-access-point/object/*"
        }
    ]
}
```

------

**Note**  
Not all object operations are supported by access points. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
<a name="object-operations-ap-directory-buckets"></a>
**Object operations in policies for access points for directory buckets**  
When you use access points for directory buckets to control access to object operations, you can use access point policies. When you use access point policies, S3 policy actions for object operations require you to use the access point ARN for the `Resource` element in the following format: `arn:aws:s3express:region:account-id:accesspoint/access-point-name/object/resource`. For object operations that use access points, you must include the `/object/` value after the access point ARN in the `Resource` element. Here are some examples.

```
"Resource": "arn:aws:s3express:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3/object/*"
```

```
"Resource": "arn:aws:s3express:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3/object/prefix/*"
```

The following access point policy grants the user `Akua` in account `111122223333` the `s3:GetObject` permission. This permission allows `Akua` to perform the [GetObject](https://docs.aws.amazon.com//AmazonS3/latest/API/API_GetObject.html) API operation through the access point named `example-access-point--usw2-az1--xa-s3` on all objects in the bucket that's associated with the access point. 

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAkuaToGetObjectsThroughAccessPoint",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3express:CreateSession",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3express:us-west-2:111122223333:accesspoint/example-access-point--usw2-az1--xa-s3/object/*"
        }
    ]
}
```

**Note**  
Not all object operations are supported by access points for directory buckets. For more information, see [Object operations for access points for directory buckets](access-points-directory-buckets-service-api-support.md).

### Access point for general purpose bucket operations


Access point operations are S3 API operations that operate on the `accesspoint` resource type. For example, `CreateAccessPoint`, `DeleteAccessPoint`, and `GetAccessPointPolicy`. S3 policy actions for access point operations can only be used in IAM identity-based policies, not in bucket policies or access point policies. Access point operations require the `Resource` element to be the access point ARN in the following example format. 

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point"
```

The following IAM identity-based policy grants the `s3:GetAccessPointPolicy` permission to perform the [GetAccessPointPolicy](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_GetAccessPointPolicy.html) API operation on the S3 access point named `example-access-point`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantPermissionToRetrieveTheAccessPointPolicyOfAccessPointExampleAccessPoint",
            "Effect": "Allow",
            "Action": [
                "s3:GetAccessPointPolicy"
            ],
            "Resource": "arn:aws:s3:*:123456789012:accesspoint/example-access-point"
        }
    ]
}
```

------

When you use access points to control access to bucket operations, see [Bucket operations in policies for access points for general purpose buckets](#bucket-operations-ap). To control access to object operations, see [Object operations in access point policies](#object-operations-ap). For more information about how to configure access point policies, see [Configuring IAM policies for using access points](access-points-policies.md).

### Access point for directory buckets operations


Access point for directory buckets operations are S3 API operations that operate on the `accesspoint` resource type. For example, `CreateAccessPoint`, `DeleteAccessPoint`, and `GetAccessPointPolicy`. S3 policy actions for these operations can only be used in IAM identity-based policies, not in bucket policies or access point policies. Access point for directory buckets operations require the `Resource` element to be the access point ARN in the following example format. 

```
"Resource": "arn:aws:s3express:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3"
```

The following IAM identity-based policy grants the `s3express:GetAccessPointPolicy` permission to perform the [GetAccessPointPolicy](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_GetAccessPointPolicy.html) API operation on the access point named `example-access-point--usw2-az1--xa-s3`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantPermissionToRetrieveTheAccessPointPolicyOfAccessPointExampleAccessPointUsw2Az1XaS3",
            "Effect": "Allow",
            "Action": [
                "s3express:GetAccessPointPolicy"
            ],
            "Resource": "arn:aws:s3express:*:111122223333:accesspoint/example-access-point--usw2-az1--xa-s3"
        }
    ]
}
```

------

The following IAM identity-based policy grants the `s3express:CreateAccessPoint` permission to create an access point for directory buckets.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantCreateAccessPoint",
            "Effect": "Allow",
            "Action": "s3express:CreateAccessPoint",
            "Resource": "*"
        }
    ]
}
```

The following IAM identity-based policy grants the `s3express:PutAccessPointScope` permission to configure the scope of an access point for directory buckets.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantPutAccessPointScope",
            "Effect": "Allow",
            "Action": "s3express:PutAccessPointScope",
            "Resource": "*"
        }
    ]
}
```

When you use access points for directory buckets to control access to bucket operations, see [Bucket operations in policies for access points for directory buckets](#bucket-operations-ap-directory-buckets). To control access to object operations, see [Object operations in policies for access points for directory buckets](#object-operations-ap-directory-buckets). For more information about how to configure policies for access points for directory buckets, see [Configuring IAM policies for using access points for directory buckets](access-points-directory-buckets-policies.md).

### Object Lambda Access Point operations


With Amazon S3 Object Lambda, you can add your own code to Amazon S3 `GET`, `LIST`, and `HEAD` requests to modify and process data as it is returned to an application. You can make requests through an Object Lambda Access Point, which works the same as making requests through other access points. For more information, see [Transforming objects with S3 Object Lambda](transforming-objects.md).

For more information about how to configure policies for Object Lambda Access Point operations, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).

### Multi-Region Access Point operations


A Multi-Region Access Point provides a global endpoint that applications can use to fulfill requests from S3 buckets that are located in multiple AWS Regions. You can use a Multi-Region Access Point to build multi-Region applications with the same architecture that's used in a single Region, and then run those applications anywhere in the world. For more information, see [Managing multi-Region traffic with Multi-Region Access Points](MultiRegionAccessPoints.md).

For more information about how to configure policies for Multi-Region Access Point operations, see [Multi-Region Access Point policy examples](MultiRegionAccessPointPermissions.md#MultiRegionAccessPointPolicyExamples).

### Batch job operations


S3 Batch Operations job operations are S3 API operations that operate on the job resource type. For example, `DescribeJob` and `CreateJob`. S3 policy actions for job operations can only be used in IAM identity-based policies, not in bucket policies. Also, job operations require the `Resource` element in IAM identity-based policies to be the `job` ARN in the following example format. 

```
"Resource": "arn:aws:s3:*:123456789012:job/*"
```

The following IAM identity-based policy grants the `s3:DescribeJob` permission to perform the [DescribeJob](https://docs.aws.amazon.com//AmazonS3/latest/API/API_DescribeJob.html) API operation on the S3 Batch Operations job named `example-job`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDescribingBatchOperationJob",
            "Effect": "Allow",
            "Action": [
                "s3:DescribeJob"
            ],
            "Resource": "arn:aws:s3:*:111122223333:job/example-job"
        }
    ]
}
```

------

### S3 Storage Lens configuration operations


For more information about how to configure S3 Storage Lens configuration operations, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md).

### Account operations


Account operations are S3 API operations that operate on the account level. For example, `GetPublicAccessBlock` (for account). Account isn't a resource type defined by Amazon S3. S3 policy actions for account operations can only be used in IAM identity-based policies, not in bucket policies. Also, account operations require the `Resource` element in IAM identity-based policies to be `"*"`. 

The following IAM identity-based policy grants the `s3:GetAccountPublicAccessBlock` permission to perform the account-level [GetPublicAccessBlock](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_GetPublicAccessBlock.html) API operation and retrieve the account-level block public access settings.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowRetrievingTheAccountLevelPublicAccessBlockSettings",
         "Effect":"Allow",
         "Action":[
            "s3:GetAccountPublicAccessBlock" 
         ],
         "Resource":[
            "*"
         ]
       }
    ]
}
```

------

### Policy examples for Amazon S3

+ To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).
+ To view examples of Amazon S3 resource-based policies, see [Bucket policies for Amazon S3](bucket-policies.md) and [Configuring IAM policies for using access points](access-points-policies.md).

## Policy resources for Amazon S3

**Supports policy resources:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Resource` JSON policy element specifies the object or objects to which the action applies. As a best practice, specify a resource using its [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). For actions that don't support resource-level permissions, use a wildcard (`*`) to indicate that the statement applies to all resources.

```
"Resource": "*"
```

Some policy statements must cover multiple resources. For example, if a principal needs `s3:GetObject` access to objects under both `example-resource-1` and `example-resource-2`, the principal must have permissions for both resources. To specify multiple resources in a single statement, separate the ARNs with commas, as shown in the following example. 

```
"Resource": [
      "example-resource-1",
      "example-resource-2"
]

Resources in Amazon S3 are buckets, objects, access points, or jobs. In a policy, use the Amazon Resource Name (ARN) of the bucket, object, access point, or job to identify the resource.

To see a complete list of Amazon S3 resource types and their ARNs, see [Resources defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-resources-for-iam-policies) in the *Service Authorization Reference*. To learn with which actions you can specify the ARN of each resource, see [Actions defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-actions-as-permissions).

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

### Wildcard characters in resource ARNs


You can use wildcard characters as part of the resource ARN. You can use the wildcard characters (`*` and `?`) within any ARN segment (the parts separated by colons). An asterisk (`*`) represents any combination of zero or more characters, and a question mark (`?`) represents any single character. You can use multiple `*` or `?` characters in each segment. However, a wildcard character can't span segments. 
+ The following ARN uses the `*` wildcard character in the `relative-ID` part of the ARN to identify all objects in the `amzn-s3-demo-bucket` bucket.

  ```
  arn:aws:s3:::amzn-s3-demo-bucket/*
  ```
+ The following ARN uses `*` to indicate all S3 buckets and objects.

  ```
  arn:aws:s3:::*
  ```
+ The following ARN uses both of the wildcard characters, `*` and `?`, in the `relative-ID` part. This ARN identifies all objects in buckets such as *`amzn-s3-demo-example1bucket`*, `amzn-s3-demo-example2bucket`, `amzn-s3-demo-example3bucket`, and so on.

  ```
  arn:aws:s3:::amzn-s3-demo-example?bucket/*
  ```

### Policy variables for resource ARNs


You can use policy variables in Amazon S3 ARNs. At policy-evaluation time, these predefined variables are replaced by their corresponding values. Suppose that you organize your bucket as a collection of folders, with one folder for each of your users. The folder name is the same as the username. To grant users permission to their folders, you can specify a policy variable in the resource ARN:

```
arn:aws:s3:::bucket_name/developers/${aws:username}/*
```

At runtime, when the policy is evaluated, the variable `${aws:username}` in the resource ARN is substituted with the username of the person who is making the request. 
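As a sketch, an identity-based policy that grants each IAM user object-level access to their own folder (bucket name is a placeholder) could use the variable like this:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUserFolderObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/developers/${aws:username}/*"
        }
    ]
}
```

Note that policy variables require the `2012-10-17` policy version.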





### Policy examples for Amazon S3

+ To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).
+ To view examples of Amazon S3 resource-based policies, see [Bucket policies for Amazon S3](bucket-policies.md) and [Configuring IAM policies for using access points](access-points-policies.md).

## Policy condition keys for Amazon S3

**Supports service-specific policy condition keys:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Condition` element specifies when statements execute based on defined criteria. You can create conditional expressions that use [condition operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html), such as equals or less than, to match the condition in the policy with values in the request. To see all AWS global condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.

Each Amazon S3 condition key maps to the request header of the same name that's allowed by the API operation on which the condition can be set. Amazon S3-specific condition keys dictate the behavior of these same-name request headers. For example, the condition key `s3:VersionId`, which you can use to grant conditional permission for the `s3:GetObjectVersion` permission, defines the behavior of the `versionId` query parameter that you set in a `GET Object` request.

To see a list of Amazon S3 condition keys, see [Condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-policy-keys) in the *Service Authorization Reference*. To learn with which actions and resources you can use a condition key, see [Actions defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-actions-as-permissions).

### Example: Restricting object uploads to objects with a specific storage class
Restricting object uploads to a specific storage class

Suppose that Account A, represented by account ID `123456789012`, owns a bucket. The Account A administrator wants to restrict *`Dave`*, a user in Account A, so that *`Dave`* can upload objects to the bucket only if the object is stored in the `STANDARD_IA` storage class. To restrict object uploads to a specific storage class, the Account A administrator can use the `s3:x-amz-storage-class` condition key, as shown in the following example bucket policy. 

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/Dave"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-storage-class": [
            "STANDARD_IA"
          ]
        }
      }
    }
  ]
}
```

------

In the example, the `Condition` block specifies the `StringEquals` condition that is applied to the specified key-value pair, `"s3:x-amz-storage-class":["STANDARD_IA"]`. There is a set of predefined keys that you can use in expressing a condition. The example uses the `s3:x-amz-storage-class` condition key. This condition requires the user to include the `x-amz-storage-class` header with the value `STANDARD_IA` in every `PutObject` request.

### Policy examples for Amazon S3

+ To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).
+ To view examples of Amazon S3 resource-based policies, see [Bucket policies for Amazon S3](bucket-policies.md) and [Configuring IAM policies for using access points](access-points-policies.md).

## ACLs in Amazon S3
ACLs

**Supports ACLs:** Yes

In Amazon S3, access control lists (ACLs) control which AWS accounts have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

**Important**  
A majority of modern use cases in Amazon S3 no longer require the use of ACLs. 

For information about using ACLs to control access in Amazon S3, see [Managing access with ACLs](acls.md).

## ABAC with Amazon S3
ABAC

**Supports ABAC (tags in policies):** Partial

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes called tags. You can attach tags to IAM entities and AWS resources, then design ABAC policies to allow operations when the principal's tag matches the tag on the resource.

To control access based on tags, you provide tag information in the [condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys` condition keys.

If a service supports all three condition keys for every resource type, then the value is **Yes** for the service. If a service supports all three condition keys for only some resource types, then the value is **Partial**.

For more information about ABAC, see [Define permissions with ABAC authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) in the *IAM User Guide*. To view a tutorial with steps for setting up ABAC, see [Use attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html) in the *IAM User Guide*.

For information about resources that support ABAC in Amazon S3, see [Using tags for attribute-based access control (ABAC)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-abac).

To view example identity-based policies for limiting access to S3 Batch Operations jobs based on tags, see [Controlling permissions for Batch Operations using job tags](batch-ops-job-tags-examples.md).

### ABAC and object tags
ABAC for objects

In ABAC policies, objects use `s3:` tags instead of `aws:` tags. To control access to objects based on object tags, you provide tag information in the [Condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the following tags:
+ `s3:ExistingObjectTag/tag-key`
+ `s3:RequestObjectTagKeys`
+ `s3:RequestObjectTag/tag-key`
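For example, the following bucket policy sketch uses the `s3:ExistingObjectTag/tag-key` condition key. The bucket name, user, and tag key-value pair (`environment` = `production`) are placeholders for illustration. The policy allows the user to retrieve only objects that carry the matching object tag:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetTaggedObjectsOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/Dave"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/environment": "production"
        }
      }
    }
  ]
}
```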

For information about using object tags to control access, including example permission policies, see [Tagging and access control policies](tagging-and-policies.md).

## Using temporary credentials with Amazon S3
Temporary credentials

**Supports temporary credentials:** Yes

Temporary credentials provide short-term access to AWS resources and are automatically created when you use federation or switch roles. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see [Temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) and [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Forward access sessions for Amazon S3
Forward access sessions

**Supports forward access sessions (FAS):** Yes

Forward access sessions (FAS) use the permissions of the principal calling an AWS service, combined with the permissions of the requesting AWS service, to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html).
+ FAS is used by Amazon S3 to make calls to AWS KMS to decrypt an object when SSE-KMS was used to encrypt it. For more information, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md). 
+ S3 Access Grants also uses FAS. After you create an access grant to your S3 data for a particular identity, the grantee requests a temporary credential from S3 Access Grants. S3 Access Grants obtains a temporary credential for the requester from AWS STS and vends the credential to the requester. For more information, see [Request access to Amazon S3 data through S3 Access Grants](access-grants-credentials.md).

## Service roles for Amazon S3
Service roles

**Supports service roles:** Yes

 A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**Warning**  
Changing the permissions for a service role might break Amazon S3 functionality. Edit service roles only when Amazon S3 provides guidance to do so.

## Service-linked roles for Amazon S3
Service-linked roles

**Supports service-linked roles:** Partial

 A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. 

Amazon S3 supports service-linked roles for Amazon S3 Storage Lens. For details about creating or managing Amazon S3 service-linked roles, see [Using service-linked roles for Amazon S3 Storage Lens](using-service-linked-roles.md).

**Amazon S3 Service as a Principal**


| Service name in the policy | S3 feature | More information | 
| --- | --- | --- | 
|  `s3.amazonaws.com`  |  S3 Replication  |  [Setting up live replication overview](replication-how-setup.md)  | 
|  `s3.amazonaws.com`  |  S3 event notifications  |  [Amazon S3 Event Notifications](EventNotifications.md)  | 
|  `s3.amazonaws.com`  |  S3 Inventory  |  [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md)  | 
|  `access-grants.s3.amazonaws.com`  |  S3 Access Grants  |  [Register a location](access-grants-location-register.md)  | 
|  `batchoperations.s3.amazonaws.com`  |  S3 Batch Operations  |  [Granting permissions for Batch Operations](batch-ops-iam-role-policies.md)  | 
|  `logging.s3.amazonaws.com`  |  S3 Server Access Logging  |  [Enabling Amazon S3 server access logging](enable-server-access-logging.md)  | 
|  `storage-lens.s3.amazonaws.com`  |  S3 Storage Lens  |  [Viewing Amazon S3 Storage Lens metrics using a data export](storage_lens_view_metrics_export.md)  | 

# How Amazon S3 authorizes a request
Request authorization

When Amazon S3 receives a request—for example, a bucket or an object operation—it first verifies that the requester has the necessary permissions. Amazon S3 evaluates all the relevant access policies, including user policies and resource-based policies (bucket policy, bucket access control list (ACL), and object ACL), to decide whether to authorize the request.

**Note**  
If the Amazon S3 permission check fails to find valid permissions, an Access Denied (403 Forbidden) error is returned. For more information, see [Troubleshoot Access Denied (403 Forbidden) errors in Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/troubleshoot-403-errors.html).

To determine whether the requester has permission to perform the specific operation, Amazon S3 does the following, in order, when it receives a request:

1. Converts all the relevant access policies (user policy, bucket policy, and ACLs) at run time into a set of policies for evaluation.

1. Evaluates the resulting set of policies in the following steps. In each step, Amazon S3 evaluates a subset of policies in a specific context, based on the context authority. 

   1. **User context** – In the user context, the parent account to which the user belongs is the context authority.

      Amazon S3 evaluates a subset of policies owned by the parent account. This subset includes the user policy that the parent attaches to the user. If the parent also owns the resource in the request (bucket or object), Amazon S3 also evaluates the corresponding resource policies (bucket policy, bucket ACL, and object ACL) at the same time. 

      A user must have permission from the parent account to perform the operation.

      This step applies only if the request is made by a user in an AWS account. If the request is made by using the root user credentials of an AWS account, Amazon S3 skips this step.

   1. **Bucket context** – In the bucket context, Amazon S3 evaluates policies owned by the AWS account that owns the bucket. 

      If the request is for a bucket operation, the requester must have permission from the bucket owner. If the request is for an object, Amazon S3 evaluates all the policies owned by the bucket owner to check whether the bucket owner has explicitly denied access to the object. If there is an explicit deny set, Amazon S3 does not authorize the request. 

   1. **Object context** – If the request is for an object, Amazon S3 evaluates the subset of policies owned by the object owner. 

Following are some example scenarios that illustrate how Amazon S3 authorizes a request.

**Example – Requester is an IAM principal**  
If the requester is an IAM principal, Amazon S3 must determine if the parent AWS account to which the principal belongs has granted the principal necessary permission to perform the operation. In addition, if the request is for a bucket operation, such as a request to list the bucket content, Amazon S3 must verify that the bucket owner has granted permission for the requester to perform the operation. To perform a specific operation on a resource, an IAM principal needs permission from both the parent AWS account to which it belongs and the AWS account that owns the resource.

 

**Example – Requester is an IAM principal – If the request is for an operation on an object that the bucket owner doesn't own**  
If the request is for an operation on an object that the bucket owner doesn't own, in addition to making sure the requester has permissions from the object owner, Amazon S3 must also check the bucket policy to ensure that the bucket owner has not set an explicit deny on the object. A bucket owner (who pays the bill) can explicitly deny access to objects in the bucket regardless of who owns them. The bucket owner can also delete any object in the bucket.  
By default, when another AWS account uploads an object to your S3 general purpose bucket, that account (the object writer) owns the object, has access to it, and can grant other users access to it through access control lists (ACLs). You can use Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner, automatically own every object in your general purpose bucket. As a result, access control for your data is based on policies, such as IAM user policies, S3 bucket policies, virtual private cloud (VPC) endpoint policies, and AWS Organizations service control policies (SCPs). For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
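As an illustration, the following bucket policy is a sketch of such an explicit deny. The bucket name and prefix are placeholders. Because the statement is an explicit `Deny` with a wildcard principal, it blocks read access to objects under the prefix regardless of which account owns those objects. Note that a deny like this also applies to principals in the bucket owner's account unless the `Principal` or a `Condition` narrows its scope:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyReadsRegardlessOfObjectOwner",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/restricted/*"
    }
  ]
}
```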

For more information about how Amazon S3 evaluates access policies to authorize or deny requests for bucket operations and object operations, see the following topics:

**Topics**
+ [How Amazon S3 authorizes a request for a bucket operation](access-control-auth-workflow-bucket-operation.md)
+ [How Amazon S3 authorizes a request for an object operation](access-control-auth-workflow-object-operation.md)

# How Amazon S3 authorizes a request for a bucket operation
For a bucket operation

When Amazon S3 receives a request for a bucket operation, Amazon S3 converts all the relevant permissions into a set of policies to evaluate at run time. Relevant permissions include resource-based permissions (for example, bucket policies and bucket access control lists) and user policies if the request is from an IAM principal. Amazon S3 then evaluates the resulting set of policies in a series of steps according to a specific context—user context or bucket context: 

1. **User context** – If the requester is an IAM principal, the principal must have permission from the parent AWS account to which it belongs. In this step, Amazon S3 evaluates a subset of policies owned by the parent account (also referred to as the context authority). This subset of policies includes the user policy that the parent account attaches to the principal. If the parent also owns the resource in the request (in this case, the bucket), Amazon S3 also evaluates the corresponding resource policies (bucket policy and bucket ACL) at the same time. Whenever a request for a bucket operation is made, the server access logs record the canonical ID of the requester. For more information, see [Logging requests with server access logging](ServerLogs.md).

1. **Bucket context** – The requester must have permissions from the bucket owner to perform a specific bucket operation. In this step, Amazon S3 evaluates a subset of policies owned by the AWS account that owns the bucket. 

   The bucket owner can grant permission by using a bucket policy or bucket ACL. If the AWS account that owns the bucket is also the parent account of an IAM principal, then it can configure bucket permissions in a user policy. 

The following is a graphical illustration of the context-based evaluation for a bucket operation. 

![\[Illustration that shows the context-based evaluation for bucket operation.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/AccessControlAuthorizationFlowBucketResource.png)


The following examples illustrate the evaluation logic. 

## Example 1: Bucket operation requested by bucket owner


 In this example, the bucket owner sends a request for a bucket operation by using the root credentials of the AWS account. 

![\[Illustration that shows a bucket operation requested by bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example10-policy-eval-logic.png)


 Amazon S3 performs the context evaluation as follows:

1.  Because the request is made by using the root user credentials of an AWS account, the user context is not evaluated.

1.  In the bucket context, Amazon S3 reviews the bucket policy to determine if the requester has permission to perform the operation. Amazon S3 authorizes the request. 

## Example 2: Bucket operation requested by an AWS account that is not the bucket owner


In this example, a request is made by using the root user credentials of AWS account 1111-1111-1111 for a bucket operation owned by AWS account 2222-2222-2222. No IAM users are involved in this request.

![\[Illustration that shows a bucket operation requested by an AWS account that is not the bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example20-policy-eval-logic.png)


In this example, Amazon S3 evaluates the context as follows:

1. Because the request is made by using the root user credentials of an AWS account, the user context is not evaluated.

1. In the bucket context, Amazon S3 examines the bucket policy. If the bucket owner (AWS account 2222-2222-2222) has not authorized AWS account 1111-1111-1111 to perform the requested operation, Amazon S3 denies the request. Otherwise, Amazon S3 grants the request and performs the operation.

## Example 3: Bucket operation requested by an IAM principal whose parent AWS account is also the bucket owner


 In the example, the request is sent by Jill, an IAM user in AWS account 1111-1111-1111, which also owns the bucket. 

![\[Illustration that shows a bucket operation requested by an IAM principal and bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example30-policy-eval-logic.png)


 Amazon S3 performs the following context evaluation:

1.  Because the request is from an IAM principal, in the user context, Amazon S3 evaluates all policies that belong to the parent AWS account to determine if Jill has permission to perform the operation. 

    In this example, parent AWS account 1111-1111-1111, to which the principal belongs, is also the bucket owner. As a result, in addition to the user policy, Amazon S3 also evaluates the bucket policy and bucket ACL in the same context because they belong to the same account.

1. Because Amazon S3 evaluated the bucket policy and bucket ACL as part of the user context, it does not evaluate the bucket context.

## Example 4: Bucket operation requested by an IAM principal whose parent AWS account is not the bucket owner


In this example, the request is sent by Jill, an IAM user whose parent AWS account is 1111-1111-1111, but the bucket is owned by another AWS account, 2222-2222-2222. 

![\[Illustration that shows a bucket operation requested by an IAM principal that is not the bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example40-policy-eval-logic.png)


Jill will need permissions from both the parent AWS account and the bucket owner. Amazon S3 evaluates the context as follows:

1. Because the request is from an IAM principal, Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that Jill has the necessary permissions. If Jill has permission, then Amazon S3 moves on to evaluate the bucket context. If Jill doesn't have permission, it denies the request.

1.  In the bucket context, Amazon S3 verifies that bucket owner 2222-2222-2222 has granted Jill (or her parent AWS account) permission to perform the requested operation. If she has that permission, Amazon S3 grants the request and performs the operation. Otherwise, Amazon S3 denies the request. 

# How Amazon S3 authorizes a request for an object operation
For an object operation

When Amazon S3 receives a request for an object operation, it converts all the relevant permissions—resource-based permissions (object access control list (ACL), bucket policy, bucket ACL) and IAM user policies—into a set of policies to be evaluated at run time. It then evaluates the resulting set of policies in a series of steps. In each step, it evaluates a subset of policies in three specific contexts—user context, bucket context, and object context:

1. **User context** – If the requester is an IAM principal, the principal must have permission from the parent AWS account to which it belongs. In this step, Amazon S3 evaluates a subset of policies owned by the parent account (also referred to as the context authority). This subset of policies includes the user policy that the parent attaches to the principal. If the parent also owns the resource in the request (bucket or object), Amazon S3 also evaluates the corresponding resource policies (bucket policy, bucket ACL, and object ACL) at the same time. 
**Note**  
If the parent AWS account owns the resource (bucket or object), it can grant resource permissions to its IAM principal by using either the user policy or the resource policy. 

1. **Bucket context** – In this context, Amazon S3 evaluates policies owned by the AWS account that owns the bucket.

   If the AWS account that owns the object in the request is not the same as the bucket owner, Amazon S3 checks these policies to determine whether the bucket owner has explicitly denied access to the object. If there is an explicit deny set on the object, Amazon S3 does not authorize the request. 

1. **Object context** – The requester must have permissions from the object owner to perform a specific object operation. In this step, Amazon S3 evaluates the object ACL. 
**Note**  
If bucket and object owners are the same, access to the object can be granted in the bucket policy, which is evaluated at the bucket context. If the owners are different, the object owner must use an object ACL to grant permissions. If the AWS account that owns the object is also the parent account to which the IAM principal belongs, it can configure object permissions in a user policy, which is evaluated at the user context. For more information about using these access policy alternatives, see [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md).  
If you as the bucket owner want to own all the objects in your bucket and use bucket policies or IAM-based policies to manage access to these objects, you can apply the bucket owner enforced setting for Object Ownership. With this setting, you as the bucket owner automatically own and have full control over every object in your bucket. Bucket and object ACLs can’t be edited and are no longer considered for access. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

 The following is an illustration of the context-based evaluation for an object operation.

![\[Illustration that shows the context-based evaluation for an object operation.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/AccessControlAuthorizationFlowObjectResource.png)


## Example of an object operation request


In this example, IAM user Jill, whose parent AWS account is 1111-1111-1111, sends an object operation request (for example, `GetObject`) for an object owned by AWS account 3333-3333-3333 in a bucket owned by AWS account 2222-2222-2222. 

![\[Illustration that shows an object operation request.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example50-policy-eval-logic.png)


Jill will need permission from the parent AWS account, the bucket owner, and the object owner. Amazon S3 evaluates the context as follows:

1. Because the request is from an IAM principal, Amazon S3 evaluates the user context to verify that the parent AWS account 1111-1111-1111 has given Jill permission to perform the requested operation. If she has that permission, Amazon S3 evaluates the bucket context. Otherwise, Amazon S3 denies the request.

1. In the bucket context, the bucket owner, AWS account 2222-2222-2222, is the context authority. Amazon S3 evaluates the bucket policy to determine if the bucket owner has explicitly denied Jill access to the object. 

1. In the object context, the context authority is AWS account 3333-3333-3333, the object owner. Amazon S3 evaluates the object ACL to determine if Jill has permission to access the object. If she does, Amazon S3 authorizes the request. 

# Required permissions for Amazon S3 API operations
Required permissions for S3 API operations

**Note**  
This page is about Amazon S3 policy actions for general purpose buckets. To learn more about Amazon S3 policy actions for directory buckets, see [Actions for directory buckets](s3-express-security-iam.md#s3-express-security-iam-actions).

To perform an S3 API operation, you must have the right permissions. This page maps S3 API operations to the required permissions. To grant permissions to perform an S3 API operation, you must compose a valid policy (such as an S3 bucket policy or IAM identity-based policy), and specify corresponding actions in the `Action` element of the policy. These actions are called policy actions. Not every S3 API operation is represented by a single permission (a single policy action), and some permissions (some policy actions) are required for many different API operations. 

When you compose policies, you must specify the `Resource` element based on the correct resource type required by the corresponding Amazon S3 policy actions. This page categorizes permissions to S3 API operations by the resource types. For more information about the resource types, see [ Resource types defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-resources-for-iam-policies) in the *Service Authorization Reference*. For a full list of Amazon S3 policy actions, resources, and condition keys for use in policies, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*. For a complete list of Amazon S3 API operations, see [Amazon S3 API Actions](https://docs.aws.amazon.com//AmazonS3/latest/API/API_Operations.html) in the *Amazon Simple Storage Service API Reference*.

For more information on how to address the HTTP `403 Forbidden` errors in S3, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md). For more information on the IAM features to use with S3, see [How Amazon S3 works with IAM](security_iam_service-with-iam.md). For more information on S3 security best practices, see [Security best practices for Amazon S3](security-best-practices.md). 

**Topics**
+ [Bucket operations and permissions](#using-with-s3-policy-actions-related-to-buckets)
+ [Object operations and permissions](#using-with-s3-policy-actions-related-to-objects)
+ [Access point for general purpose buckets operations and permissions](#using-with-s3-policy-actions-related-to-accesspoint)
+ [Object Lambda Access Point operations and permissions](#using-with-s3-policy-actions-related-to-olap)
+ [Multi-Region Access Point operations and permissions](#using-with-s3-policy-actions-related-to-mrap)
+ [Batch job operations and permissions](#using-with-s3-policy-actions-related-to-batchops)
+ [S3 Storage Lens configuration operations and permissions](#using-with-s3-policy-actions-related-to-lens)
+ [S3 Storage Lens groups operations and permissions](#using-with-s3-policy-actions-related-to-lens-groups)
+ [S3 Access Grants instance operations and permissions](#using-with-s3-policy-actions-related-to-s3ag-instances)
+ [S3 Access Grants location operations and permissions](#using-with-s3-policy-actions-related-to-s3ag-locations)
+ [S3 Access Grants grant operations and permissions](#using-with-s3-policy-actions-related-to-s3ag-grants)
+ [Account operations and permissions](#using-with-s3-policy-actions-related-to-accounts)

## Bucket operations and permissions


Bucket operations are S3 API operations that operate on the bucket resource type. You must specify S3 policy actions for bucket operations in bucket policies or IAM identity-based policies.

In the policies, the `Resource` element must be the bucket Amazon Resource Name (ARN). For more information about the `Resource` element format and example policies, see [Bucket operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-buckets).
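For example, the following bucket policy sketch grants a hypothetical user permission to perform the `s3:ListBucket` bucket operation. Note that the `Resource` element is the bucket ARN itself, without a trailing `/*`:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListBucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/Dave"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1"
    }
  ]
}
```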

**Note**  
To grant permissions to bucket operations in access point policies, note the following:  
Permissions granted for bucket operations in an access point policy are effective only if the underlying bucket allows the same permissions. When you use an access point, you must delegate access control from the bucket to the access point or add the same permissions in the access point policy to the underlying bucket's policy.
In access point policies that grant permissions to bucket operations, the `Resource` element must be the `accesspoint` ARN. For more information about the `Resource` element format and example policies, see [Bucket operations in policies for access points for general purpose buckets](security_iam_service-with-iam.md#bucket-operations-ap). For more information about access point policies, see [Configuring IAM policies for using access points](access-points-policies.md). 
Not all bucket operations are supported by access points. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
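One common way to delegate access control from the bucket to the access point is a bucket policy like the following sketch, which uses the `s3:DataAccessPointAccount` condition key. The bucket name and account ID are placeholders. With this delegation in place, requests made through any access point owned by the specified account are evaluated against the access point policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateAccessControlToAccessPoints",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket1",
        "arn:aws:s3:::amzn-s3-demo-bucket1/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:DataAccessPointAccount": "123456789012"
        }
      }
    }
  ]
}
```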

The following is the mapping of bucket operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)  |  (Required) `s3:CreateBucket`  |  Required to create a new S3 bucket.  | 
|    |  (Conditionally required) `s3:PutBucketAcl`  |  Required if you want to use an access control list (ACL) to specify permissions on a bucket when you make a `CreateBucket` request.  | 
|    |  (Conditionally required) `s3:PutBucketObjectLockConfiguration`, `s3:PutBucketVersioning`  |  Required if you want to enable Object Lock when you create a bucket.  | 
|    |  (Conditionally required) `s3:PutBucketOwnershipControls`  |  Required if you want to specify S3 Object Ownership when you create a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html) (V2 API operation. The IAM policy action name is the same for the V1 and V2 API operations.)  |  (Required) `s3:CreateBucketMetadataTableConfiguration`, `s3tables:CreateTableBucket`, `s3tables:CreateNamespace`, `s3tables:CreateTable`, `s3tables:GetTable`, `s3tables:PutTablePolicy`, `s3tables:PutTableEncryption`, `kms:DescribeKey`  |  Required to create a metadata table configuration on a general purpose bucket.  To create your AWS managed table bucket and the metadata tables that are specified in your metadata table configuration, you must have the specified `s3tables` permissions. If you want to encrypt your metadata tables with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you need additional permissions in your KMS key policy. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md). If you also want to integrate your AWS managed table bucket with AWS analytics services so that you can query your metadata table, you need additional permissions. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataTableConfiguration.html) (V1 API operation)  |  (Required) `s3:CreateBucketMetadataTableConfiguration`, `s3tables:CreateNamespace`, `s3tables:CreateTable`, `s3tables:GetTable`, `s3tables:PutTablePolicy`  |  Required to create a metadata table configuration on a general purpose bucket.  To create the metadata table in the table bucket that's specified in your metadata table configuration, you must have the specified `s3tables` permissions. If you want to encrypt your metadata tables with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you need additional permissions. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md). If you also want to integrate your table bucket with AWS analytics services so that you can query your metadata table, you need additional permissions. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)  |  (Required) `s3:DeleteBucket`  |  Required to delete an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketAnalyticsConfiguration.html)  |  (Required) `s3:PutAnalyticsConfiguration`  |  Required to delete an S3 analytics configuration from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketCors.html)  |  (Required) `s3:PutBucketCORS`  |  Required to delete the cross-origin resource sharing (CORS) configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html)  |  (Required) `s3:PutEncryptionConfiguration`  |  Required to reset the default encryption configuration for an S3 bucket to server-side encryption with Amazon S3 managed keys (SSE-S3).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketIntelligentTieringConfiguration.html)  |  (Required) `s3:PutIntelligentTieringConfiguration`  |  Required to delete the existing S3 Intelligent-Tiering configuration from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html)  |  (Required) `s3:PutInventoryConfiguration`  |  Required to delete an S3 Inventory configuration from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html)  |  (Required) `s3:PutLifecycleConfiguration`  |  Required to delete the S3 Lifecycle configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html) (V2 API operation. The IAM policy action name is the same for the V1 and V2 API operations.)  |  (Required) `s3:DeleteBucketMetadataTableConfiguration`  |  Required to delete a metadata table configuration from a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html) (V1 API operation)  |  (Required) `s3:DeleteBucketMetadataTableConfiguration`  |  Required to delete a metadata table configuration from a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetricsConfiguration.html)  |  (Required) `s3:PutMetricsConfiguration`  |  Required to delete a metrics configuration for the Amazon CloudWatch request metrics from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketOwnershipControls.html)   |  (Required) `s3:PutBucketOwnershipControls`  |  Required to remove the Object Ownership setting for an S3 bucket. After removal, the Object Ownership setting becomes `Object writer`.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html)  |  (Required) `s3:DeleteBucketPolicy`  |  Required to delete the policy of an S3 bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html)  |  (Required) `s3:PutReplicationConfiguration`  |  Required to delete the replication configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketTagging.html)  |  (Required) `s3:PutBucketTagging`  |  Required to delete tags from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketWebsite.html)  |  (Required) `s3:DeleteBucketWebsite`  |  Required to remove the website configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html) (Bucket-level)  |  (Required) `s3:PutBucketPublicAccessBlock`  |  Required to remove the block public access configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAccelerateConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAccelerateConfiguration.html)  |  (Required) `s3:GetAccelerateConfiguration`  |  Required to use the `accelerate` subresource to return the Amazon S3 Transfer Acceleration state of a bucket, which is either `Enabled` or `Suspended`.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html)  |  (Required) `s3:GetBucketAcl`  |  Required to return the access control list (ACL) of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAnalyticsConfiguration.html)  |  (Required) `s3:GetAnalyticsConfiguration`  |  Required to return an analytics configuration that's identified by the analytics configuration ID from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html)  |  (Required) `s3:GetBucketCORS`  |  Required to return the cross-origin resource sharing (CORS) configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html)  |  (Required) `s3:GetEncryptionConfiguration`  |  Required to return the default encryption configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketIntelligentTieringConfiguration.html)  |  (Required) `s3:GetIntelligentTieringConfiguration`  |  Required to get the S3 Intelligent-Tiering configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html)  |  (Required) `s3:GetInventoryConfiguration`  |  Required to return an inventory configuration that's identified by the inventory configuration ID from the bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycle.html)  |  (Required) `s3:GetLifecycleConfiguration`  |  Required to return the S3 Lifecycle configuration of the bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html)  |  (Required) `s3:GetBucketLocation`  |  Required to return the AWS Region that an S3 bucket resides in.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLogging.html)  |  (Required) `s3:GetBucketLogging`  |  Required to return the logging status of an S3 bucket and the permissions that users have to view and modify that status.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html) (V2 API operation. The IAM policy action name is the same for the V1 and V2 API operations.)  |  (Required) `s3:GetBucketMetadataTableConfiguration`  |  Required to retrieve a metadata table configuration for a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html) (V1 API operation)  |  (Required) `s3:GetBucketMetadataTableConfiguration`  |  Required to retrieve a metadata table configuration for a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetricsConfiguration.html)  |  (Required) `s3:GetMetricsConfiguration`  |  Required to get a metrics configuration that's specified by the metrics configuration ID from the bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotificationConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotificationConfiguration.html)  |  (Required) `s3:GetBucketNotification`  |  Required to return the notification configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html)  |  (Required) `s3:GetBucketOwnershipControls`  |  Required to retrieve the Object Ownership setting for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)  |  (Required) `s3:GetBucketPolicy`  |  Required to return the policy of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html)  |  (Required) `s3:GetBucketPolicyStatus`  |  Required to retrieve the policy status for an S3 bucket, indicating whether the bucket is public.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html)  |  (Required) `s3:GetReplicationConfiguration`  |  Required to return the replication configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketRequestPayment.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketRequestPayment.html)  |  (Required) `s3:GetBucketRequestPayment`  |  Required to return the request payment configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html)  |  (Required) `s3:GetBucketVersioning`  |  Required to return the versioning state of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketTagging.html)  |  (Required) `s3:GetBucketTagging`  |  Required to return the tag set that's associated with an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketWebsite.html)  |  (Required) `s3:GetBucketWebsite`  |  Required to return the website configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html)  |  (Required) `s3:GetBucketObjectLockConfiguration`  |  Required to get the Object Lock configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html) (Bucket-level)  |  (Required) `s3:GetBucketPublicAccessBlock`  |  Required to retrieve the block public access configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)  |  (Required) `s3:ListBucket`  |  Required to determine if a bucket exists and if you have permission to access it.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketAnalyticsConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketAnalyticsConfigurations.html)  |  (Required) `s3:GetAnalyticsConfiguration`  |  Required to list the analytics configurations for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketIntelligentTieringConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketIntelligentTieringConfigurations.html)  |  (Required) `s3:GetIntelligentTieringConfiguration`  |  Required to list the S3 Intelligent-Tiering configurations of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketInventoryConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketInventoryConfigurations.html)  |  (Required) `s3:GetInventoryConfiguration`  |  Required to return a list of inventory configurations for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketMetricsConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketMetricsConfigurations.html)  |  (Required) `s3:GetMetricsConfiguration`  |  Required to list the metrics configurations for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html)  |  (Required) `s3:ListBucket`  |  Required to list some or all (up to 1,000) of the objects in an S3 bucket.  | 
|    |  (Conditionally required) `s3:GetObjectAcl`  |  Required if you want to display object owner information.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)  |  (Required) `s3:ListBucket`  |  Required to list some or all (up to 1,000) of the objects in an S3 bucket.  | 
|    |  (Conditionally required) `s3:GetObjectAcl`  |  Required if you want to display object owner information.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html)  |  (Required) `s3:ListBucketVersions`  |  Required to get metadata about all the versions of objects in an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html)  |  (Required) `s3:PutAccelerateConfiguration`  |  Required to set the accelerate configuration of an existing bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAcl.html)  |  (Required) `s3:PutBucketAcl`  |  Required to use access control lists (ACLs) to set the permissions on an existing bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAnalyticsConfiguration.html)  |  (Required) `s3:PutAnalyticsConfiguration`  |  Required to set an analytics configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketCors.html)  |  (Required) `s3:PutBucketCORS`  |  Required to set the cross-origin resource sharing (CORS) configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html)  |  (Required) `s3:PutEncryptionConfiguration`  |  Required to configure the default encryption for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html)  |  (Required) `s3:PutIntelligentTieringConfiguration`  |  Required to put the S3 Intelligent-Tiering configuration on an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html)  |  (Required) `s3:PutInventoryConfiguration`  |  Required to add an inventory configuration to an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html)  |  (Required) `s3:PutLifecycleConfiguration`  |  Required to create a new S3 Lifecycle configuration or replace an existing lifecycle configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html)  |  (Required) `s3:PutBucketLogging`  |  Required to set the logging parameters for an S3 bucket and specify permissions for who can view and modify the logging parameters.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketMetricsConfiguration.html)  |  (Required) `s3:PutMetricsConfiguration`  |  Required to set or update a metrics configuration for the Amazon CloudWatch request metrics of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotificationConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotificationConfiguration.html)  |  (Required) `s3:PutBucketNotification`  |  Required to enable notifications of specified events for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html)  |  (Required) `s3:PutBucketOwnershipControls`  |  Required to create or modify the Object Ownership setting for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html)  |  (Required) `s3:PutBucketPolicy`  |  Required to apply an S3 bucket policy to a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html)  |  (Required) `s3:PutReplicationConfiguration`  |  Required to create a new replication configuration or replace an existing one for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketRequestPayment.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketRequestPayment.html)  |  (Required) `s3:PutBucketRequestPayment`  |  Required to set the request payment configuration for a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketTagging.html)  |  (Required) `s3:PutBucketTagging`  |  Required to add a set of tags to an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html)  |  (Required) `s3:PutBucketVersioning`  |  Required to set the versioning state of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html)  |  (Required) `s3:PutBucketWebsite`  |  Required to configure a bucket as a website and set the configuration of the website.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html)  |  (Required) `s3:PutBucketObjectLockConfiguration`  |  Required to put an Object Lock configuration on an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html) (Bucket-level)  |  (Required) `s3:PutBucketPublicAccessBlock`  |  Required to create or modify the block public access configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html)  |  (Required) `s3:UpdateBucketMetadataInventoryTableConfiguration`, `s3tables:CreateTableBucket`, `s3tables:CreateNamespace`, `s3tables:CreateTable`, `s3tables:GetTable`, `s3tables:PutTablePolicy`, `s3tables:PutTableEncryption`, `kms:DescribeKey`  |  Required to enable or disable an inventory table for a metadata table configuration on a general purpose bucket. If you want to encrypt your inventory table with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you need additional permissions in your KMS key policy. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguration.html)  |  (Required) `s3:UpdateBucketMetadataJournalTableConfiguration`  |  Required to enable or disable journal table record expiration for a metadata table configuration on a general purpose bucket.  | 
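
To see how the required and conditionally required actions in the preceding table combine in practice, the following sketch builds an identity-based policy that allows `CreateBucket` together with the conditionally required actions for setting an ACL and enabling Object Lock at creation time. The bucket name, account usage, and `Sid` are hypothetical and for illustration only; this is not an official policy template.

```python
import json

# Hypothetical bucket ARN for illustration only.
bucket_arn = "arn:aws:s3:::amzn-s3-demo-bucket"

# Identity-based policy granting s3:CreateBucket plus the conditionally
# required actions from the table above (ACL and Object Lock at creation).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketCreationWithAclAndObjectLock",
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:PutBucketAcl",
                "s3:PutBucketObjectLockConfiguration",
                "s3:PutBucketVersioning",
            ],
            "Resource": bucket_arn,
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Because these are bucket operations, the `Resource` element is the bucket ARN (no `/key` suffix); attach the policy to the IAM identity that makes the `CreateBucket` request.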

## Object operations and permissions


Object operations are S3 API operations that operate on the object resource type. You must specify S3 policy actions for object operations in resource-based policies (such as bucket policies, access point policies, Multi-Region Access Point policies, or VPC endpoint policies) or in IAM identity-based policies.

In the policies, the `Resource` element must be the object ARN. For more information about the `Resource` element format and example policies, see [Object operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-objects). 
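As a minimal sketch of this requirement, the following statement grants read access to objects by using an object ARN (bucket ARN plus a key or key prefix) in the `Resource` element. The bucket name and account ID are hypothetical placeholders.

```python
import json

# Hypothetical bucket name for illustration only.
bucket = "amzn-s3-demo-bucket"

# For object operations, the Resource element must be an object ARN:
# arn:aws:s3:::bucket_name/key_name (wildcards are allowed in the key part).
statement = {
    "Sid": "AllowObjectReads",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["s3:GetObject", "s3:GetObjectVersion"],
    "Resource": f"arn:aws:s3:::{bucket}/*",
}

print(json.dumps(statement, indent=2))
```

If the `Resource` element listed only the bucket ARN (without `/*` or a key name), object actions such as `s3:GetObject` wouldn't match and the request would be denied.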

**Note**  
AWS KMS policy actions (`kms:GenerateDataKey` and `kms:Decrypt`) are only applicable for the AWS KMS resource type and must be specified in IAM identity-based policies and AWS KMS resource-based policies (AWS KMS key policies). You can't specify AWS KMS policy actions in S3 resource-based policies, such as S3 bucket policies.
When you use access points to control access to object operations, you can use access point policies. To grant permissions to object operations in access point policies, note the following:  
+ In access point policies that grant permissions to object operations, the `Resource` element must be the ARNs for objects accessed through an access point. For more information about the `Resource` element format and example policies, see [Object operations in access point policies](security_iam_service-with-iam.md#object-operations-ap).
+ Not all object operations are supported by access points. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
+ Not all object operations are supported by Multi-Region Access Points. For more information, see [Multi-Region Access Point compatibility with S3 operations](MrapOperations.md#mrap-operations-support).
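For reference, objects accessed through an access point use a different ARN form than bucket-style object ARNs. The following sketch shows that form; the Region, account ID, and access point name are hypothetical placeholders.

```python
# Hypothetical Region, account ID, and access point name for illustration.
region, account, ap_name = "us-west-2", "111122223333", "example-ap"

# Objects accessed through an access point use this ARN form:
# arn:aws:s3:region:account-id:accesspoint/access-point-name/object/key
object_arn = f"arn:aws:s3:{region}:{account}:accesspoint/{ap_name}/object/*"

print(object_arn)
```

Note that, unlike bucket-style object ARNs, this form includes the Region and account ID, and the key is introduced by the literal `/object/` segment.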

The following table maps object operations to their required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)  |  (Required) `s3:AbortMultipartUpload`  |  Required to abort a multipart upload.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)  |  (Required) `s3:PutObject`  |  Required to complete a multipart upload.  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to complete a multipart upload for an AWS KMS customer managed key encrypted object.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)  |  For source object:  |  For source object:  | 
|    |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to copy an AWS KMS customer managed key encrypted object from the source bucket.   | 
|    |  For destination object:  |  For destination object:  | 
|    |  (Required) `s3:PutObject`  |  Required to put the copied object in the destination bucket.  | 
|    |  (Conditionally required) `s3:PutObjectAcl`  |  Required if you want to apply an object access control list (ACL) to the copied object in the destination bucket when you make a `CopyObject` request.  | 
|    |  (Conditionally required) `s3:PutObjectTagging`  |  Required if you want to add tags to the copied object in the destination bucket when you make a `CopyObject` request.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to encrypt the copied object with an AWS KMS customer managed key and put it in the destination bucket.   | 
|    |  (Conditionally required) `s3:PutObjectRetention`  |  Required if you want to set an Object Lock retention configuration for the new object.  | 
|    |  (Conditionally required) `s3:PutObjectLegalHold`  |  Required if you want to place an Object Lock legal hold on the new object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)  |  (Required) `s3:PutObject`  |  Required to create a multipart upload.  | 
|    |  (Conditionally required) `s3:PutObjectAcl`  |  Required if you want to set the object access control list (ACL) permissions for the uploaded object.  | 
|    |  (Conditionally required) `s3:PutObjectTagging`  |  Required if you want to add tags to the uploaded object.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to use an AWS KMS customer managed key to encrypt an object when you initiate a multipart upload.   | 
|    |  (Conditionally required) `s3:PutObjectRetention`  |  Required if you want to set an Object Lock retention configuration for the uploaded object.  | 
|    |  (Conditionally required) `s3:PutObjectLegalHold`  |  Required if you want to apply an Object Lock legal hold to the uploaded object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)  |  (Required) Either `s3:DeleteObject` or `s3:DeleteObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `s3:BypassGovernanceRetention`  |  Required if you want to delete an object that's protected by governance mode for Object Lock retention.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)  |  (Required) Either `s3:DeleteObject` or `s3:DeleteObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `s3:BypassGovernanceRetention`  |  Required if you want to delete objects that are protected by governance mode for Object Lock retention.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html)  |  (Required) Either `s3:DeleteObjectTagging` or `s3:DeleteObjectVersionTagging`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)  |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to get and decrypt an object that's encrypted with an AWS KMS customer managed key.   | 
|    |  (Conditionally required) `s3:GetObjectTagging`  |  Required if you want to get the tag-set of an object when you make a `GetObject` request.  | 
|    |  (Conditionally required) `s3:GetObjectLegalHold`  |  Required if you want to get an object's current Object Lock legal hold status.  | 
|    |  (Conditionally required) `s3:GetObjectRetention`  |  Required if you want to retrieve the Object Lock retention settings for an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html)  |  (Required) Either `s3:GetObjectAcl` or `s3:GetObjectVersionAcl`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)  |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to retrieve attributes of an object that's encrypted with an AWS KMS customer managed key.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html)  |  (Required) `s3:GetObjectLegalHold`  |  Required to get an object's current Object Lock legal hold status.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html)  |  (Required) `s3:GetObjectRetention`  |  Required to retrieve the Object Lock retention settings for an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html)  |  (Required) Either `s3:GetObjectTagging` or `s3:GetObjectVersionTagging`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTorrent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTorrent.html)  |  (Required) `s3:GetObject`  |  Required to return a torrent file for an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)  |  (Required) `s3:GetObject`  |  Required to retrieve metadata from an object without returning the object itself.  | 
|    |  (Conditionally required) `s3:GetObjectLegalHold`  |  Required if you want to get an object's current Object Lock legal hold status.  | 
|    |  (Conditionally required) `s3:GetObjectRetention`  |  Required if you want to retrieve the Object Lock retention settings for an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)  |  (Required) `s3:ListBucketMultipartUploads`  |  Required to list in-progress multipart uploads in a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)  |  (Required) `s3:ListMultipartUploadParts`  |  Required to list the parts that have been uploaded for a specific multipart upload.  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to list the parts of a multipart upload that's encrypted with an AWS KMS customer managed key.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)  |  (Required) `s3:PutObject`  |  Required to put an object.  | 
|    |  (Conditionally required) `s3:PutObjectAcl`  |  Required if you want to put the object access control list (ACL) when you make a `PutObject` request.  | 
|    |  (Conditionally required) `s3:PutObjectTagging`  |  Required if you want to add tags to the object when you make a `PutObject` request.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to encrypt an object with an AWS KMS customer managed key.   | 
|    |  (Conditionally required) `s3:PutObjectRetention`  |  Required if you want to set an Object Lock retention configuration on an object.  | 
|    |  (Conditionally required) `s3:PutObjectLegalHold`  |  Required if you want to apply an Object Lock legal hold configuration to a specified object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html)  |  (Required) Either `s3:PutObjectAcl` or `s3:PutObjectVersionAcl`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html)  |  (Required) `s3:PutObjectLegalHold`  |  Required to apply an Object Lock legal hold configuration to an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html)  |  (Required) `s3:PutObjectRetention`  |  Required to apply an Object Lock retention configuration to an object.  | 
|    |  (Conditionally required) `s3:BypassGovernanceRetention`  |  Required if you want to bypass the governance mode of an Object Lock retention configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html)  |  (Required) Either `s3:PutObjectTagging` or `s3:PutObjectVersionTagging`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html)  |  (Required) `s3:RestoreObject`  |  Required to restore a copy of an archived object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html)  |  (Required) `s3:GetObject`  |  Required to filter the contents of an S3 object based on a simple structured query language (SQL) statement.  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to filter the contents of an S3 object that's encrypted with an AWS KMS customer managed key.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateObjectEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateObjectEncryption.html) | (Required) `s3:UpdateObjectEncryption`, `s3:PutObject`, `kms:Encrypt`, `kms:Decrypt`, `kms:GenerateDataKey`, `kms:ReEncrypt*`  | Required if you want to change encrypted objects between server-side encryption with Amazon S3 managed keys (SSE-S3) and server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). You can also use the `UpdateObjectEncryption` operation to apply S3 Bucket Keys to reduce AWS KMS request costs, or to change the customer managed KMS key that's used to encrypt your data so that you can comply with custom key-rotation standards. | 
|    | (Conditionally required) `organizations:DescribeAccount` | If you're using AWS Organizations, to use the `UpdateObjectEncryption` operation with customer-managed KMS keys from other AWS accounts within your organization, you must have the `organizations:DescribeAccount` permission.   You must also request the ability to use AWS KMS keys owned by other member accounts within your organization by contacting AWS Support.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)  |  (Required) `s3:PutObject`  |  Required to upload a part in a multipart upload.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to upload a part and encrypt it with an AWS KMS customer managed key.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)  |  For source object:  |  For source object:  | 
|    |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to copy an object that's encrypted with an AWS KMS customer managed key from the source bucket.   | 
|    |  For destination part:  |  For destination part:  | 
|    |  (Required) `s3:PutObject`  |  Required to upload a multipart upload part to the destination bucket.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to encrypt a part with an AWS KMS customer managed key when you upload the part to the destination bucket.   | 
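
For example, to allow a principal to upload tagged objects and encrypt them with an AWS KMS customer managed key, an identity-based policy can combine the required and conditionally required actions from the table above. The following is a minimal sketch; the bucket name, Region, account ID, and key ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjectWithTags",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectTagging"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    },
    {
      "Sid": "AllowSseKmsEncryption",
      "Effect": "Allow",
      "Action": "kms:GenerateDataKey",
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}
```

Note that the S3 actions and the AWS KMS action target different resource types, so they're split into separate statements with the matching ARN formats.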

## Access point for general purpose buckets operations and permissions


Access point operations are S3 API operations that operate on the `accesspoint` resource type. You must specify S3 policy actions for access point operations in IAM identity-based policies, not in bucket policies or access point policies.

In the policies, the `Resource` element must be the `accesspoint` ARN. For more information about the `Resource` element format and example policies, see [Access point for general purpose bucket operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-accesspoint).

**Note**  
If you want to use access points to control access to bucket or object operations, note the following:  
To use access points to control access to bucket operations, see [Bucket operations in policies for access points for general purpose buckets](security_iam_service-with-iam.md#bucket-operations-ap).
To use access points to control access to object operations, see [Object operations in access point policies](security_iam_service-with-iam.md#object-operations-ap).
For more information about how to configure access point policies, see [Configuring IAM policies for using access points](access-points-policies.md).

The following is the mapping of access point operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html)  |  (Required) `s3:CreateAccessPoint`  |  Required to create an access point that's associated with an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html)  |  (Required) `s3:DeleteAccessPoint`  |  Required to delete an access point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html)  |  (Required) `s3:DeleteAccessPointPolicy`  |  Required to delete an access point policy.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html)  |  (Required) `s3:GetAccessPointPolicy`  |  Required to retrieve an access point policy.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicyStatus.html)  |  (Required) `s3:GetAccessPointPolicyStatus`  |  Required to retrieve information about whether the specified access point currently has a policy that allows public access.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html)  |  (Required) `s3:PutAccessPointPolicy`  |  Required to put an access point policy.  | 
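
As described earlier, these actions go in an identity-based policy with the access point ARN as the `Resource`. The following is a minimal sketch that allows a principal to read and update one access point's policy; the Region, account ID, and access point name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageAccessPointPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPointPolicy",
        "s3:PutAccessPointPolicy"
      ],
      "Resource": "arn:aws:s3:us-west-2:111122223333:accesspoint/my-access-point"
    }
  ]
}
```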

## Object Lambda Access Point operations and permissions


Object Lambda Access Point operations are S3 API operations that operate on the `objectlambdaaccesspoint` resource type. For more information about how to configure policies for Object Lambda Access Point operations, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).

The following is the mapping of Object Lambda Access Point operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html)  |  (Required) `s3:CreateAccessPointForObjectLambda`  |  Required to create an Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointForObjectLambda.html)  |  (Required) `s3:DeleteAccessPointForObjectLambda`  |  Required to delete a specified Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicyForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicyForObjectLambda.html)  |  (Required) `s3:DeleteAccessPointPolicyForObjectLambda`  |  Required to delete the policy on a specified Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointConfigurationForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointConfigurationForObjectLambda.html)  |  (Required) `s3:GetAccessPointConfigurationForObjectLambda`  |  Required to retrieve the configuration of the Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointForObjectLambda.html)  |  (Required) `s3:GetAccessPointForObjectLambda`  |  Required to retrieve information about the Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyForObjectLambda.html)  |  (Required) `s3:GetAccessPointPolicyForObjectLambda`  |  Required to return the access point policy that's associated with the specified Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyStatusForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyStatusForObjectLambda.html)  |  (Required) `s3:GetAccessPointPolicyStatusForObjectLambda`  |  Required to return the policy status for a specific Object Lambda Access Point policy.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointConfigurationForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointConfigurationForObjectLambda.html)  |  (Required) `s3:PutAccessPointConfigurationForObjectLambda`  |  Required to set the configuration of the Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointPolicyForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointPolicyForObjectLambda.html)  |  (Required) `s3:PutAccessPointPolicyForObjectLambda`  |  Required to associate an access policy with a specified Object Lambda Access Point.  | 
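
For these actions, the `Resource` element is the Object Lambda Access Point ARN, which uses the `s3-object-lambda` service namespace. The following is a minimal identity-based policy sketch; the Region, account ID, and access point name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageObjectLambdaAccessPointPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPointPolicyForObjectLambda",
        "s3:PutAccessPointPolicyForObjectLambda"
      ],
      "Resource": "arn:aws:s3-object-lambda:us-west-2:111122223333:accesspoint/my-object-lambda-ap"
    }
  ]
}
```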

## Multi-Region Access Point operations and permissions


Multi-Region Access Point operations are S3 API operations that operate on the `multiregionaccesspoint` resource type. For more information about how to configure policies for Multi-Region Access Point operations, see [Multi-Region Access Point policy examples](MultiRegionAccessPointPermissions.md#MultiRegionAccessPointPolicyExamples).

The following is the mapping of Multi-Region Access Point operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateMultiRegionAccessPoint.html)  |  (Required) `s3:CreateMultiRegionAccessPoint`  |  Required to create a Multi-Region Access Point and associate it with S3 buckets.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteMultiRegionAccessPoint.html)  |  (Required) `s3:DeleteMultiRegionAccessPoint`  |  Required to delete a Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeMultiRegionAccessPointOperation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeMultiRegionAccessPointOperation.html)  |  (Required) `s3:DescribeMultiRegionAccessPointOperation`  |  Required to retrieve the status of an asynchronous request to manage a Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPoint.html)  |  (Required) `s3:GetMultiRegionAccessPoint`  |  Required to return configuration information about the specified Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicy.html)  |  (Required) `s3:GetMultiRegionAccessPointPolicy`  |  Required to return the access control policy of the specified Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicyStatus.html)  |  (Required) `s3:GetMultiRegionAccessPointPolicyStatus`  |  Required to return the policy status for the specified Multi-Region Access Point, indicating whether it has an access control policy that allows public access.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointRoutes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointRoutes.html)  |  (Required) `s3:GetMultiRegionAccessPointRoutes`  |  Required to return the routing configuration for a Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutMultiRegionAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutMultiRegionAccessPointPolicy.html)  |  (Required) `s3:PutMultiRegionAccessPointPolicy`  |  Required to update the access control policy of the specified Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_SubmitMultiRegionAccessPointRoutes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_SubmitMultiRegionAccessPointRoutes.html)  |  (Required) `s3:SubmitMultiRegionAccessPointRoutes`  |  Required to submit an updated route configuration for a Multi-Region Access Point.  | 
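
For these actions, the `Resource` element is the Multi-Region Access Point ARN, which includes the account ID and the Multi-Region Access Point alias but no Region. The following is a minimal identity-based policy sketch; the account ID and alias are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadMultiRegionAccessPoint",
      "Effect": "Allow",
      "Action": [
        "s3:GetMultiRegionAccessPoint",
        "s3:GetMultiRegionAccessPointPolicy"
      ],
      "Resource": "arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap"
    }
  ]
}
```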

## Batch job operations and permissions


S3 Batch Operations job operations are S3 API operations that operate on the `job` resource type. You must specify S3 policy actions for job operations in IAM identity-based policies, not in bucket policies.

In the policies, the `Resource` element must be the `job` ARN. For more information about the `Resource` element format and example policies, see [Batch job operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-batchops).

The following is the mapping of batch job operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteJobTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteJobTagging.html)  |  (Required) `s3:DeleteJobTagging`  |  Required to remove tags from an existing S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeJob.html)  |  (Required) `s3:DescribeJob`  |  Required to retrieve the configuration parameters and status for a Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetJobTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetJobTagging.html)  |  (Required) `s3:GetJobTagging`  |  Required to return the tag set of an existing S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutJobTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutJobTagging.html)  |  (Required) `s3:PutJobTagging`  |  Required to put or replace tags on an existing S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobPriority.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobPriority.html)  |  (Required) `s3:UpdateJobPriority`  |  Required to update the priority of an existing job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobStatus.html)  |  (Required) `s3:UpdateJobStatus`  |  Required to update the status for the specified job.  | 
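
As noted above, these actions go in an identity-based policy with the `job` ARN as the `Resource`. The following is a minimal sketch that allows a principal to inspect and manage Batch Operations jobs; the Region and account ID are placeholders, and the `*` matches any job ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageBatchOperationsJobs",
      "Effect": "Allow",
      "Action": [
        "s3:DescribeJob",
        "s3:UpdateJobPriority",
        "s3:UpdateJobStatus"
      ],
      "Resource": "arn:aws:s3:us-west-2:111122223333:job/*"
    }
  ]
}
```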

## S3 Storage Lens configuration operations and permissions


S3 Storage Lens configuration operations are S3 API operations that operate on the `storagelensconfiguration` resource type. For more information about how to configure S3 Storage Lens configuration operations, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md).

The following is the mapping of S3 Storage Lens configuration operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfiguration.html)  |  (Required) `s3:DeleteStorageLensConfiguration`  |  Required to delete the S3 Storage Lens configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfigurationTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfigurationTagging.html)  |  (Required) `s3:DeleteStorageLensConfigurationTagging`  |  Required to delete the S3 Storage Lens configuration tags.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfiguration.html)  |  (Required) `s3:GetStorageLensConfiguration`  |  Required to get the S3 Storage Lens configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfigurationTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfigurationTagging.html)  |  (Required) `s3:GetStorageLensConfigurationTagging`  |  Required to get the tags of an S3 Storage Lens configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfigurationTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfigurationTagging.html)  |  (Required) `s3:PutStorageLensConfigurationTagging`  |  Required to put or replace tags on an existing S3 Storage Lens configuration.  | 
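
For these actions, the `Resource` element is the `storagelensconfiguration` ARN. The following is a minimal identity-based policy sketch; the Region, account ID, and configuration ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadStorageLensConfiguration",
      "Effect": "Allow",
      "Action": [
        "s3:GetStorageLensConfiguration",
        "s3:GetStorageLensConfigurationTagging"
      ],
      "Resource": "arn:aws:s3:us-east-1:111122223333:storage-lens/my-dashboard-configuration"
    }
  ]
}
```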

## S3 Storage Lens groups operations and permissions


S3 Storage Lens groups operations are S3 API operations that operate on the `storagelensgroup` resource type. For more information about how to configure S3 Storage Lens groups permissions, see [Storage Lens groups permissions](storage-lens-groups.md#storage-lens-group-permissions).

The following is the mapping of S3 Storage Lens groups operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensGroup.html)  |  (Required) `s3:DeleteStorageLensGroup`  |  Required to delete an existing S3 Storage Lens group.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensGroup.html)  |  (Required) `s3:GetStorageLensGroup`  |  Required to retrieve the S3 Storage Lens group configuration details.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateStorageLensGroup.html)  |  (Required) `s3:UpdateStorageLensGroup`  |  Required to update an existing S3 Storage Lens group.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html) | (Required) `s3:CreateStorageLensGroup` | Required to create a new Storage Lens group. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html) | (Required) `s3:CreateStorageLensGroup`, `s3:TagResource` | Required to create a new Storage Lens group with tags. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html) | (Required) `s3:ListStorageLensGroups` | Required to list all Storage Lens groups in your home Region. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html) | (Required) `s3:ListTagsForResource` | Required to list the tags that were added to your Storage Lens group. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html) | (Required) `s3:TagResource` | Required to add or update a Storage Lens group tag for an existing Storage Lens group. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html) | (Required) `s3:UntagResource` | Required to delete a tag from a Storage Lens group. | 
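
For these actions, the `Resource` element is the `storagelensgroup` ARN. The following is a minimal identity-based policy sketch, assuming the `storage-lens-group` ARN format; the Region, account ID, and group name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageStorageLensGroup",
      "Effect": "Allow",
      "Action": [
        "s3:GetStorageLensGroup",
        "s3:UpdateStorageLensGroup",
        "s3:TagResource",
        "s3:UntagResource"
      ],
      "Resource": "arn:aws:s3:us-east-1:111122223333:storage-lens-group/my-storage-lens-group"
    }
  ]
}
```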

## S3 Access Grants instance operations and permissions


S3 Access Grants instance operations are S3 API operations that operate on the `accessgrantsinstance` resource type. An S3 Access Grants instance is a logical container for your access grants. For more information on working with S3 Access Grants instances, see [Working with S3 Access Grants instances](access-grants-instance.md).

The following is the mapping of the S3 Access Grants instance configuration operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html)  |  (Required) `s3:AssociateAccessGrantsIdentityCenter`  |  Required to associate an AWS IAM Identity Center instance with your S3 Access Grants instance, thus enabling you to create access grants for users and groups in your corporate identity directory. You must also have the following permissions:  `sso:CreateApplication`, `sso:PutApplicationGrant`, and `sso:PutApplicationAuthenticationMethod`.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsInstance.html)  |  (Required) `s3:CreateAccessGrantsInstance`  |  Required to create an S3 Access Grants instance (`accessgrantsinstance` resource) which is a container for your individual access grants.  To associate an AWS IAM Identity Center instance with your S3 Access Grants instance, you must also have the `sso:DescribeInstance`, `sso:CreateApplication`, `sso:PutApplicationGrant`, and `sso:PutApplicationAuthenticationMethod` permissions.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html)  |  (Required) `s3:DeleteAccessGrantsInstance`  |  Required to delete an S3 Access Grants instance (`accessgrantsinstance` resource) from an AWS Region in your account.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstanceResourcePolicy.html)  |  (Required) `s3:DeleteAccessGrantsInstanceResourcePolicy`  |  Required to delete a resource policy for your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html)  |  (Required) `s3:DissociateAccessGrantsIdentityCenter`  |  Required to disassociate an AWS IAM Identity Center instance from your S3 Access Grants instance. You must also have the following permission: `sso:DeleteApplication`.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html)  |  (Required) `s3:GetAccessGrantsInstance`  |  Required to retrieve the S3 Access Grants instance for an AWS Region in your account.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html)  |  (Required) `s3:GetAccessGrantsInstanceForPrefix`  |  Required to retrieve the S3 Access Grants instance that contains a particular prefix.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceResourcePolicy.html)  |  (Required) `s3:GetAccessGrantsInstanceResourcePolicy`  |  Required to return the resource policy of your S3 Access Grants instance.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html)  |  (Required) `s3:ListAccessGrantsInstances`  |  Required to return a list of the S3 Access Grants instances in your account.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html)  |  (Required) `s3:PutAccessGrantsInstanceResourcePolicy`  |  Required to update the resource policy of the S3 Access Grants instance.  | 
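
For illustration, the following identity-based policy statement is a minimal sketch that would allow a principal to create, view, list, and delete S3 Access Grants instances. The `Sid` value is illustrative, and `"Resource": "*"` can be narrowed to the ARN of a specific Access Grants instance in your account.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageAccessGrantsInstance",
            "Effect": "Allow",
            "Action": [
                "s3:CreateAccessGrantsInstance",
                "s3:GetAccessGrantsInstance",
                "s3:ListAccessGrantsInstances",
                "s3:DeleteAccessGrantsInstance"
            ],
            "Resource": "*"
        }
    ]
}
```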

## S3 Access Grants location operations and permissions


S3 Access Grants location operations are S3 API operations that operate on the `accessgrantslocation` resource type. For more information on working with S3 Access Grants locations, see [Working with S3 Access Grants locations](access-grants-location.md).

The following is the mapping of the S3 Access Grants location configuration operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsLocation.html)  |  (Required) `s3:CreateAccessGrantsLocation`  |  Required to register a location in your S3 Access Grants instance (create an `accessgrantslocation` resource). You must also have the following permission for the specified IAM role:  `iam:PassRole`  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html)  |  (Required) `s3:DeleteAccessGrantsLocation`  |  Required to remove a registered location from your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html)  |  (Required) `s3:GetAccessGrantsLocation`  |  Required to retrieve the details of a particular location registered in your S3 Access Grants instance.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html)  |  (Required) `s3:ListAccessGrantsLocations`  |  Required to return a list of the locations registered in your S3 Access Grants instance.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html)  |  (Required) `s3:UpdateAccessGrantsLocation`  |  Required to update the IAM role of a registered location in your S3 Access Grants instance.  | 
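
Because registering a location also passes an IAM role to S3 Access Grants, a policy for location operations typically pairs the location actions with `iam:PassRole` on that role. The following sketch assumes a hypothetical role named `s3ag-location-role` in the example account `111122223333`; `"Resource": "*"` in the first statement can be scoped to specific location ARNs.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageAccessGrantsLocations",
            "Effect": "Allow",
            "Action": [
                "s3:CreateAccessGrantsLocation",
                "s3:GetAccessGrantsLocation",
                "s3:ListAccessGrantsLocations",
                "s3:UpdateAccessGrantsLocation",
                "s3:DeleteAccessGrantsLocation"
            ],
            "Resource": "*"
        },
        {
            "Sid": "PassLocationRole",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/s3ag-location-role"
        }
    ]
}
```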

## S3 Access Grants grant operations and permissions


S3 Access Grants grant operations are S3 API operations that operate on the `accessgrant` resource type. For more information on working with individual grants using S3 Access Grants, see [Working with grants in S3 Access Grants](access-grants-grant.md).

The following is the mapping of the S3 Access Grants grant configuration operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrant.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrant.html)  |  (Required) `s3:CreateAccessGrant`  |  Required to create an individual grant (`accessgrant` resource) for a user or group in your S3 Access Grants instance. You must also have the following permissions: for any directory identity, `sso:DescribeInstance` and `sso:DescribeApplication`; for directory users, `identitystore:DescribeUser`.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrant.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrant.html)  |  (Required) `s3:DeleteAccessGrant`  |  Required to delete an individual access grant (`accessgrant` resource) from your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrant.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrant.html)  |  (Required) `s3:GetAccessGrant`  |  Required to get the details about an individual access grant in your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrants.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrants.html)  |  (Required) `s3:ListAccessGrants`  |  Required to return a list of individual access grants in your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListCallerAccessGrants.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListCallerAccessGrants.html)  |  (Required) `s3:ListCallerAccessGrants`  |  Required to list the access grants that grant the caller access to Amazon S3 data through S3 Access Grants.   | 
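
As a read-only sketch, the following identity-based policy statement would let a principal (for example, an auditor) inspect existing grants without creating or deleting them. The `Sid` value is illustrative, and `"Resource": "*"` can be scoped down to specific grant ARNs.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyAccessGrants",
            "Effect": "Allow",
            "Action": [
                "s3:GetAccessGrant",
                "s3:ListAccessGrants",
                "s3:ListCallerAccessGrants"
            ],
            "Resource": "*"
        }
    ]
}
```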

## Account operations and permissions


Account operations are S3 API operations that operate at the account level. *Account* isn't a resource type defined by Amazon S3. You must specify S3 policy actions for account operations in IAM identity-based policies, not in bucket policies.

In these policies, the `Resource` element must be `"*"`. For example policies, see [Account operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-accounts).
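
To illustrate, the following identity-based policy statement is a sketch covering several of the account operations in the table below. Because these operations are account-level, the `Resource` element is `"*"`; the `Sid` value is illustrative.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccountLevelOperations",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:CreateJob",
                "s3:ListJobs",
                "s3:GetAccountPublicAccessBlock",
                "s3:PutAccountPublicAccessBlock"
            ],
            "Resource": "*"
        }
    ]
}
```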

The following is the mapping of account operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html)  |  (Required) `s3:CreateJob`  |  Required to create a new S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html)  |  (Required) `s3:CreateStorageLensGroup`  |  Required to create a new S3 Storage Lens group and associate it with the specified AWS account ID.  | 
|    |  (Conditionally required) `s3:TagResource`  |  Required if you want to create an S3 Storage Lens group with AWS resource tags.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeletePublicAccessBlock.html) (Account-level)  |  (Required) `s3:PutAccountPublicAccessBlock`  |  Required to remove the block public access configuration from an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html)  |  (Required) `s3:GetAccessPoint`  |  Required to retrieve configuration information about the specified access point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetPublicAccessBlock.html) (Account-level)  |  (Required) `s3:GetAccountPublicAccessBlock`  |  Required to retrieve the block public access configuration for an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPoints.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPoints.html)  |  (Required) `s3:ListAccessPoints`  |  Required to list access points of an S3 bucket that are owned by an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPointsForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPointsForObjectLambda.html)  |  (Required) `s3:ListAccessPointsForObjectLambda`  |  Required to list the Object Lambda Access Points.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html)  |  (Required) `s3:ListAllMyBuckets`  |  Required to return a list of all buckets that are owned by the authenticated sender of the request.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListJobs.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListJobs.html)  |  (Required) `s3:ListJobs`  |  Required to list current jobs and jobs that have ended recently.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListMultiRegionAccessPoints.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListMultiRegionAccessPoints.html)  |  (Required) `s3:ListMultiRegionAccessPoints`  |  Required to return a list of the Multi-Region Access Points that are currently associated with the specified AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensConfigurations.html)  |  (Required) `s3:ListStorageLensConfigurations`  |  Required to get a list of S3 Storage Lens configurations for an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html)  |  (Required) `s3:ListStorageLensGroups`  |  Required to list all the S3 Storage Lens groups in the specified home AWS Region.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutPublicAccessBlock.html) (Account-level)  |  (Required) `s3:PutAccountPublicAccessBlock`  |  Required to create or modify the block public access configuration for an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html)  |  (Required) `s3:PutStorageLensConfiguration`  |  Required to put an S3 Storage Lens configuration.  | 

# Policies and permissions in Amazon S3
Policies and permissions

This page provides an overview of bucket and user policies in Amazon S3 and describes the basic elements of an AWS Identity and Access Management (IAM) policy. Each listed element links to more details about that element and examples of how to use it. 

For a complete list of Amazon S3 actions, resources, and conditions, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

In its most basic sense, a policy contains the following elements:
+ [Resource](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-resources) – The Amazon S3 bucket, object, access point, or job that the policy applies to. Use the Amazon Resource Name (ARN) of the bucket, object, access point, or job to identify the resource. 

  An example for bucket-level operations:

  `"Resource": "arn:aws:s3:::bucket_name"`

  Examples for object-level operations: 
  + `"Resource": "arn:aws:s3:::bucket_name/*"` for all objects in the bucket.
  + `"Resource": "arn:aws:s3:::bucket_name/prefix/*"` for objects under a certain prefix in the bucket.

  For more information, see [Policy resources for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-resources).
+ [Actions](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions) – For each resource, Amazon S3 supports a set of operations. You identify resource operations that you will allow (or deny) by using action keywords. 

  For example, the `s3:ListBucket` permission allows the user to use the Amazon S3 [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) operation. (The `s3:ListBucket` permission is a case where the action name doesn't map directly to the operation name.) For more information about using Amazon S3 actions, see [Policy actions for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions). For a complete list of Amazon S3 actions, see [Actions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html) in the *Amazon Simple Storage Service API Reference*.
+ [Effect](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_effect.html) – The effect when the user requests the specific action, which can be either `Allow` or `Deny`. 

  If you don't explicitly grant access to (allow) a resource, access is implicitly denied. You can also explicitly deny access to a resource. You might do this to make sure that a user can't access the resource, even if a different policy grants access. For more information, see [IAM JSON Policy Elements: Effect](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_effect.html) in the *IAM User Guide*.
+ [Principal](security_iam_service-with-iam.md#s3-bucket-user-policy-specifying-principal-intro) – The account or user who is allowed access to the actions and resources in the statement. In a bucket policy, the principal is the user, account, service, or other entity that is the recipient of this permission. For more information, see [Principals for bucket policies](security_iam_service-with-iam.md#s3-bucket-user-policy-specifying-principal-intro).
+ [Condition](amazon-s3-policy-keys.md) – Conditions for when a policy is in effect. You can use AWS-wide keys and Amazon S3-specific keys to specify conditions in an Amazon S3 access policy. For more information, see [Bucket policy examples using condition keys](amazon-s3-policy-keys.md).
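
For example, a `Condition` element can restrict a statement to requests from a particular network range. The following bucket policy statement is a sketch (using the documentation-only `192.0.2.0/24` range and the example user `Akua`) that allows `s3:GetObject` only when the request originates from that range:

```json
{
    "Sid": "AllowGetFromCorpNetwork",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/Akua"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
    "Condition": {
        "IpAddress": {"aws:SourceIp": "192.0.2.0/24"}
    }
}
```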

The following example bucket policy shows the `Effect`, `Principal`, `Action`, and `Resource` elements. This policy grants `Akua`, a user in account `123456789012`, the `s3:GetObject`, `s3:GetBucketLocation`, and `s3:ListBucket` permissions on the `amzn-s3-demo-bucket1` bucket and its objects.


```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "ExamplePolicy01",
    "Statement": [
        {
            "Sid": "ExampleStatement01",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Akua"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1/*",
                "arn:aws:s3:::amzn-s3-demo-bucket1"
            ]
        }
    ]
}
```


For complete policy language information, see [Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) and [IAM JSON policy reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies.html) in the *IAM User Guide*.

## Permission delegation


If an AWS account owns a resource, it can grant permissions for that resource to another AWS account. That account can then delegate those permissions, or a subset of them, to users in the account. This is referred to as *permission delegation*. However, an account that receives permissions from another account can't delegate those permissions cross-account to a third AWS account. 
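
As a sketch, the first step of this pattern is a bucket policy in the resource-owning account that grants another account access (here, the hypothetical account `444455556666`). That account's administrator can then delegate the permission to its own users with identity-based policies, but can't pass it on to a third account.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateListToOtherAccount",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1"
        }
    ]
}
```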

## Amazon S3 bucket and object ownership
Bucket and object ownership

Buckets and objects are Amazon S3 resources. By default, only the resource owner can access these resources. The resource owner refers to the AWS account that creates the resource. For example: 
+ The AWS account that you use to create buckets and upload objects owns those resources. 
+  If you upload an object using AWS Identity and Access Management (IAM) user or role credentials, the AWS account that the user or role belongs to owns the object. 
+ A bucket owner can grant cross-account permissions to another AWS account (or users in another account) to upload objects. In this case, the AWS account that uploads objects owns those objects. The bucket owner doesn't have permissions on the objects that other accounts own, with the following exceptions:
  + The bucket owner pays the bills. The bucket owner can deny access to any objects, or delete any objects in the bucket, regardless of who owns them. 
  + The bucket owner can archive any objects or restore archived objects regardless of who owns them. Archival refers to the storage class used to store the objects. For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

### Ownership and request authentication
Request authentication

All requests to a bucket are either authenticated or unauthenticated. Authenticated requests must include a signature value that authenticates the request sender, and unauthenticated requests do not. For more information about request authentication, see [Making requests ](https://docs.aws.amazon.com/AmazonS3/latest/API/MakingRequests.html) in the *Amazon S3 API Reference*.

A bucket owner can allow unauthenticated requests. For example, unauthenticated [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) requests are allowed when a bucket has a public bucket policy, or when a bucket ACL grants `WRITE` or `FULL_CONTROL` access to the `All Users` group or the anonymous user specifically. For more information about public bucket policies and public access control lists (ACLs), see [The meaning of "public"](access-control-block-public-access.md#access-control-block-public-access-policy-status).

All unauthenticated requests are made by the anonymous user. This user is represented in ACLs by the specific canonical user ID `65a011a29cdf8ec533ec3d1ccaae921c`. If an object is uploaded to a bucket through an unauthenticated request, the anonymous user owns the object. The default object ACL grants `FULL_CONTROL` to the anonymous user as the object's owner. Therefore, Amazon S3 allows unauthenticated requests to retrieve the object or modify its ACL. 

To prevent objects from being modified by the anonymous user, we recommend that you do not implement bucket policies that allow anonymous public writes to your bucket or use ACLs that allow the anonymous user write access to your bucket. You can enforce this recommended behavior by using Amazon S3 Block Public Access. 
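
For reference, the account-level and bucket-level Block Public Access APIs accept a configuration with four settings; turning all four on blocks the anonymous-write scenarios described above. The following sketch shows the JSON shape of that configuration as accepted by, for example, the AWS CLI `put-public-access-block` commands:

```json
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }
}
```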

For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md). For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md).

**Important**  
We recommend that you don't use the AWS account root user credentials to make authenticated requests. Instead, create an IAM role and grant that role full access. We refer to users with this role as *administrator users*. You can use credentials assigned to the administrator role, instead of AWS account root user credentials, to interact with AWS and perform tasks such as creating a bucket, creating users, and granting permissions. For more information, see [AWS security credentials](https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html) and [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.







# Bucket policies for Amazon S3
Bucket policies

A bucket policy is a resource-based policy that you can use to grant access permissions to your Amazon S3 bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. These permissions don't apply to objects that are owned by other AWS accounts.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to control ownership of objects uploaded to your bucket and to disable or enable access control lists (ACLs). By default, Object Ownership is set to the Bucket owner enforced setting and all ACLs are disabled. The bucket owner owns all the objects in the bucket and manages access to data exclusively using policies.

Bucket policies use JSON-based AWS Identity and Access Management (IAM) policy language. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies can allow or deny requests based on the elements in the policy. These elements include the requester, S3 actions, resources, and aspects or conditions of the request (such as the IP address that's used to make the request). 

For example, you can create a bucket policy that does the following: 
+ Grants other accounts cross-account permissions to upload objects to your S3 bucket
+ Makes sure that you, the bucket owner, have full control of the uploaded objects
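
A minimal sketch of the first point: the following bucket policy statement grants the hypothetical account `444455556666` permission to upload objects under an example `uploads/` prefix. With the default Bucket owner enforced setting for Object Ownership, the bucket owner automatically owns the uploaded objects.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountUploads",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/uploads/*"
        }
    ]
}
```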

For more information, see [Examples of Amazon S3 bucket policies](example-bucket-policies.md).

**Important**  
You can't use a bucket policy to prevent deletions or transitions by an [S3 Lifecycle](object-lifecycle-mgmt.md) rule. For example, even if your bucket policy denies all actions for all principals, your S3 Lifecycle configuration still functions as normal.

The topics in this section provide examples and show you how to add a bucket policy in the S3 console. For information about identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md). For information about bucket policy language, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Topics**
+ [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md)
+ [Controlling access from VPC endpoints with bucket policies](example-bucket-policies-vpc-endpoint.md)
+ [Examples of Amazon S3 bucket policies](example-bucket-policies.md)
+ [Bucket policy examples using condition keys](amazon-s3-policy-keys.md)

# Adding a bucket policy by using the Amazon S3 console
Adding a bucket policy

You can use the [AWS Policy Generator](https://aws.amazon.com/blogs/aws/aws-policy-generator/) and the Amazon S3 console to add a new bucket policy or edit an existing bucket policy. A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner owns. For more information about bucket policies, see [Identity and Access Management for Amazon S3](security-iam.md).

Make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer before you save your policy. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). These checks generate findings and provide actionable recommendations to help you author policies that are functional and conform to security best practices. To learn more about validating policies by using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html).

For guidance on troubleshooting errors with a policy, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

**To create or edit a bucket policy**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets** or **Directory buckets**.

1. In the list of buckets, choose the name of the bucket that you want to create a bucket policy for or whose bucket policy you want to edit.

1. Choose the **Permissions** tab.

1. Under **Bucket policy**, choose **Edit**. The **Edit bucket policy** page appears.

1. On the **Edit bucket policy** page, do one of the following: 
   + To see examples of bucket policies, choose **Policy examples**. Or see [Examples of Amazon S3 bucket policies](example-bucket-policies.md) in the *Amazon S3 User Guide*.
   + To generate a policy automatically, or edit the JSON in the **Policy** section, choose **Policy generator**.

   If you choose **Policy generator**, the AWS Policy Generator opens in a new window.

   1. On the **AWS Policy Generator** page, for **Select Type of Policy**, choose **S3 Bucket Policy**.

   1. Add a statement by entering the information in the provided fields, and then choose **Add Statement**. Repeat this step for as many statements as you would like to add. For more information about these fields, see the [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*. 
**Note**  
For your convenience, the **Edit bucket policy** page displays the **Bucket ARN** (Amazon Resource Name) of the current bucket above the **Policy** text field. You can copy this ARN for use in the statements on the **AWS Policy Generator** page. 

   1. After you finish adding statements, choose **Generate Policy**.

   1. Copy the generated policy text, choose **Close**, and return to the **Edit bucket policy** page in the Amazon S3 console.

1. In the **Policy** box, edit the existing policy or paste the bucket policy from the AWS Policy Generator. Make sure to resolve security warnings, errors, general warnings, and suggestions before you save your policy.
**Note**  
Bucket policies are limited to 20 KB in size.

1. (Optional) Choose **Preview external access** in the lower-right corner to preview how your new policy affects public and cross-account access to your resource. Before you save your policy, you can check whether it introduces new IAM Access Analyzer findings or resolves existing findings. If you don’t see an active analyzer, choose **Go to Access Analyzer** to [ create an account analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-enabling) in IAM Access Analyzer. For more information, see [Preview access](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-access-preview.html) in the *IAM User Guide*. 

1. Choose **Save changes**, which returns you to the **Permissions** tab. 
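Because bucket policies are limited to 20 KB, it can be useful to check the serialized size of a programmatically assembled policy before saving it. A minimal Python sketch (the `policy_within_limit` helper is hypothetical, not an AWS API):

```python
import json

MAX_POLICY_BYTES = 20 * 1024  # bucket policies are limited to 20 KB

def policy_within_limit(policy: dict) -> bool:
    """Return True if the compact-serialized policy fits the 20 KB limit."""
    serialized = json.dumps(policy, separators=(",", ":"))
    return len(serialized.encode("utf-8")) <= MAX_POLICY_BYTES

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ExampleStatement",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
print(policy_within_limit(policy))
```

A policy that fails this check is also rejected by the console and by the `PutBucketPolicy` API.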

# Controlling access from VPC endpoints with bucket policies
Controlling VPC access

You can use Amazon S3 bucket policies to control access to buckets from specific virtual private cloud (VPC) endpoints or specific VPCs. This section contains example bucket policies that you can use to control Amazon S3 bucket access from VPC endpoints. To learn how to set up VPC endpoints, see [VPC Endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) in the *VPC User Guide*. 

A VPC enables you to launch AWS resources into a virtual network that you define. A VPC endpoint enables you to create a private connection between your VPC and another AWS service. This private connection doesn't require access over the internet, through a virtual private network (VPN) connection, through a NAT instance, or through Direct Connect. 

A VPC endpoint for Amazon S3 is a logical entity within a VPC that allows connectivity only to Amazon S3. The VPC endpoint routes requests to Amazon S3 and routes responses back to the VPC. VPC endpoints change only how requests are routed. Amazon S3 public endpoints and DNS names will continue to work with VPC endpoints. For important information about using VPC endpoints with Amazon S3, see [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html) and [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html) in the *VPC User Guide*. 

VPC endpoints for Amazon S3 provide two ways to control access to your Amazon S3 data: 
+ You can control the requests, users, or groups that are allowed through a specific VPC endpoint. For information about this type of access control, see [Controlling access to VPC endpoints using endpoint policies](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *VPC User Guide*.
+ You can control which VPCs or VPC endpoints have access to your buckets by using Amazon S3 bucket policies. For examples of this type of bucket policy access control, see the following topics on restricting access.

**Topics**
+ [Restricting access to a specific VPC endpoint](#example-bucket-policies-restrict-accesss-vpc-endpoint)
+ [Restricting access to a specific VPC](#example-bucket-policies-restrict-access-vpc)
+ [Restricting access to an IPv6 VPC endpoint](#example-bucket-policies-ipv6-vpc-endpoint)

**Important**  
When applying the Amazon S3 bucket policies for VPC endpoints described in this section, you might block your access to the bucket unintentionally. Bucket permissions that are intended to specifically limit bucket access to connections originating from your VPC endpoint can block all connections to the bucket. For information about how to fix this issue, see [How do I fix my bucket policy when it has the wrong VPC or VPC endpoint ID?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-regain-access/) in the *AWS Support Knowledge Center*.

## Restricting access to a specific VPC endpoint


The following is an example of an Amazon S3 bucket policy that restricts access to a specific bucket, `amzn-s3-demo-bucket`, to only the VPC endpoint with the ID `vpce-0abcdef1234567890`. If the specified endpoint isn't used, the policy denies all access to the bucket. The `aws:SourceVpce` condition specifies the endpoint and doesn't require an Amazon Resource Name (ARN) for the VPC endpoint resource, only the VPC endpoint ID. For more information about using conditions in a policy, see [Bucket policy examples using condition keys](amazon-s3-policy-keys.md).

**Important**  
Before using the following example policy, replace the VPC endpoint ID with an appropriate value for your use case. Otherwise, you won't be able to access your bucket.
This policy disables console access to the specified bucket because console requests don't originate from the specified VPC endpoint.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Id": "Policy1415115909152",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPCE-only",
       "Principal": "*",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket",
                    "arn:aws:s3:::amzn-s3-demo-bucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:SourceVpce": "vpce-0abcdef1234567890"
         }
       }
     }
   ]
}
```

------
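If you generate this kind of policy programmatically rather than pasting JSON, a small helper can build it from a bucket name and endpoint ID. A sketch in Python (the `vpce_only_policy` helper is hypothetical; the resulting JSON string could then be passed to the `PutBucketPolicy` API):

```python
import json

def vpce_only_policy(bucket: str, vpce_id: str) -> str:
    """Build a bucket policy that denies all S3 actions unless the
    request arrives through the given VPC endpoint (aws:SourceVpce)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "Access-to-specific-VPCE-only",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }],
    }
    return json.dumps(policy, indent=2)

print(vpce_only_policy("amzn-s3-demo-bucket", "vpce-0abcdef1234567890"))
```

As with the hand-written version, applying a policy built this way with the wrong endpoint ID locks you out of the bucket, so validate the ID first.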

## Restricting access to a specific VPC


You can create a bucket policy that restricts access to a specific VPC by using the `aws:SourceVpc` condition key. This approach is useful if you have multiple VPC endpoints configured in the same VPC and you want to manage access to your Amazon S3 buckets for all of your endpoints. The following is an example of a policy that denies access to `amzn-s3-demo-bucket` and its objects from anyone outside VPC `vpc-1a2b3c4d`. If the specified VPC isn't used, the policy denies all access to the bucket. This statement doesn't grant access to the bucket. To grant access, you must add a separate `Allow` statement. The `aws:SourceVpc` condition key doesn't require an ARN for the VPC resource, only the VPC ID.

**Important**  
Before using the following example policy, replace the VPC ID with an appropriate value for your use case. Otherwise, you won't be able to access your bucket.
This policy disables console access to the specified bucket because console requests don't originate from the specified VPC.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Id": "Policy1415115909153",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPC-only",
       "Principal": "*",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket",
                    "arn:aws:s3:::amzn-s3-demo-bucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:SourceVpc": "vpc-1a2b3c4d"
         }
       }
     }
   ]
}
```

------

## Restricting access to an IPv6 VPC endpoint


The following example policy denies all Amazon S3 (`s3:`) actions on the *amzn-s3-demo-bucket* bucket and its objects, unless the request originates from the specified VPC endpoint (`vpce-0a1b2c3d4e5f6g`) and the source IP address matches the provided IPv6 CIDR block.

```
{
   "Version": "2012-10-17",
   "Id": "Policy1415115909154",
   "Statement": [
     {
       "Sid": "AccessSpecificIPv6VPCEOnly",
       "Principal": "*",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket",
                    "arn:aws:s3:::amzn-s3-demo-bucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:SourceVpce": "vpce-0a1b2c3d4e5f6g"
         },
         "NotIpAddress": {
           "aws:VpcSourceIp": "2001:db8::/32"
         }
       }
     }
   ]
}
```

For information on how to restrict access to your bucket based on specific IPs or VPCs, see [How do I allow only specific VPC endpoints or IP addresses to access my Amazon S3 bucket?](https://repost.aws/knowledge-center/block-s3-traffic-vpc-ip) in the AWS re:Post Knowledge Center.

# Examples of Amazon S3 bucket policies
Bucket policy examples

With Amazon S3 bucket policies, you can secure access to objects in your buckets, so that only users with the appropriate permissions can access them. You can even prevent authenticated users without the appropriate permissions from accessing your Amazon S3 resources.

This section presents examples of typical use cases for bucket policies. These sample policies use `amzn-s3-demo-bucket` as the resource value. To test these policies, replace the `user input placeholders` with your own information (such as your bucket name). 

To grant or deny permissions to a set of objects, you can use wildcard characters (`*`) in Amazon Resource Names (ARNs) and other values. For example, you can control access to groups of objects that begin with a common [prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) or end with a specific extension, such as `.html`. 
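The matching behavior can be sketched as follows, where `*` matches any sequence of characters and `?` matches a single character. This is a simplified approximation of IAM's wildcard matching, with a hypothetical helper name:

```python
import re

def wildcard_matches(pattern: str, value: str) -> bool:
    """Approximate IAM wildcard matching: '*' matches any sequence of
    characters and '?' matches exactly one character (a simplification)."""
    regex = "^" + re.escape(pattern).replace(r"\*", ".*").replace(r"\?", ".") + "$"
    return re.match(regex, value) is not None

# Objects under a common prefix, or ending with a specific extension:
assert wildcard_matches("reports/*", "reports/2024/summary.csv")
assert wildcard_matches("*.html", "index.html")
assert not wildcard_matches("reports/*", "images/logo.png")
```

For example, a `Resource` value of `arn:aws:s3:::amzn-s3-demo-bucket/reports/*` would apply a statement to every object whose key begins with `reports/`.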

For more information about AWS Identity and Access Management (IAM) policy language, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Note**  
When testing permissions by using the Amazon S3 console, you must grant additional permissions that the console requires—`s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket`. For an example walkthrough that grants permissions to users and tests those permissions by using the console, see [Controlling access to a bucket with user policies](walkthrough1.md).

Additional resources for creating bucket policies include the following:
+ For a list of the IAM policy actions, resources, and condition keys that you can use when creating a bucket policy, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.
+ For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).
+ For guidance on creating your S3 policy, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md).
+ To troubleshoot errors with a policy, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

If you're having trouble adding or updating a policy, see [Why do I get the error "Invalid principal in policy" when I try to update my Amazon S3 bucket policy?](https://repost.aws/knowledge-center/s3-invalid-principal-in-policy-error) in the AWS re:Post Knowledge Center.

**Topics**
+ [Granting read-only permission to a public anonymous user](#example-bucket-policies-anonymous-user)
+ [Requiring encryption](#example-bucket-policies-encryption)
+ [Managing buckets using canned ACLs](#example-bucket-policies-public-access)
+ [Managing object access with object tagging](#example-bucket-policies-object-tags)
+ [Managing object access by using global condition keys](#example-bucket-policies-global-condition-keys)
+ [Managing access based on HTTP or HTTPS requests](#example-bucket-policies-HTTP-HTTPS)
+ [Managing user access to specific folders](#example-bucket-policies-folders)
+ [Managing access for access logs](#example-bucket-policies-access-logs)
+ [Managing access to an Amazon CloudFront OAI](#example-bucket-policies-cloudfront)
+ [Managing access for Amazon S3 Storage Lens](#example-bucket-policies-lens)
+ [Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports](#example-bucket-policies-s3-inventory)
+ [Requiring MFA](#example-bucket-policies-MFA)
+ [Preventing users from deleting objects](#using-with-s3-actions-related-to-bucket-subresources)

## Granting read-only permission to a public anonymous user


You can use your policy settings to grant access to public anonymous users, which is useful if you're configuring your bucket as a static website. Granting access to public anonymous users requires you to disable the Block Public Access settings for your bucket. For more information about how to do this, and the policy required, see [Setting permissions for website access](WebsiteAccessPermissionsReqd.md). To learn how to set up more restrictive policies for the same purpose, see [How can I grant public read access to some objects in my Amazon S3 bucket?](https://repost.aws/knowledge-center/read-access-objects-s3-bucket) in the AWS re:Post Knowledge Center.

By default, Amazon S3 blocks public access to your account and buckets. If you want to use a bucket to host a static website, you can use these steps to edit your block public access settings. 

**Warning**  
Before you complete these steps, review [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md) to ensure that you understand and accept the risks involved with allowing public access. When you turn off block public access settings to make your bucket public, anyone on the internet can access your bucket. We recommend that you block all public access to your buckets.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the bucket that you have configured as a static website.

1. Choose **Permissions**.

1. Under **Block public access (bucket settings)**, choose **Edit**.

1. Clear **Block *all* public access**, and choose **Save changes**.  
![\[The Amazon S3 console, showing the block public access bucket settings.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/edit-public-access-clear.png)

   Amazon S3 turns off the Block Public Access settings for your bucket. To create a public static website, you might also have to [edit the Block Public Access settings](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html) for your account before adding a bucket policy. If the Block Public Access settings for your account are currently turned on, you see a note under **Block public access (bucket settings)**.
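In code, the same change corresponds to setting all four block-public-access flags to `false`. A sketch of the configuration follows (the boto3 call is shown as a comment because it requires AWS credentials; the bucket name is a placeholder):

```python
import json

# The four flags toggled by the console step above. Setting all four to
# False corresponds to clearing "Block all public access".
public_access_block = {
    "BlockPublicAcls": False,
    "IgnorePublicAcls": False,
    "BlockPublicPolicy": False,
    "RestrictPublicBuckets": False,
}

# With boto3 (not run here), the same change would be:
#   import boto3
#   boto3.client("s3").put_public_access_block(
#       Bucket="amzn-s3-demo-bucket",
#       PublicAccessBlockConfiguration=public_access_block,
#   )
print(json.dumps(public_access_block, indent=2))
```

The account-level setting is separate; if it is still enabled, public bucket policies remain blocked even after this change.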

## Requiring encryption


You can require server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), as shown in the following examples.

### Require SSE-KMS for all objects written to a bucket


The following example policy requires every object that is written to the bucket to be encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). If the object isn't encrypted with SSE-KMS, the request is denied.

------
#### [ JSON ]

****  

```
{
"Version":"2012-10-17",		 	 	 
"Id": "PutObjPolicy",
"Statement": [{
  "Sid": "DenyObjectsThatAreNotSSEKMS",
  "Principal": "*",
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
  "Condition": {
    "Null": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "true"
    }
  }
}]
}
```

------
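The `Null` condition operator tests whether a condition key is present in the request: `"Null": {"key": "true"}` matches when the key is absent. The deny logic of the statement above can be sketched as a simplified evaluation in Python (the helper name and request-context shape are illustrative, not the real IAM evaluator):

```python
def denies_put(request_context: dict) -> bool:
    """Simplified check for the DenyObjectsThatAreNotSSEKMS statement:
    the 'Null: true' test matches when the KMS key ID is absent from the
    request, so the Deny applies to any PutObject without SSE-KMS."""
    kms_key_header = "s3:x-amz-server-side-encryption-aws-kms-key-id"
    return kms_key_header not in request_context

# PutObject without SSE-KMS: denied by this statement
assert denies_put({})
# PutObject with an SSE-KMS key specified: not denied by this statement
assert not denies_put({"s3:x-amz-server-side-encryption-aws-kms-key-id":
                       "arn:aws:kms:us-east-1:111122223333:key/1234abcd"})
```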

### Require SSE-KMS with a specific AWS KMS key for all objects written to a bucket


The following example policy denies any objects from being written to the bucket if they aren’t encrypted with SSE-KMS by using a specific KMS key ID. Even if the objects are encrypted with SSE-KMS by using a per-request header or bucket default encryption, the objects can't be written to the bucket if they haven't been encrypted with the specified KMS key. Make sure to replace the KMS key ARN that's used in this example with your own KMS key ARN.

------
#### [ JSON ]

****  

```
{
"Version":"2012-10-17",		 	 	 
"Id": "PutObjPolicy",
"Statement": [{
  "Sid": "DenyObjectsThatAreNotSSEKMSWithSpecificKey",
  "Principal": "*",
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
  "Condition": {
    "ArnNotEqualsIfExists": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
    }
  }
}]
}
```

------

## Managing buckets using canned ACLs


### Granting permissions to multiple accounts to upload objects or set object ACLs for public access


The following example policy grants the `s3:PutObject` and `s3:PutObjectAcl` permissions to multiple AWS accounts. Also, the example policy requires that any requests for these operations must include the `public-read` [canned access control list (ACL)](acl-overview.md#canned-acl). For more information, see [Policy actions for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions) and [Policy condition keys for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-conditionkeys).

**Warning**  
The `public-read` canned ACL allows anyone in the world to view the objects in your bucket. Use caution when granting anonymous access to your Amazon S3 bucket or disabling block public access settings. When you grant anonymous access, anyone in the world can access your bucket. We recommend that you never grant anonymous access to your Amazon S3 bucket unless you specifically need to, such as with [static website hosting](WebsiteHosting.md). If you want to enable block public access settings for static website hosting, see [Tutorial: Configuring a static website on Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AddPublicReadCannedAcl",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root",
                    "arn:aws:iam::444455556666:root"
                ]
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": [
                        "public-read"
                    ]
                }
            }
        }
    ]
}
```

------

### Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control


The following example shows how to allow another AWS account to upload objects to your bucket while ensuring that you have full control of the uploaded objects. This policy grants a specific AWS account (*`111122223333`*) the ability to upload objects only if that account includes the `bucket-owner-full-control` canned ACL on upload. The `StringEquals` condition in the policy specifies the `s3:x-amz-acl` condition key to express the canned ACL requirement. For more information, see [Policy condition keys for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-conditionkeys). 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
     {
       "Sid":"PolicyForAllowUploadWithACL",
       "Effect":"Allow",
       "Principal":{"AWS":"111122223333"},
       "Action":"s3:PutObject",
       "Resource":"arn:aws:s3:::amzn-s3-demo-bucket/*",
       "Condition": {
         "StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"}
       }
     }
   ]
}
```

------

## Managing object access with object tagging


### Allow a user to read only objects that have a specific tag key and value


The following permissions policy limits a user to only reading objects that have the `environment: production` tag key and value. This policy uses the `s3:ExistingObjectTag` condition key to specify the tag key and value.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Principal":{
            "AWS":"arn:aws:iam::111122223333:role/JohnDoe"
         },
         "Effect":"Allow",
         "Action":[
            "s3:GetObject",
            "s3:GetObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket/*",
         "Condition":{
            "StringEquals":{
               "s3:ExistingObjectTag/environment":"production"
            }
         }
      }
   ]
}
```

------

### Restrict which object tag keys that users can add


The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object. The condition uses the `s3:RequestObjectTagKeys` condition key to specify the allowed tag keys, such as `Owner` or `CreationDate`. For more information, see [Creating a condition that tests multiple key values](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_multi-value-conditions.html) in the *IAM User Guide*.

With the `ForAnyValue` qualifier, the condition matches when at least one tag key in the request is in the allowed list. If you want to require that every tag key in the request be one of the authorized keys, use the `ForAllValues` qualifier instead.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:role/JohnDoe"
        ]
      },
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "ForAnyValue:StringEquals": {
          "s3:RequestObjectTagKeys": [
            "Owner",
            "CreationDate"
          ]
        }
      }
    }
  ]
}
```

------
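The difference between the `ForAnyValue` and `ForAllValues` set qualifiers can be sketched in Python (a simplified model of set comparison, not the real IAM evaluation engine):

```python
ALLOWED_KEYS = {"Owner", "CreationDate"}

def for_any_value_matches(request_keys: set) -> bool:
    """ForAnyValue:StringEquals — at least one requested key is allowed."""
    return bool(request_keys & ALLOWED_KEYS)

def for_all_values_matches(request_keys: set) -> bool:
    """ForAllValues:StringEquals — every requested key is allowed
    (also matches when the request carries no keys at all)."""
    return request_keys <= ALLOWED_KEYS

mixed = {"Owner", "Department"}           # one allowed key, one not
assert for_any_value_matches(mixed)       # ForAnyValue still matches
assert not for_all_values_matches(mixed)  # ForAllValues does not
```

This is why a request that tags an object with both an allowed and a disallowed key can still satisfy a `ForAnyValue` condition.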

### Require a specific tag key and value when allowing users to add object tags


The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object. The condition requires the user to include a specific tag key (such as `Project`) with the value set to `X`.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/JohnDoe"
        ]
      },
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:RequestObjectTag/Project": "X"
        }
      }
    }
  ]
}
```

------

### Allow a user to only add objects with a specific object tag key and value


The following example policy grants a user permission to perform the `s3:PutObject` action so that they can add objects to a bucket. However, the `Condition` statement restricts the tag keys and values that are allowed on the uploaded objects. In this example, the user can only add objects that have the specific tag key (`Department`) with the value set to `Finance` to the bucket.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
        "Principal":{
            "AWS":[
                 "arn:aws:iam::111122223333:user/JohnDoe"
         ]
        },
        "Effect": "Allow",
        "Action": [
            "s3:PutObject"
        ],
        "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket/*"
        ],
        "Condition": {
            "StringEquals": {
                "s3:RequestObjectTag/Department": "Finance"
            }
        }
    }]
}
```

------

## Managing object access by using global condition keys


[Global condition keys](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html) are condition context keys with an `aws` prefix. AWS services can support global condition keys or service-specific keys that include the service prefix. You can use the `Condition` element of a JSON policy to compare the keys in a request with the key values that you specify in your policy.

### Restrict access to only Amazon S3 server access log deliveries


In the following example bucket policy, the [`aws:SourceArn`](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) global condition key is used to compare the [Amazon Resource Name (ARN)](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) of the resource that makes a service-to-service request with the ARN that is specified in the policy. Using `aws:SourceArn` this way prevents the Amazon S3 service from being used as a [confused deputy](https://docs.aws.amazon.com//IAM/latest/UserGuide/confused-deputy.html) during transactions between services, so that only the Amazon S3 service can add objects to the Amazon S3 bucket.

This example bucket policy grants `s3:PutObject` permissions to only the logging service principal (`logging.s3.amazonaws.com`). 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowPutObjectS3ServerAccessLogsPolicy",
            "Principal": {
                "Service": "logging.s3.amazonaws.com"
            },
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket-logs/*",
            "Condition": {
                "StringEquals": {
                "aws:SourceAccount": "111122223333"
                },
                "ArnLike": {
                "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket1"
                }
            }
        },
        {
            "Sid": "RestrictToS3ServerAccessLogs",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket-logs/*",
            "Condition": {
                "ForAllValues:StringNotEquals": {
                    "aws:PrincipalServiceNamesList": "logging.s3.amazonaws.com"
                }
            }
        }
    ]
}
```

------

### Allow access to only your organization


If you want to require all [IAM principals](https://docs.aws.amazon.com//IAM/latest/UserGuide/intro-structure.html#intro-structure-principal) accessing a resource to be from an AWS account in your organization (including the AWS Organizations management account), you can use the `aws:PrincipalOrgID` global condition key.

To grant or restrict this type of access, define the `aws:PrincipalOrgID` condition and set the value to your [organization ID](https://docs.aws.amazon.com//organizations/latest/userguide/orgs_manage_org_details.html) in the bucket policy. The organization ID is used to control access to the bucket. When you use the `aws:PrincipalOrgID` condition, the permissions from the bucket policy are also applied to all new accounts that are added to the organization.

Here’s an example of a resource-based bucket policy that you can use to grant specific IAM principals in your organization direct access to your bucket. By adding the `aws:PrincipalOrgID` global condition key to your bucket policy, the principal account is now required to be in your organization to obtain access to the resource. Even if you accidentally specify an incorrect account when granting access, the [aws:PrincipalOrgID global condition key](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgid) acts as an additional safeguard. When this global key is used in a policy, it prevents all principals from outside of the specified organization from accessing the S3 bucket. Only principals from accounts in the listed organization are able to obtain access to the resource.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
        "Sid": "AllowGetObject",
        "Principal": {
            "AWS": "*"
        },
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
        "Condition": {
            "StringEquals": {
                "aws:PrincipalOrgID": ["o-aa111bb222"]
            }
        }
    }]
}
```

------

## Managing access based on HTTP or HTTPS requests


### Restrict access to only HTTPS requests


If you want to prevent potential attackers from manipulating network traffic, you can use HTTPS (TLS) to allow only encrypted connections and to block plain HTTP requests from accessing your bucket. To determine whether a request was sent by using HTTP or HTTPS, use the [`aws:SecureTransport`](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-securetransport) global condition key in your S3 bucket policy. The `aws:SecureTransport` condition key tests whether the request was sent by using SSL/TLS.

If `aws:SecureTransport` is `true`, the request was sent through HTTPS. If it's `false`, the request was sent through HTTP. You can then allow or deny access to your bucket based on the desired request scheme.

In the following example, the bucket policy explicitly denies HTTP requests. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
        "Sid": "RestrictToTLSRequestsOnly",
        "Action": "s3:*",
        "Effect": "Deny",
        "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket",
            "arn:aws:s3:::amzn-s3-demo-bucket/*"
        ],
        "Condition": {
            "Bool": {
                "aws:SecureTransport": "false"
            }
        },
        "Principal": "*"
    }]
}
```

------
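The evaluation of this statement can be sketched as follows (a simplified model with an illustrative helper name, not the real IAM evaluator):

```python
def denied_by_tls_policy(request_context: dict) -> bool:
    """Simplified evaluation of the RestrictToTLSRequestsOnly statement:
    the Deny applies when aws:SecureTransport is 'false', which
    indicates a plain HTTP request."""
    return request_context.get("aws:SecureTransport") == "false"

assert denied_by_tls_policy({"aws:SecureTransport": "false"})     # HTTP: denied
assert not denied_by_tls_policy({"aws:SecureTransport": "true"})  # HTTPS: not denied
```

Because the statement denies `s3:*` on both the bucket ARN and the object ARN, it covers bucket-level operations such as listing as well as object reads and writes.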

### Restrict access to a specific HTTP referer


Suppose that you have a website with the domain name *`www.example.com`* or *`example.com`* with links to photos and videos stored in your bucket named `amzn-s3-demo-bucket`. By default, all Amazon S3 resources are private, so only the AWS account that created the resources can access them. 

To allow read access to these objects from your website, you can add a bucket policy that allows the `s3:GetObject` permission with a condition that the `GET` request must originate from specific webpages. The following policy restricts requests by using the `StringLike` condition with the `aws:Referer` condition key.

Make sure that the browsers that you use include the HTTP `referer` header in the request.

**Warning**  
We recommend that you use caution when using the `aws:Referer` condition key. It is dangerous to include a publicly known HTTP referer header value. Unauthorized parties can use modified or custom browsers to provide any `aws:Referer` value that they choose. Therefore, do not use `aws:Referer` to prevent unauthorized parties from making direct AWS requests.   
The `aws:Referer` condition key is offered only to allow customers to protect their digital content, such as content stored in Amazon S3, from being referenced on unauthorized third-party sites. For more information, see [`aws:Referer`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-referer) in the *IAM User Guide*.

## Managing user access to specific folders


### Grant users access to specific folders


Suppose that you're trying to grant users access to a specific folder. If the IAM user and the S3 bucket belong to the same AWS account, then you can use an IAM policy to grant the user access to a specific bucket folder. With this approach, you don't need to update your bucket policy to grant access. You can add the IAM policy to an IAM role that multiple users can switch to. 

If the IAM identity and the S3 bucket belong to different AWS accounts, then you must grant cross-account access in both the IAM policy and the bucket policy. For more information about granting cross-account access, see [Bucket owner granting cross-account bucket permissions](https://docs.aws.amazon.com//AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html).
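For the same-account case, the IAM identity policy mirrors the bucket policy that follows, minus the `Principal` element (which isn't allowed in identity-based policies). The following is a minimal sketch, assuming the same bucket and folder names used in the bucket policy example:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["home/JohnDoe/*"]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/home/JohnDoe/*"
        }
    ]
}
```

You can attach a policy like this to an IAM role and let multiple users assume the role, as described in the preceding paragraph.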

The following example bucket policy grants `JohnDoe` full console access to only his folder (`home/JohnDoe/`). By creating a `home` folder and granting the appropriate permissions to your users, you can have multiple users share a single bucket. This policy consists of three `Allow` statements:
+ `AllowRootAndHomeListingOfCompanyBucket`: Allows the user (`JohnDoe`) to list objects at the root level of the `amzn-s3-demo-bucket` bucket and in the `home` folder. This statement also allows the user to search on the prefix `home/` by using the console.
+ `AllowListingOfUserFolder`: Allows the user (`JohnDoe`) to list all objects in the `home/JohnDoe/` folder and any subfolders.
+ `AllowAllS3ActionsInUserFolder`: Allows the user to perform all Amazon S3 actions by granting `Read`, `Write`, and `Delete` permissions. Permissions are limited to the user's home folder (`home/JohnDoe/`).

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowRootAndHomeListingOfCompanyBucket",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/JohnDoe"
                ]
            },
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket"],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": ["", "home/", "home/JohnDoe"],
                    "s3:delimiter": ["/"]
                }
            }
        },
        {
            "Sid": "AllowListingOfUserFolder",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/JohnDoe"
                ]
            },
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket"],
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["home/JohnDoe/*"]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/JohnDoe"
                ]
            },
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket/home/JohnDoe/*"]
        }
    ]
}
```

------

## Managing access for access logs


### Grant access to Application Load Balancer for enabling access logs


When you enable access logs for Application Load Balancer, you must specify the name of the S3 bucket where the load balancer will [store the logs](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#access-log-create-bucket). The bucket must have an [attached policy](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#attach-bucket-policy) that grants Elastic Load Balancing permission to write to the bucket.

In the following example, the bucket policy grants Elastic Load Balancing (ELB) permission to write the access logs to the bucket:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Principal": {
                "AWS": "arn:aws:iam::elb-account-id:root"
            },
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/prefix/AWSLogs/111122223333/*"
        }
    ]
}
```

------

**Note**  
Make sure to replace `elb-account-id` with the AWS account ID for Elastic Load Balancing for your AWS Region. For the list of Elastic Load Balancing Regions, see [Attach a policy to your Amazon S3 bucket](https://docs.aws.amazon.com//elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy) in the *Elastic Load Balancing User Guide*.

If your AWS Region does not appear in the supported Elastic Load Balancing Regions list, use the following policy, which grants permissions to the specified log delivery service.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
       "Principal": {
         "Service": "logdelivery.elasticloadbalancing.amazonaws.com"
          },
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/prefix/AWSLogs/111122223333/*"
    }
  ]
}
```

------

Then, enable your [Elastic Load Balancing access logs](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#enable-access-logs). You can [verify your bucket permissions](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#verify-bucket-permissions) by checking for the test file that Elastic Load Balancing creates in your bucket.

## Managing access to an Amazon CloudFront OAI


### Grant permission to an Amazon CloudFront OAI


The following example bucket policy grants a CloudFront origin access identity (OAI) permission to get (read) all objects in your S3 bucket. You can use a CloudFront OAI to allow users to access objects in your bucket through CloudFront but not directly through Amazon S3. For more information, see [Restricting access to Amazon S3 content by using an Origin Access Identity](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*.

The following policy uses the OAI's ID as the policy's `Principal`. For more information about migrating from an OAI to origin access control (OAC), see [Migrating from origin access identity (OAI) to origin access control (OAC)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#migrate-from-oai-to-oac) in the *Amazon CloudFront Developer Guide*.

To use this example:
+ Replace `EH1HDMB1FH2TC` with the OAI's ID. To find the OAI's ID, see the [Origin Access Identity page](https://console.aws.amazon.com/cloudfront/home?region=us-east-1#oai:) on the CloudFront console, or use [ListCloudFrontOriginAccessIdentities](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListCloudFrontOriginAccessIdentities.html) in the CloudFront API.
+ Replace `amzn-s3-demo-bucket` with the name of your bucket.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EH1HDMB1FH2TC"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

------
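If you migrate from an OAI to origin access control (OAC), the bucket policy instead names the CloudFront service principal and scopes access to your distribution with the `AWS:SourceArn` condition key. The following is a sketch of that pattern; the account ID and distribution ID in the `SourceArn` are placeholders that you must replace with your own values.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
                }
            }
        }
    ]
}
```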

## Managing access for Amazon S3 Storage Lens


### Grant permissions for Amazon S3 Storage Lens


S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. You can also send a daily metrics report in CSV or Parquet format to a general purpose S3 bucket or export the metrics directly to an AWS-managed S3 table bucket.

S3 Storage Lens can export your aggregated storage usage metrics to an Amazon S3 bucket for further analysis. The bucket where S3 Storage Lens places its metrics exports is known as the *destination bucket*. When setting up your S3 Storage Lens metrics export, you must have a bucket policy for the destination bucket. For more information, see [Monitoring your storage activity and usage with Amazon S3 Storage Lens](storage_lens.md).

The following example bucket policy grants Amazon S3 permission to write objects (`PUT` requests) to a destination bucket. You use a bucket policy like this on the destination bucket when setting up an S3 Storage Lens metrics export.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3StorageLensExamplePolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "storage-lens.s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-destination-bucket/destination-prefix/StorageLens/111122223333/*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceAccount": "111122223333",
                    "aws:SourceArn": "arn:aws:s3:region-code:111122223333:storage-lens/storage-lens-dashboard-configuration-id"
                }
            }
        }
    ]
}
```

------

When you're setting up an S3 Storage Lens organization-level metrics export, replace the `Resource` element in the previous bucket policy with the following value.

```
"Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/destination-prefix/StorageLens/your-organization-id/*",
```

## Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports


### Grant permissions for S3 Inventory and S3 analytics


S3 Inventory creates lists of the objects in a bucket, and S3 analytics Storage Class Analysis export creates output files of the data used in the analysis. The bucket that the inventory lists the objects for is called the *source bucket*. The bucket where the inventory file or the analytics export file is written to is called a *destination bucket*. When setting up an inventory or an analytics export, you must create a bucket policy for the destination bucket. For more information, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md) and [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md).

The following example bucket policy grants Amazon S3 permission to write objects (`PUT` requests) from the account for the source bucket to the destination bucket. You use a bucket policy like this on the destination bucket when setting up S3 Inventory and S3 analytics export.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InventoryAndAnalyticsExamplePolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
            ],
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"
                },
                "StringEquals": {
                    "aws:SourceAccount": "111122223333",
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
```

------

### Control S3 Inventory report configuration creation


[Cataloging and analyzing your data with S3 Inventory](storage-inventory.md) creates lists of the objects in an S3 bucket and the metadata for each object. The `s3:PutInventoryConfiguration` permission allows a user to create an inventory configuration that includes all object metadata fields that are available by default and to specify the destination bucket to store the inventory. A user with read access to objects in the destination bucket can access all object metadata fields that are available in the inventory report. For more information about the metadata fields that are available in S3 Inventory, see [Amazon S3 Inventory list](storage-inventory.md#storage-inventory-contents).

To restrict a user from configuring an S3 Inventory report, remove the `s3:PutInventoryConfiguration` permission from the user.
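If the permission was granted through a broad policy that you can't easily narrow, you can instead add an explicit deny. The following is a minimal sketch of such a statement for the source bucket's policy; the user ARN and bucket name are placeholders.

```
{
    "Sid": "DenyInventoryConfigurationCreation",
    "Effect": "Deny",
    "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Ana"
    },
    "Action": "s3:PutInventoryConfiguration",
    "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket"
}
```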

Some object metadata fields in S3 Inventory report configurations are optional, meaning that they're available by default but can be restricted when you grant a user the `s3:PutInventoryConfiguration` permission. You can control whether users can include these optional metadata fields in their reports by using the `s3:InventoryAccessibleOptionalFields` condition key. For a list of the optional metadata fields available in S3 Inventory, see [PutBucketInventoryConfiguration](https://docs.aws.amazon.com//AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html#API_PutBucketInventoryConfiguration_RequestBody) in the *Amazon Simple Storage Service API Reference*.

To grant a user permission to create an inventory configuration with specific optional metadata fields, use the `s3:InventoryAccessibleOptionalFields` condition key to refine the conditions in your bucket policy. 

The following example policy grants a user (`Ana`) permission to create an inventory configuration conditionally. The `ForAllValues:StringEquals` condition in the policy uses the `s3:InventoryAccessibleOptionalFields` condition key to specify the two allowed optional metadata fields, namely `Size` and `StorageClass`. So, when `Ana` is creating an inventory configuration, the only optional metadata fields that she can include are `Size` and `StorageClass`. 

------
#### [ JSON ]

****  

```
{
    "Id": "InventoryConfigPolicy",
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowInventoryCreationConditionally",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::111122223333:user/Ana"
        },
        "Action": "s3:PutInventoryConfiguration",
        "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket",
        "Condition": {
            "ForAllValues:StringEquals": {
                "s3:InventoryAccessibleOptionalFields": [
                    "Size",
                    "StorageClass"
                ]
            }
        }
    }]
}
```

------

To restrict a user from configuring an S3 Inventory report that includes specific optional metadata fields, add an explicit `Deny` statement to the bucket policy for the source bucket. The following example bucket policy denies the user `Ana` from creating an inventory configuration in the source bucket `amzn-s3-demo-source-bucket` that includes the optional `ObjectAccessControlList` or `ObjectOwner` metadata fields. The user `Ana` can still create an inventory configuration with other optional metadata fields.

```
{
    "Id": "InventoryConfigSomeFields",
    "Version": "2012-10-17",
    "Statement": [{
            "Sid": "AllowInventoryCreation",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Ana"
            },
            "Action": "s3:PutInventoryConfiguration",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket"
        },
        {
            "Sid": "DenyCertainInventoryFieldCreation",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Ana"
            },
            "Action": "s3:PutInventoryConfiguration",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "s3:InventoryAccessibleOptionalFields": [
                        "ObjectOwner",
                        "ObjectAccessControlList"
                    ]
                }
            }
        }
    ]
}
```

**Note**  
The use of the `s3:InventoryAccessibleOptionalFields` condition key in bucket policies doesn't affect the delivery of inventory reports based on the existing inventory configurations. 

**Important**  
We recommend that you use `ForAllValues` with an `Allow` effect or `ForAnyValue` with a `Deny` effect, as shown in the prior examples.  
Don't use `ForAllValues` with a `Deny` effect or `ForAnyValue` with an `Allow` effect, because these combinations can be overly restrictive and block inventory configuration deletion.  
To learn more about the `ForAllValues` and `ForAnyValue` condition set operators, see [Multivalued context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-single-vs-multi-valued-context-keys.html#reference_policies_condition-multi-valued-context-keys) in the *IAM User Guide*.

## Requiring MFA


Amazon S3 supports MFA-protected API access, a feature that can enforce multi-factor authentication (MFA) for access to your Amazon S3 resources. Multi-factor authentication provides an extra level of security that you can apply to your AWS environment. MFA is a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code. For more information, see [AWS Multi-Factor Authentication](https://aws.amazon.com/mfa/). You can require MFA for any requests to access your Amazon S3 resources. 

To enforce the MFA requirement, use the `aws:MultiFactorAuthAge` condition key in a bucket policy. IAM users can access Amazon S3 resources by using temporary credentials issued by the AWS Security Token Service (AWS STS). You provide the MFA code at the time of the AWS STS request. 

When Amazon S3 receives a request with multi-factor authentication, the `aws:MultiFactorAuthAge` condition key provides a numeric value that indicates how long ago (in seconds) the temporary credential was created. If the temporary credential provided in the request was not created by using an MFA device, this key value is null (absent). In a bucket policy, you can add a condition to check this value, as shown in the following example. 

This example policy denies any Amazon S3 operation on the *`/taxdocuments`* folder in the `amzn-s3-demo-bucket` bucket if the request is not authenticated by using MFA. To learn more about MFA, see [Using Multi-Factor Authentication (MFA) in AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html) in the *IAM User Guide*.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Id": "123",
    "Statement": [
      {
        "Sid": "",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": { "Null": { "aws:MultiFactorAuthAge": true }}
      }
    ]
 }
```

------

The `Null` condition in the `Condition` block evaluates to `true` if the `aws:MultiFactorAuthAge` condition key value is null, indicating that the temporary security credentials in the request were created without an MFA device. 

The following bucket policy is an extension of the preceding bucket policy and includes two policy statements. One statement allows the `s3:GetObject` permission on the bucket (`amzn-s3-demo-bucket`) for the account `111122223333`. The other statement further restricts access to the `amzn-s3-demo-bucket/taxdocuments` folder in the bucket by requiring MFA. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Id": "123",
    "Statement": [
      {
        "Sid": "DenyRequestsWithoutMFA",
        "Effect": "Deny",
        "Principal": {
            "AWS": "arn:aws:iam::111122223333:root"
        },
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": { "Null": { "aws:MultiFactorAuthAge": true } }
      },
      {
        "Sid": "AllowGetObject",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::111122223333:root"
        },
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
      }
    ]
 }
```

------

You can optionally use a numeric condition to limit the duration for which the `aws:MultiFactorAuthAge` key is valid. The duration that you specify with the `aws:MultiFactorAuthAge` key is independent of the lifetime of the temporary security credential that's used in authenticating the request. 

For example, the following bucket policy, in addition to requiring MFA authentication, also checks how long ago the temporary session was created. The policy denies any operation if the `aws:MultiFactorAuthAge` key value indicates that the temporary session was created more than an hour ago (3,600 seconds). 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Id": "123",
    "Statement": [
      {
        "Sid": "",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": {"Null": {"aws:MultiFactorAuthAge": true }}
      },
      {
        "Sid": "",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 3600 }}
       },
       {
         "Sid": "",
         "Effect": "Allow",
         "Principal": "*",
         "Action": ["s3:GetObject"],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
       }
    ]
 }
```

------

## Preventing users from deleting objects


By default, users have no permissions. But as you create policies, you might grant users permissions that you didn't intend to grant. To avoid such permission loopholes, you can write a stricter access policy by adding an explicit deny. 

To explicitly block users or accounts from deleting objects, you must add an explicit deny for the `s3:DeleteObject`, `s3:DeleteObjectVersion`, and `s3:PutLifecycleConfiguration` actions to your bucket policy. You must deny all three actions because a user can delete objects either by explicitly calling the `DELETE Object` API operations or by configuring the objects' lifecycle (see [Managing the lifecycle of objects](object-lifecycle-mgmt.md)) so that Amazon S3 removes the objects when their lifetime expires.

In the following policy example, you explicitly deny `DELETE Object` permissions to the user `MaryMajor`. An explicit `Deny` statement always supersedes any other permission granted.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/MaryMajor"
      },
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetBucketAcl"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket1",
        "arn:aws:s3:::amzn-s3-demo-bucket1/*"
      ]
    },
    {
      "Sid": "statement2",
      "Effect": "Deny",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/MaryMajor"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:PutLifecycleConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket1",
        "arn:aws:s3:::amzn-s3-demo-bucket1/*"
      ]
    }
  ]
}
```

------

# Bucket policy examples using condition keys
Condition key examples

You can use access policy language to specify conditions when you grant permissions. You can use the optional `Condition` element, or `Condition` block, to specify conditions for when a policy is in effect. 

For policies that use Amazon S3 condition keys for object and bucket operations, see the following examples. For more information about condition keys, see [Policy condition keys for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-conditionkeys). For a complete list of Amazon S3 actions, condition keys, and resources that you can specify in policies, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Examples: Amazon S3 condition keys for object operations
Object operation examples

The following examples show how you can use Amazon S3‐specific condition keys for object operations. For a complete list of Amazon S3 actions, condition keys, and resources that you can specify in policies, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

Several of the example policies show how you can use condition keys with [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) operations. PUT Object operations allow access control list (ACL)–specific headers that you can use to grant ACL-based permissions. By using these condition keys, you can set a condition to require specific access permissions when the user uploads an object. You can also grant ACL-based permissions with the `PutObjectAcl` operation. For more information, see [PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) in the *Amazon Simple Storage Service API Reference*. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md).
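For example, a bucket owner can require that uploads grant the bucket owner full control by conditioning the `s3:PutObject` permission on the `s3:x-amz-acl` key, the same canned-ACL condition used in the S3 Storage Lens and S3 Inventory policies earlier in this topic. A sketch of such a `Condition` block:

```
"Condition": {
    "StringEquals": {
        "s3:x-amz-acl": "bucket-owner-full-control"
    }
}
```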

**Topics**
+ [Example 1: Granting `s3:PutObject` permission requiring that objects be stored using server-side encryption](#putobject-require-sse-2)
+ [Example 2: Granting `s3:PutObject` permission to copy objects with a restriction on the copy source](#putobject-limit-copy-source-3)
+ [Example 3: Granting access to a specific version of an object](#getobjectversion-limit-access-to-specific-version-3)
+ [Example 4: Granting permissions based on object tags](#example-object-tagging-access-control)
+ [Example 5: Restricting access by the AWS account ID of the bucket owner](#example-object-resource-account)
+ [Example 6: Requiring a minimum TLS version](#example-object-tls-version)
+ [Example 7: Excluding certain principals from a `Deny` statement](#example-exclude-principal-from-deny-statement)
+ [Example 8: Enforcing clients to conditionally upload objects based on object key names or ETags](#example-conditional-writes-enforce)

### Example 1: Granting `s3:PutObject` permission requiring that objects be stored using server-side encryption
Example 1: Requiring objects to be stored using server-side encryption

Suppose that Account A owns a bucket. The account administrator wants to grant Jane, a user in Account A, permission to upload objects with the condition that Jane always request server-side encryption with Amazon S3 managed keys (SSE-S3). The Account A administrator can specify this requirement by using the `s3:x-amz-server-side-encryption` condition key as shown. The key-value pair in the following `Condition` block specifies the `s3:x-amz-server-side-encryption` condition key and SSE-S3 (`AES256`) as the encryption type:

```
"Condition": {
     "StringNotEquals": {
         "s3:x-amz-server-side-encryption": "AES256"
     }}
```
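Because the condition uses `StringNotEquals`, it belongs in a `Deny` statement: the upload is refused whenever the `x-amz-server-side-encryption` header is missing or specifies anything other than `AES256`. A sketch of the full statement (the `Sid` is illustrative):

```
{
    "Sid": "DenyUnencryptedObjectUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption": "AES256"
        }
    }
}
```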

When testing this permission by using the AWS CLI, you must add the required encryption by using the `--server-side-encryption` parameter, as shown in the following example. To use this example command, replace the `user input placeholders` with your own information. 

```
aws s3api put-object --bucket amzn-s3-demo-bucket --key HappyFace.jpg --body c:\HappyFace.jpg --server-side-encryption "AES256" --profile AccountAadmin
```

### Example 2: Granting `s3:PutObject` permission to copy objects with a restriction on the copy source
Example 2: Copying objects with a restriction on the copy source

In a `PUT` object request, when you specify a source object, the request is a copy operation (see [PUT Object - Copy](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html)). Accordingly, the bucket owner can grant a user permission to copy objects with restrictions on the source, for example:
+ Allow copying objects only from the specified source bucket (for example, `amzn-s3-demo-source-bucket`).
+ Allow copying objects from the specified source bucket and only the objects whose key name starts with a specific prefix, such as *`public/`* (for example, `amzn-s3-demo-source-bucket/public/*`).
+ Allow copying only a specific object from the source bucket (for example, `amzn-s3-demo-source-bucket/example.jpg`).

The following bucket policy grants a user (`Dave`) the `s3:PutObject` permission. This policy allows him to copy objects only with a condition that the request include the `s3:x-amz-copy-source` header and that the header value specify the `/amzn-s3-demo-source-bucket/public/*` key name prefix. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
       {
            "Sid": "Grant PutObject permission to user in your own account",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
        },
        {
            "Sid": "Deny your user permission to upload object if copy source is not /bucket/prefix",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
            "Condition": {
                "StringNotLike": {
                    "s3:x-amz-copy-source": "amzn-s3-demo-source-bucket/public/*"
                }
            }
        }
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the permission by using the AWS CLI `copy-object` command. You specify the source by adding the `--copy-source` parameter; the key name prefix must match the prefix that's allowed in the policy. You provide the credentials for the user Dave by using the `--profile` parameter. For more information about setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

```
aws s3api copy-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg 
--copy-source amzn-s3-demo-source-bucket/public/PublicHappyFace1.jpg --profile AccountADave
```

**Give permission to copy only a specific object**  
The preceding policy uses the `StringNotLike` condition. To grant permission to copy only a specific object, change the condition from `StringNotLike` to `StringNotEquals` and then specify the exact object key, as shown in the following example. To use this example condition, replace the `user input placeholders` with your own information.

```
"Condition": {
       "StringNotEquals": {
           "s3:x-amz-copy-source": "amzn-s3-demo-source-bucket/public/PublicHappyFace1.jpg"
       }
}
```

### Example 3: Granting access to a specific version of an object


Suppose that Account A owns a versioning-enabled bucket. The bucket has several versions of the `HappyFace.jpg` object. The Account A administrator now wants to grant the user `Dave` permission to get only a specific version of the object. The account administrator can accomplish this by granting the user `Dave` the `s3:GetObjectVersion` permission conditionally, as shown in the following example. The key-value pair in the `Condition` block specifies the `s3:VersionId` condition key. In this case, to retrieve the object from the specified versioning-enabled bucket, `Dave` needs to know the exact object version ID. To use this example policy, replace the `user input placeholders` with your own information.

For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) in the *Amazon Simple Storage Service API Reference*. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:GetObjectVersion",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/HappyFace.jpg"
        },
        {
            "Sid": "statement2",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:GetObjectVersion",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/HappyFace.jpg",
            "Condition": {
                "StringNotEquals": {
                    "s3:VersionId": "AaaHbAQitwiL_h47_44lRO2DDfLlBO5e"
                }
            }
        }
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the permissions in this policy by using the AWS CLI `get-object` command with the `--version-id` parameter to identify the specific object version to retrieve. The command retrieves the specified version of the object and saves it to the `OutputFile.jpg` file.

```
aws s3api get-object --bucket amzn-s3-demo-bucket --key HappyFace.jpg OutputFile.jpg --version-id AaaHbAQitwiL_h47_44lRO2DDfLlBO5e --profile AccountADave
```

### Example 4: Granting permissions based on object tags

For examples of how to use object tagging condition keys with Amazon S3 operations, see [Tagging and access control policies](tagging-and-policies.md).
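As a quick illustration (a minimal sketch, not taken from the linked topic; the tag key *`security`* and value *`public`* are hypothetical), a `Condition` block can reference an existing object tag through the `s3:ExistingObjectTag/<key>` condition key:

```
"Condition": {
    "StringEquals": {
        "s3:ExistingObjectTag/security": "public"
    }
}
```

Attached to an `Allow` statement for `s3:GetObject`, a condition like this limits reads to objects that carry the matching tag.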

### Example 5: Restricting access by the AWS account ID of the bucket owner

You can use either the `aws:ResourceAccount` or `s3:ResourceAccount` condition key to write IAM or virtual private cloud (VPC) endpoint policies that restrict user, role, or application access to the Amazon S3 buckets that are owned by a specific AWS account ID. You can use these condition keys to restrict clients within your VPC from accessing buckets that you don't own.

However, be aware that some AWS services rely on access to AWS managed buckets. Therefore, using the `aws:ResourceAccount` or `s3:ResourceAccount` key in your IAM policy might also affect access to these resources. For more information, see the following resources:
+ [Restrict access to buckets in a specified AWS account](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#bucket-policies-s3) in the *AWS PrivateLink Guide*
+ [Restrict access to buckets that Amazon ECR uses](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html#ecr-minimum-s3-perms) in the *Amazon ECR Guide*
+ [Provide required access to Systems Manager for AWS managed Amazon S3 buckets](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-minimum-s3-permissions.html) in the *AWS Systems Manager Guide*

For more information about the `aws:ResourceAccount` and `s3:ResourceAccount` condition keys and examples that show how to use them, see [Limit access to Amazon S3 buckets owned by specific AWS accounts](https://aws.amazon.com/blogs/storage/limit-access-to-amazon-s3-buckets-owned-by-specific-aws-accounts/) in the *AWS Storage Blog*.
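For reference, a minimal sketch of a VPC endpoint policy that uses this condition key follows. The account ID `111122223333` and the choice of actions are placeholders; adapt them to your own environment:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessToBucketsInMyAccountOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "111122223333"
                }
            }
        }
    ]
}
```

Because a VPC endpoint policy is evaluated in addition to IAM and bucket policies, a sketch like this only narrows access; it doesn't grant permissions by itself.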

### Example 6: Requiring a minimum TLS version


You can use the `s3:TlsVersion` condition key to write IAM policies, virtual private cloud (VPC) endpoint policies, or bucket policies that restrict user or application access to Amazon S3 buckets based on the TLS version that's used by the client. You can use this condition key to write policies that require a minimum TLS version. 

**Note**  
When AWS services make calls to other AWS services on your behalf (service-to-service calls), certain network-specific authorization context is redacted, including `s3:TlsVersion`, `aws:SecureTransport`, `aws:SourceIp`, and `aws:VpcSourceIp`. If your policy uses these condition keys with `Deny` statements, AWS service principals might be unintentionally blocked. To allow AWS services to work properly while maintaining your security requirements, exclude service principals from your `Deny` statements by adding the `aws:PrincipalIsAWSService` condition key with a value of `false`. For example:  

```
{
  "Effect": "Deny",
  "Action": "s3:*",
  "Resource": "*",
  "Condition": {
    "Bool": {
      "aws:SecureTransport": "false",
      "aws:PrincipalIsAWSService": "false"
    }
  }
}
```
This policy denies access to S3 operations when HTTPS is not used (`aws:SecureTransport` is false), but only for non-AWS service principals. This ensures your conditional restrictions apply to all principals except AWS service principals.

**Example**  
The following example bucket policy *denies* `PutObject` requests by clients that have a TLS version earlier than 1.2 (for example, 1.1 or 1.0). To use this example policy, replace the `user input placeholders` with your own information.    
****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1",
                "arn:aws:s3:::amzn-s3-demo-bucket1/*"
            ],
            "Condition": {
                "NumericLessThan": {
                    "s3:TlsVersion": 1.2
                }
            }
        }
    ]
}
```

**Example**  
The following example bucket policy *allows* `PutObject` requests by clients that have a TLS version later than 1.1 (for example, 1.2 or 1.3):    
****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1",
                "arn:aws:s3:::amzn-s3-demo-bucket1/*"
            ],
            "Condition": {
                "NumericGreaterThan": {
                    "s3:TlsVersion": 1.1
                }
            }
        }
    ]
}
```

### Example 7: Excluding certain principals from a `Deny` statement

The following bucket policy denies `s3:GetObject` access to `amzn-s3-demo-bucket`, except for principals in the AWS account *`123456789012`*. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessFromPrincipalNotInSpecificAccount",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalAccount": [
            "123456789012"
          ]
        }
      }
    }
  ]
}
```

------

### Example 8: Enforcing clients to conditionally upload objects based on object key names or ETags


With conditional writes, you can add a header to your `WRITE` requests to specify preconditions for your S3 operation. If the precondition isn't met, the S3 operation fails. For example, you can prevent overwrites of existing data by validating that no object with the same key name already exists in your bucket during object upload. Alternatively, you can check an object's entity tag (ETag) in Amazon S3 before writing an object.

For bucket policy examples that use conditions in a bucket policy to enforce conditional writes, see [Enforce conditional writes on Amazon S3 buckets](conditional-writes-enforce.md).
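As a rough sketch of the kind of enforcement that the linked topic covers, a bucket policy can deny uploads that omit the `If-None-Match` precondition header. The `s3:if-none-match` condition key and the `Null`-operator pattern shown here are assumptions for illustration; verify the exact key name in the linked topic before using it:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireIfNoneMatchOnUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "Null": {
                    "s3:if-none-match": "true"
                }
            }
        }
    ]
}
```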

## Examples: Amazon S3 condition keys for bucket operations

The following example policies show how you can use Amazon S3 specific condition keys for bucket operations.

**Topics**
+ [Example 1: Granting `s3:GetObject` permission with a condition on an IP address](#AvailableKeys-iamV2)
+ [Example 2: Getting a list of objects in a bucket with a specific prefix](#condition-key-bucket-ops-2)
+ [Example 3: Setting the maximum number of keys](#example-numeric-condition-operators)

### Example 1: Granting `s3:GetObject` permission with a condition on an IP address


You can give authenticated users permission to use the `s3:GetObject` action if the request originates from a specific range of IP addresses (for example, `192.0.2.*`), unless the IP address is one that you want to exclude (for example, `192.0.2.188`). In the `Condition` block, `IpAddress` and `NotIpAddress` are conditions, and each condition is provided a key-value pair for evaluation. Both of the key-value pairs in this example use the `aws:SourceIp` AWS-wide key. To use this example policy, replace the `user input placeholders` with your own information.

**Note**  
The `IpAddress` and `NotIpAddress` values specified in the `Condition` block use CIDR notation, as described in RFC 4632. For more information, see [http://www.rfc-editor.org/rfc/rfc4632.txt](http://www.rfc-editor.org/rfc/rfc4632.txt).

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": "*",
            "Action":"s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition" : {
                "IpAddress" : {
                    "aws:SourceIp": "192.0.2.0/24" 
                },
                "NotIpAddress" : {
                    "aws:SourceIp": "192.0.2.188/32" 
                } 
            } 
        } 
    ]
}
```

------

You can also use other AWS-wide condition keys in Amazon S3 policies. For example, you can specify the `aws:SourceVpce` and `aws:SourceVpc` condition keys in bucket policies for VPC endpoints. For specific examples, see [Controlling access from VPC endpoints with bucket policies](example-bucket-policies-vpc-endpoint.md).
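For instance, a bare `Condition` fragment that restricts access to requests arriving through one specific VPC endpoint might look like the following sketch (the endpoint ID `vpce-1a2b3c4d` is a placeholder):

```
"Condition": {
    "StringEquals": {
        "aws:SourceVpce": "vpce-1a2b3c4d"
    }
}
```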

**Note**  
For some AWS global condition keys, only certain resource types are supported. Therefore, check whether Amazon S3 supports the global condition key and resource type that you want to use, or if you'll need to use an Amazon S3 specific condition key instead. For a complete list of supported resource types and condition keys for Amazon S3, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.  
For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

### Example 2: Getting a list of objects in a bucket with a specific prefix

You can use the `s3:prefix` condition key to limit the response of the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) API operation to key names with a specific prefix. If you are the bucket owner, you can use this condition key to restrict a user to listing the contents of a specific prefix in the bucket. The `s3:prefix` condition key is useful if the objects in the bucket are organized by key name prefixes. 

The Amazon S3 console uses key name prefixes to show a folder concept. Only the console supports the concept of folders; the Amazon S3 API supports only buckets and objects. For example, if you have two objects with the key names *`public/object1.jpg`* and *`public/object2.jpg`*, the console shows the objects under the *`public`* folder. In the Amazon S3 API, these are objects with prefixes, not objects in folders. For more information about using prefixes and delimiters to filter access permissions, see [Controlling access to a bucket with user policies](walkthrough1.md). 

In the following scenario, the bucket owner and the parent account to which the user belongs are the same. So the bucket owner can use either a bucket policy or a user policy to grant access. For more information about other condition keys that you can use with the `ListObjectsV2` API operation, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html).

**Note**  
If the bucket is versioning-enabled, to list the objects in the bucket, you must grant the `s3:ListBucketVersions` permission in the following policies, instead of the `s3:ListBucket` permission. The `s3:ListBucketVersions` permission also supports the `s3:prefix` condition key. 

**User policy**  
The following user policy grants the `s3:ListBucket` permission (see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)) with a `Condition` statement that requires the user to specify a prefix in the request with a value of `projects`. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Action": "s3:ListBucket",
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringEquals" : {
                 "s3:prefix": "projects" 
             }
          } 
       },
      {
         "Sid":"statement2",
         "Effect":"Deny",
         "Action": "s3:ListBucket",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringNotEquals" : {
                 "s3:prefix": "projects" 
             }
          } 
       }         
    ]
}
```

------

The `Condition` statement restricts the user to listing only object keys that have the `projects` prefix. The added explicit `Deny` statement denies the user permission to list keys with any other prefix, no matter what other permissions the user might have. For example, it's possible that the user could later get permission to list object keys without any restriction, either through updates to the preceding user policy or through a bucket policy. Because explicit `Deny` statements always override `Allow` statements, if the user tries to list keys other than those that have the `projects` prefix, the request is denied. 

**Bucket policy**  
If you add a `Principal` element that identifies the user to the preceding user policy, you have a bucket policy, as shown in the following example. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/bucket-owner"
         },  
         "Action":  "s3:ListBucket",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringEquals" : {
                 "s3:prefix": "projects" 
             }
          } 
       },
      {
         "Sid":"statement2",
         "Effect":"Deny",
         "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/bucket-owner"
         },  
         "Action": "s3:ListBucket",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringNotEquals" : {
                 "s3:prefix": "projects"  
             }
          } 
       }         
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the policy by using the following `list-objects` AWS CLI command. In the command, you provide user credentials by using the `--profile` parameter. For more information about setting up and using the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

```
aws s3api list-objects --bucket amzn-s3-demo-bucket --prefix projects --profile AccountA
```

### Example 3: Setting the maximum number of keys


You can use the `s3:max-keys` condition key to set the maximum number of keys that a requester can return in a [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) or [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html) request. By default, these API operations return up to 1,000 keys. For a list of numeric condition operators that you can use with `s3:max-keys` and accompanying examples, see [Numeric Condition Operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html#Conditions_Numeric) in the *IAM User Guide*.
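For example, the following sketch of a policy statement (the principal, the bucket name, and the limit of `50` are placeholders) denies list requests that ask for more than 50 keys per page:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LimitListPageSize",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
            "Condition": {
                "NumericGreaterThan": {
                    "s3:max-keys": "50"
                }
            }
        }
    ]
}
```

Note that if a request doesn't specify `max-keys` at all, the condition key is absent and this `Deny` statement doesn't match; if you want to require the parameter itself, you would pair this with a `Null` condition check.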

# Identity-based policies for Amazon S3

By default, users and roles don't have permission to create or modify Amazon S3 resources. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies.

To learn how to create an IAM identity-based policy by using these example JSON policy documents, see [Create IAM policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) in the *IAM User Guide*.

For details about actions and resource types defined by Amazon S3, including the format of the ARNs for each of the resource types, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Topics**
+ [Policy best practices](#security_iam_service-with-iam-policy-best-practices)
+ [Controlling access to a bucket with user policies](walkthrough1.md)
+ [Identity-based policy examples for Amazon S3](example-policies-s3.md)

## Policy best practices


Identity-based policies determine whether someone can create, access, or delete Amazon S3 resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+ **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the *AWS managed policies* that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) or [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.
+ **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as *least-privilege permissions*. For more information about using IAM to apply permissions, see [ Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.
+ **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as CloudFormation. For more information, see [ IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.
+ **Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions** – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see [Validate policies with IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*.
+ **Require multi-factor authentication (MFA)** – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see [ Secure API access with MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*.
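As a minimal illustration of the last point, an MFA requirement can be expressed with the `aws:MultiFactorAuthPresent` global condition key, shown here as a bare `Condition` fragment to attach to an `Allow` statement:

```
"Condition": {
    "Bool": {
        "aws:MultiFactorAuthPresent": "true"
    }
}
```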

For more information about best practices in IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

# Controlling access to a bucket with user policies

This walkthrough explains how user permissions work with Amazon S3. In this example, you create a bucket with folders. You then create AWS Identity and Access Management (IAM) users in your AWS account and grant those users incremental permissions on your Amazon S3 bucket and the folders in it. 

**Topics**
+ [Basics of buckets and folders](#walkthrough-background1)
+ [Walkthrough summary](#walkthrough-scenario)
+ [Preparing for the walkthrough](#walkthrough-what-you-need)
+ [Step 1: Create a bucket](#walkthrough1-create-bucket)
+ [Step 2: Create IAM users and a group](#walkthrough1-add-users)
+ [Step 3: Verify that IAM users have no permissions](#walkthrough1-verify-no-user-permissions)
+ [Step 4: Grant group-level permissions](#walkthrough-group-policy)
+ [Step 5: Grant IAM user Alice specific permissions](#walkthrough-grant-user1-permissions)
+ [Step 6: Grant IAM user Bob specific permissions](#walkthrough1-grant-permissions-step5)
+ [Step 7: Secure the private folder](#walkthrough-secure-private-folder-explicit-deny)
+ [Step 8: Clean up](#walkthrough-cleanup)
+ [Related resources](#RelatedResources-walkthrough1)

## Basics of buckets and folders


The Amazon S3 data model is a flat structure: You create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders, but you can emulate a folder hierarchy. Tools like the Amazon S3 console can present a view of these logical folders and subfolders in your bucket.

Suppose that a bucket named `companybucket` has three folders, `Private`, `Development`, and `Finance`, and an object, `s3-dg.pdf`. The console uses the object names (keys) to create a logical hierarchy with folders and subfolders. Consider the following examples:
+ When you create the `Development` folder, the console creates an object with the key `Development/`. Note the trailing slash (`/`) delimiter.
+ When you upload an object named `Projects1.xls` in the `Development` folder, the console uploads the object and gives it the key `Development/Projects1.xls`. 

  In the key, `Development` is the [prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) and `/` is the delimiter. The Amazon S3 API supports prefixes and delimiters in its operations. For example, you can get a list of all objects from a bucket with a specific prefix and delimiter. On the console, when you open the `Development` folder, the console lists the objects in that folder.

  When the console lists the `Development` folder in the `companybucket` bucket, it sends Amazon S3 a request that specifies a prefix of `Development` and a delimiter of `/`. The console's response looks just like a folder list in your computer's file system: in this example, it shows that the bucket has one object in the folder, with the key `Development/Projects1.xls`.

The console uses object keys to infer a logical hierarchy. Amazon S3 itself has no physical hierarchy; it has only buckets that contain objects in a flat structure. When you create objects by using the Amazon S3 API, you can use object keys that imply a logical hierarchy, and you can then manage access to individual folders, as this walkthrough demonstrates.

Before you start, be sure that you are familiar with the concept of the *root-level* bucket content. Suppose that your `companybucket` bucket has the following objects:
+ `Private/privDoc1.txt`
+ `Private/privDoc2.zip`
+ `Development/project1.xls`
+ `Development/project2.xls`
+ `Finance/Tax2011/document1.pdf`
+ `Finance/Tax2011/document2.pdf`
+ `s3-dg.pdf`

These object keys create a logical hierarchy with `Private`, `Development`, and `Finance` as root-level folders and `s3-dg.pdf` as a root-level object. When you choose the bucket name on the Amazon S3 console, the root-level items appear: the console shows the top-level prefixes (`Private/`, `Development/`, and `Finance/`) as root-level folders. The object key `s3-dg.pdf` has no prefix, so it appears as a root-level item.
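The prefix/delimiter grouping that the console performs is easy to sketch in code. The following Python helper (a hypothetical illustration, not part of any AWS SDK) mimics how a delimiter-based listing splits keys into root-level folders and root-level objects:

```python
def list_root_level(keys, delimiter="/"):
    """Group object keys the way an S3 prefix/delimiter listing does.

    Returns (common_prefixes, root_objects): keys that contain the
    delimiter are rolled up into their first path segment, and keys
    without it are returned as root-level objects.
    """
    prefixes, objects = set(), []
    for key in keys:
        if delimiter in key:
            # Roll the key up into its top-level "folder".
            prefixes.add(key.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(prefixes), objects

keys = [
    "Private/privDoc1.txt",
    "Private/privDoc2.zip",
    "Development/project1.xls",
    "Development/project2.xls",
    "Finance/Tax2011/document1.pdf",
    "Finance/Tax2011/document2.pdf",
    "s3-dg.pdf",
]
folders, root_objects = list_root_level(keys)
print(folders)       # ['Development/', 'Finance/', 'Private/']
print(root_objects)  # ['s3-dg.pdf']
```

Note that `Finance/Tax2011/document1.pdf` rolls up into `Finance/` at the root level; opening that folder on the console repeats the same grouping with `Finance/` as the prefix.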



## Walkthrough summary


In this walkthrough, you create a bucket with three folders (`Private`, `Development`, and `Finance`) in it. 

You have two users, Alice and Bob. You want Alice to access only the `Development` folder, and you want Bob to access only the `Finance` folder. You want to keep the `Private` folder content private. In the walkthrough, you manage access by creating IAM users (the example uses the usernames Alice and Bob) and granting them the necessary permissions. 

IAM also supports creating user groups and granting group-level permissions that apply to all users in the group. This helps you better manage permissions. For this exercise, both Alice and Bob need some common permissions. So you also create a group named `Consultants` and then add both Alice and Bob to the group. You first grant permissions by attaching a group policy to the group. Then you add user-specific permissions by attaching policies to specific users.

**Note**  
The walkthrough uses `companybucket` as the bucket name, Alice and Bob as the IAM users, and `Consultants` as the group name. Because Amazon S3 requires that bucket names be globally unique, you must replace the bucket name with a name that you create.

## Preparing for the walkthrough


In this example, you use your AWS account credentials to create IAM users. Initially, these users have no permissions. You incrementally grant these users permissions to perform specific Amazon S3 actions. To test these permissions, you sign in to the console with each user's credentials. As you incrementally grant permissions as the AWS account owner and test them as an IAM user, you need to sign in and out, each time using different credentials. You can do this testing with one browser, but the process goes faster if you use two different browsers: use one browser to connect to the AWS Management Console with your AWS account credentials and another browser to connect with the IAM user credentials. 

To sign in to the AWS Management Console with your AWS account credentials, go to [https://console.aws.amazon.com/](https://console.aws.amazon.com/). An IAM user can't sign in by using the same link; IAM users must use an IAM-enabled sign-in page. As the account owner, you can provide this link to your users. 

For more information about IAM, see [The AWS Management Console Sign-in Page](https://docs.aws.amazon.com/IAM/latest/UserGuide/console.html) in the *IAM User Guide*.

### To provide a sign-in link for IAM users


1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the **Navigation** pane, choose **IAM Dashboard**.

1. Note the URL under **IAM users sign-in link**. You will give this link to IAM users so that they can sign in to the console with their IAM user name and password.

## Step 1: Create a bucket


In this step, you sign in to the Amazon S3 console with your AWS account credentials, create a bucket, add folders to the bucket, and upload one or two sample documents in each folder. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket. 

   For step-by-step instructions, see [Creating a general purpose bucket](create-bucket-overview.md).

1. Upload one document to the bucket.

   This exercise assumes that you have the `s3-dg.pdf` document at the root level of this bucket. If you upload a different document, substitute its file name for `s3-dg.pdf`.

1. Add three folders named `Private`, `Finance`, and `Development` to the bucket.

   For step-by-step instructions to create a folder, see [Organizing objects in the Amazon S3 console by using folders](using-folders.md) in the *Amazon Simple Storage Service User Guide*.

1. Upload one or two documents to each folder. 

   For this exercise, assume that you have uploaded a couple of documents in each folder, resulting in the bucket having objects with the following keys:
   + `Private/privDoc1.txt`
   + `Private/privDoc2.zip`
   + `Development/project1.xls`
   + `Development/project2.xls`
   + `Finance/Tax2011/document1.pdf`
   + `Finance/Tax2011/document2.pdf`
   + `s3-dg.pdf`

   

   For step-by-step instructions, see [Uploading objects](upload-objects.md). 

## Step 2: Create IAM users and a group


Now use the [IAM Console](https://console.aws.amazon.com/iam/) to add two IAM users, Alice and Bob, to your AWS account. For step-by-step instructions, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

Also create an administrative group named `Consultants`. Then add both users to the group. For step-by-step instructions, see [Creating IAM user groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_create.html). 

**Warning**  
When you add users and a group, do not attach any policies that grant permissions to these users. At first, these users don't have any permissions. In the following sections, you grant permissions incrementally. However, make sure that you have assigned passwords to these IAM users, because you use their credentials to test Amazon S3 actions and verify that the permissions work as expected.

For step-by-step instructions for creating a new IAM user, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. When you create the users for this walkthrough, select **AWS Management Console access** and clear [programmatic access](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).

For step-by-step instructions for creating an administrative group, see [Creating Your First IAM Admin User and Group](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html) in the *IAM User Guide*.



## Step 3: Verify that IAM users have no permissions


If you are using two browsers, you can now use the second browser to sign in to the console using one of the IAM user credentials.

1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console using either of the IAM user credentials.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

    Verify that the console displays a message telling you that access is denied. 

Now, you can begin granting incremental permissions to the users. First, you attach a group policy that grants permissions that both users must have. 

## Step 4: Grant group-level permissions
Step 4: Attach group policy

You want the users to be able to do the following:
+ List all buckets owned by the parent account. To do so, Bob and Alice must have permission for the `s3:ListAllMyBuckets` action.
+ List root-level items, folders, and objects in the `companybucket` bucket. To do so, Bob and Alice must have permission for the `s3:ListBucket` action on the `companybucket` bucket.

First, you create a policy that grants these permissions, and then you attach it to the `Consultants` group. 

### Step 4.1: Grant permission to list all buckets


In this step, you create a managed policy that grants the users minimum permissions to enable them to list all buckets owned by the parent account. Then you attach the policy to the `Consultants` group. When you attach the managed policy to a user or a group, you grant the user or group permission to obtain a list of buckets owned by the parent AWS account.

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).
**Note**  
Because you are granting user permissions, sign in using your AWS account credentials, not as an IAM user.

1. Create the managed policy.

   1. In the navigation pane on the left, choose **Policies**, and then choose **Create Policy**.

   1. Choose the **JSON** tab.

   1. Copy the following access policy and paste it into the policy text field.

------
#### [ JSON ]

****  

      ```
      {
        "Version":"2012-10-17",
        "Statement": [
          {
            "Sid": "AllowGroupToSeeBucketListInTheConsole",
            "Action": ["s3:ListAllMyBuckets"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::*"]
          }
        ]
      }
      ```

------

      A policy is a JSON document. In the document, a `Statement` is an array of objects, each describing a permission by using a collection of name-value pairs. The preceding policy describes one specific permission. The `Action` element specifies the type of access. In the policy, `s3:ListAllMyBuckets` is a predefined Amazon S3 action. This action covers the Amazon S3 `GET Service` operation, which returns a list of all buckets owned by the authenticated sender. The `Effect` element value determines whether the specified permission is allowed or denied.

   1. Choose **Review Policy**. On the next page, enter `AllowGroupToSeeBucketListInTheConsole` in the **Name** field, and then choose **Create policy**.
**Note**  
The **Summary** entry displays a message stating that the policy does not grant any permissions. For this walkthrough, you can safely ignore this message.

1. Attach the `AllowGroupToSeeBucketListInTheConsole` managed policy that you created to the `Consultants` group.

   For step-by-step instructions for attaching a managed policy, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#attach-managed-policy-console) in the *IAM User Guide*. 

   You attach policy documents to IAM users and groups in the IAM console. Because you want both users to be able to list the buckets, you attach the policy to the group. 

1. Test the permission.

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the console using either of the IAM users' credentials.

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

      The console should now list all the buckets but not the objects in any of the buckets.
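The evaluation behavior that this step demonstrates, where an attached `Allow` statement grants an action and everything else remains implicitly denied, can be sketched as a toy evaluator. This is a simplified illustration, not the real IAM evaluation logic; it matches only action names and ignores resources, conditions, and principals:

```python
import fnmatch

def evaluate(action, statements):
    """Toy IAM-style check for one action: an explicit Deny wins over
    any Allow, and with no matching statement the result is an
    implicit deny (the default for IAM users)."""
    result = "ImplicitDeny"
    for stmt in statements:
        if any(fnmatch.fnmatch(action, pattern) for pattern in stmt["Action"]):
            if stmt["Effect"] == "Deny":
                return "Deny"  # explicit deny always wins
            result = "Allow"
    return result

# The managed policy attached to the Consultants group above.
policy = [
    {
        "Sid": "AllowGroupToSeeBucketListInTheConsole",
        "Action": ["s3:ListAllMyBuckets"],
        "Effect": "Allow",
        "Resource": ["arn:aws:s3:::*"],
    }
]

print(evaluate("s3:ListAllMyBuckets", policy))  # Allow
print(evaluate("s3:GetObject", policy))         # ImplicitDeny
```

This sketch shows why the console can list the buckets but still can't list any objects: `s3:ListBucket` hasn't been allowed yet.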

### Step 4.2: Enable users to list root-level content of a bucket


Next, you allow all users in the `Consultants` group to list the root-level items in the `companybucket` bucket. When a user chooses the company bucket in the Amazon S3 console, the user can see the root-level items in the bucket.

**Note**  
This example uses `companybucket` for illustration. You must use the name of the bucket that you created.

To understand the request that the console sends to Amazon S3 when you choose a bucket name, the response that Amazon S3 returns, and how the console interprets the response, examine the flow a little more closely.

When you choose a bucket name, the console sends the [GET Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) request to Amazon S3. This request includes the following parameters:
+ The `prefix` parameter with an empty string as its value. 
+ The `delimiter` parameter with `/` as its value. 

The following is an example request.

```
GET ?prefix=&delimiter=/ HTTP/1.1 
Host: companybucket.s3.amazonaws.com
Date: Wed, 01 Aug  2012 12:00:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:xQE0diMbLRepdf3YB+FIEXAMPLE=
```

Amazon S3 returns a response that includes the following `ListBucketResult` element.

```
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>companybucket</Name>
  <Prefix></Prefix>
  <Delimiter>/</Delimiter>
   ...
  <Contents>
    <Key>s3-dg.pdf</Key>
    ...
  </Contents>
  <CommonPrefixes>
    <Prefix>Development/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>Finance/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>Private/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
```

The key of the `s3-dg.pdf` object does not contain the slash (`/`) delimiter, so Amazon S3 returns the key in a `<Contents>` element. However, all of the other keys in the example bucket contain the `/` delimiter. Amazon S3 groups these keys and returns one `<CommonPrefixes>` element for each of the distinct prefix values `Development/`, `Finance/`, and `Private/`. A common prefix is the substring that runs from the beginning of the key to the first occurrence of the specified `/` delimiter. 

The console interprets this result and displays the root-level items as three folders and one object key. 
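You can reproduce this grouping locally. The following sketch (using the object keys from Step 1) approximates how the List Objects operation splits a flat key list into `Contents` and `CommonPrefixes` for a given `prefix` and `delimiter`:

```python
def list_objects(keys, prefix="", delimiter="/"):
    """Mimic how a ListObjects request groups keys into Contents
    and CommonPrefixes for the given prefix and delimiter."""
    contents, common_prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Roll keys up to the first delimiter after the prefix.
            common_prefixes.add(prefix + rest.split(delimiter)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common_prefixes)

keys = [
    "Private/privDoc1.txt", "Private/privDoc2.zip",
    "Development/project1.xls", "Development/project2.xls",
    "Finance/Tax2011/document1.pdf", "Finance/Tax2011/document2.pdf",
    "s3-dg.pdf",
]

# Root-level listing: one object plus three common prefixes.
print(list_objects(keys))
# Listing inside the Development folder.
print(list_objects(keys, prefix="Development/"))
```

The first call mirrors the root-level listing shown above; the second mirrors what the console requests when a user opens the `Development` folder.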

If Bob or Alice opens the **Development** folder, the console sends the [GET Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) request to Amazon S3 with the `prefix` and the `delimiter` parameters set to the following values:
+ The `prefix` parameter with `Development/` as its value.
+ The `delimiter` parameter with `/` as its value. 

In response, Amazon S3 returns the object keys that start with the specified prefix. 

```
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>companybucket</Name>
  <Prefix>Development/</Prefix>
  <Delimiter>/</Delimiter>
   ...
  <Contents>
    <Key>Development/project1.xls</Key>
    ...
  </Contents>
  <Contents>
    <Key>Development/project2.xls</Key>
    ...
  </Contents> 
</ListBucketResult>
```

The console shows the object keys.

Now, return to granting users permission to list the root-level bucket items. To list bucket content, users need permission to call the `s3:ListBucket` action, as shown in the following policy statement. To ensure that they see only the root-level content, you add a condition that users must specify an empty `prefix` in the request; that is, they are not allowed to open any of the root-level folders. Finally, you add a condition to require folder-style access by requiring user requests to include the `delimiter` parameter with the value `/`. 

```
{
  "Sid": "AllowRootLevelListingOfCompanyBucket",
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::companybucket"],
  "Condition": {
    "StringEquals": {
      "s3:prefix": [""],
      "s3:delimiter": ["/"]
    }
  }
}
```

When you choose a bucket on the Amazon S3 console, the console first sends the [GET Bucket location](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html) request to find the AWS Region where the bucket is deployed. Then the console uses the Region-specific endpoint for the bucket to send the [GET Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) request. As a result, if users are going to use the console, you must grant permission for the `s3:GetBucketLocation` action as shown in the following policy statement.

```
{
   "Sid": "RequiredByS3Console",
   "Action": ["s3:GetBucketLocation"],
   "Effect": "Allow",
   "Resource": ["arn:aws:s3:::*"]
}
```

**To enable users to list root-level bucket content**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Replace the existing `AllowGroupToSeeBucketListInTheConsole` managed policy that is attached to the `Consultants` group with the following policy, which also allows the `s3:ListBucket` action. Remember to replace *`companybucket`* in the policy `Resource` with the name of your bucket. 

   For step-by-step instructions, see [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*. When following the step-by-step instructions, be sure to follow the steps for applying your changes to all principal entities that the policy is attached to. 

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",
     "Statement": [
        {
          "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
          "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ],
          "Effect": "Allow",
          "Resource": [ "arn:aws:s3:::*"  ]
        },
        {
          "Sid": "AllowRootLevelListingOfCompanyBucket",
          "Action": ["s3:ListBucket"],
          "Effect": "Allow",
          "Resource": ["arn:aws:s3:::companybucket"],
          "Condition": {
            "StringEquals": {
              "s3:prefix": [""],
              "s3:delimiter": ["/"]
            }
          }
        }
     ] 
   }
   ```

------

1. Test the updated permissions.

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console. 

      Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. Choose the bucket that you created, and the console shows the root-level bucket items. If you choose any folders in the bucket, you won't be able to see the folder content because you haven't yet granted those permissions.

This test succeeds when users use the Amazon S3 console. When you choose a bucket on the console, the console implementation sends a request that includes the `prefix` parameter with an empty string as its value and the `delimiter` parameter with "`/`" as its value.
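A `StringEquals` condition of this kind can be modeled as a simple dictionary check. The following sketch is an approximation, not the real IAM condition evaluator; it shows how the condition in the group policy accepts the console's root-level listing but rejects a folder listing:

```python
def string_equals_condition(request_params, condition):
    """Check an IAM-style StringEquals condition block against the
    query parameters of a request (a simplification of IAM's
    evaluation; a missing parameter fails the condition)."""
    return all(
        request_params.get(key) in allowed_values
        for key, allowed_values in condition.items()
    )

# The condition from the AllowRootLevelListingOfCompanyBucket statement.
condition = {"s3:prefix": [""], "s3:delimiter": ["/"]}

# The console's root-level listing satisfies the condition ...
print(string_equals_condition({"s3:prefix": "", "s3:delimiter": "/"}, condition))   # True
# ... but opening a folder sends a non-empty prefix, which fails it.
print(string_equals_condition({"s3:prefix": "Development/", "s3:delimiter": "/"}, condition))  # False
```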

### Step 4.3: Summary of the group policy


The net effect of the group policy that you added is to grant the IAM users Alice and Bob the following minimum permissions:
+ List all buckets owned by the parent account.
+ See root-level items in the `companybucket` bucket. 

However, the users still can't do much. Next, you grant user-specific permissions, as follows:
+ Allow Alice to get and put objects in the `Development` folder.
+ Allow Bob to get and put objects in the `Finance` folder.

For user-specific permissions, you attach a policy to the specific user, not to the group. In the following section, you grant Alice permission to work in the `Development` folder. You can repeat the steps to grant similar permission to Bob to work in the `Finance` folder.

## Step 5: Grant IAM user Alice specific permissions


Now you grant additional permissions to Alice so that she can see the content of the `Development` folder and get and put objects in that folder.

### Step 5.1: Grant IAM user Alice permission to list the development folder content


For Alice to list the `Development` folder content, you must apply a policy to the user Alice that grants permission for the `s3:ListBucket` action on the `companybucket` bucket, provided the request includes the prefix `Development/`. You want this policy to be applied only to the user Alice, so you use an inline policy. For more information about inline policies, see [Managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*.

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Create an inline policy to grant the user Alice permission to list the `Development` folder content.

   1. In the navigation pane on the left, choose **Users**.

   1. Choose the username **Alice**.

   1. On the user details page, choose the **Permissions** tab and then choose **Add inline policy**.

   1. Choose the **JSON** tab.

   1. Copy the following policy, and paste it into the policy text field.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",
          "Statement": [
          {
            "Sid": "AllowListBucketIfSpecificPrefixIsIncludedInRequest",
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::companybucket"],
            "Condition": {"StringLike": {"s3:prefix": ["Development/*"]}}
          }
        ]
      }
      ```

------

   1. Choose **Review Policy**. On the next page, enter a name in the **Name** field, and then choose **Create policy**.

1. Test the change to Alice's permissions:

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console. 

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. On the Amazon S3 console, verify that Alice can see the list of objects in the `Development/` folder in the bucket. 

      When the user opens the `Development` folder to see the list of objects in it, the Amazon S3 console sends the `ListObjects` request to Amazon S3 with the prefix `Development/`. Because the user is granted permission to see the object list with the prefix `Development/` and delimiter `/`, Amazon S3 returns the list of objects with the key prefix `Development/`, and the console displays the list.

### Step 5.2: Grant IAM user Alice permissions to get and put objects in the development folder


For Alice to get and put objects in the `Development` folder, she needs permission to call the `s3:GetObject` and `s3:PutObject` actions. The following policy statement grants these permissions by scoping the `Resource` to objects whose keys begin with `Development/`.

```
{
    "Sid":"AllowUserToReadWriteObjectData",
    "Action":["s3:GetObject", "s3:PutObject"],
    "Effect":"Allow",
    "Resource":["arn:aws:s3:::companybucket/Development/*"]
 }
```
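The trailing wildcard in the `Resource` ARN is what confines these object-level permissions to the `Development` folder. Shell-style pattern matching gives a rough approximation of ARN wildcard semantics and shows which object ARNs the statement covers:

```python
import fnmatch

# The Resource value from the statement above.
resource_pattern = "arn:aws:s3:::companybucket/Development/*"

# An object in the Development folder matches the pattern ...
print(fnmatch.fnmatch(
    "arn:aws:s3:::companybucket/Development/project1.xls", resource_pattern))  # True
# ... but an object in another folder does not.
print(fnmatch.fnmatch(
    "arn:aws:s3:::companybucket/Finance/Tax2011/document1.pdf", resource_pattern))  # False
```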



1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Edit the inline policy that you created in the previous step. 

   1. In the navigation pane on the left, choose **Users**.

   1. Choose the username **Alice**.

   1. On the user details page, choose the **Permissions** tab and expand the **Inline Policies** section.

   1. Next to the name of the policy that you created in the previous step, choose **Edit Policy**.

   1. Copy the following policy and paste it into the policy text field, replacing the existing policy.

------
#### [ JSON ]

****  

      ```
      {
           "Version":"2012-10-17",
           "Statement":[
            {
               "Sid":"AllowListBucketIfSpecificPrefixIsIncludedInRequest",
               "Action":["s3:ListBucket"],
               "Effect":"Allow",
               "Resource":["arn:aws:s3:::companybucket"],
               "Condition": {
                  "StringLike": {"s3:prefix": ["Development/*"]}
               }
            },
            {
              "Sid":"AllowUserToReadWriteObjectDataInDevelopmentFolder", 
              "Action":["s3:GetObject", "s3:PutObject"],
              "Effect":"Allow",
              "Resource":["arn:aws:s3:::companybucket/Development/*"]
            }
         ]
      }
      ```

------

1. Test the updated policy:

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console. 

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. On the Amazon S3 console, verify that Alice can now add an object and download an object in the `Development` folder. 

### Step 5.3: Explicitly deny IAM user Alice permissions to any other folders in the bucket


User Alice can now list the root-level content in the `companybucket` bucket. She can also get and put objects in the `Development` folder. If you really want to tighten the access permissions, you could explicitly deny Alice access to any other folders in the bucket. If there is any other policy (bucket policy or ACL) that grants Alice access to any other folders in the bucket, this explicit deny overrides those permissions. 

You can add the following statement to user Alice's policy. It requires all `ListBucket` requests that Alice sends to Amazon S3 to include the `prefix` parameter, whose value must either match `Development/*` or be an empty string. 



```
{
   "Sid": "ExplicitlyDenyAnyRequestsForAllOtherFoldersExceptDevelopment",
   "Action": ["s3:ListBucket"],
   "Effect": "Deny",
   "Resource": ["arn:aws:s3:::companybucket"],
   "Condition": {
      "StringNotLike": {"s3:prefix": ["Development/*", ""]},
      "Null": {"s3:prefix": false}
   }
}
```

There are two conditional expressions in the `Condition` block. The results of these conditional expressions are combined by using a logical `AND`: if both conditions are true, the result of the combined condition is true. Because the `Effect` in this statement is `Deny`, when the `Condition` evaluates to true, the user can't perform the specified `Action`.
+ The `Null` conditional expression evaluates to true when the request includes the `prefix` parameter, which requires folder-like access. 

  If you send a request without the `prefix` parameter, Amazon S3 returns all of the object keys.
+ The `StringNotLike` conditional expression evaluates to true when the value of the `prefix` parameter is neither an empty string nor a match for `Development/*`. 

You must allow an empty string as a value of the `prefix` parameter. As discussed in [Step 4.2: Enable users to list root-level content of a bucket](#walkthrough1-grant-permissions-step2), an empty prefix is what allows Alice to retrieve the root-level bucket items the same way that the console does.

Follow the steps in the preceding section and again update the inline policy that you created for user Alice.

Copy the following policy and paste it into the policy text field, replacing the existing policy.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowListBucketIfSpecificPrefixIsIncludedInRequest",
         "Action":["s3:ListBucket"],
         "Effect":"Allow",
         "Resource":["arn:aws:s3:::companybucket"],
         "Condition": {
            "StringLike": {"s3:prefix": ["Development/*"]}
         }
      },
      {
        "Sid":"AllowUserToReadWriteObjectDataInDevelopmentFolder", 
        "Action":["s3:GetObject", "s3:PutObject"],
        "Effect":"Allow",
        "Resource":["arn:aws:s3:::companybucket/Development/*"]
      },
      {
         "Sid": "ExplicitlyDenyAnyRequestsForAllOtherFoldersExceptDevelopment",
         "Action": ["s3:ListBucket"],
         "Effect": "Deny",
         "Resource": ["arn:aws:s3:::companybucket"],
         "Condition": {
            "StringNotLike": {"s3:prefix": ["Development/*", ""]},
            "Null": {"s3:prefix": false}
         }
      }
   ]
}
```

------

## Step 6: Grant IAM user Bob specific permissions


Now you want to grant Bob permission to the `Finance` folder. Follow the steps that you used earlier to grant permissions to Alice, but replace the `Development` folder with the `Finance` folder. For step-by-step instructions, see [Step 5: Grant IAM user Alice specific permissions](#walkthrough-grant-user1-permissions). 

## Step 7: Secure the private folder


In this example, you have only two users. You granted the minimum required permissions at the group level, and you granted permissions at the individual user level only where they were really needed. This approach helps minimize the effort of managing permissions. As the number of users increases, managing permissions can become cumbersome. For example, you don't want any of the users in this example to access the content of the `Private` folder. How do you ensure that you don't accidentally grant a user permission to the `Private` folder? You add a policy that explicitly denies access to the folder. An explicit deny overrides any other permissions. 

To ensure that the `Private` folder remains private, you can add the following two deny statements to the group policy:
+ Add the following statement to explicitly deny any action on resources in the `Private` folder (`companybucket/Private/*`).

  ```
  {
    "Sid": "ExplictDenyAccessToPrivateFolderToEveryoneInTheGroup",
    "Action": ["s3:*"],
    "Effect": "Deny",
    "Resource":["arn:aws:s3:::companybucket/Private/*"]
  }
  ```
+ You also deny permission for the list objects action when the request specifies the `Private/` prefix. On the console, if Bob or Alice opens the `Private` folder, this policy causes Amazon S3 to return an error response.

  ```
  {
    "Sid": "DenyListBucketOnPrivateFolder",
    "Action": ["s3:ListBucket"],
    "Effect": "Deny",
    "Resource": ["arn:aws:s3:::*"],
    "Condition":{
        "StringLike":{"s3:prefix":["Private/"]}
     }
  }
  ```

Replace the `Consultants` group policy with an updated policy that includes the preceding deny statements. After the updated policy is applied, none of the users in the group can access the `Private` folder in your bucket. 
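The way an explicit deny interacts with allows can be sketched as a small simulation. This is a simplification of IAM's evaluation logic, and the broad `Allow` statement below is hypothetical, included only to demonstrate that the explicit `Deny` on the `Private` folder overrides it:

```python
import fnmatch

def allowed(action, resource, statements):
    """Simplified evaluation: an explicit Deny on a matching action
    and resource overrides any Allow; with no match at all, access
    is implicitly denied."""
    decision = False
    for stmt in statements:
        action_match = any(fnmatch.fnmatch(action, a) for a in stmt["Action"])
        resource_match = any(fnmatch.fnmatch(resource, r) for r in stmt["Resource"])
        if action_match and resource_match:
            if stmt["Effect"] == "Deny":
                return False  # explicit deny always wins
            decision = True
    return decision

statements = [
    # Hypothetical broad allow, for illustration only.
    {"Effect": "Allow", "Action": ["s3:*"],
     "Resource": ["arn:aws:s3:::companybucket/*"]},
    # The explicit deny on the Private folder from the group policy.
    {"Effect": "Deny", "Action": ["s3:*"],
     "Resource": ["arn:aws:s3:::companybucket/Private/*"]},
]

print(allowed("s3:GetObject",
              "arn:aws:s3:::companybucket/Development/project1.xls", statements))  # True
print(allowed("s3:GetObject",
              "arn:aws:s3:::companybucket/Private/privDoc1.txt", statements))      # False
```

Even though the first statement allows every S3 action on every object in the bucket, the explicit deny still blocks access to anything under `Private/`.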

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Replace the existing `AllowGroupToSeeBucketListInTheConsole` managed policy that is attached to the `Consultants` group with the following policy. Remember to replace *`companybucket`* in the policy with the name of your bucket. 

   For instructions, see [Editing customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html#edit-managed-policy-console) in the *IAM User Guide*. When following the instructions, make sure to follow the directions for applying your changes to all principal entities that the policy is attached to. 

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",
     "Statement": [
       {
         "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
         "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
         "Effect": "Allow",
         "Resource": ["arn:aws:s3:::*"]
       },
       {
         "Sid": "AllowRootLevelListingOfCompanyBucket",
         "Action": ["s3:ListBucket"],
         "Effect": "Allow",
         "Resource": ["arn:aws:s3:::companybucket"],
         "Condition":{
             "StringEquals":{"s3:prefix":[""]}
          }
       },
       {
         "Sid": "RequireFolderStyleList",
         "Action": ["s3:ListBucket"],
         "Effect": "Deny",
         "Resource": ["arn:aws:s3:::*"],
         "Condition":{
             "StringNotEquals":{"s3:delimiter":"/"}
          }
        },
       {
         "Sid": "ExplictDenyAccessToPrivateFolderToEveryoneInTheGroup",
         "Action": ["s3:*"],
         "Effect": "Deny",
         "Resource":["arn:aws:s3:::companybucket/Private/*"]
       },
       {
         "Sid": "DenyListBucketOnPrivateFolder",
         "Action": ["s3:ListBucket"],
         "Effect": "Deny",
         "Resource": ["arn:aws:s3:::*"],
         "Condition":{
             "StringLike":{"s3:prefix":["Private/"]}
          }
       }
     ]
   }
   ```

------



## Step 8: Clean up


To clean up, open the [IAM Console](https://console.aws.amazon.com/iam/) and remove the users Alice and Bob. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*.

To ensure that you aren't charged further for storage, you should also delete the objects and the bucket that you created for this exercise.

## Related resources

+ [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*

# Identity-based policy examples for Amazon S3
Identity-based policy examples

This section shows several example AWS Identity and Access Management (IAM) identity-based policies for controlling access to Amazon S3. For example *bucket policies* (resource-based policies), see [Bucket policies for Amazon S3](bucket-policies.md). For information about IAM policy language, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

The following example policies will work if you use them programmatically. However, to use them with the Amazon S3 console, you must grant additional permissions that are required by the console. For information about using policies such as these with the Amazon S3 console, see [Controlling access to a bucket with user policies](walkthrough1.md). 

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Topics**
+ [Allowing an IAM user access to one of your buckets](#iam-policy-ex0)
+ [Allowing each IAM user access to a folder in a bucket](#iam-policy-ex1)
+ [Allowing a group to have a shared folder in Amazon S3](#iam-policy-ex2)
+ [Allowing all your users to read objects in a portion of a bucket](#iam-policy-ex3)
+ [Allowing a partner to drop files into a specific portion of a bucket](#iam-policy-ex4)
+ [Restricting access to Amazon S3 buckets within a specific AWS account](#iam-policy-ex6)
+ [Restricting access to Amazon S3 buckets within your organizational unit](#iam-policy-ex7)
+ [Restricting access to Amazon S3 buckets within your organization](#iam-policy-ex8)
+ [Granting permission to retrieve the PublicAccessBlock configuration for an AWS account](#using-with-s3-actions-related-to-accountss)
+ [Restricting bucket creation to one Region](#condition-key-bucket-ops-1)

## Allowing an IAM user access to one of your buckets


In this example, you want to grant an IAM user in your AWS account access to one of your buckets, *amzn-s3-demo-bucket1*, and allow the user to add, update, and delete objects. 

In addition to granting the `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject` permissions to the user, the policy also grants the `s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket` permissions. These are the additional permissions required by the console. Also, the `s3:PutObjectAcl` and the `s3:GetObjectAcl` actions are required to be able to copy, cut, and paste objects in the console. For an example walkthrough that grants permissions to users and tests them using the console, see [Controlling access to a bucket with user policies](walkthrough1.md). 

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action": "s3:ListAllMyBuckets",
         "Resource":"*"
      },
      {
         "Effect":"Allow",
         "Action":["s3:ListBucket","s3:GetBucketLocation"],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:GetObject",
            "s3:GetObjectAcl",
            "s3:DeleteObject"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/*"
      }
   ]
}
```

------

## Allowing each IAM user access to a folder in a bucket


In this example, you want two IAM users, Mary and Carlos, to have access to your bucket, *amzn-s3-demo-bucket1*, so that they can add, update, and delete objects. However, you want to restrict each user's access to a single prefix (folder) in the bucket. You might create folders with names that match their usernames. 

```
amzn-s3-demo-bucket1
   Mary/
   Carlos/
```

To grant each user access only to their folder, you can write a policy for each user and attach it individually. For example, you can attach the following policy to the user Mary to allow her specific Amazon S3 permissions on the `amzn-s3-demo-bucket1/Mary` folder.

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/Mary/*"
      }
   ]
}
```

------

You can then attach a similar policy to the user Carlos, specifying the folder `Carlos` in the `Resource` value.
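For example, Carlos's policy is identical except that the `Resource` value points to his folder:

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/Carlos/*"
      }
   ]
}
```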

Instead of attaching policies to individual users, you can write a single policy that uses a policy variable and then attach the policy to a group. First, you must create a group and add both Mary and Carlos to the group. The following example policy allows a set of Amazon S3 permissions in the `amzn-s3-demo-bucket1/${aws:username}` folder. When the policy is evaluated, the policy variable `${aws:username}` is replaced by the requester's username. For example, if Mary sends a request to put an object, the operation is allowed only if Mary is uploading the object to the `amzn-s3-demo-bucket1/Mary` folder.

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/${aws:username}/*"
      }
   ]
}
```

------

**Note**  
When using policy variables, you must explicitly specify version `2012-10-17` in the policy. The default version of the IAM policy language, 2008-10-17, does not support policy variables. 

 If you want to test the preceding policy on the Amazon S3 console, the console requires additional permissions, as shown in the following policy. For information about how the console uses these permissions, see [Controlling access to a bucket with user policies](walkthrough1.md). 

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowGroupToSeeBucketListInTheConsole",
         "Action": [
            "s3:ListAllMyBuckets",
            "s3:GetBucketLocation"
         ],
         "Effect": "Allow",
         "Resource": "arn:aws:s3:::*"
      },
      {
         "Sid": "AllowRootLevelListingOfTheBucket",
         "Action": "s3:ListBucket",
         "Effect": "Allow",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1",
         "Condition": {
            "StringEquals": {
               "s3:prefix": [""],
               "s3:delimiter": ["/"]
            }
         }
      },
      {
         "Sid": "AllowListBucketOfASpecificUserPrefix",
         "Action": "s3:ListBucket",
         "Effect": "Allow",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1",
         "Condition": {
            "StringLike": {
               "s3:prefix": ["${aws:username}/*"]
            }
         }
      },
      {
         "Sid": "AllowUserSpecificActionsOnlyInTheSpecificUserPrefix",
         "Effect": "Allow",
         "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/${aws:username}/*"
      }
   ]
}
```

------

**Note**  
In the 2012-10-17 version of the policy, policy variables start with `$`. This change in syntax can potentially create a conflict if your object key (object name) includes a `$`.   
To avoid this conflict, specify the `$` character by using `${$}`. For example, to include the object key `my$file` in a policy, specify it as `my${$}file`.
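For example, a policy statement that grants access to the object `my$file` escapes the `$` in the `Resource` ARN as follows:

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/my${$}file"
```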

Although IAM user names are friendly, human-readable identifiers, they aren't guaranteed to be unique over time. For example, if the user Carlos leaves the organization and another Carlos joins, the new Carlos could access the old Carlos's information.

Instead of using usernames, you could create folders based on IAM user IDs. Each IAM user ID is unique. In this case, you must modify the preceding policy to use the `${aws:userid}` policy variable. For more information about user identifiers, see [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) in the *IAM User Guide*.

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/home/${aws:userid}/*"
      }
   ]
}
```

------

### Allowing non-IAM users (mobile app users) access to folders in a bucket


Suppose that you want to develop a mobile app, a game that stores users' data in an S3 bucket. For each app user, you want to create a folder in your bucket. You also want to limit each user's access to their own folder. But you can't create folders before someone downloads your app and starts playing the game, because you don't have their user ID.

In this case, you can require users to sign in to your app by using public identity providers such as Login with Amazon, Facebook, or Google. After users have signed in to your app through one of these providers, they have a user ID that you can use to create user-specific folders at runtime.

You can then use web identity federation in AWS Security Token Service to integrate information from the identity provider with your app and to get temporary security credentials for each user. You can also create IAM policies that allow the app to access your bucket and perform such operations as creating user-specific folders and uploading data. For more information about web identity federation, see [About web identity federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html) in the *IAM User Guide*.
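As a sketch, after a user signs in through the provider, your app can exchange the provider's token for temporary credentials with the following AWS CLI call (the role ARN, session name, and token file here are placeholder values):

```
aws sts assume-role-with-web-identity \
    --role-arn arn:aws:iam::111122223333:role/GameAppS3Access \
    --role-session-name app-user-1 \
    --web-identity-token file://token.txt
```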

## Allowing a group to have a shared folder in Amazon S3


Attaching the following policy to the group grants everybody in the group access to the following folder in Amazon S3: `amzn-s3-demo-bucket1/share/marketing`. Group members are allowed to access only the specific Amazon S3 permissions shown in the policy and only for objects in the specified folder. 

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/share/marketing/*"
      }
   ]
}
```

------

## Allowing all your users to read objects in a portion of a bucket


In this example, you create a group named `AllUsers`, which contains all the IAM users that are owned by the AWS account. You then attach a policy that gives the group access to `GetObject` and `GetObjectVersion`, but only for objects in the `amzn-s3-demo-bucket1/readonly` folder. 

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject",
            "s3:GetObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/readonly/*"
      }
   ]
}
```

------

## Allowing a partner to drop files into a specific portion of a bucket


In this example, you create a group called `AnyCompany` that represents a partner company. You create an IAM user for the specific person or application at the partner company that needs access, and then you put the user in the group. 

You then attach a policy that gives the group `PutObject` access to the following folder in a bucket:

`amzn-s3-demo-bucket1/uploads/anycompany` 

You want to prevent the `AnyCompany` group from doing anything else with the bucket, so you add a statement that explicitly denies permission to any Amazon S3 actions except `PutObject` on any Amazon S3 resource in the AWS account.

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/uploads/anycompany/*"
      },
      {
         "Effect":"Deny",
         "Action":"s3:*",
         "NotResource":"arn:aws:s3:::amzn-s3-demo-bucket1/uploads/anycompany/*"
      }
   ]
}
```

------

## Restricting access to Amazon S3 buckets within a specific AWS account


If you want to ensure that your Amazon S3 principals are accessing only the resources that are inside of a trusted AWS account, you can restrict access. For example, this [identity-based IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) uses a `Deny` effect to block access to Amazon S3 actions, unless the Amazon S3 resource that's being accessed is in account `222222222222`. To prevent an IAM principal in an AWS account from accessing Amazon S3 objects outside of the account, attach the following IAM policy:

------
#### [ JSON ]

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": [
            "222222222222"
          ]
        }
      }
    }
  ]
}
```

------

**Note**  
This policy doesn't replace your existing IAM access controls, because it doesn't grant any access. Instead, this policy acts as an additional guardrail for your other IAM permissions, regardless of the permissions granted through other IAM policies.

Make sure to replace the account ID `222222222222` in the policy with your own AWS account ID. To apply the policy to multiple accounts while still maintaining this restriction, replace the account ID with the `aws:PrincipalAccount` condition key. This condition requires that the principal and the resource be in the same account.
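For example, a multi-account variant of this guardrail might look like the following sketch, which replaces the hardcoded account ID with the `aws:PrincipalAccount` value:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": "${aws:PrincipalAccount}"
        }
      }
    }
  ]
}
```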

## Restricting access to Amazon S3 buckets within your organizational unit


If you have an [organizational unit (OU)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html) set up in AWS Organizations, you might want to restrict Amazon S3 bucket access to a specific part of your organization. In this example, we use the `aws:ResourceOrgPaths` key to restrict Amazon S3 bucket access to an OU in your organization. For this example, the [OU ID](https://docs.aws.amazon.com/organizations/latest/APIReference/API_OrganizationalUnit.html) is `ou-acroot-exampleou`. Make sure to replace this value in your own policy with your own OU ID.

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "DenyS3AccessOutsideMyBoundary",
         "Effect": "Deny",
         "Action": [
            "s3:*"
         ],
         "Resource": "*",
         "Condition": {
            "ForAllValues:StringNotLike": {
               "aws:ResourceOrgPaths": [
                  "o-acorg/r-acroot/ou-acroot-exampleou/*"
               ]
            }
         }
      }
   ]
}
```

------

**Note**  
This policy doesn't grant any access. Instead, this policy acts as a backstop for your other IAM permissions, preventing your principals from accessing Amazon S3 objects outside of an OU-defined boundary.

The policy denies access to Amazon S3 actions unless the Amazon S3 object that's being accessed is in the `ou-acroot-exampleou` OU in your organization. The [IAM policy condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) requires `aws:ResourceOrgPaths`, a multivalued condition key, to contain any of the listed OU paths. The policy uses the `ForAllValues:StringNotLike` operator to compare the values of `aws:ResourceOrgPaths` to the listed OU paths with wildcard matching.

## Restricting access to Amazon S3 buckets within your organization


To restrict access to Amazon S3 objects within your organization, attach an IAM policy to the root of the organization, applying it to all accounts in your organization. To require your IAM principals to follow this rule, use a [service-control policy (SCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html). If you choose to use an SCP, make sure to thoroughly [test the SCP](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html#scp-warning-testing-effect) before attaching the policy to the root of the organization.

In the following example policy, access is denied to Amazon S3 actions unless the Amazon S3 object that's being accessed is in the same organization as the IAM principal that is accessing it:

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement": [
     {
       "Sid": "DenyS3AccessOutsideMyBoundary",
       "Effect": "Deny",
       "Action": [
         "s3:*"
       ],
       "Resource": "arn:aws:s3:::*/*",
       "Condition": {
         "StringNotEquals": {
           "aws:ResourceOrgID": "${aws:PrincipalOrgID}"
         }
       }
     }
   ]
}
```

------

**Note**  
This policy doesn't grant any access. Instead, this policy acts as a backstop for your other IAM permissions, preventing your principals from accessing any Amazon S3 objects outside of your organization. This policy also applies to Amazon S3 resources that are created after the policy is put into effect.

The [IAM policy condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in this example requires `aws:ResourceOrgID` and `aws:PrincipalOrgID` to be equal to each other. With this requirement, the principal making the request and the resource being accessed must be in the same organization.

## Granting permission to retrieve the PublicAccessBlock configuration for an AWS account


The following example identity-based policy grants the `s3:GetAccountPublicAccessBlock` permission to a user. For these permissions, you set the `Resource` value to `"*"`. For information about resource ARNs, see [Policy resources for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-resources).

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Action":[
            "s3:GetAccountPublicAccessBlock" 
         ],
         "Resource":[
            "*"
         ]
       }
    ]
}
```

------

## Restricting bucket creation to one Region


Suppose that an AWS account administrator wants to grant its user (Dave) permission to create a bucket in the South America (São Paulo) Region only. The account administrator can attach the following user policy granting the `s3:CreateBucket` permission with a condition as shown. The key-value pair in the `Condition` block specifies the `s3:LocationConstraint` key and the `sa-east-1` Region as its value.

**Note**  
In this example, the bucket owner is granting permission to one of its users, so either a bucket policy or a user policy can be used. This example shows a user policy.

For a list of Amazon S3 Regions, see [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*. 

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Action": "s3:CreateBucket",
         "Resource": "arn:aws:s3:::*",
         "Condition": {
             "StringLike": {
                 "s3:LocationConstraint": "sa-east-1"
             }
         }
       }
    ]
}
```

------

**Add explicit deny**  
The preceding policy restricts the user to creating buckets only in the `sa-east-1` Region. However, some other policy might grant this user permission to create buckets in another Region. For example, if the user belongs to a group, the group might have a policy attached to it that allows all users in the group to create buckets in another Region. To ensure that the user can't create buckets in any other Region, you can add an explicit deny statement to the preceding policy.

The `Deny` statement uses the `StringNotLike` condition operator: a `CreateBucket` request is denied if the location constraint is not `sa-east-1`. This explicit deny prevents the user from creating a bucket in any other Region, no matter what other permissions the user is granted. The following policy includes the explicit deny statement.

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Action": "s3:CreateBucket",
         "Resource": "arn:aws:s3:::*",
         "Condition": {
             "StringLike": {
                 "s3:LocationConstraint": "sa-east-1"
             }
         }
       },
      {
         "Sid":"statement2",
         "Effect":"Deny",
         "Action": "s3:CreateBucket",
         "Resource": "arn:aws:s3:::*",
         "Condition": {
             "StringNotLike": {
                 "s3:LocationConstraint": "sa-east-1"
             }
         }
       }
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the policy using the following `create-bucket` AWS CLI command. This example uses the `bucketconfig.txt` file to specify the location constraint. Note the Windows file path. You need to update the bucket name and path as appropriate. You must provide user credentials using the `--profile` parameter. For more information about setting up and using the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

```
aws s3api create-bucket --bucket examplebucket --profile AccountADave --create-bucket-configuration file://c:/Users/someUser/bucketconfig.txt
```

The `bucketconfig.txt` file specifies the configuration as follows.

```
{"LocationConstraint": "sa-east-1"}
```
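To verify the explicit deny, you can change the location constraint in `bucketconfig.txt` to another Region, for example as follows, and run the same command; the request should then fail with an access denied error.

```
{"LocationConstraint": "us-west-2"}
```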

# Walkthroughs that use policies to manage access to your Amazon S3 resources
Walkthroughs using policies

This topic provides the following introductory walkthrough examples for granting access to Amazon S3 resources. These examples use the AWS Management Console to create resources (buckets, objects, users) and grant them permissions. The examples then show you how to verify permissions using the command line tools, so you don't have to write any code. We provide commands using both the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell.
+ [Example 1: Bucket owner granting its users bucket permissions](example-walkthroughs-managing-access-example1.md)

  The IAM users you create in your account have no permissions by default. In this exercise, you grant a user permission to perform bucket and object operations.
+ [Example 2: Bucket owner granting cross-account bucket permissions](example-walkthroughs-managing-access-example2.md)

  In this exercise, a bucket owner, Account A, grants cross-account permissions to another AWS account, Account B. Account B then delegates those permissions to users in its account. 
+ **Managing object permissions when the object and bucket owners are not the same**

  The example scenarios in this case are about a bucket owner granting object permissions to others, but not all objects in the bucket are owned by the bucket owner. What permissions does the bucket owner need, and how can it delegate those permissions?

  The AWS account that creates a bucket is called the *bucket owner*. The owner can grant other AWS accounts permission to upload objects, and the AWS accounts that create objects own them. The bucket owner has no permissions on those objects created by other AWS accounts. If the bucket owner writes a bucket policy granting access to objects, the policy doesn't apply to objects that are owned by other accounts. 

  In this case, the object owner must first grant permissions to the bucket owner using an object ACL. The bucket owner can then delegate those object permissions to others, to users in its own account, or to another AWS account, as illustrated by the following examples.
  + [Example 3: Bucket owner granting permissions to objects it does not own](example-walkthroughs-managing-access-example3.md)

    In this exercise, the bucket owner first gets permissions from the object owner. The bucket owner then delegates those permissions to users in its own account.
  + [Example 4 - Bucket owner granting cross-account permission to objects it does not own](example-walkthroughs-managing-access-example4.md)

    After receiving permissions from the object owner, the bucket owner can't delegate permission to other AWS accounts because cross-account delegation isn't supported (see [Permission delegation](access-policy-language-overview.md#permission-delegation)). Instead, the bucket owner can create an IAM role with permissions to perform specific operations (such as get object) and allow another AWS account to assume that role. Anyone who assumes the role can then access objects. This example shows how a bucket owner can use an IAM role to enable this cross-account delegation. 

## Before you try the example walkthroughs


These examples use the AWS Management Console to create resources and grant permissions. To test permissions, the examples use the command line tools, AWS CLI, and AWS Tools for Windows PowerShell, so you don't need to write any code. To test permissions, you must set up one of these tools. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

In addition, when creating resources, these examples don't use root user credentials of an AWS account. Instead, you create an administrator user in these accounts to perform these tasks. 

### About using an administrator user to create resources and grant permissions


AWS Identity and Access Management (IAM) recommends not using the root user credentials of your AWS account to make requests. Instead, create an IAM user or role, grant them full access, and then use their credentials to make requests. We refer to this as an administrative user or role. For more information, go to [AWS account root user credentials and IAM identities](https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html) in the *AWS General Reference* and [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

All example walkthroughs in this section use the administrator user credentials. If you have not created an administrator user for your AWS account, the topics show you how. 

To sign in to the AWS Management Console using the user credentials, you must use the IAM user sign-in URL. The [IAM console](https://console.aws.amazon.com/iam/) provides this URL for your AWS account. The topics show you how to get the URL.

# Setting up the tools for the walkthroughs
Setting up tools

The introductory examples (see [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md)) use the AWS Management Console to create resources and grant permissions. To test permissions, the examples use the command line tools, AWS Command Line Interface (AWS CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code. To test permissions, you must set up one of these tools. 

**To set up the AWS CLI**

1. Download and configure the AWS CLI. For instructions, see the following topics in the *AWS Command Line Interface User Guide*: 

    [Install or update to the latest version of the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html) 

    [Get started with the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) 

1. Set the default profile. 

   You store user credentials in the AWS CLI config file. Create a default profile in the config file using your AWS account credentials. For instructions on finding and editing your AWS CLI config file, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html).

   ```
   [default]
   aws_access_key_id = access key ID
   aws_secret_access_key = secret access key
   region = us-west-2
   ```

1. Verify the setup by entering the following commands at the command prompt. Neither command provides credentials explicitly, so the credentials of the default profile are used.
   + Try the `help` command.

     ```
     aws help
     ```
   + To get a list of buckets on the configured account, use the `aws s3 ls` command.

     ```
     aws s3 ls
     ```

As you go through the walkthroughs, you will create users and save their credentials in the config file by creating profiles, as the following example shows. These profiles are named `AccountAadmin` and `AccountBadmin`.

```
[profile AccountAadmin]
aws_access_key_id = User AccountAadmin access key ID
aws_secret_access_key = User AccountAadmin secret access key
region = us-west-2

[profile AccountBadmin]
aws_access_key_id = Account B access key ID
aws_secret_access_key = Account B secret access key
region = us-east-1
```

To run a command using these user credentials, you add the `--profile` parameter specifying the profile name. The following AWS CLI command retrieves a listing of objects in *`examplebucket`* and specifies the `AccountBadmin` profile. 

```
aws s3 ls s3://examplebucket --profile AccountBadmin
```

Alternatively, you can configure one set of user credentials as the default profile by setting the `AWS_DEFAULT_PROFILE` environment variable from the command prompt. After you've done this, whenever you run AWS CLI commands without the `--profile` parameter, the AWS CLI uses that profile as the default.

```
$ export AWS_DEFAULT_PROFILE=AccountAadmin
```

**To set up AWS Tools for Windows PowerShell**

1. Download and configure the AWS Tools for Windows PowerShell. For instructions, go to [Installing the AWS Tools for Windows PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html#pstools-installing-download) in the *AWS Tools for PowerShell User Guide*. 
**Note**  
To load the AWS Tools for Windows PowerShell module, you must enable PowerShell script execution. For more information, see [Enable Script Execution](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html#enable-script-execution) in the *AWS Tools for PowerShell User Guide*.

1. For these walkthroughs, you specify AWS credentials per session by using the `Set-AWSCredentials` command. The command saves the credentials to a persistent store (the `-StoreAs` parameter).

   ```
   Set-AWSCredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -StoreAs string
   ```

1. Verify the setup.
   + To retrieve a list of available commands that you can use for Amazon S3 operations, run the `Get-Command` command. 

     ```
     Get-Command -Module AWSPowerShell -Noun S3*
     ```
   + To retrieve a list of objects in a bucket, run the `Get-S3Object` command.

     ```
     Get-S3Object -BucketName bucketname -StoredCredentials string
     ```

For a list of commands, see [AWS Tools for PowerShell Cmdlet Reference](https://docs.aws.amazon.com/powershell/latest/reference/Index.html). 

Now you're ready to try the walkthroughs. Follow the links provided at the beginning of each section.

# Example 1: Bucket owner granting its users bucket permissions
Granting permissions

**Important**  
Granting permissions to IAM roles is a better practice than granting permissions to individual users. For more information about how to grant permissions to IAM roles, see [Understanding cross-account permissions and using IAM roles](example-walkthroughs-managing-access-example4.md#access-policies-walkthrough-example4-overview).

**Topics**
+ [Preparing for the walkthrough](#grant-permissions-to-user-in-your-account-step0)
+ [Step 1: Create resources in Account A and grant permissions](#grant-permissions-to-user-in-your-account-step1)
+ [Step 2: Test permissions](#grant-permissions-to-user-in-your-account-test)

In this walkthrough, an AWS account owns a bucket, and the account has an IAM user. By default, the user has no permissions. For the user to perform any tasks, the parent account must grant them permissions. The bucket owner and parent account are the same. Therefore, to grant the user permissions on the bucket, the AWS account can use a bucket policy, a user policy, or both. The account owner will grant some permissions using a bucket policy and other permissions using a user policy.

The following steps summarize the walkthrough:

![\[Diagram showing an AWS account granting permissions.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex1.png)


1. The account administrator creates a bucket policy granting a set of permissions to the user.

1. The account administrator attaches a user policy to the user granting additional permissions.

1. The user then tests the permissions granted through both the bucket policy and the user policy.

For this example, you will need an AWS account. Instead of using the root user credentials of the account, you will create an administrator user (see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials)). We refer to the AWS account and the administrator user as shown in the following table.


| Account ID | Account referred to as | Administrator user in the account | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 

**Note**  
The administrator user in this example is named **AccountAadmin** (for Account A), not **AccountAdmin**.

All the tasks of creating users and granting permissions are done in the AWS Management Console. To verify permissions, the walkthrough uses the command line tools, AWS Command Line Interface (AWS CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code.

## Preparing for the walkthrough


1. Make sure you have an AWS account and that it has a user with administrator privileges.

   1. Sign up for an AWS account, if needed. We refer to this account as Account A.

      1.  Go to [https://aws.amazon.com/s3](https://aws.amazon.com/s3) and choose **Create an AWS account**. 

      1. Follow the on-screen instructions.

         AWS will notify you by email when your account is active and available for you to use.

   1. In Account A, create an administrator user **AccountAadmin**. Using Account A credentials, sign in to the [IAM console](https://console.aws.amazon.com/iam/home?#home) and do the following: 

      1. Create user **AccountAadmin** and note the user security credentials. 

         For instructions, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 

      1. Grant administrator privileges to **AccountAadmin** by attaching a user policy giving full access. 

         For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 

      1. Note the **IAM user Sign-In URL** for **AccountAadmin**. You will need to use this URL when signing in to the AWS Management Console. For more information about where to find the sign-in URL, see [Sign in to the AWS Management Console as an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*.

1. Set up either the AWS CLI or the AWS Tools for Windows PowerShell. Make sure that you save administrator user credentials as follows:
   + If using the AWS CLI, create a profile, `AccountAadmin`, in the config file.
   + If using the AWS Tools for Windows PowerShell, make sure you store credentials for the session as `AccountAadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

## Step 1: Create resources in Account A and grant permissions


Using the credentials of user `AccountAadmin` in Account A and the IAM user sign-in URL, sign in to the AWS Management Console and do the following:

1. Create a bucket and an IAM user.

   1. In the Amazon S3 console, create a bucket. Note the AWS Region in which you created the bucket. For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

   1. In the [IAM Console](https://console.aws.amazon.com/iam/), do the following: 

      1. Create a user named Dave.

         For step-by-step instructions, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

      1. Note user Dave's security credentials.

      1. Note the Amazon Resource Name (ARN) for user Dave. In the [IAM Console](https://console.aws.amazon.com/iam/), select the user, and the **Summary** tab provides the user ARN.

1. Grant permissions. 

   Because the bucket owner and the parent account to which the user belongs are the same, the AWS account can grant user permissions using a bucket policy, a user policy, or both. In this example, you do both. If the object is also owned by the same account, the bucket owner can grant object permissions in the bucket policy (or an IAM policy).

   1. In the Amazon S3 console, attach the following bucket policy to *awsexamplebucket1*. 

      The policy has two statements. 
      + The first statement grants Dave the bucket operation permissions `s3:GetBucketLocation` and `s3:ListBucket`.
      + The second statement grants the `s3:GetObject` permission. Because Account A also owns the object, the account administrator is able to grant the `s3:GetObject` permission. 

      In the `Principal` element, Dave is identified by his user ARN. For more information about policy elements, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

------
#### [ JSON ]


      ```
      {
          "Version":"2012-10-17",
          "Statement": [
              {
                  "Sid": "statement1",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::111122223333:user/Dave"
                  },
                  "Action": [
                      "s3:GetBucketLocation",
                      "s3:ListBucket"
                  ],
                  "Resource": [
                      "arn:aws:s3:::awsexamplebucket1"
                  ]
              },
              {
                  "Sid": "statement2",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::111122223333:user/Dave"
                  },
                  "Action": [
                      "s3:GetObject"
                  ],
                  "Resource": [
                      "arn:aws:s3:::awsexamplebucket1/*"
                  ]
              }
          ]
      }
      ```

------

   1. Create an inline policy for the user Dave by using the following policy. The policy grants Dave the `s3:PutObject` permission. You need to update the policy by providing your bucket name.

------
#### [ JSON ]


      ```
      {
         "Version":"2012-10-17",
         "Statement": [
            {
               "Sid": "PermissionForObjectOperations",
               "Effect": "Allow",
               "Action": [
                  "s3:PutObject"
               ],
               "Resource": [
                  "arn:aws:s3:::awsexamplebucket1/*"
               ]
            }
         ]
      }
      ```

------

      For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_inline-using.html) in the *IAM User Guide*. Note that you need to sign in to the console using Account A credentials.

## Step 2: Test permissions


Using Dave's credentials, verify that the permissions work. You can use either of the following two procedures.

**Test permissions using the AWS CLI**

1. Update the AWS CLI config file by adding the following `UserDaveAccountA` profile. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).

   ```
   [profile UserDaveAccountA]
   aws_access_key_id = access-key
   aws_secret_access_key = secret-access-key
   region = us-east-1
   ```

1. Verify that Dave can perform the operations as granted in the user policy. Upload a sample object using the following AWS CLI `put-object` command. 

   The `--body` parameter in the command identifies the source file to upload. For example, if the file is in the root of the C: drive on a Windows machine, you specify `c:\HappyFace.jpg`. The `--key` parameter provides the key name for the object.

   ```
   aws s3api put-object --bucket awsexamplebucket1 --key HappyFace.jpg --body HappyFace.jpg --profile UserDaveAccountA
   ```

   Run the following AWS CLI command to get the object. 

   ```
   aws s3api get-object --bucket awsexamplebucket1 --key HappyFace.jpg OutputFile.jpg --profile UserDaveAccountA
   ```

**Test permissions using the AWS Tools for Windows PowerShell**

1. Store Dave's credentials as `AccountADave`. You then use these credentials to `PUT` and `GET` an object.

   ```
   set-awscredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -storeas AccountADave
   ```

1. Upload a sample object using the AWS Tools for Windows PowerShell `Write-S3Object` command using user Dave's stored credentials. 

   ```
   Write-S3Object -bucketname awsexamplebucket1 -key HappyFace.jpg -file HappyFace.jpg -StoredCredentials AccountADave
   ```

   Download the previously uploaded object.

   ```
   Read-S3Object -bucketname awsexamplebucket1 -key HappyFace.jpg -file Output.jpg -StoredCredentials AccountADave
   ```

# Example 2: Bucket owner granting cross-account bucket permissions
Granting cross-account permissions

**Important**  
Granting permissions to IAM roles is a better practice than granting permissions to individual users. To learn how to do this, see [Understanding cross-account permissions and using IAM roles](example-walkthroughs-managing-access-example4.md#access-policies-walkthrough-example4-overview).

**Topics**
+ [Preparing for the walkthrough](#cross-acct-access-step0)
+ [Step 1: Do the Account A tasks](#access-policies-walkthrough-cross-account-permissions-acctA-tasks)
+ [Step 2: Do the Account B tasks](#access-policies-walkthrough-cross-account-permissions-acctB-tasks)
+ [Step 3: (Optional) Try explicit deny](#access-policies-walkthrough-example2-explicit-deny)
+ [Step 4: Clean up](#access-policies-walkthrough-example2-cleanup-step)

An AWS account—for example, Account A—can grant another AWS account, Account B, permission to access its resources such as buckets and objects. Account B can then delegate those permissions to users in its account. In this example scenario, a bucket owner grants cross-account permission to another account to perform specific bucket operations.

**Note**  
Account A can also directly grant a user in Account B permissions by using a bucket policy. However, the user still needs permission from the parent account, Account B, to which the user belongs. The user can access the resource only if both the resource owner (Account A) and the parent account (Account B) grant the permission.
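The evaluation rule described in this note can be sketched as follows (illustrative Python, not the actual AWS evaluation engine; an explicit deny from either side always wins):

```
# Illustrative sketch: a user in Account B can act on Account A's
# bucket only if BOTH Account A (the resource owner, via the bucket
# policy) and Account B (the parent account, via a policy attached to
# the user) allow the action. An explicit deny from either side wins.
def is_allowed(action, owner_policy, parent_policy):
    for policy in (owner_policy, parent_policy):
        if action in policy.get("deny", set()):
            return False          # explicit deny takes precedence
    return (action in owner_policy.get("allow", set())
            and action in parent_policy.get("allow", set()))

account_a = {"allow": {"s3:ListBucket"}}   # Account A's bucket policy
account_b = {"allow": {"s3:ListBucket"}}   # user policy in Account B

assert is_allowed("s3:ListBucket", account_a, account_b)
assert not is_allowed("s3:GetObject", account_a, account_b)
```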

The following is a summary of the walkthrough steps:

![\[An AWS account granting another AWS account permission to access its resources.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex2.png)


1. Account A administrator user attaches a bucket policy granting cross-account permissions to Account B to perform specific bucket operations.

   The administrator user in Account B automatically inherits these permissions.

1. Account B administrator user attaches a user policy to the user, delegating the permissions that Account B received from Account A.

1. User in Account B then verifies permissions by accessing an object in the bucket owned by Account A.

For this example, you need two accounts. The following table shows how we refer to these accounts and the administrator users in them. In accordance with the IAM guidelines (see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials)), we don't use the root user credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials when creating resources and granting them permissions. 


| AWS account ID | Account referred to as | Administrator user in the account  | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 
|  *2222-2222-2222*  |  Account B  |  AccountBadmin  | 

All the tasks of creating users and granting permissions are done in the AWS Management Console. To verify permissions, the walkthrough uses the command line tools, AWS Command Line Interface (CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code.

## Preparing for the walkthrough


1. Make sure you have two AWS accounts and that each account has one administrator user as shown in the table in the preceding section.

   1. Sign up for an AWS account, if needed. 

   1. Using Account A credentials, sign in to the [IAM console](https://console.aws.amazon.com/iam/home?#home) to create the administrator user:

      1. Create user **AccountAadmin** and note the security credentials. For instructions, see [Creating an IAM user in Your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 

      1. Grant administrator privileges to **AccountAadmin** by attaching a user policy giving full access. For instructions, see [Working with Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 

   1. While you are in the IAM console, note the **IAM user Sign-In URL** on the **Dashboard**. All users in the account must use this URL when signing in to the AWS Management Console.

      For more information, see [How Users Sign in to Your Account](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*.

   1. Repeat the preceding step using Account B credentials and create administrator user **AccountBadmin**.

1. Set up either the AWS Command Line Interface (AWS CLI) or the AWS Tools for Windows PowerShell. Make sure that you save administrator user credentials as follows:
   + If using the AWS CLI, create two profiles, `AccountAadmin` and `AccountBadmin`, in the config file.
   + If using the AWS Tools for Windows PowerShell, make sure that you store credentials for the session as `AccountAadmin` and `AccountBadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

1. Save the administrator user credentials, also referred to as profiles. You can use the profile name instead of specifying credentials for each command you enter. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

   1. Add profiles in the AWS CLI credentials file for each of the administrator users, `AccountAadmin` and `AccountBadmin`, in the two accounts. 

      ```
      [AccountAadmin]
      aws_access_key_id = access-key-ID
      aws_secret_access_key = secret-access-key
      region = us-east-1
      
      [AccountBadmin]
      aws_access_key_id = access-key-ID
      aws_secret_access_key = secret-access-key
      region = us-east-1
      ```

   1. If you're using the AWS Tools for Windows PowerShell, run the following command.

      ```
      set-awscredentials -AccessKey AcctA-access-key-ID -SecretKey AcctA-secret-access-key -storeas AccountAadmin
      set-awscredentials -AccessKey AcctB-access-key-ID -SecretKey AcctB-secret-access-key -storeas AccountBadmin
      ```

## Step 1: Do the Account A tasks


### Step 1.1: Sign in to the AWS Management Console


Using the IAM user sign-in URL for Account A, first sign in to the AWS Management Console as **AccountAadmin** user. This user will create a bucket and attach a policy to it. 

### Step 1.2: Create a bucket


1. In the Amazon S3 console, create a bucket. This exercise assumes the bucket is created in the US East (N. Virginia) AWS Region and is named `amzn-s3-demo-bucket`.

   For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

1. Upload a sample object to the bucket.

   For instructions, go to [Step 2: Upload an object to your bucket](GetStartedWithS3.md#uploading-an-object-bucket). 

### Step 1.3: Attach a bucket policy to grant cross-account permissions to Account B


The bucket policy grants the `s3:GetLifecycleConfiguration` and `s3:ListBucket` permissions to Account B. It's assumed that you're still signed in to the console using **AccountAadmin** user credentials.

1. Attach the following bucket policy to `amzn-s3-demo-bucket`. The policy grants Account B permission for the `s3:GetLifecycleConfiguration` and `s3:ListBucket` actions. In the `Principal` element, replace the example account ID with Account B's AWS account ID.

   For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md). 

------
#### [ JSON ]


   ```
   {
      "Version":"2012-10-17",
      "Statement": [
         {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
               "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
               "s3:GetLifecycleConfiguration",
               "s3:ListBucket"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
         }
      ]
   }
   ```

------

1. Verify that Account B (and thus its administrator user) can perform the operations.
   + Verify using the AWS CLI

     ```
     aws s3 ls s3://amzn-s3-demo-bucket --profile AccountBadmin
     aws s3api get-bucket-lifecycle-configuration --bucket amzn-s3-demo-bucket --profile AccountBadmin
     ```
   + Verify using the AWS Tools for Windows PowerShell

     ```
     get-s3object -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBadmin 
     get-s3bucketlifecycleconfiguration -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBadmin
     ```

## Step 2: Do the Account B tasks


Now the Account B administrator creates a user, Dave, and delegates the permissions received from Account A. 

### Step 2.1: Sign in to the AWS Management Console


Using the IAM user sign-in URL for Account B, first sign in to the AWS Management Console as **AccountBadmin** user. 

### Step 2.2: Create user Dave in Account B


In the [IAM Console](https://console.aws.amazon.com/iam/), create a user, **Dave**. 

For instructions, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

### Step 2.3: Delegate permissions to user Dave


Create an inline policy for the user Dave by using the following policy. You will need to update the policy by providing your bucket name.

It's assumed that you're signed in to the console using **AccountBadmin** user credentials.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "Example",
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket"
         ]
      }
   ]
}
```

------

For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_inline-using.html) in the *IAM User Guide*.

### Step 2.4: Test permissions


Now Dave in Account B can list the contents of `amzn-s3-demo-bucket` owned by Account A. You can verify the permissions using either of the following procedures. 

**Test permissions using the AWS CLI**

1. Add the `UserDave` profile to the AWS CLI config file. For more information about the config file, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).

   ```
   [profile UserDave]
   aws_access_key_id = access-key
   aws_secret_access_key = secret-access-key
   region = us-east-1
   ```

1. At the command prompt, enter the following AWS CLI command to verify that Dave can now get an object list from `amzn-s3-demo-bucket`, which is owned by Account A. Note that the command specifies the `UserDave` profile.

   ```
   aws s3 ls s3://amzn-s3-demo-bucket --profile UserDave
   ```

   Dave doesn't have any other permissions. So, if he tries any other operation (for example, the following `get-bucket-lifecycle-configuration` command), Amazon S3 returns an access denied error.

   ```
   aws s3api get-bucket-lifecycle-configuration --bucket amzn-s3-demo-bucket --profile UserDave
   ```

**Test permissions using AWS Tools for Windows PowerShell**

1. Store Dave's credentials as `AccountBDave`.

   ```
   set-awscredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -storeas AccountBDave
   ```

1. Try the List Bucket command.

   ```
   get-s3object -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBDave
   ```

   Dave doesn't have any other permissions. So, if he tries any other operation (for example, the following `get-s3bucketlifecycleconfiguration` command), Amazon S3 returns an access denied error.

   ```
   get-s3bucketlifecycleconfiguration -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBDave
   ```

## Step 3: (Optional) Try explicit deny


Permissions can be granted by using an access control list (ACL), a bucket policy, or a user policy. But if a bucket policy or a user policy sets an explicit deny, the explicit deny takes precedence over any other permissions. For testing, update the bucket policy so that it both grants and explicitly denies Account B the `s3:ListBucket` permission. Because the explicit deny takes precedence, neither Account B nor users in Account B can list objects in `amzn-s3-demo-bucket`.

1. Using credentials of user `AccountAadmin` in Account A, replace the bucket policy with a version that keeps the existing allow statement and adds an explicit deny of the `s3:ListBucket` permission for Account B.
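   A bucket policy along the following lines accomplishes this (a sketch that reuses the example values from Step 1.3; it keeps the allow statement and adds an explicit deny of `s3:ListBucket`; as in Step 1.3, replace the example account ID in each `Principal` element with Account B's AWS account ID):

------
#### [ JSON ]

   ```
   {
      "Version": "2012-10-17",
      "Statement": [
         {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
               "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
               "s3:GetLifecycleConfiguration",
               "s3:ListBucket"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
         },
         {
            "Sid": "Deny ListBucket",
            "Effect": "Deny",
            "Principal": {
               "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
               "s3:ListBucket"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
         }
      ]
   }
   ```

------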

1. Now if you try to get a bucket list using `AccountBadmin` credentials, access is denied.
   + Using the AWS CLI, run the following command:

     ```
     aws s3 ls s3://amzn-s3-demo-bucket --profile AccountBadmin
     ```
   + Using the AWS Tools for Windows PowerShell, run the following command:

     ```
     get-s3object -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBadmin
     ```

## Step 4: Clean up


1. After you're done testing, you can do the following to clean up:

   1. Sign in to the AWS Management Console ([AWS Management Console](https://console.aws.amazon.com/)) using Account A credentials, and do the following:
     + In the Amazon S3 console, remove the bucket policy attached to `amzn-s3-demo-bucket`. On the bucket's **Permissions** tab, delete the bucket policy. 
     + If the bucket is created for this exercise, in the Amazon S3 console, delete the objects and then delete the bucket. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the `AccountAadmin` user.

1. Sign in to the [IAM Console](https://console.aws.amazon.com/iam/) using Account B credentials. Delete user `AccountBadmin`. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*.

# Example 3: Bucket owner granting permissions to objects it does not own
Granting object permissions

**Important**  
Granting permissions to IAM roles is a better practice than granting permissions to individual users. To learn how to do this, see [Understanding cross-account permissions and using IAM roles](example-walkthroughs-managing-access-example4.md#access-policies-walkthrough-example4-overview).

**Topics**
+ [Step 0: Preparing for the walkthrough](#access-policies-walkthrough-cross-account-acl-step0)
+ [Step 1: Do the Account A tasks](#access-policies-walkthrough-cross-account-acl-acctA-tasks)
+ [Step 2: Do the Account B tasks](#access-policies-walkthrough-cross-account-acl-acctB-tasks)
+ [Step 3: Test permissions](#access-policies-walkthrough-cross-account-acl-verify)
+ [Step 4: Clean up](#access-policies-walkthrough-cross-account-acl-cleanup)

The scenario for this example is that a bucket owner wants to grant permission to access objects, but the bucket owner doesn't own all objects in the bucket. For this example, the bucket owner is trying to grant permission to users in its own account.

A bucket owner can enable other AWS accounts to upload objects. By default, the bucket owner doesn't own objects written to a bucket by another AWS account. Objects are owned by the accounts that write them to an S3 bucket. If the bucket owner doesn't own objects in the bucket, the object owner must first grant permission to the bucket owner using an object access control list (ACL). Then, the bucket owner can grant permissions to an object that they don't own. For more information, see [Amazon S3 bucket and object ownership](access-policy-language-overview.md#about-resource-owner).

If the bucket owner applies the bucket owner enforced setting for S3 Object Ownership for the bucket, the bucket owner will own all objects in the bucket, including objects written by another AWS account. This approach resolves the issue that objects are not owned by the bucket owner. Then, you can delegate permission to users in your own account or to other AWS accounts.

**Note**  
S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.  
 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

In this example, we assume the bucket owner has not applied the bucket owner enforced setting for Object Ownership. The bucket owner delegates permission to users in its own account. The following is a summary of the walkthrough steps:

![\[A bucket owner granting permissions to objects it does not own.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex3.png)


1. Account A administrator user attaches a bucket policy with two statements.
   + Allow cross-account permission to Account B to upload objects.
   + Allow a user in its own account to access objects in the bucket.

1. Account B administrator user uploads objects to the bucket owned by Account A.

1. Account B administrator updates the object ACL, adding a grant that gives the bucket owner full-control permission on the object.

1. User in Account A verifies by accessing objects in the bucket, regardless of who owns them.

For this example, you need two accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. In this walkthrough, you don't use the account root user credentials, according to the recommended IAM guidelines. For more information, see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials). Instead, you create an administrator in each account and use those credentials in creating resources and granting them permissions.


| AWS account ID | Account referred to as | Administrator in the account  | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 
|  *2222-2222-2222*  |  Account B  |  AccountBadmin  | 

All the tasks of creating users and granting permissions are done in the AWS Management Console. To verify permissions, the walkthrough uses the command line tools, AWS Command Line Interface (AWS CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code. 

## Step 0: Preparing for the walkthrough


1. Make sure that you have two AWS accounts and each account has one administrator as shown in the table in the preceding section.

   1. Sign up for an AWS account, if needed. 

   1. Using Account A credentials, sign in to the [IAM Console](https://console.aws.amazon.com/iam/) and do the following to create an administrator user:
      + Create user **AccountAadmin** and note the user's security credentials. For more information about adding users, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 
      + Grant administrator permissions to **AccountAadmin** by attaching a user policy that gives full access. For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 
      + In the [IAM Console](https://console.aws.amazon.com/iam/) **Dashboard**, note the **IAM User Sign-In URL**. Users in this account must use this URL when signing in to the AWS Management Console. For more information, see [How users sign in to your account](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*. 

   1. Repeat the preceding step using Account B credentials and create administrator user **AccountBadmin**.

1. Set up either the AWS CLI or the Tools for Windows PowerShell. Make sure that you save the administrator credentials as follows:
   + If using the AWS CLI, create two profiles, `AccountAadmin` and `AccountBadmin`, in the config file.
   + If using the Tools for Windows PowerShell, make sure that you store credentials for the session as `AccountAadmin` and `AccountBadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

## Step 1: Do the Account A tasks


Perform the following steps for Account A:

### Step 1.1: Sign in to the console


Using the IAM user sign-in URL for Account A, sign in to the AWS Management Console as **AccountAadmin** user. This user will create a bucket and attach a policy to it. 

### Step 1.2: Create a bucket and user, and add a bucket policy to grant user permissions


1. In the Amazon S3 console, create a bucket. This exercise assumes that the bucket is created in the US East (N. Virginia) AWS Region, and the name is `amzn-s3-demo-bucket1`.

   For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

1. In the [IAM Console](https://console.aws.amazon.com/iam/), create a user **Dave**. 

   For step-by-step instructions, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

1. Note user Dave's credentials. 

1. In the Amazon S3 console, attach the following bucket policy to the `amzn-s3-demo-bucket1` bucket. For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md). For information about how to find account IDs, see [Finding your AWS account ID](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html#FindingYourAccountIdentifiers). 

   The policy grants Account B the `s3:PutObject` and `s3:ListBucket` permissions. The policy also grants user `Dave` the `s3:GetObject` permission. Replace the account ID in the first statement's `Principal` element with Account B's AWS account ID, and the account ID in Dave's user ARN with Account A's AWS account ID. 

------
#### [ JSON ]


   ```
   {
        "Version":"2012-10-17",
       "Statement": [
           {
               "Sid": "Statement1",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:root"
               },
               "Action": [
                   "s3:PutObject",
                   "s3:ListBucket"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket1/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket1"
               ]
           },
           {
               "Sid": "Statement3",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:user/Dave"
               },
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket1/*"
               ]
           }
       ]
   }
   ```

------
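
Before attaching a bucket policy like this one, it can help to sanity-check the document locally. The following Python sketch (standard library only; not part of the walkthrough) parses the example policy above and builds a quick view of which principal is granted which actions:

```python
import json

# The example bucket policy from this step, as a JSON document.
policy_text = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1/*",
                "arn:aws:s3:::amzn-s3-demo-bucket1"
            ]
        },
        {
            "Sid": "Statement3",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/Dave"},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket1/*"]
        }
    ]
}
"""

policy = json.loads(policy_text)  # raises ValueError if the JSON is malformed

# Build a {principal: set-of-actions} view to eyeball who gets what.
grants = {}
for statement in policy["Statement"]:
    principal = statement["Principal"]["AWS"]
    actions = statement["Action"]
    if isinstance(actions, str):  # "Action" can be a string or a list
        actions = [actions]
    grants.setdefault(principal, set()).update(actions)

for principal, actions in sorted(grants.items()):
    print(principal, "->", sorted(actions))
```

A parse failure here catches malformed JSON before you paste the policy into the console.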

## Step 2: Do the Account B tasks


Now that Account B has permissions to perform operations on Account A's bucket, the Account B administrator does the following:
+ Uploads an object to Account A's bucket 
+ Adds a grant in the object ACL to allow Account A, the bucket owner, full control

**Using the AWS CLI**

1. Using the `put-object` AWS CLI command, upload an object. The `--body` parameter in the command identifies the source file to upload. For example, if the file is on the `C:` drive of a Windows machine, specify `c:\HappyFace.jpg`. The `--key` parameter provides the key name for the object. 

   ```
   aws s3api put-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --body HappyFace.jpg --profile AccountBadmin
   ```

1. Add a grant to the object ACL to allow the bucket owner full control of the object. For information about how to find a canonical user ID, see [Find the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId) in the *AWS Account Management Reference Guide*.

   ```
   aws s3api put-object-acl --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --grant-full-control id="AccountA-CanonicalUserID" --profile AccountBadmin
   ```

**Using the Tools for Windows PowerShell**

1. Using the `Write-S3Object` command, upload an object. 

   ```
   Write-S3Object -BucketName amzn-s3-demo-bucket1 -key HappyFace.jpg -file HappyFace.jpg -StoredCredentials AccountBadmin
   ```

1. Add a grant to the object ACL to allow the bucket owner full control of the object.

   ```
   Set-S3ACL -BucketName amzn-s3-demo-bucket1 -Key HappyFace.jpg -CannedACLName "bucket-owner-full-control" -StoredCredentials AccountBadmin
   ```

## Step 3: Test permissions


Now verify that user Dave in Account A can access the object owned by Account B.

**Using the AWS CLI**

1. Add user Dave credentials to the AWS CLI config file and create a new profile, `UserDaveAccountA`. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).

   ```
   [profile UserDaveAccountA]
   aws_access_key_id = access-key
   aws_secret_access_key = secret-access-key
   region = us-east-1
   ```

1. Run the `get-object` CLI command to download `HappyFace.jpg` and save it locally. You provide user Dave credentials by adding the `--profile` parameter.

   ```
   aws s3api get-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg Outputfile.jpg --profile UserDaveAccountA
   ```

**Using the Tools for Windows PowerShell**

1. Store user Dave's AWS credentials as `UserDaveAccountA` in the persistent store. 

   ```
   Set-AWSCredentials -AccessKey UserDave-AccessKey -SecretKey UserDave-SecretAccessKey -storeas UserDaveAccountA
   ```

1. Run the `Read-S3Object` command to download the `HappyFace.jpg` object and save it locally. You provide user Dave credentials by adding the `-StoredCredentials` parameter. 

   ```
   Read-S3Object -BucketName amzn-s3-demo-bucket1 -Key HappyFace.jpg -file HappyFace.jpg  -StoredCredentials UserDaveAccountA
   ```

## Step 4: Clean up


1. After you're done testing, you can do the following to clean up:

   1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) using Account A credentials, and do the following:
     + In the Amazon S3 console, remove the bucket policy attached to `amzn-s3-demo-bucket1`. In the bucket **Properties**, delete the policy in the **Permissions** section. 
     + If the bucket is created for this exercise, in the Amazon S3 console, delete the objects and then delete the bucket. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the **AccountAadmin** user. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*.

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) using Account B credentials. In the [IAM Console](https://console.aws.amazon.com/iam/), delete the user **AccountBadmin**.

# Example 4 - Bucket owner granting cross-account permission to objects it does not own
Granting cross-account object permissions

**Topics**
+ [Understanding cross-account permissions and using IAM roles](#access-policies-walkthrough-example4-overview)
+ [Step 0: Preparing for the walkthrough](#access-policies-walkthrough-example4-step0)
+ [Step 1: Do the Account A tasks](#access-policies-walkthrough-example4-step1)
+ [Step 2: Do the Account B tasks](#access-policies-walkthrough-example4-step2)
+ [Step 3: Do the Account C tasks](#access-policies-walkthrough-example4-step3)
+ [Step 4: Clean up](#access-policies-walkthrough-example4-step6)
+ [Related resources](#RelatedResources-managing-access-example4)

In this example scenario, you own a bucket and have enabled other AWS accounts to upload objects to it. If you have applied the bucket owner enforced setting for S3 Object Ownership, you own all objects in the bucket, including objects written by other AWS accounts, and you can delegate permissions on those objects to users in your own account or to other AWS accounts. This walkthrough, however, assumes that the bucket owner enforced setting for S3 Object Ownership is not enabled. In that case, your bucket can contain objects that other AWS accounts own. 

Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:
+ The bucket owner has no permissions on those objects created by other AWS accounts. For the bucket owner to grant permissions on objects it doesn't own, the object owner must first grant permission to the bucket owner. The object owner is the AWS account that created the objects. The bucket owner can then delegate those permissions.
+ The bucket owner account can delegate permissions to users in its own account (see [Example 3: Bucket owner granting permissions to objects it does not own](example-walkthroughs-managing-access-example3.md)). However, the bucket owner account can't delegate permissions to other AWS accounts because cross-account delegation isn't supported. 

In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects. Then, the bucket owner can grant another AWS account permission to assume the role, temporarily enabling it to access objects in the bucket. 

**Note**  
S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.  
 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

## Understanding cross-account permissions and using IAM roles


IAM roles enable several scenarios for delegating access to your resources, and cross-account access is one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role that you create has the following two policies attached to it:
+ A trust policy identifying another AWS account that can assume the role.
+ An access policy defining what permissions—for example, `s3:GetObject`—are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see [Policy actions for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions).
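
The split between the two policies can be made concrete with a small sketch (account and bucket names below are the placeholder values used in this walkthrough). Notice that only the trust policy carries a `Principal` element, while only the access policy scopes the permissions to a `Resource`:

```python
# Trust policy: controls WHO may assume the role (note the Principal element).
# "AccountC-ID" is the placeholder account ID used throughout this walkthrough.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::AccountC-ID:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Access (permissions) policy: controls WHAT the assumed role may do,
# and on which resources.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
    }],
}

# Only the trust policy names a Principal; only the access policy
# scopes its permissions to a Resource.
assert "Principal" in trust_policy["Statement"][0]
assert "Principal" not in access_policy["Statement"][0]
assert "Resource" in access_policy["Statement"][0]
```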

The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects:
+ Assume the role and, in response, get temporary security credentials. 
+ Using the temporary security credentials, access the objects in the bucket.

For more information about IAM roles, see [IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*. 

The following is a summary of the walkthrough steps:

![\[Cross-account permissions using IAM roles.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex4.png)


1. Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.

1. Account A administrator creates an IAM role, establishing trust with Account C, so that users in that account can access Account A. The access policy attached to the role limits what users in Account C can do when they access Account A.

1. Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.

1. Account C administrator creates a user and attaches a user policy that allows the user to assume the role.

1. User in Account C first assumes the role, which returns the user temporary security credentials. Using those temporary credentials, the user then accesses objects in the bucket.

For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. In accordance with the IAM guidelines (see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials)), we don't use the AWS account root user credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials when creating resources and granting them permissions.


| AWS account ID | Account referred to as | Administrator user in the account  | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 
|  *2222-2222-2222*  |  Account B  |  AccountBadmin  | 
|  *3333-3333-3333*  |  Account C  |  AccountCadmin  | 



## Step 0: Preparing for the walkthrough


**Note**  
You might want to open a text editor and write down some of the information as you go through the steps. In particular, you will need the account IDs, the canonical user IDs, the IAM user sign-in URL for each account to connect to the console, and the Amazon Resource Names (ARNs) of the IAM users and roles. 

1. Make sure that you have three AWS accounts and each account has one administrator user as shown in the table in the preceding section.

   1. Sign up for AWS accounts, as needed. We refer to these accounts as Account A, Account B, and Account C.

   1. Using Account A credentials, sign in to the [IAM console](https://console.aws.amazon.com/iam/home?#home) and do the following to create an administrator user:
      + Create user **AccountAadmin** and note its security credentials. For more information about adding users, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 
      + Grant administrator privileges to **AccountAadmin** by attaching a user policy giving full access. For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 
      + In the IAM Console **Dashboard**, note the **IAM User Sign-In URL**. Users in this account must use this URL when signing in to the AWS Management Console. For more information, see [Sign in to the AWS Management Console as an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*. 

   1. Repeat the preceding step to create administrator users in Account B and Account C.

1. For Account C, note the canonical user ID. 

   When you create an IAM role in Account A, the trust policy grants Account C permission to assume the role by specifying the account ID. You can find account information as follows:

   1. Use your AWS account ID or account alias, your IAM user name, and your password to sign in to the [Amazon S3 console](https://console.aws.amazon.com/s3/).

   1. Choose the name of an Amazon S3 bucket to view the details about that bucket.

   1. Choose the **Permissions** tab and then choose **Access Control List**. 

   1. In the **Access for your AWS account** section, in the **Account** column is a long identifier, such as `c1daexampleaaf850ea79cf0430f33d72579fd1611c97f7ded193374c0b163b6`. This is your canonical user ID.

1. When creating a bucket policy, you will need the following information. Note these values:
   + **Canonical user ID of Account A** – When the Account A administrator grants conditional upload object permission to the Account B administrator, the condition specifies the canonical user ID of the Account A user that must get full-control of the objects. 
**Note**  
The canonical user ID is an Amazon S3–only concept. It is a 64-character obfuscated version of the account ID. 
   + **User ARN for Account B administrator** – You can find the user ARN in the [IAM Console](https://console.aws.amazon.com/iam/). Select the user, and find the user's ARN on the **Summary** tab.

     In the bucket policy, you grant `AccountBadmin` permission to upload objects and you specify the user using the ARN. Here's an example ARN value:

     ```
     arn:aws:iam::AccountB-ID:user/AccountBadmin
     ```

1. Set up either the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell. Make sure that you save administrator user credentials as follows:
   + If using the AWS CLI, create profiles, `AccountAadmin` and `AccountBadmin`, in the config file.
   + If using the AWS Tools for Windows PowerShell, make sure that you store credentials for the session as `AccountAadmin` and `AccountBadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).
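
The user ARN noted earlier follows AWS's fixed colon-delimited layout: `arn:partition:service:region:account-id:resource`. The following sketch shows a minimal parser (the `parse_arn` helper is hypothetical, not an AWS API):

```python
def parse_arn(arn):
    """Split an ARN into its colon-delimited components.

    Layout: arn:partition:service:region:account-id:resource
    The resource portion may itself contain colons, so split at most 5 times.
    """
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError("not a valid ARN: %r" % arn)
    keys = ("prefix", "partition", "service", "region", "account_id", "resource")
    return dict(zip(keys, parts))

# The Account B administrator ARN from the example above. IAM is a global
# service, so the region component is empty.
user_arn = parse_arn("arn:aws:iam::AccountB-ID:user/AccountBadmin")
print(user_arn["service"], user_arn["account_id"], user_arn["resource"])
```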

## Step 1: Do the Account A tasks


In this example, Account A is the bucket owner. So user AccountAadmin in Account A will do the following: 
+ Create a bucket.
+ Attach a bucket policy that grants the Account B administrator permission to upload objects.
+ Create an IAM role that grants Account C permission to assume the role so it can access objects in the bucket.

### Step 1.1: Sign in to the AWS Management Console


Using the IAM user sign-in URL for Account A, first sign in to the AWS Management Console as the **AccountAadmin** user. This user will create a bucket and attach a policy to it. 

### Step 1.2: Create a bucket and attach a bucket policy


In the Amazon S3 console, do the following:

1. Create a bucket. This exercise assumes the bucket name is `amzn-s3-demo-bucket1`.

   For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

1. Attach the following bucket policy. The policy grants the Account B administrator conditional permission to upload objects.

   Update the policy by providing your own values for `amzn-s3-demo-bucket1`, `AccountB-ID`, and the `CanonicalUserId-of-AWSaccountA-BucketOwner`. 

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "111",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:user/AccountBadmin"
               },
               "Action": "s3:PutObject",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
           },
           {
               "Sid": "112",
               "Effect": "Deny",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:user/AccountBadmin"
               },
               "Action": "s3:PutObject",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
               "Condition": {
                   "StringNotEquals": {
                       "s3:x-amz-grant-full-control": "id=CanonicalUserId-of-AWSaccountA-BucketOwner"
                   }
               }
           }
       ]
   }
   ```

------
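
This policy works because, in IAM policy evaluation, an explicit `Deny` always overrides an `Allow`. The toy evaluator below (a deliberate simplification for illustration, not real IAM logic) models just the two statements above to show why an upload that omits the full-control grant is refused:

```python
REQUIRED_GRANT = "id=CanonicalUserId-of-AWSaccountA-BucketOwner"

def upload_allowed(grant_full_control_header):
    """Toy model of the two statements in the bucket policy above."""
    # Sid 111: Allow s3:PutObject for AccountBadmin.
    allowed = True
    # Sid 112: Deny s3:PutObject when the x-amz-grant-full-control value
    # is anything other than the bucket owner's canonical user ID.
    # (StringNotEquals also matches when the header is missing entirely.)
    denied = grant_full_control_header != REQUIRED_GRANT
    # An explicit deny overrides the allow.
    return allowed and not denied

# Upload that grants the bucket owner full control: permitted.
print(upload_allowed(REQUIRED_GRANT))   # True
# Upload without the grant header: the deny statement matches.
print(upload_allowed(None))             # False
```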

### Step 1.3: Create an IAM role to allow Account C cross-account access in Account A


In the [IAM Console](https://console.aws.amazon.com/iam/), create an IAM role (**examplerole**) that grants Account C permission to assume the role. Make sure that you are still signed in as the Account A administrator because the role must be created in Account A.

1. Before creating the role, prepare the managed policy that defines the permissions that the role requires. You attach this policy to the role in a later step.

   1. In the navigation pane on the left, choose **Policies** and then choose **Create Policy**.

   1. Next to **Create Your Own Policy**, choose **Select**.

   1. Enter **access-accountA-bucket** in the **Policy Name** field.

   1. Copy the following access policy and paste it into the **Policy Document** field. The access policy grants the role `s3:GetObject` permission so that, when the Account C user assumes the role, the user can perform only the `s3:GetObject` operation.

------
#### [ JSON ]


      ```
      {
        "Version":"2012-10-17",		 	 	 
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
          }
        ]
      }
      ```

------

   1. Choose **Create Policy**.

      The new policy appears in the list of managed policies.

1. In the navigation pane on the left, choose **Roles** and then choose **Create New Role**.

1. Under **Select Role Type**, select **Role for Cross-Account Access**, and then choose the **Select** button next to **Provide access between AWS accounts you own**.

1. Enter the Account C account ID.

   For this walkthrough, you don't need to require users to have multi-factor authentication (MFA) to assume the role, so leave that option unselected.

1. Choose **Next Step** to set the permissions that will be associated with the role.

1. Select the checkbox next to the **access-accountA-bucket** policy that you created, and then choose **Next Step**.

   The Review page appears so you can confirm the settings for the role before it's created. One very important item to note on this page is the link that you can send to your users who need to use this role. Users who use the link go straight to the **Switch Role** page with the Account ID and Role Name fields already filled in. You can also see this link later on the **Role Summary** page for any cross-account role.

1. Enter `examplerole` for the role name, and then choose **Next Step**.

1. After reviewing the role, choose **Create Role**.

   The `examplerole` role is displayed in the list of roles.

1. Choose the role name `examplerole`.

1. Select the **Trust Relationships** tab.

1. Choose **Show policy document**, and verify that the trust policy shown matches the following policy.

   The following trust policy establishes trust with Account C by allowing it the `sts:AssumeRole` action. For more information, see [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) in the *AWS Security Token Service API Reference*.

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:root"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

------

1. Note the Amazon Resource Name (ARN) of the `examplerole` role that you created. 

   Later in the following steps, you attach a user policy to allow an IAM user to assume this role, and you identify the role by the ARN value. 

## Step 2: Do the Account B tasks


For this example, the bucket owned by Account A needs objects that are owned by other accounts. In this step, the Account B administrator uploads an object by using the command line tools.
+ Using the `put-object` AWS CLI command, upload an object to `amzn-s3-demo-bucket1`. 

  ```
  aws s3api put-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --body HappyFace.jpg --grant-full-control id="canonicalUserId-ofTheBucketOwner" --profile AccountBadmin
  ```

  Note the following:
  + The `--profile` parameter specifies the `AccountBadmin` profile, so the object is owned by Account B.
  + The parameter `grant-full-control` grants the bucket owner full-control permission on the object as required by the bucket policy.
  + The `--body` parameter identifies the source file to upload. For example, if the file is on the C: drive of a Windows computer, you specify `c:\HappyFace.jpg`. 

## Step 3: Do the Account C tasks


In the preceding steps, Account A created a role, `examplerole`, that establishes trust with Account C. This role allows users in Account C to access Account A. In this step, the Account C administrator creates a user (Dave) and delegates to him the `sts:AssumeRole` permission on the role. This approach allows Dave to assume `examplerole` and temporarily gain access to Account A. The access policy that Account A attached to the role limits what Dave can do when he accesses Account A: specifically, he can get objects in `amzn-s3-demo-bucket1`.

### Step 3.1: Create a user in Account C and delegate permission to assume examplerole


1. Using the IAM user sign-in URL for Account C, first sign in to the AWS Management Console as **AccountCadmin** user. 

   

1. In the [IAM Console](https://console.aws.amazon.com/iam/), create a user, Dave. 

   For step-by-step instructions, see [Creating IAM users (AWS Management Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

1. Note the Dave credentials. Dave will need these credentials to assume the `examplerole` role.

1. Create an inline policy for the Dave IAM user to delegate the `sts:AssumeRole` permission to Dave on the `examplerole` role in Account A. 

   1. In the navigation pane on the left, choose **Users**.

   1. Choose the user name **Dave**.

   1. On the user details page, select the **Permissions** tab and then expand the **Inline Policies** section.

   1. Choose **click here** (or **Create User Policy**).

   1. Choose **Custom Policy**, and then choose **Select**.

   1. Enter a name for the policy in the **Policy Name** field.

   1. Copy the following policy into the **Policy Document** field.

      You must update the policy by providing the `AccountA-ID`.

------
#### [ JSON ]


      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "sts:AssumeRole"
                  ],
                  "Resource": "arn:aws:iam::111122223333:role/examplerole"
              }
          ]
      }
      ```

------

   1. Choose **Apply Policy**.

1. Save Dave's credentials to the config file of the AWS CLI by adding another profile, `AccountCDave`.

   ```
   [profile AccountCDave]
   aws_access_key_id = UserDaveAccessKeyID
   aws_secret_access_key = UserDaveSecretAccessKey
   region = us-west-2
   ```

### Step 3.2: Assume role (examplerole) and access objects


Now Dave can access objects in the bucket owned by Account A as follows:
+ Dave first assumes the `examplerole` using his own credentials. This will return temporary credentials.
+ Using the temporary credentials, Dave will then access objects in Account A's bucket.

1. At the command prompt, run the following AWS CLI `assume-role` command using the `AccountCDave` profile. 

   You must update the ARN value in the command by providing the `AccountA-ID` where `examplerole` is defined.

   ```
   aws sts assume-role --role-arn arn:aws:iam::AccountA-ID:role/examplerole --profile AccountCDave --role-session-name test
   ```

   In response, AWS Security Token Service (AWS STS) returns temporary security credentials (access key ID, secret access key, and a session token).

1. Save the temporary security credentials in the AWS CLI config file under the `TempCred` profile.

   ```
   [profile TempCred]
   aws_access_key_id = temp-access-key-ID
   aws_secret_access_key = temp-secret-access-key
   aws_session_token = session-token
   region = us-west-2
   ```

1. At the command prompt, run the following AWS CLI command to access an object by using the temporary credentials. The command uses the `get-object` API to download the `HappyFace.jpg` object and save it locally.

   ```
   aws s3api get-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg SaveFileAs.jpg --profile TempCred
   ```

   Because the access policy attached to `examplerole` allows the action, Amazon S3 processes the request. You can try the same action on any other object in the bucket.

   If you try any other action—for example, `get-object-acl`—you will get permission denied because the role isn't allowed that action.

   ```
   aws s3api get-object-acl --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --profile TempCred
   ```

   In this example, user Dave assumed the role and accessed the object by using temporary credentials. An application in Account C could do the same: Account C can delegate to the application permission to assume `examplerole`, and the application can then obtain temporary security credentials and access objects in `amzn-s3-demo-bucket1`.
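
The temporary credentials returned by `assume-role` in step 1 arrive as JSON, so the `TempCred` profile can be generated rather than copied by hand. The following sketch uses a placeholder response (the field names follow the STS `AssumeRole` response shape; the credential values are stand-ins):

```python
import json

# Placeholder response in the shape printed by `aws sts assume-role`.
response_text = """
{
    "Credentials": {
        "AccessKeyId": "temp-access-key-ID",
        "SecretAccessKey": "temp-secret-access-key",
        "SessionToken": "session-token",
        "Expiration": "2024-01-01T00:00:00Z"
    }
}
"""

creds = json.loads(response_text)["Credentials"]
profile = "\n".join([
    "[profile TempCred]",
    "aws_access_key_id = " + creds["AccessKeyId"],
    "aws_secret_access_key = " + creds["SecretAccessKey"],
    "aws_session_token = " + creds["SessionToken"],
    "region = us-west-2",
])
print(profile)
```

Append the printed block to the AWS CLI config file. Note that the credentials stop working at the `Expiration` time, after which you must assume the role again.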

## Step 4: Clean up


1. After you're done testing, you can do the following to clean up:

   1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) using Account A credentials, and do the following:
     + In the Amazon S3 console, remove the bucket policy attached to `amzn-s3-demo-bucket1`. In the bucket **Properties**, delete the policy in the **Permissions** section. 
     + If the bucket is created for this exercise, in the Amazon S3 console, delete the objects and then delete the bucket. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the `examplerole` you created in Account A. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the **AccountAadmin** user.

1. Sign in to the [IAM Console](https://console.aws.amazon.com/iam/) by using Account B credentials. Delete the user **AccountBadmin**. 

1. Sign in to the [IAM Console](https://console.aws.amazon.com/iam/) by using Account C credentials. Delete **AccountCadmin** and the user Dave.

## Related resources


For more information that's related to this walkthrough, see the following resources in the *IAM User Guide*:
+ [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html)
+ [Tutorial: Delegate Access Across AWS accounts Using IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial-cross-account-with-roles.html)
+ [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html)

# Using service-linked roles for Amazon S3 Storage Lens


To use Amazon S3 Storage Lens to collect and aggregate metrics across all your accounts in AWS Organizations, you must first ensure that S3 Storage Lens has trusted access enabled by the management account in your organization. S3 Storage Lens creates a service-linked role (SLR) to allow it to get the list of AWS accounts belonging to your organization. This list of accounts is used by S3 Storage Lens to collect metrics for S3 resources in all the member accounts when the S3 Storage Lens dashboard or configurations are created or updated.

Amazon S3 Storage Lens uses AWS Identity and Access Management (IAM) [ service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role). A service-linked role is a unique type of IAM role that is linked directly to S3 Storage Lens. Service-linked roles are predefined by S3 Storage Lens and include all the permissions that the service requires to call other AWS services on your behalf.

A service-linked role makes setting up S3 Storage Lens easier because you don't have to add the necessary permissions manually. S3 Storage Lens defines the permissions of its service-linked roles, and unless defined otherwise, only S3 Storage Lens can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy can't be attached to any other IAM entity.

You can delete this service-linked role only after first deleting the related resources. This protects your S3 Storage Lens resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service-Linked Role** column. Choose a **Yes** with a link to view the service-linked role documentation for that service.

## Service-linked role permissions for Amazon S3 Storage Lens


S3 Storage Lens uses the service-linked role named **AWSServiceRoleForS3StorageLens**. This role enables access to the AWS services and resources that are used or managed by S3 Storage Lens, and it allows S3 Storage Lens to access AWS Organizations resources on your behalf.

The S3 Storage Lens service-linked role trusts the following service to assume the role:
+ `storage-lens.s3.amazonaws.com`

The role permissions policy allows S3 Storage Lens to complete the following actions:
+ `organizations:DescribeOrganization`
+ `organizations:ListAccounts`
+ `organizations:ListAWSServiceAccessForOrganization`
+ `organizations:ListDelegatedAdministrators`

You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

## Creating a service-linked role for S3 Storage Lens


You don't need to manually create a service-linked role. When you complete one of the following tasks while signed in to the AWS Organizations management account or a delegated administrator account, S3 Storage Lens creates the service-linked role for you:
+ Create an S3 Storage Lens dashboard configuration for your organization in the Amazon S3 console.
+ `PUT` an S3 Storage Lens configuration for your organization by using the REST API, AWS CLI, or AWS SDKs.

**Note**  
S3 Storage Lens supports a maximum of five delegated administrators per organization.

If you delete this service-linked role, the preceding actions will re-create it as needed.

### Example policy for S3 Storage Lens service-linked role

**Example Permissions policy for the S3 Storage Lens service-linked role**

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AwsOrgsAccess",
            "Effect": "Allow",
            "Action": [
                "organizations:DescribeOrganization",
                "organizations:ListAccounts",
                "organizations:ListAWSServiceAccessForOrganization",
                "organizations:ListDelegatedAdministrators"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
```
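
As a quick sanity check, you can parse the policy above and confirm that every granted action is a read-only AWS Organizations call. The following is a minimal sketch using only the Python standard library:

```
import json

# The S3 Storage Lens service-linked role permissions policy from above.
POLICY = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AwsOrgsAccess",
            "Effect": "Allow",
            "Action": [
                "organizations:DescribeOrganization",
                "organizations:ListAccounts",
                "organizations:ListAWSServiceAccessForOrganization",
                "organizations:ListDelegatedAdministrators"
            ],
            "Resource": ["*"]
        }
    ]
}
""")

def allowed_actions(policy):
    """Collect every action granted by the policy's Allow statements."""
    actions = set()
    for stmt in policy["Statement"]:
        if stmt["Effect"] == "Allow":
            acts = stmt["Action"]
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions

# Every granted action is an AWS Organizations call.
assert all(a.startswith("organizations:") for a in allowed_actions(POLICY))
```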

## Editing a service-linked role for Amazon S3 Storage Lens


S3 Storage Lens doesn't allow you to edit the AWSServiceRoleForS3StorageLens service-linked role. After you create a service-linked role, you can't change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see [Editing a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#edit-service-linked-role) in the *IAM User Guide*.

## Deleting a service-linked role for Amazon S3 Storage Lens


If you no longer need to use the service-linked role, we recommend that you delete that role. That way you don't have an unused entity that is not actively monitored or maintained. However, you must clean up the resources for your service-linked role before you can manually delete it.

**Note**  
If the Amazon S3 Storage Lens service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.

To delete the AWSServiceRoleForS3StorageLens role, you must first delete all of the organization-level S3 Storage Lens configurations in all AWS Regions by using the AWS Organizations management account or a delegated administrator account.

The resources are organization-level S3 Storage Lens configurations. Use S3 Storage Lens to clean up the resources and then use the [IAM Console](https://console.aws.amazon.com/iam/), CLI, REST API, or AWS SDK to delete the role. 

In the REST API, AWS CLI, and SDKs, you can discover S3 Storage Lens configurations by calling `ListStorageLensConfigurations` in each Region where your organization has created them. Use the `DeleteStorageLensConfiguration` action to delete these configurations so that you can then delete the role.
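
The cleanup loop can be sketched as follows. This is a hedged outline, not a definitive implementation: `make_client` is a hypothetical factory standing in for something like `boto3.client("s3control", region_name=region)`, and the response shape follows the `ListStorageLensConfigurations` API.

```
def delete_org_storage_lens_configs(make_client, regions, account_id):
    """Delete every organization-level S3 Storage Lens configuration in
    each Region so that the service-linked role can then be deleted."""
    deleted = []
    for region in regions:
        # make_client is assumed to return an S3 Control client for the Region.
        client = make_client(region)
        resp = client.list_storage_lens_configurations(AccountId=account_id)
        for config in resp.get("StorageLensConfigurationList", []):
            client.delete_storage_lens_configuration(
                ConfigId=config["Id"], AccountId=account_id
            )
            deleted.append((region, config["Id"]))
    return deleted
```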

**Note**  
To delete the service-linked role, you must delete all the organization-level S3 Storage Lens configurations in all the Regions where they exist.

**To delete Amazon S3 Storage Lens resources used by the AWSServiceRoleForS3StorageLens SLR**

1. To get a list of your organization-level configurations, call `ListStorageLensConfigurations` in every Region where you have S3 Storage Lens configurations. You can also obtain this list from the Amazon S3 console.

1. Delete these configurations from the appropriate Regional endpoints by invoking the `DeleteStorageLensConfiguration` API call or by using the Amazon S3 console. 

**To manually delete the service-linked role using IAM**

After you have deleted the configurations, delete the AWSServiceRoleForS3StorageLens SLR from the [IAM console](https://console.aws.amazon.com/iam/), by invoking the IAM `DeleteServiceLinkedRole` API, or by using the AWS CLI or an AWS SDK. For more information, see [Deleting a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#delete-service-linked-role) in the *IAM User Guide*.

## Supported Regions for S3 Storage Lens service-linked roles


S3 Storage Lens supports using service-linked roles in all of the AWS Regions where the service is available. For more information, see [Amazon S3 Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html).

# Troubleshooting Amazon S3 identity and access

Use the following information to help you diagnose and fix common issues that you might encounter when working with Amazon S3 and IAM.

**Topics**
+ [I received an access denied error](#access_denied_403)
+ [I am not authorized to perform an action in Amazon S3](#security_iam_troubleshoot-no-permissions)
+ [I am not authorized to perform iam:PassRole](#security_iam_troubleshoot-passrole)
+ [I want to allow people outside of my AWS account to access my Amazon S3 resources](#security_iam_troubleshoot-cross-account-access)
+ [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md)

## I received an access denied error


Verify that neither the bucket policy nor the identity-based policy contains an explicit `Deny` statement against the requester that you're trying to grant permissions to.

For detailed information about troubleshooting access denied errors, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

## I am not authorized to perform an action in Amazon S3


If you receive an error that you're not authorized to perform an action, your policies must be updated to allow you to perform the action.

The following example error occurs when the `mateojackson` IAM user tries to use the console to view details about a fictional `my-example-widget` resource but doesn't have the fictional `s3:GetWidget` permissions.

```
User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: s3:GetWidget on resource: my-example-widget
```

In this case, the policy for the `mateojackson` user must be updated to allow access to the `my-example-widget` resource by using the `s3:GetWidget` action.

If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.

## I am not authorized to perform iam:PassRole


If you receive an error that you're not authorized to perform the `iam:PassRole` action, your policies must be updated to allow you to pass a role to Amazon S3.

Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when an IAM user named `marymajor` tries to use the console to perform an action in Amazon S3. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service.

```
User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole
```

In this case, Mary's policies must be updated to allow her to perform the `iam:PassRole` action.

If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.

## I want to allow people outside of my AWS account to access my Amazon S3 resources


You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources.

To learn more, consult the following:
+ To learn whether Amazon S3 supports these features, see [How Amazon S3 works with IAM](security_iam_service-with-iam.md).
+ To learn how to provide access to your resources across AWS accounts that you own, see [Providing access to an IAM user in another AWS account that you own](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html) in the *IAM User Guide*.
+ To learn how to provide access to your resources to third-party AWS accounts, see [Providing access to AWS accounts owned by third parties](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) in the *IAM User Guide*.
+ To learn how to provide access through identity federation, see [Providing access to externally authenticated users (identity federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html) in the *IAM User Guide*.
+ To learn the difference between using roles and resource-based policies for cross-account access, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

# Troubleshoot access denied (403 Forbidden) errors in Amazon S3

Access denied (HTTP `403 Forbidden`) errors appear when AWS explicitly or implicitly denies an authorization request. 
+ An *explicit denial* occurs when a policy contains a `Deny` statement for the specific AWS action. 
+ An *implicit denial* occurs when there is no applicable `Deny` statement and also no applicable `Allow` statement. 

Because an AWS Identity and Access Management (IAM) policy implicitly denies an IAM principal by default, the policy must explicitly allow the principal to perform an action. Otherwise, the policy implicitly denies access. For more information, see [The difference between explicit and implicit denies](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#AccessPolicyLanguage_Interplay) in the *IAM User Guide*. For information about the policy evaluation logic that determines whether an access request is allowed or denied, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*. 
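
The explicit-wins, implicit-by-default behavior can be illustrated with a deliberately simplified model that ignores resources, conditions, and the interplay of multiple policy types: an explicit `Deny` always overrides, and without a matching `Allow` the request is implicitly denied.

```
def evaluate(statements, action):
    """Simplified IAM decision for a single action: an explicit Deny
    always wins; otherwise an Allow is required, or the request is
    implicitly denied. (Ignores resources, conditions, policy types.)"""
    matching = [
        s for s in statements
        if action in s.get("Action", []) or "*" in s.get("Action", [])
    ]
    if any(s["Effect"] == "Deny" for s in matching):
        return "explicit deny"
    if any(s["Effect"] == "Allow" for s in matching):
        return "allow"
    return "implicit deny"
```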

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

The following topics cover the most common causes of access denied errors in Amazon S3.

**Note**  
For access denied (HTTP `403 Forbidden`) errors, Amazon S3 doesn't charge the bucket owner when the request is initiated outside of the bucket owner's individual AWS account or the bucket owner's AWS organization.

**Topics**
+ [Access denied message examples and how to troubleshoot them](#access-denied-message-examples)
+ [Access denied due to Requester Pays settings](#access-denied-requester-pays)
+ [Bucket policies and IAM policies](#bucket-iam-policies)
+ [Amazon S3 ACL settings](#troubleshoot-403-acl-settings)
+ [S3 Block Public Access settings](#troubleshoot-403-bpa)
+ [Amazon S3 encryption settings](#troubleshoot-403-encryption)
+ [S3 Object Lock settings](#troubleshoot-403-object-lock)
+ [VPC endpoint policies](#troubleshoot-403-vpc)
+ [AWS Organizations policies](#troubleshoot-403-orgs)
+ [CloudFront distribution access](#troubleshoot-403-cloudfront)
+ [Access point settings](#troubleshoot-403-access-points)
+ [Additional resources](#troubleshoot-403-additional-resources)

**Note**  
If you're trying to troubleshoot a permissions issue, start with the [Access denied message examples and how to troubleshoot them](#access-denied-message-examples) section, then go to the [Bucket policies and IAM policies](#bucket-iam-policies) section. Also be sure to follow the guidance in [Tips for checking permissions](#troubleshoot-403-tips).

## Access denied message examples and how to troubleshoot them


Amazon S3 now includes additional context in access denied (HTTP `403 Forbidden`) errors for requests made to resources within the same AWS account or same organization in AWS Organizations. This new context includes the type of policy that denied access, the reason for denial, and information about the IAM user or role that requested access to the resource. 

This additional context helps you to troubleshoot access issues, identify the root cause of access denied errors, and fix incorrect access controls by updating the relevant policies. This additional context is also available in AWS CloudTrail logs. Enhanced access denied error messages for same-account or same-organization requests are now available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions. 

Most access denied error messages appear in the format `User user-arn is not authorized to perform action on "resource-arn" because context`. In this example, *`user-arn`* is the [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) of the user that doesn't receive access, *`action`* is the service action that the policy denies, and *`resource-arn`* is the ARN of the resource on which the policy acts. The *`context`* field represents additional context about the policy type that explains why the policy denied access.

When a policy explicitly denies access because the policy contains a `Deny` statement, then the access denied error message includes the phrase `with an explicit deny in a type policy`. When the policy implicitly denies access, then the access denied error message includes the phrase `because no type policy allows the action action`.
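
As an illustration of this format, a small parser can pull the pieces out of an enhanced access denied message and classify the denial as explicit or implicit. This is a sketch that assumes the message shapes shown in this section:

```
import re

_MSG = re.compile(
    r"User:\s+(?P<user_arn>\S+)\s+is not authorized to perform:\s+"
    r"(?P<action>\S+)"                                      # e.g. s3:GetObject
    r"(?:\s+on resource:\s+\"(?P<resource_arn>[^\"]+)\")?"  # resource ARN, if any
    r"\s+(?P<context>.*)",                                  # why access was denied
    re.DOTALL,
)

def parse_access_denied(message):
    """Split an enhanced access denied message into its parts and
    classify the denial as explicit or implicit."""
    m = _MSG.search(message)
    if m is None:
        return None
    info = m.groupdict()
    info["context"] = " ".join(info["context"].split())  # collapse line wraps
    info["denial"] = "explicit" if "explicit deny" in info["context"] else "implicit"
    return info
```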

**Important**  
Enhanced access denied messages are returned only for same-account requests or for requests within the same organization in AWS Organizations. Cross-account requests outside of the same organization return a generic `Access Denied` message.   
For information about the policy evaluation logic that determines whether a cross-account access request is allowed or denied, see [Cross-account policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic-cross-account.html) in the *IAM User Guide*. For a walkthrough that shows how to grant cross-account access, see [Example 2: Bucket owner granting cross-account bucket permissions](example-walkthroughs-managing-access-example2.md).  
For requests within the same organization in AWS Organizations:  
+ Enhanced access denied messages aren't returned if a denial occurs because of a virtual private cloud (VPC) endpoint policy.
+ Enhanced access denied messages are provided whenever both the bucket owner and the caller account belong to the same organization in AWS Organizations. Although buckets configured with the S3 Object Ownership **Bucket owner preferred** or **Object writer** settings might contain objects owned by different accounts, object ownership doesn't affect enhanced access denied messages. Enhanced access denied messages are returned for all object requests as long as the bucket owner and caller are in the same organization, regardless of who owns the specific object. For information about Object Ownership settings and configurations, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
+ Enhanced access denied error messages aren't returned for requests made to directory buckets. Directory bucket requests return a generic `Access Denied` message.
+ If multiple policies of the same policy type deny an authorization request, the access denied error message doesn't specify the number of policies.
+ If multiple policy types deny an authorization request, the error message includes only one of those policy types.
+ If an access request is denied due to multiple reasons, the error message includes only one of the reasons for denial.

The following examples show the format for different types of access denied error messages and how to troubleshoot each type of message.

### Access denied due to Blocked Encryption Type

To limit the server-side encryption types that you can use in your general purpose buckets, you can choose to block SSE-C write requests by updating the default encryption configuration for your buckets. This bucket-level configuration blocks requests to upload objects that specify SSE-C. When SSE-C is blocked for a bucket, any `PutObject`, `CopyObject`, `POST Object`, multipart upload, or replication request that specifies SSE-C encryption is rejected with an HTTP `403 Forbidden` (`AccessDenied`) error.

This setting is a parameter of the `PutBucketEncryption` API operation and can also be updated by using the Amazon S3 console, AWS CLI, and AWS SDKs, if you have the `s3:PutEncryptionConfiguration` permission. Valid values are `SSE-C`, which blocks SSE-C encryption for the general purpose bucket, and `NONE`, which allows the use of SSE-C for writes to the bucket.

For example, when access is denied for a `PutObject` request because the `BlockedEncryptionTypes` setting blocks write requests specifying SSE-C, you receive the following message:

```
An error occurred (AccessDenied) when calling the PutObject operation:   
User: arn:aws:iam::123456789012:user/MaryMajor  is not   
authorized to perform: s3:PutObject on resource:   
"arn:aws:s3:::amzn-s3-demo-bucket1/object-name" because this   
bucket has blocked upload requests that specify   
Server Side Encryption with Customer provided keys (SSE-C).   
Please specify a different server-side encryption type
```

For more information about this setting, see [Blocking or unblocking SSE-C for a general purpose bucket](blocking-unblocking-s3-c-encryption-gpb.md).

### Access denied due to a resource control policy – explicit denial


1. Check for a `Deny` statement for the action in your resource control policies (RCPs). For the following example, the action is `s3:GetObject`.

1. Update your RCP by removing the `Deny` statement. For more information, see [Update a resource control policy (RCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_policies_update.html#update_policy-rcp) in the *AWS Organizations User Guide*. 

```
An error occurred (AccessDenied) when calling the GetObject operation: 
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
with an explicit deny in a resource control policy
```

### Access denied due to a Service Control Policy – implicit denial


1. Check for a missing `Allow` statement for the action in your service control policies (SCPs). For the following example, the action is `s3:GetObject`.

1. Update your SCP by adding the `Allow` statement. For more information, see [Updating an SCP](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html#update_policy) in the *AWS Organizations User Guide*.

```
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform:
s3:GetObject because no service control policy allows the s3:GetObject action
```

### Access denied due to a Service Control Policy – explicit denial


1. Check for a `Deny` statement for the action in your Service Control Policies (SCPs). For the following example, the action is `s3:GetObject`.

1. Update your SCP by changing the `Deny` statement to allow the user the necessary access. For an example of how you can do this, see [Prevent IAM users and roles from making specified changes, with an exception for a specified admin role](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.html#example-scp-restricts-with-exception) in the *AWS Organizations User Guide*. For more information about updating your SCP, see [Updating an SCP](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html#update_policy) in the *AWS Organizations User Guide*.

```
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform: 
s3:GetObject with an explicit deny in a service control policy
```

### Access denied due to a VPC endpoint policy – implicit denial


1. Check for a missing `Allow` statement for the action in your virtual private cloud (VPC) endpoint policies. For the following example, the action is `s3:GetObject`.

1. Update your VPC endpoint policy by adding the `Allow` statement. For more information, see [Update a VPC endpoint policy](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#update-vpc-endpoint-policy) in the *AWS PrivateLink Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no VPC endpoint policy allows the s3:GetObject action
```

### Access denied due to a VPC endpoint policy – explicit denial


1. Check for an explicit `Deny` statement for the action in your virtual private cloud (VPC) endpoint policies. For the following example, the action is `s3:GetObject`.

1. Update your VPC endpoint policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [Example 7: Excluding certain principals from a `Deny` statement](amazon-s3-policy-keys.md#example-exclude-principal-from-deny-statement). For more information about updating your VPC endpoint policy, see [Update a VPC endpoint policy](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#update-vpc-endpoint-policy) in the *AWS PrivateLink Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in a VPC endpoint policy
```

### Access denied due to a permissions boundary – implicit denial


1. Check for a missing `Allow` statement for the action in your permissions boundary. For the following example, the action is `s3:GetObject`.

1. Update your permissions boundary by adding the `Allow` statement to your IAM policy. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
because no permissions boundary allows the s3:GetObject action
```

### Access denied due to a permissions boundary – explicit denial


1. Check for an explicit `Deny` statement for the action in your permissions boundary. For the following example, the action is `s3:GetObject`.

1. Update your permissions boundary by changing the `Deny` statement in your IAM policy to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [aws:PrincipalAccount](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalaccount) in the *IAM User Guide*. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform: 
s3:GetObject with an explicit deny in a permissions boundary
```

### Access denied due to session policies – implicit denial


1. Check for a missing `Allow` statement for the action in your session policies. For the following example, the action is `s3:GetObject`.

1. Update your session policy by adding the `Allow` statement. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no session policy allows the s3:GetObject action
```

### Access denied due to session policies – explicit denial


1. Check for an explicit `Deny` statement for the action in your session policies. For the following example, the action is `s3:GetObject`.

1. Update your session policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [Example 7: Excluding certain principals from a `Deny` statement](amazon-s3-policy-keys.md#example-exclude-principal-from-deny-statement). For more information about updating your session policy, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in a session policy
```

### Access denied due to resource-based policies – implicit denial


**Note**  
*Resource-based policies* means policies such as bucket policies and access point policies.

1. Check for a missing `Allow` statement for the action in your resource-based policy. Also check whether the `IgnorePublicAcls` S3 Block Public Access setting is applied on the bucket, access point, or account level. For the following example, the action is `s3:GetObject`.

1. Update your policy by adding the `Allow` statement. For more information, see [Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

   You might also need to adjust your `IgnorePublicAcls` block public access setting for the bucket, access point, or account. For more information, see [Access denied due to Block Public Access settings](#access-denied-bpa-examples) and [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md).

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no resource-based policy allows the s3:GetObject action
```

### Access denied due to resource-based policies – explicit denial


**Note**  
*Resource-based policies* means policies such as bucket policies and access point policies.

1. Check for an explicit `Deny` statement for the action in your resource-based policy. Also check whether the `RestrictPublicBuckets` S3 Block Public Access setting is applied on the bucket, access point, or account level. For the following example, the action is `s3:GetObject`.

1. Update your policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [Example 7: Excluding certain principals from a `Deny` statement](amazon-s3-policy-keys.md#example-exclude-principal-from-deny-statement). For more information about updating your resource-based policy, see [Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

   You might also need to adjust your `RestrictPublicBuckets` block public access setting for the bucket, access point, or account. For more information, see [Access denied due to Block Public Access settings](#access-denied-bpa-examples) and [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md).

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in a resource-based policy
```

### Access denied due to identity-based policies – implicit denial


1. Check for a missing `Allow` statement for the action in identity-based policies attached to the identity. For the following example, the action is `s3:GetObject` attached to the user `MaryMajor`.

1. Update your policy by adding the `Allow` statement. For more information, see [Identity-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_id-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no identity-based policy allows the s3:GetObject action
```

### Access denied due to identity-based policies – explicit denial


1. Check for an explicit `Deny` statement for the action in identity-based policies attached to the identity. For the following example, the action is `s3:GetObject` attached to the user `MaryMajor`.

1. Update your policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [aws:PrincipalAccount](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalaccount) in the *IAM User Guide*. For more information, see [Identity-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_id-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in an identity-based policy
```
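
As a sketch of the approach described in the preceding step, the following `Deny` statement uses the `aws:PrincipalAccount` condition key with `StringNotEquals` so that the denial no longer applies to principals in the example account `123456789012`. The account ID and bucket name are illustrative.

```json
{
  "Effect": "Deny",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
  "Condition": {
    "StringNotEquals": {
      "aws:PrincipalAccount": "123456789012"
    }
  }
}
```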

### Access denied due to Block Public Access settings


The Amazon S3 Block Public Access feature provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. For more information about how Amazon S3 defines "public," see [The meaning of "public"](access-control-block-public-access.md#access-control-block-public-access-policy-status). 

By default, new buckets, access points, and objects don't allow public access. However, users can modify bucket policies, access point policies, IAM user policies, object permissions, or access control lists (ACLs) to allow public access. S3 Block Public Access settings override these policies, permissions, and ACLs. Since April 2023, all Block Public Access settings are enabled by default for new buckets. 

When Amazon S3 receives a request to access a bucket or an object, it determines whether the bucket or the bucket owner's account has a block public access setting applied. If the request was made through an access point, Amazon S3 also checks for block public access settings for the access point. If there is an existing block public access setting that prohibits the requested access, Amazon S3 rejects the request.

Amazon S3 Block Public Access provides four settings. These settings are independent and can be used in any combination. Each setting can be applied to an access point, a bucket, or an entire AWS account. If the block public access settings for the access point, bucket, or account differ, then Amazon S3 applies the most restrictive combination of the access point, bucket, and account settings.

When Amazon S3 evaluates whether an operation is prohibited by a block public access setting, it rejects any request that violates an access point, bucket, or account setting.

The four settings provided by Amazon S3 Block Public Access are as follows: 
+ `BlockPublicAcls` – This setting applies to `PutBucketAcl`, `PutObjectAcl`, `PutObject`, `CreateBucket`, `CopyObject`, and `POST Object` requests. The `BlockPublicAcls` setting causes the following behavior: 
  + `PutBucketAcl` and `PutObjectAcl` calls fail if the specified access control list (ACL) is public.
  + `PutObject` calls fail if the request includes a public ACL.
  + If this setting is applied to an account, then `CreateBucket` calls fail with an HTTP `400` (`Bad Request`) response if the request includes a public ACL.

  For example, when access is denied for a `CopyObject` request because of the `BlockPublicAcls` setting, you receive the following message: 

  ```
  An error occurred (AccessDenied) when calling the CopyObject operation: 
  User: arn:aws:sts::123456789012:user/MaryMajor is not authorized to 
  perform: s3:CopyObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
  because public ACLs are prevented by the BlockPublicAcls setting in S3 Block Public Access.
  ```
+ `IgnorePublicAcls` – The `IgnorePublicAcls` setting causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it contains. If the only permission that grants your request access comes from a public ACL, Amazon S3 rejects the request because that ACL is ignored.

  Any denial resulting from the `IgnorePublicAcls` setting is implicit. For example, if `IgnorePublicAcls` denies a `GetObject` request because of a public ACL, you receive the following message: 

  ```
  User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
  s3:GetObject because no resource-based policy allows the s3:GetObject action
  ```
+ `BlockPublicPolicy` – This setting applies to `PutBucketPolicy` and `PutAccessPointPolicy` requests. 

  Setting `BlockPublicPolicy` for a bucket causes Amazon S3 to reject calls to `PutBucketPolicy` if the specified bucket policy allows public access. This setting also causes Amazon S3 to reject calls to `PutAccessPointPolicy` for all of the bucket's same-account access points if the specified policy allows public access.

  Setting `BlockPublicPolicy` for an access point causes Amazon S3 to reject calls to `PutAccessPointPolicy` and `PutBucketPolicy` that are made through the access point if the specified policy (for either the access point or the underlying bucket) allows public access.

  For example, when access is denied on a `PutBucketPolicy` request because of the `BlockPublicPolicy` setting, you receive the following message: 

  ```
  An error occurred (AccessDenied) when calling the PutBucketPolicy operation: 
  User: arn:aws:sts::123456789012:user/MaryMajor is not authorized to 
  perform: s3:PutBucketPolicy on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
  because public policies are prevented by the BlockPublicPolicy setting in S3 Block Public Access.
  ```
+ `RestrictPublicBuckets` – The `RestrictPublicBuckets` setting restricts access to an access point or bucket with a public policy to only AWS service principals and authorized users within the bucket owner's account and the access point owner's account. This setting blocks all cross-account access to the access point or bucket (except by AWS service principals), while still allowing users within the account to manage the access point or bucket. This setting also rejects all anonymous (or unsigned) calls.

  Any denial resulting from the `RestrictPublicBuckets` setting is explicit. For example, if `RestrictPublicBuckets` denies a `GetObject` request because of a public bucket or access point policy, you receive the following message: 

  ```
  User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
  s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
  an explicit deny in a resource-based policy
  ```

For more information about these settings, see [Block public access settings](access-control-block-public-access.md#access-control-block-public-access-options). To review and update these settings, see [Configuring block public access](access-control-block-public-access.md#configuring-block-public-access).
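
The "most restrictive combination" evaluation described earlier in this section can be sketched as follows. This is an illustrative model only, not an AWS API: each scope's settings are represented as a dictionary of the four flags, and a flag is in effect if any applicable scope enables it.

```python
# Illustrative model of S3 Block Public Access evaluation (not an AWS API).
BPA_FLAGS = ("BlockPublicAcls", "IgnorePublicAcls",
             "BlockPublicPolicy", "RestrictPublicBuckets")

def effective_bpa(account, bucket, access_point=None):
    """Return the effective Block Public Access flags.

    A flag is in effect if ANY applicable scope (account, bucket, and,
    for requests made through an access point, the access point)
    enables it -- the "most restrictive combination".
    """
    scopes = [account, bucket]
    if access_point is not None:
        scopes.append(access_point)
    return {flag: any(scope.get(flag, False) for scope in scopes)
            for flag in BPA_FLAGS}

# One flag enabled at the account level, another on the bucket:
# both are effective for requests against that bucket.
account = {"BlockPublicAcls": True}
bucket = {"RestrictPublicBuckets": True}
print(effective_bpa(account, bucket))
```

In this sketch, disabling a flag on the bucket cannot loosen an account-level setting, which matches the behavior described above: Amazon S3 rejects any request that violates an access point, bucket, or account setting.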

## Access denied due to Requester Pays settings


If the Amazon S3 bucket that you're trying to access has the Requester Pays feature enabled, make sure that you pass the correct request parameters in your requests to that bucket. The Requester Pays feature in Amazon S3 allows the requester, instead of the bucket owner, to pay the data transfer and request costs for accessing objects in the bucket. When Requester Pays is enabled for a bucket, the bucket owner isn't charged for requests made by other AWS accounts.

If you make a request to a Requester Pays-enabled bucket without passing the necessary parameters, you will receive an Access Denied (403 Forbidden) error. To access objects in a Requester Pays-enabled bucket, you must do the following: 

1. When making requests using the AWS CLI, you must include the `--request-payer requester` parameter. For example, to copy an object with the key `object.txt` located in the `s3://amzn-s3-demo-bucket/` S3 bucket to a location on your local machine, you must also pass the parameter `--request-payer requester` if this bucket has Requester Pays enabled. 

   ```
   aws s3 cp s3://amzn-s3-demo-bucket/object.txt /local/path \
   --request-payer requester
   ```

1. When making programmatic requests using an AWS SDK, set the `x-amz-request-payer` header to the value `requester`. For an example, see [Downloading objects from Requester Pays buckets](ObjectsinRequesterPaysBuckets.md).

1. Make sure that the IAM user or role making the request has the necessary permissions to access the Requester Pays bucket, such as the `s3:GetObject` and `s3:ListBucket` permissions.

By including the `--request-payer requester` parameter or setting the `x-amz-request-payer` header, you are informing Amazon S3 that you, the requester, will pay the costs associated with accessing the objects in the Requester Pays-enabled bucket. This will prevent the Access Denied (403 Forbidden) error.

## Bucket policies and IAM policies


### Bucket-level operations


If there is no bucket policy in place, then the bucket implicitly allows requests from any AWS Identity and Access Management (IAM) identity in the bucket owner's account. The bucket also implicitly denies requests from IAM identities in any other account and all anonymous (unsigned) requests. However, if there is no IAM user policy in place, the requester (unless they're the AWS account root user) is implicitly denied from making any requests. For more information about this evaluation logic, see [Determining whether a request is denied or allowed within an account](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow) in the *IAM User Guide*.

### Object-level operations


If the object is owned by the bucket-owning account, the bucket policy and IAM user policy function in the same way for object-level operations as they do for bucket-level operations. For example, if there is no bucket policy in place, then the bucket implicitly allows object requests from any IAM identity in the bucket owner's account. The bucket also implicitly denies object requests from IAM identities in any other account and all anonymous (unsigned) requests. However, if there is no IAM user policy in place, the requester (unless they're the AWS account root user) is implicitly denied from making any object requests.

If the object is owned by an external account, then access to the object can be granted only through object access control lists (ACLs). The bucket policy and IAM user policy can still be used to deny object requests. 

Therefore, to ensure that your bucket policy or IAM user policy isn't causing an Access Denied (403 Forbidden) error, make sure that the following requirements are met:
+ For same-account access, there must not be an explicit `Deny` statement against the requester you are trying to grant permissions to, in either the bucket policy or the IAM user policy. If you want to grant permissions by using only the bucket policy and the IAM user policy, there must be at least one explicit `Allow` statement in one of these policies.
+ For cross-account access, there must not be an explicit `Deny` statement against the requester that you're trying to grant permissions to, in either the bucket policy or the IAM user policy. To grant cross-account permissions by using only the bucket policy and IAM user policy, make sure that both the bucket policy and the IAM user policy of the requester include an explicit `Allow` statement.

**Note**  
`Allow` statements in a bucket policy apply only to objects that are [owned by the same bucket-owning account](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html). However, `Deny` statements in a bucket policy apply to all objects regardless of object ownership. 
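
For the cross-account case, a bucket policy statement similar to the following sketch provides the explicit `Allow` on the bucket side; the requester's IAM user policy must contain a matching `Allow` for the same action and resource. The account ID, user name, and bucket name are illustrative.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountGetObject",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:user/MaryMajor"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
    }
  ]
}
```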

**To review or edit your bucket policy**
**Note**  
To view or edit a bucket policy, you must have the `s3:GetBucketPolicy` permission.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the name of the bucket that you want to view or edit a bucket policy for.

1. Choose the **Permissions** tab.

1. Under **Bucket policy**, choose **Edit**. The **Edit bucket policy** page appears.

To review or edit your bucket policy by using the AWS Command Line Interface (AWS CLI), use the [get-bucket-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html) command.

**Note**  
If you get locked out of a bucket because of an incorrect bucket policy, [sign in to the AWS Management Console by using your AWS account root user credentials](https://docs.aws.amazon.com/signin/latest/userguide/introduction-to-root-user-sign-in-tutorial.html). To regain access to your bucket, delete the incorrect bucket policy by using your AWS account root user credentials.

### Tips for checking permissions


To check whether the requester has proper permissions to perform an Amazon S3 operation, try the following:
+ Identify the requester. If it’s an unsigned request, then it's an anonymous request without an IAM user policy. If it’s a request that uses a presigned URL, then the user policy is the same as the one for the IAM user or role that signed the request.
+ Verify that you're using the correct IAM user or role. You can verify your IAM user or role by checking the upper-right corner of the AWS Management Console or by using the [aws sts get-caller-identity](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) command.
+ Check the IAM policies that are related to the IAM user or role. You can use one of the following methods:
  + [Test IAM policies with the IAM policy simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html).
  + Review the different [IAM policy types](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html).
+ If needed, [edit your IAM user policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html).
+ Review the following examples of policies that explicitly deny or allow access:
  + Explicit allow IAM user policy: [IAM: Allows and denies access to multiple services programmatically and in the console](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_iam_multiple-services-console.html)
  + Explicit allow bucket policy: [Granting permissions to multiple accounts to upload objects or set object ACLs for public access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-acl-1)
  + Explicit deny IAM user policy: [AWS: Denies access to AWS based on the requested AWS Region](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html)
  + Explicit deny bucket policy: [Require SSE-KMS for all objects written to a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-encryption-1)

## Amazon S3 ACL settings


When checking your ACL settings, first [review your Object Ownership setting](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-retrieving.html) to check whether ACLs are enabled on the bucket. Be aware that ACL permissions can be used only to grant permissions and can't be used to reject requests. ACLs also can't be used to grant access to requesters that are rejected by explicit denials in bucket policies or IAM user policies.

### The Object Ownership setting is set to Bucket owner enforced


If the **Bucket owner enforced** setting is enabled, then ACL settings are unlikely to cause an Access Denied (403 Forbidden) error because this setting disables all ACLs that apply to the bucket and the objects in it. **Bucket owner enforced** is the default (and recommended) setting for Amazon S3 buckets.

### The Object Ownership setting is set to Bucket owner preferred or Object writer


ACL permissions are still valid with the **Bucket owner preferred** setting or the **Object writer** setting. There are two kinds of ACLs: bucket ACLs and object ACLs. For the differences between these two types of ACLs, see [Mapping of ACL permissions and access policy permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#acl-access-policy-permission-mapping).

Depending on the action of the rejected request, [check the ACL permissions for your bucket or the object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html):
+ If Amazon S3 rejected a `LIST`, `PUT` object, `GetBucketAcl`, or `PutBucketAcl` request, then [review the ACL permissions for your bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html).
**Note**  
You can't grant `GET` object permissions with bucket ACL settings.
+ If Amazon S3 rejected a `GET` request on an S3 object, or a [PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) request, then [review the ACL permissions for the object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html).
**Important**  
If the account that owns the object is different from the account that owns the bucket, then access to the object isn't controlled by the bucket policy.

### Troubleshooting an Access Denied (403 Forbidden) error from a `GET` object request during cross-account object ownership


Review the bucket's [Object Ownership settings](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html#object-ownership-overview) to determine the object owner. If you have access to the [object ACLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html), then you can also check the object owner's account. (To view the object owner's account, review the object ACL setting in the Amazon S3 console.) Alternatively, you can make a `GetObjectAcl` request to find the object owner's [canonical ID](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html) and verify the object owner's account. By default, ACLs grant explicit allow permissions for `GET` requests to the object owner's account.

After you've confirmed that the object owner is different from the bucket owner, choose one of the following methods, depending on your use case and access level, to help address the Access Denied (403 Forbidden) error:
+ **Disable ACLs (recommended)** – This method will apply to all objects and can be performed by the bucket owner. This method automatically gives the bucket owner ownership and full control over every object in the bucket. Before you implement this method, check the [prerequisites for disabling ACLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-migrating-acls-prerequisites.html). For information about how to set your bucket to **Bucket owner enforced** (recommended) mode, see [Setting Object Ownership on an existing bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-existing-bucket.html).
**Important**  
To prevent an Access Denied (403 Forbidden) error, be sure to migrate the ACL permissions to a bucket policy before you disable ACLs. For more information, see [Bucket policy examples for migrating from ACL permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-migrating-acls-prerequisites.html#migrate-acl-permissions-bucket-policies).
+ **Change the object owner to the bucket owner** – This method can be applied to individual objects, but only the object owner (or a user with the appropriate permissions) can change an object's ownership. Additional `PUT` costs might apply. (For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).) This method grants the bucket owner full ownership of the object, allowing the bucket owner to control access to the object through a bucket policy. 

  To change the object's ownership, do one of the following:
  + You (the bucket owner) can [copy the object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/copy-object.html#CopyingObjectsExamples) back to the bucket. 
  + You can change the Object Ownership setting of the bucket to **Bucket owner preferred**. If versioning is disabled, the objects in the bucket are overwritten. If versioning is enabled, duplicate versions of the same object appear in the bucket; the bucket owner can [set a lifecycle rule to expire](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html) the noncurrent versions. For instructions on how to change your Object Ownership setting, see [Setting Object Ownership on an existing bucket](object-ownership-existing-bucket.md).
**Note**  
When you update your Object Ownership setting to **Bucket owner preferred**, the setting is applied only to new objects that are uploaded to the bucket.
  + You can have the object owner upload the object again with the `bucket-owner-full-control` canned object ACL. 
**Note**  
For cross-account uploads, you can also require the `bucket-owner-full-control` canned object ACL in your bucket policy. For an example bucket policy, see [Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-acl-2).
+ **Keep the object writer as the object owner** – This method doesn't change the object owner, but it does allow you to grant access to objects individually. To grant access to an object, you must have the `PutObjectAcl` permission for the object. Then, to fix the Access Denied (403 Forbidden) error, add the requester as a [grantee](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#specifying-grantee) to access the object in the object's ACLs. For more information, see [Configuring ACLs](managing-acls.md).
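
For the cross-account upload option above, a bucket policy statement similar to the following sketch denies `PutObject` requests from the external account unless they include the `bucket-owner-full-control` canned ACL. The account ID and bucket name are illustrative, and this pattern applies only while ACLs are enabled on the bucket.

```json
{
  "Effect": "Deny",
  "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control"
    }
  }
}
```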

## S3 Block Public Access settings


If the failed request involves public access or public policies, then check the S3 Block Public Access settings on your account, bucket, or access point. For more information about troubleshooting access denied errors related to S3 Block Public Access settings, see [Access denied due to Block Public Access settings](#access-denied-bpa-examples).

## Amazon S3 encryption settings


Amazon S3 supports server-side encryption on your bucket. Server-side encryption is the encryption of data at its destination by the application or service that receives it. Amazon S3 encrypts your data at the object level as it writes it to disks in AWS data centers and decrypts it for you when you access it. 

By default, Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Amazon S3 also allows you to specify the server-side encryption method when uploading objects.

**To review your bucket's server-side encryption status and encryption settings**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the bucket that you want to check the encryption settings for.

1. Choose the **Properties** tab.

1. Scroll down to the **Default encryption** section and view the **Encryption type** settings.

To check your encryption settings by using the AWS CLI, use the [get-bucket-encryption](https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-encryption.html) command.

**To check the encryption status of an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the name of the bucket that contains the object.

1. From the **Objects** list, choose the name of the object that you want to add or change encryption for. 

   The object's details page appears.

1. Scroll down to the **Server-side encryption settings** section to view the object's server-side encryption settings.

To check your object encryption status by using the AWS CLI, use the [head-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/head-object.html#examples) command.

### Encryption and permissions requirements


Amazon S3 supports three types of server-side encryption:
+ Server-side encryption with Amazon S3 managed keys (SSE-S3)
+ Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
+ Server-side encryption with customer-provided keys (SSE-C)

Based on your encryption settings, make sure that the following permissions requirements are met:
+ **SSE-S3** – No extra permissions are required.
+ **SSE-KMS (with a customer managed key)** – To upload objects, the `kms:GenerateDataKey` permission on the AWS KMS key is required. To download objects and perform multipart uploads of objects, the `kms:Decrypt` permission on the KMS key is required.
+ **SSE-KMS (with an AWS managed key)** – The requester must be from the same account that owns the `aws/s3` KMS key. The requester must also have the correct Amazon S3 permissions to access the object.
+ **SSE-C (with a customer-provided key)** – No additional permissions are required. You can configure the bucket policy to [require and restrict server-side encryption with customer-provided encryption keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html#ssec-require-condition-key) for objects in your bucket.

If the object is encrypted with a customer managed key, make sure that the KMS key policy allows you to perform the `kms:GenerateDataKey` or `kms:Decrypt` actions. For instructions on checking your KMS key policy, see [Viewing a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-viewing.html) in the *AWS Key Management Service Developer Guide*.
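
As a sketch, a KMS key policy statement that grants a user both permissions might look like the following. The account ID and user name are illustrative; in a key policy, `"Resource": "*"` refers to the key that the policy is attached to.

```json
{
  "Sid": "AllowS3EncryptAndDecrypt",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::123456789012:user/MaryMajor"},
  "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
  "Resource": "*"
}
```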

## S3 Object Lock settings


If your bucket has [S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html) enabled and the object is protected by a [retention period](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-retention-periods) or [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-legal-holds) and you try to delete an object, Amazon S3 returns one of the following responses, depending on how you tried to delete the object:
+ **Permanent `DELETE` request** – If you issued a permanent `DELETE` request (a request that specifies a version ID), Amazon S3 returns an Access Denied (`403 Forbidden`) error when you try to delete the object.
+ **Simple `DELETE` request** – If you issued a simple `DELETE` request (a request that doesn't specify a version ID), Amazon S3 returns a `200 OK` response and inserts a [delete marker](DeleteMarker.md) in the bucket. That delete marker becomes the current version of the object and has a new version ID.

**To check whether the bucket has Object Lock enabled**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the name of the bucket that you want to review.

1. Choose the **Properties** tab.

1. Scroll down to the **Object Lock** section. Verify whether the **Object Lock** setting is **Enabled** or **Disabled**.

To determine whether the object is protected by a retention period or legal hold, [view the lock information](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html#object-lock-managing-view) for your object. 

If the object is protected by a retention period or legal hold, check the following:
+ If the object version is protected by the compliance retention mode, there is no way to permanently delete it. A permanent `DELETE` request from any requester, including the AWS account root user, will result in an Access Denied (403 Forbidden) error. Also, be aware that when you submit a `DELETE` request for an object that's protected by the compliance retention mode, Amazon S3 creates a [delete marker](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html) for the object.
+ If the object version is protected with governance retention mode and you have the `s3:BypassGovernanceRetention` permission, you can bypass the protection and permanently delete the version. For more information, see [Bypassing governance mode](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html#object-lock-managing-bypass).
+ If the object version is protected by a legal hold, then a permanent `DELETE` request can result in an Access Denied (403 Forbidden) error. To permanently delete the object version, you must remove the legal hold on the object version. To remove a legal hold, you must have the `s3:PutObjectLegalHold` permission. For more information about removing a legal hold, see [Configuring S3 Object Lock](object-lock-configure.md).

## VPC endpoint policies


If you're accessing Amazon S3 by using a virtual private cloud (VPC) endpoint, make sure that the VPC endpoint policy isn't blocking you from accessing your Amazon S3 resources. By default, the VPC endpoint policy allows all requests to Amazon S3. You can also configure the VPC endpoint policy to restrict certain requests. For information about how to check your VPC endpoint policy, see the following resources: 
+ [Access denied due to a VPC endpoint policy – implicit denial](#access-denied-vpc-endpoint-examples-implicit)
+ [Access denied due to a VPC endpoint policy – explicit denial](#access-denied-vpc-endpoint-examples-explicit)
+ [Control access to VPC endpoints by using endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *AWS PrivateLink Guide*
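
As an illustration of how a VPC endpoint policy can cause these errors, the following sketch allows access through the endpoint only to one bucket; requests through the endpoint to any other bucket are implicitly denied. The bucket name is illustrative.

```json
{
  "Statement": [
    {
      "Sid": "AllowOnlySpecificBucket",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket1",
        "arn:aws:s3:::amzn-s3-demo-bucket1/*"
      ]
    }
  ]
}
```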

## AWS Organizations policies


If your AWS account belongs to an organization, AWS Organizations policies can block you from accessing Amazon S3 resources. By default, AWS Organizations policies don't block any requests to Amazon S3. However, make sure that your AWS Organizations policies haven't been configured to block access to S3 buckets. For instructions on how to check your AWS Organizations policies, see the following resources: 
+ [Access denied due to a Service Control Policy – implicit denial](#access-denied-scp-examples-implicit)
+ [Access denied due to a Service Control Policy – explicit denial](#access-denied-scp-examples-explicit)
+ [Access denied due to a resource control policy – explicit denial](#access-denied-rcp-examples-explicit)
+ [Listing all policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_info-operations.html#list-all-pols-in-org) in the *AWS Organizations User Guide*

Additionally, if you incorrectly configured your bucket policy for a member account to deny all users access to your S3 bucket, you can unlock the bucket by launching a privileged session for the member account in IAM. After you launch a privileged session, you can delete the misconfigured bucket policy to regain access to the bucket. For more information, see [Perform a privileged task on an AWS Organizations member account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user-privileged-task.html) in the *AWS Identity and Access Management User Guide*. 

## CloudFront distribution access


If you receive an Access Denied (403 Forbidden) error when trying to access your S3 static website through CloudFront, check these common issues:
+ **Do you have the correct origin domain name format?**
  + Make sure you're using the S3 website endpoint format (bucket-name.s3-website-region.amazonaws.com) rather than the REST API endpoint
  + Verify that static website hosting is enabled on your bucket
+ **Does your bucket policy allow CloudFront access?**
  + Ensure your bucket policy includes permissions for your CloudFront distribution's Origin Access Identity (OAI) or Origin Access Control (OAC)
  + Verify that the policy includes the required `s3:GetObject` permission
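
If your distribution uses Origin Access Control with the bucket's REST API endpoint (rather than the website endpoint), one common bucket policy shape grants `s3:GetObject` to the CloudFront service principal, scoped to your distribution. The bucket name, account ID, and distribution ID in the following sketch are placeholders:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
                }
            }
        }
    ]
}
```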

For additional troubleshooting steps and configurations, including error page setups and protocol settings, see [Why do I get a "403 access denied" error when I use an Amazon S3 website endpoint as the origin of my CloudFront distribution?](https://repost.aws/knowledge-center/s3-website-cloudfront-error-403) in the AWS re:Post Knowledge Center.

**Note**  
This error is different from 403 errors you might receive when accessing S3 directly. For CloudFront-specific issues, make sure to check both your CloudFront distribution settings and S3 configurations.

## Access point settings


If you receive an Access Denied (403 Forbidden) error while making requests through Amazon S3 access points, you might need to check the following: 
+ The configurations for your access points
+ The IAM user policy that's used for your access points
+ The bucket policy that's used to manage or configure your cross-account access points

**Access point configurations and policies**
+ When you create an access point, you can choose to designate **Internet** or **VPC** as the network origin. If the network origin is set to **VPC**, Amazon S3 rejects any requests made to the access point that don't originate from the specified VPC. To check the network origin of your access point, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).
+ With access points, you can also configure custom Block Public Access settings, which work similarly to the Block Public Access settings at the bucket or account level. To check your custom Block Public Access settings, see [Managing public access to access points for general purpose buckets](access-points-bpa-settings.md).
+ To make successful requests to Amazon S3 by using access points, make sure that the requester has the necessary IAM permissions. For more information, see [Configuring IAM policies for using access points](access-points-policies.md).
+ If the request involves cross-account access points, make sure that the bucket owner has updated the bucket policy to authorize requests from the access point. For more information, see [Granting permissions for cross-account access points](access-points-policies.md#access-points-cross-account).
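
For shared-bucket and cross-account scenarios, one common pattern is for the bucket owner to delegate access control to access points by allowing requests made through any access point owned by a trusted account. The following sketch uses placeholder bucket names and a placeholder account ID:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateAccessControlToAccessPoints",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1",
                "arn:aws:s3:::amzn-s3-demo-bucket1/*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:DataAccessPointAccount": "111122223333"
                }
            }
        }
    ]
}
```

With this delegation in place, the effective permissions for requests made through an access point are governed by the access point policy.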

If the Access Denied (403 Forbidden) error still persists after checking all the items in this topic, [retrieve your Amazon S3 request ID](https://docs.aws.amazon.com/AmazonS3/latest/userguide/get-request-ids.html) and contact AWS Support for additional guidance.

## Additional resources


For more guidance on Access Denied (403 Forbidden) errors, see the following resources:
+ [How do I troubleshoot 403 Access Denied errors from Amazon S3?](https://repost.aws/knowledge-center/s3-troubleshoot-403) in the AWS re:Post Knowledge Center.
+ [Why do I get a 403 Forbidden error when I try to access an Amazon S3 bucket or object?](https://repost.aws/knowledge-center/s3-403-forbidden-error) in the AWS re:Post Knowledge Center.
+ [Why do I get an Access Denied error when I try to access an Amazon S3 resource in the same AWS account?](https://repost.aws/knowledge-center/s3-troubleshoot-403-resource-same-account) in the AWS re:Post Knowledge Center.
+ [Why do I get an Access Denied error when I try to access an Amazon S3 bucket with public read access?](https://repost.aws/knowledge-center/s3-troubleshoot-403-public-read) in the AWS re:Post Knowledge Center.
+ [Why do I get a "signature mismatch" error when I try to use a presigned URL to upload an object to Amazon S3?](https://repost.aws/knowledge-center/s3-presigned-url-signature-mismatch) in the AWS re:Post Knowledge Center.
+ [Why do I get an Access Denied error for ListObjectsV2 when I run the sync command on my Amazon S3 bucket?](https://repost.aws/knowledge-center/s3-access-denied-listobjects-sync) in the AWS re:Post Knowledge Center.
+ [Why do I get a "403 access denied" error when I use an Amazon S3 website endpoint as the origin of my CloudFront distribution?](https://repost.aws/knowledge-center/s3-website-cloudfront-error-403) in the AWS re:Post Knowledge Center.

# AWS managed policies for Amazon S3
AWS managed policies

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.

Keep in mind that AWS managed policies might not grant least-privilege permissions for your specific use cases because they're available for all AWS customers to use. We recommend that you reduce permissions further by defining [customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) that are specific to your use cases.

You cannot change the permissions defined in AWS managed policies. If AWS updates the permissions defined in an AWS managed policy, the update affects all principal identities (users, groups, and roles) that the policy is attached to. AWS is most likely to update an AWS managed policy when a new AWS service is launched or new API operations become available for existing services.

For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

## AWS managed policy: AmazonS3FullAccess
AmazonS3FullAccess

You can attach the `AmazonS3FullAccess` policy to your IAM identities. This policy grants permissions that allow full access to Amazon S3.

To view the permissions for this policy, see [https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor) in the AWS Management Console.

## AWS managed policy: AmazonS3ReadOnlyAccess
AmazonS3ReadOnlyAccess

You can attach the `AmazonS3ReadOnlyAccess` policy to your IAM identities. This policy grants permissions that allow read-only access to Amazon S3.

To view the permissions for this policy, see [https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess$jsonEditor](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess$jsonEditor) in the AWS Management Console.

## AWS managed policy: AmazonS3ObjectLambdaExecutionRolePolicy
AmazonS3ObjectLambdaExecutionRolePolicy

This policy provides AWS Lambda functions with the required permissions to send data to S3 Object Lambda when requests are made to an S3 Object Lambda access point. It also grants Lambda permissions to write to Amazon CloudWatch Logs.

To view the permissions for this policy, see [https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/service-role/AmazonS3ObjectLambdaExecutionRolePolicy$jsonEditor](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/service-role/AmazonS3ObjectLambdaExecutionRolePolicy$jsonEditor) in the AWS Management Console.

## AWS managed policy: S3UnlockBucketPolicy
S3UnlockBucketPolicy

If you incorrectly configured your bucket policy for a member account to deny all users access to your S3 bucket, you can use this AWS managed policy (`S3UnlockBucketPolicy`) to unlock the bucket. For more information on how to remove a misconfigured bucket policy that denies all principals from accessing an Amazon S3 bucket, see [Perform a privileged task on an AWS Organizations member account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user-privileged-task.html) in the *AWS Identity and Access Management User Guide*. 

## Amazon S3 updates to AWS managed policies
Policy updates

View details about updates to AWS managed policies for Amazon S3 since this service began tracking these changes.


| Change | Description | Date | 
| --- | --- | --- | 
|  Amazon S3 added `S3UnlockBucketPolicy`  |  Amazon S3 added a new AWS-managed policy called `S3UnlockBucketPolicy` to unlock a bucket and remove a misconfigured bucket policy that denies all principals from accessing an Amazon S3 bucket.  | November 1, 2024 | 
|  Amazon S3 added Describe permissions to `AmazonS3ReadOnlyAccess`  |  Amazon S3 added `s3:Describe*` permissions to `AmazonS3ReadOnlyAccess`.  | August 11, 2023 | 
|  Amazon S3 added S3 Object Lambda permissions to `AmazonS3FullAccess` and `AmazonS3ReadOnlyAccess`  |  Amazon S3 updated the `AmazonS3FullAccess` and `AmazonS3ReadOnlyAccess` policies to include permissions for S3 Object Lambda.  | September 27, 2021 | 
|  Amazon S3 added `AmazonS3ObjectLambdaExecutionRolePolicy`  |  Amazon S3 added a new AWS-managed policy called `AmazonS3ObjectLambdaExecutionRolePolicy` that provides Lambda functions permissions to interact with S3 Object Lambda and write to CloudWatch logs.  | August 18, 2021 | 
|  Amazon S3 started tracking changes  |  Amazon S3 started tracking changes for its AWS managed policies.  | August 18, 2021 | 

# Managing access to shared datasets with access points
Working with access points

Amazon S3 access points simplify data access for any AWS service or customer application that stores data in S3. Access points are named network endpoints that are attached to a data source such as a bucket, Amazon FSx for NetApp ONTAP volume, or Amazon FSx for OpenZFS volume. For information about working with buckets, see [General purpose buckets overview](UsingBucket.md). For information about working with FSx for NetApp ONTAP, see [What is Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/what-is-fsx-ontap.html) in the *FSx for ONTAP User Guide*. For information about working with FSx for OpenZFS, see [What is Amazon FSx for OpenZFS](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/what-is-fsx.html) in the *FSx for OpenZFS User Guide*.

You can use access points to perform S3 object operations, such as `GetObject` and `PutObject`. Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point. Each access point enforces a customized access point policy that allows you to control use by resource, user, or other conditions. If your access point is attached to a bucket, the access point policy works in conjunction with the underlying bucket policy. You can configure any access point to accept requests only from a virtual private cloud (VPC) to restrict Amazon S3 data access to a private network. You can also configure custom Block Public Access settings for each access point.

**Note**  
You can only use access points to perform operations on objects. You can't use access points to perform other Amazon S3 operations, such as deleting buckets or creating S3 Replication configurations. For a complete list of S3 operations that support access points, see [Access point compatibility](access-points-service-api-support.md).

The topics in this section explain how to work with Amazon S3 access points. For topics on using access points with directory buckets, see [Managing access to shared datasets in directory buckets with access points](access-points-directory-buckets.md).

**Topics**
+ [Access points naming rules, restrictions, and limitations](access-points-restrictions-limitations-naming-rules.md)
+ [Referencing access points with ARNs, access point aliases, or virtual-hosted–style URIs](access-points-naming.md)
+ [Access point compatibility](access-points-service-api-support.md)
+ [Configuring IAM policies for using access points](access-points-policies.md)
+ [Monitoring and logging access points](access-points-monitoring-logging.md)
+ [Creating an access point](creating-access-points.md)
+ [Managing your Amazon S3 access points for general purpose buckets](access-points-manage.md)
+ [Using Amazon S3 access points for general purpose buckets](using-access-points.md)
+ [Using tags with S3 Access Points for general purpose buckets](access-points-tagging.md)

# Access points naming rules, restrictions, and limitations
Naming rules, restrictions, and limitations

Access points are named network endpoints, attached to a bucket or a volume on an Amazon FSx file system, that simplify managing data. When you create an access point, you choose a name and the AWS Region to create it in. The following topics provide information about access point naming rules, restrictions, and limitations.

**Topics**
+ [Naming rules for access points](#access-points-names)
+ [Restrictions and limitations for access points](#access-points-restrictions-limitations)
+ [Restrictions and limitations for access points attached to a volume on an Amazon FSx file system](#access-points-restrictions-limitations-fsx)

## Naming rules for access points


When you create an access point, you choose its name and the AWS Region to create it in. Unlike general purpose bucket names, access point names don't need to be unique across AWS accounts or AWS Regions. The same AWS account can create access points with the same name in different AWS Regions, and two different AWS accounts can use the same access point name. However, within a single AWS Region, an AWS account can't have two identically named access points.

**Note**  
If you choose to publicize your access point name, avoid including sensitive information in the access point name. Access point names are published in a publicly accessible database known as the Domain Name System (DNS).

Access point names must be DNS-compliant and must meet the following conditions:
+ Must be unique within a single AWS account and AWS Region
+ Must begin with a number or lowercase letter
+ Must be between 3 and 50 characters long
+ Can't begin or end with a hyphen (`-`)
+ Can't contain underscores (`_`), uppercase letters, spaces, or periods (`.`)
+ Can't end with the suffix `-s3alias` or `-ext-s3alias`. These suffixes are reserved for access point alias names. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).
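
The syntactic rules above can be checked locally before you call the service. The following Python sketch is illustrative only; it can't verify uniqueness within your account and Region, and `is_valid_access_point_name` is a hypothetical helper, not part of any AWS SDK:

```python
import re

# Suffixes reserved for automatically generated access point aliases.
RESERVED_SUFFIXES = ("-s3alias", "-ext-s3alias")

def is_valid_access_point_name(name: str) -> bool:
    """Check a candidate access point name against the documented
    syntactic rules (uniqueness can't be checked locally)."""
    # Must be between 3 and 50 characters long.
    if not 3 <= len(name) <= 50:
        return False
    # Only lowercase letters, digits, and hyphens; must begin with a
    # number or lowercase letter; can't begin or end with a hyphen.
    if not re.fullmatch(r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?", name):
        return False
    # Can't end with a reserved alias suffix.
    return not name.endswith(RESERVED_SUFFIXES)

print(is_valid_access_point_name("finance-docs"))   # True
print(is_valid_access_point_name("Finance.Docs"))   # False: uppercase and period
print(is_valid_access_point_name("my-ap-s3alias"))  # False: reserved suffix
```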

## Restrictions and limitations for access points


Amazon S3 access points have the following restrictions and limitations:
+ Each access point is associated with exactly one bucket or FSx for OpenZFS volume. You must specify this when you create the access point. After you create an access point, you can't associate it with a different bucket or FSx for OpenZFS volume. However, you can delete an access point, and then create another one with the same name.
+ After you create an access point, you can't change its virtual private cloud (VPC) configuration.
+ Access point policies are limited to 20 KB in size.
+ You can create a maximum of 10,000 access points per AWS account per AWS Region. If you need more than 10,000 access points for a single account in a single Region, you can request a service quota increase. For more information about service quotas and requesting an increase, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *AWS General Reference*.
+ You can't use an access point as a destination for S3 Replication. For more information about replication, see [Replicating objects within and across Regions](replication.md).
+ You can't use S3 access point aliases as the source or destination for **Move** operations in the Amazon S3 console.
+ You can address access points only by using virtual-host-style URLs. For more information about virtual-host-style addressing, see [Accessing an Amazon S3 general purpose bucket](access-bucket-intro.md).
+ API operations that control access point functionality (for example, `PutAccessPoint` and `GetAccessPointPolicy`) don't support cross-account calls.
+ You must use AWS Signature Version 4 when making requests to an access point by using the REST APIs. For more information about authenticating requests, see [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) in the *Amazon Simple Storage Service API Reference*.
+ Access points support requests only over HTTPS. Amazon S3 automatically responds with an HTTP redirect to any request made over HTTP, to upgrade the request to HTTPS.
+ Access points don't support anonymous access.
+ After you create an access point, you can't change its block public access settings.
+ Cross-account access points don’t grant you access to data until you are granted permissions from the bucket owner. The bucket owner always retains ultimate control over access to the data and must update the bucket policy to authorize requests from the cross-account access point. To view a bucket policy example, see [Configuring IAM policies for using access points](access-points-policies.md).
+ In AWS Regions where you have more than 1,000 access points, you can't search for an access point by name in the Amazon S3 console.
+ When you're viewing a cross-account access point in the Amazon S3 console, the **Access** column displays **Unknown**. The Amazon S3 console can't determine if public access is granted for the associated bucket and objects. Unless you require a public configuration for a specific use case, we recommend that you and the bucket owner block all public access to the access point and the bucket. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

## Restrictions and limitations for access points attached to a volume on an Amazon FSx file system


The following are specific limitations when using access points attached to a volume on an Amazon FSx file system:
+ When creating an access point, you can attach it only to a volume on an Amazon FSx file system that you own. You can't attach an access point to a volume owned by another AWS account.
+ You can't use the `CreateAccessPoint` API when creating and attaching an access point to a volume on an Amazon FSx file system. You must use the [https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateAndAttachS3AccessPoint.html](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateAndAttachS3AccessPoint.html) API.
+ You can't turn off any Block Public Access settings when creating or using an access point attached to a volume on an Amazon FSx file system.
+ You can't list objects or use **Copy** or **Move** operations in the S3 console with access points attached to a volume on an Amazon FSx file system.
+ `CopyObject` is supported for access points attached to an FSx for NetApp ONTAP or FSx for OpenZFS volume only if the source and destination are the same access point. For more information about access point compatibility, see [Access point compatibility](access-points-service-api-support.md).
+ Multipart uploads are limited to 5 GB.
+ FSx for OpenZFS deployment type and storage class support varies by AWS Region. For more information, see [Availability by AWS Region](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/available-aws-regions.html) in the *FSx for OpenZFS User Guide*.

# Referencing access points with ARNs, access point aliases, or virtual-hosted–style URIs
Referencing access points

After you create an access point, you can use it to perform a number of operations. When referring to an access point, you can use its Amazon Resource Name (ARN), its access point alias, or a virtual-hosted–style URI. 

**Topics**
+ [Access point ARNs](#access-points-arns)
+ [Access point aliases](#access-points-alias)
+ [Virtual-hosted–style URI](#accessing-a-bucket-through-s3-access-point)

## Access point ARNs


Access points have Amazon Resource Names (ARNs). Access point ARNs are similar to bucket ARNs, but they are explicitly typed and encode the access point's AWS Region and the AWS account ID of the access point's owner. For more information about ARNs, see [Identify AWS resources with Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) in the *IAM User Guide*.

Access point ARNs use the following format:

```
arn:aws:s3:region:account-id:accesspoint/resource
```
+ `arn:aws:s3:us-west-2:123456789012:accesspoint/test` represents the access point named `test`, owned by account *`123456789012`* in the Region *`us-west-2`*.
+ `arn:aws:s3:us-west-2:123456789012:accesspoint/*` represents all access points under account *`123456789012`* in the Region *`us-west-2`*.

ARNs for objects accessed through an access point use the following format:

```
arn:aws:s3:region:account-id:accesspoint/access-point-name/object/resource
```
+ `arn:aws:s3:us-west-2:123456789012:accesspoint/test/object/unit-01` represents the object *`unit-01`*, accessed through the access point named *`test`*, owned by account *`123456789012`* in the Region *`us-west-2`*.
+ `arn:aws:s3:us-west-2:123456789012:accesspoint/test/object/*` represents all objects for the access point named *`test`*, in account *`123456789012`* in the Region *`us-west-2`*.
+ `arn:aws:s3:us-west-2:123456789012:accesspoint/test/object/unit-01/finance/*` represents all objects under prefix *`unit-01/finance/`* for the access point named *`test`*, in account *`123456789012`* in the Region *`us-west-2`*.
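
To see how these pieces fit together, the following Python sketch splits an access point ARN into its documented components. `parse_access_point_arn` is an illustrative helper, not part of any AWS SDK:

```python
def parse_access_point_arn(arn: str) -> dict:
    """Split an S3 access point ARN of the form
    arn:aws:s3:region:account-id:accesspoint/name[/object/key]
    into its components."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn" or parts[2] != "s3":
        raise ValueError(f"not an S3 ARN: {arn}")
    _, partition, _, region, account, resource = parts
    if not resource.startswith("accesspoint/"):
        raise ValueError(f"not an access point resource: {resource}")
    remainder = resource[len("accesspoint/"):]
    # An object ARN embeds "/object/<key>" after the access point name.
    name, sep, key = remainder.partition("/object/")
    return {
        "partition": partition,
        "region": region,
        "account": account,
        "access_point": name,
        "object_key": key if sep else None,
    }

# access_point is "test"; object_key is "unit-01"
print(parse_access_point_arn(
    "arn:aws:s3:us-west-2:123456789012:accesspoint/test/object/unit-01"))
```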

## Access point aliases


When you create an access point, Amazon S3 automatically generates an alias that you can use instead of an Amazon S3 bucket name for data access. You can use this access point alias instead of an Amazon Resource Name (ARN) for access point data plane operations. For a list of these operations, see [Access point compatibility](access-points-service-api-support.md).

An access point alias name is created within the same namespace as an Amazon S3 bucket. This alias name is automatically generated and cannot be changed. An access point alias name meets all the requirements of a valid Amazon S3 bucket name and consists of the following parts:

`ACCESS POINT NAME-METADATA-s3alias` (for access points attached to an Amazon S3 bucket)

`ACCESS POINT NAME-METADATA-ext-s3alias` (for access points attached to a non-S3 data source)

**Note**  
The `-s3alias` and `-ext-s3alias` suffixes are reserved for access point alias names and can't be used for bucket or access point names. For more information about Amazon S3 bucket-naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).

### Access point alias use cases and limitations
Access point alias use cases and limitations

When you adopt access points, you can use access point alias names without making extensive code changes.

When you create an access point, Amazon S3 automatically generates an access point alias name, as shown in the following example. To run this command, replace the `user input placeholders` with your own information.

```
aws s3control create-access-point --bucket amzn-s3-demo-bucket1 --name my-access-point --account-id 111122223333
{
    "AccessPointArn": "arn:aws:s3:region:111122223333:accesspoint/my-access-point",
    "Alias": "my-access-point-aqfqprnstn7aefdfbarligizwgyfouse1a-s3alias"
}
```

You can use this access point alias name instead of an Amazon S3 bucket name in any data plane operation. For a list of these operations, see [Access point compatibility](access-points-service-api-support.md).

The following AWS CLI example for the `get-object` command uses the bucket's access point alias to return information about the specified object. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api get-object --bucket my-access-point-aqfqprnstn7aefdfbarligizwgyfouse1a-s3alias --key dir/my_data.rtf my_data.rtf
{
    "AcceptRanges": "bytes",
    "LastModified": "2020-01-08T22:16:28+00:00",
    "ContentLength": 910,
    "ETag": "\"00751974dc146b76404bb7290f8f51bb\"",
    "VersionId": "null",
    "ContentType": "text/rtf",
    "Metadata": {}
}
```

#### Access point alias limitations
Limitations
+ Aliases can't be configured by customers.
+ Aliases can't be deleted, modified, or disabled on an access point.
+ You can use this access point alias name instead of an Amazon S3 bucket name in some data plane operations. For a list of these operations, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
+ You can't use an access point alias name for Amazon S3 control plane operations. For a list of Amazon S3 control plane operations, see [Amazon S3 Control](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations_AWS_S3_Control.html) in the *Amazon Simple Storage Service API Reference*.
+ You can't use S3 access point aliases as the source or destination for **Move** operations in the Amazon S3 console.
+ Aliases cannot be used in AWS Identity and Access Management (IAM) policies.
+ Aliases cannot be used as a logging destination for S3 server access logs.
+ Aliases cannot be used as a logging destination for AWS CloudTrail logs.
+ Amazon SageMaker AI GroundTruth does not support access point aliases.

## Virtual-hosted–style URI


Access points support only virtual-hosted–style addressing. In a virtual-hosted–style URI, the access point name, AWS account, and AWS Region are part of the domain name in the URL. For more information about virtual hosting, see [Virtual hosting of general purpose buckets](VirtualHosting.md).

Virtual-hosted–style URIs for access points use the following format:

```
https://access-point-name-account-id.s3-accesspoint.region.amazonaws.com
```

**Note**  
If your access point name includes dash (-) characters, include the dashes in the URL and insert another dash before the account ID. For example, to use an access point named *`finance-docs`* owned by account *`123456789012`* in the Region *`us-west-2`*, the appropriate URL would be `https://finance-docs-123456789012.s3-accesspoint.us-west-2.amazonaws.com`.
S3 access points don't support access through HTTP. Access points support only secure access through HTTPS.

# Access point compatibility
Access point compatibility

You can use access points to access objects by using the following subset of Amazon S3 API operations. All of the operations listed below accept either access point ARNs or access point aliases.

For examples of using access points to perform operations on objects, see [Using Amazon S3 access points for general purpose buckets](using-access-points.md).

## Access points compatibility with S3 operations


The following table provides a partial list of Amazon S3 operations and whether they're compatible with access points. All of the operations below are supported by access points that use an S3 bucket as their data source, while only some operations are supported by access points that use an FSx for ONTAP or FSx for OpenZFS volume as their data source.

For more information, see access point compatibility in the [https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/access-points-for-fsxn-object-api-support.html](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/access-points-for-fsxn-object-api-support.html) or the [https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/access-points-object-api-support.html](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/access-points-object-api-support.html).


| S3 operation | Access point attached to an S3 bucket | Access point attached to an FSx for OpenZFS volume | 
| --- | --- | --- | 
|  `[AbortMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)`  |  Supported  |  Supported  | 
|  `[CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)`  |  Supported  |  Supported  | 
|  `[CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)` (same-Region copies only)  |  Supported  |  Supported, if source and destination are the same access point  | 
|  `[CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)`  |  Supported  |  Supported  | 
|  `[DeleteObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)`  |  Supported  |  Supported  | 
|  `[DeleteObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)`  |  Supported  |  Supported  | 
|  `[DeleteObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html)`  |  Supported  |  Supported  | 
|  `[GetBucketAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html)`  |  Supported  |  Not supported  | 
|  `[GetBucketCors](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html)`  |  Supported  |  Not supported  | 
|  `[GetBucketLocation](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html)`  |  Supported  |  Supported  | 
|  `[GetBucketNotificationConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotificationConfiguration.html)`  |  Supported  |  Not supported  | 
|  `[GetBucketPolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)`  |  Supported  |  Not supported  | 
|  `[GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)`  |  Supported  |  Supported  | 
|  `[GetObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html)`  |  Supported  |  Not supported  | 
|  `[GetObjectAttributes](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)`  |  Supported  |  Supported  | 
|  `[GetObjectLegalHold](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html)`  |  Supported  |  Not supported  | 
|  `[GetObjectRetention](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html)`  |  Supported  |  Not supported  | 
|  `[GetObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html)`  |  Supported  |  Supported  | 
|  `[HeadBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)`  |  Supported  |  Supported  | 
|  `[HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)`  |  Supported  |  Supported  | 
|  `[ListMultipartUploads](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)`  |  Supported  |  Supported  | 
|  `[ListObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html)`  |  Supported  |  Supported  | 
|  `[ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)`  |  Supported  |  Supported  | 
|  `[ListObjectVersions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html)`  |  Supported  |  Not supported  | 
|  `[ListParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)`  |  Supported  |  Supported  | 
|  `[Presign](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html)`  |  Supported  |  Supported  | 
|  `[PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)`  |  Supported  |  Supported  | 
|  `[PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html)`  |  Supported  |  Not supported  | 
|  `[PutObjectLegalHold](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html)`  |  Supported  |  Not supported  | 
|  `[PutObjectRetention](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html)`  |  Supported  |  Not supported  | 
|  `[PutObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html)`  |  Supported  |  Supported  | 
|  `[RestoreObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html)`  |  Supported  |  Not supported  | 
|  `[UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)`  |  Supported  |  Supported  | 
|  `[UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)` (same-Region copies only)  |  Supported  |  Supported, if source and destination are the same access point  | 

# Configuring IAM policies for using access points
Configuring IAM policies

Amazon S3 access points support AWS Identity and Access Management (IAM) resource policies that allow you to control the use of the access point by resource, user, or other conditions. For an application or user to be able to access objects through an access point, both the access point and the underlying bucket or Amazon FSx file system must permit the request.

**Important**  
Restrictions that you include in an access point policy apply only to requests made through that access point. Attaching an access point to a bucket does not change the underlying resource's behavior. All existing operations against the bucket that are not made through your access point continue to work as before. 
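
The evaluation model described above amounts to two gates that must both open: the access point policy and the underlying resource's policy. The following Python sketch is a minimal illustration of that rule (the function name is hypothetical; this is not the actual IAM policy evaluation engine):

```python
# Sketch of the two-gate rule: a request made through an access point
# succeeds only if BOTH the access point policy and the underlying
# resource's policy allow it. Illustrative only.
def request_allowed(access_point_allows: bool, resource_allows: bool) -> bool:
    return access_point_allows and resource_allows

print(request_allowed(True, True))   # True
print(request_allowed(True, False))  # False: the bucket policy still blocks it
```

In other words, an access point policy can narrow access, but it can never grant more access than the underlying resource allows.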

When you're using IAM resource policies, make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer before you save your policy. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). These checks generate findings and provide recommendations to help you author policies that are functional and conform to security best practices. 

To learn more about validating policies by using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html).

## Policy examples for access points


The following examples demonstrate how to create IAM policies to control requests made through an access point.

**Note**  
Permissions granted in an access point policy are effective only if the underlying bucket also allows the same access. You can accomplish this in two ways:  
**(Recommended)** Delegate access control from the bucket to the access point, as described in [Delegating access control to access points](#access-points-delegating-control).
Add the same permissions contained in the access point policy to the underlying bucket's policy. Example 1 demonstrates how to modify the underlying bucket policy to allow the necessary access.

**Example 1 – Access point policy grant**  
The following access point policy grants IAM user `Jane` in account `123456789012` permissions to `GET` and `PUT` objects with the prefix `Jane/` through the access point *`my-access-point`* in account *`123456789012`*.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
    {
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Jane"
        },
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/Jane/*"
    }]
}
```

**Note**  
For the access point policy to effectively grant access to *`Jane`*, the underlying bucket must also allow the same access to *`Jane`*. You can delegate access control from the bucket to the access point as described in [Delegating access control to access points](#access-points-delegating-control). Or, you can add the following policy to the underlying bucket to grant the necessary permissions to Jane. Note that the `Resource` entry differs between the access point and bucket policies.   


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
    {
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Jane"
        },
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/Jane/*"
    }]    
}
```
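
To make the difference between the two `Resource` formats concrete, the following Python sketch builds both ARN styles from their components (the helper names are hypothetical; the formats follow the two policies above). Note the Region, account ID, and `/object/` segment that appear only in the access point form:

```python
# Illustrative helpers for the two Resource ARN formats shown above.
def access_point_object_arn(region, account_id, ap_name, key_pattern):
    """Access point object ARN: includes Region, account, and /object/ segment."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{ap_name}/object/{key_pattern}"

def bucket_object_arn(bucket, key_pattern):
    """Plain bucket object ARN: no Region or account ID."""
    return f"arn:aws:s3:::{bucket}/{key_pattern}"

# → arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/Jane/*
print(access_point_object_arn("us-west-2", "123456789012", "my-access-point", "Jane/*"))
# → arn:aws:s3:::amzn-s3-demo-bucket1/Jane/*
print(bucket_object_arn("amzn-s3-demo-bucket1", "Jane/*"))
```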

**Example 2 – Access point policy with tag condition**  
The following access point policy grants IAM user *`Mateo`* in account *`123456789012`* permissions to `GET` objects through the access point *`my-access-point`* in the account *`123456789012`* that have the tag key *`data`* set with a value of *`finance`*.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
    {
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Mateo"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/*",
        "Condition": {
            "StringEquals": {
                "s3:ExistingObjectTag/data": "finance"
            }
        }
    }]
}
```

**Example 3 – Access point policy that allows bucket listing**  
The following access point policy grants IAM user *`Arnav`* in account *`123456789012`* permission to list the objects in the bucket underlying the access point *`my-access-point`* in account *`123456789012`*.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
    {
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Arnav"
        },
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point"
    }]
}
```

**Example 4 – Service control policy**  
The following service control policy requires all new access points to be created with a virtual private cloud (VPC) network origin. With this policy in place, users in your organization can't create new access points that are accessible from the internet.    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
    {
        "Effect": "Deny",
        "Action": "s3:CreateAccessPoint",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "s3:AccessPointNetworkOrigin": "VPC"
            }
        }
    }]
}
```

**Example 5 – Bucket policy to limit S3 operations to VPC network origins**  
The following bucket policy limits access to all S3 object operations for the bucket `amzn-s3-demo-bucket` to access points with a VPC network origin.  
Before using a statement like the one shown in this example, make sure that you don't need to use features that aren't supported by access points, such as Cross-Region Replication.  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:BypassGovernanceRetention",
                "s3:DeleteObject",
                "s3:DeleteObjectTagging",
                "s3:DeleteObjectVersion",
                "s3:DeleteObjectVersionTagging",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectLegalHold",
                "s3:GetObjectRetention",
                "s3:GetObjectTagging",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:PutObjectLegalHold",
                "s3:PutObjectRetention",
                "s3:PutObjectTagging",
                "s3:PutObjectVersionAcl",
                "s3:PutObjectVersionTagging",
                "s3:RestoreObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:AccessPointNetworkOrigin": "VPC"
                }
            }
        }
    ]
}
```

## Condition keys


S3 access points have condition keys that you can use in IAM policies to control access to your resources. The following condition keys represent only part of an IAM policy. For full policy examples, see [Policy examples for access points](#access-points-policy-examples), [Delegating access control to access points](#access-points-delegating-control), and [Granting permissions for cross-account access points](#access-points-cross-account).

**`s3:DataAccessPointArn`**  
This example shows a string that you can use to match on an access point ARN. The following example matches all access points for AWS account *`123456789012`* in Region *`us-west-2`*:  

```
"Condition" : {
    "StringLike": {
        "s3:DataAccessPointArn": "arn:aws:s3:us-west-2:123456789012:accesspoint/*"
    }
}
```

**`s3:DataAccessPointAccount`**  
This example shows a string operator that you can use to match on the account ID of the owner of an access point. The following example matches all access points that are owned by the AWS account *`123456789012`*.  

```
"Condition" : {
    "StringEquals": {
        "s3:DataAccessPointAccount": "123456789012"
    }
}
```

**`s3:AccessPointNetworkOrigin`**  
This example shows a string operator that you can use to match on the network origin, either `Internet` or `VPC`. The following example matches only access points with a VPC origin.  

```
"Condition" : {
    "StringEquals": {
        "s3:AccessPointNetworkOrigin": "VPC"
    }
}
```
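
As a rough illustration of how `StringLike` wildcard matching behaves, the following Python sketch approximates it with the standard library's `fnmatch` module (an approximation only, not the real IAM evaluation engine, which has its own semantics):

```python
import fnmatch

# Approximate IAM StringLike semantics (wildcards * and ?) with fnmatch.
# Illustrative only; this does not replicate the IAM policy evaluator.
condition = {
    "StringLike": {
        "s3:DataAccessPointArn": "arn:aws:s3:us-west-2:123456789012:accesspoint/*"
    }
}

request_arn = "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point"
pattern = condition["StringLike"]["s3:DataAccessPointArn"]

print(fnmatch.fnmatchcase(request_arn, pattern))  # True: same account and Region
```

An access point ARN in a different Region or account would not match the pattern, so the condition would not apply to it.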

For more information about using condition keys with Amazon S3, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Delegating access control to access points


You can delegate access control for a bucket to the bucket's access points. The following example bucket policy allows full access to all access points that are owned by the bucket owner's account. Thus, all access to this bucket is controlled by the policies attached to its access points. We recommend configuring your buckets this way for all use cases that don't require direct access to the bucket.

**Example 6 – Bucket policy that delegates access control to access points**    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement" : [
    {
        "Effect": "Allow",
        "Principal" : { "AWS": "*" },
        "Action" : "*",
        "Resource" : [ "arn:aws:s3:::amzn-s3-demo-bucket", "arn:aws:s3:::amzn-s3-demo-bucket/*"],
        "Condition": {
            "StringEquals" : { "s3:DataAccessPointAccount" : "111122223333" }
        }
    }]
}
```
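
If you generate this policy programmatically, you can assemble it as a data structure and serialize it before attaching it to the bucket. The following Python sketch rebuilds the delegation policy above (the bucket name and account ID are the placeholders from the example):

```python
import json

# Rebuild the delegation bucket policy above as a Python dict, then
# serialize it to JSON (for example, to pass to put-bucket-policy).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "*",
        "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket",
            "arn:aws:s3:::amzn-s3-demo-bucket/*",
        ],
        "Condition": {
            "StringEquals": {"s3:DataAccessPointAccount": "111122223333"}
        },
    }],
}

print(json.dumps(policy, indent=4))
```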

## Granting permissions for cross-account access points


To create an access point for a bucket that's owned by another account, you must first create the access point by specifying the bucket name and the bucket owner's account ID. Then, the bucket owner must update the bucket policy to authorize requests from the access point. Creating an access point is similar to creating a DNS CNAME in that the access point doesn't provide access to the bucket contents on its own. All bucket access is controlled by the bucket policy. The following example bucket policy allows `GET` and `LIST` requests on the bucket from an access point that's owned by a trusted AWS account.

Replace the example bucket ARNs with the ARN of your bucket, and replace *`Access point owner's account ID`* with the AWS account ID of the account that owns the access point.

**Example 7 – Bucket policy delegating permissions to another AWS account**    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement" : [
    {
        "Effect": "Allow",
        "Principal" : { "AWS": "*" },
        "Action" : ["s3:GetObject","s3:ListBucket"],
        "Resource" : ["arn:aws:s3:::amzn-s3-demo-bucket", "arn:aws:s3:::amzn-s3-demo-bucket/*"],
        "Condition": {
            "StringEquals" : { "s3:DataAccessPointAccount" : "Access point owner's account ID" }
        }
    }]
}
```
Cross-account access points are only available for access points attached to S3 buckets. You cannot attach an access point to a volume on an Amazon FSx file system owned by another AWS account.

# Monitoring and logging access points
Monitoring and logging

Amazon S3 logs requests made through access points and requests made to the API operations that manage access points, such as `CreateAccessPoint` and `GetAccessPointPolicy`. To monitor and manage usage patterns, you can also configure Amazon CloudWatch request metrics for access points. 

**Topics**
+ [CloudWatch request metrics](#request-metrics-access-points)
+ [AWS CloudTrail logs for requests made through access points](#logging-access-points)

## CloudWatch request metrics


To understand and improve the performance of applications that are using access points, you can use CloudWatch for Amazon S3 request metrics. Request metrics help you monitor Amazon S3 requests to quickly identify and act on operational issues. 

By default, request metrics are available at the bucket level. However, you can define a filter for request metrics using a shared prefix, object tags, or an access point. When you create an access point filter, the request metrics configuration includes requests to the access point that you specify. You can receive metrics, set alarms, and access dashboards to view real-time operations performed through this access point. 

You must opt in to request metrics by configuring them in the console or by using the Amazon S3 API. Request metrics are available at 1-minute intervals after some latency for processing. Request metrics are billed at the same rate as CloudWatch custom metrics. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

To create a request metrics configuration that filters by access point, see [Creating a metrics configuration that filters by prefix, object tag, or access point](metrics-configurations-filter.md). 

## AWS CloudTrail logs for requests made through access points


You can log requests made through access points and requests made to the API operations that manage access points, such as `CreateAccessPoint` and `GetAccessPointPolicy`, by using server access logging and AWS CloudTrail. 



CloudTrail log entries for requests made through access points include the access point ARN in the `resources` section of the log.

For example, suppose you have the following configuration: 
+ A bucket named *`amzn-s3-demo-bucket1`* in Region *`us-west-2`* that contains an object named *`my-image.jpg`*
+ An access point named *`my-bucket-ap`* that is associated with *`amzn-s3-demo-bucket1`*
+ An AWS account ID of *`123456789012`*

The following example shows the `resources` section of a CloudTrail log entry for the preceding configuration:

```
"resources": [
        {"type": "AWS::S3::Object",
            "ARN": "arn:aws:s3:::amzn-s3-demo-bucket1/my-image.jpg"
        },
        {"accountId": "123456789012",
            "type": "AWS::S3::Bucket",
            "ARN": "arn:aws:s3:::amzn-s3-demo-bucket1"
        },
        {"accountId": "123456789012",
            "type": "AWS::S3::AccessPoint",
            "ARN": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-bucket-ap"
        }
    ]
```
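
Because the access point appears as its own entry in `resources`, you can filter log records by resource type to find requests made through access points. A minimal Python sketch, using the example record above:

```python
# Pull the access point ARN out of a CloudTrail record's "resources"
# section by filtering on the AWS::S3::AccessPoint resource type.
record = {
    "resources": [
        {"type": "AWS::S3::Object",
         "ARN": "arn:aws:s3:::amzn-s3-demo-bucket1/my-image.jpg"},
        {"accountId": "123456789012",
         "type": "AWS::S3::Bucket",
         "ARN": "arn:aws:s3:::amzn-s3-demo-bucket1"},
        {"accountId": "123456789012",
         "type": "AWS::S3::AccessPoint",
         "ARN": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-bucket-ap"},
    ]
}

ap_arns = [r["ARN"] for r in record["resources"]
           if r.get("type") == "AWS::S3::AccessPoint"]
print(ap_arns)  # → ['arn:aws:s3:us-west-2:123456789012:accesspoint/my-bucket-ap']
```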

If you are using an access point attached to a volume on an Amazon FSx file system, the `resources` section of a CloudTrail log entry will look different. For example:

```
"resources": [
        {
            "accountId": "123456789012",
            "type": "AWS::FSx::Volume",
            "ARN": "arn:aws:fsx:us-east-1:123456789012:volume/fs-0123456789abcdef9/fsvol-01234567891112223"
        }
    ]
```

For more information about S3 Server Access Logs, see [Logging requests with server access logging](ServerLogs.md). For more information about AWS CloudTrail, see [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) in the *AWS CloudTrail User Guide*.

# Creating an access point


You can create S3 access points by using the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API. Access points are named network endpoints that are attached to a data source such as a bucket, Amazon FSx for ONTAP volume, or Amazon FSx for OpenZFS volume.

By default, you can create up to 10,000 access points per Region for each of your AWS accounts. If you need more than 10,000 access points for a single account in a single Region, you can request a service quota increase. For more information about service quotas and requesting an increase, see [AWS Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *AWS General Reference*.

**Topics**
+ [Creating access points with S3 buckets](#create-access-points)
+ [Creating access points with Amazon FSx](#create-access-points-with-fsx)
+ [Creating access points restricted to a virtual private cloud](access-points-vpc.md)
+ [Managing public access to access points for general purpose buckets](access-points-bpa-settings.md)

## Creating access points with S3 buckets


An access point is associated with exactly one Amazon S3 general purpose bucket. To use an access point with a bucket in your AWS account, you must first create the bucket. For more information about creating buckets, see [Creating, configuring, and working with Amazon S3 general purpose buckets](creating-buckets-s3.md).

You can also create a cross-account access point that's associated with a bucket in another AWS account, as long as you know the bucket name and the bucket owner's account ID. However, creating a cross-account access point doesn't grant you access to data in the bucket until the bucket owner grants your account (the access point owner's account) access through the bucket policy. For more information, see [Granting permissions for cross-account access points](access-points-policies.md#access-points-cross-account).

### Using the S3 console


**To create an access point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, choose the name of the currently displayed AWS Region. Then choose the Region in which you want to create the access point. The access point must be created in the same Region as the associated bucket. 

1. In the left navigation pane, choose **Access Points**.

1. On the **Access Points** page, choose **Create access point**.

1. In the **Access point name** field, enter the name for the access point. For more information about naming access points, see [Naming rules for access points](access-points-restrictions-limitations-naming-rules.md#access-points-names).

1. For **Data source**, specify the S3 bucket that you want to use with the access point.

   To use a bucket in your account, choose **Choose a bucket in this account**, and enter or browse for the bucket name. 

   To use a bucket in a different AWS account, choose **Specify a bucket in another account**, and enter the AWS account ID and name of the bucket. If you're using a bucket in a different AWS account, the bucket owner must update the bucket policy to authorize requests from the access point. For an example bucket policy, see [Granting permissions for cross-account access points](access-points-policies.md#access-points-cross-account).
**Note**  
For information about using an FSx for OpenZFS volume as a data source, see [Creating access points with Amazon FSx](#create-access-points-with-fsx).

1. Choose a **Network origin**, either **Internet** or **virtual private cloud (VPC)**. If you choose **virtual private cloud (VPC)**, enter the **VPC ID** that you want to use with the access point.

   For more information about network origins for access points, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).

1. Under **Block Public Access settings for this Access Point**, select the block public access settings that you want to apply to the access point. All block public access settings are enabled by default for new access points. We recommend that you keep all settings enabled unless you know that you have a specific need to disable any of them. 
**Note**  
After you create an access point, you can't change its block public access settings.

   For more information about using Amazon S3 Block Public Access with access points, see [Managing public access to access points for general purpose buckets](access-points-bpa-settings.md).

1. (Optional) Under **Access Point policy - *optional***, specify the access point policy. Before you save your policy, make sure to resolve any security warnings, errors, general warnings, and suggestions. For more information about specifying an access point policy, see [Policy examples for access points](access-points-policies.md#access-points-policy-examples).

1. Choose **Create access point**.

### Using the AWS CLI


The following example command creates an access point named *`example-ap`* for the bucket *`amzn-s3-demo-bucket`* in the account *`111122223333`*. To create the access point, you send a request to Amazon S3 that specifies the following:
+ The access point name. For information about naming rules, see [Naming rules for access points](access-points-restrictions-limitations-naming-rules.md#access-points-names).
+ The name of the bucket that you want to associate the access point with.
+ The account ID for the AWS account that owns the access point.

```
aws s3control create-access-point --name example-ap --account-id 111122223333 --bucket amzn-s3-demo-bucket
```

When you're creating an access point by using a bucket in a different AWS account, include the `--bucket-account-id` parameter. The following example command creates an access point in the AWS account *`111122223333`*, using the bucket *`amzn-s3-demo-bucket2`*, which is in the AWS account *`444455556666`*.

```
aws s3control create-access-point --name example-ap --account-id 111122223333 --bucket amzn-s3-demo-bucket2 --bucket-account-id 444455556666
```

### Using the REST API


You can use the REST API to create an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html) in the *Amazon Simple Storage Service API Reference*.

## Creating access points with Amazon FSx


You can create and attach an access point to an FSx for OpenZFS volume using the Amazon FSx console, AWS CLI, or API. Once attached, you can use the S3 object APIs to access your file data. Your data continues to reside on the Amazon FSx file system and continues to be directly accessible for your existing workloads. You continue to manage your storage using all the FSx for OpenZFS storage management capabilities, including backups, snapshots, user and group quotas, and compression.

For instructions on creating an access point and attaching it to an FSx for OpenZFS volume, see [Creating an access point](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/create-access-points.html) in the *FSx for OpenZFS User Guide*.

# Creating access points restricted to a virtual private cloud
Creating access points restricted to a VPC

When you create an access point, you can choose to make it accessible from the internet, or you can require that all requests made through it originate from a specific virtual private cloud (VPC). An access point that's accessible from the internet has a network origin of `Internet`. It can be used from anywhere on the internet, subject to any other access restrictions in place for the access point, the underlying data source, and related resources, such as the requested objects. An access point that's accessible only from a specified VPC has a network origin of `VPC`, and Amazon S3 rejects any request made to the access point that doesn't originate from that VPC.

**Important**  
You can only specify an access point's network origin when you create the access point. After you create the access point, you can't change its network origin.

To restrict an access point to VPC-only access, you include the `VpcConfiguration` parameter with the request to create the access point. In the `VpcConfiguration` parameter, you specify the VPC ID that you want to be able to use the access point. If a request is made through the access point, the request must originate from the VPC or Amazon S3 will reject it. 

You can retrieve an access point's network origin using the AWS CLI, AWS SDKs, or REST APIs. If an access point has a VPC configuration specified, its network origin is `VPC`. Otherwise, the access point's network origin is `Internet`.
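
That rule can be sketched in a few lines of Python (the function name is hypothetical; the input mirrors the shape of a `get-access-point` response):

```python
# Sketch of the rule above: an access point's network origin is "VPC"
# exactly when a VpcConfiguration is present; otherwise it's "Internet".
def network_origin(access_point: dict) -> str:
    return "VPC" if access_point.get("VpcConfiguration") else "Internet"

print(network_origin({"VpcConfiguration": {"VpcId": "vpc-1a2b3c"}}))  # VPC
print(network_origin({"Name": "example-ap"}))                         # Internet
```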

## Example: Create and restrict an access point to a VPC ID


The following example creates an access point named `example-vpc-ap` for bucket `amzn-s3-demo-bucket` in account `123456789012` that allows access only from the `vpc-1a2b3c` VPC. The example then verifies that the new access point has a network origin of `VPC`.

------
#### [ AWS CLI ]

```
aws s3control create-access-point --name example-vpc-ap --account-id 123456789012 --bucket amzn-s3-demo-bucket --vpc-configuration VpcId=vpc-1a2b3c
```

```
aws s3control get-access-point --name example-vpc-ap --account-id 123456789012

{
    "Name": "example-vpc-ap",
    "Bucket": "amzn-s3-demo-bucket",
    "NetworkOrigin": "VPC",
    "VpcConfiguration": {
        "VpcId": "vpc-1a2b3c"
    },
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "CreationDate": "2019-11-27T00:00:00Z"
}
```

------

To use an access point with a VPC, you must modify the access policy for your VPC endpoint. VPC endpoints allow traffic to flow from your VPC to Amazon S3. They have access control policies that control how resources within the VPC are allowed to interact with Amazon S3. Requests from your VPC to Amazon S3 only succeed through an access point if the VPC endpoint policy grants access to both the access point and the underlying bucket.

**Note**  
To make resources accessible only within a VPC, make sure to create a [private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) for your VPC endpoint. To use a private hosted zone, [modify your VPC settings](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating) so that the [VPC network attributes](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-support) `enableDnsHostnames` and `enableDnsSupport` are set to `true`.

The following example policy statement configures a VPC endpoint to allow calls to `GetObject` for a bucket named `awsexamplebucket1` and an access point named `example-vpc-ap`.

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
    {
        "Principal": "*",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": [
            "arn:aws:s3:::awsexamplebucket1/*",
            "arn:aws:s3:us-west-2:123456789012:accesspoint/example-vpc-ap/object/*"
        ]
    }]
}
```

------

**Note**  
The `"Resource"` declaration in this example uses an Amazon Resource Name (ARN) to specify the access point. For more information about access point ARNs, see [Referencing access points with ARNs, access point aliases, or virtual-hosted–style URIs](access-points-naming.md). 

For more information about VPC endpoint policies, see [Using endpoint policies for Amazon S3](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3) in the *VPC User Guide*.

For a tutorial on creating access points with VPC endpoints, see [Managing Amazon S3 access with VPC endpoints and access points](https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/).

## Example: Create and restrict an access point attached to an FSx for OpenZFS volume to a VPC ID


You can create an access point and attach it to an FSx for OpenZFS volume by using the Amazon FSx console, AWS CLI, or API. After the access point is attached, you can use the S3 object APIs to access your file data from the specified VPC.

For instructions on creating and restricting an access point attached to an FSx for OpenZFS volume, see [Creating access points restricted to a virtual private cloud (VPC)](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/create-access-points.html) in the *FSx for OpenZFS User Guide*.

## Example: Create and restrict an access point attached to an FSx for ONTAP volume to a VPC ID


You can create an access point and attach it to an FSx for ONTAP volume by using the Amazon FSx console, AWS CLI, or API. After the access point is attached, you can use the S3 object APIs to access your file data from the specified VPC.

For instructions on creating and restricting an access point attached to an FSx for ONTAP volume, see [https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/access-points-for-fsxn-vpc.html](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/access-points-for-fsxn-vpc.html).

# Managing public access to access points for general purpose buckets
Managing public access

Amazon S3 access points support independent *block public access* settings for each access point. When you create an access point, you can specify block public access settings that apply to that access point. For any request made through an access point, Amazon S3 evaluates the block public access settings for that access point, the underlying bucket, and the bucket owner's account. If any of these settings indicate that the request should be blocked, Amazon S3 rejects the request.
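
The evaluation described above is a logical OR across the three scopes: if any scope blocks the request, the request is rejected. A minimal Python sketch of that rule (function and parameter names are illustrative):

```python
# Sketch of the block public access evaluation above: a public request
# is blocked if ANY scope (access point, bucket, or account) blocks it.
def is_blocked(access_point_blocks: bool, bucket_blocks: bool,
               account_blocks: bool) -> bool:
    return access_point_blocks or bucket_blocks or account_blocks

print(is_blocked(False, False, True))   # True: the account-level setting wins
print(is_blocked(False, False, False))  # False: no scope blocks the request
```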

For more information about the S3 Block Public Access feature, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

**Important**  
All block public access settings are enabled by default for access points. You must explicitly disable any settings that you don't want during access point creation.
You can't turn off any block public access settings when creating or using an access point attached to an Amazon FSx file system.
After you create an access point, you can't change its block public access settings.

**Example**  
***Example: Create an access point with custom Block Public Access settings***  
This example creates an access point named `example-ap` for bucket `amzn-s3-demo-bucket` in account `123456789012` with non-default Block Public Access settings. The example then retrieves the new access point's configuration to verify its Block Public Access settings.  

```
aws s3control create-access-point --name example-ap --account-id 123456789012 --bucket amzn-s3-demo-bucket --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

```
aws s3control get-access-point --name example-ap --account-id 123456789012

{
    "Name": "example-ap",
    "Bucket": "amzn-s3-demo-bucket",
    "NetworkOrigin": "Internet",
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": false,
        "IgnorePublicAcls": false,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "CreationDate": "2019-11-27T00:00:00Z"
}
```
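As noted earlier, a request made through an access point is rejected if *any* of the three evaluated scopes (the access point, the underlying bucket, or the bucket owner's account) blocks it. The following sketch illustrates that union semantics with a hypothetical helper; Amazon S3 performs this evaluation server-side, so this is for illustration only.

```python
def effective_public_access_block(access_point, bucket, account):
    """Combine block public access settings from the three scopes.

    Each argument is a dict shaped like the PublicAccessBlockConfiguration
    shown in the example output. A setting is effectively on if ANY scope
    turns it on, mirroring how S3 evaluates access point requests.
    """
    keys = ("BlockPublicAcls", "IgnorePublicAcls",
            "BlockPublicPolicy", "RestrictPublicBuckets")
    return {k: any(scope.get(k, False) for scope in (access_point, bucket, account))
            for k in keys}

# The access point from the example above leaves the ACL settings off...
ap = {"BlockPublicAcls": False, "IgnorePublicAcls": False,
      "BlockPublicPolicy": True, "RestrictPublicBuckets": True}
# ...but if the bucket owner's account enables them, they still apply.
account = {"BlockPublicAcls": True, "IgnorePublicAcls": True,
           "BlockPublicPolicy": True, "RestrictPublicBuckets": True}
print(effective_public_access_block(ap, {}, account))  # all four are on
```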

# Managing your Amazon S3 access points for general purpose buckets
Managing access points

This section explains how to manage your Amazon S3 access points for general purpose buckets using the AWS Management Console, AWS Command Line Interface, or REST API. For information on managing access points attached to an FSx for OpenZFS volume, see [Managing your Amazon S3 access points](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/access-points-manage.html) in the *FSx for OpenZFS User Guide*.

**Note**  
You can only use access points to perform operations on objects. You can't use access points to perform other Amazon S3 operations, such as deleting buckets or creating S3 Replication configurations. For a complete list of S3 operations that support access points, see [Access point compatibility](access-points-service-api-support.md).

**Topics**
+ [List your access points for general purpose buckets](access-points-list.md)
+ [View details for your access point for general purpose buckets](access-points-details.md)
+ [Delete your access point for a general purpose bucket](access-points-delete.md)

# List your access points for general purpose buckets


This section explains how to list your access points for general purpose buckets using the AWS Management Console, AWS Command Line Interface, or REST API.

## Using the S3 console


**To list access points in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

## Using the AWS CLI


The following `list-access-points` example command shows how you can use the AWS CLI to list your access points.

The following command lists access points for AWS account *111122223333*.

```
aws s3control list-access-points --account-id 111122223333      
```

The following command lists access points for AWS account *111122223333* that are attached to bucket *amzn-s3-demo-bucket*.

```
aws s3control list-access-points --account-id 111122223333 --bucket amzn-s3-demo-bucket     
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/list-access-points.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/list-access-points.html) in the *AWS CLI Command Reference*.
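`list-access-points` returns results in pages. When you call the operation from code, you can follow the `NextToken` field to collect every page. The following sketch assumes an S3 Control client such as one created with `boto3.client('s3control')`; the client is passed in as a parameter (`s3control`) so the helper can be exercised with a stub.

```python
def list_all_access_point_names(s3control, account_id):
    """Collect every access point name for the account, following NextToken."""
    names = []
    token = None
    while True:
        kwargs = {"AccountId": account_id}
        if token:
            kwargs["NextToken"] = token
        resp = s3control.list_access_points(**kwargs)
        names.extend(ap["Name"] for ap in resp.get("AccessPointList", []))
        token = resp.get("NextToken")
        if not token:
            return names
```

With a real client, this loop would issue one `ListAccessPoints` request per page of results.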

## Using the REST API


You can use the REST API to list your access points. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPoints.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPoints.html) in the *Amazon Simple Storage Service API Reference*.

# View details for your access point for general purpose buckets


This section explains how to view details for your access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API.

## Using the S3 console


**To view details for your access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. Select the **Properties** tab to view the access point data source, account ID, AWS Region, creation date, network origin, S3 URI, ARN, and access point alias for the selected access point.

1. Select the **Permissions** tab to view the block public access settings and access point policy for the selected access point.
**Note**  
You can't change any block public access settings for an access point after the access point is created.

## Using the AWS CLI


The following `get-access-point` example command shows how you can use the AWS CLI to view details for your access point.

The following command lists details for the access point *my-access-point* for AWS account *111122223333* attached to S3 bucket *amzn-s3-demo-bucket*.

```
aws s3control get-access-point --name my-access-point --account-id 111122223333         
```

Example output:

```
{
    "Name": "my-access-point",
    "Bucket": "amzn-s3-demo-bucket",
    "NetworkOrigin": "Internet",
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "CreationDate": "2016-08-29T22:57:52Z",
    "Alias": "my-access-point-u1ny6bhm7moymqx8cuon8o1g4mwikuse2a-s3alias",
    "AccessPointArn": "arn:aws:s3:AWS Region:111122223333:accesspoint/my-access-point",
    "Endpoints": {
        "ipv4": "s3-accesspoint.AWS Region.amazonaws.com",
        "fips": "s3-accesspoint-fips.AWS Region.amazonaws.com",
        "fips_dualstack": "s3-accesspoint-fips.dualstack.AWS Region.amazonaws.com",
        "dualstack": "s3-accesspoint.dualstack.AWS Region.amazonaws.com"
    },
    "BucketAccountId": "111122223333"
}
```

The following command lists details for the access point *example-fsx-ap* for AWS account *444455556666*. This access point is attached to an Amazon FSx file system.

```
aws s3control get-access-point --name example-fsx-ap --account-id 444455556666         
```

Example output:

```
{
    "Name": "example-fsx-ap",
    "Bucket": "",
    "NetworkOrigin": "Internet",
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "CreationDate": "2025-01-19T14:16:12Z",
    "Alias": "example-fsx-ap-qrqbyebjtsxorhhaa5exx6r3q7-ext-s3alias",
    "AccessPointArn": "arn:aws:s3:AWS Region:444455556666:accesspoint/example-fsx-ap",
    "Endpoints": {
        "ipv4": "s3-accesspoint.AWS Region.amazonaws.com",
        "fips": "s3-accesspoint-fips.AWS Region.amazonaws.com",
        "fips_dualstack": "s3-accesspoint-fips.dualstack.AWS Region.amazonaws.com",
        "dualstack": "s3-accesspoint.dualstack.AWS Region.amazonaws.com"
    },
    "DataSourceId": "arn:aws:fsx:AWS Region:444455556666:file-system/fs-5432106789abcdef0/volume/vol-0123456789abcdef0",
    "DataSourceType": "FSX_OPENZFS"
}
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/get-access-point.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/get-access-point.html) in the *AWS CLI Command Reference*.
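The `Endpoints` map in the `get-access-point` response lists hostnames for standard, FIPS, and dual-stack access. The following hypothetical helper picks an endpoint from a response shaped like the examples above; the `us-west-2` hostnames are illustrative.

```python
def choose_endpoint(response, fips=False, dualstack=False):
    """Pick an endpoint hostname from a GetAccessPoint response.

    The keys match the Endpoints map in the example response:
    ipv4, fips, dualstack, and fips_dualstack.
    """
    if fips and dualstack:
        key = "fips_dualstack"
    elif fips:
        key = "fips"
    elif dualstack:
        key = "dualstack"
    else:
        key = "ipv4"
    return response["Endpoints"][key]

response = {"Endpoints": {
    "ipv4": "s3-accesspoint.us-west-2.amazonaws.com",
    "fips": "s3-accesspoint-fips.us-west-2.amazonaws.com",
    "fips_dualstack": "s3-accesspoint-fips.dualstack.us-west-2.amazonaws.com",
    "dualstack": "s3-accesspoint.dualstack.us-west-2.amazonaws.com",
}}
print(choose_endpoint(response, fips=True))
# arn requests for FIPS go to s3-accesspoint-fips.us-west-2.amazonaws.com
```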

## Using the REST API


You can use the REST API to view details for your access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html) in the *Amazon Simple Storage Service API Reference*.

# Delete your access point for a general purpose bucket


This section explains how to delete your access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API.

## Using the S3 console


**To delete an access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. From the **Access Point** page, choose **Delete**.

1. To confirm deletion, type the name of the access point and choose **Delete**.

## Using the AWS CLI


The following `delete-access-point` example command shows how you can use the AWS CLI to delete your access point.

The following command deletes the access point *my-access-point* for AWS account *111122223333*.

```
aws s3control delete-access-point --name my-access-point --account-id 111122223333      
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/delete-access-point.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/delete-access-point.html) in the *AWS CLI Command Reference*.

## Using the REST API


You can use the REST API to delete your access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html) in the *Amazon Simple Storage Service API Reference*.

# Using Amazon S3 access points for general purpose buckets
Using access points

The following examples demonstrate how to use access points for general purpose buckets with compatible operations in Amazon S3.

**Note**  
S3 automatically generates access point aliases for all access points. You can use these aliases anywhere that a bucket name is used to perform object-level operations. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).

You can only use access points for general purpose buckets to perform operations on objects. You can't use access points to perform other Amazon S3 operations, such as modifying or deleting buckets. For a complete list of S3 operations that support access points, see [Access point compatibility](access-points-service-api-support.md).

**Topics**
+ [List objects through an access point for a general purpose bucket](list-object-ap.md)
+ [Download an object through an access point for a general purpose bucket](get-object-ap.md)
+ [Configure access control lists (ACLs) through an access point for a general purpose bucket](put-acl-permissions-ap.md)
+ [Upload an object through an access point for a general purpose bucket](put-object-ap.md)
+ [Add a tag-set through an access point for a general purpose bucket](add-tag-set-ap.md)
+ [Delete an object through an access point for a general purpose bucket](delete-object-ap.md)

# List objects through an access point for a general purpose bucket


This section explains how to list your objects through an access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API.

## Using the S3 console


**To list your objects through an access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. Under the **Objects** tab, you can view the names of the objects that you can access through the access point. While you're using the access point, you can perform only the object operations that are allowed by the access point permissions.
**Note**  
The console view always shows all objects in the bucket. Using an access point as described in this procedure restricts the operations you can perform on those objects, but not whether you can see that they exist in the bucket.
The AWS Management Console doesn't support using virtual private cloud (VPC) access points to access bucket resources. To access bucket resources from a VPC access point, use the AWS CLI, AWS SDKs, or Amazon S3 REST APIs.

## Using the AWS CLI


The following `list-objects-v2` example command shows how you can use the AWS CLI to list your objects through an access point.

The following command lists objects for AWS account *111122223333* using access point *my-access-point*.

```
aws s3api list-objects-v2 --bucket arn:aws:s3:AWS Region:111122223333:accesspoint/my-access-point      
```

**Note**  
S3 automatically generates access point aliases for all access points. You can use these aliases anywhere that a bucket name is used to perform object-level operations. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-objects-v2.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-objects-v2.html) in the *AWS CLI Command Reference*.
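The `--bucket` value in this command is an access point ARN rather than a bucket name. The ARN follows the fixed format `arn:aws:s3:region:account-id:accesspoint/name`, so you can assemble it from its parts:

```python
def access_point_arn(region, account_id, access_point_name):
    """Build the ARN that s3api commands accept in place of a bucket name."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{access_point_name}"

print(access_point_arn("us-east-1", "111122223333", "my-access-point"))
# arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point
```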

## Using the REST API


You can use the REST API to list objects through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) in the *Amazon Simple Storage Service API Reference*.

# Download an object through an access point for a general purpose bucket


This section explains how to download an object through an access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API.

## Using the S3 console


**To download an object through an access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. Under the **Objects** tab, select the name of the object that you want to download.

1. Choose **Download**.

## Using the AWS CLI


The following `get-object` example command shows how you can use the AWS CLI to download an object through an access point.

The following command downloads the object `puppy.jpg` for AWS account *111122223333* using access point *my-access-point*. You must include an `outfile`, which is a file name for the downloaded object, such as `my_downloaded_image.jpg`.

```
aws s3api get-object --bucket arn:aws:s3:AWS Region:111122223333:accesspoint/my-access-point --key puppy.jpg my_downloaded_image.jpg      
```

**Note**  
S3 automatically generates access point aliases for all access points. You can use these aliases anywhere that a bucket name is used to perform object-level operations. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html) in the *AWS CLI Command Reference*.

## Using the REST API


You can use the REST API to download an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


You can use the AWS SDK for Python to download an object through an access point. 

------
#### [ Python ]

In the following example, the file named `hello.txt` is downloaded for AWS account *111122223333* using the access point named *my-access-point*.

```
import boto3

s3 = boto3.client('s3')

# Download the object through the access point. The access point ARN is
# passed where a bucket name is normally expected.
s3.download_file(
    'arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point',
    'hello.txt', '/tmp/hello.txt')
```

------

# Configure access control lists (ACLs) through an access point for a general purpose bucket


This section explains how to configure ACLs through an access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md). 

## Using the S3 console


**To configure ACLs through an access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. Under the **Objects** tab, select the name of the object that you want to configure an ACL for.

1. Under the **Permissions** tab, select **Edit** to configure the object ACL.
**Note**  
Amazon S3 currently doesn't support changing an access point's block public access settings after the access point has been created.

## Using the AWS CLI


The following `put-object-acl` example command shows how you can use the AWS CLI to configure access permissions through an access point using an ACL.

The following command applies an ACL to an existing object `puppy.jpg` through an access point owned by AWS account *111122223333*.

```
aws s3api put-object-acl --bucket arn:aws:s3:AWS Region:111122223333:accesspoint/my-access-point --key puppy.jpg --acl private      
```

**Note**  
S3 automatically generates access point aliases for all access points. You can use these aliases anywhere that a bucket name is used to perform object-level operations. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-acl.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-acl.html) in the *AWS CLI Command Reference*.
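The `--acl` flag accepts a canned ACL name such as `private` or `public-read`. The following sketch fails fast on a mistyped name before the API call is made; the set below covers the commonly used canned object ACLs and is not necessarily exhaustive.

```python
# Commonly used canned ACLs for object-level operations (illustrative list).
CANNED_OBJECT_ACLS = {
    "private", "public-read", "public-read-write", "authenticated-read",
    "bucket-owner-read", "bucket-owner-full-control",
}

def validate_canned_acl(acl):
    """Raise ValueError for an unrecognized canned ACL name."""
    if acl not in CANNED_OBJECT_ACLS:
        raise ValueError(f"unknown canned ACL: {acl!r}")
    return acl

print(validate_canned_acl("private"))
# private
```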

## Using the REST API


You can use the REST API to configure access permissions through an access point using an ACL. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) in the *Amazon Simple Storage Service API Reference*.

# Upload an object through an access point for a general purpose bucket


This section explains how to upload an object through an access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API.

## Using the S3 console


**To upload an object through an access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. Under the **Objects** tab, select **Upload**.

1. Drag and drop files and folders you want to upload here, or choose **Add files** or **Add folder**.
**Note**  
The maximum size of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160 GB, use the AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API.

1. To change access control list permissions, choose **Permissions**.

1. Under **Access control list (ACL)**, edit the permissions.

   For information about object access permissions, see [Using the S3 console to set ACL permissions for an object](managing-acls.md#set-object-permissions). You can grant read access to your objects to the public (everyone in the world) for all of the files that you're uploading. However, we recommend not changing the default setting for public read access. Granting public read access is applicable to a small subset of use cases, such as when buckets are used for websites. You can always change the object permissions after you upload the object. 

1. To configure other additional properties, choose **Properties**.

1. Under **Storage class**, choose the storage class for the files that you're uploading.

   For more information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md).

1. To update the encryption settings for your objects, under **Server-side encryption settings**, do the following.

   1. Choose **Specify an encryption key**.

   1. Under **Encryption settings**, choose **Use bucket settings for default encryption** or **Override bucket settings for default encryption**.

   1. If you chose **Override bucket settings for default encryption**, you must configure the following encryption settings.
      + To encrypt the uploaded files by using keys that are managed by Amazon S3, choose **Amazon S3 managed key (SSE-S3)**.

        For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md).
      + To encrypt the uploaded files by using keys stored in AWS Key Management Service (AWS KMS), choose **AWS Key Management Service key (SSE-KMS)**. Then choose one of the following options for **AWS KMS key**:
        + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and then choose your **KMS key** from the list of available keys.

          Both the AWS managed key (`aws/s3`) and your customer managed keys appear in this list. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
        + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and then enter your KMS key ARN in the field that appears. 
        + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

          For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com//kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that is not listed, you must enter your KMS key ARN. If you want to use a KMS key that is owned by a different account, you must first have permission to use the key and then you must enter the KMS key ARN.   
Amazon S3 supports only symmetric encryption KMS keys, and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

1. To use additional checksums, choose **On**. Then for **Checksum function**, choose the function that you would like to use. Amazon S3 calculates and stores the checksum value after it receives the entire object. You can use the **Precalculated value** box to supply a precalculated value. If you do, Amazon S3 compares the value that you provided to the value that it calculates. If the two values do not match, Amazon S3 generates an error.

   Additional checksums enable you to specify the checksum algorithm that you would like to use to verify your data. For more information about additional checksums, see [Checking object integrity in Amazon S3](checking-object-integrity.md).

1. To add tags to all of the objects that you are uploading, choose **Add tag**. Enter a tag name in the **Key** field. Enter a value for the tag.

   Object tagging gives you a way to categorize storage. Each tag is a key-value pair. Tag keys and values are case sensitive. You can have up to 10 tags per object. A tag key can be up to 128 Unicode characters in length, and tag values can be up to 256 Unicode characters in length. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md).

1. To add metadata, choose **Add metadata**.

   1. Under **Type**, choose **System defined** or **User defined**.

      For system-defined metadata, you can select common HTTP headers, such as **Content-Type** and **Content-Disposition**. For a list of system-defined metadata and information about whether you can add the value, see [System-defined object metadata](UsingMetadata.md#SysMetadata). Any metadata starting with the prefix `x-amz-meta-` is treated as user-defined metadata. User-defined metadata is stored with the object and is returned when you download the object. Both the keys and their values must conform to US-ASCII standards. User-defined metadata can be as large as 2 KB. For more information about system-defined and user-defined metadata, see [Working with object metadata](UsingMetadata.md).

   1. For **Key**, choose a key.

   1. Type a value for the key. 

1. To upload your objects, choose **Upload**.

   Amazon S3 uploads your object. When the upload completes, you can see a success message on the **Upload: status** page.

## Using the AWS CLI


The following `put-object` example command shows how you can use the AWS CLI to upload an object through an access point.

The following command uploads the object `puppy.jpg` for AWS account *111122223333* using access point *my-access-point*.

```
aws s3api put-object --bucket arn:aws:s3:AWS Region:111122223333:accesspoint/my-access-point --key puppy.jpg --body puppy.jpg      
```

**Note**  
S3 automatically generates access point aliases for all access points. You can use these aliases anywhere that a bucket name is used to perform object-level operations. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object.html) in the *AWS CLI Command Reference*.

## Using the REST API


You can use the REST API to upload an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


You can use the AWS SDK for Python to upload an object through an access point. 

------
#### [ Python ]

In the following example, the file named `hello.txt` is uploaded for AWS account *111122223333* using the access point named *my-access-point*.

```
import boto3

s3 = boto3.client('s3')

# Upload the local file through the access point. The access point ARN is
# passed where a bucket name is normally expected.
s3.upload_file(
    '/tmp/hello.txt',
    'arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point',
    'hello.txt')
```

------

# Add a tag-set through an access point for a general purpose bucket


This section explains how to add a tag-set through an access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API. For more information, see [Categorizing your objects using tags](object-tagging.md).

## Using the S3 console


**To add a tag-set through an access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. Under the **Objects** tab, select the name of the object you wish to add a tag-set to.

1. Under the **Properties** tab, find the **Tags** sub-header and choose **Edit**.

1. Review the objects listed, and choose **Add tag**.

1. Each object tag is a key-value pair. Enter a **Key** and a **Value**. To add another tag, choose **Add Tag**.

   You can enter up to 10 tags for an object.

1. Choose **Save changes**.

## Using the AWS CLI


The following `put-object-tagging` example command shows how you can use the AWS CLI to add a tag-set through an access point.

The following command adds a tag-set for existing object `puppy.jpg` using access point *my-access-point*.

```
aws s3api put-object-tagging --bucket arn:aws:s3:region:111122223333:accesspoint/my-access-point --key puppy.jpg --tagging 'TagSet=[{Key=animal,Value=true}]'
```

**Note**  
Amazon S3 automatically generates an alias for every access point. You can use an access point alias anywhere that a bucket name is used to perform object-level operations. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-tagging.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-tagging.html) in the *AWS CLI Command Reference*.
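If you script tagging in Python instead of the CLI, the same `TagSet` structure can be built programmatically. The following helper is a minimal sketch (the `build_tag_set` name is ours, not an AWS API); the commented-out boto3 call shows how the result would be used, assuming boto3 is installed and AWS credentials are configured.

```python
# Build a TagSet structure like the one the put-object-tagging command above expects.
def build_tag_set(tags):
    """Convert a plain dict such as {'animal': 'true'} into S3's TagSet list form."""
    return [{"Key": k, "Value": v} for k, v in tags.items()]

tag_set = build_tag_set({"animal": "true"})
print(tag_set)  # [{'Key': 'animal', 'Value': 'true'}]

# Applying it through the access point would look like this (requires boto3
# and configured AWS credentials, so it is commented out here):
# import boto3
# s3 = boto3.client("s3")
# s3.put_object_tagging(
#     Bucket="arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point",
#     Key="puppy.jpg",
#     Tagging={"TagSet": tag_set},
# )
```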

## Using the REST API


You can use the REST API to add a tag-set to an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html) in the *Amazon Simple Storage Service API Reference*.

# Delete an object through an access point for a general purpose bucket


This section explains how to delete an object through an access point for a general purpose bucket using the AWS Management Console, AWS Command Line Interface, or REST API.

## Using the S3 console


**To delete an object or objects through an access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access Points**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage or use.

1. Under the **Objects** tab, select the name of the object or objects that you want to delete.

1. Review the objects listed for deletion, and type *delete* in the confirmation box.

1. Choose **Delete objects**.

## Using the AWS CLI


The following `delete-object` example command shows how you can use the AWS CLI to delete an object through an access point.

The following command deletes the existing object `puppy.jpg` using access point *my-access-point*.

```
aws s3api delete-object --bucket arn:aws:s3:region:111122223333:accesspoint/my-access-point --key puppy.jpg
```

**Note**  
Amazon S3 automatically generates an alias for every access point. You can use an access point alias anywhere that a bucket name is used to perform object-level operations. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-object.html) in the *AWS CLI Command Reference*.

## Using the REST API


You can use the REST API to delete an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html) in the *Amazon Simple Storage Service API Reference*.

# Using tags with S3 Access Points for general purpose buckets
Tagging Access Points

An AWS tag is a key-value pair that holds metadata about resources, in this case Amazon S3 Access Points. You can tag access points when you create them or manage tags on existing access points. For general information about tags, see [Tagging for cost allocation or attribute-based access control (ABAC)](tagging.md).

**Note**  
There is no additional charge for using tags on access points beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

## Common ways to use tags with access points


Attribute-based access control (ABAC) allows you to scale access permissions and grant access to access points based on their tags. For more information about ABAC in Amazon S3, see [Using tags for ABAC](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#).

### ABAC for S3 Access Points


Amazon S3 Access Points support attribute-based access control (ABAC) using tags. You can use tag-based condition keys in your AWS Organizations, IAM, and access point policies. For enterprises, ABAC in Amazon S3 supports authorization across multiple AWS accounts. 

In your IAM policies, you can control access to access points based on the access point's tags by using the following [global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-tagkeys):
+ `aws:ResourceTag/key-name`
**Important**  
The `aws:ResourceTag` condition key can be used only for S3 actions performed through an access point ARN for general purpose buckets, and it evaluates the underlying access point tags only.
  + Use this key to compare the tag key-value pair that you specify in the policy with the key-value pair attached to the resource. For example, you could require that access to a resource is allowed only if the resource has the attached tag key `Dept` with the value `Marketing`. For more information, see [Controlling access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources).
+ `aws:RequestTag/key-name`
  + Use this key to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy. For example, you could check whether the request includes the tag key `Dept` and that it has the value `Accounting`. For more information, see [Controlling access during AWS requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-requests). You can use this condition key to restrict which tag key-value pairs can be passed during the `TagResource` and `CreateAccessPoint` API operations.
+ `aws:TagKeys`
  + Use this key to compare the tag keys in a request with the keys that you specify in the policy. We recommend that when you use policies to control access using tags, use the `aws:TagKeys` condition key to define what tag keys are allowed. For example policies and more information, see [Controlling access based on tag keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-tag-keys). You can create an access point with tags. To allow tagging during the `CreateAccessPoint` API operation, you must create a policy that includes both the `s3:TagResource` and `s3:CreateAccessPoint` actions. You can then use the `aws:TagKeys` condition key to enforce using specific tags in the `CreateAccessPoint` request.
+ `s3:AccessPointTag/tag-key`
  + Use this condition key to grant permissions to specific data through access points based on tags. When you use `aws:ResourceTag/tag-key` in an IAM policy, both the access point and the bucket that the access point points to must have the same tag, because both are considered during authorization. If you want to control access to your data based only on the access point's tag, use the `s3:AccessPointTag/tag-key` condition key instead.
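To make the matching semantics of these condition keys concrete, here is a toy Python model of two of the checks: the `aws:ResourceTag` versus `${aws:PrincipalTag}` comparison, and the `ForAllValues:StringEquals` check on `aws:TagKeys`. This is an illustration we wrote for this page, not IAM's actual policy evaluator.

```python
# Toy model of the tag-based condition checks described above (not IAM's real evaluator).
def resource_tag_matches(resource_tags, principal_tags, key="project"):
    """Mirror aws:ResourceTag/project == ${aws:PrincipalTag/project}: allow only
    when the resource carries the key and both sides have the same value."""
    return key in resource_tags and resource_tags.get(key) == principal_tags.get(key)

def tag_keys_allowed(request_tags, allowed_keys):
    """Mirror ForAllValues:StringEquals on aws:TagKeys: every tag key in the
    request must appear in the allowed set."""
    return set(request_tags) <= set(allowed_keys)

print(resource_tag_matches({"project": "Trinity"}, {"project": "Trinity"}))  # True
print(tag_keys_allowed({"project": "x", "owner": "y"},
                       ["project", "environment", "owner", "cost-center"]))  # True
```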

### Example ABAC policies for access points


See the following example ABAC policies for Amazon S3 Access Points.

#### 1.1 - IAM policy to create or modify access points with specific tags


With this IAM policy, users or roles can create access points only if they tag the access points with the tag key `project` and tag value `Trinity` in the access point creation request. They can also add or modify tags on existing access points as long as the `TagResource` request includes the tag key-value pair `project:Trinity`. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateAccessPointWithTags",
      "Effect": "Allow",
      "Action": [
        "s3:CreateAccessPoint",
        "s3:TagResource"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/project": [
            "Trinity"
          ]
        }
      }
    }
  ]
}
```

#### 1.2 - Access Point policy to restrict operations on the access point using tags


In this access point policy, IAM principals (users and roles) can perform operations using the `s3:GetObject` action on the access point only if the value of the access point's `project` tag matches the value of the principal's `project` tag.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectOperations",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111122223333"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:region:111122223333:accesspoint/my-access-point",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        }
      }
    }
  ]
}
```

#### 1.3 - IAM policy to modify tags on existing resources while maintaining tagging governance


In this IAM policy, IAM principals (users or roles) can modify tags on an access point only if the value of the access point's `project` tag matches the value of the principal's `project` tag. Only the four tag keys `project`, `environment`, `owner`, and `cost-center` specified in the `aws:TagKeys` condition key are permitted for these access points. This helps enforce tag governance, prevents unauthorized tag modifications, and keeps the tagging schema consistent across your access points.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceTaggingRulesOnModification",
      "Effect": "Allow",
      "Action": [
        "s3:TagResource"
      ],
      "Resource": "arn:aws:s3:region:111122223333:accesspoint/my-access-point",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "project",
            "environment",
            "owner",
            "cost-center"
          ]
        }
      }
    }
  ]
}
```

#### 1.4 - Using the s3:AccessPointTag condition key


In this IAM policy, the condition statement allows access to the bucket's data if the access point has the tag key `Environment` and tag value `Production`. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSpecificAccessPoint",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws:s3:region:111122223333:accesspoint/my-access-point",
      "Condition": {
        "StringEquals": {
          "s3:AccessPointTag/Environment": "Production"
        }
      }
    }
  ]
}
```

#### 1.5 - Using a bucket delegate policy


In Amazon S3, you can delegate access to or control of your S3 bucket policy to another AWS account or to a specific AWS Identity and Access Management (IAM) user or role in the other account. The delegate bucket policy grants this other account, user, or role permission to your bucket and its objects. For more information, see [Permission delegation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html#permission-delegation). 

Suppose that you use a delegate bucket policy such as the following: 

```
{
  "Version": "2012-10-17",
  "Statement": {
    "Principal": {"AWS": "*"},
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": [
      "arn:aws:s3:::amzn-s3-demo-bucket/*",
      "arn:aws:s3:::amzn-s3-demo-bucket"
    ],
    "Condition": {
      "StringEquals": {
        "s3:DataAccessPointAccount": "111122223333"
      }
    }
  }
}
```

Then, in the following IAM policy, the condition statement allows access to the bucket's data only if the access point has the tag key `Environment` and tag value `Production`. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSpecificAccessPoint",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws:s3:region:111122223333:accesspoint/my-access-point",
      "Condition": {
        "StringEquals": {
          "s3:AccessPointTag/Environment": "Production"
        }
      }
    }
  ]
}
```

## Working with tags for access points for general purpose buckets


You can add or manage tags for access points by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 control API operations: [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html), [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html), and [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html). For more information, see the following topics.

**Topics**
+ [Common ways to use tags with access points](#common-ways-to-use-tags-directory-bucket)
+ [Working with tags for access points for general purpose buckets](#working-with-tags-access-points)
+ [Creating access points with tags](access-points-create-tag.md)
+ [Adding a tag to an access point](access-points-tag-add.md)
+ [Viewing access point tags](access-points-tag-view.md)
+ [Deleting a tag from an access point](access-points-tag-delete.md)

# Creating access points with tags


You can tag access points when you create them. There is no additional charge for using tags on access points beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For more information about tagging access points, see [Using tags with S3 Access Points for general purpose buckets](access-points-tagging.md).

## Permissions


To create an access point with tags, you must have the following permissions:
+ `s3:CreateAccessPoint`
+ `s3:TagResource`

## Troubleshooting errors


If you encounter an error when attempting to create an access point with tags, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-create-tag-permissions) to create the access point and add a tag to it.
+ Check your IAM user policy for any attribute-based access control (ABAC) conditions. You may be required to label your access points only with specific tag keys and values. For more information, see [Using tags for attribute-based access control (ABAC)](tagging.md#using-tags-for-abac).

## Steps


You can create an access point with tags applied by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console


To create an access point with tags using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (General Purpose Buckets)**.

1. Choose **Create access point** to create a new access point.

1. On the **Create access point** page, you can specify **Tags** as part of creating the new access point.

1. Enter a name for the access point. For more information, see [Access points naming rules, restrictions, and limitations](access-points-restrictions-limitations-naming-rules.md).

1. Choose **Add new Tag** to open the **Tags** editor and enter a tag key-value pair. The tag key is required, but the value is optional. 

1. To add another tag, select **Add new Tag** again. You can enter up to 50 tag key-value pairs.

1. After you complete specifying the options for your new access point, choose **Create access point**. 
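Before you send a `CreateAccessPoint` request, you can pre-check a proposed tag set against the constraints mentioned on this page: up to 50 tags per access point, and tag keys must not use the reserved `aws:` prefix. The following validator is an illustrative sketch we wrote for this page, not an AWS API.

```python
RESERVED_PREFIX = "aws:"   # tag keys with this prefix are reserved by AWS
MAX_TAGS = 50              # maximum tag key-value pairs on an access point

def validate_access_point_tags(tags):
    """Return a list of problems with a proposed tag set; an empty list means OK."""
    problems = []
    if len(tags) > MAX_TAGS:
        problems.append(f"too many tags: {len(tags)} > {MAX_TAGS}")
    for key in tags:
        if not key:
            problems.append("tag keys must not be empty")
        elif key.lower().startswith(RESERVED_PREFIX):
            problems.append(f"tag key {key!r} uses the reserved aws: prefix")
    return problems

print(validate_access_point_tags({"project": "Trinity"}))  # []
```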

## Using the AWS SDKs


------
#### [ SDK for Java 2.x ]

This example shows you how to create an access point with tags by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
CreateAccessPointRequest createAccessPointRequest = CreateAccessPointRequest.builder()
                .accountId("111122223333")
                .name("my-access-point")
                .bucket("amzn-s3-demo-bucket")
                .tags(Collections.singletonList(Tag.builder().key("key1").value("value1").build()))
                .build();
awss3Control.createAccessPoint(createAccessPointRequest);
```

------

## Using the REST API


For information about the Amazon S3 REST API support for creating an access point with tags, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [CreateAccessPoint](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html)

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to create an access point with tags by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control create-access-point --name my-access-point \
--bucket amzn-s3-demo-bucket \
--account-id 111122223333 \
--tags Key=key1,Value=value1 Key=key2,Value=value2 \
--region region
```

# Adding a tag to an access point




You can add tags to Amazon S3 Access Points and modify these tags. There is no additional charge for using tags on access points beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For more information about tagging access points, see [Using tags with S3 Access Points for general purpose buckets](access-points-tagging.md).

## Permissions


To add a tag to an access point, you must have the following permission:
+ `s3:TagResource`

## Troubleshooting errors


If you encounter an error when attempting to add a tag to an access point, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-tag-add-permissions) to add a tag to an access point.
+ If you attempted to add a tag key that starts with the AWS reserved prefix `aws:`, change the tag key and try again. 

## Steps


You can add tags to access points by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console


To add tags to an access point using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (General Purpose Buckets)**.

1. Choose the access point name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and choose **Add new Tag**. 

1. On the **Add Tags** page, enter up to 50 tag key-value pairs. You can also edit the values of existing tags on this page.

   If you add a new tag with the same key name as an existing tag, the value of the new tag overrides the value of the existing tag.

1. After you have added the tags, choose **Save changes**. 

## Using the AWS SDKs


------
#### [ SDK for Java 2.x ]

This example shows you how to add tags to an access point by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
TagResourceRequest tagResourceRequest = TagResourceRequest.builder()
        .resourceArn("arn:aws:s3:region:111122223333:accesspoint/my-access-point")
        .accountId("111122223333")
        .tags(List.of(Tag.builder().key("key1").value("value1").build(),
                Tag.builder().key("key2").value("value2").build()))
        .build();
awss3Control.tagResource(tagResourceRequest);
```

------

## Using the REST API


For information about the Amazon S3 REST API support for adding tags to an access point, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html)

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to add tags to an access point by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control tag-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3:region:111122223333:accesspoint/my-access-point \
--tags "Key=key1,Value=value1"
```

**Response:**

```
{
  "ResponseMetadata": {
      "RequestId": "EXAMPLE123456789",
      "HTTPStatusCode": 200,
      "HTTPHeaders": {
          "date": "Wed, 19 Jun 2025 10:30:00 GMT",
          "content-length": "0"
      },
      "RetryAttempts": 0
  }
}
```
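The `--resource-arn` in the request above follows the access point ARN format. The following small helper (a name we chose for illustration) builds that ARN; the commented-out boto3 call sketches the equivalent SDK request, assuming boto3 is installed, credentials are configured, and your SDK version exposes the S3 Control `tag_resource` operation.

```python
def access_point_arn(region, account_id, name):
    """Build a general purpose bucket access point ARN in the format used above."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

arn = access_point_arn("us-east-1", "111122223333", "my-access-point")
print(arn)  # arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point

# Equivalent tagging call with boto3 (requires AWS credentials, so commented out):
# import boto3
# s3control = boto3.client("s3control")
# s3control.tag_resource(
#     AccountId="111122223333",
#     ResourceArn=arn,
#     Tags=[{"Key": "key1", "Value": "value1"}],
# )
```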

# Viewing access point tags


You can view or list tags applied to access points. For more information about tags, see [Using tags with S3 Access Points for general purpose buckets](access-points-tagging.md).

## Permissions


To view tags applied to an access point, you must have the following permission: 
+ `s3:ListTagsForResource`

## Troubleshooting errors


If you encounter an error when attempting to list or view the tags of an access point, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-tag-view-permissions) to view or list the tags of the access point.

## Steps


You can view tags applied to access points by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console


To view tags applied to an access point using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (General Purpose Buckets)**.

1. Choose the access point name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section to view all of the tags applied to the access point. 

1. The **Tags** section shows the **User-defined tags** by default. You can select the **AWS-generated tags** tab to view tags applied to your access point by AWS services.

## Using the AWS SDKs


This section provides an example of how to view tags applied to an access point by using the AWS SDKs.

------
#### [ SDK for Java 2.x ]

This example shows you how to view tags applied to an access point by using the AWS SDK for Java 2.x. 

```
ListTagsForResourceRequest listTagsForResourceRequest = ListTagsForResourceRequest.builder()
        .resourceArn("arn:aws:s3:region:111122223333:accesspoint/my-access-point")
        .accountId("111122223333")
        .build();
awss3Control.listTagsForResource(listTagsForResourceRequest);
```

------

## Using the REST API


For information about the Amazon S3 REST API support for viewing the tags applied to an access point, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html)

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to view tags applied to an access point. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control list-tags-for-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3:region:111122223333:accesspoint/my-access-point
```

**Response - tags present:**

```
{
  "Tags": [
      {
          "Key": "MyKey1",
          "Value": "MyValue1"
      },
      {
          "Key": "MyKey2",
          "Value": "MyValue2"
      },
      {
          "Key": "MyKey3",
          "Value": "MyValue3"
      }
  ]
}
```

**Response - no tags present:**

```
{
  "Tags": []
}
```
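When you consume a `ListTagsForResource` response in code, it is often convenient to flatten the `Tags` list into a plain dictionary. The following is a minimal sketch (the `tags_to_dict` helper name is ours) that handles both of the response shapes shown above.

```python
def tags_to_dict(response):
    """Flatten the Tags list from a ListTagsForResource response into a dict."""
    return {tag["Key"]: tag["Value"] for tag in response.get("Tags", [])}

resp = {"Tags": [{"Key": "MyKey1", "Value": "MyValue1"},
                 {"Key": "MyKey2", "Value": "MyValue2"}]}
print(tags_to_dict(resp))           # {'MyKey1': 'MyValue1', 'MyKey2': 'MyValue2'}
print(tags_to_dict({"Tags": []}))   # {} -- the "no tags present" case
```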

# Deleting a tag from an access point


You can remove tags from Amazon S3 Access Points. An AWS tag is a key-value pair that holds metadata about resources, in this case Access Points. For more information about tags, see [Using tags with S3 Access Points for general purpose buckets](access-points-tagging.md).

**Note**  
If you delete a tag and later learn that it was being used to track costs or for access control, you can add the tag back to the access point. 

## Permissions


To delete a tag from an access point, you must have the following permission: 
+ `s3:UntagResource`

## Troubleshooting errors


If you encounter an error when attempting to delete a tag from an access point, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-tag-delete-permissions) to delete a tag from an access point.

## Steps


You can delete tags from access points by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console


To delete tags from an access point using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (General Purpose Buckets)**.

1. Choose the access point name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and select the checkbox next to the tag or tags that you would like to delete. 

1. Choose **Delete**. 

1. The **Delete user-defined tags** pop-up appears and asks you to confirm the deletion of the tag or tags you selected. 

1. Choose **Delete** to confirm.

## Using the AWS SDKs


------
#### [ SDK for Java 2.x ]

This example shows you how to delete tags from an access point by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
UntagResourceRequest untagResourceRequest = UntagResourceRequest.builder()
                .resourceArn("arn:aws:s3:region:111122223333:accesspoint/my-access-point")
                .accountId("111122223333")
                .tagKeys(List.of("key1", "key2"))
                .build();
awss3Control.untagResource(untagResourceRequest);
```

------

## Using the REST API


For information about the Amazon S3 REST API support for deleting tags from an access point, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html)

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to delete tags from an access point by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control untag-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3:region:111122223333:accesspoint/my-access-point \
--tag-keys "key1" "key2"
```

**Response:**

```
{
  "ResponseMetadata": {
    "RequestId": "EXAMPLE123456789",
    "HTTPStatusCode": 204,
    "HTTPHeaders": {
        "date": "Wed, 19 Jun 2025 10:30:00 GMT",
        "content-length": "0"
    },
    "RetryAttempts": 0
  }
}
```

# Managing access with S3 Access Grants


To adhere to the principle of least privilege, you define granular access to your Amazon S3 data based on applications, personas, groups, or organizational units. You can use various approaches to achieve granular access to your data in Amazon S3, depending on the scale and complexity of the access patterns. 

The simplest approach for managing access to small-to-medium numbers of datasets in Amazon S3 by AWS Identity and Access Management (IAM) principals is to define [IAM permission policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/user-policies.html) and [S3 bucket policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html). This strategy works, so long as the necessary policies fit within the policy size limits of S3 bucket policies (20 KB) and IAM policies (5 KB), and within the [number of IAM principals allowed per account](https://docs.aws.amazon.com/general/latest/gr/iam-service.html). 

As your number of datasets and use cases scales, you might require more policy space. An approach that offers significantly more space for policy statements is to use [S3 Access Points](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points.html) as additional endpoints for S3 buckets, because each access point can have its own policy. You can define quite granular access control patterns, because you can have thousands of access points per AWS Region per account, with a policy up to 20 KB in size for each access point. Although S3 Access Points increase the amount of policy space available, this approach requires a mechanism for clients to discover the right access point for the right dataset.

A third approach is to implement an [IAM session broker](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html) pattern, in which you implement access-decision logic and dynamically generate short-term IAM session credentials for each access session. While the IAM session broker approach supports arbitrarily dynamic permissions patterns and scales effectively, you must build the access-pattern logic. 

Instead of using these approaches, you can use S3 Access Grants to manage access to your Amazon S3 data. S3 Access Grants provides a simplified model for defining access permissions to data in Amazon S3 by prefix, bucket, or object. In addition, you can use S3 Access Grants to grant access to both IAM principals and directly to users or groups from your corporate directory. 

You commonly define permissions to data in Amazon S3 by mapping users and groups to datasets. You can use S3 Access Grants to define direct access mappings of S3 prefixes to users and roles within Amazon S3 buckets and objects. With the simplified access scheme in S3 Access Grants, you can grant read-only, write-only, or read-write access on a per-S3-prefix basis to both IAM principals and directly to users or groups from a corporate directory. With these S3 Access Grants capabilities, applications can request data from Amazon S3 on behalf of the application's current authenticated user.

When you integrate S3 Access Grants with the [trusted identity propagation](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation.html) feature of AWS IAM Identity Center, your applications can make requests to AWS services (including S3 Access Grants) directly on behalf of an authenticated corporate directory user. Your applications no longer need to first map the user to an IAM principal. Furthermore, because end-user identities are propagated all the way to Amazon S3, auditing which user accessed which S3 object is simplified. You no longer need to reconstruct the relationship between different users and IAM sessions. When you're using S3 Access Grants with IAM Identity Center trusted identity propagation, each [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) data event for Amazon S3 contains a direct reference to the end user on whose behalf the data was accessed.

[Trusted identity propagation](https://docs.aws.amazon.com//singlesignon/latest/userguide/trustedidentitypropagation-overview.html) is an AWS IAM Identity Center feature that administrators of connected AWS services can use to grant and audit access to service data. Access to this data is based on user attributes such as group associations. Setting up trusted identity propagation requires collaboration between the administrators of connected AWS services and the IAM Identity Center administrators. For more information, see [Prerequisites and considerations](https://docs.aws.amazon.com//singlesignon/latest/userguide/trustedidentitypropagation-overall-prerequisites.html).

For more information about S3 Access Grants, see the following topics.

**Topics**
+ [S3 Access Grants concepts](access-grants-concepts.md)
+ [S3 Access Grants and corporate directory identities](access-grants-directory-ids.md)
+ [Getting started with S3 Access Grants](access-grants-get-started.md)
+ [Working with S3 Access Grants instances](access-grants-instance.md)
+ [Working with S3 Access Grants locations](access-grants-location.md)
+ [Working with grants in S3 Access Grants](access-grants-grant.md)
+ [Getting S3 data using access grants](access-grants-data.md)
+ [S3 Access Grants cross-account access](access-grants-cross-accounts.md)
+ [Managing tags for S3 Access Grants](access-grants-tagging.md)
+ [S3 Access Grants limitations](access-grants-limitations.md)
+ [S3 Access Grants integrations](access-grants-integrations.md)

# S3 Access Grants concepts


**S3 Access Grants workflow**  
The S3 Access Grants workflow is: 

1. Create an S3 Access Grants instance. See [Working with S3 Access Grants instances](access-grants-instance.md).

1. Within your S3 Access Grants instance, register locations in your Amazon S3 data, and map these locations to AWS Identity and Access Management (IAM) roles. See [Register a location](access-grants-location-register.md). 

1. Create grants for grantees, which give grantees access to your S3 resources. See [Working with grants in S3 Access Grants](access-grants-grant.md).

1. The grantee requests temporary credentials from S3 Access Grants. See [Request access to Amazon S3 data through S3 Access Grants](access-grants-credentials.md).

1. The grantee accesses the S3 data using those temporary credentials. See [Accessing S3 data using credentials vended by S3 Access Grants](access-grants-get-data.md).

For more information, see [Getting started with S3 Access Grants](access-grants-get-started.md).

 **S3 Access Grants instances**   
An *S3 Access Grants instance* is a logical container for individual *grants*. When you create an S3 Access Grants instance, you must specify an AWS Region. Each AWS Region in your AWS account can have one S3 Access Grants instance. For more information, see [Working with S3 Access Grants instances](access-grants-instance.md).  
If you want to use S3 Access Grants to grant access to user and group identities from your corporate directory, you must also associate your S3 Access Grants instance with an AWS IAM Identity Center instance. For more information, see [S3 Access Grants and corporate directory identities](access-grants-directory-ids.md).  
A newly created S3 Access Grants instance is empty. You must register a location in the instance, which can be the S3 default path (`s3://`), a bucket, or a prefix within a bucket. After you register at least one location, you can create access grants that give access to data in this registered location.

 **Locations**   
An S3 Access Grants *location* maps buckets or prefixes to an AWS Identity and Access Management (IAM) role. S3 Access Grants assumes this IAM role to vend temporary credentials to the grantee that's accessing that particular location. You must first register at least one location in your S3 Access Grants instance before you can create an access grant.   
We recommend that you register the default location (`s3://`) and map it to an IAM role. The location at the default S3 path (`s3://`) covers access to all of your S3 buckets in the AWS Region of your account. When you create an access grant, you can narrow the grant scope to a bucket, a prefix, or an object within the default location.  
More complex access-management use cases might require you to register more than the default location. Some examples of such use cases are:  
+ Suppose that the bucket *amzn-s3-demo-bucket* is registered as a location in your S3 Access Grants instance and is mapped to an IAM role, but this IAM role is denied access to a particular prefix within the bucket. In this case, you can register the prefix that the IAM role can't access as a separate location and map that location to a different IAM role that has the necessary access. 
+ Suppose that you want to create grants that restrict access to only the users within a virtual private cloud (VPC) endpoint. In this case, you can register a location for a bucket and map it to an IAM role that restricts access to the VPC endpoint. Later, when a grantee asks S3 Access Grants for credentials, S3 Access Grants assumes the location's IAM role to vend the temporary credentials. These credentials deny access to the bucket unless the caller is within the VPC endpoint. This deny permission is applied in addition to the regular READ, WRITE, or READWRITE permission specified in the grant.
If your use case requires you to register multiple locations in your S3 Access Grants instance, you can register any of the following:  
+ The default S3 location (`s3://`)
+ A bucket (for example, *amzn-s3-demo-bucket*) or multiple buckets
+ A bucket and a prefix (for example, `amzn-s3-demo-bucket/prefix*`) or multiple prefixes
For the maximum number of locations that you can register in your S3 Access Grants instance, see [S3 Access Grants limitations](access-grants-limitations.md). For more information about registering an S3 Access Grants location, see [Register a location](access-grants-location-register.md).   
After you register the first location in your S3 Access Grants instance, the instance still doesn't contain any individual access grants, so no access to your S3 data has been granted yet. To give access, you can now create access grants. For more information about creating grants, see [Working with grants in S3 Access Grants](access-grants-grant.md). 

 **Grants**   
An individual *grant* in an S3 Access Grants instance allows a specific identity—an IAM principal, or a user or group in a corporate directory—to get access within a location that is registered in your S3 Access Grants instance.   
When you create a grant, you don't have to grant access to the entire registered location. You can narrow the grant's scope of access within a location. If the registered location is the default S3 path (`s3://`), you are required to narrow the scope of the grant to a bucket, a prefix within a bucket, or a specific object. If the registered location of the grant is a bucket or a prefix, then you can give access to the entire bucket or prefix, or you can optionally narrow the scope of the grant to a prefix, subprefix, or an object.  
In the grant, you also set the access level to READ, WRITE, or READWRITE. For example, suppose that you have a grant that gives the corporate directory group `01234567-89ab-cdef-0123-456789abcdef` READ access at the scope `s3://amzn-s3-demo-bucket/projects/items/*`. Users in this group have READ access to every object with a key name that starts with the prefix `projects/items/` in the bucket *amzn-s3-demo-bucket*.   
For the maximum number of grants that you can create in your S3 Access Grants instance, see [S3 Access Grants limitations](access-grants-limitations.md). For more information about creating grants, see [Create grants](access-grants-grant-create.md).
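
A grant's scope behaves like a key-prefix filter. The following sketch illustrates the matching rule described above. This is a conceptual illustration only, with a hypothetical `scope_matches` helper, not the actual S3 Access Grants implementation:

```python
# Conceptual sketch of grant-scope matching; not the actual S3 Access
# Grants implementation. A scope that ends in '*' matches any object
# whose URI starts with that prefix; otherwise it must match exactly.
def scope_matches(grant_scope: str, object_uri: str) -> bool:
    if grant_scope.endswith("*"):
        return object_uri.startswith(grant_scope[:-1])
    return object_uri == grant_scope

# The grant from the example: READ at s3://amzn-s3-demo-bucket/projects/items/*
scope = "s3://amzn-s3-demo-bucket/projects/items/*"
print(scope_matches(scope, "s3://amzn-s3-demo-bucket/projects/items/report.csv"))  # True
print(scope_matches(scope, "s3://amzn-s3-demo-bucket/projects/other/report.csv"))  # False
```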

 **S3 Access Grants temporary credentials**   
After you create a grant, an authorized application that uses the identity specified in the grant can request *just-in-time access credentials*. To do this, the application calls the [GetDataAccess](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetDataAccess.html) S3 API operation. Grantees can use this API operation to request access to the S3 data that you have shared with them.   
The S3 Access Grants instance evaluates the `GetDataAccess` request against the grants that it has. If there is a matching grant for the requestor, S3 Access Grants assumes the IAM role that's associated with the registered location of the matching grant. S3 Access Grants scopes the permissions of the temporary credentials to access only the S3 bucket, prefix, or object that's specified by the grant's scope.  
The temporary access credentials expire after 1 hour by default, but you can set the expiration to any value from 15 minutes to 12 hours. For more information, see the maximum session duration in the [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) API reference. 
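
The duration rule described above can be summarized in a short helper. This is only an illustration of the stated bounds, with hypothetical names; the actual validation is performed server-side by S3 Access Grants:

```python
# Illustration of the credential-duration bounds stated above; the real
# validation is performed by the service. Names here are hypothetical.
from typing import Optional

MIN_SECONDS = 15 * 60        # 15 minutes
MAX_SECONDS = 12 * 60 * 60   # 12 hours
DEFAULT_SECONDS = 60 * 60    # 1 hour (the default)

def validate_duration(requested: Optional[int]) -> int:
    if requested is None:
        return DEFAULT_SECONDS
    if not MIN_SECONDS <= requested <= MAX_SECONDS:
        raise ValueError(
            f"duration must be between {MIN_SECONDS} and {MAX_SECONDS} seconds"
        )
    return requested

print(validate_duration(None))  # 3600
print(validate_duration(7200))  # 7200
```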

## How it works


In the following diagram, a default Amazon S3 location with the scope `s3://` is registered with the IAM role `s3ag-location-role`. This IAM role has permissions to perform Amazon S3 actions within the account when its credentials are obtained through S3 Access Grants. 

Within this location, two individual access grants are created for two IAM users. The IAM user Bob is granted both `READ` and `WRITE` access on the `bob/` prefix in the `DOC-BUCKET-EXAMPLE` bucket. The IAM user Alice is granted only `READ` access on the `alice/` prefix in the `DOC-BUCKET-EXAMPLE` bucket. In the diagram, the grant for Bob to access the prefix `bob/` in the `DOC-BUCKET-EXAMPLE` bucket is colored blue, and the grant for Alice to access the prefix `alice/` in the `DOC-BUCKET-EXAMPLE` bucket is colored green.

When Bob needs to `READ` data, he (or an application acting on his behalf) calls the S3 Access Grants [GetDataAccess](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetDataAccess.html) API operation. If Bob requests access to any S3 prefix or object that starts with `s3://DOC-BUCKET-EXAMPLE/bob/*`, the `GetDataAccess` request returns a set of temporary IAM session credentials with permission to `s3://DOC-BUCKET-EXAMPLE/bob/*`. To vend these credentials, S3 Access Grants assumes the IAM role that's associated with the location that his grant is in. Similarly, Bob can `WRITE` to any S3 prefix or object that starts with `s3://DOC-BUCKET-EXAMPLE/bob/*`, because the grant also allows that.

Similarly, Alice can `READ` anything that starts with `s3://DOC-BUCKET-EXAMPLE/alice/`. However, if she tries to `WRITE` anything to any bucket, prefix, or object in `s3://`, she will get an Access Denied (403 Forbidden) error, because there is no grant that gives her `WRITE` access to any data. In addition, if Alice requests any level of access (`READ` or `WRITE`) to data outside of `s3://DOC-BUCKET-EXAMPLE/alice/`, she will again receive an Access Denied error.

![\[How S3 Access Grants works\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/s3ag-how-it-works.png)


This pattern scales to a high number of users and buckets and simplifies management of those permissions. Rather than editing potentially large S3 bucket policies every time you want to add or remove an individual user-prefix access relationship, you can add and remove individual, discrete grants.
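
The allow/deny decisions in the Bob and Alice example can be sketched as a small evaluation loop. This is a hypothetical illustration; the real evaluation happens inside S3 Access Grants when `GetDataAccess` is called:

```python
# Hypothetical sketch of the allow/deny decisions in the Bob and Alice
# example; the real evaluation happens inside S3 Access Grants.
GRANTS = [
    # (grantee, permission, scope prefix)
    ("Bob",   "READWRITE", "s3://DOC-BUCKET-EXAMPLE/bob/"),
    ("Alice", "READ",      "s3://DOC-BUCKET-EXAMPLE/alice/"),
]

def is_allowed(user: str, action: str, uri: str) -> bool:
    """action is 'READ' or 'WRITE'; a READWRITE grant satisfies both."""
    return any(
        grantee == user
        and uri.startswith(prefix)
        and permission in (action, "READWRITE")
        for grantee, permission, prefix in GRANTS
    )

print(is_allowed("Bob", "WRITE", "s3://DOC-BUCKET-EXAMPLE/bob/report.csv"))    # True
print(is_allowed("Alice", "READ", "s3://DOC-BUCKET-EXAMPLE/alice/data.csv"))   # True
print(is_allowed("Alice", "WRITE", "s3://DOC-BUCKET-EXAMPLE/alice/data.csv"))  # False (Access Denied)
```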

# S3 Access Grants and corporate directory identities


You can use Amazon S3 Access Grants to grant access to AWS Identity and Access Management (IAM) principals (users or roles), both in the same AWS account and in others. However, in many cases, the entity accessing the data is an end user from your corporate directory. Instead of granting access to IAM principals, you can use S3 Access Grants to grant access directly to your corporate users and groups. With S3 Access Grants, you no longer need to map your corporate identities to intermediate IAM principals in order to access your S3 data through your corporate applications.

This new functionality—support for granting end-user identities access to data—is provided by associating your S3 Access Grants instance with an AWS IAM Identity Center instance. IAM Identity Center supports standards-based identity providers and is the hub in AWS for any services or features, including S3 Access Grants, that support end-user identities. IAM Identity Center provides authentication support for corporate identities through its trusted identity propagation feature. For more information, see [Trusted identity propagation across applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation.html).

To get started with workforce identity support in S3 Access Grants, as a prerequisite, you start in IAM Identity Center by configuring identity provisioning between your corporate identity provider and IAM Identity Center. IAM Identity Center supports corporate identity providers such as Okta, Microsoft Entra ID (formerly Azure Active Directory), or any other external identity provider (IdP) that supports the System for Cross-domain Identity Management (SCIM) protocol. When you connect IAM Identity Center to your IdP and enable automatic provisioning, the users and groups from your IdP are synchronized into the identity store in IAM Identity Center. After this step, IAM Identity Center has its own view of your users and groups, so that you can refer to them by using other AWS services and features, such as S3 Access Grants. For more information about configuring IAM Identity Center automatic provisioning, see [Automatic provisioning](https://docs.aws.amazon.com/singlesignon/latest/userguide/provision-automatically.html) in the *AWS IAM Identity Center User Guide*.

IAM Identity Center is integrated with AWS Organizations so that you can centrally manage permissions across multiple AWS accounts without configuring each of your accounts manually. In a typical organization, your identity administrator configures one IAM Identity Center instance for the entire organization, as a single point of identity synchronization. This IAM Identity Center instance typically runs in a dedicated AWS account in your organization. In this common configuration, you can refer to user and group identities in S3 Access Grants from any AWS account in the organization. 

However, if your AWS Organizations administrator hasn't yet configured a central IAM Identity Center instance, you can create a local one in the same account and AWS Region as your S3 Access Grants instance. If you have an IAM Identity Center instance configured in a different AWS Region, you can also [replicate](https://docs.aws.amazon.com/singlesignon/latest/userguide/replicate-to-additional-region.html) this instance to the same AWS Region as your S3 Access Grants instance. Such a configuration is more common for proof-of-concept or local development use cases. In all cases, the IAM Identity Center instance must be in the same AWS Region as the S3 Access Grants instance to which it will be associated.

In the following diagram of an IAM Identity Center configuration with an external IdP, the IdP is configured with SCIM to synchronize the identity store from the IdP to the identity store in IAM Identity Center.

![\[IAM Identity Center integration with an external identity store through automatic provisioning.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/s3ag-identity-store.png)


To use your corporate directory identities with S3 Access Grants, do the following:
+ Set up [Automatic provisioning](https://docs.aws.amazon.com/singlesignon/latest/userguide/provision-automatically.html) in IAM Identity Center to synchronize user and group information from your IdP into IAM Identity Center. 
+ Configure your external identity source within IAM Identity Center as a trusted token issuer. For more information, see [Trusted identity propagation across applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation.html) in the *AWS IAM Identity Center User Guide*.
+ Associate your S3 Access Grants instance with your IAM Identity Center instance. You can do this when you [create your S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance.html). If you've already created your S3 Access Grants instance, see [Associate or disassociate your IAM Identity Center instance](access-grants-instance-idc.md). 

## How directory identities can access S3 data


Suppose that you have corporate directory users who need to access your S3 data through a corporate application (for example, a document-viewer application) that is integrated with your external IdP, such as Okta, to authenticate users. Authentication of the user in these applications is typically done through redirects in the user's web browser. Because users in the directory are not IAM principals, your application needs IAM credentials with which it can call the S3 Access Grants `GetDataAccess` API operation to [get access credentials to S3 data](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-credentials.html) on the users' behalf. Unlike IAM users and roles, which can get credentials themselves, a directory user isn't mapped to an IAM principal, so your application needs a way to represent the directory user so that the user can get data access through S3 Access Grants.

This transition from an authenticated directory user to an IAM caller that can make requests to S3 Access Grants on behalf of the directory user is done by the application through the trusted token issuer feature of IAM Identity Center. After authenticating the directory user, the application has an identity token from the IdP (for example, Okta) that represents the directory user according to Okta. The trusted token issuer configuration in IAM Identity Center enables the application to exchange this Okta token (the Okta tenant is configured as the trusted issuer) for a different identity token from IAM Identity Center that securely represents the directory user within AWS services. The application then assumes an IAM role, providing the directory user's token from IAM Identity Center as additional context. The application can use the resulting IAM session to call S3 Access Grants. This session represents both the identity of the application (the IAM principal itself) and the directory user's identity.

The main step of this transition is the token exchange. The application performs this token exchange by calling the `CreateTokenWithIAM` API operation in IAM Identity Center. Of course, that too is an AWS API call and requires an IAM principal to sign it. The IAM principal that makes this request is typically an IAM role that's associated with the application. For example, if the application runs on Amazon EC2, the `CreateTokenWithIAM` request is typically performed by the IAM role that's associated with the EC2 instance on which the application runs. The result of a successful `CreateTokenWithIAM` call is a new identity token, which will be recognized within AWS services. 

The next step, before the application can call `GetDataAccess` on the directory user's behalf, is for the application to obtain an IAM session that includes the directory user's identity. The application does this with an AWS Security Token Service (AWS STS) `AssumeRole` request that also includes the IAM Identity Center token for the directory user as additional identity context. This additional context is what enables IAM Identity Center to propagate the directory user's identity to the next step. The IAM role that the application assumes must have IAM permissions to call the `GetDataAccess` operation.

Having assumed the identity-bearer IAM role with the IAM Identity Center token for the directory user as additional context, the application now has everything that it needs to make a signed request to `GetDataAccess` on behalf of the authenticated directory user.

Token propagation is based on the following steps:

**Create an IAM Identity Center application**

First, create a new application in IAM Identity Center. This application uses a template that tells IAM Identity Center which application settings to apply. The command to create the application requires you to provide the IAM Identity Center instance Amazon Resource Name (ARN), an application name, and the application provider ARN. The application provider is the SAML or OAuth application provider that the application uses to make calls to IAM Identity Center. 

To use the following example command, replace the `user input placeholders` with your own information:

```
aws sso-admin create-application \
 --instance-arn "arn:aws:sso:::instance/ssoins-1234567890abcdef" \
 --application-provider-arn "arn:aws:sso::aws:applicationProvider/custom" \
 --name MyDataApplication
```

Response:

```
{
   "ApplicationArn": "arn:aws:sso::123456789012:application/ssoins-1234567890abcdef/apl-abcd1234a1b2c3d"
}
```

**Create a trusted token issuer**

Now that you have your IAM Identity Center application, the next step is to configure a trusted token issuer, which is used to exchange the `IdToken` values from your IdP for IAM Identity Center tokens. In this step, you provide the following items:
+ The identity provider issuer URL
+ The trusted token issuer name
+ The claim attribute path
+ The identity store attribute path
+ The JSON Web Key Set (JWKS) retrieval option

The claim attribute path is the identity provider attribute that will be used to map to the identity store attribute. Normally, the claim attribute path is the email address of the user, but you can use other attributes to perform the mapping.

Create a file called `oidc-configuration.json` with the following information. To use this file, replace the `user input placeholders` with your own information.

```
{
  "OidcJwtConfiguration":
     {
      "IssuerUrl": "https://login.microsoftonline.com/a1b2c3d4-abcd-1234-b7d5-b154440ac123/v2.0",
      "ClaimAttributePath": "preferred_username",
      "IdentityStoreAttributePath": "userName",
      "JwksRetrievalOption": "OPEN_ID_DISCOVERY"
     }
}
```

To create the trusted token issuer, run the following command. To use this example command, replace the `user input placeholders` with your own information.

```
aws sso-admin create-trusted-token-issuer \
  --instance-arn "arn:aws:sso:::instance/ssoins-1234567890abcdef" \
  --name MyEntraIDTrustedIssuer \
  --trusted-token-issuer-type OIDC_JWT \
  --trusted-token-issuer-configuration file://./oidc-configuration.json
```

Response:

```
{
  "TrustedTokenIssuerArn": "arn:aws:sso::123456789012:trustedTokenIssuer/ssoins-1234567890abcdef/tti-43b4a822-1234-1234-1234-a1b2c3d41234"
}
```

**Connect the IAM Identity Center application with the trusted token issuer**

The trusted token issuer requires a few more configuration settings to work. Set the audience that the trusted token issuer will trust. The audience is the value inside the `IdToken` that's identified by the `aud` key; you can find this value in your identity provider settings. For example: 

```
1234973b-abcd-1234-abcd-345c5a9c1234
```

Create a file named `grant.json` that contains the following content. To use this file, change the audience to match your identity provider settings and provide the trusted token issuer ARN that was returned by the previous command.

```
{
    "JwtBearer": {
        "AuthorizedTokenIssuers": [
            {
                "TrustedTokenIssuerArn": "arn:aws:sso::123456789012:trustedTokenIssuer/ssoins-1234567890abcdef/tti-43b4a822-1234-1234-1234-a1b2c3d41234",
                "AuthorizedAudiences": [
                    "1234973b-abcd-1234-abcd-345c5a9c1234"
                ]
            }
        ]
    }
}
```

Run the following example command. To use this command, replace the `user input placeholders` with your own information.

```
aws sso-admin put-application-grant \
  --application-arn "arn:aws:sso::123456789012:application/ssoins-1234567890abcdef/apl-abcd1234a1b2c3d" \
  --grant-type "urn:ietf:params:oauth:grant-type:jwt-bearer" \
  --grant file://./grant.json
```

This command configures the trusted token issuer to trust the audience in the `grant.json` file and links this audience with the application that you created in the first step, for exchanging tokens of the `jwt-bearer` type. The string `urn:ietf:params:oauth:grant-type:jwt-bearer` is not an arbitrary string. It is a registered namespace in the OAuth JSON Web Token (JWT) assertion profile. For more information about this namespace, see [RFC 7523](https://datatracker.ietf.org/doc/html/rfc7523).
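
Conceptually, the trust relationship configured in `grant.json` amounts to an issuer-and-audience check on each incoming token. The following is a simplified sketch with hypothetical names; IAM Identity Center performs the real check internally:

```python
# Simplified sketch of the issuer-and-audience check that grant.json
# configures; IAM Identity Center performs the real check internally.
TTI_ARN = ("arn:aws:sso::123456789012:trustedTokenIssuer/"
           "ssoins-1234567890abcdef/tti-43b4a822-1234-1234-1234-a1b2c3d41234")

# Maps each authorized trusted token issuer to its authorized audiences.
AUTHORIZED_TOKEN_ISSUERS = {
    TTI_ARN: {"1234973b-abcd-1234-abcd-345c5a9c1234"},
}

def token_accepted(issuer_arn: str, aud: str) -> bool:
    """A token is exchangeable only if its issuer is trusted AND its
    'aud' claim is in that issuer's authorized audience list."""
    return aud in AUTHORIZED_TOKEN_ISSUERS.get(issuer_arn, set())

print(token_accepted(TTI_ARN, "1234973b-abcd-1234-abcd-345c5a9c1234"))  # True
print(token_accepted(TTI_ARN, "some-other-audience"))                   # False
```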

Next, use the following command to configure which scopes the application includes when exchanging `IdToken` values from your identity provider. For S3 Access Grants, the value for the `--scope` parameter is `s3:access_grants:read_write`.

```
aws sso-admin put-application-access-scope \
  --application-arn "arn:aws:sso::123456789012:application/ssoins-1234567890abcdef/apl-abcd1234a1b2c3d" \
  --scope "s3:access_grants:read_write"
```

The last step is to attach a resource policy to the IAM Identity Center application. This policy allows your application's IAM role to make requests to the `sso-oauth:CreateTokenWithIAM` API operation and receive tokens from IAM Identity Center.

Create a file named `authentication-method.json` that contains the following content. Replace `123456789012` with your account ID.

```
{
    "Iam": {
        "ActorPolicy": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::123456789012:role/webapp"
                    },
                    "Action": "sso-oauth:CreateTokenWithIAM",
                    "Resource": "*"
                }
            ]
        }
    }
}
```

To attach the policy to the IAM Identity Center application, run the following command:

```
aws sso-admin put-application-authentication-method \
   --application-arn "arn:aws:sso::123456789012:application/ssoins-1234567890abcdef/apl-abcd1234a1b2c3d" \
   --authentication-method-type IAM \
   --authentication-method file://./authentication-method.json
```

This completes the configuration settings for using S3 Access Grants with directory users through a web application. You can test this setup directly in the application or you can call the `CreateTokenWithIAM` API operation by using the following command from an allowed IAM role in the IAM Identity Center application policy:

```
aws sso-oidc create-token-with-iam \
   --client-id "arn:aws:sso::123456789012:application/ssoins-1234567890abcdef/apl-abcd1234a1b2c3d" \
   --grant-type urn:ietf:params:oauth:grant-type:jwt-bearer \
   --assertion IdToken
```

The response is similar to the following:

```
{
    "accessToken": "<suppressed long string to reduce space>",
    "tokenType": "Bearer",
    "expiresIn": 3600,
    "refreshToken": "<suppressed long string to reduce space>",
    "idToken": "<suppressed long string to reduce space>",
    "issuedTokenType": "urn:ietf:params:oauth:token-type:refresh_token",
    "scope": [
      "sts:identity_context",
      "s3:access_grants:read_write",
      "openid",
      "aws"
    ]
}
```

The `IdToken` value is base64-encoded. If you decode it, you can see its key-value pairs in JSON format. The `sts:identity_context` key contains the value that your application needs to send in the `sts:AssumeRole` request to include the identity information of the directory user. The following is an example of a decoded `IdToken`:

```
{
    "aws:identity_store_id": "d-996773e796",
    "sts:identity_context": "AQoJb3JpZ2luX2VjEOTtl;<SUPRESSED>",
    "sub": "83d43802-00b1-7054-db02-f1d683aacba5",
    "aws:instance_account": "123456789012",
    "iss": "https://identitycenter.amazonaws.com/ssoins-1234567890abcdef",
    "sts:audit_context": "AQoJb3JpZ2luX2VjEOT<SUPRESSED>==",
    "aws:identity_store_arn": "arn:aws:identitystore::232642235904:identitystore/d-996773e796",
    "aud": "abcd12344U0gi7n4Yyp0-WV1LWNlbnRyYWwtMQ",
    "aws:instance_arn": "arn:aws:sso:::instance/ssoins-6987d7fb04cf7a51",
    "aws:credential_id": "EXAMPLEHI5glPh40y9TpApJn8...",
    "act": {
       "sub": "arn:aws:sso::232642235904:trustedTokenIssuer/ssoins-6987d7fb04cf7a51/43b4a822-1020-7053-3631-cb2d3e28d10e"
    },
    "auth_time": "2023-11-01T20:24:28Z",
    "exp": 1698873868,
    "iat": 1698870268
}
```
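
A token like the one above can be decoded locally: a JWT consists of three base64url-encoded segments separated by dots, and the payload is the middle segment. The sketch below builds a hypothetical token for demonstration; in production, always verify the token's signature before acting on its claims.

```python
# Decode the payload (middle segment) of a JWT. The demo token below is
# hypothetical; always verify signatures before trusting real tokens.
import base64
import json

def decode_jwt_payload(jwt: str) -> dict:
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a hypothetical header.payload.signature token just to demonstrate:
claims = {"sts:identity_context": "EXAMPLE-CONTEXT", "sub": "83d43802-00b1"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJub25lIn0.{payload}.signature"

print(decode_jwt_payload(token)["sts:identity_context"])  # EXAMPLE-CONTEXT
```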

You can get the value from `sts:identity_context` and pass this information in an `sts:AssumeRole` call. The following is a CLI example of the syntax. The role to be assumed is a temporary role with permissions to invoke `s3:GetDataAccess`.

```
aws sts assume-role \
   --role-arn "arn:aws:iam::123456789012:role/temp-role" \
   --role-session-name "TempDirectoryUserRole" \
   --provided-contexts ProviderArn="arn:aws:iam::aws:contextProvider/IdentityCenter",ContextAssertion="value from sts:identity_context"
```

You can now use the credentials received from this call to invoke the `s3:GetDataAccess` API operation and receive the final credentials with access to your S3 resources.

# Getting started with S3 Access Grants


Amazon S3 Access Grants is an Amazon S3 feature that provides a scalable access control solution for your S3 data. S3 Access Grants is an S3 credential vendor: you register your grants, specifying who can access which data and at what access level. Thereafter, when users or clients need access to your S3 data, they first ask S3 Access Grants for credentials. If a corresponding grant authorizes access, S3 Access Grants vends temporary, least-privilege access credentials, which the users or clients can then use to access your S3 data. With that in mind, if your S3 data requirements mandate a complex or large permission configuration, you can use S3 Access Grants to scale S3 data permissions for users, groups, roles, and applications. 

For most use cases, you can manage access control for your S3 data by using AWS Identity and Access Management (IAM) with bucket policies or IAM identity-based policies. 

However, if you have complex S3 access control requirements, such as the following, you could benefit greatly from using S3 Access Grants: 
+ You are running into the bucket policy size limit of 20 KB. 
+ You grant human identities, for example, Microsoft Entra ID (formerly Azure Active Directory), Okta, or Ping users and groups, access to S3 data for analytics and big data.
+ You must provide cross-account access without making frequent updates to IAM policies.
+ Your data is unstructured and object-level rather than structured, in row and column format.
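The first indicator in this list, the 20 KB bucket policy size limit, is straightforward to monitor programmatically. The following Python sketch measures a serialized policy document against that limit; the 90% warning threshold is an illustrative assumption, not AWS guidance:

```python
import json

BUCKET_POLICY_LIMIT_BYTES = 20 * 1024  # bucket policies are limited to 20 KB

def approaching_policy_limit(policy, threshold=0.9):
    """Return True if the compact-serialized policy is near the 20 KB limit.
    The threshold default is an illustrative assumption."""
    size = len(json.dumps(policy, separators=(",", ":")).encode("utf-8"))
    return size >= threshold * BUCKET_POLICY_LIMIT_BYTES

small_policy = {"Version": "2012-10-17", "Statement": []}
print(approaching_policy_limit(small_policy))  # False for a tiny policy
```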

The S3 Access Grants workflow is as follows: 


| Steps | Description | 
| --- | --- | 
| 1 | [Create an S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance.html): To get started, create an S3 Access Grants instance that will contain your individual access grants.   | 
| 2 | [Register a location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html): Register an S3 data location (such as the default location, `s3://`), and specify a default IAM role that S3 Access Grants assumes when providing access to that location. You can also add custom locations for specific buckets or prefixes and map those to custom IAM roles.   | 
| 3 | [Create grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-grant.html): Create individual permission grants. In each grant, specify the registered S3 location, the scope of data access within that location, the identity of the grantee, and the grantee's access level (`READ`, `WRITE`, or `READWRITE`).  | 
| 4 | [Request access to S3 data](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-credentials.html): When users, applications, or AWS services want to access S3 data, they first make an access request. S3 Access Grants determines whether the request should be authorized. If a corresponding grant authorizes access, S3 Access Grants assumes the registered location's IAM role that's associated with that grant and vends temporary credentials to the requester.  | 
| 5 | [Access S3 data](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-get-data.html): Applications use the temporary credentials vended by S3 Access Grants to access the S3 data.  | 
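Conceptually, the authorization decision in step 4 matches the requested data path and access level against the registered grants. The following Python sketch illustrates that idea only; it is not the service's actual evaluation logic, and the wildcard handling is simplified:

```python
def grant_authorizes(grant_scope, grant_permission, requested_path, requested_permission):
    """Simplified check: the requested path must fall under the grant's
    scope, and the grant's access level must cover the requested one.
    Illustrative only, not the service's actual evaluation logic."""
    covered = {
        "READ": {"READ"},
        "WRITE": {"WRITE"},
        "READWRITE": {"READ", "WRITE", "READWRITE"},
    }[grant_permission]
    prefix = grant_scope.rstrip("*")  # a grant scope may end with a wildcard
    return requested_path.startswith(prefix) and requested_permission in covered

print(grant_authorizes(
    "s3://amzn-s3-demo-bucket/projects/*", "READWRITE",
    "s3://amzn-s3-demo-bucket/projects/report.csv", "READ"))  # True
```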

# Working with S3 Access Grants instances


To get started with using Amazon S3 Access Grants, you first create an S3 Access Grants instance. You can create only one S3 Access Grants instance per AWS Region per account. The S3 Access Grants instance serves as the container for your S3 Access Grants resources, which include registered locations and grants. 

With S3 Access Grants, you can create permission grants to your S3 data for AWS Identity and Access Management (IAM) users and roles. If you've [added your corporate identity directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-idp.html) to AWS IAM Identity Center, you can associate this IAM Identity Center instance of your corporate directory with your S3 Access Grants instance. After you've done so, you can create access grants for your corporate users and groups. If you haven't yet added your corporate directory to IAM Identity Center, you can associate your S3 Access Grants instance with an IAM Identity Center instance later. 

**Topics**
+ [Create an S3 Access Grants instance](access-grants-instance-create.md)
+ [Get the details of an S3 Access Grants instance](access-grants-instance-view.md)
+ [List your S3 Access Grants instances](access-grants-instance-list.md)
+ [Associate or disassociate your IAM Identity Center instance](access-grants-instance-idc.md)
+ [Delete an S3 Access Grants instance](access-grants-instance-delete.md)

# Create an S3 Access Grants instance


To get started with using Amazon S3 Access Grants, you first create an S3 Access Grants instance. You can create only one S3 Access Grants instance per AWS Region per account. The S3 Access Grants instance serves as the container for your S3 Access Grants resources, which include registered locations and grants. 

With S3 Access Grants, you can create permission grants to your S3 data for AWS Identity and Access Management (IAM) users and roles. If you've [added your corporate identity directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-idp.html) to AWS IAM Identity Center, you can associate this IAM Identity Center instance of your corporate directory with your S3 Access Grants instance. After you've done so, you can create access grants for your corporate users and groups. If you haven't yet added your corporate directory to IAM Identity Center, you can associate your S3 Access Grants instance with an IAM Identity Center instance later. 

You can create an S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


Before you can grant access to your S3 data with S3 Access Grants, you must first create an S3 Access Grants instance in the same AWS Region as your S3 data. 

**Prerequisites**  
If you want to grant access to your S3 data by using identities from your corporate directory, [add your corporate identity directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-idp.html) to AWS IAM Identity Center. If you're not yet ready to do so, you can associate your S3 Access Grants instance with an IAM Identity Center instance later.

**To create an S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to switch to. 

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose **Create S3 Access Grants instance**. 

   1. In **Step 1** of the **Set up Access Grants instance** wizard, verify that you want to create the instance in the current AWS Region. Make sure that this is the same AWS Region where your S3 data is located. You can create one S3 Access Grants instance per AWS Region per account. 

   1. (Optional) If you've [added your corporate identity directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-idp.html) to AWS IAM Identity Center, you can associate this IAM Identity Center instance of your corporate directory with your S3 Access Grants instance.

      To do so, select **Add IAM Identity Center instance in *region***. Then enter the Amazon Resource Name (ARN) of your IAM Identity Center instance. 

      If you haven't yet added your corporate directory to IAM Identity Center, you can associate your S3 Access Grants instance with an IAM Identity Center instance later. 

   1. To create the S3 Access Grants instance, choose **Next**. To register a location, see [Step 2 - Register a location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html).

1. If **Next** or **Create S3 Access Grants instance** is disabled, one of the following might be the cause:
+ You might already have an S3 Access Grants instance in the same AWS Region. In the left navigation pane, choose **Access Grants**. On the **S3 Access Grants** page, scroll down to the **S3 Access Grants instance in your account** section to determine whether an instance already exists.
+ You might not have the `s3:CreateAccessGrantsInstance` permission, which is required to create an S3 Access Grants instance. Contact your account administrator. For the additional permissions that are required if you're associating an IAM Identity Center instance with your S3 Access Grants instance, see [CreateAccessGrantsInstance](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsInstance.html). 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example Create an S3 Access Grants instance**  

```
aws s3control create-access-grants-instance \
--account-id 111122223333 \
--region us-east-2
```
Response:  

```
{
    "CreatedAt": "2023-05-31T17:54:07.893000+00:00",
    "AccessGrantsInstanceId": "default",
    "AccessGrantsInstanceArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
}
```

## Using the REST API


You can use the Amazon S3 REST API to create an S3 Access Grants instance. For information on the REST API support for managing an S3 Access Grants instance, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsInstance.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceResourcePolicy.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html)

## Using the AWS SDKs


This section provides an example of how to create an S3 Access Grants instance by using the AWS SDKs.

------
#### [ Java ]

This example creates the S3 Access Grants instance, which serves as a container for your individual access grants. You can have one S3 Access Grants instance per AWS Region in your account. The response includes the instance ID `default` and an Amazon Resource Name (ARN) that's generated for your S3 Access Grants instance.

**Example Create an S3 Access Grants instance request**  

```
public void createAccessGrantsInstance() {
    CreateAccessGrantsInstanceRequest createRequest = CreateAccessGrantsInstanceRequest.builder()
            .accountId("111122223333")
            .build();
    CreateAccessGrantsInstanceResponse createResponse = s3Control.createAccessGrantsInstance(createRequest);
    LOGGER.info("CreateAccessGrantsInstanceResponse: " + createResponse);
}
```
Response:  

```
CreateAccessGrantsInstanceResponse(
CreatedAt=2023-06-07T01:46:20.507Z,
AccessGrantsInstanceId=default,
AccessGrantsInstanceArn=arn:aws:s3:us-east-2:111122223333:access-grants/default)
```

------

# Get the details of an S3 Access Grants instance


You can get the details of your Amazon S3 Access Grants instance in a particular AWS Region. You can get the details of your S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


**To get the details of an S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. The **S3 Access Grants** page lists your S3 Access Grants instances and any cross-account instances that have been shared with your account. To view the details of an instance, choose **View details**. 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example – Get the details of an S3 Access Grants instance**  

```
aws s3control get-access-grants-instance \
 --account-id 111122223333 \
 --region us-east-2
```
Response:  

```
{
    "AccessGrantsInstanceArn": "arn:aws:s3:us-east-2: 111122223333:access-grants/default",
    "AccessGrantsInstanceId": "default",
    "CreatedAt": "2023-05-31T17:54:07.893000+00:00"
}
```

## Using the REST API


For information about the Amazon S3 REST API support for managing an S3 Access Grants instance, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html) 

## Using the AWS SDKs


This section provides examples of how to get the details of an S3 Access Grants instance by using the AWS SDKs.

To use the following examples, replace the `user input placeholders` with your own information.

------
#### [ Java ]

**Example – Get an S3 Access Grants instance**  

```
public void getAccessGrantsInstance() {
    GetAccessGrantsInstanceRequest getRequest = GetAccessGrantsInstanceRequest.builder()
            .accountId("111122223333")
            .build();
    GetAccessGrantsInstanceResponse getResponse = s3Control.getAccessGrantsInstance(getRequest);
    LOGGER.info("GetAccessGrantsInstanceResponse: " + getResponse);
}
```
Response:  

```
GetAccessGrantsInstanceResponse(
AccessGrantsInstanceArn=arn:aws:s3:us-east-2:111122223333:access-grants/default,
CreatedAt=2023-06-07T01:46:20.507Z)
```

------

# List your S3 Access Grants instances


You can list your S3 Access Grants instances, including the instances that have been shared with you through AWS Resource Access Manager (AWS RAM).

You can list your S3 Access Grants instances by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


**To list your S3 Access Grants instances**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. The **S3 Access Grants** page lists your S3 Access Grants instances and any cross-account instances that have been shared with your account. To view the details of an instance, choose **View details**. 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example – List all S3 Access Grants instances for an account**  
This action lists the S3 Access Grants instances for an account. You can only have one S3 Access Grants instance per AWS Region. This action also lists other cross-account S3 Access Grants instances that your account has access to.   

```
aws s3control list-access-grants-instances \
 --account-id 111122223333 \
 --region us-east-2
```
Response:  

```
{
    "AccessGrantsInstancesList": [
        {
            "AccessGrantsInstanceArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default",
            "AccessGrantsInstanceId": "default",
            "CreatedAt": "2023-05-31T17:54:07.893000+00:00"
        }
    ]
}
```

## Using the REST API


For information about the Amazon S3 REST API support for managing an S3 Access Grants instance, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html) 

## Using the AWS SDKs


This section provides examples of how to list your S3 Access Grants instances by using the AWS SDKs.

To use the following examples, replace the `user input placeholders` with your own information.

------
#### [ Java ]

**Example – List all S3 Access Grants instances for an account**  
This action lists the S3 Access Grants instances for an account. You can only have one S3 Access Grants instance per Region. This action can also list other cross-account S3 Access Grants instances that your account has access to.   

```
public void listAccessGrantsInstances() {
    ListAccessGrantsInstancesRequest listRequest = ListAccessGrantsInstancesRequest.builder()
            .accountId("111122223333")
            .build();
    ListAccessGrantsInstancesResponse listResponse = s3Control.listAccessGrantsInstances(listRequest);
    LOGGER.info("ListAccessGrantsInstancesResponse: " + listResponse);
}
```
Response:  

```
ListAccessGrantsInstancesResponse(
AccessGrantsInstancesList=[
ListAccessGrantsInstanceEntry(
AccessGrantsInstanceId=default,
AccessGrantsInstanceArn=arn:aws:s3:us-east-2:111122223333:access-grants/default,
CreatedAt=2023-06-07T04:28:11.728Z
)
]
)
```

------

# Associate or disassociate your IAM Identity Center instance


In Amazon S3 Access Grants, you can associate the AWS IAM Identity Center instance of your corporate identity directory with an S3 Access Grants instance. After you do so, you can create access grants for your corporate directory users and groups, in addition to AWS Identity and Access Management (IAM) users and roles. 

If you no longer want to create access grants for your corporate directory users and groups, you can disassociate your IAM Identity Center instance from your S3 Access Grants instance. 

You can associate or disassociate an IAM Identity Center instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


Before you associate your IAM Identity Center instance with your S3 Access Grants instance, you must add your corporate identity directory to IAM Identity Center. For more information, see [S3 Access Grants and corporate directory identities](access-grants-directory-ids.md).

**To associate an IAM Identity Center instance with an S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**. 

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. Choose **View details** for the instance. 

1. On the details page, in the **IAM Identity Center** section, choose to either **Add** an IAM Identity Center instance or **Deregister** an already associated IAM Identity Center instance. 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example – Associate an IAM Identity Center instance with an S3 Access Grants instance**  

```
aws s3control associate-access-grants-identity-center \
 --account-id 111122223333 \
 --identity-center-arn arn:aws:sso:::instance/ssoins-1234a567bb89012c \
 --profile access-grants-profile \
 --region eu-central-1
     
// No response body
```

**Example – Disassociate an IAM Identity Center instance from an S3 Access Grants instance**  

```
aws s3control dissociate-access-grants-identity-center \
 --account-id 111122223333 \
 --profile access-grants-profile \
 --region eu-central-1
     
// No response body
```

## Using the REST API


For information about the Amazon S3 REST API support for managing the association between an IAM Identity Center instance and an S3 Access Grants instance, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html) 

# Delete an S3 Access Grants instance


You can delete an Amazon S3 Access Grants instance from an AWS Region in your account. However, before you can delete an S3 Access Grants instance, you must first do the following:
+ Delete all resources within the S3 Access Grants instance, including all grants and locations. For more information, see [Delete a grant](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-grant-delete.html) and [Delete a registered location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location-delete.html).
+ If you've associated an AWS IAM Identity Center instance with your S3 Access Grants instance, you must disassociate the IAM Identity Center instance. For more information, see [Associate or disassociate your IAM Identity Center instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-idc.html).

**Important**  
If you delete an S3 Access Grants instance, the deletion is permanent and can't be undone. All grantees that were given access through the grants in this S3 Access Grants instance will lose access to your S3 data. 

You can delete an S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


**To delete an S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. Choose **View details** for the instance. 

1. On the instance details page, choose **Delete instance** in the upper-right corner. 

1. In the dialog box that appears, choose **Delete**. This action can't be undone.

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Note**  
Before you can delete an S3 Access Grants instance, you must first delete all grants and locations that were created within the instance. If you have associated an IAM Identity Center instance with your S3 Access Grants instance, you must disassociate it first.

**Example – Delete an S3 Access Grants instance**  

```
aws s3control delete-access-grants-instance \
--account-id 111122223333 \
--profile access-grants-profile \
--region us-east-2 \
--endpoint-url https://s3-control.us-east-2.amazonaws.com

// No response body
```

## Using the REST API


For information about the Amazon S3 REST API support for deleting an S3 Access Grants instance, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


This section provides examples of how to delete an S3 Access Grants instance by using the AWS SDKs.

To use the following example, replace the `user input placeholders` with your own information.

------
#### [ Java ]

**Note**  
Before you can delete an S3 Access Grants instance, you must first delete all grants and locations that were created within the instance. If you have associated an IAM Identity Center instance with your S3 Access Grants instance, you must disassociate it first.

**Example – Delete an S3 Access Grants instance**  

```
public void deleteAccessGrantsInstance() {
    DeleteAccessGrantsInstanceRequest deleteRequest = DeleteAccessGrantsInstanceRequest.builder()
            .accountId("111122223333")
            .build();
    DeleteAccessGrantsInstanceResponse deleteResponse = s3Control.deleteAccessGrantsInstance(deleteRequest);
    LOGGER.info("DeleteAccessGrantsInstanceResponse: " + deleteResponse);
}
```

------

# Working with S3 Access Grants locations


After you [create an Amazon S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-create.html) in an AWS Region in your account, you register an S3 location in that instance. An S3 Access Grants location maps the default S3 location (`s3://`), a bucket, or a prefix to an AWS Identity and Access Management (IAM) role. S3 Access Grants assumes this IAM role to vend temporary credentials to the grantee that is accessing that particular location. You must first register at least one location in your S3 Access Grants instance before you can create an access grant. 

You can register a location, view a location's details, edit a location, and delete a location.

**Note**  
 After you register the first location in your S3 Access Grants instance, your instance still does not have any individual access grants in it. To create an access grant, see [Create grants](access-grants-grant-create.md). 

**Topics**
+ [Register a location](access-grants-location-register.md)
+ [View the details of a registered location](access-grants-location-view.md)
+ [Update a registered location](access-grants-location-edit.md)
+ [Delete a registered location](access-grants-location-delete.md)

# Register a location


After you [create an Amazon S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-create.html) in an AWS Region in your account, you register an S3 location in that instance. An S3 Access Grants location maps the default S3 location (`s3://`), a bucket, or a prefix to an AWS Identity and Access Management (IAM) role. S3 Access Grants assumes this IAM role to vend temporary credentials to the grantee that is accessing that particular location. You must first register at least one location in your S3 Access Grants instance before you can create an access grant. 

**Recommended use case**  
We recommend that you register the default location (`s3://`) and map it to an IAM role. The location at the default S3 path (`s3://`) covers access to all of your S3 buckets in that AWS Region of your account. When you create an access grant, you can narrow the grant scope to a bucket, a prefix, or an object within the default location. 

**Complex access-management use cases**  
More complex access-management use cases might require you to register more than the default location. Some examples of such use cases are:
+ Suppose that the bucket *amzn-s3-demo-bucket* is registered as a location in your S3 Access Grants instance and is mapped to an IAM role, but that IAM role is denied access to a particular prefix within the bucket. In this case, you can register the prefix that the IAM role doesn't have access to as a separate location and map that location to a different IAM role that has the necessary access. 
+ Suppose that you want to create grants that restrict access to requests made from within a virtual private cloud (VPC) endpoint. In this case, you can register a location for a bucket with an IAM role that restricts access to the VPC endpoint. Later, when a grantee asks S3 Access Grants for credentials, S3 Access Grants assumes the location's IAM role to vend the temporary credentials. These credentials deny access to the specified bucket unless the caller is within the VPC endpoint. This deny is applied in addition to the regular `READ`, `WRITE`, or `READWRITE` permission specified in the grant.

When you register a location, you must also specify the IAM role that S3 Access Grants assumes to vend temporary credentials and to scope the permissions for a specific grant. 

If your use case requires you to register multiple locations in your S3 Access Grants instance, you can register any of the following:


| S3 URI | IAM role | Description | 
| --- | --- | --- | 
| s3:// | Default-IAM-role |  The default location, `s3://`, includes all buckets in the AWS Region.  | 
| s3://amzn-s3-demo-bucket1/ | IAM-role-For-bucket |  This location includes all objects in the specified bucket.  | 
| s3://amzn-s3-demo-bucket1/prefix-name | IAM-role-For-prefix |  This location includes all objects in the bucket with an object key name that starts with this prefix.  | 
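When more than one registered location could cover the same data, as in the table above, the most specific location (the longest matching S3 URI) determines which IAM role is assumed. The following Python sketch of that longest-prefix selection is illustrative only, not the service's actual logic, and reuses the table's placeholder role names:

```python
def pick_location(registered, s3_uri):
    """Return the IAM role of the most specific registered location that
    covers s3_uri (illustrative sketch, not the service's actual logic)."""
    best = None
    for location, role in registered.items():
        prefix = location if location.endswith("/") else location + "/"
        if s3_uri.startswith(prefix) or s3_uri == location:
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, role)  # keep the longest matching prefix
    return best[1] if best else None

locations = {
    "s3://": "Default-IAM-role",
    "s3://amzn-s3-demo-bucket1/": "IAM-role-For-bucket",
    "s3://amzn-s3-demo-bucket1/prefix-name": "IAM-role-For-prefix",
}
print(pick_location(locations, "s3://amzn-s3-demo-bucket1/prefix-name/file.txt"))
# IAM-role-For-prefix
```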

Before you can register a specific bucket or prefix, make sure that you do the following:
+ Create one or more buckets that contain the data that you want to grant access to. These buckets must be located in the same AWS Region as your S3 Access Grants instance. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). 

  Adding a prefix is an optional step. Prefixes are strings at the beginning of an object key name. You can use them to organize objects in your bucket as well as for access management. To add a prefix to a bucket, see [Creating object key names](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html). 
+ Create an IAM role that has permission to access your S3 data in the AWS Region. For more information, see [Creating IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html) in the *IAM User Guide*. 
+  In the IAM role trust policy, give the S3 Access Grants service (`access-grants.s3.amazonaws.com`) principal access to the IAM role that you created. To do so, you can create a JSON file that contains the following statements. To add the trust policy to your account, see [Create a role using custom trust policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-custom.html).

  *TestRolePolicy.json*

------
#### [ JSON ]

  ```
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "TestRolePolicy",
        "Effect": "Allow",
        "Principal": {
          "Service": "access-grants.s3.amazonaws.com"
        },
        "Action": [
          "sts:AssumeRole", 
          "sts:SetSourceIdentity"
        ],
        "Condition": {
          "StringEquals": {
            "aws:SourceAccount": "111122223333",
            "aws:SourceArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
          }
        }
      }
    ]
  }
  ```

------

+ Create an IAM policy to attach Amazon S3 permissions to the IAM role that you created. See the following example `iam-policy.json` file and replace the `user input placeholders` with your own information. 
**Note**  
If you use server-side encryption with AWS Key Management Service (AWS KMS) keys to encrypt your data, the following example includes the necessary AWS KMS permissions for the IAM role in the policy. If you do not use this feature, you can remove these permissions from your IAM policy. 
You can restrict the IAM role to access S3 data only if the credentials are vended by S3 Access Grants. This example shows you how to add a `Condition` statement for a specific S3 Access Grants instance. To use this `Condition`, replace the S3 Access Grants instance ARN in the `Condition` statement with your S3 Access Grants instance ARN, which has the format: `arn:aws:s3:region:accountId:access-grants/default` 

  *iam-policy.json*

  ```
  {
   "Version": "2012-10-17",
     "Statement": [
         {
           "Sid": "ObjectLevelReadPermissions",
           "Effect":"Allow",
           "Action":[
              "s3:GetObject",
              "s3:GetObjectVersion",
              "s3:GetObjectAcl",
              "s3:GetObjectVersionAcl",
              "s3:ListMultipartUploadParts"
           ],
           "Resource":[ 
              "arn:aws:s3:::*"  
           ],
           "Condition":{
              "StringEquals": { "aws:ResourceAccount": "accountId" },
              "ArnEquals": {
                  "s3:AccessGrantsInstanceArn": ["arn:aws:s3:region:accountId:access-grants/default"]
              }
          } 
        },
        {
           "Sid": "ObjectLevelWritePermissions",
           "Effect":"Allow",
           "Action":[
              "s3:PutObject",
              "s3:PutObjectAcl",
              "s3:PutObjectVersionAcl",
              "s3:DeleteObject",
              "s3:DeleteObjectVersion",
              "s3:AbortMultipartUpload"
           ],
           "Resource":[
              "arn:aws:s3:::*"  
           ],
           "Condition":{
              "StringEquals": { "aws:ResourceAccount": "accountId" },
              "ArnEquals": {
                  "s3:AccessGrantsInstanceArn": ["arn:aws:s3:region:accountId:access-grants/default"]
              }
           } 
        },
        {
           "Sid": "BucketLevelReadPermissions",
           "Effect":"Allow",
           "Action":[
              "s3:ListBucket"
           ],
           "Resource":[
              "arn:aws:s3:::*"
           ],
           "Condition":{
              "StringEquals": { "aws:ResourceAccount": "accountId" },
              "ArnEquals": {
                  "s3:AccessGrantsInstanceArn": ["arn:aws:s3:region:accountId:access-grants/default"]
              }
           }     
        },
        // Optionally add the following statement if you use SSE-KMS encryption.
        {
           "Sid": "KMSPermissions",
           "Effect":"Allow",
           "Action":[
              "kms:Decrypt",
              "kms:GenerateDataKey"
           ],
           "Resource":[
              "*"
           ]
        }
     ]
  }
  ```
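If you generate these policy documents in code rather than editing JSON files by hand, the trust policy can be assembled from the account ID and instance ARN. This is an illustrative Python sketch, not an official AWS helper; the account ID and ARN are placeholders:

```python
import json

def access_grants_trust_policy(account_id, instance_arn):
    """Build the trust policy that lets S3 Access Grants assume a location role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "TestRolePolicy",
                "Effect": "Allow",
                "Principal": {"Service": "access-grants.s3.amazonaws.com"},
                "Action": ["sts:AssumeRole", "sts:SetSourceIdentity"],
                "Condition": {
                    "StringEquals": {
                        "aws:SourceAccount": account_id,
                        "aws:SourceArn": instance_arn,
                    }
                },
            }
        ],
    }

# Placeholder account and instance ARN.
policy = access_grants_trust_policy(
    "111122223333", "arn:aws:s3:us-east-2:111122223333:access-grants/default"
)
print(json.dumps(policy, indent=2))
```

The `aws:SourceAccount` and `aws:SourceArn` conditions keep the role from being assumed by an S3 Access Grants instance in another account.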

You can register a location in your S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, or the AWS SDKs.

**Note**  
 After you register the first location in your S3 Access Grants instance, your instance still does not have any individual access grants in it. To create an access grant, see [Create grants](access-grants-grant-create.md). 

## Using the S3 console


Before you can grant access to your S3 data with S3 Access Grants, you must have at least one registered location. 

**To register a location in your S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

   If you're using S3 Access Grants for the first time, make sure that you have completed [Step 1 - create an S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-create.html) and navigated to **Step 2** of the **Set up Access Grants instance** wizard. If you already have an S3 Access Grants instance, choose **View details**, and then from the **Locations** tab, choose **Register location**.

   1. For the **Location scope**, choose **Browse S3** or enter the S3 URI path to the location that you want to register. For S3 URI formats, see the [location formats](#location-types) table. After you enter a URI, you can choose **View** to browse the location. 

   1. For the **IAM role**, choose one of the following: 
      + **Choose from existing IAM roles**

        Choose an IAM role from the dropdown list. After you choose a role, choose **View** to make sure that this role has the necessary permissions to manage the location that you're registering. Specifically, make sure that this role grants S3 Access Grants the permissions `sts:AssumeRole` and `sts:SetSourceIdentity`. 
      + **Enter IAM role ARN**

        Navigate to the [IAM Console](https://console.aws.amazon.com/iam/). Copy the Amazon Resource Name (ARN) of the IAM role and paste it in this box. 

   1. To finish, choose **Next** or **Register location**.

1. Troubleshooting:

   **Cannot register location**
   + The location might already be registered.
   + You might not have the `s3:CreateAccessGrantsLocation` permission that's required to register locations. Contact your account administrator.

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

You can register the default location, `s3://`, or a custom location in your S3 Access Grants instance. Make sure that you first create an IAM role that can access the data in the location, and then grant S3 Access Grants permission to assume this role. 

To use the following example commands, replace the `user input placeholders` with your own information.

**Example Create a resource policy**  
Create a policy that allows S3 Access Grants to assume the IAM role. To do so, you can create a JSON file that contains the following statements. To add the resource policy to your account, see [Create and attach your first customer managed policy](https://docs.aws.amazon.com//IAM/latest/UserGuide/tutorial_managed-policies.html).  
*TestRolePolicy.json*    

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1234567891011",
      "Action": ["sts:AssumeRole", "sts:SetSourceIdentity"],
      "Effect": "Allow",
      "Principal": {"Service": "access-grants.s3.amazonaws.com"}
    }
  ]
}
```

**Example Create the role**  
Run the following IAM command to create the role.  

```
aws iam create-role --role-name accessGrantsTestRole \
 --region us-east-2 \
 --assume-role-policy-document file://TestRolePolicy.json
```
Running the `create-role` command returns the role details, including the trust policy document:   

```
{
    "Role": {
        "Path": "/",
        "RoleName": "accessGrantsTestRole",
        "RoleId": "AROASRDGX4WM4GH55GIDA",
        "Arn": "arn:aws:iam::111122223333:role/accessGrantsTestRole",
        "CreateDate": "2023-05-31T18:11:06+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "Stmt1234567891011",
                    "Action": [
                        "sts:AssumeRole",
                        "sts:SetSourceIdentity"
                    ],
                    "Effect": "Allow",
                    "Principal": {
                        "Service":"access-grants.s3.amazonaws.com"
                    }
                }
            ]
        }
    }
}
```

**Example Create an IAM policy for the role**  
Create an IAM policy to attach Amazon S3 permissions to the IAM role. See the following example `iam-policy.json` file and replace the `user input placeholders` with your own information.   
If you use server-side encryption with AWS Key Management Service (AWS KMS) keys to encrypt your data, the following example adds the necessary AWS KMS permissions for the IAM role in the policy. If you do not use this feature, you can remove these permissions from your IAM policy.   
To make sure that the IAM role can be used to access data in S3 only when the credentials are vended by S3 Access Grants, this example shows you how to add a `Condition` statement that specifies the S3 Access Grants instance (`s3:AccessGrantsInstanceArn`) in your IAM policy. When using the following example policy, replace the `user input placeholders` with your own information.
*iam-policy.json*    

```
{
   "Version": "2012-10-17",
   "Statement": [
       {
         "Sid": "ObjectLevelReadPermissions",
         "Effect": "Allow",
         "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:GetObjectAcl",
            "s3:GetObjectVersionAcl",
            "s3:ListMultipartUploadParts"
         ],
         "Resource": [ 
            "arn:aws:s3:::*"  
         ],
         "Condition": {
            "StringEquals": { "aws:ResourceAccount": "111122223333" },
            "ArnEquals": {
                "s3:AccessGrantsInstanceArn": ["arn:aws:s3:us-east-1:111122223333:access-grants/default"]
            }
        } 
      },
      {
         "Sid": "ObjectLevelWritePermissions",
         "Effect": "Allow",
         "Action": [
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:PutObjectVersionAcl",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion",
            "s3:AbortMultipartUpload"
         ],
         "Resource": [
            "arn:aws:s3:::*"  
         ],
         "Condition": {
            "StringEquals": { "aws:ResourceAccount": "111122223333" },
            "ArnEquals": {
                "s3:AccessGrantsInstanceArn": ["arn:aws:s3:us-east-1:111122223333:access-grants/default"]
            }
         } 
      },
      {
         "Sid": "BucketLevelReadPermissions",
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::*"
         ],
         "Condition": {
            "StringEquals": { "aws:ResourceAccount": "111122223333" },
            "ArnEquals": {
                "s3:AccessGrantsInstanceArn": ["arn:aws:s3:us-east-1:111122223333:access-grants/default"]
            }
         }     
      },
      {
         "Sid": "KMSPermissions",
         "Effect": "Allow",
         "Action": [
            "kms:Decrypt",
            "kms:GenerateDataKey"
         ],
         "Resource": [
            "*"
         ],
         "Condition": {
            "StringEquals": {
               "kms:ViaService": "s3.us-east-1.amazonaws.com"
            }
         }
      }
   ]
}
```

**Example Attach the policy to the role**  
Run the following command:  

```
aws iam put-role-policy \
--role-name accessGrantsTestRole \
--policy-name accessGrantsTestRole \
--policy-document file://iam-policy.json
```

**Example Register the default location**  

```
aws s3control create-access-grants-location \
 --account-id 111122223333 \
 --location-scope s3:// \
 --iam-role-arn arn:aws:iam::111122223333:role/accessGrantsTestRole
```
Response:  

```
{
    "CreatedAt": "2023-05-31T18:23:48.107000+00:00",
    "AccessGrantsLocationId": "default",
    "AccessGrantsLocationArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/location/default",
    "LocationScope": "s3://",
    "IAMRoleArn": "arn:aws:iam::111122223333:role/accessGrantsTestRole"
}
```

**Example Register a custom location**  

```
aws s3control create-access-grants-location \
 --account-id 111122223333 \
 --location-scope s3://DOC-BUCKET-EXAMPLE/ \
 --iam-role-arn arn:aws:iam::111122223333:role/accessGrantsTestRole
```
Response:  

```
{
    "CreatedAt": "2023-05-31T18:23:48.107000+00:00",
    "AccessGrantsLocationId": "635f1139-1af2-4e43-8131-a4de006eb456",
    "AccessGrantsLocationArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/location/635f1139-1af2-4e43-8131-a4de006eb456",
    "LocationScope": "s3://DOC-BUCKET-EXAMPLE/",
    "IAMRoleArn": "arn:aws:iam::111122223333:role/accessGrantsTestRole"
}
```
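The `--location-scope` value must be the default `s3://`, a bucket, or a bucket plus prefix. A quick client-side shape check can be sketched as follows; the rules here are a simplification, and the service enforces stricter bucket-naming checks than this sketch does:

```python
import re

# Simplified shape check for an S3 Access Grants location scope:
# "s3://", "s3://bucket", "s3://bucket/", or "s3://bucket/prefix".
_SCOPE_RE = re.compile(r"^s3://([a-z0-9][a-z0-9.-]{1,61}[a-z0-9](/.*)?)?$")

def is_valid_location_scope(scope):
    """Return True if `scope` looks like a registrable location URI."""
    return bool(_SCOPE_RE.match(scope))

for scope in ("s3://", "s3://amzn-s3-demo-bucket1/",
              "s3://amzn-s3-demo-bucket1/prefix-name", "http://not-s3"):
    print(scope, is_valid_location_scope(scope))
```

Validating the URI before calling `create-access-grants-location` gives a clearer error than a failed API request.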

## Using the REST API


For information about Amazon S3 REST API support for managing an S3 Access Grants instance, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsLocation.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html) 

## Using the AWS SDKs


This section provides examples of how to register locations by using the AWS SDKs.

To use the following examples, replace the `user input placeholders` with your own information.

------
#### [ Java ]

You can register the default location, `s3://`, or a custom location in your S3 Access Grants instance. Make sure that you first create an IAM role that can access the data in the location, and then grant S3 Access Grants permission to assume this role. 

To use the following example commands, replace the `user input placeholders` with your own information.

**Example Register a default location**  
Request:  

```
public void createAccessGrantsLocation() {
CreateAccessGrantsLocationRequest createRequest = CreateAccessGrantsLocationRequest.builder()
.accountId("111122223333")
.locationScope("s3://")
.iamRoleArn("arn:aws:iam::111122223333:role/accessGrantsTestRole")
.build();
CreateAccessGrantsLocationResponse createResponse = s3Control.createAccessGrantsLocation(createRequest);
LOGGER.info("CreateAccessGrantsLocationResponse: " + createResponse);
}
```
Response:  

```
CreateAccessGrantsLocationResponse(
CreatedAt=2023-06-07T04:35:11.027Z,
AccessGrantsLocationId=default,
AccessGrantsLocationArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/location/default,
LocationScope=s3://,
IAMRoleArn=arn:aws:iam::111122223333:role/accessGrantsTestRole
)
```

**Example Register a custom location**  
Request:  

```
public void createAccessGrantsLocation() {
CreateAccessGrantsLocationRequest createRequest = CreateAccessGrantsLocationRequest.builder()
.accountId("111122223333")
.locationScope("s3://DOC-BUCKET-EXAMPLE/")
.iamRoleArn("arn:aws:iam::111122223333:role/accessGrantsTestRole")
.build();
CreateAccessGrantsLocationResponse createResponse = s3Control.createAccessGrantsLocation(createRequest);
LOGGER.info("CreateAccessGrantsLocationResponse: " + createResponse);
}
```
Response:  

```
CreateAccessGrantsLocationResponse(
CreatedAt=2023-06-07T04:35:10.027Z,
AccessGrantsLocationId=18cfe6fb-eb5a-4ac5-aba9-8d79f04c2012,
AccessGrantsLocationArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/location/18cfe6fb-eb5a-4ac5-aba9-8d79f04c2012,
LocationScope=s3://DOC-BUCKET-EXAMPLE/,
IAMRoleArn=arn:aws:iam::111122223333:role/accessGrantsTestRole
)
```

------

# View the details of a registered location


You can get the details of a location that's registered in your S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs. 

## Using the S3 console


**To view the locations registered in your S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. Choose **View details** for the instance.

1. On the details page for the instance, choose the **Locations** tab.

1. Find the registered location that you want to view. To filter the list of registered locations, use the search box. 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example – Get the details of a registered location**  

```
aws s3control get-access-grants-location \
--account-id 111122223333 \
--access-grants-location-id default
```
Response:  

```
{
    "CreatedAt": "2023-05-31T18:23:48.107000+00:00",
    "AccessGrantsLocationId": "default",
    "AccessGrantsLocationArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/location/default",
    "IAMRoleArn": "arn:aws:iam::111122223333:role/accessGrantsTestRole"
}
```

**Example – List all of the locations that are registered in an S3 Access Grants instance**  
To restrict the results to an S3 prefix or bucket, you can optionally use the `--location-scope s3://bucket-and-or-prefix` parameter.   

```
aws s3control list-access-grants-locations \
--account-id 111122223333 \
--region us-east-2
```
Response:  

```
{
  "AccessGrantsLocationsList": [
    {
      "CreatedAt": "2023-05-31T18:23:48.107000+00:00",
      "AccessGrantsLocationId": "default",
      "AccessGrantsLocationArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/location/default",
      "LocationScope": "s3://",
      "IAMRoleArn": "arn:aws:iam::111122223333:role/accessGrantsTestRole"
    },
    {
      "CreatedAt": "2023-05-31T18:23:48.107000+00:00",
      "AccessGrantsLocationId": "635f1139-1af2-4e43-8131-a4de006eb456",
      "AccessGrantsLocationArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/location/635f1139-1af2-4e43-8131-a4de006eb456",
      "LocationScope": "s3://amzn-s3-demo-bucket/prefixA*",
      "IAMRoleArn": "arn:aws:iam::111122223333:role/accessGrantsTestRole"
    }
  ]
}
```
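If you need to narrow results after retrieval instead of passing a scope filter to the API, the same filtering can be done client-side over the response. A Python sketch over the response shape shown above; the helper and its matching rule are illustrative, not part of the API:

```python
# A dict mirroring the ListAccessGrantsLocations response shape shown above.
response = {
    "AccessGrantsLocationsList": [
        {"AccessGrantsLocationId": "default", "LocationScope": "s3://"},
        {
            "AccessGrantsLocationId": "635f1139-1af2-4e43-8131-a4de006eb456",
            "LocationScope": "s3://amzn-s3-demo-bucket/prefixA*",
        },
    ]
}

def locations_under(response, uri):
    """Return registered locations whose scope falls under `uri`."""
    return [
        loc for loc in response["AccessGrantsLocationsList"]
        # Strip a trailing wildcard before the prefix comparison.
        if loc["LocationScope"].rstrip("*").startswith(uri)
    ]

print([loc["AccessGrantsLocationId"]
       for loc in locations_under(response, "s3://amzn-s3-demo-bucket")])
```

Filtering server-side with `--location-scope` is cheaper for large instances; this client-side variant is useful when you already hold the full listing.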

## Using the REST API


For information about the Amazon S3 REST API support for getting the details of a registered location or listing all of the locations that are registered with an S3 Access Grants instance, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html) 

## Using the AWS SDKs


This section provides examples of how to get the details of a registered location or list all of the registered locations in an S3 Access Grants instance by using the AWS SDKs.

To use the following examples, replace the `user input placeholders` with your own information.

------
#### [ Java ]

**Example – Get the details of a registered location**  

```
public void getAccessGrantsLocation() {
GetAccessGrantsLocationRequest getAccessGrantsLocationRequest = GetAccessGrantsLocationRequest.builder()
.accountId("111122223333")
.accessGrantsLocationId("default")
.build();
GetAccessGrantsLocationResponse getAccessGrantsLocationResponse = s3Control.getAccessGrantsLocation(getAccessGrantsLocationRequest);
LOGGER.info("GetAccessGrantsLocationResponse: " + getAccessGrantsLocationResponse);
}
```
Response:  

```
GetAccessGrantsLocationResponse(
CreatedAt=2023-06-07T04:35:10.027Z,
AccessGrantsLocationId=default,
AccessGrantsLocationArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/location/default,
LocationScope=s3://,
IAMRoleArn=arn:aws:iam::111122223333:role/accessGrantsTestRole
)
```

**Example – List all registered locations in an S3 Access Grants instance**  
To restrict the results to an S3 prefix or bucket, you can optionally pass an S3 URI, such as `s3://bucket-and-or-prefix`, in the `LocationScope` parameter.   

```
public void listAccessGrantsLocations() {

ListAccessGrantsLocationsRequest listRequest =   ListAccessGrantsLocationsRequest.builder()
.accountId("111122223333")
.build();

ListAccessGrantsLocationsResponse listResponse = s3Control.listAccessGrantsLocations(listRequest);
LOGGER.info("ListAccessGrantsLocationsResponse: " + listResponse);
}
```
Response:  

```
ListAccessGrantsLocationsResponse(
AccessGrantsLocationsList=[
ListAccessGrantsLocationsEntry(
CreatedAt=2023-06-07T04:35:11.027Z,
AccessGrantsLocationId=default,
AccessGrantsLocationArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/location/default,
LocationScope=s3://,
IAMRoleArn=arn:aws:iam::111122223333:role/accessGrantsTestRole
),
ListAccessGrantsLocationsEntry(
CreatedAt=2023-06-07T04:35:10.027Z,
AccessGrantsLocationId=635f1139-1af2-4e43-8131-a4de006eb456,
AccessGrantsLocationArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/location/635f1139-1af2-4e43-8131-a4de006eb456,
LocationScope=s3://amzn-s3-demo-bucket/prefixA*,
IAMRoleArn=arn:aws:iam::111122223333:role/accessGrantsTestRole
)
]
)
```

------

# Update a registered location


You can update the AWS Identity and Access Management (IAM) role of a location that's registered in your Amazon S3 Access Grants instance. For each new IAM role that you use to register a location in S3 Access Grants, be sure to give the S3 Access Grants service principal (`access-grants.s3.amazonaws.com`) access to this role. To do this, add an entry for the new IAM role in the same trust policy JSON file that you used when you first [registered the location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html). 

You can update a location in your S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


**To update the IAM role of a location registered with your S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. Choose **View details** for the instance.

1. On the details page for the instance, choose the **Locations** tab.

1. Find the location that you want to update. To filter the list of locations, use the search box.

1. Choose the option button next to the registered location that you want to update.

1. Update the IAM role, and then choose **Save changes**.

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example – Update the IAM role of a registered location**  

```
aws s3control update-access-grants-location \
--account-id 111122223333 \
--access-grants-location-id 635f1139-1af2-4e43-8131-a4de006eb999 \
--iam-role-arn arn:aws:iam::777788889999:role/accessGrantsTestRole
```
Response:  

```
{
    "CreatedAt": "2023-05-31T18:23:48.107000+00:00",
    "AccessGrantsLocationId": "635f1139-1af2-4e43-8131-a4de006eb999",
    "AccessGrantsLocationArn": "arn:aws:s3:us-east-2:777788889999:access-grants/default/location/635f1139-1af2-4e43-8131-a4de006eb999",
    "LocationScope": "s3://amzn-s3-demo-bucket/prefixB*",
    "IAMRoleArn": "arn:aws:iam::777788889999:role/accessGrantsTestRole"
}
```

## Using the REST API


For information on the Amazon S3 REST API support for updating a location in an S3 Access Grants instance, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


This section provides examples of how to update the IAM role of a registered location by using the AWS SDKs.

To use the following example, replace the `user input placeholders` with your own information.

------
#### [ Java ]

**Example – Update the IAM role of a registered location**  

```
public void updateAccessGrantsLocation() {
UpdateAccessGrantsLocationRequest updateRequest = UpdateAccessGrantsLocationRequest.builder()
.accountId("111122223333")
.accessGrantsLocationId("635f1139-1af2-4e43-8131-a4de006eb999")
.iamRoleArn("arn:aws:iam::777788889999:role/accessGrantsTestRole")
.build();
UpdateAccessGrantsLocationResponse updateResponse = s3Control.updateAccessGrantsLocation(updateRequest);
LOGGER.info("UpdateAccessGrantsLocationResponse: " + updateResponse);
}
```
Response:  

```
UpdateAccessGrantsLocationResponse(
CreatedAt=2023-06-07T04:35:10.027Z,
AccessGrantsLocationId=635f1139-1af2-4e43-8131-a4de006eb999,
AccessGrantsLocationArn=arn:aws:s3:us-east-2:777788889999:access-grants/default/location/635f1139-1af2-4e43-8131-a4de006eb999,
LocationScope=s3://amzn-s3-demo-bucket/prefixB*,
IAMRoleArn=arn:aws:iam::777788889999:role/accessGrantsTestRole
)
```

------

# Delete a registered location


You can delete a location registration from an Amazon S3 Access Grants instance. Deleting the location deregisters it from the S3 Access Grants instance. 

Before you can remove a location registration from an S3 Access Grants instance, you must delete all of the grants that are associated with this location. For information about how to delete grants, see [Delete a grant](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-grant-delete.html). 
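The required teardown order, grants first and then the location, can be expressed as a simple client-side check. This is an illustrative sketch; the grant IDs and the mapping structure are hypothetical, and the service itself enforces this rule by rejecting the delete request:

```python
def can_deregister(location_id, grants):
    """Check whether a location can be deregistered.

    `grants` maps grant IDs to the location ID they were created against.
    Returns (ok, blocking_grant_ids).
    """
    blocking = [g for g, loc in grants.items() if loc == location_id]
    return (len(blocking) == 0, blocking)

# Hypothetical grants referencing two registered locations.
grants = {"grant-1": "default", "grant-2": "635f1139"}
ok, blocking = can_deregister("default", grants)
print(ok, blocking)
```

Deleting `grant-1` first would make `can_deregister("default", ...)` return `True`, mirroring the order the service requires.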

You can delete a location in your S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


**To delete a location registration from your S3 Access Grants instance**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. Choose **View details** for the instance.

1. On the details page for the instance, choose the **Locations** tab.

1. Find the location that you want to delete. To filter the list of locations, use the search box.

1. Choose the option button next to the registered location that you want to delete.

1. Choose **Deregister**.

1. A dialog box appears that warns you that this action can't be undone. To delete the location, choose **Deregister**.

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example – Delete a location registration**  

```
aws s3control delete-access-grants-location \
--account-id 111122223333 \
--access-grants-location-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111
```
This command doesn't return a response body.

## Using the REST API


For information about the Amazon S3 REST API support for deleting a location from an S3 Access Grants instance, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


This section provides an example of how to delete a location by using the AWS SDKs.

To use the following example, replace the `user input placeholders` with your own information.

------
#### [ Java ]

**Example – Delete a location registration**  

```
public void deleteAccessGrantsLocation() {
DeleteAccessGrantsLocationRequest deleteRequest = DeleteAccessGrantsLocationRequest.builder()
.accountId("111122223333")
.accessGrantsLocationId("a1b2c3d4-5678-90ab-cdef-EXAMPLE11111")
.build();
DeleteAccessGrantsLocationResponse deleteResponse = s3Control.deleteAccessGrantsLocation(deleteRequest);
LOGGER.info("DeleteAccessGrantsLocationResponse: " + deleteResponse);
}
```
Response:  

```
DeleteAccessGrantsLocationResponse()
```

------

# Working with grants in S3 Access Grants


An individual access *grant* in an S3 Access Grants instance allows a specific identity—an AWS Identity and Access Management (IAM) principal, or a user or group in a corporate directory—to get access within a location that is registered in your S3 Access Grants instance. A location maps buckets or prefixes to an IAM role. S3 Access Grants assumes this IAM role to vend temporary credentials to grantees. 

After you [register at least one location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html) in your S3 Access Grants instance, you can create an access grant.

The grantee can be an IAM user or role or a directory user or group. A directory user is a user from your corporate directory or external identity source that you [associated with your S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-idc.html). For more information, see [S3 Access Grants and corporate directory identities](access-grants-directory-ids.md). To create a grant for a specific directory user or group from IAM Identity Center, find the GUID that IAM Identity Center uses to identify that user or group, for example, `a1b2c3d4-5678-90ab-cdef-EXAMPLE11111`. For more information about how to view user information in IAM Identity Center, see [View user and group assignments](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-view-assignments.html) in the *AWS IAM Identity Center User Guide*. 

You can grant access to a bucket, a prefix, or an object. A prefix in Amazon S3 is a string of characters at the beginning of an object key name that is used to organize objects within a bucket. A prefix can be any string of allowed characters; for example, object key names in your bucket that start with `engineering/` share the `engineering/` prefix. 
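Conceptually, a prefix is plain string matching on key names. The following sketch (simple Python with hypothetical key names, not an S3 API call) shows which keys fall under the `engineering/` prefix:

```python
# Hypothetical object key names in a bucket.
keys = [
    "engineering/design.pdf",
    "engineering/specs/api.md",
    "finance/q1-report.xlsx",
]

def keys_with_prefix(keys, prefix):
    """Return the key names that start with the given prefix,
    mirroring how a prefix groups objects in a bucket."""
    return [key for key in keys if key.startswith(prefix)]

print(keys_with_prefix(keys, "engineering/"))
# ['engineering/design.pdf', 'engineering/specs/api.md']
```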

**Topics**
+ [Create grants](access-grants-grant-create.md)
+ [View a grant](access-grants-grant-view.md)
+ [Delete a grant](access-grants-grant-delete.md)

# Create grants


An individual access *grant* in an S3 Access Grants instance allows a specific identity—an AWS Identity and Access Management (IAM) principal, or a user or group in a corporate directory—to get access within a location that is registered in your S3 Access Grants instance. A location maps buckets or prefixes to an IAM role. S3 Access Grants assumes this IAM role to vend temporary credentials to grantees. 

After you [register at least one location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html) in your S3 Access Grants instance, you can create an access grant.

The grantee can be an IAM user or role or a directory user or group. A directory user is a user from your corporate directory or external identity source that you [associated with your S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-idc.html). For more information, see [S3 Access Grants and corporate directory identities](access-grants-directory-ids.md). To create a grant for a specific directory user or group from IAM Identity Center, find the GUID that IAM Identity Center uses to identify that user or group, for example, `a1b2c3d4-5678-90ab-cdef-EXAMPLE11111`. For more information about how to view user information in IAM Identity Center, see [View user and group assignments](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-view-assignments.html) in the *AWS IAM Identity Center User Guide*. 

You can grant access to a bucket, a prefix, or an object. A prefix in Amazon S3 is a string of characters at the beginning of an object key name that is used to organize objects within a bucket. A prefix can be any string of allowed characters; for example, object key names in your bucket that start with `engineering/` share the `engineering/` prefix. 

## Subprefix


When granting access to a registered location, you can use the `Subprefix` field to narrow the scope of access to a subset of the location scope. If the registered location that you choose for the grant is the default S3 path (`s3://`), you must narrow the grant scope. You cannot create an access grant for the default location (`s3://`), which would give the grantee access to every bucket in an AWS Region. Instead, you must narrow the grant scope to one of the following:
+ A bucket: `s3://bucket/*`
+ A prefix within a bucket: `s3://bucket/prefix*`
+ A prefix within a prefix: `s3://bucket/prefixA/prefixB*`
+ An object: `s3://bucket/object-key-name`

If you are creating an access grant where the registered location is a bucket, you can pass one of the following in the `Subprefix` field to narrow the grant scope:
+ A prefix within the bucket: `prefix*`
+ A prefix within a prefix: `prefixA/prefixB*`
+ An object: `/object-key-name`

After you create the grant, the grant scope that's displayed in the Amazon S3 console or the `GrantScope` that is returned in the API or AWS Command Line Interface (AWS CLI) response is the result of concatenating the location path with the `Subprefix`. Make sure that this concatenated path maps correctly to the S3 bucket, prefix, or object to which you want to grant access.
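As a rough sketch of the concatenation described above (simple string concatenation in Python; the paths are hypothetical, and this is not the exact service-side logic):

```python
def grant_scope(location_path, subprefix):
    """Concatenate a registered location path with a Subprefix to form
    the displayed grant scope (a sketch of the behavior described above)."""
    return location_path + subprefix

# Default location: the Subprefix carries the bucket name.
print(grant_scope("s3://", "amzn-s3-demo-bucket/engineering/*"))
# s3://amzn-s3-demo-bucket/engineering/*

# Bucket location: the Subprefix is relative to the bucket.
print(grant_scope("s3://amzn-s3-demo-bucket/", "engineering/*"))
# s3://amzn-s3-demo-bucket/engineering/*
```

Checking the concatenated result this way can help you confirm that the grant scope maps to the bucket, prefix, or object that you intend.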

**Note**  
If you need to create an access grant that grants access to only one object, you must specify that the grant type is for an object. To do this in an API call or a CLI command, pass the `s3PrefixType` parameter with the value `Object`. In the Amazon S3 console, when you create the grant, after you select a location, under **Grant Scope**, select the **Grant scope is an object** checkbox.
You cannot create a grant to a bucket if the bucket does not yet exist. However, you can create a grant to a prefix that does not yet exist. 
For the maximum number of grants that you can create in your S3 Access Grants instance, see [S3 Access Grants limitations](access-grants-limitations.md).

You can create an access grant by using the Amazon S3 console, AWS CLI, the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console


**To create an access grant**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

   If you're using the S3 Access Grants instance for the first time, make sure that you have completed [Step 2 - register a location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html) and navigated to **Step 3** of the **Set up Access Grants instance** wizard. If you already have an S3 Access Grants instance, choose **View details**, and then from the **Grants** tab, choose **Create grant**.

   1. In the **Grant scope** section, select or enter a registered location. 

      If you selected the default `s3://` location, use the **Subprefix** box to narrow the scope of the access grant. For more information, see [Subprefix](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-grant.html#subprefix). If you're granting access only to an object, select **Grant scope is an object**.

   1. Under **Permissions and access**, select the **Permission** level, either **Read**, **Write**, or both. 

      Then choose the **Grantee type**. If you have added your corporate directory to IAM Identity Center and associated this IAM Identity Center instance with your S3 Access Grants instance, you can choose **Directory identity from IAM Identity Center**. If you choose this option, get the ID of the user or group from IAM Identity Center and enter it in this section. 

      If the **Grantee type** is an IAM user or role, choose **IAM principal**. Under **IAM principal type**, choose **User** or **Role**. Then, under **IAM principal user**, either choose from the list or enter the identity's ID. 

   1. To create the access grant, choose **Next** or **Create grant**.

1. If **Next** or **Create grant** is disabled, check the following:
   + You might need to [register a location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html) in your S3 Access Grants instance first.
   + You might not have the `s3:CreateAccessGrant` permission that's required to create an access grant. Contact your account administrator. 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

The following examples show how to create an access grant request for an IAM principal and how to create an access grant request for a corporate directory user or group. 

To use the following example commands, replace the `user input placeholders` with your own information.

**Note**  
If you're creating an access grant that grants access to only one object, include the required parameter `--s3-prefix-type Object`.

**Example Create an access grant request for an IAM principal**  

```
aws s3control create-access-grant \
--account-id 111122223333 \
--access-grants-location-id a1b2c3d4-5678-90ab-cdef-EXAMPLE22222 \
--access-grants-location-configuration S3SubPrefix=prefixB* \
--permission READ \
--grantee GranteeType=IAM,GranteeIdentifier=arn:aws:iam::111122223333:user/data-consumer-3
```

**Example Create an access grant response**  

```
{
    "CreatedAt": "2023-05-31T18:41:34.663000+00:00",
    "AccessGrantId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "AccessGrantArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/grant/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "Grantee": {
        "GranteeType": "IAM",
        "GranteeIdentifier": "arn:aws:iam::111122223333:user/data-consumer-3"
    },
    "AccessGrantsLocationId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "AccessGrantsLocationConfiguration": {
        "S3SubPrefix": "prefixB*"
    },
    "GrantScope": "s3://amzn-s3-demo-bucket/prefixB*",
    "Permission": "READ"
}
```

**Create an access grant request for a directory user or group**  
To create an access grant request for a directory user or group, you must first get the GUID for the directory user or group by running one of the following commands.

**Example Get a GUID for a directory user or group**  
You can find the GUID of an IAM Identity Center user through the IAM Identity Center console or by using the AWS CLI or AWS SDKs. The following command lists the users in the specified IAM Identity Center instance, with their names and identifiers.  

```
aws identitystore list-users --identity-store-id d-1a2b3c4d1234 
```
This command lists the groups in the specified IAM Identity Center instance.  

```
aws identitystore list-groups --identity-store-id d-1a2b3c4d1234
```

**Example Create an access grant for a directory user or group**  
This command is similar to creating a grant for IAM users or roles, except the grantee type is `DIRECTORY_USER` or `DIRECTORY_GROUP`, and the grantee identifier is the GUID for the directory user or group.  

```
aws s3control create-access-grant \
--account-id 123456789012 \
--access-grants-location-id default \
--access-grants-location-configuration S3SubPrefix="amzn-s3-demo-bucket/rafael/*" \
--permission READWRITE \
--grantee GranteeType=DIRECTORY_USER,GranteeIdentifier=83d43802-00b1-7054-db02-f1d683aacba5
```

## Using the REST API


For information about the Amazon S3 REST API support for managing access grants, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [CreateAccessGrant](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrant.html) 
+  [DeleteAccessGrant](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrant.html) 
+  [GetAccessGrant](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrant.html) 
+  [ListAccessGrants](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrants.html)

## Using the AWS SDKs


This section provides examples of how to create an access grant by using the AWS SDKs.

------
#### [ Java ]

To use the following example, replace the `user input placeholders` with your own information:

**Note**  
If you are creating an access grant that grants access to only one object, include the required parameter `.s3PrefixType(S3PrefixType.Object)`.

**Example Create an access grant request**  

```
public void createAccessGrant() {
    CreateAccessGrantRequest createRequest = CreateAccessGrantRequest.builder()
            .accountId("111122223333")
            .accessGrantsLocationId("a1b2c3d4-5678-90ab-cdef-EXAMPLEaaaaa")
            .permission("READ")
            .accessGrantsLocationConfiguration(AccessGrantsLocationConfiguration.builder()
                    .s3SubPrefix("prefixB*")
                    .build())
            .grantee(Grantee.builder()
                    .granteeType("IAM")
                    .granteeIdentifier("arn:aws:iam::111122223333:user/data-consumer-3")
                    .build())
            .build();
    CreateAccessGrantResponse createResponse = s3Control.createAccessGrant(createRequest);
    LOGGER.info("CreateAccessGrantResponse: " + createResponse);
}
```

**Example Create an access grant response**  

```
CreateAccessGrantResponse(
CreatedAt=2023-06-07T05:20:26.330Z,
AccessGrantId=a1b2c3d4-5678-90ab-cdef-EXAMPLE33333,
AccessGrantArn=arn:aws:s3:us-east-2:444455556666:access-grants/default/grant/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333,
Grantee=Grantee(
GranteeType=IAM,
GranteeIdentifier=arn:aws:iam::111122223333:user/data-consumer-3
),
AccessGrantsLocationId=a1b2c3d4-5678-90ab-cdef-EXAMPLEaaaaa,
AccessGrantsLocationConfiguration=AccessGrantsLocationConfiguration(
S3SubPrefix=prefixB*
),
GrantScope=s3://amzn-s3-demo-bucket/prefixB*,
Permission=READ
)
```

------

# View a grant


You can view the details of an access grant in your Amazon S3 Access Grants instance by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


**To view the details of an access grant**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. Choose **View details** for the instance.

1. On the details page, choose the **Grants** tab.

1. In the **Grants** section, find the access grant that you want to view. To filter the list of grants, use the search box. 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example commands, replace the `user input placeholders` with your own information.

**Example – Get the details of an access grant**  

```
aws s3control get-access-grant \
--account-id 111122223333 \
--access-grant-id a1b2c3d4-5678-90ab-cdef-EXAMPLE22222
```
Response:  

```
{
    "CreatedAt": "2023-05-31T18:41:34.663000+00:00",
    "AccessGrantId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "AccessGrantArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/grant/a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "Grantee": {
        "GranteeType": "IAM",
        "GranteeIdentifier": "arn:aws:iam::111122223333:user/data-consumer-3"
    },
    "Permission": "READ",
    "AccessGrantsLocationId": "12a6710f-5af8-41f5-b035-0bc795bf1a2b",
    "AccessGrantsLocationConfiguration": {
        "S3SubPrefix": "prefixB*"
    },
    "GrantScope": "s3://amzn-s3-demo-bucket/prefixB*"
}
```

**Example – List all of the access grants in an S3 Access Grants instance**  
You can optionally use the following parameters to restrict the results to an S3 prefix or AWS Identity and Access Management (IAM) identity:  
+ **Subprefix** – `--grant-scope s3://bucket-name/prefix*`
+ **IAM identity** – `--grantee-type IAM` and `--grantee-identifier arn:aws:iam::123456789000:role/accessGrantsConsumerRole`

```
aws s3control list-access-grants \
--account-id 111122223333
```
Response:  

```
{
    "AccessGrantsList": [
        {
            "CreatedAt": "2023-06-14T17:54:46.542000+00:00",
            "AccessGrantId": "dd8dd089-b224-4d82-95f6-975b4185bbaa",
            "AccessGrantArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/grant/dd8dd089-b224-4d82-95f6-975b4185bbaa",
            "Grantee": {
                "GranteeType": "IAM",
                "GranteeIdentifier": "arn:aws:iam::111122223333:user/data-consumer-3"
            },
            "Permission": "READ",
            "AccessGrantsLocationId": "23514a34-ea2e-4ddf-b425-d0d4bfcarda1",
            "GrantScope": "s3://amzn-s3-demo-bucket/prefixA*"
        },
        {
            "CreatedAt": "2023-06-24T17:54:46.542000+00:00",
            "AccessGrantId": "ee8ee089-b224-4d72-85f6-975b4185a1b2",
            "AccessGrantArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default/grant/ee8ee089-b224-4d72-85f6-975b4185a1b2",
            "Grantee": {
                "GranteeType": "IAM",
                "GranteeIdentifier": "arn:aws:iam::111122223333:user/data-consumer-9"
            },
            "Permission": "READ",
            "AccessGrantsLocationId": "12414a34-ea2e-4ddf-b425-d0d4bfcacao0",
            "GrantScope": "s3://amzn-s3-demo-bucket/prefixB*"
        }
    ]
}
```

## Using the REST API


You can use Amazon S3 API operations to view the details of an access grant and list all access grants in an S3 Access Grants instance. For information about the REST API support for managing access grants, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [GetAccessGrant](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrant.html) 
+  [ListAccessGrants](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrants.html) 

## Using the AWS SDKs


This section provides examples of how to get the details of an access grant by using the AWS SDKs.

To use the following examples, replace the `user input placeholders` with your own information.

------
#### [ Java ]



**Example – Get the details of an access grant**  

```
public void getAccessGrant() {
    GetAccessGrantRequest getRequest = GetAccessGrantRequest.builder()
            .accountId("111122223333")
            .accessGrantId("a1b2c3d4-5678-90ab-cdef-EXAMPLE22222")
            .build();
    GetAccessGrantResponse getResponse = s3Control.getAccessGrant(getRequest);
    LOGGER.info("GetAccessGrantResponse: " + getResponse);
}
```
Response:  

```
GetAccessGrantResponse(
CreatedAt=2023-06-07T05:20:26.330Z,
AccessGrantId=a1b2c3d4-5678-90ab-cdef-EXAMPLE22222,
AccessGrantArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/grant/a1b2c3d4-5678-90ab-cdef-EXAMPLE22222,
Grantee=Grantee(
GranteeType=IAM,
GranteeIdentifier=arn:aws:iam::111122223333:user/data-consumer-3
),
Permission=READ,
AccessGrantsLocationId=12a6710f-5af8-41f5-b035-0bc795bf1a2b,
AccessGrantsLocationConfiguration=AccessGrantsLocationConfiguration(
S3SubPrefix=prefixB*
),
GrantScope=s3://amzn-s3-demo-bucket/prefixB*
)
```

**Example – List all of the access grants in an S3 Access Grants instance**  
You can optionally use these parameters to restrict the results to an S3 prefix or IAM identity:  
+ **Scope** – `GrantScope=s3://bucket-name/prefix*`
+ **Grantee** – `GranteeType=IAM` and `GranteeIdentifier=arn:aws:iam::111122223333:role/accessGrantsConsumerRole`

```
public void listAccessGrants() {
    ListAccessGrantsRequest listRequest = ListAccessGrantsRequest.builder()
            .accountId("111122223333")
            .build();
    ListAccessGrantsResponse listResponse = s3Control.listAccessGrants(listRequest);
    LOGGER.info("ListAccessGrantsResponse: " + listResponse);
}
```
Response:  

```
ListAccessGrantsResponse(
AccessGrantsList=[
ListAccessGrantEntry(
CreatedAt=2023-06-14T17:54:46.540Z,
AccessGrantId=dd8dd089-b224-4d82-95f6-975b4185bbaa,
AccessGrantArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/grant/dd8dd089-b224-4d82-95f6-975b4185bbaa,
Grantee=Grantee(
GranteeType=IAM, GranteeIdentifier= arn:aws:iam::111122223333:user/data-consumer-3
),
Permission=READ,
AccessGrantsLocationId=23514a34-ea2e-4ddf-b425-d0d4bfcarda1,
GrantScope=s3://amzn-s3-demo-bucket/prefixA*
),
ListAccessGrantEntry(
CreatedAt=2023-06-24T17:54:46.540Z,
AccessGrantId=ee8ee089-b224-4d72-85f6-975b4185a1b2,
AccessGrantArn=arn:aws:s3:us-east-2:111122223333:access-grants/default/grant/ee8ee089-b224-4d72-85f6-975b4185a1b2,
Grantee=Grantee(
GranteeType=IAM, GranteeIdentifier= arn:aws:iam::111122223333:user/data-consumer-9
),
Permission=READ,
AccessGrantsLocationId=12414a34-ea2e-4ddf-b425-d0d4bfcacao0,
GrantScope=s3://amzn-s3-demo-bucket/prefixB* 
)
]
)
```

------

# Delete a grant


You can delete access grants from your Amazon S3 Access Grants instance. You can't undo an access grant deletion. After you delete an access grant, the grantee will no longer have access to your Amazon S3 data.

You can delete an access grant by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console


**To delete an access grant**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Grants**.

1. On the **S3 Access Grants** page, choose the Region that contains the S3 Access Grants instance that you want to work with.

1. Choose **View details** for the instance.

1. On the details page, choose the **Grants** tab. 

1. Search for the grant that you want to delete. When you locate the grant, choose the radio button next to it. 

1. Choose **Delete**. A dialog box appears with a warning that your action can't be undone. Choose **Delete** again to delete the grant. 

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example – Delete an access grant**  

```
aws s3control delete-access-grant \
--account-id 111122223333 \
--access-grant-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 

// No response body
```

## Using the REST API


For information about the Amazon S3 REST API support for managing access grants, see [DeleteAccessGrant](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrant.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


This section provides examples of how to delete an access grant by using the AWS SDKs. To use the following example, replace the `user input placeholders` with your own information.

------
#### [ Java ]

**Example – Delete an access grant**  

```
public void deleteAccessGrant() {
    DeleteAccessGrantRequest deleteRequest = DeleteAccessGrantRequest.builder()
            .accountId("111122223333")
            .accessGrantId("a1b2c3d4-5678-90ab-cdef-EXAMPLE11111")
            .build();
    DeleteAccessGrantResponse deleteResponse = s3Control.deleteAccessGrant(deleteRequest);
    LOGGER.info("DeleteAccessGrantResponse: " + deleteResponse);
}
```
Response:  

```
DeleteAccessGrantResponse()
```

------

# Getting S3 data using access grants


Grantees who have been given access to S3 data through S3 Access Grants must request temporary credentials from S3 Access Grants, which they use to access the S3 data. For more information, see [Request access to Amazon S3 data through S3 Access Grants](access-grants-credentials.md). Grantees then use the temporary credentials to perform allowable S3 actions on the S3 data. For more information, see [Accessing S3 data using credentials vended by S3 Access Grants](access-grants-get-data.md). Grantees can optionally request a list of their access grants for an AWS account before requesting these credentials. For more information, see [List the caller's access grants](access-grants-list-grants.md). 

**Topics**
+ [Request access to Amazon S3 data through S3 Access Grants](access-grants-credentials.md)
+ [Accessing S3 data using credentials vended by S3 Access Grants](access-grants-get-data.md)
+ [List the caller's access grants](access-grants-list-grants.md)

# Request access to Amazon S3 data through S3 Access Grants


After you [create an access grant](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-grant.html) using S3 Access Grants, grantees can request credentials to access the S3 data that they were granted access to. Grantees can be AWS Identity and Access Management (IAM) principals, your corporate directory identities, or authorized applications. 

An application or AWS service can use the S3 Access Grants `GetDataAccess` API operation to ask S3 Access Grants for access to your S3 data on behalf of a grantee. `GetDataAccess` first verifies that you have granted this identity access to the data. Then, S3 Access Grants uses the [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) API operation to obtain a temporary credential token and vends it to the requester. This temporary credential token is an AWS Security Token Service (AWS STS) token.

The `GetDataAccess` request must include the `target` parameter, which specifies the scope of the S3 data that the temporary credentials apply to. This `target` scope can be the same as the scope of the grant or a subset of that scope, but the `target` scope must be within the scope of the grant that was given to the grantee. The request must also specify the `permission` parameter to indicate the permission level for the temporary credentials, whether `READ`, `WRITE`, or `READWRITE`.

**Privilege**  
The requester can specify the privilege level of the temporary token in their credential request. Using the `privilege` parameter, the requester can reduce or increase the temporary credentials' scope of access, within the boundaries of the grant scope. The default value of the `privilege` parameter is `Default`, which means that the target scope of the returned credentials is the original grant scope. The other possible value for `privilege` is `Minimal`. With `Minimal`, if the `target` scope is narrower than the original grant scope, the temporary credentials are de-scoped to match the `target` scope, as long as the `target` scope is within the grant scope. 

The following table details the effect of the `privilege` parameter on two grants. One grant has the scope `s3://amzn-s3-demo-bucket1/bob/*`, which includes the entire `bob/` prefix in the `amzn-s3-demo-bucket1` bucket. The other grant has the scope `s3://amzn-s3-demo-bucket1/bob/reports/*`, which includes only the `bob/reports/` prefix in the `amzn-s3-demo-bucket1` bucket. 


|  Grant scope  |  Requested scope  |  Privilege  |  Returned scope  |  Effect  | 
| --- | --- | --- | --- | --- | 
| s3://amzn-s3-demo-bucket1/bob/\* | amzn-s3-demo-bucket1/bob/\* | Default  | amzn-s3-demo-bucket1/bob/\*  |  The requester has access to all objects that have key names that start with the prefix `bob/` in the `amzn-s3-demo-bucket1` bucket.  | 
| s3://amzn-s3-demo-bucket1/bob/\* | amzn-s3-demo-bucket1/bob/  | Minimal  | amzn-s3-demo-bucket1/bob/  |  Without a wildcard character (`*`) after the prefix name `bob/`, the requester has access to only the object named `bob/` in the `amzn-s3-demo-bucket1` bucket. It's not common to have such an object. The requester doesn't have access to any other objects, including those that have key names that start with the `bob/` prefix.  | 
| s3://amzn-s3-demo-bucket1/bob/\* | amzn-s3-demo-bucket1/bob/images/\*  | Minimal  | amzn-s3-demo-bucket1/bob/images/\*  |  The requester has access to all objects that have key names that start with the prefix `bob/images/` in the `amzn-s3-demo-bucket1` bucket.  | 
| s3://amzn-s3-demo-bucket1/bob/reports/\* | amzn-s3-demo-bucket1/bob/reports/file.txt  | Default  | amzn-s3-demo-bucket1/bob/reports/\*  |  The requester has access to all objects that have key names that start with the `bob/reports/` prefix in the `amzn-s3-demo-bucket1` bucket, which is the scope of the matching grant.  | 
| s3://amzn-s3-demo-bucket1/bob/reports/\* | amzn-s3-demo-bucket1/bob/reports/file.txt  | Minimal  | amzn-s3-demo-bucket1/bob/reports/file.txt  |  The requester has access only to the object with the key name `bob/reports/file.txt` in the `amzn-s3-demo-bucket1` bucket. The requester has no access to any other object.   | 
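The `privilege` behavior in the preceding table can be sketched as pure prefix logic. This is an illustrative Python sketch that assumes a trailing `*` is the only wildcard; it is not the service implementation:

```python
def returned_scope(grant_scope, requested_scope, privilege):
    """Sketch of the scoping behavior in the table above.

    A requested scope falls within a grant scope that ends in '*' if it
    matches the part before the wildcard. 'Default' returns the full
    grant scope; 'Minimal' narrows to the requested scope.
    """
    prefix = grant_scope.rstrip("*")
    if not (requested_scope == grant_scope or requested_scope.startswith(prefix)):
        raise ValueError("requested scope is outside the grant scope")
    return grant_scope if privilege == "Default" else requested_scope

# Default returns the matching grant's full scope.
print(returned_scope("s3://amzn-s3-demo-bucket1/bob/reports/*",
                     "s3://amzn-s3-demo-bucket1/bob/reports/file.txt",
                     "Default"))
# s3://amzn-s3-demo-bucket1/bob/reports/*

# Minimal narrows to exactly what was requested.
print(returned_scope("s3://amzn-s3-demo-bucket1/bob/reports/*",
                     "s3://amzn-s3-demo-bucket1/bob/reports/file.txt",
                     "Minimal"))
# s3://amzn-s3-demo-bucket1/bob/reports/file.txt
```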

**Directory identities**  
`GetDataAccess` considers all of the identities involved in a request when matching suitable grants. For corporate directory identities, `GetDataAccess` also evaluates the grants of the IAM identity that is used for the identity-aware session. For more information about identity-aware sessions, see [Granting permissions to use identity-aware console sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_sts-setcontext.html) in the *AWS Identity and Access Management User Guide*. `GetDataAccess` generates credentials whose scope is restricted to the most restrictive matching grant, as shown in the following table:


|  Grant scope for IAM identity  |  Grant scope for directory identity  |  Requested scope  |  Returned scope  |  Privilege  |  Effect  | 
| --- | --- | --- | --- | --- | --- | 
| s3://amzn-s3-demo-bucket1/bob/\* | amzn-s3-demo-bucket1/bob/images/\* | s3://amzn-s3-demo-bucket1/bob/images/image1.jpeg  | s3://amzn-s3-demo-bucket1/bob/images/\*  | Default |  The requester has access to all of the objects that have key names that start with the prefix `bob/` as part of the grant for the IAM role but is restricted to the prefix `bob/images/` as part of the grant for the directory identity. Both the IAM role and the directory identity provide access to the requested scope, `bob/images/image1.jpeg`, but the directory identity has the more restrictive grant. So the returned scope is restricted to the grant for the directory identity.  | 
| s3://amzn-s3-demo-bucket1/bob/\* | amzn-s3-demo-bucket1/bob/images/\* | s3://amzn-s3-demo-bucket1/bob/images/image1.jpeg  | s3://amzn-s3-demo-bucket1/bob/images/image1.jpeg  | Minimal |  Because the privilege is set to `Minimal`, even though the identity has access to a bigger scope, only the requested scope, `bob/images/image1.jpeg`, is returned.  | 
| s3://amzn-s3-demo-bucket1/bob/images/\* | amzn-s3-demo-bucket1/bob/\* | s3://amzn-s3-demo-bucket1/bob/images/image1.jpeg  | s3://amzn-s3-demo-bucket1/bob/images/\*  | Default |  The requester has access to all of the objects that have key names that start with the prefix `bob/` as part of the grant for the directory identity but is restricted to the prefix `bob/images/` as part of the grant for the IAM role. Both the IAM role and the directory identity provide access to the requested scope, `bob/images/image1.jpeg`, but the IAM role has the more restrictive grant. So the returned scope is restricted to the grant for the IAM role.  | 
| s3://amzn-s3-demo-bucket1/bob/images/\* | amzn-s3-demo-bucket1/bob/\* | s3://amzn-s3-demo-bucket1/bob/images/image1.jpeg  | s3://amzn-s3-demo-bucket1/bob/images/image1.jpeg  | Minimal |  Because the privilege is set to `Minimal`, even though the identity has access to a bigger scope, only the requested scope, `bob/images/image1.jpeg`, is returned.  | 
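A minimal sketch of the "most restrictive grant wins" rule shown above, assuming one scope is a prefix of the other (illustrative only, not the service implementation):

```python
def effective_scope(iam_scope, directory_scope):
    """Sketch: when grants for both an IAM identity and a directory
    identity match a request, the more restrictive (longer-prefix)
    scope determines the returned scope."""
    iam_prefix = iam_scope.rstrip("*")
    dir_prefix = directory_scope.rstrip("*")
    if dir_prefix.startswith(iam_prefix):
        return directory_scope
    if iam_prefix.startswith(dir_prefix):
        return iam_scope
    raise ValueError("scopes do not overlap")

print(effective_scope("s3://amzn-s3-demo-bucket1/bob/*",
                      "s3://amzn-s3-demo-bucket1/bob/images/*"))
# s3://amzn-s3-demo-bucket1/bob/images/*
```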

**Duration**  
The `durationSeconds` parameter sets the temporary credential's duration, in seconds. The default value is `3600` seconds (1 hour), but the requester (the grantee) can specify a range from `900` seconds (15 minutes) up to `43200` seconds (12 hours). If the grantee requests a value higher than this maximum, the request fails. 
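
The documented bounds can be expressed as a small validation helper. The constants come from the paragraph above; the function itself is my own sketch, not part of any AWS SDK:

```
def resolve_duration_seconds(requested=None):
    """Apply the documented durationSeconds default and bounds."""
    DEFAULT, MINIMUM, MAXIMUM = 3600, 900, 43200
    if requested is None:
        return DEFAULT  # 1 hour
    if not MINIMUM <= requested <= MAXIMUM:
        # The service rejects requests outside the documented range.
        raise ValueError("durationSeconds must be between 900 and 43200")
    return requested
```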

**Note**  
In your request for temporary credentials, if the target of the request is an object and the privilege level is `Minimal`, you must set the `targetType` parameter to `Object`. If the target is a bucket or a prefix, you don't need to specify this parameter.

**Examples**  
You can request temporary credentials by using the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, or the AWS SDKs, as shown in the following examples.

For additional information, see [GetDataAccess](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetDataAccess.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example Request temporary credentials**  
Request:  

```
aws s3control get-data-access \
--account-id 111122223333 \
--target s3://amzn-s3-demo-bucket/prefixA* \
--permission READ \
--privilege Default \
--region us-east-2
```
Response:  

```
{
"Credentials": {
"AccessKeyId": "Example-key-id",
"SecretAccessKey": "Example-access-key",
"SessionToken": "Example-session-token",
"Expiration": "2023-06-14T18:56:45+00:00"},
"MatchedGrantTarget": "s3://amzn-s3-demo-bucket/prefixA**",
"Grantee": {
    "GranteeType": "IAM",
    "GranteeIdentifier": "arn:aws:iam::111122223333:role/role-name"
 }
}
```

## Using the REST API


For information about the Amazon S3 REST API support for requesting temporary credentials from S3 Access Grants, see [GetDataAccess](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetDataAccess.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


This section provides an example of how grantees request temporary credentials from S3 Access Grants by using the AWS SDKs.

------
#### [ Java ]

The following code example returns the temporary credentials that the grantee uses to access your S3 data. To use this code example, replace the `user input placeholders` with your own information.

**Example Get temporary credentials**  
Request:  

```
public void getDataAccess() {
GetDataAccessRequest getDataAccessRequest = GetDataAccessRequest.builder()
.accountId("111122223333")
.permission(Permission.READ)
.privilege(Privilege.MINIMAL)
.target("s3://amzn-s3-demo-bucket/prefixA*")
.build();
GetDataAccessResponse getDataAccessResponse = s3Control.getDataAccess(getDataAccessRequest);
LOGGER.info("GetDataAccessResponse: " + getDataAccessResponse);
}
```
Response:  

```
GetDataAccessResponse(
Credentials=Credentials(
AccessKeyId="Example-access-key-id",
SecretAccessKey="Example-secret-access-key",
SessionToken="Example-session-token",
Expiration=2023-06-07T06:55:24Z
))
```

------

# Accessing S3 data using credentials vended by S3 Access Grants


After a grantee [obtains temporary credentials](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-credentials.html) through their access grant, they can use these temporary credentials to call Amazon S3 API operations to access your data. 

Grantees can access S3 data by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, and the Amazon S3 REST API. Additionally, you can use the AWS [Python](https://github.com/aws/boto3-s3-access-grants-plugin) and [Java](https://github.com/aws/aws-s3-accessgrants-plugin-java-v2) plugins to call S3 Access Grants.

## Using the AWS CLI


After the grantee obtains their temporary credentials from S3 Access Grants, they can set up a profile with these credentials to retrieve the data. 

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

To use the following example commands, replace the `user input placeholders` with your own information.

**Example – Set up a profile**  

```
aws configure set aws_access_key_id "$accessKey" --profile access-grants-consumer-access-profile
aws configure set aws_secret_access_key "$secretKey" --profile access-grants-consumer-access-profile
aws configure set aws_session_token "$sessionToken" --profile access-grants-consumer-access-profile
```
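
If you script this setup, a small helper (my own, not part of any AWS SDK or the AWS CLI) can turn the `Credentials` object returned by `get-data-access` into the three `aws configure set` commands above:

```
def profile_commands(creds, profile="access-grants-consumer-access-profile"):
    """Build 'aws configure set' commands from a GetDataAccess Credentials
    dict (field names match the GetDataAccess response shape)."""
    mapping = {
        "AccessKeyId": "aws_access_key_id",
        "SecretAccessKey": "aws_secret_access_key",
        "SessionToken": "aws_session_token",
    }
    return [
        f'aws configure set {option} "{creds[field]}" --profile {profile}'
        for field, option in mapping.items()
    ]
```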

To use the following example command, replace the `user input placeholders` with your own information.

**Example – Get the S3 data**  
The grantee can use the [get-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object.html) AWS CLI command to access the data. The grantee can also use [put-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html), [ls](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html), and other S3 AWS CLI commands.   

```
aws s3api get-object \
--bucket amzn-s3-demo-bucket1 \
--key myprefix \
--region us-east-2 \
--profile access-grants-consumer-access-profile
```

## Using the AWS SDKs


This section provides examples of how grantees can access your S3 data by using the AWS SDKs.

------
#### [ Java ]

The following Java code example gets an object from an S3 bucket. For instructions on creating and testing a working sample, see [Getting Started](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/getting-started.html) in the *AWS SDK for Java Developer Guide*.

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.model.S3Object;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class GetObject2 {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String key = "*** Object key ***";

        S3Object fullObject = null, objectPortion = null, headerOverrideObject = null;
        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();

            // Get an object and print its contents.
            System.out.println("Downloading an object");
            fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
            System.out.println("Content-Type: " + fullObject.getObjectMetadata().getContentType());
            System.out.println("Content: ");
            displayTextInputStream(fullObject.getObjectContent());

            // Get a range of bytes from an object and print the bytes.
            GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
                    .withRange(0, 9);
            objectPortion = s3Client.getObject(rangeObjectRequest);
            System.out.println("Printing bytes retrieved.");
            displayTextInputStream(objectPortion.getObjectContent());

            // Get an entire object, overriding the specified response headers, and print
            // the object's content.
            ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
                    .withCacheControl("No-cache")
                    .withContentDisposition("attachment; filename=example.txt");
            GetObjectRequest getObjectRequestHeaderOverride = new GetObjectRequest(bucketName, key)
                    .withResponseHeaders(headerOverrides);
            headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
            displayTextInputStream(headerOverrideObject.getObjectContent());
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        } finally {
            // To ensure that the network connection doesn't remain open, close any open
            // input streams.
            if (fullObject != null) {
                fullObject.close();
            }
            if (objectPortion != null) {
                objectPortion.close();
            }
            if (headerOverrideObject != null) {
                headerOverrideObject.close();
            }
        }
    }

    private static void displayTextInputStream(InputStream input) throws IOException {
        // Read the text input stream one line at a time and display each line.
        BufferedReader reader = new BufferedReader(new InputStreamReader(input));
        String line = null;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        System.out.println();
    }
}
```

------

## Supported S3 actions in S3 Access Grants


A grantee can use the temporary credentials vended by S3 Access Grants to perform S3 actions on the S3 data that they have access to. The following is a list of the S3 actions that a grantee can perform. Which actions are allowed depends on the permission level granted in the access grant: `READ`, `WRITE`, or `READWRITE`. 

**Note**  
In addition to the Amazon S3 permissions listed below, for `READ` access, Amazon S3 can call the AWS Key Management Service (AWS KMS) [Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html) (`kms:Decrypt`) operation, and for `WRITE` access, it can call the AWS KMS [GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html) (`kms:GenerateDataKey`) operation. These permissions don't allow direct access to the AWS KMS key.



| S3 IAM action | API action & doc | S3 Access Grants Permission | S3 resource | 
| --- | --- | --- | --- | 
| s3:GetObject | [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) | READ | Object | 
| s3:GetObjectVersion | [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) | READ | Object | 
| s3:GetObjectAcl | [GetObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html) | READ | Object | 
| s3:GetObjectVersionAcl | [GetObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html) | READ | Object | 
| s3:ListMultipartUploads | [ListParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html) | READ | Object | 
| s3:PutObject | [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html), [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html), [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html) | WRITE | Object | 
| s3:PutObjectAcl | [PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) | WRITE | Object | 
| s3:PutObjectVersionAcl | [PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) | WRITE | Object | 
| s3:DeleteObject | [DeleteObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html) | WRITE | Object | 
| s3:DeleteObjectVersion | [DeleteObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html) | WRITE | Object | 
| s3:AbortMultipartUpload | [AbortMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html) | WRITE | Object | 
| s3:ListBucket | [HeadBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html), [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html), [ListObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html) | READ | Bucket | 
| s3:ListBucketVersions | [ListObjectVersions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html) | READ | Bucket | 
| s3:ListBucketMultipartUploads | [ListMultipartUploads](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html) | READ | Bucket | 
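
The table above can be summarized as a lookup, where `READWRITE` is the union of the `READ` and `WRITE` action sets. The variable and function names in this sketch are mine:

```
# Action sets transcribed from the table above.
READ_ACTIONS = {
    "s3:GetObject", "s3:GetObjectVersion", "s3:GetObjectAcl",
    "s3:GetObjectVersionAcl", "s3:ListMultipartUploads",
    "s3:ListBucket", "s3:ListBucketVersions", "s3:ListBucketMultipartUploads",
}
WRITE_ACTIONS = {
    "s3:PutObject", "s3:PutObjectAcl", "s3:PutObjectVersionAcl",
    "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:AbortMultipartUpload",
}

def actions_for(permission):
    """Return the S3 actions allowed by a grant of the given permission."""
    return {
        "READ": READ_ACTIONS,
        "WRITE": WRITE_ACTIONS,
        "READWRITE": READ_ACTIONS | WRITE_ACTIONS,
    }[permission]
```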

# List the caller's access grants


S3 data owners can use S3 Access Grants to create access grants for AWS Identity and Access Management (IAM) identities or for AWS IAM Identity Center corporate directory identities. IAM identities and IAM Identity Center directory identities can in turn use the `ListCallerAccessGrants` API to list all of the Amazon S3 buckets, prefixes, and objects that they can access, as defined by their access grants. Use this API to discover all of the S3 data that an IAM or directory identity can access through S3 Access Grants. 

You can use this feature to build applications that show end users the data that they can access. For example, AWS Storage Browser for S3, an open source UI component that customers use to access S3 buckets, uses this feature to present end users with the data that they have access to in Amazon S3, based on their access grants. As another example, if you're building an application for browsing, uploading, or downloading data in Amazon S3, you can use this feature to build a tree structure in your application that end users can then browse. 

**Note**  
For corporate directory identities, when listing the caller's access grants, S3 Access Grants returns the grants of the IAM identity that is used for the identity-aware session. For more information on identity-aware sessions, see [Granting permissions to use identity-aware console sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_sts-setcontext.html) in the *AWS Identity and Access Management User Guide*.

The grantee, whether an IAM identity or a corporate directory identity, can get a list of their access grants by using the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, or the AWS SDKs.

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

To use the following example command, replace the `user input placeholders` with your own information.

**Example List a caller's access grants**  
Request:  

```
aws s3control list-caller-access-grants \
--account-id 111122223333 \
--region us-east-2 \
--max-results 5
```
Response:  

```
{
	"NextToken": "6J9S...",
	"CallerAccessGrantsList": [
		{
			"Permission": "READWRITE",
			"GrantScope": "s3://amzn-s3-demo-bucket/prefix1/sub-prefix1/*",
			"ApplicationArn": "NA"
		},
		{
			"Permission": "READWRITE",
			"GrantScope": "s3://amzn-s3-demo-bucket/prefix1/sub-prefix2/*",
			"ApplicationArn": "ALL"
		},
		{
			"Permission": "READWRITE",
			"GrantScope": "s3://amzn-s3-demo-bucket/prefix1/sub-prefix3/*",
			"ApplicationArn": "arn:aws:sso::111122223333:application/ssoins-ssoins-1234567890abcdef/apl-abcd1234a1b2c3d"
		}
	]
}
```

**Example List a caller's access grants for a bucket**  
You can narrow the scope of the results by using the `--grant-scope` parameter.  
Request:  

```
aws s3control list-caller-access-grants \
--account-id 111122223333 \
--region us-east-2 \
--grant-scope "s3://amzn-s3-demo-bucket" \
--max-results 1000
```
Response:  

```
{
	"NextToken": "6J9S...",
	"CallerAccessGrantsList": [
		{
			"Permission": "READ",
			"GrantScope": "s3://amzn-s3-demo-bucket*",
			"ApplicationArn": "ALL"
		},
		{
			"Permission": "READ",
			"GrantScope": "s3://amzn-s3-demo-bucket/prefix1/*",
			"ApplicationArn": "arn:aws:sso::111122223333:application/ssoins-ssoins-1234567890abcdef/apl-abcd1234a1b2c3d"
		}
	]
}
```
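
Because responses are paginated with `NextToken`, as shown above, an application typically loops until the token is absent. The following is a minimal, SDK-agnostic sketch; the function and parameter names are mine, and `list_page` stands for any callable that wraps your S3 Control client:

```
def list_all_caller_grants(list_page):
    """Drain a ListCallerAccessGrants-style paginated API.

    'list_page' takes a next token (or None for the first page) and returns
    a dict shaped like the responses above: 'CallerAccessGrantsList' plus an
    optional 'NextToken'.
    """
    grants, token = [], None
    while True:
        page = list_page(token)
        grants.extend(page.get("CallerAccessGrantsList", []))
        token = page.get("NextToken")
        if not token:
            return grants
```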

## Using the REST API


For information about the Amazon S3 REST API support for getting a list of the API caller's access grants, see [ListCallerAccessGrants](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListCallerAccessGrants.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs


This section provides an example of how grantees can list their access grants by using the AWS SDKs.

------
#### [ Java ]

The following code example returns the API caller's access grants to the S3 data of a particular AWS account. To use this code example, replace the `user input placeholders` with your own information.

**Example List a caller's access grants**  
Request:  

```
public void listCallerAccessGrants() {
	ListCallerAccessGrantsRequest listRequest = ListCallerAccessGrantsRequest.builder()
				.maxResults(1000)
				.grantScope("s3://")
				.accountId("111122223333")
				.build();
	ListCallerAccessGrantsResponse listResponse = s3Control.listCallerAccessGrants(listRequest);
	LOGGER.info("ListCallerAccessGrantsResponse: " + listResponse);
}
```
Response:  

```
ListCallerAccessGrantsResponse(
CallerAccessGrantsList=[
	ListCallerAccessGrantsEntry(
		S3Prefix=s3://amzn-s3-demo-bucket/prefix1/,
		Permission=READ,
		ApplicationArn=ALL
	)
])
```

------

# S3 Access Grants cross-account access


With S3 Access Grants, you can grant Amazon S3 data access to the following: 
+ AWS Identity and Access Management (IAM) identities within your account
+ IAM identities in other AWS accounts
+ Directory users or groups in your AWS IAM Identity Center instance

First, configure cross-account access for the other account. This includes granting access to your S3 Access Grants instance by using a resource policy. Then, grant access to your S3 data (buckets, prefixes, or objects) by using grants. 

After you configure cross-account access, the other account can request temporary access credentials to your Amazon S3 data from S3 Access Grants. The following image shows the user flow for cross-account S3 access through S3 Access Grants:

![\[S3 Access Grants cross-account user flow\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-grants-cross-account.png)


1. Users or applications in a second account (B) request credentials from the S3 Access Grants instance in your account (A), where the Amazon S3 data is stored. For more information, see [Request access to Amazon S3 data through S3 Access Grants](access-grants-credentials.md).

1. The S3 Access Grants instance in your account (A) returns temporary credentials if there is a grant that gives the second account access to your Amazon S3 data. For more information on access grants, see [Working with grants in S3 Access Grants](access-grants-grant.md).

1. Users or applications in the second account (B) use the S3 Access Grants-vended credentials to access the S3 data in your account (A).

**Configuring S3 Access Grants cross-account access**  
To grant cross-account S3 access through S3 Access Grants, follow these steps:
+ **Step 1:** Configure an S3 Access Grants instance in your account, for example, account ID `111122223333`, where the S3 data is stored.
+ **Step 2:** Configure the resource policy for the S3 Access Grants instance in your account `111122223333` to give access to the second account, for example, account ID `444455556666`.
+ **Step 3:** Configure the IAM permissions for the IAM Principal in the second account `444455556666` to request credentials from the S3 Access Grants instance in your account `111122223333`.
+ **Step 4:** Create a grant in your account `111122223333` that gives the IAM Principal in the second account `444455556666` access to some of the S3 data in your account `111122223333`.

## Step 1: Configure an S3 Access Grants instance in your account


First, you must have an S3 Access Grants instance in your account `111122223333` to manage access to your Amazon S3 data. You must create an S3 Access Grants instance in each AWS Region where the S3 data that you want to share is stored. If you are sharing data in more than one AWS Region, then repeat each of these configuration steps for each AWS Region. If you already have an S3 Access Grants instance in the AWS Region where your S3 data is stored, proceed to the next step. If you haven’t configured an S3 Access Grants instance, see [Working with S3 Access Grants instances](access-grants-instance.md) to complete this step. 

## Step 2: Configure the resource policy for your S3 Access Grants instance to grant cross-account access


After you create an S3 Access Grants instance in your account `111122223333` for cross-account access, configure the resource-based policy for the S3 Access Grants instance in your account `111122223333` to grant cross-account access. The S3 Access Grants instance itself supports resource-based policies. With the correct resource-based policy in place, you can grant access for AWS Identity and Access Management (IAM) users or roles from other AWS accounts to your S3 Access Grants instance. Cross-account access only grants these permissions (actions):
+ `s3:GetAccessGrantsInstanceForPrefix` — the user, role, or app can retrieve the S3 Access Grants instance that contains a particular prefix. 
+ `s3:ListAccessGrants`
+ `s3:ListAccessGrantsLocations`
+ `s3:ListCallerAccessGrants`
+ `s3:GetDataAccess` — the user, role, or app can request temporary credentials based on the access you were granted through S3 Access Grants. Use these credentials to access the S3 data to which you have been granted access. 

You can choose which of these permissions to include in the resource policy. This resource policy on the S3 Access Grants instance is a normal resource-based policy and supports everything that the [IAM policy language](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies.html) supports. In the same policy, you can grant access to specific IAM identities in your account `111122223333`, for example, by using the `aws:PrincipalArn` condition, but you don't have to do that with S3 Access Grants. Instead, within your S3 Access Grants instance, you can create grants for individual IAM identities from your account, as well as for the other account. By managing each access grant through S3 Access Grants, you can scale your permissions.
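
If you generate the resource policy programmatically rather than writing the JSON by hand, a sketch like the following (the function name and parameters are my own) produces a policy document in the shape this section uses:

```
import json

def access_grants_instance_policy(instance_arn, principal_account, actions):
    """Assemble a cross-account resource policy for an S3 Access Grants
    instance, in the shape shown in this section."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": principal_account},
            "Action": sorted(actions),
            "Resource": instance_arn,
        }],
    }
    return json.dumps(policy)
```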

If you already use [AWS Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) (AWS RAM), you can use it to share your [Amazon S3 Access Grants instance](https://docs.aws.amazon.com/ram/latest/userguide/shareable.html#shareable-s3) resources with other accounts or within your organization. See [Working with shared AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with.html) for more information. If you don't use AWS RAM, you can also add the resource policy by using the S3 Access Grants API operations or the AWS Command Line Interface (AWS CLI). 

### Using the S3 console


We recommend that you use the AWS Resource Access Manager (AWS RAM) console to share your `s3:AccessGrants` resources with other accounts or within your organization. To share an S3 Access Grants instance across accounts, do the following:

**To configure the S3 Access Grants instance resource policy:**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Select the AWS Region from the AWS Region selector.

1. From the left navigation pane, select **Access Grants**.

1. On the Access Grants instance page, in the **Instance in this account** section, select **Share instance**. This redirects you to the AWS RAM console.

1. Select **Create resource share**.

1. Follow the AWS RAM steps to create the resource share. For more information, see [Creating a resource share in AWS RAM](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing-create.html).

### Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

You can add the resource policy by using the `put-access-grants-instance-resource-policy` CLI command.

If you want to grant the second account `444455556666` cross-account access to the S3 Access Grants instance in your account `111122223333`, the resource policy for the S3 Access Grants instance in your account `111122223333` must give the second account `444455556666` permission to perform the following actions: 
+ `s3:ListAccessGrants`
+ `s3:ListAccessGrantsLocations`
+ `s3:GetDataAccess`
+ `s3:GetAccessGrantsInstanceForPrefix`

In the S3 Access Grants instance resource policy, specify the ARN of your S3 Access Grants instance as the `Resource`, and the second account `444455556666` as the `Principal`. To use the following example, replace the *user input placeholders* with your own information.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "444455556666"
            },
            "Action": [
                "s3:ListAccessGrants",
                "s3:ListAccessGrantsLocations",
                "s3:GetDataAccess",
                "s3:GetAccessGrantsInstanceForPrefix"
            ],
            "Resource": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
        }
    ]
}
```

To add or update the S3 Access Grants instance resource policy, use the following command. When you use the following example command, replace the `user input placeholders` with your own information.

**Example Add or update the S3 Access Grants instance resource policy**  

```
aws s3control put-access-grants-instance-resource-policy \
--account-id 111122223333 \
--policy file://resourcePolicy.json \
--region us-east-2

{
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"444455556666\"},\"Action\":[\"s3:ListAccessGrants\",\"s3:ListAccessGrantsLocations\",\"s3:GetDataAccess\",\"s3:GetAccessGrantsInstanceForPrefix\",\"s3:ListCallerAccessGrants\"],\"Resource\":\"arn:aws:s3:us-east-2:111122223333:access-grants/default\"}]}",
    "CreatedAt": "2023-06-16T00:07:47.473000+00:00"
}
```
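
Note that the `Policy` field in these responses is a JSON policy document serialized into a string; a one-line helper (mine, for illustration) decodes it for inspection:

```
import json

def decode_policy_field(response):
    """Parse the JSON-encoded 'Policy' string from a
    put/get-access-grants-instance-resource-policy response."""
    return json.loads(response["Policy"])
```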

**Example Get an S3 Access Grants resource policy**  
You can also use the CLI to get or delete a resource policy for an S3 Access Grants instance.  
To get an S3 Access Grants resource policy, use the following example command. To use this example command, replace the `user input placeholders` with your own information.  

```
aws s3control get-access-grants-instance-resource-policy \
--account-id 111122223333 \
--region us-east-2

{
"Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::111122223333:root\"},\"Action\":[\"s3:ListAccessGrants\",\"s3:ListAccessGrantsLocations\",\"s3:GetDataAccess\",\"s3:GetAccessGrantsInstanceForPrefix\",\"s3:ListCallerAccessGrants\"],\"Resource\":\"arn:aws:s3:us-east-2:111122223333:access-grants/default\"}]}",
"CreatedAt": "2023-06-16T00:07:47.473000+00:00"
}
```

**Example Delete an S3 Access Grants resource policy**  
To delete an S3 Access Grants resource policy, use the following example command. To use this example command, replace the `user input placeholders` with your own information.  

```
aws s3control delete-access-grants-instance-resource-policy \
--account-id 111122223333 \
--region us-east-2

// No response body
```

### Using the REST API


You can add the resource policy by using the [PutAccessGrantsInstanceResourcePolicy API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html).

If you want to grant the second account `444455556666` cross-account access to the S3 Access Grants instance in your account `111122223333`, the resource policy for the S3 Access Grants instance in your account `111122223333` must give the second account `444455556666` permission to perform the following actions: 
+ `s3:ListAccessGrants`
+ `s3:ListAccessGrantsLocations`
+ `s3:GetDataAccess`
+ `s3:GetAccessGrantsInstanceForPrefix`

In the S3 Access Grants instance resource policy, specify the ARN of your S3 Access Grants instance as the `Resource`, and the second account `444455556666` as the `Principal`. To use the following example, replace the *user input placeholders* with your own information.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "444455556666"
            },
            "Action": [
                "s3:ListAccessGrants",
                "s3:ListAccessGrantsLocations",
                "s3:GetDataAccess",
                "s3:GetAccessGrantsInstanceForPrefix"
            ],
            "Resource": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
        }
    ]
}
```

You can then use the [PutAccessGrantsInstanceResourcePolicy API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html) to configure the policy.

For information on the REST API support to update, get, or delete a resource policy for an S3 Access Grants instance, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [PutAccessGrantsInstanceResourcePolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html) 
+  [GetAccessGrantsInstanceResourcePolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceResourcePolicy.html) 
+  [DeleteAccessGrantsInstanceResourcePolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstanceResourcePolicy.html) 

### Using the AWS SDKs


This section provides AWS SDK examples of how to configure your S3 Access Grants resource policy to grant a second AWS account access to some of your S3 data. 

------
#### [ Java ]

Add, update, get, or delete a resource policy to manage cross-account access to your S3 Access Grants instance. 

**Example Add or update an S3 Access Grants instance resource policy**  
If you want to grant the second account `444455556666` cross-account access to the S3 Access Grants instance in your account `111122223333`, the resource policy for the S3 Access Grants instance in your account `111122223333` must give the second account `444455556666` permission to perform the following actions:   
+ `s3:ListAccessGrants`
+ `s3:ListAccessGrantsLocations`
+ `s3:GetDataAccess`
+ `s3:GetAccessGrantsInstanceForPrefix`
In the S3 Access Grants instance resource policy, specify the ARN of your S3 Access Grants instance as the `Resource`, and the second account `444455556666` as the `Principal`. To use the following example, replace the *user input placeholders* with your own information.  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "444455556666"
            },
            "Action": [
                "s3:ListAccessGrants",
                "s3:ListAccessGrantsLocations",
                "s3:GetDataAccess",
                "s3:GetAccessGrantsInstanceForPrefix"
            ],
            "Resource": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
        }
    ]
}
```
To add or update an S3 Access Grants instance resource policy, use the following code example:  

```
public void putAccessGrantsInstanceResourcePolicy() {
    // The account ID is passed as a string, not as a numeric literal.
    PutAccessGrantsInstanceResourcePolicyRequest putRequest = PutAccessGrantsInstanceResourcePolicyRequest.builder()
            .accountId("111122223333")
            .policy(RESOURCE_POLICY)
            .build();
    PutAccessGrantsInstanceResourcePolicyResponse putResponse = s3Control.putAccessGrantsInstanceResourcePolicy(putRequest);
    LOGGER.info("PutAccessGrantsInstanceResourcePolicyResponse: " + putResponse);
}
```
Response:  

```
PutAccessGrantsInstanceResourcePolicyResponse(
    Policy={
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": "444455556666"
            },
            "Action": [
                "s3:ListAccessGrants",
                "s3:ListAccessGrantsLocations",
                "s3:GetDataAccess",
                "s3:GetAccessGrantsInstanceForPrefix",
                "s3:ListCallerAccessGrants"
            ],
            "Resource": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
        }]
    }
)
```

**Example Get an S3 Access Grants resource policy**  
To get an S3 Access Grants resource policy, use the following code example. To use this example, replace the `user input placeholders` with your own information.  

```
public void getAccessGrantsInstanceResourcePolicy() {
    // The account ID is passed as a string, not as a numeric literal.
    GetAccessGrantsInstanceResourcePolicyRequest getRequest = GetAccessGrantsInstanceResourcePolicyRequest.builder()
            .accountId("111122223333")
            .build();
    GetAccessGrantsInstanceResourcePolicyResponse getResponse = s3Control.getAccessGrantsInstanceResourcePolicy(getRequest);
    LOGGER.info("GetAccessGrantsInstanceResourcePolicyResponse: " + getResponse);
}
```
Response:  

```
GetAccessGrantsInstanceResourcePolicyResponse(
    Policy={"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::444455556666:root"},"Action":["s3:ListAccessGrants","s3:ListAccessGrantsLocations","s3:GetDataAccess","s3:GetAccessGrantsInstanceForPrefix","s3:ListCallerAccessGrants"],"Resource":"arn:aws:s3:us-east-2:111122223333:access-grants/default"}]},
    CreatedAt=2023-06-15T22:54:44.319Z
)
```

**Example Delete an S3 Access Grants resource policy**  
To delete an S3 Access Grants resource policy, use the following code example. To use this example, replace the `user input placeholders` with your own information.  

```
public void deleteAccessGrantsInstanceResourcePolicy() {
    // The account ID is passed as a string, not as a numeric literal.
    DeleteAccessGrantsInstanceResourcePolicyRequest deleteRequest = DeleteAccessGrantsInstanceResourcePolicyRequest.builder()
            .accountId("111122223333")
            .build();
    // Call the delete operation (not the put operation) with the delete request.
    DeleteAccessGrantsInstanceResourcePolicyResponse deleteResponse = s3Control.deleteAccessGrantsInstanceResourcePolicy(deleteRequest);
    LOGGER.info("DeleteAccessGrantsInstanceResourcePolicyResponse: " + deleteResponse);
}
```
Response:  

```
DeleteAccessGrantsInstanceResourcePolicyResponse()
```

------

## Step 3: Grant IAM identities in a second account permission to call the S3 Access Grants instance in your account


After the owner of the Amazon S3 data has configured the cross-account resource policy for the S3 Access Grants instance in account `111122223333`, the owner of the second account `444455556666` must create an identity-based policy for the IAM users or roles in that account that need access to the S3 Access Grants instance. In the identity-based policy, include one or more of the following actions, depending on what's granted in the S3 Access Grants instance resource policy and the permissions that you want to grant:
+ `s3:ListAccessGrants`
+ `s3:ListAccessGrantsLocations`
+ `s3:GetDataAccess`
+ `s3:GetAccessGrantsInstanceForPrefix`
+ `s3:ListCallerAccessGrants`

Following the [AWS cross-account access pattern](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html), the IAM users or roles in the second account `444455556666` must explicitly have one or more of these permissions. For example, grant the `s3:GetDataAccess` permission so that the IAM user or role can call the S3 Access Grants instance in account `111122223333` to request credentials. 

To use this example policy, replace the `user input placeholders` with your own information.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetDataAccess"
            ],
            "Resource": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
        }
    ]
}
```

For information about editing IAM identity-based policies, see [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

## Step 4: Create a grant in the S3 Access Grants instance of your account that gives the IAM identity in the second account access to some of your S3 data


For the final configuration step, you create a grant in the S3 Access Grants instance in your account `111122223333` that gives the IAM identity in the second account `444455556666` access to some of the S3 data in your account. You can do this by using the Amazon S3 console, the AWS CLI, the REST API, or the AWS SDKs. For more information, see [Create grants](access-grants-grant-create.md). 

In the grant, specify the AWS ARN of the IAM identity from the second account, and specify which location in your S3 data (a bucket, prefix, or object) that you are granting access to. This location must already be registered with your S3 Access Grants instance. For more information, see [Register a location](access-grants-location-register.md). You can optionally specify a subprefix. For example, if the location you are granting access to is a bucket, and you want to limit the access further to a specific object in that bucket, then pass the object key name in the `S3SubPrefix` field. Or if you want to limit access to the objects in the bucket with key names that start with a specific prefix, such as `2024-03-research-results/`, then pass `S3SubPrefix=2024-03-research-results/`. 

The following example AWS CLI command creates an access grant for an identity in the second account. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control create-access-grant \
--account-id 111122223333 \
--access-grants-location-id default \
--access-grants-location-configuration S3SubPrefix=prefixA* \
--permission READ \
--grantee GranteeType=IAM,GranteeIdentifier=arn:aws:iam::444455556666:role/data-consumer-1
```

After you configure cross-account access, the user or role in the second account can do the following: 
+ Call `ListAccessGrantsInstances` to list the S3 Access Grants instances shared with it through AWS RAM. For more information, see [Get the details of an S3 Access Grants instance](access-grants-instance-view.md).
+ Request temporary credentials from S3 Access Grants. For more information about how to make these requests, see [Request access to Amazon S3 data through S3 Access Grants](access-grants-credentials.md).

# Managing tags for S3 Access Grants


Tags in Amazon S3 Access Grants have similar characteristics to [object tags](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html) in Amazon S3. Each tag is a key-value pair. The resources in S3 Access Grants that you can tag are S3 Access Grants [instances](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance.html), [locations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html), and [grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-grant.html). 

**Note**  
Tagging in S3 Access Grants uses different API operations than object tagging. S3 Access Grants uses the [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html), [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html), and [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html) API operations, where the resource can be an S3 Access Grants instance, a registered location, or an access grant.

Similar to [object tags](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html), the following limitations apply:
+ You can add tags to new S3 Access Grants resources when you create them, or you can add tags to existing resources.
+ You can associate up to 10 tags with a resource. If multiple tags are associated with the same resource, they must have unique tag keys.
+ A tag key can be up to 128 Unicode characters in length, and tag values can be up to 256 Unicode characters in length. Tags are internally represented in UTF-16. In UTF-16, characters consume either 1 or 2 character positions.
+ The keys and values are case sensitive.

For more information about tag restrictions, see [User-defined tag restrictions](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation-tag-restrictions.html) in the *AWS Billing User Guide*.

You can tag resources in S3 Access Grants by using the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, or the AWS SDKs.

## Using the AWS CLI


To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*. 

You can tag an S3 Access Grants resource when you create it or after you have created it. The following examples show how you tag or untag an S3 Access Grants instance. You can perform similar operations for registered locations and access grants. 

To use the following example commands, replace the `user input placeholders` with your own information.

**Example – Create an S3 Access Grants instance with tags**  

```
aws s3control create-access-grants-instance \
 --account-id 111122223333 \
 --profile access-grants-profile \
 --region us-east-2 \
 --tags Key=tagKey1,Value=tagValue1
```
Response:  

```
 {
    "CreatedAt": "2023-10-25T01:09:46.719000+00:00",
    "AccessGrantsInstanceId": "default",
    "AccessGrantsInstanceArn": "arn:aws:s3:us-east-2:111122223333:access-grants/default"
}
```

**Example – Tag an already created S3 Access Grants instance**  

```
aws s3control tag-resource \
--account-id 111122223333 \
--resource-arn "arn:aws:s3:us-east-2:111122223333:access-grants/default" \
--profile access-grants-profile \
--region us-east-2 \
--tags Key=tagKey2,Value=tagValue2
```

**Example – List tags for the S3 Access Grants instance**  

```
aws s3control list-tags-for-resource \
--account-id 111122223333 \
--resource-arn "arn:aws:s3:us-east-2:111122223333:access-grants/default" \
--profile access-grants-profile \
--region us-east-2
```
Response:  

```
{
    "Tags": [
        {
            "Key": "tagKey1",
            "Value": "tagValue1"
        },
        {
            "Key": "tagKey2",
            "Value": "tagValue2"
        }
    ]
}
```

**Example – Untag the S3 Access Grants instance**  

```
aws s3control untag-resource \
 --account-id 111122223333 \
 --resource-arn "arn:aws:s3:us-east-2:111122223333:access-grants/default" \
 --profile access-grants-profile \
 --region us-east-2 \
 --tag-keys "tagKey2"
```

## Using the REST API


You can use the Amazon S3 API to tag, untag, or list tags for an S3 Access Grants instance, registered location, or access grant. For information about the REST API support for managing S3 Access Grants tags, see the following sections in the *Amazon Simple Storage Service API Reference*:
+  [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html) 
+  [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html) 
+  [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html)

# S3 Access Grants limitations


[S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants.html) has the following limitations: 

**Note**  
If your use case exceeds these limitations, [contact AWS support](https://aws.amazon.com/contact-us/?cmpid=docs_headercta_contactus) to request higher limits.

 **S3 Access Grants instance**   
You can create **1 S3 Access Grants instance** per AWS Region per account. See [Create an S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-create.html).

 **S3 Access Grants location**   
You can register **1,000 S3 Access Grants locations** per S3 Access Grants instance. See [Register an S3 Access Grants location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location.html). 

 **Grant**   
You can create **100,000 grants** per S3 Access Grants instance. See [Create a grant](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-grant.html).

## S3 Access Grants AWS Regions


 S3 Access Grants is currently available in the following AWS Regions: 


| AWS Region code | AWS Region name | 
| --- | --- | 
| us-east-1 | US East (N. Virginia) | 
| us-east-2 | US East (Ohio) | 
| us-west-1 | US West (N. California) | 
| us-west-2 | US West (Oregon) | 
| af-south-1 | Africa (Cape Town) | 
| ap-east-1 | Asia Pacific (Hong Kong) | 
| ap-east-2 | Asia Pacific (Taipei) | 
| ap-northeast-1 | Asia Pacific (Tokyo) | 
| ap-northeast-2 | Asia Pacific (Seoul) | 
| ap-northeast-3 | Asia Pacific (Osaka) | 
| ap-south-1 | Asia Pacific (Mumbai) | 
| ap-south-2 | Asia Pacific (Hyderabad) | 
| ap-southeast-1 | Asia Pacific (Singapore) | 
| ap-southeast-2 | Asia Pacific (Sydney) | 
| ap-southeast-3 | Asia Pacific (Jakarta) | 
| ap-southeast-4 | Asia Pacific (Melbourne) | 
| ap-southeast-6 | Asia Pacific (New Zealand) | 
| ap-southeast-7 | Asia Pacific (Thailand) | 
| ca-central-1 | Canada (Central) | 
| ca-west-1 | Canada West (Calgary) | 
| eu-central-1 | Europe (Frankfurt) | 
| eu-central-2 | Europe (Zurich) | 
| eu-north-1 | Europe (Stockholm) | 
| eu-south-1 | Europe (Milan) | 
| eu-south-2 | Europe (Spain) | 
| eu-west-1 | Europe (Ireland) | 
| eu-west-2 | Europe (London) | 
| eu-west-3 | Europe (Paris) | 
| il-central-1 | Israel (Tel Aviv) | 
| me-central-1 | Middle East (UAE) | 
| me-south-1 | Middle East (Bahrain) | 
| mx-central-1 | Mexico (Central) | 
| sa-east-1 | South America (São Paulo) | 
| us-gov-east-1 | AWS GovCloud (US-East) | 
| us-gov-west-1 | AWS GovCloud (US-West) | 

# S3 Access Grants integrations


S3 Access Grants can be used with the following AWS services and features. This page will be updated as new integrations become available. 

**Tip**  
This [AWS workshop for S3 Access Grants](https://catalog.us-east-1.prod.workshops.aws/workshops/77b0af63-6ad2-4c94-bfc0-270eb9358c7a/en-US/0-getting-started) walks you through using S3 Access Grants with AWS Identity and Access Management (IAM) users, IAM Identity Center users, Amazon EMR, and AWS Transfer Family.

 **Amazon Athena**   
[Using IAM Identity Center enabled Athena workgroups](https://docs.aws.amazon.com/athena/latest/ug/workgroups-identity-center.html)

 **Amazon EMR**   
[Launch an Amazon EMR cluster with S3 Access Grants](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-access-grants.html)

 **Amazon EMR on EKS**   
[Launch an Amazon EMR on EKS cluster with S3 Access Grants](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/access-grants.html)

 **Amazon EMR Serverless application**   
[Launch an Amazon EMR Serverless application with S3 Access Grants](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/access-grants.html)

 **Amazon Redshift**   
[Amazon Redshift integration with Amazon S3 Access Grants](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-sso-s3idc.html)

 **Amazon SageMaker AI Studio**   
[Adding Amazon S3 data to Amazon SageMaker AI Unified Studio](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/adding-existing-s3-data.html)  
Using S3 Access Grants in Amazon SageMaker AI Unified Studio, you can share your Amazon S3 data across multiple projects. To grant access to data by using S3 Access Grants, an S3 Access Grants instance is required. Amazon SageMaker AI Unified Studio uses an existing S3 Access Grants instance if one is available, or it can create one. First, you add your Amazon S3 data, and then you publish the data to the catalog or share it directly with consumers.  
[Using Amazon S3 Access Grants with Amazon SageMaker AI Studio and the SDK for Python (Boto3) plugin](https://aws.amazon.com/about-aws/whats-new/2024/07/amazon-s3-access-grants-integrate-sagemaker-studio/)  
Using the SDK for Python (Boto3) plugin makes it easier to use S3 Access Grants in Amazon SageMaker AI Studio notebooks. Beforehand, set up access grants for IAM principals and AWS IAM Identity Center directory users. Although Amazon SageMaker AI Studio doesn't natively support identity provider directory users, you can write custom Python code that uses the plugin to let these identities access S3 data through S3 Access Grants. Data access takes place through the plugin, not through Amazon SageMaker AI.

 **AWS Glue**   
[Amazon S3 Access Grants with AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/security-s3-access-grants.html)

 **AWS IAM Identity Center**   
[Trusted identity propagation across applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation.html)

 **AWS Transfer Family**   
[Configure Amazon S3 Access Grants](https://docs.aws.amazon.com/transfer/latest/userguide/webapp-access-grant.html) for AWS Transfer Family

 **Storage Browser for S3**   
[Managing data access at scale](https://docs.aws.amazon.com/AmazonS3/latest/userguide/setup-storagebrowser.html#setup-storagebrowser-method3) using Storage Browser for S3

 **Open source Python frameworks**   
[Amazon S3 Access Grants now integrates with open source Python frameworks](https://aws.amazon.com/about-aws/whats-new/2024/07/amazon-s3-access-grants-integrate-open-source-python/)

# Managing access with ACLs


Access control lists (ACLs) are one of the resource-based options that you can use to manage access to your buckets and objects. You can use ACLs to grant basic read/write permissions to other AWS accounts. However, there are limits to managing permissions by using ACLs.

For example, you can grant permissions only to other AWS accounts; you cannot grant permissions to users in your account. You cannot grant conditional permissions, and you cannot explicitly deny permissions. ACLs are suitable for specific scenarios. For example, if a bucket owner allows other AWS accounts to upload objects, only the AWS account that owns an object can manage permissions to that object, and only by using the object ACL.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.

 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

For more information about ACLs, see the following topics.

**Topics**
+ [Access control list (ACL) overview](acl-overview.md)
+ [Configuring ACLs](managing-acls.md)
+ [Policy examples for ACLs](example-bucket-policies-condition-keys.md)

# Access control list (ACL) overview
ACL overview

Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions. 

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.

 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource. This is shown in the following sample bucket ACL (the default object ACL has the same structure):

**Example**  

```
<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>*** Owner-Canonical-User-ID ***</ID>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="CanonicalUser">
        <ID>*** Owner-Canonical-User-ID ***</ID>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
```

The sample ACL includes an `Owner` element that identifies the owner by the AWS account's canonical user ID. For instructions on finding your canonical user ID, see [Finding an AWS account canonical user ID](#finding-canonical-id). The `Grant` element identifies the grantee (either an AWS account or a predefined group) and the permission granted. This default ACL has one `Grant` element for the owner. You grant permissions by adding `Grant` elements, with each grant identifying the grantee and the permission. 

**Note**  
An ACL can have up to 100 grants.

**Topics**
+ [Who is a grantee?](#specifying-grantee)
+ [What permissions can I grant?](#permissions)
+ [`aclRequired` values for common Amazon S3 requests](#aclrequired-s3)
+ [Sample ACL](#sample-acl)
+ [Canned ACL](#canned-acl)

## Who is a grantee?


When you grant access rights, you specify each grantee as a `type="value"` pair, where `type` is one of the following:
+ `id` – If the value specified is the canonical user ID of an AWS account
+ `uri` – If you are granting permissions to a predefined group
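
For illustration, the following sketch shows hypothetical `Grant` elements for both grantee types: the first grants `READ` to an AWS account by its canonical user ID, and the second grants `READ` to a predefined group by its URI. The canonical user ID shown is a placeholder.

```
<Grant>
  <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:type="CanonicalUser">
    <ID>*** Grantee-Canonical-User-ID ***</ID>
  </Grantee>
  <Permission>READ</Permission>
</Grant>
<Grant>
  <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:type="Group">
    <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
  </Grantee>
  <Permission>READ</Permission>
</Grant>
```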

**Warning**  
When you grant other AWS accounts access to your resources, be aware that the AWS accounts can delegate their permissions to users under their accounts. This is known as *cross-account access*. For information about using cross-account access, see [ Creating a Role to Delegate Permissions to an IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*. 

### Finding an AWS account canonical user ID


The canonical user ID is associated with your AWS account. This ID is a long string of characters, such as:

`79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be`

For information about how to find the canonical user ID for your account, see [Find the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId) in the *AWS Account Management Reference Guide*.

You can also look up the canonical user ID of an AWS account by reading the ACL of a bucket or an object to which the AWS account has access permissions. When an individual AWS account is granted permissions by a grant request, a grant entry is added to the ACL with the account's canonical user ID. 

**Note**  
If you make your bucket public (not recommended), any unauthenticated user can upload objects to the bucket. These anonymous users don't have an AWS account. When an anonymous user uploads an object to your bucket, Amazon S3 adds a special canonical user ID (`65a011a29cdf8ec533ec3d1ccaae921c`) as the object owner in the ACL. For more information, see [Amazon S3 bucket and object ownership](access-policy-language-overview.md#about-resource-owner).

### Amazon S3 predefined groups


Amazon S3 has a set of predefined groups. When granting account access to a group, you specify one of the Amazon S3 URIs instead of a canonical user ID. Amazon S3 provides the following predefined groups:
+ **Authenticated Users group** – Represented by `http://acs.amazonaws.com/groups/global/AuthenticatedUsers`.

  This group represents all AWS accounts. **Access permission to this group allows any AWS account to access the resource.** However, all requests must be signed (authenticated).
**Warning**  
When you grant access to the **Authenticated Users group**, any AWS authenticated user in the world can access your resource.
+ **All Users group** – Represented by `http://acs.amazonaws.com/groups/global/AllUsers`.

  **Access permission to this group allows anyone in the world access to the resource.** The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request.
**Warning**  
We highly recommend that you never grant the **All Users group** `WRITE`, `WRITE_ACP`, or `FULL_CONTROL` permissions. For example, although `WRITE` permissions don't allow non-owners to overwrite or delete existing objects, `WRITE` permissions still allow anyone to store objects in your bucket, for which you are billed. For more details about these permissions, see the following section [What permissions can I grant?](#permissions).
+ **Log Delivery group** – Represented by `http://acs.amazonaws.com/groups/s3/LogDelivery`.

  `WRITE` permission on a bucket enables this group to write server access logs (see [Logging requests with server access logging](ServerLogs.md)) to the bucket.

**Note**  
When using ACLs, a grantee can be an AWS account or one of the predefined Amazon S3 groups. However, the grantee cannot be an IAM user. For more information about AWS users and permissions within IAM, see [Using AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/).

## What permissions can I grant?


The following table lists the set of permissions that Amazon S3 supports in an ACL. The set of ACL permissions is the same for an object ACL and a bucket ACL. However, depending on the context (bucket ACL or object ACL), these ACL permissions grant permissions for specific buckets or object operations. The table lists the permissions and describes what they mean in the context of objects and buckets. 

For more information about ACL permissions in the Amazon S3 console, see [Configuring ACLs](managing-acls.md).


| Permission | When granted on a bucket | When granted on an object | 
| --- | --- | --- | 
| READ | Allows grantee to list the objects in the bucket | Allows grantee to read the object data and its metadata | 
| WRITE | Allows grantee to create new objects in the bucket. For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects | Not applicable | 
| READ\_ACP | Allows grantee to read the bucket ACL | Allows grantee to read the object ACL | 
| WRITE\_ACP | Allows grantee to write the ACL for the applicable bucket | Allows grantee to write the ACL for the applicable object | 
| FULL\_CONTROL | Allows grantee the READ, WRITE, READ\_ACP, and WRITE\_ACP permissions on the bucket | Allows grantee the READ, READ\_ACP, and WRITE\_ACP permissions on the object | 

**Warning**  
Use caution when granting access permissions to your S3 buckets and objects. For example, granting `WRITE` access to a bucket allows the grantee to create objects in the bucket. We highly recommend that you read through the entire [Access control list (ACL) overview](#acl-overview) section before granting permissions.

### Mapping of ACL permissions and access policy permissions


As shown in the preceding table, an ACL allows only a finite set of permissions, compared to the number of permissions that you can set in an access policy (see [Policy actions for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions)). Each of these permissions allows one or more Amazon S3 operations.

The following table shows how each ACL permission maps to the corresponding access policy permissions. As you can see, access policy allows more permissions than an ACL does. You use ACLs primarily to grant basic read/write permissions, similar to file system permissions. For more information about when to use an ACL, see [Identity and Access Management for Amazon S3](security-iam.md).

For more information about ACL permissions in the Amazon S3 console, see [Configuring ACLs](managing-acls.md).


| ACL permission | Corresponding access policy permissions when the ACL permission is granted on a bucket  | Corresponding access policy permissions when the ACL permission is granted on an object | 
| --- | --- | --- | 
| READ | `s3:ListBucket`, `s3:ListBucketVersions`, and `s3:ListBucketMultipartUploads` | `s3:GetObject` and `s3:GetObjectVersion` | 
| WRITE |  `s3:PutObject` The bucket owner can create, overwrite, and delete any object in the bucket, and the object owner has `FULL_CONTROL` over their object. In addition, when the grantee is the bucket owner, granting `WRITE` permission in a bucket ACL allows the `s3:DeleteObjectVersion` action to be performed on any version in that bucket.   | Not applicable | 
| READ\_ACP | `s3:GetBucketAcl` | `s3:GetObjectAcl` and `s3:GetObjectVersionAcl` | 
| WRITE\_ACP | `s3:PutBucketAcl` | `s3:PutObjectAcl` and `s3:PutObjectVersionAcl` | 
| FULL\_CONTROL | Equivalent to granting the READ, WRITE, READ\_ACP, and WRITE\_ACP ACL permissions. Accordingly, this ACL permission maps to a combination of the corresponding access policy permissions. | Equivalent to granting the READ, READ\_ACP, and WRITE\_ACP ACL permissions. Accordingly, this ACL permission maps to a combination of the corresponding access policy permissions. | 
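For scripting or auditing, the mapping above can be captured as a simple lookup. The following sketch is illustrative only (the table and function names are not part of any AWS SDK) and merely transcribes the preceding table:

```python
# ACL permission -> equivalent access policy actions, transcribed from the
# mapping table above. Illustrative only; not part of any AWS SDK.
ACL_TO_POLICY_ACTIONS = {
    ("READ", "bucket"): ["s3:ListBucket", "s3:ListBucketVersions",
                         "s3:ListBucketMultipartUploads"],
    ("READ", "object"): ["s3:GetObject", "s3:GetObjectVersion"],
    ("WRITE", "bucket"): ["s3:PutObject"],
    ("READ_ACP", "bucket"): ["s3:GetBucketAcl"],
    ("READ_ACP", "object"): ["s3:GetObjectAcl", "s3:GetObjectVersionAcl"],
    ("WRITE_ACP", "bucket"): ["s3:PutBucketAcl"],
    ("WRITE_ACP", "object"): ["s3:PutObjectAcl", "s3:PutObjectVersionAcl"],
}

def policy_actions(acl_permission, resource_type):
    """Return the access policy actions that an ACL permission maps to."""
    if acl_permission == "FULL_CONTROL":
        # FULL_CONTROL combines the other applicable ACL permissions.
        if resource_type == "bucket":
            parts = ["READ", "WRITE", "READ_ACP", "WRITE_ACP"]
        else:
            parts = ["READ", "READ_ACP", "WRITE_ACP"]
        return [a for p in parts for a in policy_actions(p, resource_type)]
    return ACL_TO_POLICY_ACTIONS.get((acl_permission, resource_type), [])
```

For example, `policy_actions("FULL_CONTROL", "object")` expands to the six object-level actions listed in the table, while `WRITE` on an object returns an empty list because that permission is not applicable to objects.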

### Condition keys


When you grant access policy permissions, you can use condition keys in a bucket policy to constrain the ACL that a request sets on an object. The following condition keys correspond to ACLs. You can use these condition keys to require the use of a specific ACL in a request:
+ `s3:x-amz-grant-read` ‐ Require read access.
+ `s3:x-amz-grant-write` ‐ Require write access.
+ `s3:x-amz-grant-read-acp` ‐ Require read access to the bucket ACL.
+ `s3:x-amz-grant-write-acp` ‐ Require write access to the bucket ACL.
+ `s3:x-amz-grant-full-control` ‐ Require full control.
+ `s3:x-amz-acl` ‐ Require a [Canned ACL](#canned-acl).

For example policies that involve ACL-specific headers, see [Granting s3:PutObject permission with a condition requiring the bucket owner to get full control](example-bucket-policies-condition-keys.md#grant-putobject-conditionally-1). For a complete list of Amazon S3 specific condition keys, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.
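As an illustration of the pattern, a bucket policy along the following lines uses the `s3:x-amz-acl` condition key to allow `s3:PutObject` only when the request includes the `bucket-owner-full-control` canned ACL. The account ID, user name, and bucket name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireBucketOwnerFullControl",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:user/ExampleUser"},
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
      "Condition": {
        "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
      }
    }
  ]
}
```

With this condition, an upload that omits the header or specifies any other ACL is not matched by the `Allow` statement.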

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## `aclRequired` values for common Amazon S3 requests


To identify Amazon S3 requests that required ACLs for authorization, you can use the `aclRequired` value in Amazon S3 server access logs or AWS CloudTrail. The `aclRequired` value that appears in CloudTrail or Amazon S3 server access logs depends on which operations were called and on certain information about the requester, object owner, and bucket owner. If no ACLs were required, if you set the `bucket-owner-full-control` canned ACL, or if the requests are allowed by your bucket policy, the `aclRequired` value is the string "`-`" in Amazon S3 server access logs and is absent in CloudTrail.

The following tables list the expected `aclRequired` values in CloudTrail or Amazon S3 server access logs for the various Amazon S3 API operations. You can use this information to understand which Amazon S3 operations depend on ACLs for authorization. In the following tables, A, B, and C represent the different accounts associated with the requester, object owner, and bucket owner. Entries with an asterisk (\*) indicate any of accounts A, B, or C. 

**Note**  
`PutObject` operations in the following table, unless specified otherwise, indicate requests that either do not set an ACL or set the `bucket-owner-full-control` canned ACL. A null value for `aclRequired` indicates that `aclRequired` is absent in AWS CloudTrail logs.

 The following table shows the `aclRequired` values for CloudTrail. 


| Operation name | Requester | Object owner | Bucket owner  | Bucket policy grants access | `aclRequired` value | Reason | 
| --- | --- | --- | --- | --- | --- | --- | 
| GetObject | A | A | A | Yes or No | null | Same-account access | 
| GetObject | A | B | A | Yes or No | null | Same-account access with bucket owner enforced | 
| GetObject | A | A | B | Yes | null | Cross-account access granted by bucket policy | 
| GetObject | A | A | B | No | Yes | Cross-account access relies on ACL | 
| GetObject | A | B | B | Yes | null | Cross-account access granted by bucket policy | 
| GetObject | A | B | B | No | Yes | Cross-account access relies on ACL | 
| GetObject | A | B | C | Yes | null | Cross-account access granted by bucket policy | 
| GetObject | A | B | C | No | Yes | Cross-account access relies on ACL | 
| PutObject | A | Not applicable | A | Yes or No | null | Same-account access | 
| PutObject | A | Not applicable | B | Yes | null | Cross-account access granted by bucket policy | 
| PutObject | A | Not applicable | B | No | Yes | Cross-account access relies on ACL | 
| PutObject with an ACL (except for bucket-owner-full-control) | \* | Not applicable | \* | Yes or No | Yes | Request grants ACL | 
| ListObjects | A | Not applicable | A | Yes or No | null | Same-account access | 
| ListObjects | A | Not applicable | B | Yes | null | Cross-account access granted by bucket policy | 
| ListObjects | A | Not applicable | B | No | Yes | Cross-account access relies on ACL | 
| DeleteObject | A | Not applicable | A | Yes or No | null | Same-account access | 
| DeleteObject | A | Not applicable | B | Yes | null | Cross-account access granted by bucket policy | 
| DeleteObject | A | Not applicable | B | No | Yes | Cross-account access relies on ACL | 
| PutObjectAcl | \* | \* | \* | Yes or No | Yes | Request grants ACL | 
| PutBucketAcl | \* | Not applicable | \* | Yes or No | Yes | Request grants ACL | 
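As a sketch of how you might act on the table above when scanning logs, the following assumes that a CloudTrail event carries `aclRequired`, when present, under its `additionalEventData` field, and that the field is absent for same-account or policy-based access:

```python
def relied_on_acl(event):
    """True if a CloudTrail event's request relied on an ACL for authorization.

    Assumes aclRequired, when present, appears under additionalEventData;
    per the table above, the value is absent when no ACL was required.
    """
    return event.get("additionalEventData", {}).get("aclRequired") == "Yes"

# Hypothetical events shaped like the table rows above.
cross_account_acl = {"eventName": "GetObject",
                     "additionalEventData": {"aclRequired": "Yes"}}
same_account = {"eventName": "GetObject"}  # aclRequired absent
```

Filtering a list of events with `relied_on_acl` surfaces only the requests that would break if you disabled ACLs on the bucket.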

 

**Note**  
`REST.PUT.OBJECT` operations in the following table, unless specified otherwise, indicate requests that either do not set an ACL or set the `bucket-owner-full-control` canned ACL. A value of "`-`" for `aclRequired` indicates a null value in Amazon S3 server access logs.

 The following table shows the `aclRequired` values for Amazon S3 server access logs. 


| Operation name | Requester | Object owner | Bucket owner  | Bucket policy grants access | `aclRequired` value | Reason | 
| --- | --- | --- | --- | --- | --- | --- | 
| REST.GET.OBJECT | A | A | A | Yes or No | - | Same-account access | 
| REST.GET.OBJECT | A | B | A | Yes or No | - | Same-account access with bucket owner enforced | 
| REST.GET.OBJECT | A | A | B | Yes | - | Cross-account access granted by bucket policy | 
| REST.GET.OBJECT | A | A | B | No | Yes | Cross-account access relies on ACL | 
| REST.GET.OBJECT | A | B | B | Yes | - | Cross-account access granted by bucket policy | 
| REST.GET.OBJECT | A | B | B | No | Yes | Cross-account access relies on ACL | 
| REST.GET.OBJECT | A | B | C | Yes | - | Cross-account access granted by bucket policy | 
| REST.GET.OBJECT | A | B | C | No | Yes | Cross-account access relies on ACL | 
| REST.PUT.OBJECT | A | Not applicable | A | Yes or No | - | Same-account access | 
| REST.PUT.OBJECT | A | Not applicable | B | Yes | - | Cross-account access granted by bucket policy | 
| REST.PUT.OBJECT | A | Not applicable | B | No | Yes | Cross-account access relies on ACL | 
| REST.PUT.OBJECT with an ACL (except for bucket-owner-full-control) | \* | Not applicable | \* | Yes or No | Yes | Request grants ACL | 
| REST.GET.BUCKET | A | Not applicable | A | Yes or No | - | Same-account access | 
| REST.GET.BUCKET | A | Not applicable | B | Yes | - | Cross-account access granted by bucket policy | 
| REST.GET.BUCKET | A | Not applicable | B | No | Yes | Cross-account access relies on ACL | 
| REST.DELETE.OBJECT | A | Not applicable | A | Yes or No | - | Same-account access | 
| REST.DELETE.OBJECT | A | Not applicable | B | Yes | - | Cross-account access granted by bucket policy | 
| REST.DELETE.OBJECT | A | Not applicable | B | No | Yes | Cross-account access relies on ACL | 
| REST.PUT.ACL | \* | \* | \* | Yes or No | Yes | Request grants ACL | 
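A quick way to scan server access logs for ACL-dependent requests is sketched below. It assumes `aclRequired` is the trailing space-delimited field of each log record, which holds for the current log format but should be verified against your log version:

```python
def acl_required(log_line):
    """True if the trailing aclRequired field of a server access log record is 'Yes'.

    Assumes aclRequired is the last space-delimited field; per the table
    above, a value of '-' means no ACL was required for the request.
    """
    return log_line.rstrip().rsplit(" ", 1)[-1] == "Yes"
```

Running every line of a log file through `acl_required` and keeping the matches gives you the subset of requests that were authorized by an ACL rather than a policy.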

## Sample ACL


The following sample ACL on a bucket identifies the resource owner and a set of grants. The format is the XML representation of an ACL in the Amazon S3 REST API. The bucket owner has `FULL_CONTROL` of the resource. In addition, the ACL shows how permissions are granted on a resource to two AWS accounts, identified by canonical user ID, and two of the predefined Amazon S3 groups discussed in the preceding section.

**Example**  

```
<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>Owner-canonical-user-ID</ID>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>Owner-canonical-user-ID</ID>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>

    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>user1-canonical-user-ID</ID>
      </Grantee>
      <Permission>WRITE</Permission>
    </Grant>

    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>user2-canonical-user-ID</ID>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>

    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI>
      </Grantee>
      <Permission>WRITE</Permission>
    </Grant>

  </AccessControlList>
</AccessControlPolicy>
```

## Canned ACL


Amazon S3 supports a set of predefined grants, known as *canned ACLs*. Each canned ACL has a predefined set of grantees and permissions. The following table lists the set of canned ACLs and the associated predefined grants. 


| Canned ACL | Applies to | Permissions added to ACL | 
| --- | --- | --- | 
| private | Bucket and object | Owner gets FULL\_CONTROL. No one else has access rights (default). | 
| public-read | Bucket and object | Owner gets FULL\_CONTROL. The AllUsers group (see [Who is a grantee?](#specifying-grantee)) gets READ access. | 
| public-read-write | Bucket and object | Owner gets FULL\_CONTROL. The AllUsers group gets READ and WRITE access. Granting this on a bucket is generally not recommended. | 
| aws-exec-read | Bucket and object | Owner gets FULL\_CONTROL. Amazon EC2 gets READ access to GET an Amazon Machine Image (AMI) bundle from Amazon S3. | 
| authenticated-read | Bucket and object | Owner gets FULL\_CONTROL. The AuthenticatedUsers group gets READ access. | 
| bucket-owner-read | Object | Object owner gets FULL\_CONTROL. Bucket owner gets READ access. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. | 
| bucket-owner-full-control | Object | Both the object owner and the bucket owner get FULL\_CONTROL over the object. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. | 
| log-delivery-write | Bucket | The LogDelivery group gets WRITE and READ\_ACP permissions on the bucket. For more information about logs, see [Logging requests with server access logging](ServerLogs.md). | 

**Note**  
You can specify only one of these canned ACLs in your request.

You specify a canned ACL in your request by using the `x-amz-acl` request header. When Amazon S3 receives a request with a canned ACL in the request, it adds the predefined grants to the ACL of the resource. 
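Conceptually, a request carries at most one canned ACL name in its `x-amz-acl` header. The following minimal sketch illustrates that constraint; the helper name and validation are illustrative, not an AWS SDK API:

```python
# Canned ACL names, transcribed from the table above.
CANNED_ACLS = {
    "private", "public-read", "public-read-write", "aws-exec-read",
    "authenticated-read", "bucket-owner-read", "bucket-owner-full-control",
    "log-delivery-write",
}

def canned_acl_header(canned_acl=None):
    """Build the x-amz-acl header for a request; only one canned ACL is allowed."""
    if canned_acl is None:
        return {}  # no canned ACL requested
    if canned_acl not in CANNED_ACLS:
        raise ValueError(f"unknown canned ACL: {canned_acl}")
    return {"x-amz-acl": canned_acl}
```

Because the header carries a single value, requesting two canned ACLs at once is simply not expressible, which matches the one-canned-ACL-per-request rule above.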

# Configuring ACLs


This section explains how to manage access permissions for S3 buckets and objects using access control lists (ACLs). You can add grants to your resource ACL using the AWS Management Console, AWS Command Line Interface (CLI), REST API, or AWS SDKs.

Bucket and object permissions are independent of each other. An object does not inherit the permissions from its bucket. For example, if you create a bucket and grant write access to a user, you can't access that user’s objects unless the user explicitly grants you access.

You can grant permissions to other AWS account users or to predefined groups. The user or group that you are granting permissions to is called the *grantee*. By default, the owner, which is the AWS account that created the bucket, has full permissions.

Each permission you grant for a user or group adds an entry in the ACL that is associated with the bucket. The ACL lists grants, which identify the grantee and the permission granted.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.

 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

**Warning**  
We highly recommend that you avoid granting write access to the **Everyone (public access)** or **Authenticated Users group (all AWS authenticated users)** groups. For more information about the effects of granting write access to these groups, see [Amazon S3 predefined groups](acl-overview.md#specifying-grantee-predefined-groups).

## Using the S3 console to set ACL permissions for a bucket


The console displays combined access grants for duplicate grantees. To see the full list of ACLs, use the Amazon S3 REST API, AWS CLI, or AWS SDKs.

The following table shows the ACL permissions that you can configure for buckets in the Amazon S3 console.


**Amazon S3 console ACL permissions for buckets**  

| Console permission | ACL permission | Access | 
| --- | --- | --- | 
| Objects - List | READ | Allows grantee to list the objects in the bucket. | 
| Objects - Write | WRITE | Allows grantee to create new objects in the bucket. For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects. | 
| Bucket ACL - Read | READ\_ACP | Allows grantee to read the bucket ACL. | 
| Bucket ACL - Write | WRITE\_ACP | Allows grantee to write the ACL for the applicable bucket. | 
| Everyone (public access): Objects - List | READ | Grants public read access for the objects in the bucket. When you grant list access to Everyone (public access), anyone in the world can access the objects in the bucket. | 
| Everyone (public access): Bucket ACL - Read | READ\_ACP | Grants public read access for the bucket ACL. When you grant read access to Everyone (public access), anyone in the world can access the bucket ACL. | 

For more information about ACL permissions, see [Access control list (ACL) overview](acl-overview.md).

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

**To set ACL permissions for a bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the **Buckets** list, choose the name of the bucket that you want to set permissions for.

1. Choose **Permissions**.

1. Under **Access control list**, choose **Edit**.

   You can edit the following ACL permissions for the bucket:

**Objects**
   + **List** – Allows a grantee to list the objects in the bucket.
   + **Write** – Allows grantee to create new objects in the bucket. For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects. 

     In the S3 console, you can only grant write access to the S3 log delivery group and the bucket owner (your AWS account). We highly recommend that you do not grant write access for other grantees. However, if you need to grant write access, you can use the AWS CLI, AWS SDKs, or the REST API. 

**Bucket ACL**
   + **Read** – Allows grantee to read the bucket ACL.
   + **Write** – Allows grantee to write the ACL for the applicable bucket.

1. To change the bucket owner's permissions, beside **Bucket owner (your AWS account)**, clear or select from the following ACL permissions:
   + **Objects** – **List** or **Write**
   + **Bucket ACL** – **Read** or **Write**

   The *owner* refers to the AWS account root user, not an AWS Identity and Access Management (IAM) user. For more information about the root user, see [The AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) in the *IAM User Guide*.

1. To grant or undo permissions for the general public (everyone on the internet), beside **Everyone (public access)**, clear or select from the following ACL permissions:
   + **Objects** – **List**
   + **Bucket ACL** – **Read**
**Warning**  
Use caution when granting the **Everyone** group public access to your S3 bucket. When you grant access to this group, anyone in the world can access your bucket. We highly recommend that you never grant any kind of public write access to your S3 bucket.

1. To grant or undo permissions for anyone with an AWS account, beside **Authenticated Users group (anyone with an AWS account)**, clear or select from the following ACL permissions:
   + **Objects** – **List**
   + **Bucket ACL** – **Read**

1. To grant or undo permissions for Amazon S3 to write server access logs to the bucket, under **S3 log delivery group**, clear or select from the following ACL permissions:
   + **Objects** – **List** or **Write** 
   + **Bucket ACL** – **Read** or **Write** 

     If a bucket is set up as the target bucket to receive access logs, the bucket permissions must allow the **Log Delivery** group write access to the bucket. When you enable server access logging on a bucket, the Amazon S3 console grants write access to the **Log Delivery** group for the target bucket that you choose to receive the logs. For more information about server access logging, see [Enabling Amazon S3 server access logging](enable-server-access-logging.md).

1. To grant access to another AWS account, do the following:

   1. Choose **Add grantee**.

   1. In the **Grantee** box, enter the canonical ID of the other AWS account.

   1. Select from the following ACL permissions:
      + **Objects** – **List** or **Write**
      + **Bucket ACL** – **Read** or **Write**
**Warning**  
When you grant other AWS accounts access to your resources, be aware that the AWS accounts can delegate their permissions to users under their accounts. This is known as *cross-account access*. For information about using cross-account access, see [ Creating a Role to Delegate Permissions to an IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*. 

1. To remove access to another AWS account, under **Access for other AWS accounts**, choose **Remove**.

1. To save your changes, choose **Save changes**.

## Using the S3 console to set ACL permissions for an object


The console displays combined access grants for duplicate grantees. To see the full list of ACLs, use the Amazon S3 REST API, AWS CLI, or AWS SDKs. The following table shows the ACL permissions that you can configure for objects in the Amazon S3 console.


**Amazon S3 console ACL permissions for objects**  

| Console permission | ACL permission | Access | 
| --- | --- | --- | 
| Object - Read | READ | Allows grantee to read the object data and its metadata. | 
| Object ACL - Read | READ\_ACP | Allows grantee to read the object ACL. | 
| Object ACL - Write | WRITE\_ACP | Allows grantee to write the ACL for the applicable object. | 

For more information about ACL permissions, see [Access control list (ACL) overview](acl-overview.md).

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

**To set ACL permissions for an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that contains the object.

1. In the **objects** list, choose the name of the object for which you want to set permissions.

1. Choose **Permissions**.

1. Under **Access control list (ACL)**, choose **Edit**.

   You can edit the following ACL permissions for the object:

**Object**
   + **Read** – Allows grantee to read the object data and its metadata.

**Object ACL**
   + **Read** – Allows grantee to read the object ACL.
   + **Write** – Allows grantee to write the ACL for the applicable object. In the S3 console, you can only grant write access to the bucket owner (your AWS account). We highly recommend that you do not grant write access for other grantees. However, if you need to grant write access, you can use the AWS CLI, AWS SDKs, or the REST API. 

1. You can manage object access permissions for the following: 

   1. 

**Access for object owner**

      The *owner* refers to the AWS account root user, and not an AWS Identity and Access Management (IAM) user. For more information about the root user, see [The AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) in the *IAM User Guide*.

      To change the owner's object access permissions, under **Access for object owner**, choose **Your AWS Account (owner)**.

      Select the check boxes for the permissions that you want to change, and then choose **Save**.

   1. 

**Access for other AWS accounts**

      To grant permissions to an AWS user from a different AWS account, under **Access for other AWS accounts**, choose **Add account**. In the **Enter an ID** field, enter the canonical ID of the AWS user that you want to grant object permissions to. For information about finding a canonical ID, see [Your AWS account identifiers](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html) in the *Amazon Web Services General Reference*. You can add as many as 99 users.

      Select the check boxes for the permissions that you want to grant to the user, and then choose **Save**. To display information about the permissions, choose the Help icons. 

   1. 

**Public access**

      To grant access to your object to the general public (everyone in the world), under **Public access**, choose **Everyone**. Granting public access permissions means that anyone in the world can access the object.

      Select the check boxes for the permissions that you want to grant, and then choose **Save**. 
**Warning**  
Use caution when granting the **Everyone** group anonymous access to your Amazon S3 objects. When you grant access to this group, anyone in the world can access your object. If you need to grant access to everyone, we highly recommend that you only grant permissions to **Read objects**.
We highly recommend that you *do not* grant the **Everyone** group write object permissions. Doing so allows anyone to overwrite the ACL permissions for the object.

## Using the AWS SDKs


This section provides examples of how to configure access control list (ACL) grants on buckets and objects.

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

------
#### [ Java ]

This section provides examples of how to configure access control list (ACL) grants on buckets and objects. The first example creates a bucket with a canned ACL (see [Canned ACL](acl-overview.md#canned-acl)), creates a list of custom permission grants, and then replaces the canned ACL with an ACL containing the custom grants. The second example shows how to modify an ACL using the `AccessControlList.grantPermission()` method.

**Example Create a bucket and specify a canned ACL that grants permission to the S3 log delivery group**  
This example creates a bucket. In the request, the example specifies a canned ACL that grants the Log Delivery group permission to write logs to the bucket.   

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.IOException;
import java.util.ArrayList;

public class CreateBucketWithACL {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String userEmailForReadPermission = "*** user@example.com ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .build();

            // Create a bucket with a canned ACL. This ACL will be replaced by the
            // setBucketAcl() calls below. It is included here for demonstration
            // purposes.
            CreateBucketRequest createBucketRequest = new CreateBucketRequest(bucketName, clientRegion.getName())
                    .withCannedAcl(CannedAccessControlList.LogDeliveryWrite);
            s3Client.createBucket(createBucketRequest);

            // Create a collection of grants to add to the bucket.
            ArrayList<Grant> grantCollection = new ArrayList<Grant>();

            // Grant the account owner full control.
            Grant grant1 = new Grant(new CanonicalGrantee(s3Client.getS3AccountOwner().getId()),
                    Permission.FullControl);
            grantCollection.add(grant1);

            // Grant the LogDelivery group permission to write to the bucket.
            Grant grant2 = new Grant(GroupGrantee.LogDelivery, Permission.Write);
            grantCollection.add(grant2);

            // Save grants by replacing all current ACL grants with the two we just created.
            AccessControlList bucketAcl = new AccessControlList();
            bucketAcl.grantAllPermissions(grantCollection.toArray(new Grant[0]));
            s3Client.setBucketAcl(bucketName, bucketAcl);

            // Retrieve the bucket's ACL, add another grant, and then save the new ACL.
            AccessControlList newBucketAcl = s3Client.getBucketAcl(bucketName);
            Grant grant3 = new Grant(new EmailAddressGrantee(userEmailForReadPermission), Permission.Read);
            newBucketAcl.grantAllPermissions(grant3);
            s3Client.setBucketAcl(bucketName, newBucketAcl);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

**Example Update ACL on an existing object**  
This example updates the ACL on an object. The example performs the following tasks:   
+ Retrieves an object's ACL
+ Clears the ACL by removing all existing permissions
+ Adds two permissions: full access to the owner, and WRITE\_ACP (see [What permissions can I grant?](acl-overview.md#permissions)) to a user identified by an email address
+ Saves the ACL to the object

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.CanonicalGrantee;
import com.amazonaws.services.s3.model.EmailAddressGrantee;
import com.amazonaws.services.s3.model.Permission;

import java.io.IOException;

public class ModifyACLExistingObject {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String emailGrantee = "*** user@example.com ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Get the existing object ACL that we want to modify.
            AccessControlList acl = s3Client.getObjectAcl(bucketName, keyName);

            // Clear the existing list of grants.
            acl.getGrantsAsList().clear();

            // Grant a sample set of permissions, using the existing ACL owner for Full
            // Control permissions.
            acl.grantPermission(new CanonicalGrantee(acl.getOwner().getId()), Permission.FullControl);
            acl.grantPermission(new EmailAddressGrantee(emailGrantee), Permission.WriteAcp);

            // Save the modified ACL back to the object.
            s3Client.setObjectAcl(bucketName, keyName, acl);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

------
#### [ .NET ]

**Example Create a bucket and specify a canned ACL that grants permission to the S3 log delivery group**  
This C# example creates a bucket. In the request, the code also specifies a canned ACL that grants the Log Delivery group permission to write the logs to the bucket.  
 For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*.   

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class ManagingBucketACLTest
    {
        private const string newBucketName = "*** bucket name ***"; 
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;

        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            CreateBucketUseCannedACLAsync().Wait();
        }

        private static async Task CreateBucketUseCannedACLAsync()
        {
            try
            {
                // Add bucket (specify canned ACL).
                PutBucketRequest putBucketRequest = new PutBucketRequest()
                {
                    BucketName = newBucketName,
                    BucketRegion = S3Region.USW2, // Must match the client's region (bucketRegion above).
                                                  // Add canned ACL.
                    CannedACL = S3CannedACL.LogDeliveryWrite
                };
                PutBucketResponse putBucketResponse = await client.PutBucketAsync(putBucketRequest);

                // Retrieve bucket ACL.
                GetACLResponse getACLResponse = await client.GetACLAsync(new GetACLRequest
                {
                    BucketName = newBucketName
                });
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine("S3 error occurred. Exception: " + amazonS3Exception.ToString());
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception: " + e.ToString());
            }
        }
    }
}
```

**Example Update ACL on an existing object**  
This C# example updates the ACL on an existing object. The example performs the following tasks:  
+ Retrieves an object's ACL.
+ Clears the ACL by removing all existing permissions.
+ Adds two permissions: full access to the owner, and WRITE\_ACP to a user identified by an email address.
+ Saves the ACL by sending a `PutAcl` request.
For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*.   

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class ManagingObjectACLTest
    {
        private const string bucketName = "*** bucket name ***"; 
        private const string keyName = "*** object key name ***"; 
        private const string emailAddress = "*** email address ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;
        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            TestObjectACLTestAsync().Wait();
        }
        private static async Task TestObjectACLTestAsync()
        {
            try
            {
                    // Retrieve the ACL for the object.
                    GetACLResponse aclResponse = await client.GetACLAsync(new GetACLRequest
                    {
                        BucketName = bucketName,
                        Key = keyName
                    });

                    S3AccessControlList acl = aclResponse.AccessControlList;

                    // Retrieve the owner (we use this to re-add permissions after we clear the ACL).
                    Owner owner = acl.Owner;

                    // Clear existing grants.
                    acl.Grants.Clear();

                    // Add a grant to reset the owner's full permission (the previous clear statement removed all permissions).
                    S3Grant fullControlGrant = new S3Grant
                    {
                        Grantee = new S3Grantee { CanonicalUser = owner.Id },
                        Permission = S3Permission.FULL_CONTROL
                        
                    };

                    // Describe the grant for the permission using an email address.
                    S3Grant grantUsingEmail = new S3Grant
                    {
                        Grantee = new S3Grantee { EmailAddress = emailAddress },
                        Permission = S3Permission.WRITE_ACP
                    };
                    acl.Grants.AddRange(new List<S3Grant> { fullControlGrant, grantUsingEmail });
 
                    // Set a new ACL.
                    PutACLResponse response = await client.PutACLAsync(new PutACLRequest
                    {
                        BucketName = bucketName,
                        Key = keyName,
                        AccessControlList = acl
                    });
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine("An AmazonS3Exception was thrown. Exception: " + amazonS3Exception.ToString());
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception: " + e.ToString());
            }
        }
    }
}
```

------

## Using the REST API


Amazon S3 APIs enable you to set an ACL when you create a bucket or an object. Amazon S3 also provides APIs to set an ACL on an existing bucket or object. These APIs provide the following methods for setting an ACL:
+ **Set ACL using request headers—** When you send a request to create a resource (bucket or object), you set an ACL using the request headers. Using these headers, you can either specify a canned ACL or specify grants explicitly (identifying grantee and permissions explicitly). 
+ **Set ACL using request body—** When you send a request to set an ACL on an existing resource, you can set the ACL either in the request header or in the body. 

For information on the REST API support for managing ACLs, see the following sections in the *Amazon Simple Storage Service API Reference*:
+ [GetBucketAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETacl.html)
+ [PutBucketAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html)
+ [GetObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETacl.html)
+ [PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUTacl.html)
+ [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html)
+ [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html)
+ [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html)
+ [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html)

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

### Access Control List (ACL)-Specific Request Headers


You can use headers to grant access control list (ACL)-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the Access Control List (ACL) on the object. For more information, see [Access control list (ACL) overview](acl-overview.md).

With this operation, you can grant access permissions using one of these two methods:
+ **Canned ACL (`x-amz-acl`)** — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see [Canned ACL](acl-overview.md#canned-acl).
+ **Access Permissions** — To explicitly grant access permissions to specific AWS accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see [Access control list (ACL) overview](acl-overview.md). In the header, you specify a list of grantees who get the specific permission. 
  + `x-amz-grant-read`
  + `x-amz-grant-write`
  + `x-amz-grant-read-acp`
  + `x-amz-grant-write-acp`
  + `x-amz-grant-full-control`
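
For example, a `PutObject` request that grants explicit permissions through these headers might look like the following sketch. The bucket name, canonical user ID, and abbreviated `Authorization` value are placeholders, not real values:

```
PUT /example-object HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-grant-read: uri="http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
x-amz-grant-full-control: id="CanonicalUserID-of-bucket-owner"
Authorization: AWS4-HMAC-SHA256 Credential=...
```

Each `x-amz-grant-*` header takes a comma-separated list of grantees, where each grantee is identified as `id=`, `uri=`, or `emailAddress=`.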

## Using the AWS CLI


For more information about managing ACLs using the AWS CLI, see [put-bucket-acl](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-acl.html) in the *AWS CLI Command Reference*.
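
For example, the following commands (shown with a placeholder bucket name) apply a canned ACL to a bucket and then read the resulting ACL back:

```
aws s3api put-bucket-acl --bucket awsexamplebucket1 --acl private
aws s3api get-bucket-acl --bucket awsexamplebucket1
```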

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

# Policy examples for ACLs
Policy examples

You can use condition keys in bucket policies to control access to Amazon S3.

**Topics**
+ [Granting s3:PutObject permission with a condition requiring the bucket owner to get full control](#grant-putobject-conditionally-1)
+ [Granting s3:PutObject permission with a condition on the x-amz-acl header](#example-acl-header)

## Granting s3:PutObject permission with a condition requiring the bucket owner to get full control


The [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) operation allows access control list (ACL)–specific headers that you can use to grant ACL-based permissions. Using these keys, the bucket owner can set a condition to require specific access permissions when the user uploads an object. 

Suppose that Account A owns a bucket, and the account administrator wants to grant Dave, a user in Account B, permissions to upload objects. By default, objects that Dave uploads are owned by Account B, and Account A has no permissions on these objects. Because the bucket owner is paying the bills, it wants full permissions on the objects that Dave uploads. The Account A administrator can do this by granting the `s3:PutObject` permission to Dave, with a condition that the request include ACL-specific headers that either grant full permission explicitly or use a canned ACL. For more information, see [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html).

### Require the x-amz-grant-full-control header


You can require the request to include the `x-amz-grant-full-control` header, which grants full control permission to the bucket owner. The following bucket policy grants the `s3:PutObject` permission to user Dave with a condition using the `s3:x-amz-grant-full-control` condition key, which requires the request to include the `x-amz-grant-full-control` header.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Dave"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-grant-full-control": "id=AccountA-CanonicalUserID"
                }
            }
        }
    ]
}
```

------

**Note**  
This example is about cross-account permission. However, if Dave (who is getting the permission) belongs to the AWS account that owns the bucket, this conditional permission is not necessary. This is because the parent account to which Dave belongs owns objects that the user uploads.

**Add explicit deny**  
The preceding bucket policy grants conditional permission to user Dave in Account B. While this policy is in effect, it is possible for Dave to get the same permission without any condition through some other policy. For example, Dave can belong to a group, and you grant the group `s3:PutObject` permission without any condition. To avoid such permission loopholes, you can write a stricter access policy by adding an explicit deny. In this example, you explicitly deny user Dave upload permission if his request does not include the headers that grant full permissions to the bucket owner. An explicit deny always supersedes any other permission granted. The following is the revised access policy example with the explicit deny added.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/AccountBadmin"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-grant-full-control": "id=AccountA-CanonicalUserID"
                }
            }
        },
        {
            "Sid": "statement2",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/AccountBadmin"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-grant-full-control": "id=AccountA-CanonicalUserID"
                }
            }
        }
    ]
}
```

------

**Test the policy with the AWS CLI**  
If you have two AWS accounts, you can test the policy using the AWS Command Line Interface (AWS CLI). You attach the policy and use Dave's credentials to test the permission using the following AWS CLI `put-object` command. You provide Dave's credentials by adding the `--profile` parameter. You grant full control permission to the bucket owner by adding the `--grant-full-control` parameter. For more information about setting up and using the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*. 

```
aws s3api put-object --bucket examplebucket --key HappyFace.jpg --body c:\HappyFace.jpg --grant-full-control id="AccountA-CanonicalUserID" --profile AccountBUserProfile
```

### Require the x-amz-acl header


You can require the `x-amz-acl` header with a canned ACL granting full control permission to the bucket owner. To require the `x-amz-acl` header in the request, you can replace the key-value pair in the `Condition` block and specify the `s3:x-amz-acl` condition key, as shown in the following example.

```
"Condition": {
    "StringEquals": {
        "s3:x-amz-acl": "bucket-owner-full-control"
    }
}
```

To test the permission using the AWS CLI, you specify the `--acl` parameter. The AWS CLI then adds the `x-amz-acl` header when it sends the request.

```
aws s3api put-object --bucket examplebucket --key HappyFace.jpg --body c:\HappyFace.jpg --acl "bucket-owner-full-control" --profile AccountBadmin
```

## Granting s3:PutObject permission with a condition on the x-amz-acl header


The following bucket policy grants the `s3:PutObject` permission for two AWS accounts if the request includes the `x-amz-acl` header making the object publicly readable. The `Condition` block uses the `StringEquals` condition, and it is provided a key-value pair, `"s3:x-amz-acl":["public-read"]`, for evaluation. In the key-value pair, the `s3:x-amz-acl` is an Amazon S3–specific key, as indicated by the prefix `s3:`. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AddCannedAcl",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root",
                    "arn:aws:iam::111122223333:root"
                ]
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::awsexamplebucket1/*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": [
                        "public-read"
                    ]
                }
            }
        }
    ]
}
```

------

**Important**  
Not all conditions make sense for all actions. For example, it makes sense to include an `s3:LocationConstraint` condition on a policy that grants the `s3:CreateBucket` Amazon S3 permission. However, it does not make sense to include this condition on a policy that grants the `s3:GetObject` permission. Amazon S3 can test for semantic errors of this type that involve Amazon S3–specific conditions. However, if you are creating a policy for an IAM user or role and you include a semantically invalid Amazon S3 condition, no error is reported because IAM cannot validate Amazon S3 conditions. 

# Blocking public access to your Amazon S3 storage
Blocking public access

The Amazon S3 Block Public Access feature provides settings for access points, buckets, accounts, and AWS Organizations to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects don't allow public access. However, users can modify bucket policies, access point policies, or object permissions to allow public access. S3 Block Public Access settings override these policies and permissions so that you can limit public access to these resources. 

With S3 Block Public Access, organization administrators, account administrators, and bucket owners can easily set up centralized controls that limit public access to their Amazon S3 resources and that are enforced regardless of how the resources are created.

You can manage Block Public Access settings at multiple levels: the organization level (using AWS Organizations), the account level, and the bucket and access point level. For instructions on configuring block public access, see [Configuring block public access](#configuring-block-public-access).

When Amazon S3 receives a request to access a bucket or an object, it determines whether the bucket or the bucket owner's account has a block public access setting applied. If the account is part of an AWS organization with a Block Public Access policy, Amazon S3 also checks the organization-level settings. If the request was made through an access point, Amazon S3 also checks the block public access settings for the access point. If an existing block public access setting prohibits the requested access, Amazon S3 rejects the request. 

Amazon S3 Block Public Access provides four settings. These settings are independent and can be used in any combination. Each setting can be applied to an access point, a bucket, or an entire AWS account. At the organization level, all four settings are applied together as a unified policy; you cannot select individual settings. If the block public access settings for the access point, bucket, or account differ, Amazon S3 applies the most restrictive combination of the access point, bucket, and account settings. Account-level settings automatically inherit organization-level policies when present, and Amazon S3 applies the most restrictive policy between the bucket-level and effective account-level settings. For example, if your organization has a Block Public Access policy enabled but a specific bucket has Block Public Access disabled at the bucket level, the bucket is still protected, because Amazon S3 applies the more restrictive organization-level settings. Conversely, if your organization policy is disabled but a bucket has Block Public Access enabled, that bucket remains protected by its bucket-level settings. 

When Amazon S3 evaluates whether an operation is prohibited by a block public access setting, it rejects any request that violates an organization policy (which enforces the account BPA setting) or an access point, bucket, or account setting.

**Important**  
Public access is granted to buckets and objects through access control lists (ACLs), access point policies, bucket policies, or all. To help ensure that all of your Amazon S3 access points, buckets, and objects have their public access blocked, we recommend that you turn on all four settings for block public access for your account. For organizations managing multiple accounts, consider using organization-level Block Public Access policies for centralized control. Additionally, we recommend that you also turn on all four settings for each bucket to comply with AWS Security Hub Foundational Security Best Practices control S3.8. These settings block public access for all current and future buckets and access points.   
Before applying these settings, verify that your applications will work correctly without public access. If you require some level of public access to your buckets or objects, for example to host a static website as described at [Hosting a static website using Amazon S3](WebsiteHosting.md), you can customize the individual settings to suit your storage use cases.  
Enabling Block Public Access helps protect your resources by preventing public access from being granted through the resource policies or access control lists (ACLs) that are directly attached to S3 resources. In addition to enabling Block Public Access, carefully inspect the following policies to confirm that they don't grant public access:  
+ Identity-based policies attached to associated AWS principals (for example, IAM roles)
+ Resource-based policies attached to associated AWS resources (for example, AWS Key Management Service (KMS) keys)

**Note**  
You can enable block public access settings only for organizations, access points, buckets, and AWS accounts. Amazon S3 doesn't support block public access settings on a per-object basis.
When you apply block public access settings to an account, the settings apply to all AWS Regions globally. The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions.
When you apply organization-level block public access policies, they automatically propagate to selected member accounts and override account-level settings.

**Topics**
+ [Block public access settings](#access-control-block-public-access-options)
+ [Managing block public access at organization level](#access-control-block-public-access-organization-level)
+ [Performing block public access operations on an access point](#access-control-block-public-access-examples-access-point)
+ [The meaning of "public"](#access-control-block-public-access-policy-status)
+ [Using IAM Access Analyzer for S3 to review public buckets](#access-analyzer-public-info)
+ [Permissions](#access-control-block-public-access-permissions)
+ [Configuring block public access](#configuring-block-public-access)
+ [Configuring block public access settings for your account](configuring-block-public-access-account.md)
+ [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md)

## Block public access settings


S3 Block Public Access provides four settings. You can apply these settings in any combination to individual access points, buckets, or entire AWS accounts. At the organization level, you can only enable or disable all four settings together; granular control over individual settings is not available. If you apply a setting to an account, it applies to all buckets and access points that are owned by that account. Account-level settings automatically inherit from organization policies when present. Similarly, if you apply a setting to a bucket, it applies to all access points associated with that bucket.

The policy inheritance and enforcement works as follows:
+ Organization-level policies automatically apply to member accounts, enforcing any existing account-level settings.
+ Account-level settings inherit from organization policies when present, or use locally configured settings when no organization policy exists.
+ Bucket-level settings operate independently but are subject to enforcement restrictions. S3 applies the most restrictive combination across all applicable levels (organization, account, and bucket). This means a bucket inherits the baseline protection from its account (which may be organization-managed), and S3 enforces whichever configuration is more restrictive between the bucket's settings and the account's effective settings.
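
As a rough model, the "most restrictive combination" rule behaves like a logical OR across levels: a boolean block setting is enforced if any applicable level enables it. The following Python sketch (hypothetical helper, not an AWS API) illustrates the evaluation:

```python
# Hypothetical sketch of how effective Block Public Access settings combine.
# A setting is enforced if ANY applicable level (organization, account, or
# bucket) enables it -- the "most restrictive combination".

SETTINGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def effective_settings(org=None, account=None, bucket=None):
    """Return the effective configuration across all levels that apply."""
    levels = [lvl for lvl in (org, account, bucket) if lvl is not None]
    return {s: any(lvl.get(s, False) for lvl in levels) for s in SETTINGS}

# A bucket that disables a setting is still protected if the organization
# (or account) enables it.
print(effective_settings(org={"BlockPublicAcls": True},
                         bucket={"BlockPublicAcls": False}))
```

The sketch shows why disabling a setting at the bucket level cannot weaken protection applied at a higher level.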

The following table contains the available settings.


| Name | Description | 
| --- | --- | 
| BlockPublicAcls |  Setting this option to `TRUE` causes the following behavior: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html) When this setting is set to `TRUE`, the specified operations fail (whether made through the REST API, AWS CLI, or AWS SDKs). However, existing policies and ACLs for buckets and objects aren't modified. This setting enables you to protect against public access while allowing you to audit, refine, or otherwise alter the existing policies and ACLs for your buckets and objects.  Access points don't have ACLs associated with them. If you apply this setting to an access point, it acts as a passthrough to the underlying bucket. If an access point has this setting enabled, requests made through the access point behave as though the underlying bucket has this setting enabled, regardless of whether the bucket actually has this setting enabled.   | 
| IgnorePublicAcls |  Setting this option to `TRUE` causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it contains. This setting enables you to safely block public access granted by ACLs while still allowing `PutObject` calls that include a public ACL (as opposed to `BlockPublicAcls`, which rejects `PutObject` calls that include a public ACL). Enabling this setting doesn't affect the persistence of any existing ACLs and doesn't prevent new public ACLs from being set.  Access points don't have ACLs associated with them. If you apply this setting to an access point, it acts as a passthrough to the underlying bucket. If an access point has this setting enabled, requests made through the access point behave as though the underlying bucket has this setting enabled, regardless of whether the bucket actually has this setting enabled.   | 
| BlockPublicPolicy |  Setting this option to `TRUE` for a bucket causes Amazon S3 to reject calls to `PutBucketPolicy` if the specified bucket policy allows public access. Setting this option to `TRUE` for a bucket also causes Amazon S3 to reject calls to `PutAccessPointPolicy` for all of the bucket's same-account access points if the specified policy allows public access.  Setting this option to `TRUE` for an access point causes Amazon S3 to reject calls to `PutAccessPointPolicy` and `PutBucketPolicy` that are made through the access point if the specified policy (for either the access point or the underlying bucket) allows public access. You can use this setting to allow users to manage access point and bucket policies without allowing them to publicly share the bucket or the objects it contains. Enabling this setting doesn't affect existing access point or bucket policies.  To use this setting effectively, we recommend that you apply it at the *account* level. A bucket policy can allow users to alter a bucket's block public access settings. Therefore, users who have permission to change a bucket policy could insert a policy that allows them to disable the block public access settings for the bucket. If this setting is enabled for the entire account, rather than for a specific bucket, Amazon S3 blocks public policies even if a user alters the bucket policy to disable this setting.   | 
| RestrictPublicBuckets |  Setting this option to `TRUE` restricts access to an access point or bucket with a public policy to only AWS service principals and authorized users within the bucket owner's account and access point owner's account. This setting blocks all cross-account access to the access point or bucket (except by AWS service principals), while still allowing users within the account to manage the access point or bucket. Enabling this setting doesn't affect existing access point or bucket policies, except that Amazon S3 blocks public and cross-account access derived from any public access point or bucket policy, including non-public delegation to specific accounts.  | 
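
For example, to apply all four settings to a single bucket with the AWS CLI (a sketch with a placeholder bucket name):

```
aws s3api put-public-access-block --bucket awsexamplebucket1 \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```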

**Important**  
Calls to `GetBucketAcl` and `GetObjectAcl` always return the effective permissions in place for the specified bucket or object. For example, suppose that a bucket has an ACL that grants public access, but the bucket also has the `IgnorePublicAcls` setting enabled. In this case, `GetBucketAcl` returns an ACL that reflects the access permissions that Amazon S3 is enforcing, rather than the actual ACL that is associated with the bucket.
Block public access settings don't alter existing policies or ACLs. Therefore, removing a block public access setting causes a bucket or object with a public policy or ACL to again be publicly accessible. 

## Managing block public access at organization level


Organization-level block public access uses AWS Organizations policies to centrally manage S3 public access controls across your entire organization. When enabled, these policies automatically apply to selected accounts and override individual account-level settings.

For more information about block public access at the organization level, see [S3 policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_s3.html) in the *AWS Organizations User Guide*.

## Performing block public access operations on an access point


To perform block public access operations on an access point, use the AWS CLI service `s3control`. 

**Important**  
You can't change an access point's block public access settings after creating the access point. You can specify block public access settings for an access point only when creating the access point.
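
For example, you can set all four settings at creation time. The following command is a sketch with placeholder account ID, access point name, and bucket name:

```
aws s3control create-access-point --account-id 111122223333 --name example-ap \
    --bucket awsexamplebucket1 \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```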

## The meaning of "public"


### ACLs


Amazon S3 considers a bucket or object ACL public if it grants any permissions to members of the predefined `AllUsers` or `AuthenticatedUsers` groups. For more information about predefined groups, see [Amazon S3 predefined groups](acl-overview.md#specifying-grantee-predefined-groups).
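
For example, a grant like the following (shown in the JSON shape that `get-object-acl` returns) makes the ACL public, because the grantee is the `AllUsers` group:

```
{
    "Grantee": {
        "Type": "Group",
        "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
    },
    "Permission": "READ"
}
```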

### Bucket policies


When evaluating a bucket policy, Amazon S3 begins by assuming that the policy is public. It then evaluates the policy to determine whether it qualifies as non-public. To be considered non-public, a bucket policy must grant access only to fixed values (values that don't contain a wildcard or [an AWS Identity and Access Management Policy Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html)) for one or more of the following:
+ An AWS principal, user, role, or service principal (for example, `aws:PrincipalOrgID`)
+ A set of Classless Inter-Domain Routing (CIDR) blocks, using `aws:SourceIp`. For more information about CIDR, see [RFC 4632](http://www.rfc-editor.org/rfc/rfc4632.txt) on the RFC Editor website.
**Note**  
Bucket policies that grant access conditioned on the `aws:SourceIp` condition key with very broad IP ranges (for example, `0.0.0.0/1`) are evaluated as "public." This includes values broader than `/8` for IPv4 and `/32` for IPv6 (excluding RFC 1918 private ranges). Block public access rejects these "public" policies and prevents cross-account access to buckets that are already using them.
+ `aws:SourceArn`
+ `aws:SourceVpc`
+ `aws:SourceVpce`
+ `aws:SourceOwner`
+ `aws:SourceAccount`
+ `aws:userid`, outside the pattern "`AROLEID:*`"
+ `s3:DataAccessPointArn`
**Note**  
When used in a bucket policy, this value can contain a wildcard for the access point name without rendering the policy public, as long as the account ID is fixed. For example, allowing access to `arn:aws:s3:us-west-2:123456789012:accesspoint/*` would permit access to any access point associated with account `123456789012` in Region `us-west-2`, without rendering the bucket policy public. This behavior is different for access point policies. For more information, see [Access points](#access-control-block-public-access-policy-status-access-points).
+ `s3:DataAccessPointAccount`
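The `aws:SourceIp` breadth rule in the preceding note can be sketched locally. This is a simplified model, not S3's actual evaluator, and `source_ip_counts_as_public` is a hypothetical helper name.

```python
import ipaddress

# RFC 1918 private ranges are exempt from the breadth rule described above.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def source_ip_counts_as_public(cidr: str) -> bool:
    """Return True if a CIDR value is broad enough to be treated as "public"."""
    net = ipaddress.ip_network(cidr, strict=False)
    if isinstance(net, ipaddress.IPv4Network):
        if any(net.subnet_of(private) for private in RFC1918):
            return False           # private ranges are excluded from the rule
        return net.prefixlen < 8   # broader than /8, e.g. 0.0.0.0/1
    return net.prefixlen < 32      # IPv6: broader than /32

print(source_ip_counts_as_public("0.0.0.0/1"))    # True
print(source_ip_counts_as_public("8.8.8.0/24"))   # False
print(source_ip_counts_as_public("10.0.0.0/8"))   # False (RFC 1918)
```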

For more information about bucket policies, see [Bucket policies for Amazon S3](bucket-policies.md).

**Note**  
When using [multivalued context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-single-vs-multi-valued-context-keys.html), you must use the `ForAllValues` or `ForAnyValue` set operators.

**Example: Public bucket policies**  
Under these rules, the following example policies are considered public.  

```
{
    "Principal": "*",
    "Resource": "*",
    "Action": "s3:PutObject",
    "Effect": "Allow"
}
```

```
{
    "Principal": "*",
    "Resource": "*",
    "Action": "s3:PutObject",
    "Effect": "Allow",
    "Condition": { "StringLike": {"aws:SourceVpc": "vpc-*"}}
}
```
You can make these policies non-public by including any of the condition keys listed previously, using a fixed value. For example, you can make the preceding policy non-public by setting `aws:SourceVpc` to a fixed value, like the following.  

```
{
    "Principal": "*",
    "Resource": "*",
    "Action": "s3:PutObject",
    "Effect": "Allow",
    "Condition": {"StringEquals": {"aws:SourceVpc": "vpc-91237329"}}
}
```
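The evaluation described above can be sketched as a small classifier. This is a simplified, illustrative model only; the real evaluator also handles principals, policy variables, and set operators more carefully.

```python
# Condition keys that can narrow a statement to a fixed, non-public audience.
NARROWING_KEYS = {
    "aws:SourceIp", "aws:SourceArn", "aws:SourceVpc", "aws:SourceVpce",
    "aws:SourceOwner", "aws:SourceAccount", "aws:userid",
    "s3:DataAccessPointArn", "s3:DataAccessPointAccount",
}

def is_fixed(value) -> bool:
    # Fixed values contain no wildcard and no IAM policy variable.
    text = str(value)
    return "*" not in text and "?" not in text and "${" not in text

def statement_is_public(statement: dict) -> bool:
    # Start by assuming the statement is public, then look for a
    # narrowing condition key constrained to a fixed value.
    if statement.get("Principal") != "*":
        return False  # granted to specific principals, not the public
    for operator_block in statement.get("Condition", {}).values():
        for key, value in operator_block.items():
            if key in NARROWING_KEYS and is_fixed(value):
                return False
    return True

wildcard_vpc = {"Principal": "*", "Resource": "*", "Action": "s3:PutObject",
                "Effect": "Allow",
                "Condition": {"StringLike": {"aws:SourceVpc": "vpc-*"}}}
fixed_vpc = {"Principal": "*", "Resource": "*", "Action": "s3:PutObject",
             "Effect": "Allow",
             "Condition": {"StringEquals": {"aws:SourceVpc": "vpc-91237329"}}}

print(statement_is_public(wildcard_vpc))  # True
print(statement_is_public(fixed_vpc))     # False
```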

### How Amazon S3 evaluates a bucket policy that contains both public and non-public access grants


This example shows how Amazon S3 evaluates a bucket policy that contains both public and non-public access grants.

Suppose that a bucket has a policy that grants access to a set of fixed principals. Under the previously described rules, this policy isn't public. Thus, if you enable the `RestrictPublicBuckets` setting, the policy remains in effect as written, because `RestrictPublicBuckets` only applies to buckets that have public policies. However, if you add a public statement to the policy, `RestrictPublicBuckets` takes effect on the bucket. It allows only AWS service principals and authorized users of the bucket owner's account to access the bucket.

As an example, suppose that a bucket owned by "Account-1" has a policy that contains the following:

1. A statement that grants access to AWS CloudTrail (which is an AWS service principal)

1. A statement that grants access to account "Account-2"

1. A statement that grants access to the public, for example by specifying `"Principal": "*"` with no limiting `Condition`

This policy qualifies as public because of the third statement. With this policy in place and `RestrictPublicBuckets` enabled, Amazon S3 allows access only by CloudTrail. Even though statement 2 isn't public, Amazon S3 disables access by "Account-2." This is because statement 3 renders the entire policy public, so `RestrictPublicBuckets` applies. As a result, Amazon S3 disables cross-account access, even though the policy delegates access to a specific account, "Account-2." But if you remove statement 3 from the policy, then the policy doesn't qualify as public, and `RestrictPublicBuckets` no longer applies. Thus, "Account-2" regains access to the bucket, even if you leave `RestrictPublicBuckets` enabled.
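The walk-through above can be condensed into a sketch. This is an illustrative model with hypothetical helper names, not the actual enforcement logic.

```python
def policy_is_public(statements) -> bool:
    # Minimal stand-in for the full rules described earlier: a grant to "*"
    # with no narrowing Condition makes the whole policy public.
    return any(s.get("Principal") == "*" and "Condition" not in s
               for s in statements)

def access_allowed(principal, statements, restrict_public_buckets=True) -> bool:
    if restrict_public_buckets and policy_is_public(statements):
        # Only AWS service principals and principals in the bucket
        # owner's account keep access.
        return principal["type"] in ("service", "bucket-owner-account")
    return any(s.get("Principal") in ("*", principal["name"])
               for s in statements)

policy = [
    {"Principal": "cloudtrail.amazonaws.com", "Effect": "Allow"},  # statement 1
    {"Principal": "Account-2", "Effect": "Allow"},                 # statement 2
    {"Principal": "*", "Effect": "Allow"},                         # statement 3
]
account_2 = {"type": "external-account", "name": "Account-2"}

print(access_allowed(account_2, policy))      # False: statement 3 makes the policy public
print(access_allowed(account_2, policy[:2]))  # True: without statement 3
```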

### Access points


Amazon S3 evaluates block public access settings slightly differently for access points compared to buckets. The rules that Amazon S3 applies to determine when an access point policy is public are generally the same for access points as for buckets, except in the following situations:
+ An access point that has a VPC network origin is always considered non-public, regardless of the contents of its access point policy.
+ An access point policy that grants access to a set of access points using `s3:DataAccessPointArn` is considered public. Note that this behavior is different from bucket policies. For example, a bucket policy that grants access to values of `s3:DataAccessPointArn` that match `arn:aws:s3:us-west-2:123456789012:accesspoint/*` is not considered public. However, the same statement in an access point policy would render the access point public.

## Using IAM Access Analyzer for S3 to review public buckets


You can use IAM Access Analyzer for S3 to review buckets with bucket ACLs, bucket policies, or access point policies that grant public access. IAM Access Analyzer for S3 alerts you to buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings that report the source and level of public or shared access. 

In IAM Access Analyzer for S3, you can block all public access to a bucket with a single click. You can also drill down into bucket-level permission settings to configure granular levels of access. For specific and verified use cases that require public or shared access, you can acknowledge and record your intent for the bucket to remain public or shared by archiving the findings for the bucket.

In rare cases, IAM Access Analyzer for S3 and Amazon S3 block public access evaluation might differ on whether a bucket is public. This behavior occurs because Amazon S3 block public access performs validation on the existence of actions in addition to evaluating public access. Suppose that the bucket policy contains an `Action` statement that allows public access for an action that isn't supported by Amazon S3 (for example, `s3:NotASupportedAction`). In this case, Amazon S3 block public access evaluates the bucket as public because such a statement could potentially make the bucket public if the action later becomes supported. In cases where Amazon S3 block public access and IAM Access Analyzer for S3 differ in their evaluations, we recommend reviewing the bucket policy and removing any unsupported actions.

For more information about IAM Access Analyzer for S3, see [Reviewing bucket access using IAM Access Analyzer for S3](access-analyzer.md).

## Permissions


To use Amazon S3 Block Public Access features, you must have the following permissions.


| Operation | Required permissions | 
| --- | --- | 
| GET bucket policy status | s3:GetBucketPolicyStatus | 
| GET bucket Block Public Access settings | s3:GetBucketPublicAccessBlock | 
| PUT bucket Block Public Access settings | s3:PutBucketPublicAccessBlock | 
| DELETE bucket Block Public Access settings | s3:PutBucketPublicAccessBlock | 
| GET account Block Public Access settings | s3:GetAccountPublicAccessBlock | 
| PUT account Block Public Access settings | s3:PutAccountPublicAccessBlock | 
| DELETE account Block Public Access settings | s3:PutAccountPublicAccessBlock | 
| PUT access point Block Public Access settings | s3:CreateAccessPoint | 

**Note**  
The `DELETE` operations require the same permissions as the `PUT` operations. There are no separate permissions for the `DELETE` operations.
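Putting the table together, an identity-based policy that grants all of these permissions might look like the following. This is an illustrative sketch only; in practice, scope `Resource` down to specific bucket and account ARNs where possible.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageBlockPublicAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketPolicyStatus",
                "s3:GetBucketPublicAccessBlock",
                "s3:PutBucketPublicAccessBlock",
                "s3:GetAccountPublicAccessBlock",
                "s3:PutAccountPublicAccessBlock",
                "s3:CreateAccessPoint"
            ],
            "Resource": "*"
        }
    ]
}
```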

## Configuring block public access


For more information about configuring block public access for your AWS account, your Amazon S3 buckets, and your access points, see the following topics:
+ [Configuring block public access settings for your account](configuring-block-public-access-account.md)
+ [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md)
+ [Performing block public access operations on an access point](#access-control-block-public-access-examples-access-point)

# Configuring block public access settings for your account
Configuring account settings

**Important**  
If your account is managed by an organization-level Block Public Access policy, you can't modify these account-level settings. Organization-level policies override account-level configurations. For more information about centralized management options, see [S3 policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_s3.html) in the *AWS Organizations User Guide*.

Amazon S3 Block Public Access provides settings for access points, buckets, organizations, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects do not allow public access. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

**Note**  
Account-level settings override settings on individual objects. Configuring your account to block public access overrides any public access settings made to individual objects within your account. When organization-level policies are active, account-level settings automatically inherit from the organization policy and can't be modified directly.

You can use the S3 console, AWS CLI, AWS SDKs, and REST API to configure block public access settings for all the buckets in your account, unless those settings are managed by an organization-level policy. For more information, see the following sections.

To configure block public access settings for your buckets, see [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md). For information about access points, see [Performing block public access operations on an access point](access-control-block-public-access.md#access-control-block-public-access-examples-access-point).

## Using the S3 console


Amazon S3 block public access prevents the application of any settings that allow public access to data within S3 buckets. This section describes how to edit block public access settings for all the S3 buckets in your AWS account. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

**To edit block public access settings for all the S3 buckets in an AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose **Block Public Access settings for this account**.

1. Choose **Edit** to change the block public access settings for all the buckets in your AWS account.

1. Choose the settings that you want to change, and then choose **Save changes**.

1. When you're asked for confirmation, enter **confirm**. Then choose **Confirm** to save your changes.

If you receive an error message that says, "This account does not allow changes to its account-level S3 Block Public Access settings due to an organizational S3 Block Public Access policy in effect," your account is managed by organization-level policies. Contact your organization administrator to modify these settings.

## Using the AWS CLI


You can use Amazon S3 Block Public Access through the AWS CLI. For more information about setting up and using the AWS CLI, see [What is the AWS Command Line Interface?](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) 

**Account**
+ To perform block public access operations on an account, use the AWS CLI service `s3control`. The account-level operations that use this service are as follows:
  + `PutPublicAccessBlock` (for an account)
  + `GetPublicAccessBlock` (for an account)
  + `DeletePublicAccessBlock` (for an account)

**Note**  
`PutPublicAccessBlock` and `DeletePublicAccessBlock` operations will return an "Access Denied" error when the account is managed by organization-level policies. Account-level `GetPublicAccessBlock` operations will return the enforced organization-level policy if present.

For additional information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/put-public-access-block.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/put-public-access-block.html) in the *AWS CLI Reference*.
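The configuration passed to `put-public-access-block` (through `--public-access-block-configuration`) is a JSON document with the four Block Public Access settings. For example, a configuration that enables all four settings looks like the following (a sketch; adjust the values to your needs).

```
{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
}
```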

## Using the AWS SDKs


------
#### [ Java ]

The following example shows you how to use Amazon S3 Block Public Access with the AWS SDK for Java to put a public access block configuration on an Amazon S3 account.

**Note**  
`PutPublicAccessBlock` and `DeletePublicAccessBlock` operations will fail with an "Access Denied" error if the account is managed by organization-level policies.

```
AWSS3ControlClientBuilder controlClientBuilder = AWSS3ControlClientBuilder.standard();
controlClientBuilder.setRegion(<region>);
controlClientBuilder.setCredentials(<credentials>);
					
AWSS3Control client = controlClientBuilder.build();
client.putPublicAccessBlock(new PutPublicAccessBlockRequest()
		.withAccountId(<account-id>)
		.withPublicAccessBlockConfiguration(new PublicAccessBlockConfiguration()
				.withIgnorePublicAcls(<value>)
				.withBlockPublicAcls(<value>)
				.withBlockPublicPolicy(<value>)
				.withRestrictPublicBuckets(<value>)));
```

**Important**  
This example pertains only to account-level operations, which use the `AWSS3Control` client class. For bucket-level operations, see [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md).

------
#### [ Other SDKs ]

For information about using the other AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

------

## Using the REST API


For information about using Amazon S3 Block Public Access through the REST APIs, see the following topics in the *Amazon Simple Storage Service API Reference*.
+ Account-level operations
  + [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html) - Fails when account is managed by organization policies
  + [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html) - Returns effective configuration including organization policies.
  + [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html) - Fails when account is managed by organization policies.

You'll see the following error message for restricted operations: "This account does not allow changes to its account-level S3 Block Public Access settings due to an organizational S3 Block Public Access policy in effect."

# Configuring block public access settings for your S3 buckets
Configuring bucket and access point settings

Amazon S3 Block Public Access provides settings for access points, buckets, organizations, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects do not allow public access. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

**Note**  
Bucket-level Block Public Access settings work alongside organization and account-level policies. S3 applies the most restrictive setting between bucket-level and effective account-level configurations (which may be enforced by organization policies if present).

You can use the S3 console, AWS CLI, AWS SDKs, and REST API to grant public access to one or more buckets. You can also block public access to buckets that are already public. For more information, see the following sections.

To configure block public access settings for every bucket in your account, see [Configuring block public access settings for your account](configuring-block-public-access-account.md). For organization-wide centralized management, see [S3 policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_s3.html) in the *AWS Organizations User Guide*.

For information about configuring block public access for access points, see [Performing block public access operations on an access point](access-control-block-public-access.md#access-control-block-public-access-examples-access-point).

# Using the S3 console
Editing bucket public access settings

Amazon S3 Block Public Access prevents the application of any settings that allow public access to data within S3 buckets. This section describes how to edit Block Public Access settings for one or more S3 buckets. For information about blocking public access using the AWS CLI, AWS SDKs, and the Amazon S3 REST APIs, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

You can see if your bucket is publicly accessible from the **Buckets** list, in the **IAM Access Analyzer** column. For more information, see [Reviewing bucket access using IAM Access Analyzer for S3](access-analyzer.md).

If you see an `Error` when you list your buckets and their public access settings, you might not have the required permissions. Check to make sure you have the following permissions added to your user or role policy:

```
s3:GetAccountPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:GetBucketPolicyStatus
s3:GetBucketLocation
s3:GetBucketAcl
s3:ListAccessPoints
s3:ListAllMyBuckets
```

In some rare cases, requests can also fail because of an AWS Region outage.

**To edit the Amazon S3 block public access settings for a single S3 bucket**

Follow these steps if you need to change the public access settings for a single S3 bucket.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Bucket name** list, choose the name of the bucket that you want.

1. Choose **Permissions**.

1. Choose **Edit** next to **Block public access (bucket settings)** to change the public access settings for the bucket. For more information about the four Amazon S3 Block Public Access Settings, see [Block public access settings](access-control-block-public-access.md#access-control-block-public-access-options).

1. Choose one of the settings, and then choose **Save changes**.

1. When you're asked for confirmation, enter **confirm**. Then choose **Confirm** to save your changes.

**Important**  
Even if you disable bucket-level Block Public Access settings, your bucket may still be protected by account-level or organization-level policies. S3 always applies the most restrictive combination of settings across all levels.

You can also change Amazon S3 Block Public Access settings when you create a bucket. For more information, see [Creating a general purpose bucket](create-bucket-overview.md). 

## Using the AWS CLI


To block public access on a bucket or to delete the public access block, use the AWS CLI service `s3api`. The bucket-level operations that use this service are as follows:
+ `PutPublicAccessBlock` (for a bucket)
+ `GetPublicAccessBlock` (for a bucket)
+ `DeletePublicAccessBlock` (for a bucket)
+ `GetBucketPolicyStatus`

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-public-access-block.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-public-access-block.html) in the *AWS CLI Reference*.
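As a quick check after changing settings, `get-bucket-policy-status` reports whether Amazon S3 currently evaluates the bucket policy as public. The response is a small JSON document; for a bucket with a public policy, illustrative output looks like the following.

```
{
    "PolicyStatus": {
        "IsPublic": true
    }
}
```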

**Note**  
These bucket-level operations are not restricted by organization-level policies. However, the effective public access behavior will still be governed by the most restrictive combination of bucket, account, and organization settings. For more information about the hierarchy and policy interactions, see [Using the S3 console](block-public-access-bucket.md).

## Using the AWS SDKs


------
#### [ Java ]

```
AmazonS3 client = AmazonS3ClientBuilder.standard()
	  .withCredentials(<credentials>)
	  .build();

client.setPublicAccessBlock(new SetPublicAccessBlockRequest()
		.withBucketName(<bucket-name>)
		.withPublicAccessBlockConfiguration(new PublicAccessBlockConfiguration()
				.withBlockPublicAcls(<value>)
				.withIgnorePublicAcls(<value>)
				.withBlockPublicPolicy(<value>)
				.withRestrictPublicBuckets(<value>)));
```

**Important**  
This example pertains only to bucket-level operations, which use the `AmazonS3` client class. For account-level operations, see [Configuring block public access settings for your account](configuring-block-public-access-account.md).

------
#### [ Other SDKs ]

For information about using the other AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

------

## Using the REST API


For information about using Amazon S3 Block Public Access through the REST APIs, see the following topics in the *Amazon Simple Storage Service API Reference*.
+ Bucket-level operations
  + [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html)
  + [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html)
  + [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html)
  + [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html)

# Reviewing bucket access using IAM Access Analyzer for S3
Reviewing bucket access

IAM Access Analyzer for S3 provides external access findings for your S3 general purpose buckets that are configured to allow access to anyone on the internet (public) or other AWS accounts, including AWS accounts outside of your organization. For each bucket that's shared publicly or with other AWS accounts, you receive findings that report the source and level of shared access. For example, IAM Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list (ACL), a bucket policy, a Multi-Region Access Point policy, or an access point policy. With these findings, you can take immediate and precise corrective action to restore your bucket access to what you intended. 

The Amazon S3 console presents an **External access summary** next to your list of general purpose buckets. In the summary, you can choose the active findings for each AWS Region to see the details of the findings on the IAM Access Analyzer for S3 page. External access findings in the **External access summary** are automatically updated once every 24 hours.

When reviewing a bucket that allows public access, on the IAM Access Analyzer for S3 page, you can block all public access to the bucket with a single click. We recommend that you block all public access to your buckets unless you require public access to support a specific use case. Before you block all public access, ensure that your applications will continue to work correctly without public access. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

You can also drill down into bucket-level permission settings to configure granular levels of access. For specific and verified use cases that require public access, such as static website hosting, public downloads, or cross-account sharing, you can acknowledge and record your intent for the bucket to remain public or shared by archiving the findings for the bucket. You can revisit and modify these bucket configurations at any time. You can also download your findings as a CSV report for auditing purposes.

IAM Access Analyzer for S3 is available at no extra cost on the Amazon S3 console. IAM Access Analyzer for S3 is powered by AWS Identity and Access Management (IAM) Access Analyzer. To use IAM Access Analyzer for S3 in the Amazon S3 console, you must visit the IAM console and create an external access analyzer on a per-Region basis.

For more information about IAM Access Analyzer, see [What is IAM Access Analyzer?](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) in the *IAM User Guide*. For more information about IAM Access Analyzer for S3, review the following sections.

**Important**  
IAM Access Analyzer for S3 requires an account-level analyzer in each AWS Region where you have buckets. To use IAM Access Analyzer for S3, you must visit IAM Access Analyzer and create an analyzer that has an account as the zone of trust. For more information, see [Enabling IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-enabling) in the *IAM User Guide*.
IAM Access Analyzer for S3 doesn't analyze the access point policy that's attached to cross-account access points. This behavior occurs because the access point and its policy are outside the zone of trust, that is, the account. Buckets that delegate access to a cross-account access point are listed under **Buckets with public access** if you haven't applied the `RestrictPublicBuckets` block public access setting to the bucket or account. When you apply the `RestrictPublicBuckets` block public access setting, the bucket is reported under **Buckets with access from other AWS accounts — including third-party AWS accounts**.
When a bucket policy or bucket ACL is added or modified, IAM Access Analyzer generates and updates findings based on the change within 30 minutes. Findings related to account-level block public access settings might not be generated or updated for up to six hours after you change the settings. Findings related to Multi-Region Access Points might not be generated or updated for up to six hours after the Multi-Region Access Point is created or deleted, or after you change its policy.

**Topics**
+ [Reviewing a global summary of policies that grant external access to buckets](#external-access-summary)
+ [Information provided by IAM Access Analyzer for S3](#access-analyzer-information-s3)
+ [Blocking all public access](#blocking-public-access-access-analyzer)
+ [Reviewing and changing bucket access](#changing-bucket-access)
+ [Archiving bucket findings](#archiving-buckets)
+ [Activating an archived bucket finding](#activating-buckets)
+ [Viewing finding details](#viewing-finding-details)
+ [Downloading an IAM Access Analyzer for S3 report](#downloading-bucket-report-s3)

## Reviewing a global summary of policies that grant external access to buckets
Reviewing external access with external access summary

You can use the **External access summary** to view a global summary of policies that grant external access to buckets across your AWS account directly from the S3 console. This summary helps you identify Amazon S3 general purpose buckets in any AWS Region that allow public access or access from other AWS accounts without needing to inspect policies in each AWS Region individually.

### Enabling external access summary and IAM Access Analyzer for S3


To use the **External access summary** and IAM Access Analyzer for S3, you must complete the following prerequisite steps.

1. Grant the required permissions. For more information, see [Permissions Required to use IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-permissions) in the *IAM User Guide*.

1. Visit IAM to create an account-level analyzer for each Region where you want to use IAM Access Analyzer.

   You can do this by creating an analyzer that has an account as the zone of trust in the IAM console or by using the AWS CLI or AWS SDKs. For more information, see [Enabling IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-enabling) in the *IAM User Guide*.

### Viewing buckets that allow external access


The **External access summary** displays findings and errors for external access that are provided by IAM Access Analyzer for S3 for general purpose buckets. Archived findings and findings for unused access aren't included in the summary, but you can view them in the IAM console or in IAM Access Analyzer for S3. For more information, see [View the IAM Access Analyzer findings dashboard](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-dashboard.html) in the *IAM User Guide*.

**Note**  
The **External access summary** only includes findings for external access analyzers for each of your AWS accounts, not your AWS Organization.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation panel, choose **General purpose buckets**.

1. Expand the **External access summary**. The console displays active public and cross-account access findings.
**Note**  
If S3 experiences an issue loading bucket details, refresh the general purpose buckets list or view findings in IAM Access Analyzer for S3. For more information, see [Reviewing bucket access using IAM Access Analyzer for S3](#access-analyzer).

1. To view a list of findings or errors for an AWS Region, choose the link to the Region. The IAM Access Analyzer for S3 page displays names of buckets that can be accessed publicly or by other AWS accounts. For more information, see [Information provided by IAM Access Analyzer for S3](#access-analyzer-information-s3).

### Updating access controls for buckets that allow external access


1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation panel, choose **General purpose buckets**.

1. Expand the **External access summary**. The console displays active findings for buckets that can be accessed publicly or by other AWS accounts.
**Note**  
If S3 experiences an issue loading bucket details, refresh the general purpose buckets list or view findings in IAM Access Analyzer for S3. For more information, see [Reviewing bucket access using IAM Access Analyzer for S3](#access-analyzer).

1. To view a list of findings or errors for an AWS Region, choose the link to the Region. The IAM Access Analyzer for S3 displays active findings for buckets that can be accessed publicly or by other AWS accounts.
**Note**  
External access findings in the **External access summary** are automatically updated once every 24 hours.

1. To block all public access for a bucket, see [Blocking all public access](#blocking-public-access-access-analyzer). To change the bucket access, see [Reviewing and changing bucket access](#changing-bucket-access).

## Information provided by IAM Access Analyzer for S3


IAM Access Analyzer for S3 provides findings for buckets that can be accessed outside your AWS account. Buckets that are listed under **Public access findings** can be accessed by anyone on the internet. If IAM Access Analyzer for S3 identifies public buckets, you also see a warning at the top of the page that shows you the number of public buckets in your Region. Buckets listed under **Cross-account access findings** are shared conditionally with other AWS accounts, including accounts outside of your organization. 

For each bucket, IAM Access Analyzer for S3 provides the following information:
+ **Bucket name**
+ **Shared through** ‐ How the bucket is shared—through a bucket policy, a bucket ACL, a Multi-Region Access Point policy, or an access point policy. Multi-Region Access Points and cross-account access points are reflected under access points. A bucket can be shared through both policies and ACLs. If you want to find and review the source for your bucket access, you can use the information in this column as a starting point for taking immediate and precise corrective action. 
+ **Status** ‐ The status of the bucket finding. IAM Access Analyzer for S3 displays findings for all public and shared buckets. 
  + **Active** ‐ Finding has not been reviewed. 
  + **Archived** ‐ Finding has been reviewed and confirmed as intended. 
  + **All** ‐ All findings for buckets that are public or shared with other AWS accounts, including AWS accounts outside of your organization.
+ **Access level** ‐ Access permissions granted for the bucket:
  + **List** ‐ List resources.
  + **Read** ‐ Read but not edit resource contents and attributes.
  + **Write** ‐ Create, delete, or modify resources.
  + **Permissions** ‐ Grant or modify resource permissions.
  + **Tagging** ‐ Update tags associated with the resource.
+ **External principal** ‐ The AWS account outside of your organization with access to the bucket.
+ **Resource control policy (RCP) restriction** ‐ The resource control policy (RCP) that applies to the bucket, if applicable. For more information, see [Resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html).

## Blocking all public access


If you want to block all access to a bucket in a single click, you can use the **Block all public access** button in IAM Access Analyzer for S3. When you block all public access to a bucket, no public access is granted. We recommend that you block all public access to your buckets unless you require public access to support a specific and verified use case. Before you block all public access, ensure that your applications will continue to work correctly without public access.

If you don't want to block all public access to your bucket, you can edit your block public access settings on the Amazon S3 console to configure granular levels of access to your buckets. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

In rare cases, IAM Access Analyzer for S3 and Amazon S3 block public access evaluation might differ on whether a bucket is public. This behavior occurs because Amazon S3 block public access performs validation on the existence of actions in addition to evaluating public access. Suppose that the bucket policy contains an `Action` statement that allows public access for an action that isn't supported by Amazon S3 (for example, `s3:NotASupportedAction`). In this case, Amazon S3 block public access evaluates the bucket as public because such a statement could potentially make the bucket public if the action later becomes supported. In cases where Amazon S3 block public access and IAM Access Analyzer for S3 differ in their evaluations, we recommend reviewing the bucket policy and removing any unsupported actions.
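To illustrate, a bucket policy statement like the following sketch (using the placeholder action name from the example above, with a placeholder bucket name) would cause Amazon S3 Block Public Access to evaluate the bucket as public, even though the action grants no usable access today:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:NotASupportedAction",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
        }
    ]
}
```

Removing the unsupported action from the statement resolves the discrepancy between the two evaluations.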

**To block all public access to a bucket using IAM Access Analyzer for S3**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane on the left, under **Dashboards**, choose **Access analyzer for S3**.

1. In IAM Access Analyzer for S3, choose a bucket.

1. Choose **Block all public access**.

1. To confirm your intent to block all public access to the bucket, in **Block all public access (bucket settings)**, enter **confirm**.

   Amazon S3 blocks all public access to your bucket. The status of the bucket finding updates to **resolved**, and the bucket disappears from the IAM Access Analyzer for S3 listing. If you want to review resolved buckets, open IAM Access Analyzer on the [IAM Console](https://console.aws.amazon.com/iam/).

## Reviewing and changing bucket access


If you did not intend to grant access to the public or other AWS accounts, including accounts outside of your organization, you can modify the bucket ACL, bucket policy, the Multi-Region Access Point policy, or the access point policy to remove the access to the bucket. The **Shared through** column shows all sources of bucket access: bucket policy, bucket ACL, and/or access point policy. Multi-Region Access Points and cross-account access points are reflected under access points.

**To review and change a bucket policy, a bucket ACL, a Multi-Region Access Point, or an access point policy**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Access analyzer for S3**.

1. To see whether public access or shared access is granted through a bucket policy, a bucket ACL, a Multi-Region Access Point policy, or an access point policy, look in the **Shared through** column.

1. Under **Buckets**, choose the name of the bucket with the bucket policy, bucket ACL, Multi-Region Access Point policy, or access point policy that you want to change or review.

1. If you want to change or view a bucket ACL:

   1. Choose **Permissions**.

   1. Choose **Access Control List**.

   1. Review your bucket ACL, and make changes as required.

      For more information, see [Configuring ACLs](managing-acls.md).

1. If you want to change or review a bucket policy:

   1. Choose **Permissions**.

   1. Choose **Bucket Policy**.

   1. Review or change your bucket policy as required.

      For more information, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md).

1. If you want to change or view a Multi-Region Access Point policy:

   1. Choose **Multi-Region Access Point**.

   1. Choose the Multi-Region Access Point name.

   1. Review or change your Multi-Region Access Point policy as required.

      For more information, see [Permissions](MultiRegionAccessPointPermissions.md).

1. If you want to review or change an access point policy:

   1. Choose **Access Points for general purpose buckets** or **Access Points for directory buckets**.

   1. Choose the access point name.

   1. Review or change access as required. 

      For more information, see [Managing your Amazon S3 access points for general purpose buckets](access-points-manage.md).

   If you edit or remove a bucket ACL, a bucket policy, or an access point policy to remove public or shared access, the status for the bucket findings updates to resolved. The resolved bucket findings disappear from the IAM Access Analyzer for S3 listing, but you can view them in IAM Access Analyzer.

## Archiving bucket findings


If a bucket grants access to the public or other AWS accounts, including accounts outside of your organization, to support a specific use case (for example, a static website, public downloads, or cross-account sharing), you can archive the finding for the bucket. When you archive bucket findings, you acknowledge and record your intent for the bucket to remain public or shared. Archived bucket findings remain in your IAM Access Analyzer for S3 listing so that you always know which buckets are public or shared.

**To archive bucket findings in IAM Access Analyzer for S3**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Access analyzer for S3**.

1. In IAM Access Analyzer for S3, choose an active bucket.

1. To acknowledge your intent for this bucket to be accessed by the public or other AWS accounts, including accounts outside of your organization, choose **Archive**.

1. Enter **confirm**, and choose **Archive**.

## Activating an archived bucket finding


After you archive findings, you can always revisit them and change their status back to active, indicating that the bucket requires another review. 

**To activate an archived bucket finding in IAM Access Analyzer for S3**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Access analyzer for S3**.

1. Choose the archived bucket finding.

1. Choose **Mark as active**.

## Viewing finding details


If you need to see more information about a finding, you can open the bucket finding details in IAM Access Analyzer on the [IAM Console](https://console.aws.amazon.com/iam/).

**To view finding details in IAM Access Analyzer for S3**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Access analyzer for S3**.

1. In IAM Access Analyzer for S3, choose a bucket.

1. Choose **View details**.

   The finding details open in IAM Access Analyzer on the [IAM Console](https://console.aws.amazon.com/iam/). 

## Downloading an IAM Access Analyzer for S3 report


You can download your bucket findings as a CSV report that you can use for auditing purposes. The report includes the same information that you see in IAM Access Analyzer for S3 on the Amazon S3 console.

**To download a report**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane on the left, choose **Access analyzer for S3**.

1. In the Region filter, choose the Region.

   IAM Access Analyzer for S3 updates to show buckets for the chosen Region.

1. Choose **Download report**.

   A CSV report is generated and saved to your computer.

# Verifying bucket ownership with bucket owner condition
Verifying bucket ownership

Amazon S3 bucket owner condition ensures that the buckets you use in your S3 operations belong to the AWS accounts that you expect.

Most S3 operations read from or write to specific S3 buckets. These operations include uploading, copying, and downloading objects, retrieving or modifying bucket configurations, and retrieving or modifying object configurations. When you perform these operations, you specify the bucket that you want to use by including its name with the request. For example, to retrieve an object from S3, you make a request that specifies the name of a bucket and the object key to retrieve from that bucket.

Because Amazon S3 identifies buckets based on their names, an application that uses an incorrect bucket name in a request could inadvertently perform operations against a different bucket than expected. To help avoid unintentional bucket interactions in situations like this, you can use *bucket owner condition*. Bucket owner condition enables you to verify that the target bucket is owned by the expected AWS account, providing an additional layer of assurance that your S3 operations are having the effects you intend.

**Topics**
+ [When to use bucket owner condition](#bucket-owner-condition-when-to-use)
+ [Verifying a bucket owner](#bucket-owner-condition-use)
+ [Examples](#bucket-owner-condition-examples)
+ [Restrictions and limitations](#bucket-owner-condition-restrictions-limitations)

## When to use bucket owner condition


We recommend using bucket owner condition whenever you perform a supported S3 operation and know the account ID of the expected bucket owner. Bucket owner condition is available for all S3 object operations and most S3 bucket operations. For a list of S3 operations that don't support bucket owner condition, see [Restrictions and limitations](#bucket-owner-condition-restrictions-limitations).

To see the benefit of using bucket owner condition, consider the following scenario involving AWS customer Bea:

1. Bea develops an application that uses Amazon S3. During development, Bea uses her testing-only AWS account to create a bucket named `bea-data-test`, and configures her application to make requests to `bea-data-test`.

1. Bea deploys her application, but forgets to reconfigure the application to use a bucket in her production AWS account.

1. In production, Bea's application makes requests to `bea-data-test`, which succeed. This results in production data being written to the bucket in Bea's test account.

Bea can help protect against situations like this by using bucket owner condition. With bucket owner condition, Bea can include the AWS account ID of the expected bucket owner in her requests. Amazon S3 then checks the account ID of the bucket owner before processing each request. If the actual bucket owner doesn't match the expected bucket owner, the request fails.

If Bea uses bucket owner condition, the scenario described earlier won't result in Bea's application inadvertently writing to a test bucket. Instead, the requests that her application makes at step 3 will fail with an `Access Denied` error message. By using bucket owner condition, Bea helps eliminate the risk of accidentally interacting with buckets in the wrong AWS account.

## Verifying a bucket owner


To use bucket owner condition, you include a parameter with your request that specifies the expected bucket owner. Most S3 operations involve only a single bucket, and require only this single parameter to use bucket owner condition. For `CopyObject` operations, this first parameter specifies the expected owner of the destination bucket, and you include a second parameter to specify the expected owner of the source bucket.

When you make a request that includes a bucket owner condition parameter, S3 checks the account ID of the bucket owner against the specified parameter before processing the request. If the parameter matches the bucket owner's account ID, S3 processes the request. If the parameter doesn't match the bucket owner's account ID, the request fails with an `Access Denied` error message.
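The check described above can be sketched as a small local simulation. This is illustrative Python, not a real AWS call; the bucket-to-owner mapping and the names below are hypothetical:

```python
# Hypothetical mapping of bucket names to owning AWS account IDs.
BUCKET_OWNERS = {
    "bea-data-test": "111122223333",   # test account
    "bea-data-prod": "444455556666",   # production account
}

class AccessDenied(Exception):
    """Stands in for the Access Denied error that S3 returns on a mismatch."""

def check_expected_owner(bucket, expected_bucket_owner=None):
    """Simulate S3's evaluation of the expected-bucket-owner parameter."""
    if expected_bucket_owner is None:
        return  # no condition supplied; the request proceeds as usual
    if BUCKET_OWNERS.get(bucket) != expected_bucket_owner:
        raise AccessDenied(
            f"Access Denied: {bucket} is not owned by account {expected_bucket_owner}"
        )

# A request against the test bucket that expects the production owner fails:
try:
    check_expected_owner("bea-data-test", expected_bucket_owner="444455556666")
except AccessDenied as e:
    print(e)
```

The real check happens server-side in Amazon S3; this sketch only mirrors the decision rule (match processes the request, mismatch fails it).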

You can use bucket owner condition with the AWS Command Line Interface (AWS CLI), AWS SDKs, and Amazon S3 REST APIs. When using bucket owner condition with the AWS CLI and Amazon S3 REST APIs, use the following parameter names.



| Access method | Parameter for non-copy operations | Copy operation source parameter | Copy operation destination parameter | 
| --- | --- | --- | --- | 
| AWS CLI | --expected-bucket-owner | --expected-source-bucket-owner | --expected-bucket-owner | 
| Amazon S3 REST APIs | x-amz-expected-bucket-owner header | x-amz-source-expected-bucket-owner header | x-amz-expected-bucket-owner header | 

The parameter names that are required to use bucket owner condition with the AWS SDKs vary depending on the language. To determine the required parameters, see the SDK documentation for your desired language. You can find the SDK documentation at [Tools to Build on AWS](https://aws.amazon.com/tools/).

## Examples


The following examples show how you can implement bucket owner condition in Amazon S3 using the AWS CLI or the AWS SDK for Java 2.x.

**Example: Upload an object**  
The following example uploads an object to S3 bucket `amzn-s3-demo-bucket1`, using bucket owner condition to ensure that `amzn-s3-demo-bucket1` is owned by AWS account `111122223333`.  

```
aws s3api put-object \
    --bucket amzn-s3-demo-bucket1 \
    --key exampleobject \
    --body example_file.txt \
    --expected-bucket-owner 111122223333
```

```
public void putObjectExample() {
    S3Client s3Client = S3Client.create();
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket("amzn-s3-demo-bucket1")
            .key("exampleobject")
            .expectedBucketOwner("111122223333")
            .build();
    Path path = Paths.get("example_file.txt");
    s3Client.putObject(request, path);
}
```

**Example: Copy an object**  
The following example copies the object `object1` from S3 bucket `amzn-s3-demo-bucket1` to S3 bucket `amzn-s3-demo-bucket2`. It uses bucket owner condition to ensure that the buckets are owned by the expected accounts according to the following table.   



| Bucket | Expected owner | 
| --- | --- | 
| amzn-s3-demo-bucket1 | 111122223333 | 
| amzn-s3-demo-bucket2 | 444455556666 | 

```
aws s3api copy-object \
    --copy-source amzn-s3-demo-bucket1/object1 \
    --bucket amzn-s3-demo-bucket2 \
    --key object1copy \
    --expected-source-bucket-owner 111122223333 \
    --expected-bucket-owner 444455556666
```

```
public void copyObjectExample() {
    S3Client s3Client = S3Client.create();
    CopyObjectRequest request = CopyObjectRequest.builder()
            .copySource("amzn-s3-demo-bucket1/object1")
            .destinationBucket("amzn-s3-demo-bucket2")
            .destinationKey("object1copy")
            .expectedSourceBucketOwner("111122223333")
            .expectedBucketOwner("444455556666")
            .build();
    s3Client.copyObject(request);
}
```

**Example: Retrieve a bucket policy**  
The following example retrieves the access policy for S3 bucket `amzn-s3-demo-bucket1`, using bucket owner condition to ensure that `amzn-s3-demo-bucket1` is owned by AWS account `111122223333`.  

```
aws s3api get-bucket-policy --bucket amzn-s3-demo-bucket1 --expected-bucket-owner 111122223333
```

```
public void getBucketPolicyExample() {
    S3Client s3Client = S3Client.create();
    GetBucketPolicyRequest request = GetBucketPolicyRequest.builder()
            .bucket("amzn-s3-demo-bucket1")
            .expectedBucketOwner("111122223333")
            .build();
    try {
        GetBucketPolicyResponse response = s3Client.getBucketPolicy(request);
    }
    catch (S3Exception e) {
        // The call was transmitted successfully, but Amazon S3 couldn't process 
        // it, so it returned an error response.
        e.printStackTrace();
    }
}
```

## Restrictions and limitations


Amazon S3 bucket owner condition has the following restrictions and limitations:
+ The value of the bucket owner condition parameter must be an AWS account ID (12-digit numeric value). Service principals aren't supported. 
+ Bucket owner condition isn't available for [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html), [ListBuckets](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html), or any of the operations included in [AWS S3 Control](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations_AWS_S3_Control.html). Amazon S3 ignores any bucket owner condition parameters included with requests to these operations. 
+ Bucket owner condition only verifies that the account specified in the verification parameter owns the bucket. Bucket owner condition doesn't check the configuration of the bucket. It also doesn't guarantee that the bucket's configuration meets any specific conditions or matches any past state. 

# Controlling ownership of objects and disabling ACLs for your bucket
Controlling object ownership

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to control ownership of objects uploaded to your bucket and to disable or enable [access control lists (ACLs)](acl-overview.md). By default, Object Ownership is set to the Bucket owner enforced setting and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to data exclusively using access management policies.

A majority of modern use cases in Amazon S3 no longer require the use of ACLs, and we recommend that you keep ACLs disabled except in circumstances where you must control access for each object individually. With ACLs disabled, you can use policies to more easily control access to every object in your bucket, regardless of who uploaded the objects in your bucket. 

Object Ownership has three settings that you can use to control ownership of objects uploaded to your bucket and to disable or enable ACLs:

**ACLs disabled**
+ **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. The bucket uses policies to define access control.

**ACLs enabled**
+ **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 
+ **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.

For the majority of modern use cases in S3, we recommend that you keep ACLs disabled by applying the Bucket owner enforced setting and using your bucket policy to share data with users outside of your account as needed. This approach simplifies permissions management. You can disable ACLs on both newly created and already existing buckets. For newly created buckets, ACLs are disabled by default. In the case of an existing bucket that already has objects in it, after you disable ACLs, the object and bucket ACLs are no longer part of an access evaluation, and access is granted or denied on the basis of policies. For existing buckets, you can re-enable ACLs at any time after you disable them, and your preexisting bucket and object ACLs are restored.

Before you disable ACLs, we recommend that you review your bucket policy to ensure that it covers all the ways that you intend to grant access to your bucket outside of your account. After you disable ACLs, your bucket accepts only `PUT` requests that do not specify an ACL or `PUT` requests with bucket owner full control ACLs, for example, the `bucket-owner-full-control` canned ACL or equivalent forms of this ACL expressed in XML. Existing applications that support bucket owner full control ACLs see no impact. `PUT` requests that contain other ACLs (for example, custom grants to certain AWS accounts) fail and return a `400` error with the error code `AccessControlListNotSupported`. 
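The acceptance rule in the preceding paragraph can be sketched as a small local simulation. This is illustrative Python, not the S3 implementation, and it models only the canned-ACL case (not the equivalent XML forms):

```python
# Illustrative sketch of the upload-acceptance rule for a bucket with ACLs
# disabled (Bucket owner enforced): a PUT succeeds only if it specifies no ACL
# or a bucket owner full control ACL. Any other ACL fails with a 400 error and
# the AccessControlListNotSupported error code.

def accepts_upload(acl=None):
    """Return True if a PUT request with the given canned ACL would succeed."""
    return acl is None or acl == "bucket-owner-full-control"

assert accepts_upload()                               # no ACL specified
assert accepts_upload("bucket-owner-full-control")    # bucket owner full control
assert not accepts_upload("public-read")              # any other ACL is rejected
```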

In contrast, a bucket with the Bucket owner preferred setting continues to accept and honor bucket and object ACLs. With this setting, new objects that are written with the `bucket-owner-full-control` canned ACL are automatically owned by the bucket owner rather than the object writer. All other ACL behaviors remain in place. To require all Amazon S3 `PUT` operations to include the `bucket-owner-full-control` canned ACL, you can [add a bucket policy](ensure-object-ownership.md#ensure-object-ownership-bucket-policy) that allows only object uploads using this ACL.

To see which Object Ownership settings are applied to your buckets, you can use Amazon S3 Storage Lens metrics. S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. For more information, see [Using S3 Storage Lens to find Object Ownership settings](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-access-management.html?icmpid=docs_s3_user_guide_about-object-ownership.html).

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

## Object Ownership settings


This table shows the impact that each Object Ownership setting has on ACLs, objects, object ownership, and object uploads. 


| Setting | Applies to | Effect on object ownership | Effect on ACLs | Uploads accepted | 
| --- | --- | --- | --- | --- | 
| Bucket owner enforced (default) | All new and existing objects | Bucket owner owns every object. |  ACLs are disabled and no longer affect access permissions to your bucket. Requests to set or update ACLs fail. However, requests to read ACLs are supported.  Bucket owner has full ownership and control. Object writer no longer has full ownership and control.  | Uploads with bucket owner full control ACLs or uploads that don't specify an ACL | 
| Bucket owner preferred | New objects | If an object upload includes the bucket-owner-full-control canned ACL, the bucket owner owns the object. Objects uploaded with other ACLs are owned by the writing account. |  ACLs can be updated and can grant permissions. If an object upload includes the `bucket-owner-full-control` canned ACL, the bucket owner has full control access, and the object writer no longer has full control access.  | All uploads | 
| Object writer | New objects | Object writer owns the object. |  ACLs can be updated and can grant permissions. Object writer has full control access.  | All uploads | 

## Changes introduced by disabling ACLs


When the Bucket owner enforced setting for Object Ownership is applied, ACLs are disabled and you automatically own and take full control over every object in the bucket without taking any additional actions. Bucket owner enforced is the default setting for all newly created buckets. After the Bucket owner enforced setting is applied, you will see three changes: 
+ All bucket ACLs and object ACLs are disabled, which gives full access to you, as the bucket owner. When you perform a read ACL request on your bucket or object, you will see that full access is given only to the bucket owner.
+ You, as the bucket owner, automatically own and have full control over every object in your bucket.
+ ACLs no longer affect access permissions to your bucket. As a result, access control for your data is based on policies, such as AWS Identity and Access Management (IAM) [identity-based policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security_iam_id-based-policy-examples.html), Amazon S3 [bucket policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html), VPC endpoint policies, and Organizations [service control policies (SCPs)](https://docs.aws.amazon.com//organizations/latest/userguide/orgs_manage_policies_scps.html) or [resource control policies (RCPs)](https://docs.aws.amazon.com//organizations/latest/userguide/orgs_manage_policies_rcps.html).

![\[Diagram showing what happens when you apply the Bucket owner enforced setting to disable ACLs.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/bucket-owner-enforced.png)
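With ACLs disabled, sharing happens through policies instead. As a hypothetical illustration (the bucket name and account ID are placeholders), a bucket policy like the following grants another AWS account read access to objects in the bucket:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::444455556666:root" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```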


If you use S3 Versioning, the bucket owner owns and has full control over all object versions in your bucket. Applying the Bucket owner enforced setting does not add a new version of an object.

New objects can be uploaded to your bucket only if they use bucket owner full control ACLs or don't specify an ACL. Object uploads fail if they specify any other ACL. For more information, see [Troubleshooting](object-ownership-error-responses.md).

Because the following example `PutObject` operation using the AWS Command Line Interface (AWS CLI) includes the `bucket-owner-full-control` canned ACL, the object can be uploaded to a bucket with disabled ACLs.

```
aws s3api put-object --bucket amzn-s3-demo-bucket --key key-name --body path-to-file --acl bucket-owner-full-control
```

Because the following `PutObject` operation doesn't specify an ACL, it also succeeds for a bucket with disabled ACLs.

```
aws s3api put-object --bucket amzn-s3-demo-bucket --key key-name --body path-to-file
```

**Note**  
If other AWS accounts need access to objects after uploading, you must grant additional permissions to those accounts through bucket policies. For more information, see [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md).

**Re-enabling ACLs**  
You can re-enable ACLs by changing from the Bucket owner enforced setting to another Object Ownership setting at any time. If you used object ACLs for permissions management before you applied the Bucket owner enforced setting, and you didn't migrate those ACL permissions to your bucket policy, re-enabling ACLs restores them. Additionally, objects written to the bucket while the Bucket owner enforced setting was applied are still owned by the bucket owner. 

For example, if you change from the Bucket owner enforced setting back to the Object writer setting, you, as the bucket owner, no longer own and have full control over objects that were previously owned by other AWS accounts. Instead, the uploading accounts again own these objects. Objects owned by other accounts use ACLs for permissions, so you can't use policies to grant permissions to these objects. However, you, as the bucket owner, still own any objects that were written to the bucket while the Bucket owner enforced setting was applied. These objects are not owned by the object writer, even if you re-enable ACLs.

For instructions on enabling and managing ACLs using the AWS Management Console, AWS Command Line Interface (CLI), REST API, or AWS SDKs, see [Configuring ACLs](managing-acls.md).

## Prerequisites for disabling ACLs


Before you disable ACLs for an existing bucket, complete the following prerequisites.
+ [Review bucket and object ACLs and migrate ACL permissions](object-ownership-migrating-acls-prerequisites.md#object-ownership-acl-permissions)
+ [Identify requests that required an ACL for authorization](object-ownership-migrating-acls-prerequisites.md#object-ownership-acl-identify)
+ [Review and update bucket policies that use ACL-related condition keys](object-ownership-migrating-acls-prerequisites.md#object-ownership-bucket-policies)
## Object Ownership permissions


To apply, update, or delete an Object Ownership setting for a bucket, you need the `s3:PutBucketOwnershipControls` permission. To return the Object Ownership setting for a bucket, you need the `s3:GetBucketOwnershipControls` permission. For more information, see [Setting Object Ownership when you create a bucket](object-ownership-new-bucket.md) and [Viewing the Object Ownership setting for an S3 bucket](object-ownership-retrieving.md).

## Disabling ACLs for all new buckets


By default, all new buckets are created with the Bucket owner enforced setting applied and ACLs are disabled. We recommend keeping ACLs disabled. As a general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM policies for access control instead of ACLs. Policies are a simplified and more flexible access control option. With bucket policies and access point policies, you can define rules that apply broadly across all requests to your Amazon S3 resources. 

## Replication and Object Ownership


When you use S3 replication and the source and destination buckets are owned by different AWS accounts, you can disable ACLs (with the Bucket owner enforced setting for Object Ownership) to change replica ownership to the AWS account that owns the destination bucket. This setting mimics the existing owner override behavior without the need for the `s3:ObjectOwnerOverrideToBucketOwner` permission. All objects that are replicated to the destination bucket with the Bucket owner enforced setting are owned by the destination bucket owner. For more information about the owner override option for replication configurations, see [Changing the replica owner](replication-change-owner.md). 
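For illustration, a minimal cross-account replication configuration targeting a destination bucket with the Bucket owner enforced setting can omit the `AccessControlTranslation` element entirely, because replicas are owned by the destination bucket owner automatically. This is a sketch; the role and bucket names are example values.

```
{
    "Role": "arn:aws:iam::111122223333:role/EXAMPLE-REPLICATION-ROLE",
    "Rules": [
        {
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            }
        }
    ]
}
```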

## Setting Object Ownership


You can apply an Object Ownership setting by using the Amazon S3 console, AWS CLI, AWS SDKs, Amazon S3 REST API, or AWS CloudFormation. The following REST API and AWS CLI commands support Object Ownership:


| REST API | AWS CLI | Description | 
| --- | --- | --- | 
| [PutBucketOwnershipControls](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html) | [put-bucket-ownership-controls](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-ownership-controls.html) | Creates or modifies the Object Ownership setting for an existing S3 bucket. | 
| [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html) | [create-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) | Creates a bucket, using the `x-amz-object-ownership` request header to specify the Object Ownership setting. | 
| [GetBucketOwnershipControls](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html) | [get-bucket-ownership-controls](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-ownership-controls.html) | Retrieves the Object Ownership setting for an Amazon S3 bucket. | 
| [DeleteBucketOwnershipControls](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketOwnershipControls.html) | [delete-bucket-ownership-controls](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-bucket-ownership-controls.html) | Deletes the Object Ownership setting for an Amazon S3 bucket. | 
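For example, you could apply and then verify the Bucket owner enforced setting with the AWS CLI as follows. This is a sketch; replace the bucket name with your own.

```
# Apply the Bucket owner enforced setting to an existing bucket.
aws s3api put-bucket-ownership-controls \
    --bucket amzn-s3-demo-bucket \
    --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'

# Confirm the setting that is now in effect.
aws s3api get-bucket-ownership-controls --bucket amzn-s3-demo-bucket
```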

For more information about applying and working with Object Ownership settings, see the following topics.

**Topics**
+ [Object Ownership settings](#object-ownership-overview)
+ [Changes introduced by disabling ACLs](#object-ownership-changes)
+ [Prerequisites for disabling ACLs](#object-ownership-considerations)
+ [Object Ownership permissions](#object-ownership-permissions)
+ [Disabling ACLs for all new buckets](#requiring-bucket-owner-enforced)
+ [Replication and Object Ownership](#object-ownership-replication)
+ [Setting Object Ownership](#object-ownership-setting)
+ [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md)
+ [Setting Object Ownership when you create a bucket](object-ownership-new-bucket.md)
+ [Setting Object Ownership on an existing bucket](object-ownership-existing-bucket.md)
+ [Viewing the Object Ownership setting for an S3 bucket](object-ownership-retrieving.md)
+ [Disabling ACLs for all new buckets and enforcing Object Ownership](ensure-object-ownership.md)
+ [Troubleshooting](object-ownership-error-responses.md)

# Prerequisites for disabling ACLs


A bucket access control list (ACL) in Amazon S3 is a mechanism that allows you to define granular permissions for individual objects within an S3 bucket, specifying which AWS accounts or groups can access and modify those objects. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you use AWS Identity and Access Management (IAM) and bucket policies to manage access, and to keep ACLs disabled, except in circumstances where you need to control access for each object individually.

If you have ACLs enabled on your bucket, before you disable ACLs, complete the following prerequisites:

**Topics**
+ [Review bucket and object ACLs and migrate ACL permissions](#object-ownership-acl-permissions)
+ [Identify requests that required an ACL for authorization](#object-ownership-acl-identify)
+ [Review and update bucket policies that use ACL-related condition keys](#object-ownership-bucket-policies)
+ [Example use cases](#object-ownership-migrating-acls)

## Review bucket and object ACLs and migrate ACL permissions


When you disable ACLs, permissions granted by bucket and object ACLs no longer affect access. Before you disable ACLs, review your bucket and object ACLs. 

Each of your existing bucket and object ACLs has an equivalent in an IAM policy. The following bucket policy examples show you how `READ` and `WRITE` permissions for bucket and object ACLs map to IAM permissions. For more information about how each ACL translates to IAM permissions, see [Mapping of ACL permissions and access policy permissions](acl-overview.md#acl-access-policy-permission-mapping).
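To illustrate the mapping, the following Python sketch builds a bucket-policy statement from an ACL grant. The mapping table and the `policy_statement` helper are illustrative only (they are not part of any AWS SDK), and they cover just the `READ` and `WRITE` translations used in this topic.

```python
# Illustrative helper (not part of any AWS SDK): build a bucket-policy
# statement that is equivalent to an ACL grant, using the READ/WRITE
# translations described in this guide.

ACL_TO_ACTIONS = {
    ("bucket", "READ"): [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:ListBucketMultipartUploads",
    ],
    ("bucket", "WRITE"): ["s3:PutObject"],
    ("object", "READ"): ["s3:GetObject", "s3:GetObjectVersion"],
}

def policy_statement(account_id, bucket, scope, permission):
    """Return the bucket-policy statement equivalent of one ACL grant."""
    resource = f"arn:aws:s3:::{bucket}"
    if scope == "object":
        resource += "/*"  # object-level grants apply to every object
    return {
        "Effect": "Allow",
        "Principal": {"AWS": [f"arn:aws:iam::{account_id}:root"]},
        "Action": ACL_TO_ACTIONS[(scope, permission)],
        "Resource": resource,
    }

stmt = policy_statement("111122223333", "amzn-s3-demo-bucket", "object", "READ")
print(stmt["Resource"])  # arn:aws:s3:::amzn-s3-demo-bucket/*
```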

Before you disable ACLs:
+ If your bucket ACL grants access outside of your AWS account, you must first migrate your bucket ACL permissions to your bucket policy.
+ Next, reset your bucket ACL to the default private ACL.
+ We also recommend that you review your object-level ACL permissions and migrate them to your bucket policy.

If your bucket ACL grants read or write access outside of your AWS account, you must migrate these permissions to your bucket policy and reset your bucket ACL to the default private ACL before you can apply the Bucket owner enforced setting for **Object Ownership**. Otherwise, your request to apply the Bucket owner enforced setting fails and returns the [InvalidBucketAclWithObjectOwnership](object-ownership-error-responses.md#object-ownership-error-responses-invalid-acl) error code.

To review and migrate ACL permissions to bucket policies, see the following topics.

**Topics**
+ [Bucket policies examples](#migrate-acl-permissions-bucket-policies)
+ [Using the S3 console to review and migrate ACL permissions](#review-migrate-acl-console)
+ [Using the AWS CLI to review and migrate ACL permissions](#review-migrate-acl-cli)

### Bucket policies examples


These example bucket policies show you how to migrate `READ` and `WRITE` bucket and object ACL permissions for a third-party AWS account to a bucket policy. `READ_ACP` and `WRITE_ACP` ACLs are less relevant for policies because they grant ACL-related permissions (`s3:GetBucketAcl`, `s3:GetObjectAcl`, `s3:PutBucketAcl`, and `s3:PutObjectAcl`).

**Example – `READ` ACL for a bucket**  
If your bucket has a `READ` ACL that grants AWS account `111122223333` permission to list the contents of your bucket, you can write a bucket policy that grants the `s3:ListBucket`, `s3:ListBucketVersions`, and `s3:ListBucketMultipartUploads` permissions for your bucket.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permission to list the objects in a bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root"
                ]
            },
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        }
    ]
}
```

**Example – `READ` ACLs for every object in a bucket**  
If every object in your bucket has a `READ` ACL that grants access to AWS account `111122223333`, you can write a bucket policy that grants the `s3:GetObject` and `s3:GetObjectVersion` permissions to this account for every object in your bucket.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Read permission for every object in a bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root"
                ]
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```
This example `Resource` element grants access to a specific object.  

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket/OBJECT-KEY"
```

**Example – `WRITE` ACL that grants permissions to write objects to a bucket**  
If your bucket has a `WRITE` ACL that grants AWS account `111122223333` permission to write objects to your bucket, you can write a bucket policy that grants the `s3:PutObject` permission for your bucket.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permission to write objects to a bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root"
                ]
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

### Using the S3 console to review and migrate ACL permissions


**To review a bucket's ACL permissions**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the bucket name.

1. Choose the **Permissions** tab.

1. Under **Access control list (ACL)**, review your bucket ACL permissions.

**To review an object's ACL permissions**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the bucket name containing your object.

1. In the **Objects** list, choose your object name.

1. Choose the **Permissions** tab.

1. Under **Access control list (ACL)**, review your object ACL permissions.

**To migrate ACL permissions and update your bucket ACL**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the bucket name.

1. On the **Permissions** tab, under **Bucket policy**, choose **Edit**.

1. In the **Policy** box, add or update your bucket policy.

   For example bucket policies, see [Bucket policies examples](#migrate-acl-permissions-bucket-policies) and [Example use cases](#object-ownership-migrating-acls).

1. Choose **Save changes**.

1. [Update your bucket ACL](managing-acls.md) to remove ACL grants to other groups or AWS accounts.

1. [Apply the **Bucket owner enforced** setting](object-ownership-existing-bucket.md) for Object Ownership.

### Using the AWS CLI to review and migrate ACL permissions


1. To return the bucket ACL for your bucket, use the [get-bucket-acl](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-acl.html) AWS CLI command:

   ```
   aws s3api get-bucket-acl --bucket amzn-s3-demo-bucket
   ```

   For example, this bucket ACL grants `WRITE` and `READ` access to a third-party account. In this ACL, the third-party account is identified by the [canonical user ID](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId). To apply the Bucket owner enforced setting and disable ACLs, you must migrate these permissions for the third-party account to a bucket policy. 

   ```
   {
       "Owner": {
           "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID"
       },
       "Grants": [
           {
               "Grantee": {
                   "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID",
                   "Type": "CanonicalUser"
               },
               "Permission": "FULL_CONTROL"
           },
           {
               "Grantee": {
                   "ID": "72806de9d1ae8b171cca9e2494a8d1335dfced4ThirdPartyAccountCanonicalUserID",
                   "Type": "CanonicalUser"
               },
               "Permission": "READ"
           },
           {
               "Grantee": {
                   "ID": "72806de9d1ae8b171cca9e2494a8d1335dfced4ThirdPartyAccountCanonicalUserID",
                   "Type": "CanonicalUser"
               },
               "Permission": "WRITE"
           }
       ]
   }
   ```

   For other example ACLs, see [Example use cases](#object-ownership-migrating-acls).

1. Migrate your bucket ACL permissions to a bucket policy:

   This example bucket policy grants `s3:PutObject` and `s3:ListBucket` permissions for a third-party account. In the bucket policy, the third-party account is identified by the AWS account ID (`111122223333`).

   ```
   aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json
   
   policy.json:
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "PolicyForCrossAccountAllowUpload",
               "Effect": "Allow",
               "Principal": {
                   "AWS": [
                       "arn:aws:iam::111122223333:root"
                   ]
               },
               "Action": [
                   "s3:PutObject",
                   "s3:ListBucket"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*"
               ]
           }
       ]
   }
   ```

   For more example bucket policies, see [Bucket policies examples](#migrate-acl-permissions-bucket-policies) and [Example use cases](#object-ownership-migrating-acls).

1. To return the ACL for a specific object, use the [get-object-acl](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object-acl.html) AWS CLI command.

   ```
   aws s3api get-object-acl --bucket amzn-s3-demo-bucket --key EXAMPLE-OBJECT-KEY
   ```

1. If required, migrate object ACL permissions to your bucket policy. 

   This example `Resource` element grants access to a specific object in a bucket policy.

   ```
   "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/EXAMPLE-OBJECT-KEY"
   ```

1. Reset the ACL for your bucket to the default ACL.

   ```
   aws s3api put-bucket-acl --bucket amzn-s3-demo-bucket --acl private
   ```

1. [Apply the Bucket owner enforced setting](object-ownership-existing-bucket.md) for Object Ownership.

## Identify requests that required an ACL for authorization


To identify Amazon S3 requests that required ACLs for authorization, you can use the `aclRequired` value in Amazon S3 server access logs or AWS CloudTrail. If a request required an ACL for authorization, or if a `PUT` request specified an ACL, the `aclRequired` value is `Yes`. If no ACL was required, if the request specified only the `bucket-owner-full-control` canned ACL, or if the request was allowed by your bucket policy, the `aclRequired` value is "`-`" in Amazon S3 server access logs and is absent in CloudTrail. For more information about the expected `aclRequired` values, see [`aclRequired` values for common Amazon S3 requests](acl-overview.md#aclrequired-s3).

If you have `PutBucketAcl` or `PutObjectAcl` requests with headers that grant ACL-based permissions, with the exception of the `bucket-owner-full-control` canned ACL, you must remove those headers before you can disable ACLs. Otherwise, your requests will fail.
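For example, an upload that currently passes an ACL-granting header would need that flag removed before ACLs are disabled. These commands are illustrative; the bucket and key are example values.

```
# Fails once ACLs are disabled, because it grants an ACL other than
# bucket-owner-full-control:
aws s3api put-object --bucket amzn-s3-demo-bucket --key example.txt \
    --body example.txt --acl public-read

# Succeeds with ACLs disabled, because no ACL is specified:
aws s3api put-object --bucket amzn-s3-demo-bucket --key example.txt \
    --body example.txt
```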

For all other requests that required an ACL for authorization, migrate those ACL permissions to bucket policies. Then, remove any bucket ACLs before you enable the bucket owner enforced setting. 

**Note**  
Do not remove object ACLs. Otherwise, applications that rely on object ACLs for permissions will lose access.

If you see that no requests required an ACL for authorization, you can proceed to disable ACLs. For more information about identifying requests, see [Using Amazon S3 server access logs to identify requests](using-s3-access-logs-to-identify-requests.md) and [Identifying Amazon S3 requests using CloudTrail](cloudtrail-request-identification.md).
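As a sketch of this kind of analysis, the following Python snippet filters downloaded CloudTrail records for events where `aclRequired` was reported. The field location under `additionalEventData` is an assumption to verify against your own trail output, and the sample records are fabricated for illustration.

```python
import json

# Sketch: scan downloaded CloudTrail records for requests that needed an
# ACL for authorization. Assumption: aclRequired, when present, appears
# under additionalEventData (absent when no ACL was required).

def events_requiring_acls(cloudtrail_json):
    records = json.loads(cloudtrail_json).get("Records", [])
    return [
        record.get("eventName")
        for record in records
        if record.get("additionalEventData", {}).get("aclRequired") == "Yes"
    ]

sample = json.dumps({"Records": [
    {"eventName": "PutObject", "additionalEventData": {"aclRequired": "Yes"}},
    {"eventName": "GetObject"},  # aclRequired absent: no ACL was needed
]})
print(events_requiring_acls(sample))  # ['PutObject']
```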

## Review and update bucket policies that use ACL-related condition keys


After you apply the Bucket owner enforced setting to disable ACLs, new objects can be uploaded to your bucket only if the request specifies the `bucket-owner-full-control` canned ACL or doesn't specify an ACL at all. Before disabling ACLs, review your bucket policy for ACL-related condition keys.

If your bucket policy uses an ACL-related condition key (for example, `s3:x-amz-acl`) to require the `bucket-owner-full-control` canned ACL, you don't need to update your bucket policy. The following bucket policy uses the `s3:x-amz-acl` condition key to require the `bucket-owner-full-control` canned ACL for S3 `PutObject` requests. This policy *still* requires the object writer to specify the `bucket-owner-full-control` canned ACL. However, buckets with ACLs disabled still accept this ACL, so requests continue to succeed with no client-side changes required.


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "OnlyAllowWritesToMyBucketWithBucketOwnerFullControl",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/ExampleUser"
                ]
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
```


However, if your bucket policy uses an ACL-related condition key that requires a different ACL, you must remove this condition key. This example bucket policy requires the `public-read` ACL for S3 `PutObject` requests and therefore must be updated before disabling ACLs. 


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "Only allow writes to my bucket with public read access",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/ExampleUser"                ]
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "public-read"
                }
            }
        }
    ]
}
```

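After removing the condition, the statement might reduce to the following. This is an illustrative revision of the example above, with the same principal and resource.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/ExampleUser"
                ]
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```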

## Example use cases


The following examples show you how to migrate ACL permissions to bucket policies for specific use cases.

**Topics**
+ [Grant access to the S3 log delivery group for server access logging](#object-ownership-server-access-logs)
+ [Grant public read access to the objects in a bucket](#object-ownership-public-read)
+ [Grant Amazon ElastiCache (Redis OSS) access to your S3 bucket](#object-ownership-elasticache-redis)

### Grant access to the S3 log delivery group for server access logging


If you want to apply the Bucket owner enforced setting to disable ACLs for a server access logging destination bucket (also known as a *target bucket*), you must migrate bucket ACL permissions for the S3 log delivery group to the logging service principal (`logging.s3.amazonaws.com`) in a bucket policy. For more information about log delivery permissions, see [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general).

This bucket ACL grants `WRITE` and `READ_ACP` access to the S3 log delivery group:

```
{
    "Owner": {
        "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID"
    }, 
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser", 
                "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID"
            }, 
            "Permission": "FULL_CONTROL"
        }, 
        {
            "Grantee": {
                "Type": "Group", 
                "URI": "http://acs.amazonaws.com/groups/s3/LogDelivery"
            }, 
            "Permission": "WRITE"
        }, 
        {
            "Grantee": {
                "Type": "Group", 
                "URI": "http://acs.amazonaws.com/groups/s3/LogDelivery"
            }, 
            "Permission": "READ_ACP"
        }
    ]
}
```

**To migrate bucket ACL permissions for the S3 log delivery group to the logging service principal in a bucket policy**

1. Add the following bucket policy to your destination bucket, replacing the example values.

   ```
   aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json
   
   policy.json:
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "S3ServerAccessLogsPolicy",
               "Effect": "Allow",
               "Principal": {
                   "Service": "logging.s3.amazonaws.com"
               },
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/EXAMPLE-LOGGING-PREFIX*",
               "Condition": {
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:s3:::SOURCE-BUCKET-NAME"
                   },
                   "StringEquals": {
                       "aws:SourceAccount": "SOURCE-AWS-ACCOUNT-ID"
                   }
               }
           }
       ]
   }
   ```

1. Reset the ACL for your destination bucket to the default ACL.

   ```
   aws s3api put-bucket-acl --bucket amzn-s3-demo-bucket --acl private
   ```

1. [Apply the Bucket owner enforced setting](object-ownership-existing-bucket.md) for Object Ownership to your destination bucket.

### Grant public read access to the objects in a bucket


If your object ACLs grant public read access to all of the objects in your bucket, you can migrate these ACL permissions to a bucket policy.

This object ACL grants public read access to an object in a bucket:

```
{
    "Owner": {
        "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID"
    },
    "Grants": [
        {
            "Grantee": {
                "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        },
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        }
    ]
}
```

**To migrate public read ACL permissions to a bucket policy**

1. To grant public read access to all of the objects in your bucket, add the following bucket policy, replacing the example values.

   ```
   aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json
   
   policy.json:
   {
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "PublicReadGetObject",
               "Effect": "Allow",
               "Principal": "*",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/*"
               ]
           }
       ]
   }
   ```

   To grant public access to a specific object in a bucket policy, use the following format for the `Resource` element. 

   ```
   "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/OBJECT-KEY"
   ```

   To grant public access to all of the objects with a specific prefix, use the following format for the `Resource` element. 

   ```
   "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/PREFIX/*"
   ```

1. [Apply the Bucket owner enforced setting](object-ownership-existing-bucket.md) for Object Ownership.

### Grant Amazon ElastiCache (Redis OSS) access to your S3 bucket


You can [export your ElastiCache (Redis OSS) backup](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-exporting.html) to an S3 bucket, which gives you access to the backup from outside ElastiCache. To export your backup to an S3 bucket, you must grant ElastiCache permissions to copy a snapshot to the bucket. If you've granted permissions to ElastiCache in a bucket ACL, you must migrate these permissions to a bucket policy before you apply the Bucket owner enforced setting to disable ACLs. For more information, see [Grant ElastiCache access to your Amazon S3 bucket](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-exporting.html#backups-exporting-grant-access) in the *Amazon ElastiCache User Guide*.

The following example shows a bucket ACL that grants permissions to ElastiCache. 

```
{
    "Owner": {
        "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID"
    },
    "Grants": [
        {
            "Grantee": {
                "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        },
        {
            "Grantee": {
                "ID": "540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353",
                "Type": "CanonicalUser"
            },
            "Permission": "READ"
        },
        {
            "Grantee": {
                "ID": "540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353",
                "Type": "CanonicalUser"
            },
            "Permission": "WRITE"
        },
        {
            "Grantee": {
                "ID": "540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353",
                "Type": "CanonicalUser"
            },
            "Permission": "READ_ACP"
        }
    ]
}
```

**To migrate bucket ACL permissions for ElastiCache (Redis OSS) to a bucket policy**

1. Add the following bucket policy to your bucket, replacing the example values.

   ```
   aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json
   
   policy.json:
   {
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "Stmt15399483",
               "Effect": "Allow",
               "Principal": {
                   "Service": "Region.elasticache-snapshot.amazonaws.com"
               },
               "Action": [
                   "s3:PutObject",
                   "s3:GetObject",
                   "s3:ListBucket",
                   "s3:GetBucketAcl",
                   "s3:ListMultipartUploadParts",
                   "s3:ListBucketMultipartUploads"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*"
               ]
           }
       ]
   }
   ```

1. Reset the ACL for your bucket to the default ACL:

   ```
   aws s3api put-bucket-acl --bucket amzn-s3-demo-bucket --acl private
   ```

1. [Apply the Bucket owner enforced setting](object-ownership-existing-bucket.md) for Object Ownership.

# Setting Object Ownership when you create a bucket
Creating a bucket

When you create a bucket, you can configure S3 Object Ownership. To set Object Ownership for an existing bucket, see [Setting Object Ownership on an existing bucket](object-ownership-existing-bucket.md).

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable [access control lists (ACLs)](acl-overview.md) and take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. By default, S3 Object Ownership is set to the Bucket owner enforced setting, and ACLs are disabled for new buckets. With ACLs disabled, the bucket owner owns every object in the bucket and manages access to data exclusively by using access-management policies. We recommend that you keep ACLs disabled, except in unusual circumstances where you must control access for each object individually. 

Object Ownership has three settings that you can use to control ownership of objects uploaded to your bucket and to disable or enable ACLs:

**ACLs disabled**
+ **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. The bucket uses policies to define access control.

**ACLs enabled**
+ **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 
+ **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.

**Permissions**: To apply the **Bucket owner enforced** setting or the **Bucket owner preferred** setting, you must have the following permissions: `s3:CreateBucket` and `s3:PutBucketOwnershipControls`. No additional permissions are needed when creating a bucket with the **Object writer** setting applied. For more information about Amazon S3 permissions, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*. 

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Important**  
A majority of modern use cases in Amazon S3 no longer require the use of ACLs, and we recommend that you disable ACLs except in circumstances where you need to control access for each object individually. With Object Ownership, you can disable ACLs and rely on policies for access control. When you disable ACLs, you can easily maintain a bucket with objects uploaded by different AWS accounts. You, as the bucket owner, own all the objects in the bucket and can manage access to them using policies. 
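For example, with the AWS CLI you can set Object Ownership explicitly when you create a bucket. This is a sketch; the bucket name and Region are example values, and buckets in `us-east-1` omit the `--create-bucket-configuration` option.

```
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2 \
    --object-ownership BucketOwnerEnforced
```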

## Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
After you create a bucket, you can't change its Region. 
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. For **Bucket name**, enter a name for your bucket.

   The bucket name must:
   + Be unique within a partition. A partition is a grouping of Regions. AWS currently has three partitions: `aws` (commercial Regions), `aws-cn` (China Regions), and `aws-us-gov` (AWS GovCloud (US) Regions).
   + Be between 3 and 63 characters long.
   + Consist only of lowercase letters, numbers, periods (`.`), and hyphens (`-`). For best compatibility, we recommend that you avoid using periods (`.`) in bucket names, except for buckets that are used only for static website hosting.
   + Begin and end with a letter or number. 
   + For a complete list of bucket-naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).
**Important**  
After you create the bucket, you can't change its name. 
Don't include sensitive information in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.

1. (Optional) Under **General configuration**, you can choose to copy an existing bucket's settings to your new bucket. If you don't want to copy the settings of an existing bucket, skip to the next step.
**Note**  
This option isn't available in the AWS CLI; it's available only in the Amazon S3 console.  
This option doesn't copy the bucket policy from the existing bucket to the new bucket.

    To copy an existing bucket's settings, under **Copy settings from existing bucket**, select **Choose bucket**. The **Choose bucket** window opens. Find the bucket with the settings that you want to copy, and select **Choose bucket**. The **Choose bucket** window closes, and the **Create bucket** window reopens.

   Under **Copy settings from existing bucket**, you now see the name of the bucket that you selected. The settings of your new bucket now match the settings of the bucket that you selected. If you want to remove the copied settings, choose **Restore defaults**. Review the remaining bucket settings on the **Create bucket** page. If you don't want to make any changes, you can skip to the final step. 

1. Under **Object Ownership**, to disable or enable ACLs and control ownership of objects uploaded in your bucket, choose one of the following settings:

**ACLs disabled**
   +  **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the general purpose bucket. ACLs no longer affect access permissions to data in the S3 general purpose bucket. The bucket uses policies exclusively to define access control.

     By default, ACLs are disabled. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you must control access for each object individually. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**ACLs enabled**
   + **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 

     If you apply the **Bucket owner preferred** setting, to require all Amazon S3 uploads to include the `bucket-owner-full-control` canned ACL, you can [add a bucket policy](ensure-object-ownership.md#ensure-object-ownership-bucket-policy) that allows only object uploads that use this ACL.
   + **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.
**Note**  
The default setting is **Bucket owner enforced**. To apply the default setting and keep ACLs disabled, only the `s3:CreateBucket` permission is needed. To enable ACLs, you must have the `s3:PutBucketOwnershipControls` permission.

1. Under **Block Public Access settings for this bucket**, choose the Block Public Access settings that you want to apply to the bucket. 

   By default, all four Block Public Access settings are enabled. We recommend that you keep all settings enabled, unless you know that you need to turn off one or more of them for your specific use case. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).
**Note**  
To enable all Block Public Access settings, only the `s3:CreateBucket` permission is required. To turn off any Block Public Access settings, you must have the `s3:PutBucketPublicAccessBlock` permission.

1. (Optional) By default, **Bucket Versioning** is disabled. Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your bucket. With versioning, you can recover more easily from both unintended user actions and application failures. For more information about versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). 

   To enable versioning on your bucket, choose **Enable**. 

1. (Optional) Under **Tags**, you can choose to add tags to your bucket. With AWS cost allocation, you can use bucket tags to annotate billing for your use of a bucket. A tag is a key-value pair that represents a label that you assign to a bucket. For more information, see [Using cost allocation S3 bucket tags](CostAllocTagging.md).

   To add a bucket tag, enter a **Key** and, optionally, a **Value**, and then choose **Add Tag**.

1. To configure **Default encryption**, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed keys (SSE-S3)**
   + **Server-side encryption with AWS Key Management Service keys (SSE-KMS)**
   + **Dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS)**
**Important**  
If you use the SSE-KMS or DSSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.

   Buckets and new objects are encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption configuration. For more information about default encryption, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md). For more information about SSE-S3, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md).

   For more information about using server-side encryption to encrypt your data, see [Protecting data with encryption](UsingEncryption.md). 

1. If you chose **Server-side encryption with AWS Key Management Service keys (SSE-KMS)** or **Dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS)**, do the following:

   1. Under **AWS KMS key**, specify your KMS key in one of the following ways:
      + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from the list of available keys.

        Both the AWS managed key (`aws/s3`) and your customer managed keys appear in this list. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
      + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears. 
      + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

        For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that isn't listed, you must enter your KMS key ARN. If you want to use a KMS key that's owned by a different account, you must first have permission to use the key, and then you must enter the KMS key ARN. For more information about cross-account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*. For more information about SSE-KMS, see [Specifying server-side encryption with AWS KMS (SSE-KMS)](specifying-kms-encryption.md). For more information about DSSE-KMS, see [Using dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)](UsingDSSEncryption.md).  
When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

   1. When you configure your bucket to use default encryption with SSE-KMS, you can also use S3 Bucket Keys. S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md). S3 Bucket Keys aren't supported for DSSE-KMS.

      By default, S3 Bucket Keys are enabled in the Amazon S3 console. We recommend leaving S3 Bucket Keys enabled to lower your costs. To disable S3 Bucket Keys for your bucket, under **Bucket Key**, choose **Disable**.

1. (Optional) S3 Object Lock helps protect new objects from being deleted or overwritten. For more information, see [Locking objects with Object Lock](object-lock.md). If you want to enable S3 Object Lock, do the following:

   1. Choose **Advanced settings**.
**Important**  
Enabling Object Lock automatically enables versioning for the bucket. After you create the bucket with Object Lock enabled, you must also configure the Object Lock default retention and legal hold settings on the bucket's **Properties** tab. 

   1. If you want to enable Object Lock, choose **Enable**, read the warning that appears, and acknowledge it.
**Note**  
To create an Object Lock enabled bucket, you must have the following permissions: `s3:CreateBucket`, `s3:PutBucketVersioning`, and `s3:PutBucketObjectLockConfiguration`.

1. Choose **Create bucket**.
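The bucket-naming rules from the procedure above (length, character set, and the begin/end rule) can be checked locally before you create a bucket. The helper below is an illustrative sketch; it doesn't cover every rule, such as uniqueness within a partition or the complete list in the naming-rules topic:

```python
import re

# Checks length (3-63 characters), the lowercase letter/number/period/hyphen
# character set, and the begin/end-with-a-letter-or-number rule. It does NOT
# check partition-wide uniqueness or the full naming-rules topic.
_BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if the name passes the basic bucket-naming checks."""
    return _BUCKET_NAME.fullmatch(name) is not None
```

For example, `is_valid_bucket_name("amzn-s3-demo-bucket")` returns `True`, while names with uppercase letters, leading hyphens, or fewer than 3 characters return `False`.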

## Using the AWS CLI


To set Object Ownership when you create a new bucket, use the `create-bucket` AWS CLI command with the `--object-ownership` parameter. 

This example applies the Bucket owner enforced setting for a new bucket using the AWS CLI:

```
aws s3api create-bucket --bucket amzn-s3-demo-bucket --region us-east-1 --object-ownership BucketOwnerEnforced
```

**Important**  
If you don't set Object Ownership when you create a bucket by using the AWS CLI, the default setting is `BucketOwnerEnforced` (ACLs disabled).
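If you script bucket creation, it can help to validate the Object Ownership value before you send the request. The helper below is a hypothetical sketch that builds a plain parameter dictionary mirroring the CLI options above; it doesn't call AWS, and its name and shape are ours, not a specific SDK signature:

```python
# Hypothetical helper -- builds parameters mirroring `aws s3api create-bucket`;
# not an AWS SDK call.
VALID_OWNERSHIP = {"BucketOwnerEnforced", "BucketOwnerPreferred", "ObjectWriter"}

def create_bucket_params(bucket: str,
                         ownership: str = "BucketOwnerEnforced",
                         region: str = "us-east-1") -> dict:
    """Build a parameter dict for bucket creation with Object Ownership set."""
    if ownership not in VALID_OWNERSHIP:
        raise ValueError(f"invalid ObjectOwnership value: {ownership}")
    params = {"Bucket": bucket, "ObjectOwnership": ownership}
    # Outside us-east-1, S3 bucket creation takes a location constraint.
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params
```

Rejecting misspelled ownership values locally (for example, `bucketownerenforced`) avoids a round trip that would fail with a validation error.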

## Using the AWS SDK for Java


This example sets the Bucket owner enforced setting for a new bucket using the AWS SDK for Java:

```
// Build the CreateBucketRequest with the Bucket owner enforced setting
CreateBucketRequest createBucketRequest = CreateBucketRequest.builder()
        .bucket(bucketName)
        .objectOwnership(ObjectOwnership.BUCKET_OWNER_ENFORCED)
        .build();

// Send the request to Amazon S3
s3client.createBucket(createBucketRequest);
```

## Using CloudFormation


To use the `AWS::S3::Bucket` CloudFormation resource to set Object Ownership when you create a new bucket, see [OwnershipControls within AWS::S3::Bucket](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html#cfn-s3-bucket-ownershipcontrols) in the *AWS CloudFormation User Guide*.

## Using the REST API


To apply the Bucket owner enforced setting for S3 Object Ownership, use the `CreateBucket` API operation with the `x-amz-object-ownership` request header set to `BucketOwnerEnforced`. For information and examples, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html) in the *Amazon Simple Storage Service API Reference*.

**Next steps**: After you apply the Bucket owner enforced or Bucket owner preferred setting for Object Ownership, you can take the following additional steps:
+ [Bucket owner enforced](ensure-object-ownership.md#object-ownership-requiring-bucket-owner-enforced) – Require that all new buckets are created with ACLs disabled by using an IAM or Organizations policy. 
+ [Bucket owner preferred](ensure-object-ownership.md#ensure-object-ownership-bucket-policy) – Add an S3 bucket policy to require the `bucket-owner-full-control` canned ACL for all object uploads to your bucket.

# Setting Object Ownership on an existing bucket
Setting Object Ownership

You can configure S3 Object Ownership on an existing S3 bucket. To apply Object Ownership when you create a bucket, see [Setting Object Ownership when you create a bucket](object-ownership-new-bucket.md).

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable [access control lists (ACLs)](acl-overview.md) and take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. By default, S3 Object Ownership is set to the Bucket owner enforced setting, and ACLs are disabled for new buckets. With ACLs disabled, the bucket owner owns every object in the bucket and manages access to data exclusively by using access-management policies. We recommend that you keep ACLs disabled, except in unusual circumstances where you must control access for each object individually. 

Object Ownership has three settings that you can use to control ownership of objects uploaded to your bucket and to disable or enable ACLs:

**ACLs disabled**
+ **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. The bucket uses policies to define access control.

**ACLs enabled**
+ **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 
+ **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.

**Prerequisites**: Before you apply the Bucket owner enforced setting to disable ACLs, you must migrate bucket ACL permissions to bucket policies and reset your bucket ACLs to the default private ACL. We also recommend that you migrate object ACL permissions to bucket policies and edit bucket policies that require ACLs other than bucket owner full control ACLs. For more information, see [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md).

**Permissions**: To use this operation, you must have the `s3:PutBucketOwnershipControls` permission. For more information about Amazon S3 permissions, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*. 

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that you want to apply an S3 Object Ownership setting to.

1. Choose the **Permissions** tab.

1. Under **Object Ownership**, choose **Edit**.

1. Under **Object Ownership**, to disable or enable ACLs and control ownership of objects uploaded in your bucket, choose one of the following settings:

**ACLs disabled**
   + **Bucket owner enforced** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. The bucket uses policies to define access control.

     To require that all new buckets are created with ACLs disabled by using IAM or AWS Organizations policies, see [Disabling ACLs for all new buckets (bucket owner enforced)](ensure-object-ownership.md#object-ownership-requiring-bucket-owner-enforced).

**ACLs enabled**
   + **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 

     If you apply the Bucket owner preferred setting, to require all Amazon S3 uploads to include the `bucket-owner-full-control` canned ACL, you can [add a bucket policy](ensure-object-ownership.md#ensure-object-ownership-bucket-policy) that allows only object uploads that use this ACL.
   + **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.

1. Choose **Save**.

## Using the AWS CLI


To apply an Object Ownership setting for an existing bucket, use the `put-bucket-ownership-controls` command with the `--ownership-controls` parameter. Valid values for ownership are `BucketOwnerEnforced`, `BucketOwnerPreferred`, or `ObjectWriter`.

This example applies the Bucket owner enforced setting for an existing bucket by using the AWS CLI:

```
aws s3api put-bucket-ownership-controls --bucket amzn-s3-demo-bucket --ownership-controls="Rules=[{ObjectOwnership=BucketOwnerEnforced}]"
```

For information about `put-bucket-ownership-controls`, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-ownership-controls.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-ownership-controls.html) in the *AWS Command Line Interface User Guide*. 
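The `--ownership-controls` value in the command above is shorthand for a small JSON structure. Here is a hedged sketch of building and validating that structure (the helper name is ours, not an AWS API; the JSON shape follows the CLI example above):

```python
import json

# Valid ObjectOwnership values, per the CLI description above.
_VALID = {"BucketOwnerEnforced", "BucketOwnerPreferred", "ObjectWriter"}

def ownership_controls(setting: str) -> str:
    """Return the OwnershipControls JSON for put-bucket-ownership-controls.

    Illustrative helper: validates the setting locally, then serializes the
    Rules structure shown in the CLI example.
    """
    if setting not in _VALID:
        raise ValueError(f"invalid ObjectOwnership value: {setting}")
    return json.dumps({"Rules": [{"ObjectOwnership": setting}]})
```

You could pass the returned string to the CLI as `--ownership-controls 'JSON'` instead of the shorthand `Rules=[{...}]` syntax.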

## Using the AWS SDK for Java


This example applies the `BucketOwnerEnforced` setting for Object Ownership on an existing bucket by using the AWS SDK for Java:

```
// Build the OwnershipControls rule for BucketOwnerEnforced
OwnershipControlsRule rule = OwnershipControlsRule.builder()
        .objectOwnership(ObjectOwnership.BUCKET_OWNER_ENFORCED)
        .build();

OwnershipControls ownershipControls = OwnershipControls.builder()
        .rules(rule)
        .build();

// Build the PutBucketOwnershipControlsRequest
PutBucketOwnershipControlsRequest putBucketOwnershipControlsRequest =
        PutBucketOwnershipControlsRequest.builder()
                .bucket(BUCKET_NAME)
                .ownershipControls(ownershipControls)
                .build();

// Send the request to Amazon S3
s3client.putBucketOwnershipControls(putBucketOwnershipControlsRequest);
```

## Using CloudFormation


To use CloudFormation to apply an Object Ownership setting for an existing bucket, see [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-ownershipcontrols.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-ownershipcontrols.html) in the *AWS CloudFormation User Guide*.

## Using the REST API


To use the REST API to apply an Object Ownership setting to an existing S3 bucket, use `PutBucketOwnershipControls`. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html) in the *Amazon Simple Storage Service API Reference*.

**Next steps**: After you apply the Bucket owner enforced or Bucket owner preferred setting for Object Ownership, you can take the following additional steps:
+ [Bucket owner enforced](ensure-object-ownership.md#object-ownership-requiring-bucket-owner-enforced) – Require that all new buckets are created with ACLs disabled by using an IAM or Organizations policy. 
+ [Bucket owner preferred](ensure-object-ownership.md#ensure-object-ownership-bucket-policy) – Add an S3 bucket policy to require the `bucket-owner-full-control` canned ACL for all object uploads to your bucket.

# Viewing the Object Ownership setting for an S3 bucket
Viewing Object Ownership settings

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable [access control lists (ACLs)](acl-overview.md) and take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. By default, S3 Object Ownership is set to the Bucket owner enforced setting, and ACLs are disabled for new buckets. With ACLs disabled, the bucket owner owns every object in the bucket and manages access to data exclusively by using access-management policies. We recommend that you keep ACLs disabled, except in unusual circumstances where you must control access for each object individually. 

Object Ownership has three settings that you can use to control ownership of objects uploaded to your bucket and to disable or enable ACLs:

**ACLs disabled**
+ **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. The bucket uses policies to define access control.

**ACLs enabled**
+ **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 
+ **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.

You can view the S3 Object Ownership settings for an Amazon S3 bucket. To set Object Ownership for a new bucket, see [Setting Object Ownership when you create a bucket](object-ownership-new-bucket.md). To set Object Ownership for an existing bucket, see [Setting Object Ownership on an existing bucket](object-ownership-existing-bucket.md).

**Permissions:** To use this operation, you must have the `s3:GetBucketOwnershipControls` permission. For more information about Amazon S3 permissions, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*. 

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket whose Object Ownership setting you want to view.

1. Choose the **Permissions** tab.

1. Under **Object Ownership**, you can view the Object Ownership settings for your bucket.

## Using the AWS CLI


To retrieve the S3 Object Ownership setting for an S3 bucket, use the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-ownership-controls.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-ownership-controls.html) AWS CLI command.

```
aws s3api get-bucket-ownership-controls --bucket amzn-s3-demo-bucket
```

## Using the REST API


To retrieve the Object Ownership setting for an S3 bucket, use the `GetBucketOwnershipControls` API operation. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html).

# Disabling ACLs for all new buckets and enforcing Object Ownership
Disabling ACLs for all new buckets

We recommend that you disable ACLs on your Amazon S3 buckets. You can do this by applying the Bucket owner enforced setting for S3 Object Ownership. When you apply this setting, ACLs are disabled and you automatically own and have full control over all objects in your bucket. To require that all new buckets are created with ACLs disabled, use AWS Identity and Access Management (IAM) policies or AWS Organizations service control policies (SCPs), as described in the next section.

To enforce object ownership for new objects without disabling ACLs, you can apply the Bucket owner preferred setting. When you apply this setting, we strongly recommend that you update your bucket policy to require the `bucket-owner-full-control` canned ACL for all `PUT` requests to your bucket. Make sure you also update your clients to send the `bucket-owner-full-control` canned ACL to your bucket from other accounts.

**Topics**
+ [Disabling ACLs for all new buckets (bucket owner enforced)](#object-ownership-requiring-bucket-owner-enforced)
+ [Requiring the bucket-owner-full-control canned ACL for Amazon S3 `PUT` operations (bucket owner preferred)](#ensure-object-ownership-bucket-policy)

## Disabling ACLs for all new buckets (bucket owner enforced)


The following example IAM policy denies the `s3:CreateBucket` permission for a specific IAM user or role unless the Bucket owner enforced setting is applied for Object Ownership. The key-value pair in the `Condition` block specifies `s3:x-amz-object-ownership` as its key and the `BucketOwnerEnforced` setting as its value. In other words, the IAM user can create buckets only if they set the Bucket owner enforced setting for Object Ownership and disable ACLs. You can also use this policy as a boundary SCP for your AWS organization.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Action": "s3:CreateBucket",
            "Effect": "Deny",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-object-ownership": "BucketOwnerEnforced"
                }
            }
        }
    ]
}
```

------

## Requiring the bucket-owner-full-control canned ACL for Amazon S3 `PUT` operations (bucket owner preferred)


With the Bucket owner preferred setting for Object Ownership, you, as the bucket owner, own and have full control over new objects that other accounts write to your bucket with the `bucket-owner-full-control` canned ACL. However, if other accounts write objects to your bucket without the `bucket-owner-full-control` canned ACL, the object writer maintains full control access. You, as the bucket owner, can implement a bucket policy that allows writes only if they specify the `bucket-owner-full-control` canned ACL.

**Note**  
If you have ACLs disabled with the Bucket owner enforced setting, you, as the bucket owner, automatically own and have full control over all the objects in your bucket. You don't need to use this section to update your bucket policy to enforce object ownership for the bucket owner.

The following bucket policy specifies that account *`111122223333`* can upload objects to *`amzn-s3-demo-bucket`* only when the object's ACL is set to `bucket-owner-full-control`. Be sure to replace *`111122223333`* with your account and *`amzn-s3-demo-bucket`* with the name of your bucket.

------
#### [ JSON ]

****  

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "OnlyAllowWritesToMyBucketWithBucketOwnerFullControl",
         "Effect": "Allow",
         "Principal": {
            "AWS": [
               "arn:aws:iam::111122223333:user/ExampleUser"
            ]
         },
         "Action": [
            "s3:PutObject"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
         "Condition": {
            "StringEquals": {
               "s3:x-amz-acl": "bucket-owner-full-control"
            }
         }
      }
   ]
}
```

------

The following is an example copy operation that includes the `bucket-owner-full-control` canned ACL by using the AWS Command Line Interface (AWS CLI).

```
aws s3 cp file.txt s3://amzn-s3-demo-bucket --acl bucket-owner-full-control
```

After the bucket policy is put into effect, if the client does not include the `bucket-owner-full-control` canned ACL, the operation fails, and the uploader receives the following error: 

```
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.
```

**Note**  
If clients need access to objects after uploading, you must grant additional permissions to the uploading account. For information about granting accounts access to your resources, see [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md).
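The effect of the bucket-owner-full-control bucket policy can be modeled as a small decision function. This is an illustrative sketch of the policy logic only, not how S3 evaluates requests internally:

```python
from typing import Optional

def put_allowed_by_policy(acl_header: Optional[str]) -> bool:
    """Model the bucket policy above: a PutObject request is allowed only
    when its x-amz-acl header equals the bucket-owner-full-control canned
    ACL exactly. A request with no ACL header, or any other ACL, is denied
    (AccessDenied)."""
    return acl_header == "bucket-owner-full-control"
```

Note that a request that omits the ACL entirely is also denied under this policy, because the `StringEquals` condition isn't satisfied, so the `Allow` statement doesn't match.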

# Troubleshooting


When you apply the Bucket owner enforced setting for S3 Object Ownership, access control lists (ACLs) are disabled and you, as the bucket owner, automatically own all objects in your bucket. ACLs no longer affect permissions for the objects in your bucket; instead, you use policies to grant permissions. S3 `PUT` requests must either specify the `bucket-owner-full-control` canned ACL or specify no ACL at all; otherwise, they fail. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

If an invalid ACL is specified or bucket ACL permissions grant access outside of your AWS account, you might see the following error responses.

## AccessControlListNotSupported


After you apply the Bucket owner enforced setting for Object Ownership, ACLs are disabled. Requests to set or update ACLs fail with a `400` error and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported and always return a response that shows full control for the bucket owner. Your `PUT` operations must either specify the `bucket-owner-full-control` canned ACL or specify no ACL at all; otherwise, they fail. 

The following example `put-object` AWS CLI command includes the `public-read` canned ACL. 

```
aws s3api put-object --bucket amzn-s3-demo-bucket --key object-key-name --body doc-example-body --acl public-read
```

If the bucket uses the Bucket owner enforced setting to disable ACLs, this operation fails, and the uploader receives the following error message:

An error occurred (AccessControlListNotSupported) when calling the PutObject operation: The bucket does not allow ACLs
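Assuming the same placeholder bucket, key, and body names, either of the following `put-object` commands succeeds on a bucket with ACLs disabled, because each one satisfies the bucket owner full control requirement:

```
# Omit the ACL entirely (recommended).
aws s3api put-object --bucket amzn-s3-demo-bucket --key object-key-name --body doc-example-body

# Or specify the bucket-owner-full-control canned ACL, which is still accepted.
aws s3api put-object --bucket amzn-s3-demo-bucket --key object-key-name --body doc-example-body --acl bucket-owner-full-control
```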

## InvalidBucketAclWithObjectOwnership


If you want to apply the Bucket owner enforced setting to disable ACLs, your bucket ACL must give full control only to the bucket owner. Your bucket ACL cannot give access to an external AWS account or any other group. For example, if your `CreateBucket` request sets Bucket owner enforced and specifies a bucket ACL that provides access to an external AWS account, your request fails with a `400` error and returns the InvalidBucketAclWithObjectOwnership error code. Similarly, if your `PutBucketOwnershipControls` request sets Bucket owner enforced on a bucket that has a bucket ACL that grants permissions to others, the request fails.

**Example: Existing bucket ACL grants public read access**  
If an existing bucket ACL grants public read access, you cannot apply the Bucket owner enforced setting for Object Ownership until you migrate these ACL permissions to a bucket policy and reset your bucket ACL to the default private ACL. For more information, see [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md).  
This example bucket ACL grants public read access:  

```
{
    "Owner": {
        "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID"
    },
    "Grants": [
        {
            "Grantee": {
                "ID": "852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        },
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        }
    ]
}
```
The following example `put-bucket-ownership-controls` AWS CLI command applies the Bucket owner enforced setting for Object Ownership:  

```
aws s3api put-bucket-ownership-controls --bucket amzn-s3-demo-bucket --ownership-controls "Rules=[{ObjectOwnership=BucketOwnerEnforced}]"
```
Because the bucket ACL grants public read access, the request fails and returns the following error code:  
An error occurred (InvalidBucketAclWithObjectOwnership) when calling the PutBucketOwnershipControls operation: Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting 
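To resolve this error, first migrate any ACL grants that you still need (such as the public `READ` grant shown earlier) to a bucket policy, as described in [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md). Then, as a sketch using the same placeholder bucket name, reset the bucket ACL to the default private ACL and retry the operation:

```
# Reset the bucket ACL to the default (private) ACL.
aws s3api put-bucket-acl --bucket amzn-s3-demo-bucket --acl private

# Retry applying the Bucket owner enforced setting.
aws s3api put-bucket-ownership-controls --bucket amzn-s3-demo-bucket --ownership-controls "Rules=[{ObjectOwnership=BucketOwnerEnforced}]"
```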