

# Hosting a static website using Amazon S3

You can use Amazon S3 to host a static website. On a *static* website, individual webpages include static content. They might also contain client-side scripts.

**Note**  
We recommend that you use [AWS Amplify Hosting](https://docs.aws.amazon.com//amplify/latest/userguide/welcome.html) to host static website content stored on S3. Amplify Hosting is a fully managed service that makes it easy to deploy your websites on a globally available content delivery network (CDN) powered by Amazon CloudFront, allowing secure static website hosting.  
With AWS Amplify Hosting, you can select the location of your objects within your general purpose bucket, deploy your content to a managed CDN, and generate a public HTTPS URL for your website to be accessible anywhere. For more information about Amplify Hosting, see [Deploying a static website to AWS Amplify Hosting from an S3 general purpose bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-amplify.html) and [Deploying a static website from S3 using the Amplify console](https://docs.aws.amazon.com//amplify/latest/userguide/deploy--from-amplify-console.html) in the *AWS Amplify Console User Guide*.

For more information about hosting a static website on Amazon S3, including instructions and step-by-step walkthroughs, see the following topics.

**Important**  
If the bucket that you're using to host your static website is encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you must create an Amazon CloudFront distribution to serve your website, because SSE-KMS doesn't support anonymous requests. When you create your CloudFront distribution, secure the origin with origin access control (OAC) rather than origin access identity (OAI), because OAI doesn't support SSE-KMS.  
For more information about OAC, see [Restrict access to an Amazon S3 origin](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*. For a related tutorial that serves Amazon S3 content through Amazon CloudFront, see [Tutorial: Hosting on-demand streaming video with Amazon S3, Amazon CloudFront, and Amazon Route 53](tutorial-s3-cloudfront-route53-video-streaming.md).

**Topics**
+ [Website endpoints](WebsiteEndpoints.md)
+ [Enabling website hosting](EnableWebsiteHosting.md)
+ [Configuring an index document](IndexDocumentSupport.md)
+ [Configuring a custom error document](CustomErrorDocSupport.md)
+ [Setting permissions for website access](WebsiteAccessPermissionsReqd.md)
+ [(Optional) Logging web traffic](LoggingWebsiteTraffic.md)
+ [(Optional) Configuring a webpage redirect](how-to-page-redirect.md)
+ [Using cross-origin resource sharing (CORS)](cors.md)
+ [Static website tutorials](static-website-tutorials.md)

# Website endpoints


When you configure your bucket as a static website, the website is available at the AWS Region-specific website endpoint of the bucket. Website endpoints are different from the endpoints where you send REST API requests. For more information about the differences between the endpoints, see [Key differences between a website endpoint and a REST API endpoint](#WebsiteRestEndpointDiff).

Depending on your Region, your Amazon S3 website endpoint follows one of these two formats.
+ **s3-website dash (-) Region** ‐ `http://bucket-name.s3-website-Region.amazonaws.com`
+ **s3-website dot (.) Region** ‐ `http://bucket-name.s3-website.Region.amazonaws.com`

These URLs return the default index document that you configure for the website. For a complete list of Amazon S3 website endpoints, see [Amazon S3 Website Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_website_region_endpoints).
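As a sketch of how a bucket name and Region combine into a website endpoint, the following helper builds the URL for either format. The set of dash-style Regions below is a partial, illustrative assumption; always confirm the separator for your Region in the endpoint list.

```python
# Illustrative helper (not an official AWS API): builds the Region-specific
# S3 website endpoint URL for a bucket. Whether a Region uses the dash (-)
# or dot (.) separator is a per-Region fact; the set below is a partial,
# assumed example list -- confirm your Region in the endpoint table.

DASH_REGIONS = {"us-east-1", "us-west-1", "eu-west-1"}  # older dash-style Regions

def website_endpoint(bucket_name: str, region: str) -> str:
    """Return the website endpoint URL for bucket_name in the given Region."""
    separator = "-" if region in DASH_REGIONS else "."
    return f"http://{bucket_name}.s3-website{separator}{region}.amazonaws.com"

print(website_endpoint("example-bucket", "us-east-1"))
# http://example-bucket.s3-website-us-east-1.amazonaws.com
print(website_endpoint("example-bucket", "ap-south-1"))
# http://example-bucket.s3-website.ap-south-1.amazonaws.com
```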

**Note**  
To augment the security of your Amazon S3 static websites, the Amazon S3 website endpoint domains (for example, *s3-website-us-east-1.amazonaws.com* or *s3-website.ap-south-1.amazonaws.com*) are registered in the [Public Suffix List (PSL)](https://publicsuffix.org/). For further security, we recommend that you use cookies with a `__Host-` prefix if you ever need to set sensitive cookies in the domain name for your Amazon S3 static websites. This practice will help to defend your domain against cross-site request forgery attempts (CSRF). For more information see the [Set-Cookie](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#cookie_prefixes) page in the Mozilla Developer Network.
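For example, a cookie set with the `__Host-` prefix must include the `Secure` attribute, must omit the `Domain` attribute, and must set `Path=/` (the cookie name and value here are illustrative):

```
Set-Cookie: __Host-sessionId=<value>; Secure; Path=/
```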

If you want your website to be public, you must make all your content publicly readable for your customers to be able to access it at the website endpoint. For more information, see [Setting permissions for website access](WebsiteAccessPermissionsReqd.md). 

**Important**  
Amazon S3 website endpoints do not support HTTPS or access points. If you want to use HTTPS, you can do one of the following:  
+ (Recommended) Use [AWS Amplify Hosting](https://docs.aws.amazon.com//amplify/latest/userguide/welcome.html) to host static website content stored on S3. Amplify Hosting is a fully managed service that deploys your websites on a globally available content delivery network (CDN) powered by Amazon CloudFront, allowing secure static website hosting. With AWS Amplify Hosting, you can select the location of your objects within your general purpose bucket, deploy your content to a managed CDN, and generate a public HTTPS URL so that your website is accessible anywhere. For more information about Amplify Hosting, see [Deploying a static website to AWS Amplify Hosting from an S3 general purpose bucket](https://docs.aws.amazon.com//AmazonS3/latest/userguide/website-hosting-amplify) and [Deploying a static website from S3 using the Amplify console](https://docs.aws.amazon.com//amplify/latest/userguide/deploy--from-amplify-console.html) in the *AWS Amplify Console User Guide*.
+ Use Amazon CloudFront to serve a static website hosted on Amazon S3. For more information, see [How do I use CloudFront to serve HTTPS requests for my Amazon S3 bucket?](https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-requests-s3) To use HTTPS with a custom domain, see [Configuring a static website using a custom domain registered with Route 53](https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-custom-domain-walkthrough.html).
Requester Pays buckets do not allow access through a website endpoint. Any request to such a bucket receives a 403 Access Denied response. For more information, see [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md).

**Topics**
+ [Website endpoint examples](#website-endpoint-examples)
+ [Adding a DNS CNAME](#website-endpoint-dns-cname)
+ [Using a custom domain with Route 53](#custom-domain-s3-endpoint)
+ [Key differences between a website endpoint and a REST API endpoint](#WebsiteRestEndpointDiff)

## Website endpoint examples


The following examples show how you can access an Amazon S3 bucket that is configured as a static website.

**Example — Requesting an object at the root level**  
To request a specific object that is stored at the root level in the bucket, use the following URL structure.  

```
http://bucket-name.s3-website.Region.amazonaws.com/object-name
```
For example, the following URL requests the `photo.jpg` object that is stored at the root level in the bucket.  

```
http://example-bucket.s3-website.us-west-2.amazonaws.com/photo.jpg
```

**Example — Requesting an object in a prefix**  
To request an object that is stored in a folder in your bucket, use this URL structure.  

```
http://bucket-name.s3-website.Region.amazonaws.com/folder-name/object-name
```
The following URL requests the `docs/doc1.html` object in your bucket.   

```
http://example-bucket.s3-website.us-west-2.amazonaws.com/docs/doc1.html
```

## Adding a DNS CNAME


If you have a registered domain, you can add a DNS CNAME entry to point to the Amazon S3 website endpoint. For example, if you registered the domain `www.example-bucket.com`, you could create a bucket `www.example-bucket.com`, and add a DNS CNAME record that points to `www.example-bucket.com.s3-website.Region.amazonaws.com`. All requests to `http://www.example-bucket.com` are routed to `www.example-bucket.com.s3-website.Region.amazonaws.com`. 

For more information, see [Customizing Amazon S3 URLs with CNAME records](VirtualHosting.md#VirtualHostingCustomURLs). 
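In zone-file notation, such a CNAME record might look like the following (the TTL and Region shown are illustrative):

```
www.example-bucket.com.  300  IN  CNAME  www.example-bucket.com.s3-website-us-east-1.amazonaws.com.
```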

## Using a custom domain with Route 53


Instead of accessing the website using an Amazon S3 website endpoint, you can use your own domain registered with Amazon Route 53 to serve your content—for example, `example.com`. You can use Amazon S3 with Route 53 to host a website at the root domain. For example, if you have the root domain `example.com` and you host your website on Amazon S3, your website visitors can access the site from their browser by entering either `http://www.example.com` or `http://example.com`. 

For an example walkthrough, see [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md). 

## Key differences between a website endpoint and a REST API endpoint


An Amazon S3 website endpoint is optimized for access from a web browser. The following table summarizes the key differences between a REST API endpoint and a website endpoint. 


| Key difference | REST API endpoint | Website endpoint | 
| --- | --- | --- | 
| Access control |  Supports both public and private content  | Supports only publicly readable content  | 
| Error message handling |  Returns an XML-formatted error response  | Returns an HTML document | 
| Redirection support |  Not applicable  | Supports both object-level and bucket-level redirects | 
| Requests supported  |  Supports all bucket and object operations  | Supports only GET and HEAD requests on objects | 
| Responses to GET and HEAD requests at the root of a bucket | Returns a list of the object keys in the bucket | Returns the index document that is specified in the website configuration | 
| Secure Sockets Layer (SSL) support | Supports SSL connections | Does not support SSL connections | 

For a complete list of Amazon S3 endpoints, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*.

# Enabling website hosting


When you configure a bucket as a static website, you must enable static website hosting, configure an index document, and set permissions.

You can enable static website hosting by using the Amazon S3 console, the REST API, the AWS SDKs, the AWS CLI, or AWS CloudFormation.

To configure your website with a custom domain, see [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md).

## Using the S3 console


**To enable static website hosting**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable static website hosting for.

1. Choose **Properties**.

1. Under **Static website hosting**, choose **Edit**.

1. Under **Static website hosting**, choose **Enable**.

1. In **Index document**, enter the file name of the index document, typically `index.html`. 

   The index document name is case sensitive and must exactly match the file name of the HTML index document that you plan to upload to your S3 bucket. When you configure a bucket for website hosting, you must specify an index document. Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders. For more information, see [Configuring an index document](IndexDocumentSupport.md).

1. To provide your own custom error document for 4XX class errors, in **Error document**, enter the custom error document file name. 

   The error document name is case sensitive and must exactly match the file name of the HTML error document that you plan to upload to your S3 bucket. If you don't specify a custom error document and an error occurs, Amazon S3 returns a default HTML error document. For more information, see [Configuring a custom error document](CustomErrorDocSupport.md).

1. (Optional) If you want to specify advanced redirection rules, in **Redirection rules**, enter JSON to describe the rules.

   For example, you can conditionally route requests according to specific object key names or prefixes in the request. For more information, see [Configure redirection rules to use advanced conditional redirects](how-to-page-redirect.md#advanced-conditional-redirects).

1. Choose **Save changes**.

   Amazon S3 enables static website hosting for your bucket. At the bottom of the page, under **Static website hosting**, you see the website endpoint for your bucket.

1. Under **Static website hosting**, note the **Endpoint**.

   The **Endpoint** is the Amazon S3 website endpoint for your bucket. After you finish configuring your bucket as a static website, you can use this endpoint to test your website.
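The redirection rules that you can enter in the procedure above use Amazon S3's routing-rules JSON format. The following sketch, with illustrative prefix names, redirects any request whose key starts with `docs/` to the same key under `documents/`:

```json
[
    {
        "Condition": {
            "KeyPrefixEquals": "docs/"
        },
        "Redirect": {
            "ReplaceKeyPrefixWith": "documents/"
        }
    }
]
```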

## Using the REST API


For more information about sending REST requests directly to enable static website hosting, see the following sections in the *Amazon Simple Storage Service API Reference*:
+ [PUT Bucket website](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTwebsite.html)
+ [GET Bucket website](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETwebsite.html)
+ [DELETE Bucket website](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEwebsite.html)

## Using the AWS SDKs


To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. You can also use the AWS SDKs to create, update, and delete the website configuration programmatically. The SDKs provide wrapper classes around the Amazon S3 REST API. If your application requires it, you can send REST API requests directly from your application. 

------
#### [ .NET ]

The following example shows how to use the AWS SDK for .NET to manage website configuration for a bucket. To add a website configuration to a bucket, you provide a bucket name and a website configuration. The website configuration must include an index document and can contain an optional error document. These documents must be stored in the bucket. For more information, see [PUT Bucket website](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTwebsite.html). For more information about the Amazon S3 website feature, see [Hosting a static website using Amazon S3](WebsiteHosting.md). 

The following C# code example adds a website configuration to the specified bucket. The configuration specifies both the index document and the error document names. For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class WebsiteConfigTest
    {
        private const string bucketName = "*** bucket name ***";
        private const string indexDocumentSuffix = "*** index object key ***"; // For example, index.html.
        private const string errorDocument = "*** error object key ***"; // For example, error.html.
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;
        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            AddWebsiteConfigurationAsync(bucketName, indexDocumentSuffix, errorDocument).Wait();
        }

        static async Task AddWebsiteConfigurationAsync(string bucketName,
                                            string indexDocumentSuffix,
                                            string errorDocument)
        {
            try
            {
                // 1. Put the website configuration.
                PutBucketWebsiteRequest putRequest = new PutBucketWebsiteRequest()
                {
                    BucketName = bucketName,
                    WebsiteConfiguration = new WebsiteConfiguration()
                    {
                        IndexDocumentSuffix = indexDocumentSuffix,
                        ErrorDocument = errorDocument
                    }
                };
                PutBucketWebsiteResponse response = await client.PutBucketWebsiteAsync(putRequest);

                // 2. Get the website configuration.
                GetBucketWebsiteRequest getRequest = new GetBucketWebsiteRequest()
                {
                    BucketName = bucketName
                };
                GetBucketWebsiteResponse getResponse = await client.GetBucketWebsiteAsync(getRequest);
                Console.WriteLine("Index document: {0}", getResponse.WebsiteConfiguration.IndexDocumentSuffix);
                Console.WriteLine("Error document: {0}", getResponse.WebsiteConfiguration.ErrorDocument);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
```

------
#### [ PHP ]

The following PHP example adds a website configuration to the specified bucket. The `create_website_config` method explicitly provides the index document and error document names. The example also retrieves the website configuration and prints the response. For more information about the Amazon S3 website feature, see [Hosting a static website using Amazon S3](WebsiteHosting.md).

For information about setting up and running the code examples, see the [AWS SDK for PHP Developer Guide](https://docs.aws.amazon.com/sdk-for-php/latest/developer-guide/welcome.html).

```
<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);


// Add the website configuration.
$s3->putBucketWebsite([
    'Bucket'                => $bucket,
    'WebsiteConfiguration'  => [
        'IndexDocument' => ['Suffix' => 'index.html'],
        'ErrorDocument' => ['Key' => 'error.html']
    ]
]);

// Retrieve the website configuration.
$result = $s3->getBucketWebsite([
    'Bucket' => $bucket
]);
echo $result->getPath('IndexDocument/Suffix');

// Delete the website configuration.
$s3->deleteBucketWebsite([
    'Bucket' => $bucket
]);
```

------

## Using the AWS CLI


For more information about using the AWS CLI to configure an S3 bucket as a static website, see [website](https://docs.aws.amazon.com/cli/latest/reference/s3/website.html) in the *AWS CLI Command Reference*.

Next, you must configure your index document and set permissions. For information, see [Configuring an index document](IndexDocumentSupport.md) and [Setting permissions for website access](WebsiteAccessPermissionsReqd.md). 

You can also optionally configure an [error document](CustomErrorDocSupport.md), [web traffic logging](LoggingWebsiteTraffic.md), or a [redirect](how-to-page-redirect.md).

# Configuring an index document


When you enable website hosting, you must also configure and upload an index document. An *index document* is a webpage that Amazon S3 returns when a request is made to the root of a website or any subfolder. For example, if a user enters `http://www.example.com` in the browser, the user is not requesting any specific page. In that case, Amazon S3 serves up the index document, which is sometimes referred to as the *default page*.

When you enable static website hosting for your bucket, you enter the name of the index document (for example, `index.html`). After you enable static website hosting for your bucket, you upload an HTML file with the index document name to your bucket. 

The trailing slash at the root-level URL is optional. For example, if you configure your website with `index.html` as the index document, either of the following URLs returns `index.html`.

```
http://example-bucket.s3-website.Region.amazonaws.com/
http://example-bucket.s3-website.Region.amazonaws.com
```

For more information about Amazon S3 website endpoints, see [Website endpoints](WebsiteEndpoints.md).

## Index document and folders


In Amazon S3, a bucket is a flat container of objects. It does not provide any hierarchical organization as the file system on your computer does. However, you can create a logical hierarchy by using object key names that imply a folder structure. 

For example, consider a bucket with three objects that have the following key names. Although these are stored with no physical hierarchical organization, you can infer the following logical folder structure from the key names:
+ `sample1.jpg` — Object is at the root of the bucket.
+ `photos/2006/Jan/sample2.jpg` — Object is in the `photos/2006/Jan` subfolder.
+ `photos/2006/Feb/sample3.jpg` — Object is in the `photos/2006/Feb` subfolder. 

In the Amazon S3 console, you can also create a folder in a bucket. For example, you can create a folder named `photos`. You can upload objects to the bucket or to the `photos` folder within the bucket. If you add the object `sample.jpg` to the bucket, the key name is `sample.jpg`. If you upload the object to the `photos` folder, the object key name is `photos/sample.jpg`.

If you create a folder structure in your bucket, you must have an index document at each level. In each folder, the index document must have the same name, for example, `index.html`. When a user specifies a URL that resembles a folder lookup, the presence or absence of a trailing slash determines the behavior of the website. For example, the following URL, with a trailing slash, returns the `photos/index.html` index document. 

```
http://bucket-name.s3-website.Region.amazonaws.com/photos/
```

However, if you exclude the trailing slash from the preceding URL, Amazon S3 first looks for an object `photos` in the bucket. If the `photos` object is not found, it searches for an index document, `photos/index.html`. If that document is found, Amazon S3 returns a `302 Found` message and points to the `photos/` key. For subsequent requests to `photos/`, Amazon S3 returns `photos/index.html`. If the index document is not found, Amazon S3 returns an error.
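The lookup order described above can be sketched as a small simulation. This is an illustrative model of the documented behavior, not the actual Amazon S3 implementation:

```python
# Toy model of how the website endpoint resolves a request path against a
# bucket's object keys, following the trailing-slash rules described above.
# Illustrative only -- not the real S3 implementation.

def resolve(keys, path, index_doc="index.html"):
    """Return (HTTP status, served key or redirect target) for a request path."""
    key = path.lstrip("/")
    if key == "" or key.endswith("/"):
        # Root or folder-style request: serve that folder's index document.
        candidate = key + index_doc
        if candidate in keys:
            return (200, candidate)
        return (404, None)
    if key in keys:
        # An object with this exact key exists, so it is returned directly.
        return (200, key)
    if key + "/" + index_doc in keys:
        # No object, but a folder index exists: 302 redirect adds the slash.
        return (302, key + "/")
    return (404, None)

keys = {"index.html", "photos/index.html", "photos/2006/Jan/sample2.jpg"}
print(resolve(keys, "/photos/"))   # (200, 'photos/index.html')
print(resolve(keys, "/photos"))    # (302, 'photos/')
print(resolve(keys, "/missing"))   # (404, None)
```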

## Configure an index document


To configure an index document using the S3 console, use the following procedure. You can also configure an index document using the REST API, the AWS SDKs, the AWS CLI, or CloudFormation. 

**Note**  
In a versioning-enabled bucket, you can upload multiple versions of the index document (for example, `index.html`), but only the most recent version is served. For more information about using S3 Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

When you enable static website hosting for your bucket, you enter the name of the index document (for example, **index.html**). After you enable static website hosting for the bucket, you upload an HTML file with this index document name to your bucket.

**To configure the index document**

1. Create an `index.html` file.

   If you don't have an `index.html` file, you can use the following HTML to create one:

   ```
   <html xmlns="http://www.w3.org/1999/xhtml" >
   <head>
       <title>My Website Home Page</title>
   </head>
   <body>
     <h1>Welcome to my website</h1>
     <p>Now hosted on Amazon S3!</p>
   </body>
   </html>
   ```

1. Save the index file locally.

   The index document file name must exactly match the index document name that you enter in the **Static website hosting** dialog box. The index document name is case sensitive. For example, if you enter `index.html` for the **Index document** name in the **Static website hosting** dialog box, your index document file name must also be `index.html` and not `Index.html`.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to use to host a static website.

1. Enable static website hosting for your bucket, and enter the exact name of your index document (for example, `index.html`). For more information, see [Enabling website hosting](EnableWebsiteHosting.md).

   After enabling static website hosting, proceed to step 6. 

1. To upload the index document to your bucket, do one of the following:
   + Drag and drop the index file into the console bucket listing.
   + Choose **Upload**, and follow the prompts to choose and upload the index file.

   For step-by-step instructions, see [Uploading objects](upload-objects.md).

1. (Optional) Upload other website content to your bucket.

Next, you must set permissions for website access. For information, see [Setting permissions for website access](WebsiteAccessPermissionsReqd.md). 

You can also optionally configure an [error document](CustomErrorDocSupport.md), [web traffic logging](LoggingWebsiteTraffic.md), or a [redirect](how-to-page-redirect.md).

# Configuring a custom error document


After you configure your bucket as a static website, when an error occurs, Amazon S3 returns an HTML error document. You can optionally configure your bucket with a custom error document so that Amazon S3 returns that document when an error occurs. 

**Note**  
Some browsers display their own error message when an error occurs, ignoring the error document that Amazon S3 returns. For example, when an HTTP 404 Not Found error occurs, Google Chrome might ignore the error document that Amazon S3 returns and display its own error.

**Topics**
+ [Amazon S3 HTTP response codes](#s3-http-error-codes)
+ [Configuring a custom error document](#custom-error-document)

## Amazon S3 HTTP response codes


The following table lists the subset of HTTP response codes that Amazon S3 returns when an error occurs. 


| HTTP error code | Description | 
| --- | --- | 
| 301 Moved Permanently | When a user sends a request directly to the Amazon S3 website endpoint (http://s3-website.Region.amazonaws.com/), Amazon S3 returns a 301 Moved Permanently response and redirects those requests to https://aws.amazon.com/s3/. | 
| 302 Found |  When Amazon S3 receives a request for a key `x`, `http://bucket-name.s3-website.Region.amazonaws.com/x`, without a trailing slash, it first looks for the object with the key name `x`. If the object is not found, Amazon S3 determines that the request is for subfolder `x` and redirects the request by adding a slash at the end, and returns **302 Found**.   | 
| 304 Not Modified |  Amazon S3 uses request headers `If-Modified-Since`, `If-Unmodified-Since`, `If-Match` and/or `If-None-Match` to determine whether the requested object is same as the cached copy held by the client. If the object is the same, the website endpoint returns a **304 Not Modified** response.  | 
| 400 Malformed Request |  The website endpoint responds with a **400 Malformed Request** when a user attempts to access a bucket through the incorrect regional endpoint.   | 
| 403 Forbidden |  The website endpoint responds with a **403 Forbidden** when a user request translates to an object that is not publicly readable. The object owner must make the object publicly readable using a bucket policy or an ACL.   | 
| 404 Not Found |  The website endpoint responds with **404 Not Found** for the following reasons: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/CustomErrorDocSupport.html) You can create a custom document that is returned for **404 Not Found**. Make sure that the document is uploaded to the bucket configured as a website, and that the website hosting configuration is set to use the document. For information on how Amazon S3 interprets the URL as a request for an object or an index document, see [Configuring an index document](IndexDocumentSupport.md).   | 
| 500 Service Error |  The website endpoint responds with a **500 Service Error** when an internal server error occurs.  | 
| 503 Service Unavailable |  The website endpoint responds with a **503 Service Unavailable** when Amazon S3 determines that you need to reduce your request rate.   | 

 For each of these errors, Amazon S3 returns a predefined HTML message. The following is an example HTML message that is returned for a **403 Forbidden** response.

![\[403 Forbidden error message example\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/WebsiteErrorExample403.png)


## Configuring a custom error document


When you configure your bucket as a static website, you can provide a custom error document that contains a user-friendly error message and additional help. Amazon S3 returns your custom error document for only the HTTP 4XX class of error codes. 

To configure a custom error document using the S3 console, follow the steps below. You can also configure an error document using the REST API, the AWS SDKs, the AWS CLI, or CloudFormation. For more information, see the following:
+ [PutBucketWebsite](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html) in the *Amazon Simple Storage Service API Reference*
+ [AWS::S3::Bucket WebsiteConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-websiteconfiguration.html) in the *CloudFormation User Guide*
+ [put-bucket-website](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-website.html) in the *AWS CLI Command Reference*

When you enable static website hosting for your bucket, you enter the name of the error document (for example, **404.html**). After you enable static website hosting for the bucket, you upload an HTML file with this error document name to your bucket.

**To configure an error document**

1. Create an error document, for example `404.html`.

1. Save the error document file locally.

   The error document name is case sensitive and must exactly match the name that you enter when you enable static website hosting. For example, if you enter `404.html` for the **Error document** name in the **Static website hosting** dialog box, your error document file name must also be `404.html`.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to use to host a static website.

1. Enable static website hosting for your bucket, and enter the exact name of your error document (for example, `404.html`). For more information, see [Enabling website hosting](EnableWebsiteHosting.md).

   After enabling static website hosting, proceed to step 7. 

1. To upload the error document to your bucket, do one of the following:
   + Drag and drop the error document file into the console bucket listing.
   + Choose **Upload**, and follow the prompts to choose and upload the error document file.

   For step-by-step instructions, see [Uploading objects](upload-objects.md).
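If you manage your website configuration with the AWS SDK for Python (Boto3) instead of the console, the same error document can be set with `put_bucket_website`. The bucket and file names below are placeholders, and the API call is shown but not executed in this sketch:

```python
import json

# Placeholder names -- replace with your own bucket and error document.
BUCKET = "example.com"
ERROR_DOC = "404.html"

# Website configuration equivalent to the console settings above.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": ERROR_DOC},
}

# With Boto3 (not run here), the call would be:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_website(Bucket=BUCKET, WebsiteConfiguration=website_config)

print(json.dumps(website_config, indent=2))
```

Remember that the `Key` value is case sensitive and must exactly match the name of the uploaded object.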

# Setting permissions for website access


When you configure a bucket as a static website, if you want your website to be public, you can grant public read access. To make your bucket publicly readable, you must disable block public access settings for the bucket and write a bucket policy that grants public read access. If your bucket contains objects that are not owned by the bucket owner, you might also need to add an object access control list (ACL) that grants everyone read access.

If you don't want to disable block public access settings for your bucket but you still want your website to be public, you can create an Amazon CloudFront distribution to serve your static website. For more information, see [Speeding up your website with Amazon CloudFront](website-hosting-cloudfront-walkthrough.md) or [Use an Amazon CloudFront distribution to serve a static website](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started-cloudfront-overview.html) in the *Amazon Route 53 Developer Guide*.

**Note**  
On the website endpoint, if a user requests an object that doesn't exist, Amazon S3 returns HTTP response code `404 (Not Found)`. If the object exists but you haven't granted read permission on it, the website endpoint returns HTTP response code `403 (Access Denied)`. The user can use the response code to infer whether a specific object exists. If you don't want this behavior, you should not enable website support for your bucket. 

**Topics**
+ [Step 1: Edit S3 Block Public Access settings](#block-public-access-static-site)
+ [Step 2: Add a bucket policy](#bucket-policy-static-site)
+ [Object access control lists](#object-acl)

## Step 1: Edit S3 Block Public Access settings


If you want to configure an existing bucket as a static website that has public access, you must edit Block Public Access settings for that bucket. You might also have to edit your account-level Block Public Access settings. Amazon S3 applies the most restrictive combination of the bucket-level and account-level block public access settings.

For example, if you allow public access for a bucket but block all public access at the account level, Amazon S3 will continue to block public access to the bucket. In this scenario, you would have to edit your bucket-level and account-level Block Public Access settings. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

By default, Amazon S3 blocks public access to your account and buckets. If you want to use a bucket to host a static website, you can use these steps to edit your block public access settings. 

**Warning**  
Before you complete these steps, review [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md) to ensure that you understand and accept the risks involved with allowing public access. When you turn off block public access settings to make your bucket public, anyone on the internet can access your bucket. We recommend that you block all public access to your buckets.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the bucket that you have configured as a static website.

1. Choose **Permissions**.

1. Under **Block public access (bucket settings)**, choose **Edit**.

1. Clear **Block *all* public access**, and choose **Save changes**.  
![\[The Amazon S3 console, showing the block public access bucket settings.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/edit-public-access-clear.png)

   Amazon S3 turns off the Block Public Access settings for your bucket. To create a public static website, you might also have to [edit the Block Public Access settings](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html) for your account before adding a bucket policy. If the Block Public Access settings for your account are currently turned on, you see a note under **Block public access (bucket settings)**.
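As a scripted alternative to the console steps above, the same settings can be turned off with the `put_public_access_block` API (shown here with Boto3; the bucket name is a placeholder and the call itself is left commented out):

```python
# All four settings must be off (False) for a public bucket policy to take
# effect. The bucket name used in the commented call is a placeholder.
public_access_block = {
    "BlockPublicAcls": False,
    "IgnorePublicAcls": False,
    "BlockPublicPolicy": False,
    "RestrictPublicBuckets": False,
}

# With Boto3 (not run here):
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="example.com",
#     PublicAccessBlockConfiguration=public_access_block,
# )

print(all(v is False for v in public_access_block.values()))
```

If your account-level Block Public Access settings are on, remember that they override these bucket-level settings.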

## Step 2: Add a bucket policy


To make the objects in your bucket publicly readable, you must write a bucket policy that grants everyone `s3:GetObject` permission. 

After you edit S3 Block Public Access settings, you can add a bucket policy to grant public read access to your bucket. When you grant public read access, anyone on the internet can access your bucket.

**Important**  
The following policy is an example only and allows full access to the contents of your bucket. Before you proceed with this step, review [How can I secure the files in my Amazon S3 bucket?](https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/) to ensure that you understand the best practices for securing the files in your S3 bucket and risks involved in granting public access.

1. Under **Buckets**, choose the name of your bucket.

1. Choose **Permissions**.

1. Under **Bucket Policy**, choose **Edit**.

1. To grant public read access for your website, copy the following bucket policy, and paste it in the **Bucket policy editor**.

   ```
   {
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "PublicReadGetObject",
               "Effect": "Allow",
               "Principal": "*",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::Bucket-Name/*"
               ]
           }
       ]
   }
   ```

1. Update the `Resource` to your bucket name.

   In the preceding example bucket policy, *Bucket-Name* is a placeholder for the bucket name. To use this bucket policy with your own bucket, you must update this name to match your bucket name.

1. Choose **Save changes**.

   A message appears indicating that the bucket policy has been successfully added.

   If you see an error that says `Policy has invalid resource`, confirm that the bucket name in the bucket policy matches your bucket name. For information about adding a bucket policy, see [How do I add an S3 bucket policy?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-bucket-policy.html)

   If you get an error message and cannot save the bucket policy, check your account and bucket Block Public Access settings to confirm that you allow public access to the bucket.
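To avoid the `Policy has invalid resource` error, you can generate the policy from the bucket name instead of editing it by hand. The following Boto3-oriented sketch builds the same policy as above; the `put_bucket_policy` call is shown but not run, and the bucket name is a placeholder:

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Return the public-read bucket policy with the Resource filled in."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
            }
        ],
    }
    return json.dumps(policy)

# With Boto3 (not run here):
# import boto3
# boto3.client("s3").put_bucket_policy(
#     Bucket="example.com", Policy=public_read_policy("example.com"))

print(public_read_policy("example.com"))
```

Because the `Resource` ARN is derived from the bucket name, it always matches the bucket the policy is applied to.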

## Object access control lists


You can use a bucket policy to grant public read permission to your objects. However, the bucket policy applies only to objects that are owned by the bucket owner. If your bucket contains objects that aren't owned by the bucket owner, the bucket owner should use the object access control list (ACL) to grant public READ permission on those objects.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.

 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**Important**  
If your general purpose bucket uses the Bucket owner enforced setting for S3 Object Ownership, you must use policies to grant access to your general purpose bucket and the objects in it. With the Bucket owner enforced setting enabled, requests to set access control lists (ACLs) or update ACLs fail and return the `AccessControlListNotSupported` error code. Requests to read ACLs are still supported.

To make an object publicly readable using an ACL, grant READ permission to the `AllUsers` group, as shown in the following grant element. Add this grant element to the object ACL. For information about managing ACLs, see [Access control list (ACL) overview](acl-overview.md).

```
<Grant>
  <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:type="Group">
    <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
  </Grantee>
  <Permission>READ</Permission>
</Grant>
```
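For reference, the same grant can be expressed through the AWS SDK for Python (Boto3). The sketch below shows the grant as a Python structure and the simpler canned `public-read` ACL call; the names are placeholders, and the call would fail on a bucket that uses the Bucket owner enforced setting:

```python
# The AllUsers READ grant above, expressed as the structure that the
# S3 PutObjectAcl API accepts in its AccessControlPolicy.Grants list.
all_users_read_grant = {
    "Grantee": {
        "Type": "Group",
        "URI": "http://acs.amazonaws.com/groups/global/AllUsers",
    },
    "Permission": "READ",
}

# In practice the canned ACL is simpler (not run here; returns
# AccessControlListNotSupported if ACLs are disabled on the bucket):
# import boto3
# boto3.client("s3").put_object_acl(
#     Bucket="example.com", Key="index.html", ACL="public-read")

print(all_users_read_grant["Permission"])
```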

# (Optional) Logging web traffic

You can optionally enable Amazon S3 server access logging for a bucket that is configured as a static website. Server access logging provides detailed records for the requests that are made to your bucket. For more information, see [Logging requests with server access logging](ServerLogs.md). If you plan to use Amazon CloudFront to [speed up your website](website-hosting-cloudfront-walkthrough.md), you can also use CloudFront logging. For more information, see [Configuring and Using Access Logs](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html) in the *Amazon CloudFront Developer Guide*.

**To enable server access logging for your static website bucket**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the same Region where you created the bucket that is configured as a static website, create a general purpose bucket for logging, for example `logs.example.com`.

1. Create a folder for the server access logging log files (for example, `logs`).

1. (Optional) If you want to use CloudFront to improve your website performance, create a folder for the CloudFront log files (for example, `cdn`).

   For more information, see [Speeding up your website with Amazon CloudFront](website-hosting-cloudfront-walkthrough.md).

1. In the **Buckets** list, choose your bucket.

1. Choose **Properties**.

1. Under **Server access logging**, choose **Edit**.

1. Choose **Enable**.

1. Under **Target bucket**, choose the bucket and folder destination for the server access logs:
   + Browse to the folder and bucket location:

     1. Choose **Browse S3**.

     1. Choose the bucket name, and then choose the logs folder. 

     1. Choose **Choose path**.
   + Enter the S3 bucket path, for example, **s3://logs.example.com/logs/**.

1. Choose **Save changes**.

   In your log bucket, you can now access your logs. Amazon S3 writes website access logs to your log bucket every 2 hours.
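The console steps above map onto a single `put_bucket_logging` call in the AWS SDK for Python (Boto3). This sketch uses the example bucket and folder names from this section; the API call is shown but not run:

```python
# Server access logging configuration matching the console steps above.
# "logs.example.com" and the "logs/" prefix are the example values from
# this section -- replace them with your own names.
logging_status = {
    "LoggingEnabled": {
        "TargetBucket": "logs.example.com",
        "TargetPrefix": "logs/",
    }
}

# With Boto3 (not run here):
# import boto3
# boto3.client("s3").put_bucket_logging(
#     Bucket="example.com", BucketLoggingStatus=logging_status)

print(logging_status["LoggingEnabled"]["TargetBucket"])
```

The `TargetPrefix` value is what keeps server access logs in their own folder, separate from any CloudFront logs you store in the same bucket.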

# (Optional) Configuring a webpage redirect

If your Amazon S3 bucket is configured for static website hosting, you can configure redirects for your bucket or the objects in it. You have the following options for configuring redirects.

**Topics**
+ [Redirect requests for your bucket's website endpoint to another bucket or domain](#redirect-endpoint-host)
+ [Configure redirection rules to use advanced conditional redirects](#advanced-conditional-redirects)
+ [Redirect requests for an object](#redirect-requests-object-metadata)

## Redirect requests for your bucket's website endpoint to another bucket or domain

You can redirect all requests to a website endpoint for a bucket to another bucket or domain. If you redirect all requests, any request made to the website endpoint is redirected to the specified bucket or domain. 

For example, if your root domain is `example.com`, and you want to serve requests for both `http://example.com` and `http://www.example.com`, you must create two buckets named `example.com` and `www.example.com`. Then, maintain the content in the `example.com` bucket, and configure the other `www.example.com` bucket to redirect all requests to the `example.com` bucket. For more information, see [Configuring a Static Website Using a Custom Domain Name](https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html).

**To redirect requests for a bucket website endpoint**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Under **Buckets**, choose the name of the bucket that you want to redirect requests from (for example, `www.example.com`).

1. Choose **Properties**.

1. Under **Static website hosting**, choose **Edit**.

1. Choose **Redirect requests for an object**. 

1. In the **Host name** box, enter the website endpoint for your bucket or your custom domain.

   For example, if you are redirecting to a root domain address, you would enter **example.com**.

1. For **Protocol**, choose the protocol for the redirected requests (**none**, **http**, or **https**).

   If you do not specify a protocol, the default option is **none**.

1. Choose **Save changes**.
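The same redirect can be applied programmatically: in a website configuration, redirecting all requests is expressed with the `RedirectAllRequestsTo` element. The sketch below uses Boto3 naming and the example domains from this section; the call itself is not run:

```python
# Redirect every request for www.example.com to example.com, as in the
# console steps above. Domain names are the example values from this section.
website_config = {
    "RedirectAllRequestsTo": {
        "HostName": "example.com",
        "Protocol": "http",  # or "https"; omit this key for "none"
    }
}

# With Boto3 (not run here):
# import boto3
# boto3.client("s3").put_bucket_website(
#     Bucket="www.example.com", WebsiteConfiguration=website_config)

print(website_config["RedirectAllRequestsTo"]["HostName"])
```

Note that a configuration with `RedirectAllRequestsTo` cannot also contain index document, error document, or routing rule elements.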

## Configure redirection rules to use advanced conditional redirects

Using advanced redirection rules, you can route requests conditionally according to specific object key names, prefixes in the request, or response codes. For example, suppose that you delete or rename an object in your bucket. You can add a routing rule that redirects requests to another object. If you want to make a folder unavailable, you can add a routing rule that redirects requests to another webpage. You can also add a routing rule that handles error conditions by routing requests that return an error to another domain where the error can be processed.

When enabling static website hosting for your bucket, you can optionally specify advanced redirection rules. Amazon S3 has a limit of 50 routing rules per website configuration. If you require more than 50 routing rules, you can use object redirects instead. For more information, see [Using the S3 console](#page-redirect-using-console).

For more information about configuring routing rules using the REST API, see [PutBucketWebsite](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html) in the *Amazon Simple Storage Service API Reference*.

**Important**  
To create redirection rules in the Amazon S3 console, you must use JSON. For JSON examples, see [Redirection rules examples](#redirect-rule-examples).

**To configure redirection rules for a static website**

To add redirection rules for a bucket that already has static website hosting enabled, follow these steps.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of a bucket that you have configured as a static website.

1. Choose **Properties**.

1. Under **Static website hosting**, choose **Edit**.

1. In the **Redirection rules** box, enter your redirection rules in JSON. 

   In the S3 console, you describe the rules using JSON. For JSON examples, see [Redirection rules examples](#redirect-rule-examples). Amazon S3 has a limit of 50 routing rules per website configuration.

1. Choose **Save changes**.
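The JSON that you paste into the console maps directly onto the `RoutingRules` element of the website configuration that `PutBucketWebsite` accepts. The following is a Boto3-oriented sketch with placeholder prefixes; the API call is left commented out:

```python
# A website configuration with one routing rule: redirect the old "docs/"
# prefix to "documents/". The prefixes are placeholders.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "RoutingRules": [
        {
            "Condition": {"KeyPrefixEquals": "docs/"},
            "Redirect": {"ReplaceKeyPrefixWith": "documents/"},
        }
    ],
}

# Amazon S3 allows at most 50 routing rules per website configuration.
assert len(website_config["RoutingRules"]) <= 50

# With Boto3 (not run here):
# import boto3
# boto3.client("s3").put_bucket_website(
#     Bucket="example.com", WebsiteConfiguration=website_config)
```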

### Routing rule elements


The following is the general syntax for defining the routing rules in a website configuration in JSON and XML. To configure redirection rules in the S3 console, you must use JSON. For JSON examples, see [Redirection rules examples](#redirect-rule-examples).

------
#### [ JSON ]

```
[
    {
      "Condition": {
        "HttpErrorCodeReturnedEquals": "string",
        "KeyPrefixEquals": "string"
      },
      "Redirect": {
        "HostName": "string",
        "HttpRedirectCode": "string",
        "Protocol": "http"|"https",
        "ReplaceKeyPrefixWith": "string",
        "ReplaceKeyWith": "string"
      }
    }
  ]
 
Note: Each Redirect must have at least one child element. You can have either ReplaceKeyPrefixWith or ReplaceKeyWith, but not both.
```

------
#### [ XML ]

```
<RoutingRules> =
    <RoutingRules>
         <RoutingRule>...</RoutingRule>
         [<RoutingRule>...</RoutingRule>   
         ...]
    </RoutingRules>

<RoutingRule> =
   <RoutingRule>
      [ <Condition>...</Condition> ]
      <Redirect>...</Redirect>
   </RoutingRule>

<Condition> =
   <Condition> 
      [ <KeyPrefixEquals>...</KeyPrefixEquals> ]
      [ <HttpErrorCodeReturnedEquals>...</HttpErrorCodeReturnedEquals> ]
   </Condition>
    Note: <Condition> must have at least one child element.

<Redirect> =
   <Redirect> 
      [ <HostName>...</HostName> ]
      [ <Protocol>...</Protocol> ]
      [ <ReplaceKeyPrefixWith>...</ReplaceKeyPrefixWith>  ]
      [ <ReplaceKeyWith>...</ReplaceKeyWith> ]
      [ <HttpRedirectCode>...</HttpRedirectCode> ]
   </Redirect>

Note: <Redirect> must have at least one child element. You can have either ReplaceKeyPrefixWith or ReplaceKeyWith, but not both.
```

------

The following table describes the elements in the routing rule.


|  Name  |  Description  | 
| --- | --- | 
| RoutingRules |  Container for a collection of RoutingRule elements.  | 
| RoutingRule |  A rule that identifies a condition and the redirect that is applied when the condition is met. A `RoutingRule` must contain a `Redirect` element and can optionally contain a `Condition` element.  | 
| Condition |  Container for describing a condition that must be met for the specified redirect to be applied. If the routing rule does not include a condition, the rule is applied to all requests.  | 
| KeyPrefixEquals |  The prefix of the object key name from which requests are redirected.  `KeyPrefixEquals` is required if `HttpErrorCodeReturnedEquals` is not specified. If both `KeyPrefixEquals` and `HttpErrorCodeReturnedEquals` are specified, both must be true for the condition to be met.  | 
| HttpErrorCodeReturnedEquals |  The HTTP error code that must match for the redirect to apply. If an error occurs, and if the error code meets this value, then the specified redirect applies. `HttpErrorCodeReturnedEquals` is required if `KeyPrefixEquals` is not specified. If both `KeyPrefixEquals` and `HttpErrorCodeReturnedEquals` are specified, both must be true for the condition to be met.  | 
| Redirect |  Container element that provides instructions for redirecting the request. You can redirect requests to another host or another page, or you can specify another protocol to use. A `RoutingRule` must have a `Redirect` element. A `Redirect` element must contain at least one of the following sibling elements: `Protocol`, `HostName`, `ReplaceKeyPrefixWith`, `ReplaceKeyWith`, or `HttpRedirectCode`.  | 
| Protocol |  The protocol, `http` or `https`, to be used in the `Location` header that is returned in the response.  If one of its siblings is supplied, `Protocol` is not required.  | 
| HostName |  The hostname to be used in the `Location` header that is returned in the response. If one of its siblings is supplied, `HostName` is not required.  | 
| ReplaceKeyPrefixWith |  The prefix of the object key name that replaces the value of `KeyPrefixEquals` in the redirect request.  If one of its siblings is supplied, `ReplaceKeyPrefixWith` is not required. It can be supplied only if `ReplaceKeyWith` is not supplied.  | 
| ReplaceKeyWith |  The object key to be used in the `Location` header that is returned in the response.  If one of its siblings is supplied, `ReplaceKeyWith` is not required. It can be supplied only if `ReplaceKeyPrefixWith` is not supplied.  | 
| HttpRedirectCode |  The HTTP redirect code to be used in the `Location` header that is returned in the response. If one of its siblings is supplied, `HttpRedirectCode` is not required.  | 

#### Redirection rules examples


The following examples explain common redirection tasks:

**Important**  
To create redirection rules in the Amazon S3 console, you must use JSON.

**Example 1: Redirect after renaming a key prefix**  
Suppose that your bucket contains the following objects:  
+ index.html
+ docs/article1.html
+ docs/article2.html
You decide to rename the folder from `docs/` to `documents/`. After you make this change, you need to redirect requests for the prefix `docs/` to `documents/`. For example, a request for `docs/article1.html` is redirected to `documents/article1.html`.  
In this case, you add the following routing rule to the website configuration.  

```
[
    {
        "Condition": {
            "KeyPrefixEquals": "docs/"
        },
        "Redirect": {
            "ReplaceKeyPrefixWith": "documents/"
        }
    }
]
```

```
  <RoutingRules>
    <RoutingRule>
    <Condition>
      <KeyPrefixEquals>docs/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
    </Redirect>
    </RoutingRule>
  </RoutingRules>
```

**Example 2: Redirect requests for a deleted folder to a page**  
Suppose that you delete the `images/` folder (that is, you delete all objects with the key prefix `images/`). You can add a routing rule that redirects requests for any object with the key prefix `images/` to a page named `folderdeleted.html`.  

```
[
    {
        "Condition": {
            "KeyPrefixEquals": "images/"
        },
        "Redirect": {
            "ReplaceKeyWith": "folderdeleted.html"
        }
    }
]
```

```
  <RoutingRules>
    <RoutingRule>
    <Condition>
       <KeyPrefixEquals>images/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyWith>folderdeleted.html</ReplaceKeyWith>
    </Redirect>
    </RoutingRule>
  </RoutingRules>
```

**Example 3: Redirect to another domain with a specific path**  
Suppose you want to redirect requests for a specific path to another domain. For example, you want to redirect requests for `/redirect/me` to `https://example.com/new/path`.  
When using both `HostName` and `ReplaceKeyWith` together, Amazon S3 constructs the redirect URL by concatenating the hostname and the replacement key with a forward slash between them. Therefore, you should not include a leading slash in the `ReplaceKeyWith` value. Amazon S3 automatically adds the forward slash between the hostname and the replacement key.  

```
[
    {
        "Condition": {
            "KeyPrefixEquals": "redirect/me"
        },
        "Redirect": {
            "HostName": "example.com",
            "ReplaceKeyWith": "new/path"
        }
    }
]
```

```
  <RoutingRules>
    <RoutingRule>
    <Condition>
      <KeyPrefixEquals>redirect/me</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <HostName>example.com</HostName>
      <ReplaceKeyWith>new/path</ReplaceKeyWith>
    </Redirect>
    </RoutingRule>
  </RoutingRules>
```
This configuration redirects a request for `https://yourbucket.s3-website-region.amazonaws.com/redirect/me` to `https://example.com/new/path`. Note that `ReplaceKeyWith` is set to `new/path` without a leading slash.

**Example 4: Redirect for an HTTP error**  
Suppose that when a requested object is not found, you want to redirect requests to an Amazon Elastic Compute Cloud (Amazon EC2) instance. Add a redirection rule so that when an HTTP status code 404 (Not Found) is returned, the site visitor is redirected to an Amazon EC2 instance that handles the request.   
The following example also inserts the object key prefix `report-404/` in the redirect. For example, if you request a page `ExamplePage.html` and it results in an HTTP 404 error, the request is redirected to a page `report-404/ExamplePage.html` on the specified Amazon EC2 instance. If there is no routing rule and the HTTP error 404 occurs, the error document that is specified in the configuration is returned.  

```
[
    {
        "Condition": {
            "HttpErrorCodeReturnedEquals": "404"
        },
        "Redirect": {
            "HostName": "ec2-11-22-333-44.compute-1.amazonaws.com",
            "ReplaceKeyPrefixWith": "report-404/"
        }
    }
]
```

```
  <RoutingRules>
    <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>ec2-11-22-333-44.compute-1.amazonaws.com</HostName>
      <ReplaceKeyPrefixWith>report-404/</ReplaceKeyPrefixWith>
    </Redirect>
    </RoutingRule>
  </RoutingRules>
```

## Redirect requests for an object


You can redirect requests for an object to another object or URL by setting the website redirect location in the metadata of the object. You set the redirect by adding the `x-amz-website-redirect-location` property to the object metadata. On the Amazon S3 console, you set the **Website Redirect Location** in the metadata of the object. If you use the [Amazon S3 API](#page-redirect-using-rest-api), you set `x-amz-website-redirect-location`. The website then interprets the object as a 301 redirect. 

To redirect a request to another object, you set the redirect location to the key of the target object. To redirect a request to an external URL, you set the redirect location to the URL that you want. For more information about object metadata, see [System-defined object metadata](UsingMetadata.md#SysMetadata).

When you set a page redirect, you can either keep or delete the source object content. For example, if you have a `page1.html` object in your bucket, you can redirect any requests for this page to another object, `page2.html`. You have two options:
+ Keep the content of the `page1.html` object and redirect page requests.
+ Delete the content of `page1.html` and upload a zero-byte object named `page1.html` to replace the existing object and redirect page requests. 

### Using the S3 console


1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that you have configured as a static website (for example, `example.com`).

1. Under **Objects**, select your object.

1. Choose **Actions**, and choose **Edit metadata**.

1. Choose **Metadata**.

1. Choose **Add Metadata**.

1. Under **Type**, choose **System Defined**.

1. In **Key**, choose **x-amz-website-redirect-location**.

1. In **Value**, enter the key name of the object that you want to redirect to, for example, `/page2.html`.

   For another object in the same bucket, the `/` prefix in the value is required. You can also set the value to an external URL, for example, `http://www.example.com`.

1. Choose **Edit metadata**.

### Using the REST API


The following Amazon S3 API actions support the `x-amz-website-redirect-location` header in the request. Amazon S3 stores the header value in the object metadata as `x-amz-website-redirect-location`. 
+ [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html)
+ [Initiate Multipart Upload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html)
+ [POST Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html)
+ [PUT Object - Copy](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html)

A bucket configured for website hosting has both the website endpoint and the REST endpoint. A request for a page that is configured as a 301 redirect has the following possible outcomes, depending on the endpoint of the request:
+ **Region-specific website endpoint** – Amazon S3 redirects the page request according to the value of the `x-amz-website-redirect-location` property. 
+ **REST endpoint** – Amazon S3 doesn't redirect the page request. It returns the requested object.

For more information about the endpoints, see [Key differences between a website endpoint and a REST API endpoint](WebsiteEndpoints.md#WebsiteRestEndpointDiff).

When setting a page redirect, you can either keep or delete the object content. For example, suppose that you have a `page1.html` object in your bucket.
+ To keep the content of `page1.html` and only redirect page requests, you can submit a [PUT Object - Copy](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) request to create a new `page1.html` object that uses the existing `page1.html` object as the source. In your request, you set the `x-amz-website-redirect-location` header. When the request is complete, you have the original page with its content unchanged, but Amazon S3 redirects any requests for the page to the redirect location that you specify.
+ To delete the content of the `page1.html` object and redirect requests for the page, you can send a PUT Object request to upload a zero-byte object that has the same object key: `page1.html`. In the PUT request, you set `x-amz-website-redirect-location` for `page1.html` to the new object. When the request is complete, `page1.html` has no content, and requests are redirected to the location that is specified by `x-amz-website-redirect-location`.

When you retrieve the object using the [GET Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) action, along with other object metadata, Amazon S3 returns the `x-amz-website-redirect-location` header in the response.
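With the AWS SDK for Python (Boto3), the `x-amz-website-redirect-location` header surfaces as the `WebsiteRedirectLocation` parameter. The following sketch outlines both options described above; the bucket, keys, and redirect targets are placeholders, and the calls are shown as comments rather than executed:

```python
# Option 1: keep the content of page1.html and only add the redirect, by
# copying the object onto itself with replaced metadata (not run here):
# import boto3
# s3 = boto3.client("s3")
# s3.copy_object(
#     Bucket="example.com", Key="page1.html",
#     CopySource={"Bucket": "example.com", "Key": "page1.html"},
#     WebsiteRedirectLocation="/page2.html",
#     MetadataDirective="REPLACE",
# )

# Option 2: replace the content with a zero-byte object that redirects:
# s3.put_object(Bucket="example.com", Key="page1.html", Body=b"",
#               WebsiteRedirectLocation="/page2.html")

# The redirect target must start with "/" for another key in the same
# bucket, or be a full URL such as "http://www.example.com".
redirect_target = "/page2.html"
print(redirect_target.startswith("/"))
```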

# Using cross-origin resource sharing (CORS)

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources. 

This section provides an overview of CORS. The subtopics describe how you can enable CORS using the Amazon S3 console, or programmatically by using the Amazon S3 REST API and the AWS SDKs. 

## Cross-origin resource sharing: Use-case scenarios


The following are example scenarios for using CORS.

**Scenario 1**  
Suppose that you are hosting a website in an Amazon S3 bucket named `website` as described in [Hosting a static website using Amazon S3](WebsiteHosting.md). Your users load the website endpoint:

```
http://website.s3-website.us-east-1.amazonaws.com
```

Now you want to use JavaScript on the webpages that are stored in this bucket to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, `website.s3.us-east-1.amazonaws.com`. A browser would normally block JavaScript from making those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from `website.s3-website.us-east-1.amazonaws.com`.

**Scenario 2**  
Suppose that you want to host a web font from your S3 bucket. Browsers require a CORS check (also called a preflight check) before loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

## How does Amazon S3 evaluate the CORS configuration on a bucket?


When Amazon S3 receives a preflight request from a browser, it evaluates the CORS configuration for the bucket and uses the first `CORSRule` that matches the incoming browser request to enable a cross-origin request. For a rule to match, the following conditions must be met:
+ The `Origin` header in a CORS request to your bucket must match the origins in the `AllowedOrigins` element in your CORS configuration.
+ The HTTP methods that are specified in the `Access-Control-Request-Method` in a CORS request to your bucket must match the method or methods listed in the `AllowedMethods` element in your CORS configuration. 
+ The headers listed in the `Access-Control-Request-Headers` header in a preflight request must match the headers in the `AllowedHeaders` element in your CORS configuration. 

**Note**  
The ACLs and policies continue to apply when you enable CORS on your bucket.
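As a rough mental model, the first-match evaluation described above can be sketched in Python. This is a local approximation only: the `find_matching_rule` helper and its use of `fnmatch`-style wildcard handling are illustrative, not S3's actual implementation.

```python
from fnmatch import fnmatchcase

def find_matching_rule(rules, origin, method, request_headers=()):
    """Return the first rule that matches the preflight request, or None.

    A rule matches when the Origin matches an AllowedOrigins pattern,
    the method is in AllowedMethods, and every requested header matches
    an AllowedHeaders pattern (headers are compared case-insensitively).
    """
    for rule in rules:
        if not any(fnmatchcase(origin, pat) for pat in rule["AllowedOrigins"]):
            continue
        if method not in rule["AllowedMethods"]:
            continue
        allowed = rule.get("AllowedHeaders", [])
        if not all(any(fnmatchcase(h.lower(), pat.lower()) for pat in allowed)
                   for h in request_headers):
            continue
        return rule
    return None

rules = [
    {"AllowedOrigins": ["http://*.example.com"],
     "AllowedMethods": ["PUT", "POST"],
     "AllowedHeaders": ["*"]},
    {"AllowedOrigins": ["*"], "AllowedMethods": ["GET"]},
]

print(find_matching_rule(rules, "http://www.example.com", "PUT", ["Authorization"]))  # first rule
print(find_matching_rule(rules, "http://other.org", "GET"))                           # second rule
print(find_matching_rule(rules, "http://other.org", "DELETE"))                        # None
```

Note how the `DELETE` request from `http://other.org` matches no rule, so the cross-origin request would be refused.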

## How Object Lambda Access Point supports CORS


When S3 Object Lambda receives a request from a browser, or when a request includes an `Origin` header, S3 Object Lambda always adds an `"AllowedOrigins":"*"` header field to the response.

For more information about using CORS, see the following topics.

**Topics**
+ [Cross-origin resource sharing: Use-case scenarios](#example-scenarios-cors)
+ [How does Amazon S3 evaluate the CORS configuration on a bucket?](#cors-eval-criteria)
+ [How Object Lambda Access Point supports CORS](#cors-olap-cors)
+ [Elements of a CORS configuration](ManageCorsUsing.md)
+ [Configuring cross-origin resource sharing (CORS)](enabling-cors-examples.md)
+ [Testing CORS](testing-cors.md)
+ [Troubleshooting CORS](cors-troubleshooting.md)

# Elements of a CORS configuration

To configure your bucket to allow cross-origin requests, you create a CORS configuration. The CORS configuration is a document with elements that identify the origins that you will allow to access your bucket, the operations (HTTP methods) that you will support for each origin, and other operation-specific information. You can add up to 100 rules to the configuration. You can add the CORS configuration as the `cors` subresource to the bucket.

If you are configuring CORS in the S3 console, you must use JSON to create the CORS configuration; the S3 console supports only JSON CORS configurations. 

For more information about the CORS configuration and the elements in it, see the topics below. For instructions on how to add a CORS configuration, see [Configuring cross-origin resource sharing (CORS)](enabling-cors-examples.md).

**Topics**
+ [`AllowedMethods` element](#cors-allowed-methods)
+ [`AllowedOrigins` element](#cors-allowed-origin)
+ [`AllowedHeaders` element](#cors-allowed-headers)
+ [`ExposeHeaders` element](#cors-expose-headers)
+ [`MaxAgeSeconds` element](#cors-max-age)
+ [Examples of CORS configurations](#cors-example-1)

## `AllowedMethods` element


In the CORS configuration, you can specify the following values for the `AllowedMethods` element.
+ GET
+ PUT
+ POST
+ DELETE
+ HEAD

## `AllowedOrigins` element


In the `AllowedOrigins` element, you specify the origins that you want to allow cross-domain requests from, for example, `http://www.example.com`. The origin string can contain at most one `*` wildcard character, such as `http://*.example.com`. You can optionally specify `*` as the origin to enable all origins to send cross-origin requests. You can also specify the `https` scheme in an origin to enable only secure origins.
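For illustration, the single-`*` origin matching can be modeled with a regular expression. The `origin_matches` helper below is hypothetical, not S3's implementation:

```python
import re

def origin_matches(pattern: str, origin: str) -> bool:
    """Check an Origin value against an AllowedOrigins pattern.

    S3 allows at most one '*' wildcard in each origin pattern; the
    wildcard matches any sequence of characters.
    """
    if pattern.count("*") > 1:
        raise ValueError("AllowedOrigins patterns may contain at most one '*'")
    # Escape regex metacharacters, then turn the single wildcard into '.*'.
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.fullmatch(regex, origin) is not None

print(origin_matches("http://*.example.com", "http://www.example.com"))  # True
print(origin_matches("http://*.example.com", "http://www.other.com"))    # False
print(origin_matches("*", "https://anything.example"))                   # True
```

Escaping the pattern before substituting the wildcard keeps the dots in `example.com` literal, so `http://*.example.com` does not accidentally match `http://wwwXexampleYcom`.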

## `AllowedHeaders` element


The `AllowedHeaders` element specifies which headers are allowed in a preflight request through the `Access-Control-Request-Headers` header. Each header name in the `Access-Control-Request-Headers` header must match a corresponding entry in the element. In its response, Amazon S3 returns only the requested headers that are allowed. For a sample list of headers that can be used in requests to Amazon S3, see [Common Request Headers](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html) in the *Amazon Simple Storage Service API Reference*.

Each `AllowedHeaders` string in your configuration can contain at most one `*` wildcard character. For example, `<AllowedHeader>x-amz-*</AllowedHeader>` enables all Amazon-specific headers.

## `ExposeHeaders` element


Each `ExposeHeader` element identifies a header in the response that you want customers to be able to access from their applications (for example, from a JavaScript `XMLHttpRequest` object). For a list of common Amazon S3 response headers, see [Common Response Headers](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html) in the *Amazon Simple Storage Service API Reference*.

## `MaxAgeSeconds` element


The `MaxAgeSeconds` element specifies the time in seconds that your browser can cache the response for a preflight request as identified by the resource, the HTTP method, and the origin.

## Examples of CORS configurations


Instead of accessing a website by using an Amazon S3 website endpoint, you can use your own domain, such as `example1.com` to serve your content. For information about using your own domain, see [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md). 

The following example CORS configuration has three rules, which are specified as `CORSRule` elements:
+ The first rule allows cross-origin PUT, POST, and DELETE requests from the `http://www.example1.com` origin. The rule also allows all headers in a preflight OPTIONS request through the `Access-Control-Request-Headers` header. In response to preflight OPTIONS requests, Amazon S3 returns requested headers.
+ The second rule allows the same cross-origin requests as the first rule, but the rule applies to another origin, `http://www.example2.com`. 
+ The third rule allows cross-origin GET requests from all origins. The `*` wildcard character refers to all origins. 

------
#### [ JSON ]

```
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "http://www.example1.com"
        ],
        "ExposeHeaders": []
    },
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "http://www.example2.com"
        ],
        "ExposeHeaders": []
    },
    {
        "AllowedHeaders": [],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
```

------
#### [ XML ]

```
<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>http://www.example1.com</AllowedOrigin>

   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>

   <AllowedHeader>*</AllowedHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>http://www.example2.com</AllowedOrigin>

   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>

   <AllowedHeader>*</AllowedHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>
```

------

The CORS configuration also allows optional configuration parameters, as shown in the following CORS configuration. In this example, the CORS configuration allows cross-origin PUT, POST, and DELETE requests from the `http://www.example.com` origin.

------
#### [ JSON ]

```
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "http://www.example.com"
        ],
        "ExposeHeaders": [
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2"
        ],
        "MaxAgeSeconds": 3000
    }
]
```

------
#### [ XML ]

```
<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>http://www.example.com</AllowedOrigin>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>
   <AllowedHeader>*</AllowedHeader>
  <MaxAgeSeconds>3000</MaxAgeSeconds>
  <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
  <ExposeHeader>x-amz-request-id</ExposeHeader>
  <ExposeHeader>x-amz-id-2</ExposeHeader>
 </CORSRule>
</CORSConfiguration>
```

------

The `CORSRule` element in the preceding configuration includes the following optional elements:
+ `MaxAgeSeconds`—Specifies the amount of time in seconds (in this example, 3000) that the browser caches an Amazon S3 response to a preflight OPTIONS request for the specified resource. By caching the response, the browser does not have to send preflight requests to Amazon S3 if the original request will be repeated. 
+ `ExposeHeaders`—Identifies the response headers (in this example, `x-amz-server-side-encryption`, `x-amz-request-id`, and `x-amz-id-2`) that customers are able to access from their applications (for example, from a JavaScript `XMLHttpRequest` object).

# Configuring cross-origin resource sharing (CORS)

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources. 

This section shows you how to enable CORS using the Amazon S3 console, the Amazon S3 REST API, and the AWS SDKs. To configure your bucket to allow cross-origin requests, you add a CORS configuration to the bucket. A CORS configuration is a document that defines rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) supported for each origin, and other operation-specific information. In the S3 console, the CORS configuration must be a JSON document.

For example CORS configurations in JSON and XML, see [Elements of a CORS configuration](ManageCorsUsing.md).

## Using the S3 console


This section explains how to use the Amazon S3 console to add a cross-origin resource sharing (CORS) configuration to an S3 bucket. 

When you enable CORS on the bucket, the access control lists (ACLs) and other access permission policies continue to apply.

**Important**  
In the S3 console, the CORS configuration must be JSON. For example CORS configurations in JSON and XML, see [Elements of a CORS configuration](ManageCorsUsing.md).

**To add a CORS configuration to an S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to add a CORS configuration to.

1. Choose **Permissions**.

1. In the **Cross-origin resource sharing (CORS)** section, choose **Edit**.

1. In the **CORS configuration editor** text box, type or copy and paste a new CORS configuration, or edit an existing configuration.

   The CORS configuration is a JSON file. The text that you type in the editor must be valid JSON. For more information, see [Elements of a CORS configuration](ManageCorsUsing.md).

1. Choose **Save changes**.
**Note**  
Amazon S3 displays the Amazon Resource Name (ARN) for the bucket next to the **CORS configuration editor** title. For more information about ARNs, see [Amazon Resource Names (ARNs) and AWS Service Namespaces](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the *Amazon Web Services General Reference*.

## Using the AWS SDKs


You can use the AWS SDK to manage cross-origin resource sharing (CORS) for a bucket. For more information about CORS, see [Using cross-origin resource sharing (CORS)](cors.md).

The following example does the following:
+ Creates a CORS configuration and sets it on a bucket
+ Retrieves the configuration and modifies it by adding a rule
+ Adds the modified configuration back to the bucket
+ Deletes the configuration

------
#### [ Java ]

**Example**  
For instructions on how to create and test a working sample, see [Getting Started](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/getting-started.html) in the *AWS SDK for Java Developer Guide*.  

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketCrossOriginConfiguration;
import com.amazonaws.services.s3.model.CORSRule;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CORS {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";

        // Create two CORS rules.
        List<CORSRule.AllowedMethods> rule1AM = new ArrayList<CORSRule.AllowedMethods>();
        rule1AM.add(CORSRule.AllowedMethods.PUT);
        rule1AM.add(CORSRule.AllowedMethods.POST);
        rule1AM.add(CORSRule.AllowedMethods.DELETE);
        CORSRule rule1 = new CORSRule().withId("CORSRule1").withAllowedMethods(rule1AM)
                .withAllowedOrigins(Arrays.asList("http://*.example.com"));

        List<CORSRule.AllowedMethods> rule2AM = new ArrayList<CORSRule.AllowedMethods>();
        rule2AM.add(CORSRule.AllowedMethods.GET);
        CORSRule rule2 = new CORSRule().withId("CORSRule2").withAllowedMethods(rule2AM)
                .withAllowedOrigins(Arrays.asList("*")).withMaxAgeSeconds(3000)
                .withExposedHeaders(Arrays.asList("x-amz-server-side-encryption"));

        List<CORSRule> rules = new ArrayList<CORSRule>();
        rules.add(rule1);
        rules.add(rule2);

        // Add the rules to a new CORS configuration.
        BucketCrossOriginConfiguration configuration = new BucketCrossOriginConfiguration();
        configuration.setRules(rules);

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Add the configuration to the bucket.
            s3Client.setBucketCrossOriginConfiguration(bucketName, configuration);

            // Retrieve and display the configuration.
            configuration = s3Client.getBucketCrossOriginConfiguration(bucketName);
            printCORSConfiguration(configuration);

            // Add another new rule.
            List<CORSRule.AllowedMethods> rule3AM = new ArrayList<CORSRule.AllowedMethods>();
            rule3AM.add(CORSRule.AllowedMethods.HEAD);
            CORSRule rule3 = new CORSRule().withId("CORSRule3").withAllowedMethods(rule3AM)
                    .withAllowedOrigins(Arrays.asList("http://www.example.com"));

            rules = configuration.getRules();
            rules.add(rule3);
            configuration.setRules(rules);
            s3Client.setBucketCrossOriginConfiguration(bucketName, configuration);

            // Verify that the new rule was added by checking the number of rules in the
            // configuration.
            configuration = s3Client.getBucketCrossOriginConfiguration(bucketName);
            System.out.println("Expected # of rules = 3, found " + configuration.getRules().size());

            // Delete the configuration.
            s3Client.deleteBucketCrossOriginConfiguration(bucketName);
            System.out.println("Removed CORS configuration.");

            // Retrieve and display the configuration to verify that it was
            // successfully deleted.
            configuration = s3Client.getBucketCrossOriginConfiguration(bucketName);
            printCORSConfiguration(configuration);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }

    private static void printCORSConfiguration(BucketCrossOriginConfiguration configuration) {
        if (configuration == null) {
            System.out.println("Configuration is null.");
        } else {
            System.out.println("Configuration has " + configuration.getRules().size() + " rules\n");

            for (CORSRule rule : configuration.getRules()) {
                System.out.println("Rule ID: " + rule.getId());
                System.out.println("MaxAgeSeconds: " + rule.getMaxAgeSeconds());
                System.out.println("AllowedMethod: " + rule.getAllowedMethods());
                System.out.println("AllowedOrigins: " + rule.getAllowedOrigins());
                System.out.println("AllowedHeaders: " + rule.getAllowedHeaders());
                System.out.println("ExposeHeader: " + rule.getExposedHeaders());
                System.out.println();
            }
        }
    }
}
```

------
#### [ .NET ]

**Example**  
For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*.   

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class CORSTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2; 
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            CORSConfigTestAsync().Wait();
        }
        private static async Task CORSConfigTestAsync()
        {
            try
            {
                // Create a new configuration request and add two rules    
                CORSConfiguration configuration = new CORSConfiguration
                {
                    Rules = new System.Collections.Generic.List<CORSRule>
                        {
                          new CORSRule
                          {
                            Id = "CORSRule1",
                            AllowedMethods = new List<string> {"PUT", "POST", "DELETE"},
                            AllowedOrigins = new List<string> {"http://*.example.com"}
                          },
                          new CORSRule
                          {
                            Id = "CORSRule2",
                            AllowedMethods = new List<string> {"GET"},
                            AllowedOrigins = new List<string> {"*"},
                            MaxAgeSeconds = 3000,
                            ExposeHeaders = new List<string> {"x-amz-server-side-encryption"}
                          }
                        }
                };

                // Add the configuration to the bucket. 
                await PutCORSConfigurationAsync(configuration);

                // Retrieve an existing configuration. 
                configuration = await RetrieveCORSConfigurationAsync();

                // Add a new rule.
                configuration.Rules.Add(new CORSRule
                {
                    Id = "CORSRule3",
                    AllowedMethods = new List<string> { "HEAD" },
                    AllowedOrigins = new List<string> { "http://www.example.com" }
                });

                // Add the configuration to the bucket. 
                await PutCORSConfigurationAsync(configuration);

                // Verify that there are now three rules.
                configuration = await RetrieveCORSConfigurationAsync();
                Console.WriteLine();
                Console.WriteLine("Expected # of rules = 3; found: {0}", configuration.Rules.Count);
                Console.WriteLine();
                Console.WriteLine("Pause before configuration delete. To continue, press Enter...");
                Console.ReadKey();

                // Delete the configuration.
                await DeleteCORSConfigurationAsync();

                // Retrieve a nonexistent configuration.
                configuration = await RetrieveCORSConfigurationAsync();
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when managing the CORS configuration", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' when managing the CORS configuration", e.Message);
            }
        }

        static async Task PutCORSConfigurationAsync(CORSConfiguration configuration)
        {

            PutCORSConfigurationRequest request = new PutCORSConfigurationRequest
            {
                BucketName = bucketName,
                Configuration = configuration
            };

            var response = await s3Client.PutCORSConfigurationAsync(request);
        }

        static async Task<CORSConfiguration> RetrieveCORSConfigurationAsync()
        {
            GetCORSConfigurationRequest request = new GetCORSConfigurationRequest
            {
                BucketName = bucketName

            };
            var response = await s3Client.GetCORSConfigurationAsync(request);
            var configuration = response.Configuration;
            PrintCORSRules(configuration);
            return configuration;
        }

        static async Task DeleteCORSConfigurationAsync()
        {
            DeleteCORSConfigurationRequest request = new DeleteCORSConfigurationRequest
            {
                BucketName = bucketName
            };
            await s3Client.DeleteCORSConfigurationAsync(request);
        }

        static void PrintCORSRules(CORSConfiguration configuration)
        {
            Console.WriteLine();

            if (configuration == null)
            {
                Console.WriteLine("\nConfiguration is null");
                return;
            }

            Console.WriteLine("Configuration has {0} rules:", configuration.Rules.Count);
            foreach (CORSRule rule in configuration.Rules)
            {
                Console.WriteLine("Rule ID: {0}", rule.Id);
                Console.WriteLine("MaxAgeSeconds: {0}", rule.MaxAgeSeconds);
                Console.WriteLine("AllowedMethod: {0}", string.Join(", ", rule.AllowedMethods.ToArray()));
                Console.WriteLine("AllowedOrigins: {0}", string.Join(", ", rule.AllowedOrigins.ToArray()));
                Console.WriteLine("AllowedHeaders: {0}", string.Join(", ", rule.AllowedHeaders.ToArray()));
                Console.WriteLine("ExposeHeader: {0}", string.Join(", ", rule.ExposeHeaders.ToArray()));
            }
        }
    }
}
```

------

## Using the REST API


To set a CORS configuration on your bucket, you can use the AWS Management Console. If your application requires it, you can also send REST requests directly. The following sections in the *Amazon Simple Storage Service API Reference* describe the REST API actions related to the CORS configuration: 
+ [PutBucketCors](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTcors.html)
+ [GetBucketCors](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETcors.html)
+ [DeleteBucketCors](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEcors.html)
+ [OPTIONS object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTOPTIONSobject.html)
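The same three bucket-level actions are exposed by the AWS SDKs. As a hedged sketch (assuming the AWS SDK for Python, Boto3, is available and AWS credentials are configured; the bucket name and `manage_bucket_cors` helper are illustrative), the calls map one-to-one to PutBucketCors, GetBucketCors, and DeleteBucketCors:

```python
# Illustrative rule set; the bucket name passed to manage_bucket_cors is hypothetical.
CORS_RULES = [
    {
        "AllowedMethods": ["PUT", "POST", "DELETE"],
        "AllowedOrigins": ["http://www.example.com"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,
    }
]

def manage_bucket_cors(bucket_name):
    """Set, read back, and delete the bucket's CORS configuration.

    Requires AWS credentials; nothing here runs at import time.
    """
    import boto3  # deferred import so the sketch can be read without boto3 installed
    s3 = boto3.client("s3")
    # PutBucketCors: apply the configuration to the bucket.
    s3.put_bucket_cors(
        Bucket=bucket_name,
        CORSConfiguration={"CORSRules": CORS_RULES},
    )
    # GetBucketCors: read the configuration back.
    rules = s3.get_bucket_cors(Bucket=bucket_name)["CORSRules"]
    # DeleteBucketCors: remove the configuration.
    s3.delete_bucket_cors(Bucket=bucket_name)
    return rules
```

The dictionary shape mirrors the JSON configuration shown earlier; only the top-level `CORSRules` wrapper is SDK-specific.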

# Testing CORS

To test your CORS configuration, you can send a CORS preflight request with the `OPTIONS` method so that the server can indicate whether it will accept the actual request. When Amazon S3 receives a preflight request, S3 evaluates the CORS configuration for the bucket and uses the first `CORSRule` that matches the incoming request to enable a cross-origin request. For a rule to match, the following conditions must be met: 
+ The `Origin` header in a CORS request to your bucket must match the origins in the `AllowedOrigins` element in your CORS configuration.
+ The HTTP methods that are specified in the `Access-Control-Request-Method` in a CORS request to your bucket must match the method or methods listed in the `AllowedMethods` element in your CORS configuration.
+ The headers listed in the `Access-Control-Request-Headers` header in a preflight request must match the headers in the `AllowedHeaders` element in your CORS configuration. 

The following is an example of a CORS configuration. To create a CORS configuration, see [Configuring CORS](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enabling-cors-examples.html). For more examples of CORS configurations, see [Elements of a CORS configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html). 

For guidance on configuring and troubleshooting CORS rules, see [How do I configure CORS in Amazon S3 and confirm the CORS rules using cURL?](https://repost.aws/knowledge-center/s3-configure-cors) in the AWS re:Post Knowledge Center.

------
#### [ JSON ]

```
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "http://www.example1.com"
        ],
        "ExposeHeaders":  [
             "x-amz-meta-custom-header"
        ]
    
    }
]
```

------

To test the CORS configuration, you can send a preflight `OPTIONS` check by using the following `curl` command. `curl` is a command-line tool that you can use to interact with S3. For more information, see [curl](https://curl.se/). 

```
curl -v -X OPTIONS \
  -H "Origin: http://www.example1.com" \
  -H "Access-Control-Request-Method: PUT" \
  -H "Access-Control-Request-Headers: Authorization" \
  -H "Access-Control-Expose-Headers: x-amz-meta-custom-header" \
  "http://bucket_name.s3.amazonaws.com/object_prefix_name"
```

In the preceding example, the `curl -v -X OPTIONS` command sends a preflight request to S3 asking whether S3 allows a `PUT` request on an object from the cross-origin `http://www.example1.com`. The `Access-Control-Request-Headers` and `Access-Control-Expose-Headers` headers are optional.
+ In response to the `Access-Control-Request-Method` header in the preflight `OPTIONS` request, Amazon S3 returns the list of allowed methods if the requested methods match. 
+ In response to the `Access-Control-Request-Headers` header in the preflight `OPTIONS` request, Amazon S3 returns the list of allowed headers if the requested headers match.
+ In response to the `Access-Control-Expose-Headers` header in the preflight `OPTIONS` request, Amazon S3 returns a list of allowed headers if the requested headers match the allowed headers that can be accessed by scripts running in the browser.

**Note**  
When sending a preflight request, if any of the CORS request headers are not allowed, none of the response CORS headers are returned.

In response to this preflight `OPTIONS` request, you receive a `200 OK` response similar to the following. For common error codes received when testing CORS and more information about solving CORS-related issues, see [Troubleshooting CORS](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors-troubleshooting.html). 

```
< HTTP/1.1 200 OK
< Date: Fri, 12 Jul 2024 00:23:51 GMT
< Access-Control-Allow-Origin: http://www.example1.com
< Access-Control-Allow-Methods: GET, PUT, POST, DELETE 
< Access-Control-Allow-Headers: Authorization
< Access-Control-Expose-Headers: x-amz-meta-custom-header
< Access-Control-Allow-Credentials: true
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
< Server: AmazonS3
< Content-Length: 0
```
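Once you have a response like the one above, a quick way to confirm it programmatically is to compare the returned CORS headers against what the request asked for. The `preflight_allows` helper below is a hypothetical local check on captured headers, not an S3 API call; the header values are taken from the example response above:

```python
def preflight_allows(headers, origin, method):
    """Return True if captured preflight response headers permit the
    given origin and HTTP method."""
    allow_origin = headers.get("Access-Control-Allow-Origin", "")
    allow_methods = [m.strip() for m in
                     headers.get("Access-Control-Allow-Methods", "").split(",")]
    return allow_origin in ("*", origin) and method in allow_methods

# Header values copied from the example 200 OK response.
response_headers = {
    "Access-Control-Allow-Origin": "http://www.example1.com",
    "Access-Control-Allow-Methods": "GET, PUT, POST, DELETE",
}

print(preflight_allows(response_headers, "http://www.example1.com", "PUT"))  # True
print(preflight_allows(response_headers, "http://www.example2.com", "PUT"))  # False
```

A mismatch on either the origin or the method means the browser would block the actual cross-origin request even though the preflight itself returned `200 OK` headers for some other origin.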

# Troubleshooting CORS


The following topics can help you troubleshoot some common CORS issues related to S3.

**Topics**
+ [403 Forbidden error - CORS is not enabled for this bucket](#cors-not-enabled)
+ [403 Forbidden error - This CORS request is not allowed](#cors-not-enabled)
+ [Headers not found in CORS response](#Headers-not-found)
+ [Considerations of CORS on S3 proxy integrations](#cors-in-proxy)

## 403 Forbidden error: CORS is not enabled for this bucket

The following `403 Forbidden` error occurs when a cross-origin request is sent to Amazon S3 but CORS is not configured on your S3 bucket. 

```
HTTP/1.1 403 Forbidden
CORS Response: CORS is not enabled for this bucket.
```

The CORS configuration is a document or policy with rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) that you will support for each origin, and other operation-specific information. To learn how to configure CORS on S3 by using the Amazon S3 console, AWS SDKs, and REST API, see [Configuring CORS](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enabling-cors-examples.html). For more information about CORS and examples of a CORS configuration, see [Elements of a CORS configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html#cors-example-1).

## 403 Forbidden error: This CORS request is not allowed

The following `403 Forbidden` error is received when a CORS rule in your CORS configuration doesn't match the data in your request.

```
HTTP/1.1 403 Forbidden
CORS Response: This CORS request is not allowed.
```

This `403 Forbidden` error can occur for any of the following reasons:
+ Origin is not allowed.
+ Methods are not allowed.
+ Requested headers are not allowed.

For each request that Amazon S3 receives, you must have a CORS rule in your CORS configuration that matches the data in your request. 

### Origin is not allowed


The `Origin` header in a CORS request to your bucket must match the origins in the `AllowedOrigins` element in your CORS configuration. A wildcard character (`"*"`) in the `AllowedOrigins` element matches all origins. For more information about how to update the `AllowedOrigins` element, see [Configuring cross-origin resource sharing (CORS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enabling-cors-examples.html).

 For example, if only the `http://www.example1.com` domain is included in the `AllowedOrigins` element, then a CORS request sent from the `http://www.example2.com` domain would receive the `403 Forbidden` error. 

The following example shows part of a CORS configuration that includes the `http://www.example1.com` domain in the `AllowedOrigins` element. 

```
"AllowedOrigins":[
   "http://www.example1.com"
]
```

For a CORS request sent from the `http://www.example2.com` domain to succeed, the `http://www.example2.com` domain must also be included in the `AllowedOrigins` element of your CORS configuration. 

```
"AllowedOrigins":[
   "http://www.example1.com",
   "http://www.example2.com"
]
```

### Methods are not allowed


The HTTP method that is specified in the `Access-Control-Request-Method` header in a CORS request to your bucket must match one of the methods listed in the `AllowedMethods` element in your CORS configuration. A wildcard character (`"*"`) in the `AllowedMethods` element matches all HTTP methods. For more information on how to update the `AllowedMethods` element, see [Configuring cross-origin resource sharing (CORS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enabling-cors-examples.html). 

In a CORS configuration, you can specify the following methods in the `AllowedMethods` element:
+ `GET`
+ `PUT`
+ `POST`
+ `DELETE`
+ `HEAD`

The following example shows part of a CORS configuration that includes the `GET` method in the `AllowedMethods` element. Only requests that use the `GET` method would succeed. 

```
"AllowedMethods":[
   "GET"
]
```

If an HTTP method (for example, `PUT`) is used in a CORS request or included in a preflight CORS request to your bucket, but the method isn't present in your CORS configuration, the request results in a `403 Forbidden` error. To allow this CORS request or CORS preflight request, add the `PUT` method to your CORS configuration. 

```
"AllowedMethods":[
   "GET",
   "PUT"
]
```

### Requested headers are not allowed


 The headers listed in the `Access-Control-Request-Headers` header in a pre-flight request must match the headers in the `AllowedHeaders` element in your CORS configuration. For a list of common headers that can be used in requests to Amazon S3, see [Common Request Headers](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html). For more information on how to update the `AllowedHeaders` element, see [Configuring cross-origin resource sharing (CORS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enabling-cors-examples.html). 

The following example shows part of a CORS configuration that includes the `Authorization` header in the `AllowedHeaders` element. Only requests that include the `Authorization` header would succeed. 

```
"AllowedHeaders":  [
    "Authorization"
]
```

If a header (for example, `Content-MD5`) is included in a CORS request but the header isn't present in your CORS configuration, the request results in a `403 Forbidden` error. To allow this CORS request, add the `Content-MD5` header to your CORS configuration. If you want to pass both the `Authorization` and `Content-MD5` headers in a CORS request to your bucket, confirm that both headers are included in the `AllowedHeaders` element in your CORS configuration. 

```
"AllowedHeaders":  [
    "Authorization",
    "Content-MD5"
]
```
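For reference, the elements discussed above can be combined into a single CORS rule. The following is an illustrative configuration for the examples in this section, not a recommendation for any particular workload:

```
[
    {
        "AllowedOrigins": ["http://www.example1.com", "http://www.example2.com"],
        "AllowedMethods": ["GET", "PUT"],
        "AllowedHeaders": ["Authorization", "Content-MD5"]
    }
]
```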

## Headers not found in CORS response


The `ExposeHeaders` element in your CORS configuration identifies the response headers that you want to make accessible to scripts and applications running in browsers, in response to a CORS request.

If the objects stored in your S3 bucket include user-defined metadata (for example, `x-amz-meta-custom-header`), this custom header could contain additional metadata or information that you want to access from your client-side JavaScript code. However, by default, browsers block access to custom headers for security reasons. To allow your client-side JavaScript to access a custom header, you must include the header in your CORS configuration.

In the following example, the `x-amz-meta-custom-header1` header is included in the `ExposeHeaders` element. The `x-amz-meta-custom-header2` header isn't included in the `ExposeHeaders` element and is missing from the CORS configuration. In the response, only the headers included in the `ExposeHeaders` element are made accessible. Even if the `x-amz-meta-custom-header2` header is present in the response, the response still returns a `200 OK`. However, only the permitted header, `x-amz-meta-custom-header1`, is accessible to client-side scripts. 

```
"ExposeHeaders":  [
    "x-amz-meta-custom-header1"
]
```

To make sure that all permitted headers are accessible in the response, add them to the `ExposeHeaders` element in your CORS configuration, as shown in the following example. 

```
"ExposeHeaders":  [
    "x-amz-meta-custom-header1",
    "x-amz-meta-custom-header2"
]
```
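The browser-side effect of `ExposeHeaders` can be sketched as a filter over the response headers. This is a simplification for illustration: CORS-safelisted response headers (such as `Content-Type`) are always readable, and only headers named in the `Access-Control-Expose-Headers` response header (driven by `ExposeHeaders`) are added to that set.

```python
# Sketch of how a browser limits which response headers client-side
# script can read after a CORS request. Illustrative only, not a real
# browser implementation.

SAFELISTED = {"cache-control", "content-language", "content-length",
              "content-type", "expires", "last-modified", "pragma"}

def readable_headers(response_headers, exposed):
    """Return the subset of response headers visible to client-side script."""
    exposed = {h.lower() for h in exposed}
    return {
        name: value
        for name, value in response_headers.items()
        if name.lower() in SAFELISTED or name.lower() in exposed
    }

response = {
    "Content-Type": "video/mp4",
    "x-amz-meta-custom-header1": "value1",
    "x-amz-meta-custom-header2": "value2",
}

# Only x-amz-meta-custom-header1 is listed in ExposeHeaders, so
# x-amz-meta-custom-header2 is hidden from client-side script even
# though the response itself is a 200 OK.
print(readable_headers(response, ["x-amz-meta-custom-header1"]))
```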

## Considerations for CORS on S3 proxy integrations


If you are experiencing CORS errors, have already checked the CORS configuration on your S3 bucket, and the cross-origin request is sent through a proxy such as Amazon CloudFront, try the following:
+ Configure the settings to allow the `OPTIONS` method for HTTP requests.
+ Configure the proxy to forward the following headers: `Origin`, `Access-Control-Request-Headers`, and `Access-Control-Request-Method`.
+ Configure the proxy settings to include the origin header in its cache key. This is important because caching proxies that don't include the origin header in their cache key may serve cached responses that don't include the appropriate CORS headers for different origins.
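The last point above can be illustrated with a toy cache. If the cache key ignores the `Origin` header, a cached response made for one origin is replayed to a different origin with the wrong CORS headers. Everything below is a hypothetical sketch, not a real proxy:

```python
# Toy proxy cache illustrating why the Origin header must be part of
# the cache key. Hypothetical sketch for illustration only.

def make_proxy(include_origin_in_key):
    cache = {}
    def fetch(url, origin):
        key = (url, origin) if include_origin_in_key else url
        if key not in cache:
            # The origin server responds with CORS headers tailored to
            # the requesting origin.
            cache[key] = {"Access-Control-Allow-Origin": origin}
        return cache[key]
    return fetch

bad = make_proxy(include_origin_in_key=False)
bad("https://bucket/sample.mp4", "http://www.example1.com")
# The cached response is replayed with the wrong CORS header:
print(bad("https://bucket/sample.mp4", "http://www.example2.com"))
# {'Access-Control-Allow-Origin': 'http://www.example1.com'}

good = make_proxy(include_origin_in_key=True)
good("https://bucket/sample.mp4", "http://www.example1.com")
print(good("https://bucket/sample.mp4", "http://www.example2.com"))
# {'Access-Control-Allow-Origin': 'http://www.example2.com'}
```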

Some proxies provide predefined features for CORS requests. For example, in CloudFront, you can configure a managed origin request policy that includes the headers that enable cross-origin resource sharing (CORS) requests when the origin is an Amazon S3 bucket.

This policy has the following settings: 
+ Headers included in origin requests: `Origin`, `Access-Control-Request-Headers`, and `Access-Control-Request-Method`
+ Cookies included in origin requests: None
+ Query strings included in origin requests: None

For more information, see [Control origin requests with a policy](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/controlling-origin-requests.html) and [Use managed origin request policies](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-origin-request-policies.html#managed-origin-request-policy-cors-s3) in the *Amazon CloudFront Developer Guide*. 

# Static website tutorials


The following tutorials or walkthroughs present complete procedures for how to create and configure an Amazon S3 general purpose bucket for static website hosting and hosting on-demand video streaming. The purpose of these tutorials is to provide general guidance. These tutorials are intended for a lab-type environment, and they use example bucket names, user names, and so on. They are not intended for direct use in a production environment without careful review and adaptation to meet the unique needs of your organization's environment. 
+ [Hosting on-demand streaming video with Amazon S3, Amazon CloudFront, and Amazon Route 53](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tutorial-s3-cloudfront-route53-video-streaming) – You can use Amazon S3 with Amazon CloudFront to host videos for on-demand viewing in a secure and scalable way. After your video is packaged into the right formats, you can store it on a server or in an S3 general purpose bucket, and then deliver it with CloudFront as viewers request it. In this tutorial, you will learn how to configure your general purpose bucket to host on-demand video streaming using CloudFront for delivery and Amazon Route 53 for Domain Name System (DNS) and custom domain management. CloudFront serves the video from its cache, retrieving it from your general purpose bucket only if it is not already cached. This caching management feature accelerates the delivery of your video to viewers globally with low latency, high throughput, and high transfer speeds. For more information about CloudFront caching management, see [Optimizing caching and availability](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ConfiguringCaching.html) in the *Amazon CloudFront Developer Guide*.
+ [Configuring a static website](https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html) – You can configure a general purpose bucket to function like a website. This tutorial walks you through the steps of hosting a website on Amazon S3, including creating a bucket, enabling static website hosting in the S3 console, creating an index document, and creating an error document. For more information, see [Hosting a static website using Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html).
+ [Configuring a static website using a custom domain registered with Route 53](https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-custom-domain-walkthrough.html) – You can create and configure a general purpose bucket to host a static website and create redirects on S3 for a website with a custom domain name that is registered with Amazon Route 53. You use Route 53 to register domains and to define where you want to route internet traffic for your domain. This tutorial shows how to create Route 53 alias records that route traffic for your domain and subdomain to your general purpose bucket that contains an HTML file. For more information, see [Use your domain for a static website in an Amazon S3 bucket](https://docs.aws.amazon.com//Route53/latest/DeveloperGuide/getting-started-s3.html) in the *Amazon Route 53 Developer Guide*. After you complete this tutorial, you can optionally use CloudFront to improve the performance of your website. For more information, see [Speeding up your website with Amazon CloudFront](https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-cloudfront-walkthrough.html). 
+ [Deploying a static website to AWS Amplify Hosting from an S3 general purpose bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-amplify) – We recommend that you use [AWS Amplify Hosting](https://docs.aws.amazon.com//amplify/latest/userguide/welcome.html.html) to host static website content stored on S3. Amplify Hosting is a fully managed service that makes it easy to deploy your websites on a globally available content delivery network (CDN) powered by Amazon CloudFront, allowing secure static website hosting without extensive setup. With AWS Amplify Hosting, you can select the location of your objects within your general purpose bucket, deploy your content to a managed CDN, and generate a public HTTPS URL for your website to be accessible anywhere. For more information, see [Deploying a static website from S3 using the Amplify console](https://docs.aws.amazon.com//amplify/latest/userguide/deploy--from-amplify-console.html) in the *AWS Amplify Hosting User Guide*.

# Tutorial: Hosting on-demand streaming video with Amazon S3, Amazon CloudFront, and Amazon Route 53
Hosting video streaming

You can use Amazon S3 with Amazon CloudFront to host videos for on-demand viewing in a secure and scalable way. Video on demand (VOD) streaming means that your video content is stored on a server and viewers can watch it at any time.

CloudFront is a fast, highly secure, and programmable content delivery network (CDN) service. CloudFront can deliver your content securely over HTTPS from all of the CloudFront edge locations around the globe. For more information about CloudFront, see [What is Amazon CloudFront?](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) in the *Amazon CloudFront Developer Guide*.

CloudFront caching reduces the number of requests that your origin server must respond to directly. When a viewer (end user) requests a video that you serve with CloudFront, the request is routed to a nearby edge location closer to where the viewer is located. CloudFront serves the video from its cache, retrieving it from the S3 bucket only if it is not already cached. This caching management feature accelerates the delivery of your video to viewers globally with low latency, high throughput, and high transfer speeds. For more information about CloudFront caching management, see [Optimizing caching and availability](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ConfiguringCaching.html) in the *Amazon CloudFront Developer Guide*.

![\[Diagram showing how the CloudFront caching mechanism works.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/cf-example-image-global.png)


**Objective**  
In this tutorial, you configure an S3 bucket to host on-demand video streaming using CloudFront for delivery and Amazon Route 53 for Domain Name System (DNS) and custom domain management.

**Topics**
+ [Prerequisites: Register and configure a custom domain with Route 53](#cf-s3-prerequisites)
+ [Step 1: Create an S3 bucket](#cf-s3-step1)
+ [Step 2: Upload a video to the S3 bucket](#cf-s3-step2)
+ [Step 3: Create a CloudFront origin access identity](#cf-s3-step3)
+ [Step 4: Create a CloudFront distribution](#cf-s3-step4)
+ [Step 5: Access the video through the CloudFront distribution](#cf-s3-step5)
+ [Step 6: Configure your CloudFront distribution to use your custom domain name](#cf-s3-step6)
+ [Step 7: Access the S3 video through the CloudFront distribution with the custom domain name](#cf-s3-step7)
+ [(Optional) Step 8: View data about requests received by your CloudFront distribution](#cf-s3-step8)
+ [Step 9: Clean up](#cf-s3-step9)
+ [Next steps](#cf-s3-next-steps)

## Prerequisites: Register and configure a custom domain with Route 53


Before you start this tutorial, you must register and configure a custom domain (for example, **example.com**) with Route 53 so that you can configure your CloudFront distribution to use a custom domain name later. 

Without a custom domain name, your S3 video is publicly accessible and hosted through CloudFront at a URL that looks similar to the following: 

```
https://CloudFront distribution domain name/Path to an S3 video
```

For example, **https://d111111abcdef8.cloudfront.net/sample.mp4**.

After you configure your CloudFront distribution to use a custom domain name configured with Route 53, your S3 video is publicly accessible and hosted through CloudFront at a URL that looks similar to the following: 

```
https://CloudFront distribution alternate domain name/Path to an S3 video
```

For example, **https://www.example.com/sample.mp4**. A custom domain name is simpler and more intuitive for your viewers to use.

To register a custom domain, see [Registering a new domain using Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register.html) in the *Amazon Route 53 Developer Guide*.

When you register a domain name with Route 53, Route 53 creates the hosted zone for you, which you will use later in this tutorial. This hosted zone is where you store information about how to route traffic for your domain, for example, to an Amazon EC2 instance or a CloudFront distribution. 

There are fees associated with domain registration, your hosted zone, and DNS queries received by your domain. For more information, see [Amazon Route 53 Pricing](https://aws.amazon.com/route53/pricing/). 

**Note**  
When you register a domain, you are charged immediately, and the registration can't be reversed. You can choose not to renew the domain automatically, but you pay up front and own it for the year. For more information, see [Registering a new domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register.html) in the *Amazon Route 53 Developer Guide*.

## Step 1: Create an S3 bucket


Create a bucket to store the original video that you plan to stream.

**To create a bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. For **Bucket name**, enter a name for your bucket (for example, **tutorial-bucket**). 

   For more information about naming buckets in Amazon S3, see [General purpose bucket naming rules](bucketnamingrules.md).

1. For **Region**, choose the AWS Region where you want the bucket to reside. 

   If possible, you should pick the Region that is closest to the majority of your viewers. For more information about the bucket Region, see [General purpose buckets overview](UsingBucket.md).

1. For **Block Public Access settings for this bucket**, keep the default settings (**Block *all* public access** is enabled). 

   Even with **Block *all* public access** enabled, viewers can still access the uploaded video through CloudFront. This feature is a major advantage of using CloudFront to host a video stored in S3.

   We recommend that you keep all settings enabled unless you need to turn off one or more of them for your use case. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. For the remaining settings, keep the defaults. 

   (Optional) If you want to configure additional bucket settings for your specific use case, see [Creating a general purpose bucket](create-bucket-overview.md).

1. Choose **Create bucket**.

## Step 2: Upload a video to the S3 bucket


The following procedure describes how to upload a video file to an S3 bucket by using the console. If you're uploading many large video files to S3, you might want to use [Amazon S3 Transfer Acceleration](https://aws.amazon.com/s3/transfer-acceleration) to configure fast and secure file transfers. Transfer Acceleration can speed up video uploading to your S3 bucket for long-distance transfer of larger videos. For more information, see [Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration](transfer-acceleration.md). 

**To upload a file to the bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the **General purpose buckets** list, choose the name of the bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**) to upload your file to.

1. On the **Objects** tab for your bucket, choose **Upload**.

1. On the **Upload** page, under **Files and folders**, choose **Add files**.

1. Choose a file to upload, and then choose **Open**.

   For example, you can upload a video file named `sample.mp4`.

1. Choose **Upload**.

## Step 3: Create a CloudFront origin access identity


To restrict direct access to the video from your S3 bucket, create a special CloudFront user called an origin access identity (OAI). You will associate the OAI with your distribution later in this tutorial. By using an OAI, you make sure that viewers can't bypass CloudFront and get the video directly from the S3 bucket. Only the CloudFront OAI can access the file in the S3 bucket. For more information, see [Restrict access to an Amazon S3 origin](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*.



**Important**  
If the bucket that you're using to host your static website has been encrypted using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you must use origin access control (OAC) instead of origin access identity (OAI) to secure the origin. OAI doesn't support SSE-KMS, so you must use OAC instead. For more information about OAC, see [Restrict access to an Amazon S3 origin](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*.

**To create a CloudFront OAI**

1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. In the left navigation pane, under the **Security** section, choose **Origin access**.

1. Under the **Identities** tab, choose **Create origin access identity**.

1. Enter a name (for example, **S3-OAI**) for the new origin access identity.

1. Choose **Create**.

## Step 4: Create a CloudFront distribution


To use CloudFront to serve and distribute the video in your S3 bucket, you must create a CloudFront distribution. 

**Topics**
+ [Create a CloudFront distribution](#cf-s3-step4-create-cloudfront)
+ [Review the bucket policy](#cf-s3-step4-review-bucket-policy)

### Create a CloudFront distribution


1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. In the left navigation pane, choose **Distributions**.

1. Choose **Create distribution**.

1. In the **Origin** section, for **Origin domain**, choose the domain name of your S3 origin, which starts with the name of the S3 bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**).

1. For **Origin access**, choose **Legacy access identities**.

1. Under **Origin access identity**, choose the origin access identity that you created in [Step 3](#cf-s3-step3) (for example, **S3-OAI**).

1. Under **Bucket policy**, choose **Yes, update the bucket policy**. 

1. In the **Default cache behavior** section, under **Viewer protocol policy**, choose **Redirect HTTP to HTTPS**. 

   When you choose this feature, HTTP requests are automatically redirected to HTTPS to secure your website and protect your viewers' data. 

1. For the other settings in the **Default cache behaviors** section, keep the default values.

   (Optional) You can control how long your file stays in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means that your viewers get better performance because your files are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin. For more information, see [Managing how long content stays in the cache (expiration)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) in the *Amazon CloudFront Developer Guide*.

1. For the other sections, keep the remaining settings set to the defaults. 

   For more information about the different settings options, see [Values That You Specify When You Create or Update a Distribution](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html) in the *Amazon CloudFront Developer Guide*. 

1. At the bottom of the page, choose **Create distribution**. 

1. On the **General** tab for your CloudFront distribution, under **Details**, wait for the value of **Last modified** to change from **Deploying** to the timestamp when the distribution was last modified. This process typically takes a few minutes. 

### Review the bucket policy


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that you used earlier as the origin of your CloudFront distribution (for example, **tutorial-bucket**).

1. Choose the **Permissions** tab.

1. In the **Bucket policy** section, confirm that you see a statement similar to the following in the bucket policy text: 

   ```
   {
       "Version": "2008-10-17",
       "Id": "PolicyForCloudFrontPrivateContent",
       "Statement": [
           {
               "Sid": "1",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EH1HDMB1FH2TC"
               },
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::tutorial-bucket/*"
           }
       ]
   }
   ```

   This is the statement that your CloudFront distribution added to your bucket policy when you chose **Yes, update the bucket policy** earlier.

   This bucket policy update indicates that you successfully configured the CloudFront distribution to restrict access to the S3 bucket. Because of this restriction, objects in the bucket can be accessed only through your CloudFront distribution. 

## Step 5: Access the video through the CloudFront distribution


Now, CloudFront can serve the video stored in your S3 bucket. To access your video through CloudFront, you must combine your CloudFront distribution domain name with the path to the video in the S3 bucket.

**To create a URL to the S3 video using the CloudFront distribution domain name**

1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. In the left navigation pane, choose **Distributions**.

1. To get the distribution domain name, do the following:

   1. In the **Origins** column, find the correct CloudFront distribution by looking for its origin name, which starts with the S3 bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**). 

   1. After finding the distribution in the list, widen the **Domain name** column to copy the domain name value for your CloudFront distribution.

1. In a new browser tab, paste the distribution domain name that you copied. 

1. Return to the previous browser tab, and open the S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/). 

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**). 

1. In the **Objects** list, choose the name of the video that you uploaded in [Step 2](#cf-s3-step2) (for example, `sample.mp4`). 

1. On the object detail page, in the **Object overview** section, copy the value of the **Key**. This value is the path to the uploaded video object in the S3 bucket. 

1. Return to the browser tab where you previously pasted the distribution domain name, enter a forward slash (**/**) after the distribution domain name, and then paste the path to the video that you copied earlier (for example, `sample.mp4`). 

   Now, your S3 video is publicly accessible and hosted through CloudFront at a URL that looks similar to the following: 

   ```
   https://CloudFront distribution domain name/Path to the S3 video
   ```

   Replace *CloudFront distribution domain name* and *Path to the S3 video* with the appropriate values. An example URL is **https://d111111abcdef8.cloudfront.net/sample.mp4**.

## Step 6: Configure your CloudFront distribution to use your custom domain name


To use your own domain name instead of the CloudFront domain name in the URL to access the S3 video, add an alternate domain name to your CloudFront distribution. 

**Topics**
+ [Request an SSL certificate](#cf-s3-step6-create-SSL)
+ [Add the alternate domain name to your CloudFront distribution](#cf-s3-step6-custom-domain)
+ [Create a DNS record to route traffic from your alternate domain name to your CloudFront distribution's domain name](#cf-s3-step6-DNS-record)
+ [Check whether IPv6 is enabled for your distribution and create another DNS record if needed](#s3-step6-ipv6)

### Request an SSL certificate


To allow your viewers to use HTTPS and your custom domain name in the URL for your video streaming, use AWS Certificate Manager (ACM) to request a Secure Sockets Layer (SSL) certificate. The SSL certificate establishes an encrypted network connection to the website. 

1. Sign in to the AWS Management Console and open the ACM console at [https://console.aws.amazon.com/acm/](https://console.aws.amazon.com/acm/).

1. If the introductory page appears, under **Provision certificates**, choose **Get Started**.

1. On the **Request a certificate** page, choose **Request a public certificate**, and then choose **Request a certificate**.

1. On the **Add domain names** page, enter the fully qualified domain name (FQDN) of the site that you want to secure with an SSL/TLS certificate. You can use an asterisk (`*`) to request a wildcard certificate to protect several site names in the same domain. For this tutorial, enter **\*** and the custom domain name that you configured in [Prerequisites](#cf-s3-prerequisites). For example, enter **\*.example.com**, and then choose **Next**. 

   For more information, see [To request an ACM public certificate (console)](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html#request-public-console) in the *AWS Certificate Manager User Guide*.

1. On the **Select validation method** page, choose **DNS validation**. Then, choose **Next**. 

   If you are able to edit your DNS configuration, we recommend that you use DNS domain validation rather than email validation. DNS validation has multiple benefits over email validation. For more information, see [Option 1: DNS validation](https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html) in the *AWS Certificate Manager User Guide*. 

1. (Optional) On the **Add tags** page, tag your certificate with metadata.

1. Choose **Review**. 

1. On the **Review** page, verify that the information under **Domain name** and **Validation method** is correct. Then, choose **Confirm and request**. 

   The **Validation** page shows that your request is being processed and that the certificate domain is being validated. The certificate awaiting validation is in the **Pending validation** status. 

1. On the **Validation** page, choose the down arrow to the left of your custom domain name, and then choose **Create record in Route 53** to validate your domain ownership through DNS.

   Doing this adds a CNAME record provided by AWS Certificate Manager to your DNS configuration.

1. In the **Create record in Route 53** dialog box, choose **Create**.

   The **Validation** page should display a status notification of **Success** at the bottom.

1. Choose **Continue** to view the **Certificates** list page. 

   The **Status** for your new certificate changes from **Pending validation** to **Issued** within 30 minutes.

### Add the alternate domain name to your CloudFront distribution


1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. In the left navigation pane, choose **Distributions**.

1. Choose the ID for the distribution that you created in [Step 4](#cf-s3-step4).

1. On the **General** tab, go to the **Settings** section, and choose **Edit**.

1. On the **Edit settings** page, for **Alternate domain name (CNAME) - *optional***, choose **Add item** to add the custom domain names that you want to use in the URL for the S3 video served by this CloudFront distribution.

   In this tutorial, for example, if you want to route traffic for a subdomain, such as `www.example.com`, enter the subdomain name (`www`) with the domain name (`example.com`). Specifically, enter **www.example.com**. 
**Note**  
The alternate domain name (CNAME) that you add must be covered by the SSL certificate that you previously attached to your CloudFront distribution.

1. For **Custom SSL certificate - *optional***, choose the SSL certificate that you requested earlier (for example, **\*.example.com**).
**Note**  
If you don't see the SSL certificate immediately after you request it, wait 30 minutes, and then refresh the list until the SSL certificate is available for you to select.

1. Keep the remaining settings set to the defaults. Choose **Save changes**. 

1. On the **General** tab for the distribution, wait for the value of **Last modified** to change from **Deploying** to the timestamp when the distribution was last modified. 

### Create a DNS record to route traffic from your alternate domain name to your CloudFront distribution's domain name


1. Sign in to the AWS Management Console and open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

1. In the left navigation pane, choose **Hosted zones**.

1. On the **Hosted zones** page, choose the name of the hosted zone that Route 53 created for you in [Prerequisites](#cf-s3-prerequisites) (for example, **example.com**).

1. Choose **Create record**, and then use the **Quick create record** method. 

1. For **Record name**, keep the value for the record name the same as the alternate domain name of the CloudFront distribution that you added earlier.

   In this tutorial, to route traffic to a subdomain, such as `www.example.com`, enter the subdomain name without the domain name. For example, enter only **www** in the text field before your custom domain name.

1. For **Record type**, choose **A - Routes traffic to an IPv4 address and some AWS resources**.

1. For **Value**, choose the **Alias** toggle to enable the alias resource. 

1. Under **Route traffic to**, choose **Alias to CloudFront distribution** from the dropdown list. 

1. In the search box that says **Choose distribution**, choose the domain name of the CloudFront distribution that you created in [Step 4](#cf-s3-step4). 

   To find the domain name of your CloudFront distribution, do the following:

   1. In a new browser tab, sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

   1. In the left navigation pane, choose **Distributions**.

   1. In the **Origins** column, find the correct CloudFront distribution by looking for its origin name, which starts with the name of the S3 bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**).

   1. After finding the distribution in the list, widen the **Domain name** column to see the domain name value for your CloudFront distribution. 

1. On the **Create record** page in the Route 53 console, for the remaining settings, keep the defaults. 

1. Choose **Create records**.

### Check whether IPv6 is enabled for your distribution and create another DNS record if needed


If IPv6 is enabled for your distribution, you must create another DNS record. 

1. To check whether IPv6 is enabled for your distribution, do the following:

   1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

   1. In the left navigation pane, choose **Distributions**.

   1. Choose the ID of the CloudFront distribution that you created in [Step 4](#cf-s3-step4).

   1. On the **General** tab, under **Settings**, check whether **IPv6** is set to **Enabled**. 

      If IPv6 is enabled for your distribution, you must create another DNS record.

1. If IPv6 is enabled for your distribution, do the following to create a DNS record:

   1. Sign in to the AWS Management Console and open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

   1. In the left navigation pane, choose **Hosted zones**.

   1. On the **Hosted zones** page, choose the name of the hosted zone that Route 53 created for you in [Prerequisites](#cf-s3-prerequisites) (for example, **example.com**).

   1. Choose **Create record**, and then use the **Quick create record** method.

   1. For **Record name**, in the text field before your custom domain name, type the same value that you typed when you created the IPv4 DNS record earlier. For example, in this tutorial, to route traffic for the subdomain `www.example.com`, enter only **www**. 

   1. For **Record type**, choose **AAAA - Routes traffic to an IPv6 address and some AWS resources**. 

   1. For **Value**, choose the **Alias** toggle to enable the alias resource. 

   1. Under **Route traffic to**, choose **Alias to CloudFront distribution** from the dropdown list. 

   1. In the search box that says **Choose distribution**, choose the domain name of the CloudFront distribution that you created in [Step 4](#cf-s3-step4). 

   1. For the remaining settings, keep the defaults. 

   1. Choose **Create records**.
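The A and AAAA alias records created in the steps above correspond to record-change payloads that Route 53 also accepts through its API. The following Python sketch is illustrative only: the domain and distribution names are placeholders from this tutorial, and `Z2FDTNDATAQYW2` is the fixed hosted zone ID that Route 53 documents for CloudFront alias targets.

```python
# Sketch: build Route 53 alias record changes (A for IPv4, AAAA for IPv6)
# that point a subdomain at a CloudFront distribution. Names below are
# tutorial placeholders, not real resources.

CLOUDFRONT_HOSTED_ZONE_ID = "Z2FDTNDATAQYW2"  # fixed zone ID for CloudFront alias targets

def alias_record_changes(record_name, distribution_domain):
    """Return UPSERT changes for both the A and AAAA alias records."""
    return [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": record_type,
                "AliasTarget": {
                    "HostedZoneId": CLOUDFRONT_HOSTED_ZONE_ID,
                    "DNSName": distribution_domain,
                    "EvaluateTargetHealth": False,
                },
            },
        }
        for record_type in ("A", "AAAA")
    ]

changes = alias_record_changes("www.example.com", "d111111abcdef8.cloudfront.net")
print([c["ResourceRecordSet"]["Type"] for c in changes])  # ['A', 'AAAA']
```

Both records share the same alias target; only the **Record type** differs, which is why the console steps for the AAAA record mirror the A record exactly.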

## Step 7: Access the S3 video through the CloudFront distribution with the custom domain name


To access the S3 video using the custom URL, you must combine your alternate domain name with the path to the video in the S3 bucket. 

**To create a custom URL to access the S3 video through the CloudFront distribution**

1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. In the left navigation pane, choose **Distributions**.

1. To get the alternate domain name of your CloudFront distribution, do the following:

   1. In the **Origins** column, find the correct CloudFront distribution by looking for its origin name, which starts with the S3 bucket name for the bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**). 

   1. After finding the distribution in the list, widen the **Alternate domain names** column to copy the value of the alternate domain name of your CloudFront distribution.

1. In a new browser tab, paste the alternate domain name of the CloudFront distribution. 

1. Return to the previous browser tab, and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/). 

1. Find the path to your S3 video, as explained in [Step 5](#cf-s3-step5). 

1. Return to the browser tab where you previously pasted the alternate domain name, enter a forward slash (**/**), and then paste the path to your S3 video (for example, `sample.mp4`). 

   Now, your S3 video is publicly accessible and hosted through CloudFront at a custom URL that looks similar to the following: 

   ```
   https://CloudFront distribution alternate domain name/Path to the S3 video
   ```

   Replace *CloudFront distribution alternate domain name* and *Path to the S3 video* with the appropriate values. An example URL is **https://www.example.com/sample.mp4**.
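Assembling the two parts can be sketched in Python (the domain and object key are placeholders from this tutorial):

```python
# Sketch: join the distribution's alternate domain name and the S3 object
# key into the custom HTTPS URL described above.

def custom_url(alternate_domain, object_key):
    """Build https://<alternate domain>/<object key>, tolerating a leading slash."""
    return f"https://{alternate_domain}/{object_key.lstrip('/')}"

print(custom_url("www.example.com", "sample.mp4"))  # https://www.example.com/sample.mp4
```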

## (Optional) Step 8: View data about requests received by your CloudFront distribution


**To view data about requests received by your CloudFront distribution**

1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. In the left navigation pane, under **Reports & analytics**, choose a report, such as **Cache statistics**, **Popular objects**, **Top referrers**, **Usage**, or **Viewers**. 

   You can filter each report dashboard. For more information, see [CloudFront Reports in the Console](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/reports.html) in the *Amazon CloudFront Developer Guide*. 

1. To filter data, choose the ID of the CloudFront distribution that you created in [Step 4](#cf-s3-step4).

## Step 9: Clean up


If you hosted an S3 streaming video using CloudFront and Route 53 only as a learning exercise, delete the AWS resources that you allocated so that you no longer accrue charges.

**Note**  
When you register a domain, it costs money immediately and it's irreversible. You can choose not to auto-renew the domain, but you pay up front and own it for the year. For more information, see [Registering a new domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register.html) in the *Amazon Route 53 Developer Guide*. 

**Topics**
+ [Delete the CloudFront distribution](#cf-s3-step9-delete-cf)
+ [Delete the DNS record](#cf-s3-step9-delete-dns)
+ [Delete the public hosted zone for your custom domain](#cf-s3-step9-delete-hosted-zone)
+ [Delete the custom domain name from Route 53](#cf-s3-step9-delete-domain)
+ [Delete the original video in the S3 source bucket](#cf-s3-step9-delete-video)
+ [Delete the S3 source bucket](#cf-s3-step9-delete-bucket)

### Delete the CloudFront distribution


1. Sign in to the AWS Management Console and open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. In the left navigation pane, choose **Distributions**.

1. In the **Origins** column, find the correct CloudFront distribution by looking for its origin name, which starts with the S3 bucket name for the bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**). 

1. To delete the CloudFront distribution, you must disable it first.
   + If the value of the **Status** column is **Enabled** and the value of **Last modified** is the timestamp when the distribution was last modified, continue to disable the distribution before deleting it.
   + If the value of **Status** is **Enabled** and the value of **Last modified** is **Deploying**, wait until the value of **Last modified** changes from **Deploying** to the timestamp when the distribution was last modified. Then continue to disable the distribution before deleting it.

1. To disable the CloudFront distribution, do the following:

   1. In the **Distributions** list, select the check box next to the ID for the distribution that you want to delete. 

   1. To disable the distribution, choose **Disable**, and then choose **Disable** to confirm.

      If you disable a distribution that has an alternate domain name associated with it, CloudFront stops accepting traffic for that domain name (such as `www.example.com`), even if another distribution has an alternate domain name with a wildcard (`*`) that matches the same domain (such as `*.example.com`).

   1. The value of **Status** immediately changes to **Disabled**. Wait until the value of **Last modified** changes from **Deploying** to the timestamp when the distribution was last modified. 

      Because CloudFront must propagate this change to all edge locations, it might take a few minutes before the update is complete and the **Delete** option is available for you to delete the distribution. 

1. To delete the disabled distribution, do the following:

   1. Choose the check box next to the ID for the distribution that you want to delete.

   1. Choose **Delete**, and then choose **Delete** to confirm.
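The note in the disable step mentions that traffic stops even when another distribution has a matching wildcard alternate domain name. How a wildcard such as `*.example.com` covers a domain can be sketched as a matching rule (assumption: the wildcard replaces exactly one leftmost label, as with ACM wildcard certificates, so `*.example.com` covers `www.example.com` but not `example.com` or `a.b.example.com`):

```python
# Sketch: check whether a domain is covered by a wildcard alternate domain
# name. Assumes the wildcard stands in for exactly one leftmost label.

def wildcard_covers(wildcard, domain):
    if not wildcard.startswith("*."):
        return wildcard == domain  # no wildcard: exact match only
    base = wildcard[2:]
    prefix, _, rest = domain.partition(".")
    # The remainder after the first label must equal the wildcard's base,
    # and the first label must be a single non-empty label.
    return rest == base and prefix != "" and "." not in prefix

print(wildcard_covers("*.example.com", "www.example.com"))   # True
print(wildcard_covers("*.example.com", "example.com"))       # False
```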

### Delete the DNS record


If you want to delete the public hosted zone for the domain (including the DNS record), see [Delete the public hosted zone for your custom domain](#cf-s3-step9-delete-hosted-zone). If you only want to delete the DNS record created in [Step 6](#cf-s3-step6), do the following:

1. Sign in to the AWS Management Console and open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

1. In the left navigation pane, choose **Hosted zones**.

1. On the **Hosted zones** page, choose the name of the hosted zone that Route 53 created for you in [Prerequisites](#cf-s3-prerequisites) (for example, **example.com**).

1. In the list of records, select the check box next to the records that you want to delete (the records that you created in [Step 6](#cf-s3-step6)). 
**Note**  
You can't delete records that have a **Type** value of **NS** or **SOA**. 

1. Choose **Delete records**. 

1. To confirm the deletion, choose **Delete**.

   Changes to records take time to propagate to the Route 53 DNS servers. Currently, the only way to verify that your changes have propagated is to use the [GetChange API action](https://docs.aws.amazon.com/Route53/latest/APIReference/API_GetChange.html). Changes usually propagate to all Route 53 name servers within 60 seconds.

### Delete the public hosted zone for your custom domain


**Warning**  
If you want to keep your domain registration but stop routing internet traffic to your website or web application, we recommend that you delete records in the hosted zone (as described in the prior section) instead of deleting the hosted zone.   
If you delete a hosted zone, someone else can use the domain and route traffic to their own resources using your domain name.  
In addition, if you delete a hosted zone, you can't undelete it. You must create a new hosted zone and update the name servers for your domain registration, which can take up to 48 hours to take effect.   
If you want to make the domain unavailable on the internet, you can first transfer your DNS service to a free DNS service and then delete the Route 53 hosted zone. This prevents future DNS queries from possibly being misrouted.   
If the domain is registered with Route 53, see [Adding or changing name servers and glue records for a domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-name-servers-glue-records.html) in the *Amazon Route 53 Developer Guide* for information about how to replace Route 53 name servers with name servers for the new DNS service. 
If the domain is registered with another registrar, use the method provided by the registrar to change name servers for the domain. 
If you're deleting a hosted zone for a subdomain (`www.example.com`), you don't need to change name servers for the domain (`example.com`).

1. Sign in to the AWS Management Console and open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

1. In the left navigation pane, choose **Hosted zones**.

1. On the **Hosted zones** page, choose the name of the hosted zone that you want to delete.

1. On the **Records** tab for your hosted zone, confirm that the hosted zone that you want to delete contains only an **NS** and an **SOA** record.

   If it contains additional records, delete them first.

   If you created any NS records for subdomains in the hosted zone, delete those records too.

1. On the **DNSSEC signing** tab for your hosted zone, disable DNSSEC signing if it was enabled. For more information, see [Disabling DNSSEC signing](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-dnssec-disable.html) in the *Amazon Route 53 Developer Guide*.

1. At the top of the details page of the hosted zone, choose **Delete zone**.

1. To confirm the deletion, enter **delete**, and then choose **Delete**.

### Delete the custom domain name from Route 53


For most top-level domains (TLDs), you can delete the registration if you no longer want it. If you delete a domain name registration from Route 53 before the registration is scheduled to expire, AWS does not refund the registration fee. For more information, see [Deleting a domain name registration](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-delete.html) in the *Amazon Route 53 Developer Guide*.

**Important**  
If you want to transfer the domain between AWS accounts or transfer the domain to another registrar, don't delete the domain and expect to immediately reregister it. Instead, see the applicable documentation in the *Amazon Route 53 Developer Guide*:  
[Transferring a domain to a different AWS account](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-transfer-between-aws-accounts.html)
[Transferring a domain from Amazon Route 53 to another registrar](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-transfer-from-route-53.html)

### Delete the original video in the S3 source bucket


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Bucket name** list, choose the name of the bucket that you uploaded the video to in [Step 2](#cf-s3-step2) (for example, **tutorial-bucket**).

1. On the **Objects** tab, select the check box next to the name of the object that you want to delete (for example, `sample.mp4`).

1. Choose **Delete**. 

1. Under **Permanently delete objects?**, enter **permanently delete** to confirm that you want to delete this object.

1. Choose **Delete objects**.

### Delete the S3 source bucket


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, select the option button next to the name of the bucket that you created in [Step 1](#cf-s3-step1) (for example, **tutorial-bucket**).

1. Choose **Delete**.

1. On the **Delete bucket** page, confirm that you want to delete the bucket by entering the bucket name in the text field, and then choose **Delete bucket**.

## Next steps


After you complete this tutorial, you can further explore the following related use cases:
+ Transcode S3 videos into streaming formats needed by a particular television or connected device before hosting these videos with a CloudFront distribution.

  To use Amazon S3 Batch Operations, AWS Lambda and AWS Elemental MediaConvert to batch-transcode a collection of videos to a variety of output media formats, see [Tutorial: Batch-transcoding videos with S3 Batch Operations](tutorial-s3-batchops-lambda-mediaconvert-video.md). 
+ Host other objects stored in S3, such as images, audio, motion graphics, style sheets, HTML, JavaScript, React apps, and so on, using CloudFront and Route 53.

  For example, see [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md) and [Speeding up your website with Amazon CloudFront](website-hosting-cloudfront-walkthrough.md). 
+ Use [Amazon S3 Transfer Acceleration](https://aws.amazon.com/s3/transfer-acceleration) to configure fast and secure file transfers. Transfer Acceleration can speed up video uploading to your S3 bucket for long-distance transfer of larger videos. Transfer Acceleration improves transfer performance by routing traffic through the CloudFront globally distributed edge locations and over the AWS backbone networks. It also uses network protocol optimizations. For more information, see [Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration](transfer-acceleration.md). 

# Tutorial: Configuring a static website on Amazon S3
Configuring a static website

**Important**  
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS CLI and AWS SDKs. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html).

You can configure an Amazon S3 bucket to function like a website. This example walks you through the steps of hosting a website on Amazon S3.

**Important**  
The following tutorial requires disabling Block Public Access. We recommend keeping Block Public Access enabled. If you want to keep all four Block Public Access settings enabled and host a static website, you can use Amazon CloudFront origin access control (OAC). Amazon CloudFront provides the capabilities required to set up a secure static website. Amazon S3 static websites support only HTTP endpoints. Amazon CloudFront uses the durable storage of Amazon S3 while providing additional security headers, such as HTTPS. HTTPS adds security by encrypting a normal HTTP request and protecting against common cyberattacks. For more information, see [Getting started with a secure static website](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html) in the *Amazon CloudFront Developer Guide*. 

**Topics**
+ [Step 1: Create a bucket](#step1-create-bucket-config-as-website)
+ [Step 2: Enable static website hosting](#step2-create-bucket-config-as-website)
+ [Step 3: Edit Block Public Access settings](#step3-edit-block-public-access)
+ [Step 4: Add a bucket policy that makes your bucket content publicly available](#step4-add-bucket-policy-make-content-public)
+ [Step 5: Configure an index document](#step5-upload-index-doc)
+ [Step 6: Configure an error document](#step6-upload-error-doc)
+ [Step 7: Test your website endpoint](#step7-test-web-site)
+ [Step 8: Clean up](#getting-started-cleanup-s3-website-overview)

## Step 1: Create a bucket


The following instructions provide an overview of how to create your buckets for website hosting. For detailed, step-by-step instructions on creating a bucket, see [Creating a general purpose bucket](create-bucket-overview.md).

**To create a bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose **Create bucket**.

1. Enter the **Bucket name** (for example, **example.com**).

1. Choose the Region where you want to create the bucket. 

   Choose a Region that is geographically close to you to minimize latency and costs, or to address regulatory requirements. The Region that you choose determines your Amazon S3 website endpoint. For more information, see [Website endpoints](WebsiteEndpoints.md).

1. To accept the default settings and create the bucket, choose **Create**.

## Step 2: Enable static website hosting


After you create a bucket, you can enable static website hosting for your bucket. You can create a new bucket or use an existing bucket.

**To enable static website hosting**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable static website hosting for.

1. Choose **Properties**.

1. Under **Static website hosting**, choose **Edit**.

1. Choose **Use this bucket to host a website**. 

1. Under **Static website hosting**, choose **Enable**.

1. In **Index document**, enter the file name of the index document, typically `index.html`. 

   The index document name is case sensitive and must exactly match the file name of the HTML index document that you plan to upload to your S3 bucket. When you configure a bucket for website hosting, you must specify an index document. Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders. For more information, see [Configuring an index document](IndexDocumentSupport.md).

1. To provide your own custom error document for 4XX class errors, in **Error document**, enter the custom error document file name. 

   The error document name is case sensitive and must exactly match the file name of the HTML error document that you plan to upload to your S3 bucket. If you don't specify a custom error document and an error occurs, Amazon S3 returns a default HTML error document. For more information, see [Configuring a custom error document](CustomErrorDocSupport.md).

1. (Optional) If you want to specify advanced redirection rules, in **Redirection rules**, enter JSON to describe the rules.

   For example, you can conditionally route requests according to specific object key names or prefixes in the request. For more information, see [Configure redirection rules to use advanced conditional redirects](how-to-page-redirect.md#advanced-conditional-redirects).

1. Choose **Save changes**.

   Amazon S3 enables static website hosting for your bucket. At the bottom of the page, under **Static website hosting**, you see the website endpoint for your bucket.

1. Under **Static website hosting**, note the **Endpoint**.

   The **Endpoint** is the Amazon S3 website endpoint for your bucket. After you finish configuring your bucket as a static website, you can use this endpoint to test your website.
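The endpoint shown in the console follows a predictable pattern based on the bucket name and Region. A minimal sketch follows; note that Amazon S3 website endpoints use either a dash or a dot before the Region name depending on the Region (the set of dot-format Regions below is a partial, illustrative assumption; check [Website endpoints](WebsiteEndpoints.md) for your Region):

```python
# Sketch: derive the S3 website endpoint for a bucket. Older Regions such
# as us-east-1 use the dash format (s3-website-Region); some newer Regions
# use the dot format (s3-website.Region). DOT_REGIONS is illustrative only.

DOT_REGIONS = {"af-south-1", "ap-east-1", "eu-south-1", "me-south-1"}

def website_endpoint(bucket, region):
    sep = "." if region in DOT_REGIONS else "-"
    return f"http://{bucket}.s3-website{sep}{region}.amazonaws.com"

print(website_endpoint("example.com", "us-east-1"))
# http://example.com.s3-website-us-east-1.amazonaws.com
```

Website endpoints are HTTP only; to serve the site over HTTPS, put a CloudFront distribution in front of the bucket, as discussed in the **Important** note at the start of this tutorial.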

## Step 3: Edit Block Public Access settings


By default, Amazon S3 blocks public access to your account and buckets. If you want to use a bucket to host a static website, you can use these steps to edit your block public access settings. 

**Warning**  
Before you complete these steps, review [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md) to ensure that you understand and accept the risks involved with allowing public access. When you turn off block public access settings to make your bucket public, anyone on the internet can access your bucket. We recommend that you block all public access to your buckets.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the bucket that you have configured as a static website.

1. Choose **Permissions**.

1. Under **Block public access (bucket settings)**, choose **Edit**.

1. Clear **Block *all* public access**, and choose **Save changes**.  
![\[The Amazon S3 console, showing the block public access bucket settings.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/edit-public-access-clear.png)

   Amazon S3 turns off the Block Public Access settings for your bucket. To create a public static website, you might also have to [edit the Block Public Access settings](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html) for your account before adding a bucket policy. If the Block Public Access settings for your account are currently turned on, you see a note under **Block public access (bucket settings)**.

## Step 4: Add a bucket policy that makes your bucket content publicly available


After you edit S3 Block Public Access settings, you can add a bucket policy to grant public read access to your bucket. When you grant public read access, anyone on the internet can access your bucket.

**Important**  
The following policy is an example only and allows full access to the contents of your bucket. Before you proceed with this step, review [How can I secure the files in my Amazon S3 bucket?](https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/) to ensure that you understand the best practices for securing the files in your S3 bucket and risks involved in granting public access.

1. Under **Buckets**, choose the name of your bucket.

1. Choose **Permissions**.

1. Under **Bucket Policy**, choose **Edit**.

1. To grant public read access for your website, copy the following bucket policy, and paste it in the **Bucket policy editor**.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "PublicReadGetObject",
               "Effect": "Allow",
               "Principal": "*",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::Bucket-Name/*"
               ]
           }
       ]
   }
   ```

1. Update the `Resource` to your bucket name.

   In the preceding example bucket policy, *Bucket-Name* is a placeholder for the bucket name. To use this bucket policy with your own bucket, you must update this name to match your bucket name.

1. Choose **Save changes**.

   A message appears indicating that the bucket policy has been successfully added.

   If you see an error that says `Policy has invalid resource`, confirm that the bucket name in the bucket policy matches your bucket name. For information about adding a bucket policy, see [How do I add an S3 bucket policy?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-bucket-policy.html)

   If you get an error message and cannot save the bucket policy, check your account and bucket Block Public Access settings to confirm that you allow public access to the bucket.
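As a quick sanity check before saving, you can template the policy with your bucket name and confirm the result is well-formed JSON. A minimal Python sketch (the bucket name is a placeholder):

```python
import json

# Sketch: fill the tutorial's public-read bucket policy with a real bucket
# name so the Resource ARN matches the bucket exactly.

def public_read_policy(bucket_name):
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(public_read_policy("example.com"))
```

A mismatched bucket name in the `Resource` ARN is exactly what produces the `Policy has invalid resource` error described above.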

## Step 5: Configure an index document


When you enable static website hosting for your bucket, you enter the name of the index document (for example, **index.html**). After you enable static website hosting for the bucket, you upload an HTML file with this index document name to your bucket.

**To configure the index document**

1. Create an `index.html` file.

   If you don't have an `index.html` file, you can use the following HTML to create one:

   ```
   <html xmlns="http://www.w3.org/1999/xhtml">
   <head>
       <title>My Website Home Page</title>
   </head>
   <body>
     <h1>Welcome to my website</h1>
     <p>Now hosted on Amazon S3!</p>
   </body>
   </html>
   ```

1. Save the index file locally.

   The index document file name must exactly match the index document name that you enter in the **Static website hosting** dialog box. The index document name is case sensitive. For example, if you enter `index.html` for the **Index document** name in the **Static website hosting** dialog box, your index document file name must also be `index.html` and not `Index.html`.
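The exact-match requirement can be expressed as a simple check: no case folding is applied, so the comparison is plain string equality.

```python
# Sketch: the index document name configured on the bucket must match the
# uploaded file name exactly, including case.

def matches_index_document(configured_name, uploaded_name):
    # "Index.html" does not satisfy a configured "index.html".
    return configured_name == uploaded_name

print(matches_index_document("index.html", "index.html"))  # True
print(matches_index_document("index.html", "Index.html"))  # False
```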

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to use to host a static website.

1. Enable static website hosting for your bucket, and enter the exact name of your index document (for example, `index.html`). For more information, see [Enabling website hosting](EnableWebsiteHosting.md).

   After enabling static website hosting, proceed to the next step to upload your index document.

1. To upload the index document to your bucket, do one of the following:
   + Drag and drop the index file into the console bucket listing.
   + Choose **Upload**, and follow the prompts to choose and upload the index file.

   For step-by-step instructions, see [Uploading objects](upload-objects.md).

1. (Optional) Upload other website content to your bucket.
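
If you prefer to script this step, the following Python sketch (not part of the console procedure) writes the sample index document to a local directory, keeping the exact, case-sensitive file name that you configured in the **Static website hosting** dialog box. You could then upload the file with the console or with a command such as `aws s3 cp`.

```python
from pathlib import Path

# The sample index document from the procedure above.
INDEX_HTML = """\
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>My Website Home Page</title>
</head>
<body>
  <h1>Welcome to my website</h1>
  <p>Now hosted on Amazon S3!</p>
</body>
</html>
"""

def write_index(directory: str, index_name: str = "index.html") -> Path:
    """Write the sample index document under the exact configured name.

    The object key in S3 must match the configured index document exactly;
    a file named "Index.html" would not be served as the index page.
    """
    path = Path(directory) / index_name
    path.write_text(INDEX_HTML, encoding="utf-8")
    return path
```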

## Step 6: Configure an error document


When you enable static website hosting for your bucket, you enter the name of the error document (for example, **404.html**). After you enable static website hosting for the bucket, you upload an HTML file with this error document name to your bucket.

**To configure an error document**

1. Create an error document, for example `404.html`.

1. Save the error document file locally.

   The error document name is case sensitive and must exactly match the name that you enter when you enable static website hosting. For example, if you enter `404.html` for the **Error document** name in the **Static website hosting** dialog box, your error document file name must also be `404.html`.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to use to host a static website.

1. Enable static website hosting for your bucket, and enter the exact name of your error document (for example, `404.html`). For more information, see [Enabling website hosting](EnableWebsiteHosting.md) and [Configuring a custom error document](CustomErrorDocSupport.md).

   After enabling static website hosting, proceed to the next step to upload your error document.

1. To upload the error document to your bucket, do one of the following:
   + Drag and drop the error document file into the console bucket listing.
   + Choose **Upload**, and follow the prompts to choose and upload the error document file.

   For step-by-step instructions, see [Uploading objects](upload-objects.md).

## Step 7: Test your website endpoint


After you configure static website hosting for your bucket, you can test your website endpoint.

**Note**  
Amazon S3 does not support HTTPS access to the website. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.  
For more information, see [How do I use CloudFront to serve a static website hosted on Amazon S3?](https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/) and [Requiring HTTPS for communication between viewers and CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html).

1. Under **Buckets**, choose the name of your bucket.

1. Choose **Properties**.

1. At the bottom of the page, under **Static website hosting**, choose your **Bucket website endpoint**.

   Your index document opens in a separate browser window.

You now have a website hosted on Amazon S3. This website is available at the Amazon S3 website endpoint. However, you might have a domain, such as `example.com`, that you want to use to serve the content from the website you created. You might also want to use Amazon S3 root domain support to serve requests for both `http://www.example.com` and `http://example.com`. This requires additional steps. For an example, see [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md). 
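
For reference, the website endpoint that you test above follows a predictable pattern. The following Python sketch (an illustration, not an official AWS utility) builds the endpoint URL from a bucket name and Region; note that older Regions use a dash between `s3-website` and the Region, while newer Regions use a dot, so verify the format for your Region in the AWS service endpoints reference.

```python
def website_endpoint(bucket: str, region: str, dot_format: bool = False) -> str:
    """Build an S3 website endpoint URL for a bucket.

    Older Regions (for example, us-west-2) separate "s3-website" and the
    Region with a dash; newer Regions use a dot. Check the AWS service
    endpoints reference for your Region's format before relying on this.
    """
    sep = "." if dot_format else "-"
    return f"http://{bucket}.s3-website{sep}{region}.amazonaws.com"
```

For example, `website_endpoint("example.com", "us-west-2")` produces the URL you would open in a browser to test a us-west-2 bucket.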

## Step 8: Clean up


If you created your static website only as a learning exercise, delete the AWS resources that you allocated so that you no longer accrue charges. After you delete your AWS resources, your website is no longer available. For more information, see [Deleting a general purpose bucket](delete-bucket.md).

# Tutorial: Configuring a static website using a custom domain registered with Route 53

Suppose that you want to host a static website on Amazon S3. You've registered a domain with Amazon Route 53 (for example, `example.com`), and you want requests for `http://www.example.com` and `http://example.com` to be served from your Amazon S3 content. You can use this walkthrough to learn how to host a static website and create redirects on Amazon S3 for a website with a custom domain name that is registered with Route 53. You can work with an existing website that you want to host on Amazon S3, or use this walkthrough to start from scratch. 

After you complete this walkthrough, you can optionally use Amazon CloudFront to improve the performance of your website. For more information, see [Speeding up your website with Amazon CloudFront](website-hosting-cloudfront-walkthrough.md).

**Note**  
Amazon S3 website endpoints do not support HTTPS or access points. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.  
For a tutorial about how to host your content securely with CloudFront and Amazon S3, see [Tutorial: Hosting on-demand streaming video with Amazon S3, Amazon CloudFront, and Amazon Route 53](tutorial-s3-cloudfront-route53-video-streaming.md). For more information, see [How do I use CloudFront to serve a static website hosted on Amazon S3?](https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/) and [Requiring HTTPS for communication between viewers and CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html).

**Automating static website setup with an AWS CloudFormation template**  
You can use an AWS CloudFormation template to automate your static website setup. The template sets up the components that you need to host a secure static website so that you can focus more on your website’s content and less on configuring components.

The CloudFormation template includes the following components:
+ Amazon S3 – Creates an Amazon S3 bucket to host your static website.
+ CloudFront – Creates a CloudFront distribution to speed up your static website.
+ Lambda@Edge – Uses [Lambda@Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html) to add security headers to every server response. Security headers are a group of headers in the web server response that tell web browsers to take extra security precautions. For more information, see the blog post [Adding HTTP security headers using Lambda@Edge and Amazon CloudFront](https://aws.amazon.com/blogs/networking-and-content-delivery/adding-http-security-headers-using-lambdaedge-and-amazon-cloudfront/).

This CloudFormation template is available for you to download and use. For information and instructions, see [Getting started with a secure static website](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html) in the *Amazon CloudFront Developer Guide*.

**Topics**
+ [Before you begin](#root-domain-walkthrough-before-you-begin)
+ [Step 1: Register a custom domain with Route 53](#website-hosting-custom-domain-walkthrough-domain-registry)
+ [Step 2: Create two buckets](#root-domain-walkthrough-create-buckets)
+ [Step 3: Configure your root domain bucket for website hosting](#root-domain-walkthrough-configure-bucket-aswebsite)
+ [Step 4: Configure your subdomain bucket for website redirect](#root-domain-walkthrough-configure-redirect)
+ [Step 5: Configure logging for website traffic](#root-domain-walkthrough-configure-logging)
+ [Step 6: Upload index and website content](#upload-website-content)
+ [Step 7: Upload an error document](#configure-error-document-root-domain)
+ [Step 8: Edit S3 Block Public Access settings](#root-domain-walkthrough-configure-bucket-permissions)
+ [Step 9: Attach a bucket policy](#add-bucket-policy-root-domain)
+ [Step 10: Test your domain endpoint](#root-domain-walkthrough-test-website)
+ [Step 11: Add alias records for your domain and subdomain](#root-domain-walkthrough-add-record-to-hostedzone)
+ [Step 12: Test the website](#root-domain-testing)
+ [Speeding up your website with Amazon CloudFront](website-hosting-cloudfront-walkthrough.md)
+ [Cleaning up your example resources](getting-started-cleanup.md)

## Before you begin


As you follow the steps in this example, you work with the following services:

**Amazon Route 53** – You use Route 53 to register domains and to define where you want to route internet traffic for your domain. The example shows how to create Route 53 alias records that route traffic for your domain (`example.com`) and subdomain (`www.example.com`) to an Amazon S3 bucket that contains an HTML file.

**Amazon S3** – You use Amazon S3 to create buckets, upload a sample website page, configure permissions so that everyone can see the content, and then configure the buckets for website hosting.

## Step 1: Register a custom domain with Route 53


If you don't already have a registered domain name, such as `example.com`, register one with Route 53. For more information, see [Registering a new domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register.html) in the *Amazon Route 53 Developer Guide*. After you register your domain name, you can create and configure your Amazon S3 buckets for website hosting. 

## Step 2: Create two buckets


To support requests from both the root domain and subdomain, you create two buckets.
+ **Domain bucket** – `example.com`
+ **Subdomain bucket** – `www.example.com` 

These bucket names must match your domain name exactly. In this example, the domain name is `example.com`. You host your content out of the root domain bucket (`example.com`). You create a redirect request for the subdomain bucket (`www.example.com`). If someone enters `www.example.com` in their browser, they are redirected to `example.com` and see the content that is hosted in the Amazon S3 bucket with that name. 
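
Because the bucket names must mirror the domain exactly, it can help to derive them mechanically. The following Python sketch (an illustration only) returns the two bucket names for a given root domain; the exact-match requirement exists because S3 website hosting resolves requests by matching the request's host name to a bucket name.

```python
def website_bucket_names(root_domain: str) -> dict:
    """Return the bucket names required for root-domain website hosting.

    The names must match the domain exactly, because S3 website hosting
    maps the requested host name to a bucket of the same name.
    """
    return {
        "root": root_domain,                # hosts the website content
        "subdomain": f"www.{root_domain}",  # redirects to the root domain
    }
```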

**To create your buckets for website hosting**

The following instructions provide an overview of how to create your buckets for website hosting. For detailed, step-by-step instructions on creating a bucket, see [Creating a general purpose bucket](create-bucket-overview.md).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create your root domain bucket: 

   1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

   1. In the left navigation pane, choose **General purpose buckets**.

   1. Choose **Create bucket**. The **Create bucket** page opens.

   1. Enter the **Bucket name** (for example, **example.com**).

   1. Choose the Region where you want to create the bucket. 

      Choose a Region that is geographically close to you to minimize latency and costs, or to address regulatory requirements. The Region that you choose determines your Amazon S3 website endpoint. For more information, see [Website endpoints](WebsiteEndpoints.md).

   1. To accept the default settings and create the bucket, choose **Create**.

1. Create your subdomain bucket: 

   1. Choose **Create bucket**.

   1. Enter the **Bucket name** (for example, **www.example.com**).

   1. Choose the Region where you want to create the bucket. 

      Choose a Region that is geographically close to you to minimize latency and costs, or to address regulatory requirements. The Region that you choose determines your Amazon S3 website endpoint. For more information, see [Website endpoints](WebsiteEndpoints.md).

   1. To accept the default settings and create the bucket, choose **Create**.

In the next step, you configure `example.com` for website hosting. 

## Step 3: Configure your root domain bucket for website hosting

In this step, you configure your root domain bucket (`example.com`) as a website. This bucket will contain your website content. When you configure a bucket for website hosting, you can access the website using the [Website endpoints](WebsiteEndpoints.md). 

**To enable static website hosting**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable static website hosting for.

1. Choose **Properties**.

1. Under **Static website hosting**, choose **Edit**.

1. Under **Static website hosting**, choose **Enable**.

1. For the hosting type, choose **Host a static website**.

1. In **Index document**, enter the file name of the index document, typically `index.html`. 

   The index document name is case sensitive and must exactly match the file name of the HTML index document that you plan to upload to your S3 bucket. When you configure a bucket for website hosting, you must specify an index document. Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders. For more information, see [Configuring an index document](IndexDocumentSupport.md).

1. To provide your own custom error document for 4XX class errors, in **Error document**, enter the custom error document file name. 

   The error document name is case sensitive and must exactly match the file name of the HTML error document that you plan to upload to your S3 bucket. If you don't specify a custom error document and an error occurs, Amazon S3 returns a default HTML error document. For more information, see [Configuring a custom error document](CustomErrorDocSupport.md).

1. (Optional) If you want to specify advanced redirection rules, in **Redirection rules**, enter JSON to describe the rules.

   For example, you can conditionally route requests according to specific object key names or prefixes in the request. For more information, see [Configure redirection rules to use advanced conditional redirects](how-to-page-redirect.md#advanced-conditional-redirects).

1. Choose **Save changes**.

   Amazon S3 enables static website hosting for your bucket. At the bottom of the page, under **Static website hosting**, you see the website endpoint for your bucket.

1. Under **Static website hosting**, note the **Endpoint**.

   The **Endpoint** is the Amazon S3 website endpoint for your bucket. After you finish configuring your bucket as a static website, you can use this endpoint to test your website.

After you [edit block public access settings](https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-custom-domain-walkthrough.html#root-domain-walkthrough-configure-bucket-permissions) and [add a bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-custom-domain-walkthrough.html#add-bucket-policy-root-domain) that allows public read access, you can use the website endpoint to access your website. 
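
The console steps above can also be performed with the AWS SDK. The following Python sketch (the bucket name `example.com` is a placeholder) builds the same website configuration with an index document and a custom error document, and applies it with `put_bucket_website` from boto3; calling the function requires AWS credentials with permission on the bucket.

```python
# Static website configuration matching the console settings above:
# an index document and a custom error document for 4XX class errors.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "404.html"},
}

def enable_website_hosting(bucket: str = "example.com") -> None:
    """Apply the configuration with the AWS SDK for Python (boto3).

    Requires AWS credentials at call time; "example.com" is a placeholder
    for your root domain bucket name.
    """
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_website(Bucket=bucket, WebsiteConfiguration=website_configuration)
```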

In the next step, you configure your subdomain (`www.example.com`) to redirect requests to your domain (`example.com`). 

## Step 4: Configure your subdomain bucket for website redirect

After you configure your root domain bucket for website hosting, you can configure your subdomain bucket to redirect all requests to the domain. In this example, all requests for `www.example.com` are redirected to `example.com`.

**To configure a redirect request**

1. On the Amazon S3 console, in the **General purpose buckets** list, choose your subdomain bucket name (`www.example.com` in this example).

1. Choose **Properties**.

1. Under **Static website hosting**, choose **Edit**.

1. Choose **Redirect requests for an object**. 

1. In the **Target bucket** box, enter your root domain, for example, **example.com**.

1. For **Protocol**, choose **http**.

1. Choose **Save changes**.
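
The redirect you configure in the console maps to the `RedirectAllRequestsTo` element of the bucket website configuration. As a sketch (bucket and host names are placeholders), this is how the same redirect could be applied with boto3's `put_bucket_website`:

```python
# Redirect every request for the subdomain bucket to the root domain,
# matching the Target bucket and Protocol chosen in the console.
redirect_configuration = {
    "RedirectAllRequestsTo": {
        "HostName": "example.com",  # placeholder: your root domain
        "Protocol": "http",
    }
}

def configure_redirect(bucket: str = "www.example.com") -> None:
    """Apply the redirect configuration; requires AWS credentials."""
    import boto3
    boto3.client("s3").put_bucket_website(
        Bucket=bucket, WebsiteConfiguration=redirect_configuration
    )
```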

## Step 5: Configure logging for website traffic

If you want to track the number of visitors accessing your website, you can optionally enable logging for your root domain bucket. For more information, see [Logging requests with server access logging](ServerLogs.md). If you plan to use Amazon CloudFront to speed up your website, you can also use CloudFront logging.

**To enable server access logging for your root domain bucket**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the same Region where you created the bucket that is configured as a static website, create a bucket for logging, for example `logs.example.com`.

1. Create a folder for the server access logging log files (for example, `logs`).

1. (Optional) If you want to use CloudFront to improve your website performance, create a folder for the CloudFront log files (for example, `cdn`).
**Important**  
When you create or update a distribution and enable CloudFront logging, CloudFront updates the bucket access control list (ACL) to give the `awslogsdelivery` account `FULL_CONTROL` permissions to write logs to your bucket. For more information, see [Permissions required to configure standard logging and to access your log files](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#AccessLogsBucketAndFileOwnership) in the *Amazon CloudFront Developer Guide*. If the bucket that stores the logs uses the Bucket owner enforced setting for S3 Object Ownership to disable ACLs, CloudFront cannot write logs to the bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

1. In the **Buckets** list, choose your root domain bucket.

1. Choose **Properties**.

1. Under **Server access logging**, choose **Edit**.

1. Choose **Enable**.

1. Under **Target bucket**, choose the bucket and folder destination for the server access logs:
   + Browse to the folder and bucket location:

     1. Choose **Browse S3**.

     1. Choose the bucket name, and then choose the logs folder. 

     1. Choose **Choose path**.
   + Enter the S3 bucket path, for example, `s3://logs.example.com/logs/`.

1. Choose **Save changes**.

   In your log bucket, you can now access your logs. Amazon S3 writes website access logs to your log bucket every 2 hours.
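
Server access logging can likewise be enabled programmatically. The following Python sketch (bucket names are placeholders matching the example above) builds the logging status document and applies it with boto3's `put_bucket_logging`:

```python
# Server access logging destination matching the console example:
# log bucket logs.example.com, with log files under the logs/ prefix.
logging_configuration = {
    "LoggingEnabled": {
        "TargetBucket": "logs.example.com",  # placeholder log bucket
        "TargetPrefix": "logs/",             # folder created for log files
    }
}

def enable_access_logging(bucket: str = "example.com") -> None:
    """Enable server access logging; requires AWS credentials."""
    import boto3
    boto3.client("s3").put_bucket_logging(
        Bucket=bucket, BucketLoggingStatus=logging_configuration
    )
```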

## Step 6: Upload index and website content


In this step, you upload your index document and optional website content to your root domain bucket. 

When you enable static website hosting for your bucket, you enter the name of the index document (for example, **index.html**). After you enable static website hosting for the bucket, you upload an HTML file with this index document name to your bucket.

**To configure the index document**

1. Create an `index.html` file.

   If you don't have an `index.html` file, you can use the following HTML to create one:

   ```
   <html xmlns="http://www.w3.org/1999/xhtml" >
   <head>
       <title>My Website Home Page</title>
   </head>
   <body>
     <h1>Welcome to my website</h1>
     <p>Now hosted on Amazon S3!</p>
   </body>
   </html>
   ```

1. Save the index file locally.

   The index document file name must exactly match the index document name that you enter in the **Static website hosting** dialog box. The index document name is case sensitive. For example, if you enter `index.html` for the **Index document** name in the **Static website hosting** dialog box, your index document file name must also be `index.html` and not `Index.html`.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to use to host a static website.

1. Enable static website hosting for your bucket, and enter the exact name of your index document (for example, `index.html`). For more information, see [Enabling website hosting](EnableWebsiteHosting.md).

   After enabling static website hosting, proceed to the next step to upload your index document.

1. To upload the index document to your bucket, do one of the following:
   + Drag and drop the index file into the console bucket listing.
   + Choose **Upload**, and follow the prompts to choose and upload the index file.

   For step-by-step instructions, see [Uploading objects](upload-objects.md).

1. (Optional) Upload other website content to your bucket.

## Step 7: Upload an error document


When you enable static website hosting for your bucket, you enter the name of the error document (for example, **404.html**). After you enable static website hosting for the bucket, you upload an HTML file with this error document name to your bucket.

**To configure an error document**

1. Create an error document, for example `404.html`.

1. Save the error document file locally.

   The error document name is case sensitive and must exactly match the name that you enter when you enable static website hosting. For example, if you enter `404.html` for the **Error document** name in the **Static website hosting** dialog box, your error document file name must also be `404.html`.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to use to host a static website.

1. Enable static website hosting for your bucket, and enter the exact name of your error document (for example, `404.html`). For more information, see [Enabling website hosting](EnableWebsiteHosting.md) and [Configuring a custom error document](CustomErrorDocSupport.md).

   After enabling static website hosting, proceed to the next step to upload your error document.

1. To upload the error document to your bucket, do one of the following:
   + Drag and drop the error document file into the console bucket listing.
   + Choose **Upload**, and follow the prompts to choose and upload the error document file.

   For step-by-step instructions, see [Uploading objects](upload-objects.md).

## Step 8: Edit S3 Block Public Access settings

In this example, you edit block public access settings for the domain bucket (`example.com`) to allow public access.

By default, Amazon S3 blocks public access to your account and buckets. If you want to use a bucket to host a static website, you can use these steps to edit your block public access settings. 

**Warning**  
Before you complete these steps, review [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md) to ensure that you understand and accept the risks involved with allowing public access. When you turn off block public access settings to make your bucket public, anyone on the internet can access your bucket. We recommend that you block all public access to your buckets.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the bucket that you have configured as a static website.

1. Choose **Permissions**.

1. Under **Block public access (bucket settings)**, choose **Edit**.

1. Clear **Block *all* public access**, and choose **Save changes**.  
![\[The Amazon S3 console, showing the block public access bucket settings.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/edit-public-access-clear.png)

   Amazon S3 turns off the Block Public Access settings for your bucket. To create a public static website, you might also have to [edit the Block Public Access settings](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html) for your account before adding a bucket policy. If the Block Public Access settings for your account are currently turned on, you see a note under **Block public access (bucket settings)**.
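
Clearing **Block *all* public access** corresponds to setting all four Block Public Access flags to `False`. As a sketch (the bucket name is a placeholder, and the same warning about public access applies), the change could be made with boto3's `put_public_access_block`:

```python
# All four Block Public Access settings turned off, equivalent to clearing
# "Block all public access" in the console. Only do this after reviewing
# the risks of allowing public access.
public_access_configuration = {
    "BlockPublicAcls": False,
    "IgnorePublicAcls": False,
    "BlockPublicPolicy": False,
    "RestrictPublicBuckets": False,
}

def allow_public_access(bucket: str = "example.com") -> None:
    """Apply the settings above; requires AWS credentials."""
    import boto3
    boto3.client("s3").put_public_access_block(
        Bucket=bucket, PublicAccessBlockConfiguration=public_access_configuration
    )
```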

## Step 9: Attach a bucket policy


In this example, you attach a bucket policy to the domain bucket (`example.com`) to allow public read access. You replace the *Bucket-Name* in the example bucket policy with the name of your domain bucket, for example `example.com`.

After you edit S3 Block Public Access settings, you can add a bucket policy to grant public read access to your bucket. When you grant public read access, anyone on the internet can access your bucket.

**Important**  
The following policy is an example only and allows full access to the contents of your bucket. Before you proceed with this step, review [How can I secure the files in my Amazon S3 bucket?](https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/) to ensure that you understand the best practices for securing the files in your S3 bucket and risks involved in granting public access.

1. Under **Buckets**, choose the name of your bucket.

1. Choose **Permissions**.

1. Under **Bucket Policy**, choose **Edit**.

1. To grant public read access for your website, copy the following bucket policy, and paste it in the **Bucket policy editor**.

   ```
   {
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "PublicReadGetObject",
               "Effect": "Allow",
               "Principal": "*",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::Bucket-Name/*"
               ]
           }
       ]
   }
   ```

1. Update the `Resource` to your bucket name.

   In the preceding example bucket policy, *Bucket-Name* is a placeholder for the bucket name. To use this bucket policy with your own bucket, you must update this name to match your bucket name.

1. Choose **Save changes**.

   A message appears indicating that the bucket policy has been successfully added.

   If you see an error that says `Policy has invalid resource`, confirm that the bucket name in the bucket policy matches your bucket name. For information about adding a bucket policy, see [How do I add an S3 bucket policy?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-bucket-policy.html)

   If you get an error message and cannot save the bucket policy, check your account and bucket Block Public Access settings to confirm that you allow public access to the bucket.
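
To avoid the `Policy has invalid resource` error, you can render the policy with your bucket name substituted before pasting it into the editor. The following Python sketch (an illustration only) produces the example policy for a given bucket; the resulting string could also be attached programmatically with boto3's `put_bucket_policy`.

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Render the example public-read policy with the Resource set to
    the given bucket name, so the ARN always matches the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
            }
        ],
    }
    return json.dumps(policy, indent=4)
```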

In the next step, you can figure out your website endpoints and test your domain endpoint.

## Step 10: Test your domain endpoint


After you configure your domain bucket to host a public website, you can test your endpoint. For more information, see [Website endpoints](WebsiteEndpoints.md). You can only test the endpoint for your domain bucket because your subdomain bucket is set up for website redirect and not static website hosting. 

**Note**  
Amazon S3 does not support HTTPS access to the website. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.  
For more information, see [How do I use CloudFront to serve a static website hosted on Amazon S3?](https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/) and [Requiring HTTPS for communication between viewers and CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html).

1. Under **Buckets**, choose the name of your bucket.

1. Choose **Properties**.

1. At the bottom of the page, under **Static website hosting**, choose your **Bucket website endpoint**.

   Your index document opens in a separate browser window.

In the next step, you use Amazon Route 53 to enable customers to use both of your custom URLs to navigate to your site. 

## Step 11: Add alias records for your domain and subdomain

In this step, you create the alias records that you add to the hosted zone for your domain. The alias records route traffic for `example.com` and `www.example.com` to the corresponding Amazon S3 website endpoints instead of to IP addresses. Amazon Route 53 maintains a mapping between the alias records and the IP addresses where the Amazon S3 buckets reside. You create two alias records, one for your root domain and one for your subdomain.

### Add an alias record for your root domain and subdomain


**To add an alias record for your root domain (`example.com`)**

1. Open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).
**Note**  
If you don't already use Route 53, see [Step 1: Register a domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started.html#getting-started-find-domain-name) in the *Amazon Route 53 Developer Guide*. After completing your setup, you can resume the instructions.

1. Choose **Hosted zones**.

1. In the list of hosted zones, choose the name of the hosted zone that matches your domain name.

1. Choose **Create record**.

1. Choose **Switch to wizard**.
**Note**  
If you want to use quick create to create your alias records, see [Configuring Route 53 to route traffic to an S3 Bucket](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html#routing-to-s3-bucket-configuring).

1. Choose **Simple routing**, and choose **Next**.

1. Choose **Define simple record**.

1. In **Record name**, accept the default value, which is the name of your hosted zone and your domain.

1. In **Value/Route traffic to**, choose **Alias to S3 website endpoint**.

1. Choose the Region.

1. Choose the S3 bucket.

   The bucket name should match the name that appears in the **Name** box. In the **Choose S3 bucket** list, the bucket name appears with the Amazon S3 website endpoint for the Region where the bucket was created, for example, `s3-website-us-west-1.amazonaws.com (example.com)`.

   **Choose S3 bucket** lists a bucket if:
   + You configured the bucket as a static website.
   + The bucket name is the same as the name of the record that you're creating.
   + The current AWS account created the bucket.

   If your bucket does not appear in the **Choose S3 bucket** list, enter the Amazon S3 website endpoint for the Region where the bucket was created, for example, **s3-website-us-west-2.amazonaws.com**. For a complete list of Amazon S3 website endpoints, see [Amazon S3 Website endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_website_region_endpoints). For more information about the alias target, see [Value/route traffic to](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-alias.html#rrsets-values-alias-alias-target) in the *Amazon Route 53 Developer Guide*.

1. In **Record type**, choose **A ‐ Routes traffic to an IPv4 address and some AWS resources**.

1. For **Evaluate target health**, choose **No**.

1. Choose **Define simple record**.

**To add an alias record for your subdomain (`www.example.com`)**

1. Under **Configure records**, choose **Define simple record**.

1. In **Record name** for your subdomain, type `www`.

1. In **Value/Route traffic to**, choose **Alias to S3 website endpoint**.

1. Choose the Region.

1. Choose the S3 bucket, for example, `s3-website-us-west-2.amazonaws.com (www.example.com)`.

   If your bucket does not appear in the **Choose S3 bucket** list, enter the Amazon S3 website endpoint for the Region where the bucket was created, for example, **s3-website-us-west-2.amazonaws.com**. For a complete list of Amazon S3 website endpoints, see [Amazon S3 Website endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_website_region_endpoints). For more information about the alias target, see [Value/route traffic to](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-alias.html#rrsets-values-alias-alias-target) in the *Amazon Route 53 Developer Guide*.

1. In **Record type**, choose **A ‐ Routes traffic to an IPv4 address and some AWS resources**.

1. For **Evaluate target health**, choose **No**.

1. Choose **Define simple record**.

1. On the **Configure records** page, choose **Create records**.

**Note**  
Changes generally propagate to all Route 53 servers within 60 seconds. When propagation is done, you can route traffic to your Amazon S3 bucket by using the names of the alias records that you created in this procedure.
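
If you prefer to script these records, the same alias records can be created with the Route 53 `ChangeResourceRecordSets` API (for example, `change_resource_record_sets` in `boto3`). The sketch below only builds the request payload; the hosted zone ID shown for the `us-west-2` S3 website endpoint is an assumption that you should confirm against the Amazon S3 website endpoints table:

```python
def alias_change_batch(domain, s3_website_dns, s3_hosted_zone_id):
    """Build a Route 53 ChangeBatch that UPSERTs an alias A record
    pointing `domain` at an S3 website endpoint."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    # Region-specific hosted zone ID of the S3 website
                    # endpoint (assumed value; verify in the endpoints table).
                    "HostedZoneId": s3_hosted_zone_id,
                    "DNSName": s3_website_dns,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = alias_change_batch("example.com",
                           "s3-website-us-west-2.amazonaws.com.",
                           "Z3BJ6K6RIION7M")
```

You would submit one batch for `example.com` and one for `www.example.com`, passing your own hosted zone's ID as the `HostedZoneId` parameter of the API call itself.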

### Add an alias record for your root domain and subdomain (old Route 53 console)


**To add an alias record for your root domain (`example.com`)**

The Route 53 console has been redesigned, but you can temporarily switch back to the old console from within it. If you choose to work with the old Route 53 console, use the following procedure.

1. Open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).
**Note**  
If you don't already use Route 53, see [Step 1: Register a domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started.html#getting-started-find-domain-name) in the *Amazon Route 53 Developer Guide*. After completing your setup, you can resume the instructions.

1. Choose **Hosted Zones**.

1. In the list of hosted zones, choose the name of the hosted zone that matches your domain name.

1. Choose **Create Record Set**.

1. Specify the following values:  
**Name**  
Accept the default value, which is the name of your hosted zone and your domain.   
For the root domain, you don't need to enter any additional information in the **Name** field.  
**Type**  
Choose **A – IPv4 address**.  
**Alias**  
Choose **Yes**.  
**Alias Target**  
In the **S3 website endpoints** section of the list, choose your bucket name.   
The bucket name should match the name that appears in the **Name** box. In the **Alias Target** listing, the bucket name is followed by the Amazon S3 website endpoint for the Region where the bucket was created, for example `example.com (s3-website-us-west-2.amazonaws.com)`. **Alias Target** lists a bucket if:  
   + You configured the bucket as a static website.
   + The bucket name is the same as the name of the record that you're creating.
   + The current AWS account created the bucket.
If your bucket does not appear in the **Alias Target** listing, enter the Amazon S3 website endpoint for the Region where the bucket was created, for example, `s3-website-us-west-2.amazonaws.com`. For a complete list of Amazon S3 website endpoints, see [Amazon S3 Website endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_website_region_endpoints). For more information about the alias target, see [Value/route traffic to](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-alias.html#rrsets-values-alias-alias-target) in the *Amazon Route 53 Developer Guide*.  
**Routing Policy**  
Accept the default value of **Simple**.  
**Evaluate Target Health**  
Accept the default value of **No**.

1. Choose **Create**.

**To add an alias record for your subdomain (`www.example.com`)**

1. In the hosted zone for your root domain (`example.com`), choose **Create Record Set**.

1. Specify the following values:  
**Name**  
For the subdomain, enter `www` in the box.   
**Type**  
Choose **A – IPv4 address**.  
**Alias**  
Choose **Yes**.  
**Alias Target**  
In the **S3 website endpoints** section of the list, choose the same bucket name that appears in the **Name** field—for example, `www.example.com (s3-website-us-west-2.amazonaws.com)`.  
**Routing Policy**  
Accept the default value of **Simple**.  
**Evaluate Target Health**  
Accept the default value of **No**.

1. Choose **Create**.

**Note**  
Changes generally propagate to all Route 53 servers within 60 seconds. When propagation is done, you can route traffic to your Amazon S3 bucket by using the names of the alias records that you created in this procedure.

## Step 12: Test the website


Verify that the website and the redirect work correctly. In your browser, enter your URLs. In this example, you can try the following URLs:
+ **Domain** (`http://example.com`) – Displays the index document in the `example.com` bucket.
+ **Subdomain** (`http://www.example.com`) – Redirects your request to `http://example.com`. You see the index document in the `example.com` bucket.

If your website or redirect links don't work, you can try the following:
+ **Clear cache** – Clear the cache of your web browser.
+ **Check name servers** – If your web page and redirect links don't work after you've cleared your cache, you can compare the name servers for your domain and the name servers for your hosted zone. If the name servers don't match, you might need to update your domain name servers to match those listed under your hosted zone. For more information, see [Adding or changing name servers and glue records for a domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-name-servers-glue-records.html).
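
The name-server comparison in the troubleshooting step above is easy to get wrong by hand, because registrars and Route 53 often print the servers with different casing and trailing dots. A small helper (a sketch, not an AWS API) that normalizes both lists before comparing:

```python
def name_servers_match(domain_ns, hosted_zone_ns):
    """Return True if the domain's registered name servers match the
    hosted zone's NS set, ignoring case and trailing dots."""
    norm = lambda servers: {s.lower().rstrip(".") for s in servers}
    return norm(domain_ns) == norm(hosted_zone_ns)

# Same servers, different formatting: these should match.
name_servers_match(
    ["NS-2048.AWSDNS-64.COM.", "ns-2049.awsdns-65.net"],
    ["ns-2048.awsdns-64.com", "ns-2049.awsdns-65.net."])
```

If the helper returns `False`, update the name servers at your registrar to the four NS values listed in your hosted zone.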

After you've successfully tested your root domain and subdomain, you can set up an [Amazon CloudFront](http://aws.amazon.com/cloudfront) distribution to improve the performance of your website and provide logs that you can use to review website traffic. For more information, see [Speeding up your website with Amazon CloudFront](website-hosting-cloudfront-walkthrough.md).

# Speeding up your website with Amazon CloudFront


You can use [Amazon CloudFront](http://aws.amazon.com/cloudfront) to improve the performance of your Amazon S3 website. CloudFront makes your website files (such as HTML, images, and video) available from data centers around the world (known as *edge locations*). When a visitor requests a file from your website, CloudFront automatically redirects the request to a copy of the file at the nearest edge location. This results in faster download times than if the visitor had requested the content from a data center that is located farther away.

CloudFront caches content at edge locations for a period of time that you specify. If a visitor requests content that has been cached for longer than the expiration date, CloudFront checks the origin server to see if a newer version of the content is available. If a newer version is available, CloudFront copies the new version to the edge location. Changes that you make to the original content are replicated to edge locations as visitors request the content. 

**Using CloudFront without Route 53**  
The tutorial on this page uses Route 53 to point to your CloudFront distribution. However, if you want to serve content hosted in an Amazon S3 bucket using CloudFront without using Route 53, see [Amazon CloudFront Tutorials: Setting up a Dynamic Content Distribution for Amazon S3](https://aws.amazon.com/cloudfront/getting-started/S3/). When you serve content hosted in an Amazon S3 bucket using CloudFront, you can use any bucket name, and both HTTP and HTTPS are supported. 

**Automating setup with an AWS CloudFormation template**  
For more information about using an AWS CloudFormation template to configure a secure static website that creates a CloudFront distribution to serve your website, see [Getting started with a secure static website](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html) in the *Amazon CloudFront Developer Guide*.

**Topics**
+ [Step 1: Create a CloudFront distribution](#create-distribution)
+ [Step 2: Update the record sets for your domain and subdomain](#update-record-sets)
+ [(Optional) Step 3: Check the log files](#check-log-files)

## Step 1: Create a CloudFront distribution


First, you create a CloudFront distribution. This makes your website available from data centers around the world.

**To create a distribution with an Amazon S3 origin**

1. Open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. Choose **Create Distribution**.

1. On the **Create Distribution** page, in the **Origin Settings** section, for **Origin Domain Name**, enter the Amazon S3 website endpoint for your bucket—for example, **example.com.s3-website.us-west-1.amazonaws.com**.

   CloudFront fills in the **Origin ID** for you.

1. For **Default Cache Behavior Settings**, keep the values set to the defaults. 

   With the default settings for **Viewer Protocol Policy**, you can use HTTPS for your static website. For more information about these configuration options, see [Values that You Specify When You Create or Update a Web Distribution](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/WorkingWithDownloadDistributions.html#DownloadDistValuesYouSpecify) in the *Amazon CloudFront Developer Guide*.

1. For **Distribution Settings**, do the following:

   1. Leave **Price Class** set to **Use All Edge Locations (Best Performance)**.

   1. Set **Alternate Domain Names (CNAMEs)** to the root domain and `www` subdomain. In this tutorial, these are `example.com` and `www.example.com`. 
**Important**  
Before you perform this step, note the [requirements for using alternate domain names](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html#alternate-domain-names-requirements), in particular the need for a valid SSL/TLS certificate. 

   1. For **SSL Certificate**, choose **Custom SSL Certificate (example.com)**, and choose the custom certificate that covers the domain and subdomain names.

      For more information, see [SSL Certificate](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesSSLCertificate) in the *Amazon CloudFront Developer Guide*.

   1. In **Default Root Object**, enter the name of your index document, for example, `index.html`. 

      If the URL used to access the distribution doesn't contain a file name, the CloudFront distribution returns the index document. The **Default Root Object** should exactly match the name of the index document for your static website. For more information, see [Configuring an index document](IndexDocumentSupport.md).

   1. Set **Logging** to **On**.
**Important**  
When you create or update a distribution and enable CloudFront logging, CloudFront updates the bucket access control list (ACL) to give the `awslogsdelivery` account `FULL_CONTROL` permissions to write logs to your bucket. For more information, see [Permissions required to configure standard logging and to access your log files](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html#AccessLogsBucketAndFileOwnership) in the *Amazon CloudFront Developer Guide*. If the bucket that stores the logs uses the Bucket owner enforced setting for S3 Object Ownership to disable ACLs, CloudFront cannot write logs to the bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

   1. For **Bucket for Logs**, choose the logging bucket that you created.

      For more information about configuring a logging bucket, see [(Optional) Logging web traffic](LoggingWebsiteTraffic.md).

   1. If you want to store the logs that are generated by traffic to the CloudFront distribution in a folder, in **Log Prefix**, enter the folder name.

   1. Keep all other settings at their default values.

1. Choose **Create Distribution**.

1. To see the status of the distribution, find the distribution in the console and check the **Status** column. 

   A status of `InProgress` indicates that the distribution is not yet fully deployed.

   After your distribution is deployed, you can reference your content with the new CloudFront domain name.

1. Record the value of **Domain Name** shown in the CloudFront console, for example, `dj4p1rv6mvubz.cloudfront.net`. 

1. To verify that your CloudFront distribution is working, enter the domain name of the distribution in a web browser.

   If your website is visible, the CloudFront distribution works. If your website has a custom domain registered with Amazon Route 53, you will need the CloudFront domain name to update the record set in the next step.
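
The console steps above can also be expressed as a CloudFront `DistributionConfig`. The following is a simplified sketch of the payload shape only: the real `CreateDistribution` API requires additional fields (cache settings, quantities for every nested list, and so on), and the field names here follow the API's documented structure as assumptions to verify against the CloudFront API reference:

```python
def distribution_config(website_endpoint, aliases, acm_cert_arn, log_bucket):
    """Sketch of a CloudFront DistributionConfig for an S3 website-endpoint
    origin. S3 website endpoints speak plain HTTP, so the origin protocol
    policy must be http-only. Simplified: the real API requires more fields."""
    return {
        "CallerReference": "static-website-tutorial",  # any unique string
        "Aliases": {"Quantity": len(aliases), "Items": aliases},
        "DefaultRootObject": "index.html",  # must match your index document
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "s3-website-origin",
            "DomainName": website_endpoint,  # website endpoint, not the REST endpoint
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "http-only",
            },
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-website-origin",
            "ViewerProtocolPolicy": "redirect-to-https",  # serve visitors over HTTPS
        },
        "ViewerCertificate": {
            "ACMCertificateArn": acm_cert_arn,  # must cover both alternate names
            "SSLSupportMethod": "sni-only",
        },
        "Logging": {"Enabled": True, "IncludeCookies": False,
                    "Bucket": log_bucket, "Prefix": "cdn/"},
        "Enabled": True,
        "Comment": "Static website distribution",
    }

cfg = distribution_config(
    "example.com.s3-website-us-west-2.amazonaws.com",
    ["example.com", "www.example.com"],
    "arn:aws:acm:us-east-1:111122223333:certificate/example",  # hypothetical ARN
    "my-logs-bucket.s3.amazonaws.com")
```

Note the `http-only` origin policy: CloudFront can still serve visitors over HTTPS, but it must talk to the S3 website endpoint over HTTP because website endpoints don't support TLS.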

## Step 2: Update the record sets for your domain and subdomain


Now that you have successfully created a CloudFront distribution, update the alias record in Route 53 to point to the new CloudFront distribution.

**To update the alias record to point to a CloudFront distribution**

1. Open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

1. In the left navigation, choose **Hosted zones**.

1. On the **Hosted Zones** page, choose the hosted zone that you created for your subdomain, for example, `www.example.com`.

1. Under **Records**, select the *A* record that you created for your subdomain. 

1. Under **Record details**, choose **Edit record**.

1. Under **Route traffic to**, choose **Alias to CloudFront distribution**.

1. Under **Choose distribution**, choose the CloudFront distribution.

1. Choose **Save**.

1. To redirect the *A* record for the root domain to the CloudFront distribution, repeat this procedure for the root domain, for example, `example.com`.

   The update to the record sets takes effect within 2–48 hours. 

1. To see whether the new *A* records have taken effect, in a web browser, enter your subdomain URL, for example, `http://www.example.com`. 

   If the browser no longer redirects you to the root domain (for example, `http://example.com`), the new *A* records are in place. When the new *A* records have taken effect, traffic that they route to the CloudFront distribution is not redirected to the root domain. Any visitors who reference the site by using `http://example.com` or `http://www.example.com` are redirected to the nearest CloudFront edge location, where they benefit from faster download times.
**Tip**  
Browsers can cache redirect settings. If you think the new *A* record settings should have taken effect, but your browser still redirects `http://www.example.com` to `http://example.com`, try clearing your browser history and cache, closing and reopening your browser application, or using a different web browser. 
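
Updating the alias records programmatically mirrors the console steps: the alias target becomes the distribution's domain name, and CloudFront aliases use a single fixed hosted zone ID (`Z2FDTNDATAQYW2` at the time of writing; verify this value in the Route 53 documentation). A minimal payload sketch for `ChangeResourceRecordSets`:

```python
def cloudfront_alias_upsert(record_name, distribution_domain):
    """UPSERT an alias A record pointing `record_name` at a CloudFront
    distribution. Unlike S3 website endpoints, which use a per-Region
    hosted zone ID, CloudFront aliases use one fixed hosted zone ID."""
    return {"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": record_name,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",  # fixed CloudFront zone ID
                "DNSName": distribution_domain,
                "EvaluateTargetHealth": False,
            },
        },
    }]}

batch = cloudfront_alias_upsert("www.example.com",
                                "dj4p1rv6mvubz.cloudfront.net.")
```

You would submit one batch per record, repeating the call for the root domain (`example.com`) as in step 9 of the procedure.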

## (Optional) Step 3: Check the log files


The access logs tell you how many people are visiting the website. They also contain valuable business data that you can analyze with other services, such as [Amazon EMR](https://docs.aws.amazon.com/emr/latest/DeveloperGuide/). 

CloudFront logs are stored in the bucket and folder that you choose when you create a CloudFront distribution and enable logging. CloudFront writes logs to your log bucket within 24 hours from when the corresponding requests are made.

**To see the log files for your website**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the logging bucket for your website.

1. Choose the CloudFront logs folder.

1. Download the gzip-compressed files written by CloudFront, and decompress them before opening.

   If you created your website only as a learning exercise, you can delete the resources that you allocated so that you no longer accrue charges. To do so, see [Cleaning up your example resources](getting-started-cleanup.md). After you delete your AWS resources, your website is no longer available.
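
CloudFront standard logs are tab-separated text in which lines starting with `#` are headers; the `#Fields:` header names the columns. After downloading and decompressing a log file, a small parser along these lines (field names taken from the documented log format) turns each request line into a dict:

```python
def parse_cloudfront_log(text):
    """Parse CloudFront standard log text. '#Fields:' names the
    tab-separated columns; other '#' lines are comments."""
    fields, rows = [], []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line.startswith("#") or not line.strip():
            continue
        else:
            rows.append(dict(zip(fields, line.split("\t"))))
    return rows

# Abbreviated sample; real logs include many more columns.
sample = ("#Version: 1.0\n"
          "#Fields: date time cs-method cs-uri-stem sc-status\n"
          "2024-05-01\t12:00:00\tGET\t/index.html\t200\n")
hits = parse_cloudfront_log(sample)
```

For real files, `gzip.open(path, "rt")` yields the decompressed text directly, and you can then count requests per URI or status code from the resulting dicts.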

# Cleaning up your example resources
Cleaning up example resources

If you created your static website as a learning exercise, you should delete the AWS resources that you allocated so that you no longer accrue charges. After you delete your AWS resources, your website is no longer available.

**Topics**
+ [Step 1: Delete the Amazon CloudFront distribution](#getting-started-cleanup-cloudfront)
+ [Step 2: Delete the Route 53 hosted zone](#getting-started-cleanup-route53)
+ [Step 3: Disable logging and delete your S3 bucket](#getting-started-cleanup-s3)

## Step 1: Delete the Amazon CloudFront distribution


Before you delete an Amazon CloudFront distribution, you must disable it. A disabled distribution is no longer functional and does not accrue charges. You can enable a disabled distribution at any time. After you delete a disabled distribution, it is no longer available.

**To disable and delete a CloudFront distribution**

1. Open the CloudFront console at [https://console.aws.amazon.com/cloudfront/v4/home](https://console.aws.amazon.com/cloudfront/v4/home).

1. Select the distribution that you want to disable, and then choose **Disable**.

1. When prompted for confirmation, choose **Yes, Disable**.

1. Select the disabled distribution, and then choose **Delete**.

1. When prompted for confirmation, choose **Yes, Delete**.

## Step 2: Delete the Route 53 hosted zone


Before you delete the hosted zone, you must delete the record sets that you created. You don't need to delete the NS and SOA records; these are automatically deleted when you delete the hosted zone.

**To delete the record sets**

1. Open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

1. In the list of domain names, select your domain name, and then choose **Go to Record Sets**.

1. In the list of record sets, select the *A* records that you created. 

   The type of each record set is listed in the **Type** column. 

1. Choose **Delete Record Set**. 

1. When prompted for confirmation, choose **Confirm**. 

**To delete a Route 53 hosted zone**

1. Continuing from the previous procedure, choose **Back to Hosted Zones**.

1. Select your domain name, and then choose **Delete Hosted Zone**.

1. When prompted for confirmation, choose **Confirm**.

## Step 3: Disable logging and delete your S3 bucket


Before you delete your S3 bucket, make sure that logging is disabled for the bucket. Otherwise, AWS continues to write logs to your bucket as you delete it.

**To disable logging for a bucket**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Under **Buckets**, choose your bucket name, and then choose **Properties**.

1. From **Properties**, choose **Logging**.

1. Clear the **Enabled** check box.

1. Choose **Save**.

Now, you can delete your bucket. For more information, see [Deleting a general purpose bucket](delete-bucket.md).

# Deploying a static website to AWS Amplify Hosting from an S3 general purpose bucket
Deploying a static website to Amplify from Amazon S3

We recommend that you use [AWS Amplify Hosting](https://docs.aws.amazon.com//amplify/latest/userguide/welcome.html.html) to host static website content stored on S3. Amplify Hosting is a fully managed service that makes it easy to deploy your websites on a globally available content delivery network (CDN) powered by Amazon CloudFront, allowing secure static website hosting without extensive setup. With AWS Amplify Hosting, you can select the location of your objects within your general purpose bucket, deploy your content to a managed CDN, and generate a public HTTPS URL for your website to be accessible anywhere. Deploying a static website using Amplify Hosting provides you with the following benefits and features:
+ **Deployment to the AWS content delivery network (CDN) powered by Amazon CloudFront** - CloudFront is a web service that speeds up distribution of your static and dynamic web content to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance, increased reliability and availability. For more information, see [How CloudFront delivers content](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html) in the *Amazon CloudFront Developer Guide*.
+ **HTTPS support** - Provides secure communication and data transfer between your website and a user’s web browser.
+ **Custom domains** - Easily connect your website to a custom URL purchased from a domain registrar such as Amazon Route 53. 
+ **Custom SSL certificates** - When you set up your custom domain, you can use the default managed certificate that Amplify provisions for you or you can use your own custom certificate purchased from the third-party certificate authority of your choice.
+ **Built in metrics and CloudWatch monitoring** - Monitor traffic, errors, data transfer, and latency for your website.
+ **Password protection** - Restrict access to your website by setting up a username and password requirement in the Amplify console.
+ **Redirects and rewrites** - Create redirect and rewrite rules in the Amplify console to enable a web server to reroute navigation from one URL to another.

When you deploy your application from an Amazon S3 general purpose bucket to Amplify Hosting, AWS charges are based on Amplify's pricing model. For more information, see [AWS Amplify Pricing](https://aws.amazon.com/amplify/pricing/).

**Important**  
Amplify Hosting is not available in all of the AWS Regions where Amazon S3 is available. To deploy a static website to Amplify Hosting, the Amazon S3 general purpose bucket containing your website must be located in a Region where Amplify is available. For the list of Regions where Amplify is available, see [Amplify endpoints](https://docs.aws.amazon.com/general/latest/gr/amplify.html#amplify_region) in the *Amazon Web Services General Reference*.

You can start the deployment process from the Amazon S3 console, the Amplify console, the AWS CLI, or the AWS SDKs. You can only deploy to Amplify from a general purpose bucket located in your own account. Amplify doesn't support cross-account bucket access. 

Use the following instructions to deploy a static website from an Amazon S3 general purpose bucket to Amplify Hosting starting from the Amazon S3 console.

## Deploying a static website to Amplify from the S3 console


**To deploy a static website from the Amazon S3 console**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the general purpose bucket that contains the website you want to deploy to Amplify Hosting.

1. Choose the **Properties** tab.

1. Under **Static website hosting**, choose **Create Amplify app**. At this step, the deployment process will move to the Amplify console.

1. On the **Deploy with S3** page, do the following:

   1. For **App name**, enter the name of your app or website.

   1. For **Branch name**, enter the name of your app's backend.

   1. For **S3 location of objects to host**, either enter the directory path to your general purpose bucket or choose **Browse S3** to locate and select it.

1. Choose **Save and deploy**.

**Note**  
 If you update any of the objects for a static website in your general purpose bucket hosted on Amplify, you must redeploy the application to Amplify Hosting to cause the changes to take effect. Amplify Hosting doesn't automatically detect changes to your bucket. For more information, see [Updating a static website deployed to Amplify from an S3 bucket](https://docs.aws.amazon.com//amplify/latest/userguide/update-website-deployed-from-s3.html) in the *AWS Amplify Hosting User Guide*. 

To start directly from the Amplify console, see [Deploying a static website from S3 using the Amplify console](https://docs.aws.amazon.com//amplify/latest/userguide/deploy--from-amplify-console.html) in the *AWS Amplify Hosting User Guide*.

To get started using the AWS SDKs, see [Creating a bucket policy to deploy a static website from S3 using the AWS SDKs](https://docs.aws.amazon.com//amplify/latest/userguide/deploy-with-sdks.html) in the *AWS Amplify Hosting User Guide*. 