

# Logging requests with server access logging

Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. This information can also help you learn about your customer base and understand your Amazon S3 bill.

**Note**  
Server access logs don't record information about wrong-Region redirect errors for Regions that launched after March 20, 2019. Wrong-Region redirect errors occur when a request for an object or bucket is made outside the Region in which the bucket exists. 

## How do I enable log delivery?


To enable log delivery, perform the following basic steps. For details, see [Enabling Amazon S3 server access logging](enable-server-access-logging.md).

1. **Provide the name of the destination bucket** (also known as a *target bucket*). This bucket is where you want Amazon S3 to save the access logs as objects. Both the source and destination buckets must be in the same AWS Region and owned by the same account. The destination bucket must not have an S3 Object Lock default retention period configuration. The destination bucket must also not have Requester Pays enabled.

   You can have logs delivered to any bucket that you own that is in the same Region as the source bucket, including the source bucket itself. But for simpler log management, we recommend that you save access logs in a different bucket. 

   When your source bucket and destination bucket are the same bucket, additional logs are created for the logs that are written to the bucket, which creates an infinite loop of logs. We do not recommend doing this because it could result in a small increase in your storage billing. In addition, the extra logs about logs might make it harder to find the log that you are looking for. 

   If you choose to save access logs in the source bucket, we recommend that you specify a destination prefix (also known as a *target prefix*) for all log object keys. When you specify a prefix, all the log object names begin with a common string, which makes the log objects easier to identify. 

1. **(Optional) Assign a destination prefix to all Amazon S3 log object keys.** The destination prefix (also known as a *target prefix*) makes it simpler for you to locate the log objects. For example, if you specify the prefix value `logs/`, each log object that Amazon S3 creates begins with the `logs/` prefix in its key, for example:

   ```
   logs/2013-11-01-21-32-16-E568B2907131C0C0
   ```

   If you specify the prefix value `logs`, the log object appears as follows:

   ```
   logs2013-11-01-21-32-16-E568B2907131C0C0
   ```

   [Prefixes](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) are also useful to distinguish between source buckets when multiple buckets log to the same destination bucket.

   This prefix can also help when you delete the logs. For example, you can set a lifecycle configuration rule for Amazon S3 to delete objects with a specific prefix. For more information, see [Deleting Amazon S3 log files](deleting-log-files-lifecycle.md).

1. **(Optional) Set permissions** so that others can access the generated logs. By default, only the bucket owner has full access to the log objects. If your destination bucket uses the Bucket owner enforced setting for S3 Object Ownership to disable access control lists (ACLs), you can't grant permissions in destination grants (also known as *target grants*) that use ACLs. However, you can update your bucket policy for the destination bucket to grant access to others. For more information, see [Identity and Access Management for Amazon S3](security-iam.md) and [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general). 

1. **(Optional) Set a log object key format for the log files.** You have two options for the log object key format (also known as the *target object key format*): 
   + **Non-date-based partitioning** – This is the original log object key format. If you choose this format, the log file key format appears as follows: 

     ```
     [DestinationPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
     ```

     For example, if you specify `logs/` as the prefix, your log objects are named like this: 

     ```
     logs/2013-11-01-21-32-16-E568B2907131C0C0
     ```
   + **Date-based partitioning** – If you choose date-based partitioning, you can choose the event time or delivery time for the log file as the date source used in the log format. This format makes it easier to query the logs.

     If you choose date-based partitioning, the log file key format appears as follows: 

     ```
     [DestinationPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
     ```

     For example, if you specify `logs/` as the target prefix, your log objects are named like this:

     ```
     logs/123456789012/us-west-2/amzn-s3-demo-source-bucket/2023/03/01/2023-03-01-21-32-16-E568B2907131C0C0
     ```

      For delivery-time delivery, the time in the log file names corresponds to the time when the log files were delivered. 

      For event-time delivery, the year, month, and day correspond to the day on which the event occurred, and the hour, minute, and second values are set to `00` in the key. The logs delivered in these log files are for a specific day only. 

   

   If you're configuring your logs through the AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API, use `TargetObjectKeyFormat` to specify the log object key format. To specify non-date-based partitioning, use `SimplePrefix`. To specify date-based partitioning, use `PartitionedPrefix`. If you use `PartitionedPrefix`, use `PartitionDateSource` to specify either `EventTime` or `DeliveryTime`.

   For `SimplePrefix`, the log file key format appears as follows:

   ```
   [TargetPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
   ```

   For `PartitionedPrefix` with event time or delivery time, the log file key format appears as follows:

   ```
   [TargetPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
   ```
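If you're building the configuration in code, the two key-format choices map directly onto the `TargetObjectKeyFormat` element described above. The following Python sketch builds the request payload as a dictionary, as you might pass it to an SDK; the bucket name and prefix are placeholders, and you should check your SDK's reference for the exact field names it expects:

```python
def logging_status(target_bucket, prefix, partitioned=False, date_source="EventTime"):
    """Build a BucketLoggingStatus-style payload for the chosen key format.

    Sketch only: field names mirror the TargetObjectKeyFormat element
    (SimplePrefix, PartitionedPrefix, PartitionDateSource).
    """
    key_format = (
        {"PartitionedPrefix": {"PartitionDateSource": date_source}}
        if partitioned
        else {"SimplePrefix": {}}
    )
    return {
        "LoggingEnabled": {
            "TargetBucket": target_bucket,
            "TargetPrefix": prefix,
            "TargetObjectKeyFormat": key_format,
        }
    }

# Non-date-based partitioning (the original key format):
simple = logging_status("amzn-s3-demo-destination-bucket", "logs/")

# Date-based partitioning with event time as the date source:
partitioned = logging_status(
    "amzn-s3-demo-destination-bucket", "logs/",
    partitioned=True, date_source="EventTime",
)
```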

## Log object key format


Amazon S3 uses the following object key formats for the log objects that it uploads in the destination bucket:
+ **Non-date-based partitioning** – This is the original log object key format. If you choose this format, the log file key format appears as follows: 

  ```
  [DestinationPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
  ```
+ **Date-based partitioning** – If you choose date-based partitioning, you can choose the event time or delivery time for the log file as the date source used in the log format. This format makes it easier to query the logs.

  If you choose date-based partitioning, the log file key format appears as follows: 

  ```
  [DestinationPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
  ```

In the log object key, `YYYY`, `MM`, `DD`, `hh`, `mm`, and `ss` are the digits of the year, month, day, hour, minute, and seconds (respectively). These dates and times are in Coordinated Universal Time (UTC). 

A log file delivered at a specific time can contain records written at any point before that time. There is no way to know whether all log records for a certain time interval have been delivered or not. 

The `UniqueString` component of the key is there to prevent overwriting of files. It has no meaning, and log processing software should ignore it. 
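Because the date-based partitioned key layout is fixed, log-processing code can recover the source account, Region, bucket, and date by splitting the key on `/`. A minimal sketch in Python, assuming the `logs/` prefix used in the examples above:

```python
def parse_partitioned_key(key, prefix="logs/"):
    """Split a date-based partitioned log object key into its components.

    Sketch only: assumes the documented layout
    [prefix][account]/[region]/[bucket]/[YYYY]/[MM]/[DD]/[filename].
    """
    parts = key[len(prefix):].split("/")
    account_id, region, bucket, year, month, day, filename = parts
    return {
        "account_id": account_id,
        "region": region,
        "bucket": bucket,
        "date": f"{year}-{month}-{day}",
        "filename": filename,  # includes the UniqueString suffix; ignore its value
    }

record = parse_partitioned_key(
    "logs/123456789012/us-west-2/amzn-s3-demo-source-bucket/2023/03/01/"
    "2023-03-01-21-32-16-E568B2907131C0C0"
)
```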

## How are logs delivered?


Amazon S3 periodically collects access log records, consolidates the records in log files, and then uploads log files to your destination bucket as log objects. If you enable logging on multiple source buckets that identify the same destination bucket, the destination bucket will have access logs for all those source buckets. However, each log object reports access log records for a specific source bucket. 

Amazon S3 uses a special log delivery account to write server access logs. These writes are subject to the usual access control restrictions. We recommend that you update the bucket policy on the destination bucket to grant access to the logging service principal (`logging.s3.amazonaws.com`) for access log delivery. You can also grant access for access log delivery to the S3 log delivery group through your bucket access control list (ACL). However, granting access to the S3 log delivery group by using your bucket ACL is not recommended. 

When you enable server access logging and grant access for access log delivery through your destination bucket policy, you must update the policy to allow `s3:PutObject` access for the logging service principal. If you use the Amazon S3 console to enable server access logging, the console automatically updates the destination bucket policy to grant these permissions to the logging service principal. For more information about granting permissions for server access log delivery, see [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general). 

**Note**  
S3 does not support delivery of CloudTrail logs or server access logs to the requester or the bucket owner for VPC endpoint requests when the VPC endpoint policy denies them or for requests that fail before the VPC policy is evaluated.

**Bucket owner enforced setting for S3 Object Ownership**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, ACLs are disabled and no longer affect permissions. You must update the bucket policy on the destination bucket to grant access to the logging service principal. For more information about Object Ownership, see [Grant access to the S3 log delivery group for server access logging](object-ownership-migrating-acls-prerequisites.md#object-ownership-server-access-logs).

## Best-effort server log delivery


Server access log records are delivered on a best-effort basis. Most requests for a bucket that is properly configured for logging result in a delivered log record. Most log records are delivered within a few hours of the time that they are recorded, but they can be delivered more frequently. 

The completeness and timeliness of server logging is not guaranteed. The log record for a particular request might be delivered long after the request was actually processed, or *it might not be delivered at all*. It is possible that you might even see a duplication of a log record. The purpose of server logs is to give you an idea of the nature of traffic against your bucket. Although log records are rarely lost or duplicated, be aware that server logging is not meant to be a complete accounting of all requests.

Because of the best-effort nature of server logging, your usage reports might include one or more access requests that do not appear in a delivered server log. You can find these usage reports under **Cost & usage reports** in the AWS Billing and Cost Management console.

## Bucket logging status changes take effect over time


Changes to the logging status of a bucket take time to actually affect the delivery of log files. For example, if you enable logging for a bucket, some requests made in the following hour might be logged, and others might not. Suppose that you change the destination bucket for logging from bucket A to bucket B. For the next hour, some logs might continue to be delivered to bucket A, whereas others might be delivered to the new destination bucket B. In all cases, the new settings eventually take effect without any further action on your part. 

For more information about logging and log files, see the following sections:

**Topics**
+ [How do I enable log delivery?](#server-access-logging-overview)
+ [Log object key format](#server-log-keyname-format)
+ [How are logs delivered?](#how-logs-delivered)
+ [Best-effort server log delivery](#LogDeliveryBestEffort)
+ [Bucket logging status changes take effect over time](#BucketLoggingStatusChanges)
+ [Enabling Amazon S3 server access logging](enable-server-access-logging.md)
+ [Amazon S3 server access log format](LogFormat.md)
+ [Deleting Amazon S3 log files](deleting-log-files-lifecycle.md)
+ [Using Amazon S3 server access logs to identify requests](using-s3-access-logs-to-identify-requests.md)
+ [Troubleshoot server access logging](troubleshooting-server-access-logging.md)

# Enabling Amazon S3 server access logging

Server access logging provides detailed records for the requests that are made to an Amazon S3 bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. This information can also help you learn about your customer base and understand your Amazon S3 bill.

By default, Amazon S3 doesn't collect server access logs. When you enable logging, Amazon S3 delivers access logs for a source bucket to a destination bucket (also known as a *target bucket*) that you choose. The destination bucket must be in the same AWS Region and AWS account as the source bucket. 

An access log record contains details about the requests that are made to a bucket. This information can include the request type, the resources that are specified in the request, and the time and date that the request was processed. For more information about logging basics, see [Logging requests with server access logging](ServerLogs.md). 

**Important**  
There is no extra charge for enabling server access logging on an Amazon S3 bucket. However, any log files that the system delivers to you will accrue the usual charges for storage. (You can delete the log files at any time.) We do not assess data-transfer charges for log file delivery, but we do charge the normal data-transfer rate for accessing the log files.
Your destination bucket should not have server access logging enabled. You can have logs delivered to any bucket that you own that is in the same Region as the source bucket, including the source bucket itself. However, delivering logs to the source bucket will cause an infinite loop of logs and is not recommended. For simpler log management, we recommend that you save access logs in a different bucket. For more information, see [How do I enable log delivery?](ServerLogs.md#server-access-logging-overview)
S3 buckets that have S3 Object Lock enabled can't be used as destination buckets for server access logs. Your destination bucket must not have a default retention period configuration.
The destination bucket must not have Requester Pays enabled.

You can enable or disable server access logging by using the Amazon S3 console, Amazon S3 API, the AWS Command Line Interface (AWS CLI), or AWS SDKs. 

## Permissions for log delivery


Amazon S3 uses a special log delivery account to write server access logs. These writes are subject to the usual access control restrictions. For access log delivery, you must grant the logging service principal (`logging.s3.amazonaws.com`) access to your destination bucket.

To grant permissions to Amazon S3 for log delivery, you can use either a bucket policy or bucket access control lists (ACLs), depending on your destination bucket's S3 Object Ownership settings. However, we recommend that you use a bucket policy instead of ACLs. 

**Bucket owner enforced setting for S3 Object Ownership**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, ACLs are disabled and no longer affect permissions. In this case, you must update the bucket policy for the destination bucket to grant access to the logging service principal. You can't update your bucket ACL to grant access to the S3 log delivery group. You also can't include destination grants (also known as *target grants*) in your [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. 

For information about migrating existing bucket ACLs for access log delivery to a bucket policy, see [Grant access to the S3 log delivery group for server access logging](object-ownership-migrating-acls-prerequisites.md#object-ownership-server-access-logs). For more information about Object Ownership, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md). When you create new buckets, ACLs are disabled by default.

**Granting access by using a bucket policy**  
To grant access by using the bucket policy on the destination bucket, update the bucket policy to grant the `s3:PutObject` permission to the logging service principal. If you use the Amazon S3 console to enable server access logging, the console automatically updates the bucket policy on the destination bucket to grant this permission to the logging service principal. If you enable server access logging programmatically, you must manually update the bucket policy for the destination bucket to grant access to the logging service principal. 

For an example bucket policy that grants access to the logging service principal, see [Grant permissions to the logging service principal by using a bucket policy](#grant-log-delivery-permissions-bucket-policy).

**Granting access by using bucket ACLs**  
You can alternately use bucket ACLs to grant access for access log delivery. You add a grant entry to the bucket ACL that grants `WRITE` and `READ_ACP` permissions to the S3 log delivery group. However, granting access to the S3 log delivery group by using bucket ACLs is not recommended. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md). For information about migrating existing bucket ACLs for access log delivery to a bucket policy, see [Grant access to the S3 log delivery group for server access logging](object-ownership-migrating-acls-prerequisites.md#object-ownership-server-access-logs). For an example ACL that grants access to the logging service principal, see [Grant permissions to the log delivery group by using a bucket ACL](#grant-log-delivery-permissions-acl).

### Grant permissions to the logging service principal by using a bucket policy


This example bucket policy grants the `s3:PutObject` permission to the logging service principal (`logging.s3.amazonaws.com`). To use this bucket policy, replace the `user input placeholders` with your own information. In the following policy, `amzn-s3-demo-destination-bucket` is the destination bucket where server access logs will be delivered, and `amzn-s3-demo-source-bucket` is the source bucket. `EXAMPLE-LOGGING-PREFIX` is the optional destination prefix (also known as a *target prefix*) that you want to use for your log objects. `SOURCE-ACCOUNT-ID` is the AWS account that owns the source bucket. 

**Note**  
If there are `Deny` statements in your bucket policy, make sure that they don't prevent Amazon S3 from delivering access logs.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ServerAccessLogsPolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "logging.s3.amazonaws.com"
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/EXAMPLE-LOGGING-PREFIX*",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"
                },
                "StringEquals": {
                    "aws:SourceAccount": "SOURCE-ACCOUNT-ID"
                }
            }
        }
    ]
}
```

------
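When you apply this policy programmatically, you substitute the placeholders and must produce valid JSON. The following Python sketch renders the same policy from concrete values; the bucket names and account ID are example placeholders:

```python
import json

def access_log_policy(destination_bucket, source_bucket, source_account, prefix=""):
    """Render the server access logging bucket policy with concrete values."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "S3ServerAccessLogsPolicy",
            "Effect": "Allow",
            "Principal": {"Service": "logging.s3.amazonaws.com"},
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{destination_bucket}/{prefix}*",
            "Condition": {
                "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{source_bucket}"},
                "StringEquals": {"aws:SourceAccount": source_account},
            },
        }],
    })

policy = access_log_policy(
    "amzn-s3-demo-destination-bucket",
    "amzn-s3-demo-source-bucket",
    "123456789012",        # example account ID
    prefix="logs/",
)
```

You would then pass the rendered string to a `PutBucketPolicy` call against the destination bucket, as the AWS SDK example later in this topic does.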

### Grant permissions to the log delivery group by using a bucket ACL


**Note**  
As a security best practice, Amazon S3 disables access control lists (ACLs) by default in all new buckets. For more information about ACL permissions in the Amazon S3 console, see [Configuring ACLs](managing-acls.md). 

Although we do not recommend this approach, you can grant permissions to the log delivery group by using a bucket ACL. However, if the destination bucket uses the Bucket owner enforced setting for Object Ownership, you can't set bucket or object ACLs. You also can't include destination grants (also known as *target grants*) in your [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. Instead, you must use a bucket policy to grant access to the logging service principal (`logging.s3.amazonaws.com`). For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

In the bucket ACL, the log delivery group is represented by the following URL:

```
http://acs.amazonaws.com/groups/s3/LogDelivery
```

To grant `WRITE` and `READ_ACP` (ACL read) permissions, add the following grants to the destination bucket ACL:

```
<Grant>
    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI>
    </Grantee>
    <Permission>WRITE</Permission>
</Grant>
<Grant>
    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI>
    </Grantee>
    <Permission>READ_ACP</Permission>
</Grant>
```

For examples of adding ACL grants programmatically, see [Configuring ACLs](managing-acls.md).

**Important**  
When you enable Amazon S3 server access logging by using AWS CloudFormation on a bucket and you're using ACLs to grant access to the S3 log delivery group, you must also add `"AccessControl": "LogDeliveryWrite"` to your CloudFormation template. Doing so is important because you can grant those permissions only by creating an ACL for the bucket, but you can't create custom ACLs for buckets in CloudFormation. You can use only canned ACLs with CloudFormation.
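For example, the relevant pieces of a CloudFormation template might look like the following sketch. The resource and bucket names are placeholders; `LoggingConfiguration`, `DestinationBucketName`, and `LogFilePrefix` are properties of the `AWS::S3::Bucket` resource type, and `LogDeliveryWrite` is the canned ACL mentioned above.

```
"AccessLogDestinationBucket": {
    "Type": "AWS::S3::Bucket",
    "Properties": {
        "BucketName": "amzn-s3-demo-destination-bucket",
        "AccessControl": "LogDeliveryWrite"
    }
},
"AccessLogSourceBucket": {
    "Type": "AWS::S3::Bucket",
    "Properties": {
        "LoggingConfiguration": {
            "DestinationBucketName": "amzn-s3-demo-destination-bucket",
            "LogFilePrefix": "logs/"
        }
    }
}
```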

## To enable server access logging


To enable server access logging, use the following procedures for the Amazon S3 console, Amazon S3 REST API, AWS SDKs, or AWS CLI.

### Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable server access logging for.

1. Choose **Properties**.

1. In the **Server access logging** section, choose **Edit**.

1. Under **Server access logging**, choose **Enable**. 

1. Under **Destination bucket**, specify a bucket and an optional prefix. If you specify a prefix, we recommend including a forward slash (`/`) after the prefix to make it easier to find your logs. 
**Note**  
Specifying a prefix with a slash (`/`) makes it simpler for you to locate the log objects. For example, if you specify the prefix value `logs/`, each log object that Amazon S3 creates begins with the `logs/` prefix in its key, as follows:  

   ```
   logs/2013-11-01-21-32-16-E568B2907131C0C0
   ```
If you specify the prefix value `logs`, the log object appears as follows:  

   ```
   logs2013-11-01-21-32-16-E568B2907131C0C0
   ```

1. Under **Log object key format**, do one of the following:
   + To choose non-date-based partitioning, choose **[DestinationPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]**.
   + To choose date-based partitioning, choose **[DestinationPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]**, then choose **S3 event time** or **Log file delivery time**.

1. Choose **Save changes**.

   When you enable server access logging on a bucket, the console both enables logging on the source bucket and updates the bucket policy for the destination bucket to grant the `s3:PutObject` permission to the logging service principal (`logging.s3.amazonaws.com`). For more information about this bucket policy, see [Grant permissions to the logging service principal by using a bucket policy](#grant-log-delivery-permissions-bucket-policy).

   You can view the logs in the destination bucket. After you enable server access logging, it might take a few hours before the logs are delivered to the target bucket. For more information about how and when logs are delivered, see [How are logs delivered?](ServerLogs.md#how-logs-delivered)

For more information, see [Viewing the properties for an S3 general purpose bucket](view-bucket-properties.md).

### Using the REST API


To enable logging, you submit a [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) request to add the logging configuration on the source bucket. The request specifies the destination bucket (also known as a *target bucket*) and, optionally, the prefix to be used with all log object keys. 

The following example identifies `amzn-s3-demo-destination-bucket` as the destination bucket and *`logs/`* as the prefix. 

```
<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
  <LoggingEnabled>
    <TargetBucket>amzn-s3-demo-destination-bucket</TargetBucket>
    <TargetPrefix>logs/</TargetPrefix>
  </LoggingEnabled>
</BucketLoggingStatus>
```

The following example identifies `amzn-s3-demo-destination-bucket` as the destination bucket, *`logs/`* as the prefix, and `EventTime` as the log object key format. 

```
<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
  <LoggingEnabled>
    <TargetBucket>amzn-s3-demo-destination-bucket</TargetBucket>
    <TargetPrefix>logs/</TargetPrefix>
    <TargetObjectKeyFormat>
      <PartitionedPrefix>
        <PartitionDateSource>EventTime</PartitionDateSource>
      </PartitionedPrefix>
    </TargetObjectKeyFormat>
  </LoggingEnabled>
</BucketLoggingStatus>
```

The log objects are written and owned by the S3 log delivery account, and the bucket owner is granted full permissions on the log objects. You can optionally use destination grants (also known as *target grants*) to grant permissions to other users so that they can access the logs. For more information, see [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html). 

**Note**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, you can't use destination grants to grant permissions to other users. To grant permissions to others, you can update the bucket policy on the destination bucket. For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general). 

To retrieve the logging configuration on a bucket, use the [GetBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlogging.html) API operation. 

To delete the logging configuration, you send a `PutBucketLogging` request with an empty `BucketLoggingStatus`: 

```
<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
</BucketLoggingStatus>
```

To enable logging on a bucket, you can use either the Amazon S3 API or the AWS SDK wrapper libraries.
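If you construct the REST request bodies in code, the `BucketLoggingStatus` XML shown above can be generated with a standard XML library. A sketch in Python's standard library follows; the element names and namespace come from the examples above, and this only builds the body, it doesn't sign or send the request:

```python
import xml.etree.ElementTree as ET

NS = "http://doc.s3.amazonaws.com/2006-03-01"

def logging_request_body(target_bucket=None, target_prefix=""):
    """Build a BucketLoggingStatus body; pass no arguments to disable logging."""
    ET.register_namespace("", NS)  # serialize with a default (unprefixed) namespace
    root = ET.Element(f"{{{NS}}}BucketLoggingStatus")
    if target_bucket is not None:
        enabled = ET.SubElement(root, f"{{{NS}}}LoggingEnabled")
        ET.SubElement(enabled, f"{{{NS}}}TargetBucket").text = target_bucket
        ET.SubElement(enabled, f"{{{NS}}}TargetPrefix").text = target_prefix
    return ET.tostring(root, encoding="unicode")

# Enable logging to amzn-s3-demo-destination-bucket with the logs/ prefix:
enable_body = logging_request_body("amzn-s3-demo-destination-bucket", "logs/")

# An empty BucketLoggingStatus deletes the logging configuration:
disable_body = logging_request_body()
```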

### Using the AWS SDKs


The following examples enable logging on a bucket. You must create two buckets, a source bucket and a destination (target) bucket. The examples first update the ACL on the destination bucket to grant the log delivery group the permissions that it needs to write logs there, and then they enable logging on the source bucket. 

These examples won't work on destination buckets that use the Bucket owner enforced setting for Object Ownership.

If the destination (target) bucket uses the Bucket owner enforced setting for Object Ownership, you can't set bucket or object ACLs. You also can't include destination (target) grants in your [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. You must use a bucket policy to grant access to the logging service principal (`logging.s3.amazonaws.com`). For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

------
#### [ .NET ]

**SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/dotnetv3/S3#code-examples). 

```
    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;
    using Microsoft.Extensions.Configuration;

    /// <summary>
    /// This example shows how to enable logging on an Amazon Simple Storage
    /// Service (Amazon S3) bucket. You need to have two Amazon S3 buckets for
    /// this example. The first is the bucket for which you wish to enable
    /// logging, and the second is the location where you want to store the
    /// logs.
    /// </summary>
    public class ServerAccessLogging
    {
        private static IConfiguration _configuration = null!;

        public static async Task Main()
        {
            LoadConfig();

            string bucketName = _configuration["BucketName"];
            string logBucketName = _configuration["LogBucketName"];
            string logObjectKeyPrefix = _configuration["LogObjectKeyPrefix"];
            string accountId = _configuration["AccountId"];

            // If the AWS Region defined for your default user is different
            // from the Region where your Amazon S3 bucket is located,
            // pass the Region name to the Amazon S3 client object's constructor.
            // For example: RegionEndpoint.USWest2 or RegionEndpoint.USEast2.
            IAmazonS3 client = new AmazonS3Client();

            try
            {
                // Update bucket policy for target bucket to allow delivery of logs to it.
                await SetBucketPolicyToAllowLogDelivery(
                    client,
                    bucketName,
                    logBucketName,
                    logObjectKeyPrefix,
                    accountId);

                // Enable logging on the source bucket.
                await EnableLoggingAsync(
                    client,
                    bucketName,
                    logBucketName,
                    logObjectKeyPrefix);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine($"Error: {e.Message}");
            }
        }

        /// <summary>
        /// This method grants appropriate permissions for logging to the
        /// Amazon S3 bucket where the logs will be stored.
        /// </summary>
        /// <param name="client">The initialized Amazon S3 client which will be used
        /// to apply the bucket policy.</param>
        /// <param name="sourceBucketName">The name of the source bucket.</param>
        /// <param name="logBucketName">The name of the bucket where logging
        /// information will be stored.</param>
        /// <param name="logPrefix">The logging prefix where the logs should be delivered.</param>
        /// <param name="accountId">The ID of the AWS account that owns the source bucket.</param>
        /// <returns>Async task.</returns>
        public static async Task SetBucketPolicyToAllowLogDelivery(
            IAmazonS3 client,
            string sourceBucketName,
            string logBucketName,
            string logPrefix,
            string accountId)
        {
            var resourceArn = @"""arn:aws:s3:::" + logBucketName + "/" + logPrefix + @"*""";

            var newPolicy = @"{
                                ""Version"": ""2012-10-17"",
                                ""Statement"":[{
                                ""Sid"": ""S3ServerAccessLogsPolicy"",
                                ""Effect"": ""Allow"",
                                ""Principal"": { ""Service"": ""logging.s3.amazonaws.com"" },
                                ""Action"": [""s3:PutObject""],
                                ""Resource"": [" + resourceArn + @"],
                                ""Condition"": {
                                ""ArnLike"": { ""aws:SourceArn"": ""arn:aws:s3:::" + sourceBucketName + @""" },
                                ""StringEquals"": { ""aws:SourceAccount"": """ + accountId + @""" }
                                        }
                                    }]
                                }";
            Console.WriteLine($"The policy to apply to bucket {logBucketName} to enable logging:");
            Console.WriteLine(newPolicy);

            PutBucketPolicyRequest putRequest = new PutBucketPolicyRequest
            {
                BucketName = logBucketName,
                Policy = newPolicy,
            };
            await client.PutBucketPolicyAsync(putRequest);
            Console.WriteLine("Policy applied.");
        }

        /// <summary>
        /// This method enables logging for an Amazon S3 bucket. Logs will be stored
        /// in the bucket that you select for logging. The selected prefix
        /// will be prepended to each log object key.
        /// </summary>
        /// <param name="client">The initialized Amazon S3 client which will be used
        /// to configure and apply logging to the selected Amazon S3 bucket.</param>
        /// <param name="bucketName">The name of the Amazon S3 bucket for which you
        /// wish to enable logging.</param>
        /// <param name="logBucketName">The name of the Amazon S3 bucket where logging
        /// information will be stored.</param>
        /// <param name="logObjectKeyPrefix">The prefix to prepend to each
        /// object key.</param>
        /// <returns>Async task.</returns>
        public static async Task EnableLoggingAsync(
            IAmazonS3 client,
            string bucketName,
            string logBucketName,
            string logObjectKeyPrefix)
        {
            Console.WriteLine($"Enabling logging for bucket {bucketName}.");
            var loggingConfig = new S3BucketLoggingConfig
            {
                TargetBucketName = logBucketName,
                TargetPrefix = logObjectKeyPrefix,
            };

            var putBucketLoggingRequest = new PutBucketLoggingRequest
            {
                BucketName = bucketName,
                LoggingConfig = loggingConfig,
            };
            await client.PutBucketLoggingAsync(putBucketLoggingRequest);
            Console.WriteLine("Logging enabled.");
        }

        /// <summary>
        /// Loads configuration from settings files.
        /// </summary>
        public static void LoadConfig()
        {
            _configuration = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("settings.json") // Load settings from .json file.
                .AddJsonFile("settings.local.json", true) // Optionally, load local settings.
                .Build();
        }
    }
```
+  For API details, see [PutBucketLogging](https://docs.aws.amazon.com/goto/DotNetSDKV3/s3-2006-03-01/PutBucketLogging) in *AWS SDK for .NET API Reference*. 

------
#### [ Java ]

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketLoggingStatus;
import software.amazon.awssdk.services.s3.model.LoggingEnabled;
import software.amazon.awssdk.services.s3.model.PartitionedPrefix;
import software.amazon.awssdk.services.s3.model.PutBucketLoggingRequest;
import software.amazon.awssdk.services.s3.model.TargetObjectKeyFormat;

// Class to set a bucket policy on a target S3 bucket and enable server access logging on a source S3 bucket.
public class ServerAccessLogging {
    private static S3Client s3Client;

    public static void main(String[] args) {
        String sourceBucketName = "SOURCE-BUCKET";
        String targetBucketName = "TARGET-BUCKET";
        String sourceAccountId = "123456789012";
        String targetPrefix = "logs/";

        // Create S3 Client.
        s3Client = S3Client.builder()
                .region(Region.US_EAST_2)
                .build();

        // Set a bucket policy on the target S3 bucket to enable server access logging by granting the
        // logging.s3.amazonaws.com principal permission to use the PutObject operation.
        ServerAccessLogging serverAccessLogging = new ServerAccessLogging();
        serverAccessLogging.setTargetBucketPolicy(sourceAccountId, sourceBucketName, targetBucketName);

        // Enable server access logging on the source S3 bucket.
        serverAccessLogging.enableServerAccessLogging(sourceBucketName, targetBucketName,
                targetPrefix);

    }

    // Function to set a bucket policy on the target S3 bucket to enable server access logging by granting the
    // logging.s3.amazonaws.com principal permission to use the PutObject operation.
    public void setTargetBucketPolicy(String sourceAccountId, String sourceBucketName, String targetBucketName) {
        String policy = "{\n" +
                "    \"Version\": \"2012-10-17\",\n" +
                "    \"Statement\": [\n" +
                "        {\n" +
                "            \"Sid\": \"S3ServerAccessLogsPolicy\",\n" +
                "            \"Effect\": \"Allow\",\n" +
                "            \"Principal\": {\"Service\": \"logging.s3.amazonaws.com\"},\n" +
                "            \"Action\": [\n" +
                "                \"s3:PutObject\"\n" +
                "            ],\n" +
                "            \"Resource\": \"arn:aws:s3:::" + targetBucketName + "/*\",\n" +
                "            \"Condition\": {\n" +
                "                \"ArnLike\": {\n" +
                "                    \"aws:SourceArn\": \"arn:aws:s3:::" + sourceBucketName + "\"\n" +
                "                },\n" +
                "                \"StringEquals\": {\n" +
                "                    \"aws:SourceAccount\": \"" + sourceAccountId + "\"\n" +
                "                }\n" +
                "            }\n" +
                "        }\n" +
                "    ]\n" +
                "}";
        s3Client.putBucketPolicy(b -> b.bucket(targetBucketName).policy(policy));
    }

    // Function to enable server access logging on the source S3 bucket.
    public void enableServerAccessLogging(String sourceBucketName, String targetBucketName,
            String targetPrefix) {
        TargetObjectKeyFormat targetObjectKeyFormat = TargetObjectKeyFormat.builder()
                .partitionedPrefix(PartitionedPrefix.builder().partitionDateSource("EventTime").build())
                .build();
        LoggingEnabled loggingEnabled = LoggingEnabled.builder()
                .targetBucket(targetBucketName)
                .targetPrefix(targetPrefix)
                .targetObjectKeyFormat(targetObjectKeyFormat)
                .build();
        BucketLoggingStatus bucketLoggingStatus = BucketLoggingStatus.builder()
                .loggingEnabled(loggingEnabled)
                .build();
        s3Client.putBucketLogging(PutBucketLoggingRequest.builder()
                .bucket(sourceBucketName)
                .bucketLoggingStatus(bucketLoggingStatus)
                .build());
    }

}
```

------

### Using the AWS CLI


We recommend that you create a dedicated logging bucket in each AWS Region where you have S3 buckets, and then have your Amazon S3 access logs delivered to that bucket. For more information and examples, see [put-bucket-logging](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-logging.html) in the *AWS CLI Reference*.

If the destination (target) bucket uses the Bucket owner enforced setting for Object Ownership, you can't set bucket or object ACLs. You also can't include destination (target) grants in your [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. You must use a bucket policy to grant access to the logging service principal (`logging.s3.amazonaws.com`). For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

**Example — Enable access logs with five buckets across two Regions**  
In this example, you have the following five buckets:   
+ `amzn-s3-demo-source-bucket-us-east-1`
+ `amzn-s3-demo-source-bucket1-us-east-1`
+ `amzn-s3-demo-source-bucket2-us-east-1`
+ `amzn-s3-demo-bucket1-us-west-2`
+ `amzn-s3-demo-bucket2-us-west-2`
**Note**  
The final step of the following procedure provides example bash scripts that you can use to create your logging buckets and enable server access logging on these buckets. To use those scripts, you must create the `policy.json` and `logging.json` files, as described in the following procedure.

1. Create two logging destination buckets in the US West (Oregon) and US East (N. Virginia) Regions and give them the following names:
   + `amzn-s3-demo-destination-bucket-logs-us-east-1`
   + `amzn-s3-demo-destination-bucket1-logs-us-west-2`

1. Later in these steps, you will enable server access logging as follows:
   + `amzn-s3-demo-source-bucket-us-east-1` logs to the S3 bucket `amzn-s3-demo-destination-bucket-logs-us-east-1` with the prefix `amzn-s3-demo-source-bucket-us-east-1`
   + `amzn-s3-demo-source-bucket1-us-east-1` logs to the S3 bucket `amzn-s3-demo-destination-bucket-logs-us-east-1` with the prefix `amzn-s3-demo-source-bucket1-us-east-1`
   + `amzn-s3-demo-source-bucket2-us-east-1` logs to the S3 bucket `amzn-s3-demo-destination-bucket-logs-us-east-1` with the prefix `amzn-s3-demo-source-bucket2-us-east-1`
   + `amzn-s3-demo-bucket1-us-west-2` logs to the S3 bucket `amzn-s3-demo-destination-bucket1-logs-us-west-2` with the prefix `amzn-s3-demo-bucket1-us-west-2`
   + `amzn-s3-demo-bucket2-us-west-2` logs to the S3 bucket `amzn-s3-demo-destination-bucket1-logs-us-west-2` with the prefix `amzn-s3-demo-bucket2-us-west-2`

1. For each destination logging bucket, grant permissions for server access log delivery by using a bucket ACL *or* a bucket policy:
   + **Update the bucket policy** (Recommended) – To grant permissions to the logging service principal, use the following `put-bucket-policy` command. Replace `amzn-s3-demo-destination-bucket-logs` with the name of your destination bucket.

     ```
     aws s3api put-bucket-policy --bucket amzn-s3-demo-destination-bucket-logs --policy file://policy.json
     ```

      `policy.json` is a JSON document in the current folder that contains the following bucket policy. To use this bucket policy, replace the `user input placeholders` with your own information. In the following policy, *`amzn-s3-demo-destination-bucket-logs`* is the destination bucket where server access logs will be delivered, and *`amzn-s3-demo-source-bucket`* is the source bucket. *`SOURCE-ACCOUNT-ID`* is the ID of the AWS account that owns the source bucket.

------
#### [ JSON ]


     ```
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Sid": "S3ServerAccessLogsPolicy",
                 "Effect": "Allow",
                 "Principal": {
                     "Service": "logging.s3.amazonaws.com"
                 },
                 "Action": [
                     "s3:PutObject"
                 ],
                 "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket-logs/*",
                 "Condition": {
                     "ArnLike": {
                         "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"
                     },
                     "StringEquals": {
                         "aws:SourceAccount": "SOURCE-ACCOUNT-ID"
                     }
                 }
             }
         ]
     }
     ```

------
   + **Update the bucket ACL** – To grant permissions to the S3 log delivery group, use the following `put-bucket-acl` command. Replace *`amzn-s3-demo-destination-bucket-logs`* with the name of your destination (target) bucket.

     ```
     aws s3api put-bucket-acl --bucket amzn-s3-demo-destination-bucket-logs  --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery 
     ```

1. Then, create a `logging.json` file that contains your logging configuration (based on one of the three examples that follow). After you create the `logging.json` file, you can apply the logging configuration by using the following `put-bucket-logging` command. Replace *`amzn-s3-demo-source-bucket`* with the name of the source bucket that you want to enable logging on.

   ```
   aws s3api put-bucket-logging --bucket amzn-s3-demo-source-bucket --bucket-logging-status file://logging.json 
   ```
**Note**  
Instead of using this `put-bucket-logging` command to apply the logging configuration to each bucket individually, you can use one of the bash scripts provided in the next step. To use those scripts, you must create the `policy.json` and `logging.json` files, as described in this procedure.

   The `logging.json` file is a JSON document in the current folder that contains your logging configuration. If a destination bucket uses the Bucket owner enforced setting for Object Ownership, your logging configuration can't contain destination (target) grants. For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).  
**Example – `logging.json` without destination (target) grants**  

   The following example `logging.json` file doesn't contain destination (target) grants. Therefore, you can apply this configuration to a destination (target) bucket that uses the Bucket owner enforced setting for Object Ownership.

   ```
     {
         "LoggingEnabled": {
             "TargetBucket": "amzn-s3-demo-destination-bucket-logs",
             "TargetPrefix": "amzn-s3-demo-destination-bucket/"
          }
      }
   ```  
**Example – `logging.json` with destination (target) grants**  

   The following example `logging.json` file contains destination (target) grants.

   If the destination bucket uses the Bucket owner enforced setting for Object Ownership, you can't include destination (target) grants in your [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

   ```
     {
         "LoggingEnabled": {
             "TargetBucket": "amzn-s3-demo-destination-bucket-logs",
             "TargetPrefix": "amzn-s3-demo-destination-bucket/",
             "TargetGrants": [
                  {
                     "Grantee": {
                         "Type": "CanonicalUser",
                         "ID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
                      },
                     "Permission": "FULL_CONTROL"
                  }
              ]
          }
      }
   ```

**Grantee values**  
You can specify the person (grantee) to whom you're assigning access rights (by using request elements) in the following ways:
   + By the person's ID:

     ```
     {
       "Grantee": {
         "Type": "CanonicalUser",
         "ID": "ID"
       }
     }
     ```
   + By URI:

     ```
     {
       "Grantee": {
         "Type": "Group",
         "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
       }
     }
     ```  
**Example – `logging.json` with the log object key format set to S3 event time**  

   The following `logging.json` file changes the log object key format to S3 event time. For more information about setting the log object key format, see [How do I enable log delivery?](ServerLogs.md#server-access-logging-overview)

   ```
     { 
       "LoggingEnabled": {
           "TargetBucket": "amzn-s3-demo-destination-bucket-logs",
           "TargetPrefix": "amzn-s3-demo-destination-bucket/",
           "TargetObjectKeyFormat": { 
               "PartitionedPrefix": { 
                   "PartitionDateSource": "EventTime" 
               }
            }
       }
   }
   ```
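
   With a partitioned prefix based on S3 event time, delivered log object keys include a date-based path between the target prefix and the log file name. As a rough sketch (the exact key is generated by Amazon S3; this pattern reflects the documented date-based partitioning format, with all bracketed values illustrative):

   ```
   [TargetPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY-MM-DD-hh-mm-ss]-[UniqueString]
   ```

   Without a partitioned prefix, log object keys instead follow the flat pattern `[TargetPrefix][YYYY-MM-DD-hh-mm-ss]-[UniqueString]`.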

1. Use one of the following bash scripts to add access logging for all the buckets in your account. Replace *`amzn-s3-demo-destination-bucket-logs`* with the name of your destination (target) bucket, and replace *`us-west-2`* with the Region that your buckets are located in.
**Note**  
These scripts work only if all of your buckets are in the same Region. If you have buckets in multiple Regions, you must adjust the scripts.   
**Example – Grant access with bucket policies and add logging for the buckets in your account**  

   ```
     loggingBucket='amzn-s3-demo-destination-bucket-logs'
     region='us-west-2'
     
     
     # Create the logging bucket.
     aws s3 mb s3://$loggingBucket --region $region
     
     aws s3api put-bucket-policy --bucket $loggingBucket --policy file://policy.json
     
     # List the buckets in this account.
     buckets="$(aws s3 ls | awk '{print $3}')"
     
     # Put a bucket logging configuration on each bucket.
     for bucket in $buckets
         do 
           # This if statement excludes the logging bucket.
           if [ "$bucket" == "$loggingBucket" ] ; then
               continue;
           fi
           printf '{
             "LoggingEnabled": {
               "TargetBucket": "%s",
               "TargetPrefix": "%s/"
           }
         }' "$loggingBucket" "$bucket"  > logging.json
         aws s3api put-bucket-logging --bucket $bucket --bucket-logging-status file://logging.json
         echo "$bucket done"
     done
     
     rm logging.json
     
     echo "Complete"
   ```  
**Example – Grant access with bucket ACLs and add logging for the buckets in your account**  

   ```
     loggingBucket='amzn-s3-demo-destination-bucket-logs'
     region='us-west-2'
     
     
     # Create the logging bucket.
     aws s3 mb s3://$loggingBucket --region $region
     
     aws s3api put-bucket-acl --bucket $loggingBucket --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery
     
     # List the buckets in this account.
     buckets="$(aws s3 ls | awk '{print $3}')"
     
     # Put a bucket logging configuration on each bucket.
     for bucket in $buckets
         do 
           # This if statement excludes the logging bucket.
           if [ "$bucket" == "$loggingBucket" ] ; then
               continue;
           fi
           printf '{
             "LoggingEnabled": {
               "TargetBucket": "%s",
               "TargetPrefix": "%s/"
           }
         }' "$loggingBucket" "$bucket"  > logging.json
         aws s3api put-bucket-logging --bucket $bucket --bucket-logging-status file://logging.json
         echo "$bucket done"
     done
     
     rm logging.json
     
     echo "Complete"
   ```

## Verifying your server access logs setup


After you enable server access logging, complete the following steps: 
+ Access the destination bucket and verify that the log files are being delivered. After the access logs are set up, Amazon S3 immediately starts capturing requests and logging them. However, it might take a few hours before the logs are delivered to the destination bucket. For more information, see [Bucket logging status changes take effect over time](ServerLogs.md#BucketLoggingStatusChanges) and [Best-effort server log delivery](ServerLogs.md#LogDeliveryBestEffort).

  You can also automatically verify log delivery by using Amazon S3 request metrics and setting up Amazon CloudWatch alarms for these metrics. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).
+ Verify that you are able to open and read the contents of the log files.

For server access logging troubleshooting information, see [Troubleshoot server access logging](troubleshooting-server-access-logging.md).

# Amazon S3 server access log format
Log format

Server access logging provides detailed records for the requests that are made to an Amazon S3 bucket. You can use server access logs for the following purposes: 
+ Performing security and access audits
+ Learning about your customer base
+ Understanding your Amazon S3 bill

This section describes the format and other details about Amazon S3 server access log files.

Server access log files consist of a sequence of newline-delimited log records. Each log record represents one request and consists of space-delimited fields.

The following is an example log consisting of five log records.

**Note**  
Any field can be set to `-` to indicate that the data was unknown or unavailable, or that the field was not applicable to this request. 

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 3E57427F3EXAMPLE REST.GET.VERSIONING - "GET /amzn-s3-demo-bucket1?versioning HTTP/1.1" 200 - 113 - 7 - "-" "S3Console/0.4" - s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 arn:aws:s3:us-west-1:123456789012:accesspoint/example-AP Yes us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 891CE47D2EXAMPLE REST.GET.LOGGING_STATUS - "GET /amzn-s3-demo-bucket1?logging HTTP/1.1" 200 - 242 - 11 - "-" "S3Console/0.4" - 9vKBE6vMhrNiWHZmb2L0mXOcqPGzQOI5XLnCtZNPxev+Hf+7tpT6sxDwDty4LHBUOZJG96N1234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - - us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be A1206F460EXAMPLE REST.GET.BUCKETPOLICY - "GET /amzn-s3-demo-bucket1?policy HTTP/1.1" 404 NoSuchBucketPolicy 297 - 38 - "-" "S3Console/0.4" - BNaBsXZQQDbssi6xMBdBU2sLt+Yf5kZDmeBUP35sFoKa3sLLeMC78iwEIWxs99CRUrbS4n11234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - Yes us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:01:00 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 7B4A0FABBEXAMPLE REST.GET.VERSIONING - "GET /amzn-s3-demo-bucket1?versioning HTTP/1.1" 200 - 113 - 33 - "-" "S3Console/0.4" - Ke1bUcazaN1jWuUlPJaxF64cQVpUEhoZKEG/hmy/gijN/I1DeWqDfFvnpybfEseEME/u7ME1234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - - us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:01:57 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DD6CC733AEXAMPLE REST.PUT.OBJECT s3-dg.pdf "PUT /amzn-s3-demo-bucket1/s3-dg.pdf HTTP/1.1" 200 - - 4406583 41754 28 "-" "S3Console/0.4" - 10S62Zv81kBW7BB6SX4XJ48o6kpcl6LPwEoizZQQxJd5qDSCTLX0TgS37kYUBKQW3+bPdrg1234= SigV4 ECDHE-RSA-AES128-SHA AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - Yes us-east-1
```

The following is an example log record for the **Compute checksum** operation:

```
7cd47ef2be amzn-s3-demo-bucket [06/Feb/2019:00:00:38 +0000] - 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be e5042925-b524-4b3b-a869-f3881e78ff3a S3.COMPUTE.OBJECT.CHECKSUM example-object - - - - 1048576 - - - - - bPf7qjG4XwYdPgDQTl72GW/uotRhdPz2UryEyAFLDSRmKrakUkJCYLtAw6fdANcrsUYc1M/kIulXM1u5vZQT5g== - - - - - - - -
```
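
Because the fields are space-delimited but the timestamp is enclosed in brackets and the `Request-URI`, `Referer`, and `User-Agent` fields are enclosed in quotation marks, a naive `split()` breaks records apart incorrectly. The following Python sketch (illustrative only, not an official parser; the field names follow the order described in [Log record fields](#log-record-fields)) tokenizes a record into named fields:

```python
import re

# Tokens are separated by spaces, except for bracketed timestamps
# ([06/Feb/2019:00:00:38 +0000]) and quoted strings ("GET ... HTTP/1.1").
TOKEN = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')

# Field order as described in "Log record fields."
FIELD_NAMES = [
    "bucket_owner", "bucket", "time", "remote_ip", "requester",
    "request_id", "operation", "key", "request_uri", "http_status",
    "error_code", "bytes_sent", "object_size", "total_time",
    "turn_around_time", "referer", "user_agent", "version_id",
    "host_id", "signature_version", "cipher_suite", "auth_type",
    "host_header", "tls_version", "access_point_arn", "acl_required",
    "source_region",
]

def parse_record(line):
    """Map one server access log record to a dict of named fields."""
    tokens = TOKEN.findall(line.rstrip("\n"))
    # Newer fields are appended over time, so pad older, shorter records.
    tokens += [None] * (len(FIELD_NAMES) - len(tokens))
    return dict(zip(FIELD_NAMES, tokens))
```

Note that fields containing unescaped quotes from user input can still defeat this simple tokenizer, and records written before newer fields were introduced contain fewer tokens, which is why the sketch pads with `None`.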

**Topics**
+ [Log record fields](#log-record-fields)
+ [Additional logging for copy operations](#AdditionalLoggingforCopyOperations)
+ [Custom access log information](#LogFormatCustom)
+ [Programming considerations for extensible server access log format](#LogFormatExtensible)

## Log record fields


The following list describes the log record fields.

**Bucket Owner**  
The canonical user ID of the owner of the source bucket. The canonical user ID is another form of the AWS account ID. For more information about the canonical user ID, see [AWS account identifiers](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html) in the *AWS General Reference*. For information about how to find the canonical user ID for your account, see [Finding the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId).  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```

**Bucket**  
The name of the bucket that the request was processed against. If the system receives a malformed request and cannot determine the bucket, the request will not appear in any server access log.  
**Example entry**  

```
amzn-s3-demo-bucket1
```

**Time**  
The time at which the request was received; these dates and times are in Coordinated Universal Time (UTC). The format, using `strftime()` terminology, is as follows: `[%d/%b/%Y:%H:%M:%S %z]`  
**Example entry**  

```
[06/Feb/2019:00:00:38 +0000]
```
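
As an illustrative sketch (Python, not part of the log format itself), you can parse this field by stripping the brackets and applying the matching `strptime()` pattern:

```python
from datetime import datetime

# The Time field uses the strftime() pattern [%d/%b/%Y:%H:%M:%S %z].
entry = "[06/Feb/2019:00:00:38 +0000]"
parsed = datetime.strptime(entry.strip("[]"), "%d/%b/%Y:%H:%M:%S %z")
print(parsed.isoformat())  # 2019-02-06T00:00:38+00:00
```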

**Remote IP**  
The apparent IP address of the requester. Intermediate proxies and firewalls might obscure the actual IP address of the machine that's making the request.  
**Example entry**  

```
192.0.2.3
```

**Requester**  
The canonical user ID of the requester, or a `-` for unauthenticated requests. If the requester was an IAM user, this field returns the requester's IAM user name along with the AWS account that the IAM user belongs to. This identifier is the same one used for access control purposes.  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```
If the requester is using an assumed role, this field returns the assumed IAM role.  
**Example entry**  

```
arn:aws:sts::123456789012:assumed-role/roleName/test-role
```

**Request ID**  
A string generated by Amazon S3 to uniquely identify each request. For **Compute checksum** job requests, the **Request ID** field displays the associated job ID. For more information, see [Compute checksums](batch-ops-compute-checksums.md).  
**Example entry**  

```
3E57427F33A59F07
```

**Operation**  
The operation listed here is declared as `SOAP.operation`, `REST.HTTP_method.resource_type`, `WEBSITE.HTTP_method.resource_type`, `BATCH.DELETE.OBJECT`, or `S3.action.resource_type` for [S3 Lifecycle and logging](lifecycle-and-other-bucket-config.md#lifecycle-general-considerations-logging). For [Compute checksums](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-compute-checksums.html) job requests, the operation is listed as `S3.COMPUTE.OBJECT.CHECKSUM`.  
**Example entry**  

```
REST.PUT.OBJECT
S3.COMPUTE.OBJECT.CHECKSUM
```

**Key**  
The key (object name) part of the request.  
**Example entry**  

```
/photos/2019/08/puppy.jpg
```

**Request-URI**  
The `Request-URI` part of the HTTP request message. This field may include unescaped quotes from the user input.  
**Example Entry**  

```
"GET /amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg?x-foo=bar HTTP/1.1"
```

**HTTP status**  
The numeric HTTP status code of the response.  
**Example entry**  

```
200
```

**Error Code**  
The Amazon S3 [error code](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html), or `-` if no error occurred.  
**Example entry**  

```
NoSuchBucket
```

**Bytes Sent**  
The number of response bytes sent, excluding HTTP protocol overhead, or `-` if zero.  
**Example entry**  

```
2662992
```

**Object Size**  
The total size of the object in question.  
**Example entry**  

```
3462992
```

**Total Time**  
The number of milliseconds that the request was in flight from the server's perspective. This value is measured from the time that your request is received to the time that the last byte of the response is sent. Measurements made from the client's perspective might be longer because of network latency.  
**Example entry**  

```
70
```

**Turn-Around Time**  
The number of milliseconds that Amazon S3 spent processing your request. This value is measured from the time that the last byte of your request was received until the time that the first byte of the response was sent.  
**Example entry**  

```
10
```

**Referer**  
The value of the HTTP `Referer` header, if present. HTTP user-agents (for example, browsers) typically set this header to the URL of the linking or embedding page when making a request. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"http://www.example.com/webservices"
```

**User-Agent**  
The value of the HTTP `User-Agent` header. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"curl/7.15.1"
```

**Version Id**  
The version ID in the request, or `-` if the operation doesn't take a `versionId` parameter.  
**Example entry**  

```
3HL4kqtJvjVBH40Nrjfkd
```

**Host Id**  
The `x-amz-id-2` or Amazon S3 extended request ID.   
**Example entry**  

```
s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=
```

**Signature Version**  
The signature version, `SigV2` or `SigV4`, that was used to authenticate the request, or a `-` for unauthenticated requests.  
**Example entry**  

```
SigV2
```

**Cipher Suite**  
The Transport Layer Security (TLS) cipher that was negotiated for an HTTPS request, or a `-` for HTTP.  
**Example entry**  

```
ECDHE-RSA-AES128-GCM-SHA256
```

**Authentication Type**  
The type of request authentication used: `AuthHeader` for authentication headers, `QueryString` for query string (presigned URL), or a `-` for unauthenticated requests.  
**Example entry**  

```
AuthHeader
```

**Host Header**  
The endpoint used to connect to Amazon S3.  
**Example entry**  

```
s3.us-west-2.amazonaws.com
```
Some earlier Regions support legacy endpoints. You might see these endpoints in your server access logs or AWS CloudTrail logs. For more information, see [Legacy endpoints](VirtualHosting.md#s3-legacy-endpoints). For a complete list of Amazon S3 Regions and endpoints, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *Amazon Web Services General Reference*.

**TLS version**  
The Transport Layer Security (TLS) version negotiated by the client. The value is one of the following: `TLSv1.1`, `TLSv1.2`, `TLSv1.3`, or `-` if TLS wasn't used.  
**Example entry**  

```
TLSv1.2
```

**Access Point ARN**  
The Amazon Resource Name (ARN) of the access point of the request. If the access point ARN is malformed or not used, the field will contain a `-`. For more information about access points, see [Using Amazon S3 access points for general purpose buckets](using-access-points.md). For more information about ARNs, see [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the *AWS General Reference*.  
**Example entry**  

```
arn:aws:s3:us-east-1:123456789012:accesspoint/example-AP
```

**aclRequired**  
A string that indicates whether the request required an access control list (ACL) for authorization. If the request required an ACL for authorization, the string is `Yes`. If no ACLs were required, the string is `-`. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md). For more information about using the `aclRequired` field to disable ACLs, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).   
**Example entry**  

```
Yes
```

**Source region**  
The AWS Region from which the request originated. This field shows a dash (`-`) when the origin Region can't be determined (for example, for PrivateLink connections, Direct Connect connections, Bring your own IP addresses (BYOIP), or non-AWS IP addresses), or when the log is generated by operations triggered by customer-configured policies or actions, such as lifecycle or checksum operations.  
**Example entry**  

```
us-east-1
```
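Taken together, a record is a single space-delimited line in which the timestamp is bracketed and the Request-URI, Referer, and User-Agent fields are quoted. As a rough illustration (not an official parser — the field names below are ours), the following Python sketch tokenizes one record into a dict and pads missing trailing fields with `-` so that older records still parse:

```python
import re

# Field names are illustrative, following the order documented above.
_FIELDS = [
    "bucket_owner", "bucket", "time", "remote_ip", "requester",
    "request_id", "operation", "key", "request_uri", "http_status",
    "error_code", "bytes_sent", "object_size", "total_time",
    "turn_around_time", "referer", "user_agent", "version_id",
    "host_id", "signature_version", "cipher_suite", "auth_type",
    "host_header", "tls_version", "access_point_arn", "acl_required",
    "source_region",
]

# Tokens are space-delimited, except for the bracketed timestamp and
# the quoted Request-URI, Referer, and User-Agent fields.
_TOKEN = re.compile(r'\[([^\]]*)\]|"([^"]*)"|(\S+)')

def parse_log_record(line):
    """Tokenize one access log record into a dict of field values."""
    tokens = [
        next(g for g in m.groups() if g is not None)
        for m in _TOKEN.finditer(line)
    ]
    # Pad with "-" so records written before newer fields existed
    # still parse; unknown trailing fields are dropped by zip().
    tokens += ["-"] * (len(_FIELDS) - len(tokens))
    return dict(zip(_FIELDS, tokens))
```

Because the padding and `zip()` truncation tolerate both shorter (older) and longer (future) records, the same sketch keeps working as the log format evolves.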

## Additional logging for copy operations


A copy operation involves a `GET` and a `PUT`. For that reason, we log two records when performing a copy operation. The previous section describes the fields related to the `PUT` part of the operation. The following list describes the fields in the record that relate to the `GET` part of the copy operation.

**Bucket Owner**  
The canonical user ID of the bucket that stores the object being copied. The canonical user ID is another form of the AWS account ID. For more information about the canonical user ID, see [AWS account identifiers](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html) in the *AWS General Reference*. For information about how to find the canonical user ID for your account, see [Finding the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId).  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```

**Bucket**  
The name of the bucket that stores the object that's being copied.  
**Example entry**  

```
amzn-s3-demo-bucket1
```

**Time**  
The time at which the request was received; these dates and times are in Coordinated Universal Time (UTC). The format, using `strftime()` terminology, is as follows: `[%d/%b/%Y:%H:%M:%S %z]`  
**Example entry**  

```
[06/Feb/2019:00:00:38 +0000]
```
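For reference, this format maps directly onto Python's `datetime.strptime`, using `%b` for the abbreviated month name shown in the example entry (brackets stripped before parsing):

```python
from datetime import datetime

# Parse the bracketed timestamp from a log record (brackets removed).
ts = datetime.strptime("06/Feb/2019:00:00:38 +0000", "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())  # 2019-02-06T00:00:38+00:00
```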

**Remote IP**  
The apparent IP address of the requester. Intermediate proxies and firewalls might obscure the actual IP address of the machine that's making the request.  
**Example entry**  

```
192.0.2.3
```

**Requester**  
The canonical user ID of the requester, or a `-` for unauthenticated requests. If the requester was an IAM user, this field will return the requester's IAM user name along with the AWS account root user that the IAM user belongs to. This identifier is the same one used for access control purposes.  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```
If the requester is using an assumed role, this field returns the ARN of the assumed IAM role.  
**Example entry**  

```
arn:aws:sts::123456789012:assumed-role/roleName/test-role
```

**Request ID**  
A string generated by Amazon S3 to uniquely identify each request. For **Compute checksum** job requests, the **Request ID** field displays the associated job ID. For more information, see [Compute checksums](batch-ops-compute-checksums.md).  
**Example entry**  

```
3E57427F33A59F07
```

**Operation**  
The operation listed here is declared as `SOAP.operation`, `REST.HTTP_method.resource_type`, `WEBSITE.HTTP_method.resource_type`, or `BATCH.DELETE.OBJECT`.  
**Example entry**  

```
REST.COPY.OBJECT_GET
```

**Key**  
The key (object name) of the object being copied, or `-` if the operation doesn't take a key parameter.   
**Example entry**  

```
/photos/2019/08/puppy.jpg
```

**Request-URI**  
The `Request-URI` part of the HTTP request message. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"GET /amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg?x-foo=bar"
```

**HTTP status**  
The numeric HTTP status code of the `GET` portion of the copy operation.  
**Example entry**  

```
200
```

**Error Code**  
The Amazon S3 [error code](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html) of the `GET` portion of the copy operation, or `-` if no error occurred.  
**Example entry**  

```
NoSuchBucket
```

**Bytes Sent**  
The number of response bytes sent, excluding the HTTP protocol overhead, or `-` if zero.  
**Example entry**  

```
2662992
```

**Object Size**  
The total size of the object in question.  
**Example entry**  

```
3462992
```

**Total Time**  
The number of milliseconds that the request was in flight from the server's perspective. This value is measured from the time that your request is received to the time that the last byte of the response is sent. Measurements made from the client's perspective might be longer because of network latency.  
**Example entry**  

```
70
```

**Turn-Around Time**  
The number of milliseconds that Amazon S3 spent processing your request. This value is measured from the time that the last byte of your request was received until the time that the first byte of the response was sent.  
**Example entry**  

```
10
```

**Referer**  
The value of the HTTP `Referer` header, if present. HTTP user-agents (for example, browsers) typically set this header to the URL of the linking or embedding page when making a request. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"http://www.example.com/webservices"
```

**User-Agent**  
The value of the HTTP `User-Agent` header. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"curl/7.15.1"
```

**Version Id**  
The version ID of the object being copied, or `-` if the `x-amz-copy-source` header didn't specify a `versionId` parameter as part of the copy source.  
**Example entry**  

```
3HL4kqtJvjVBH40Nrjfkd
```

**Host Id**  
The `x-amz-id-2` or Amazon S3 extended request ID.  
**Example entry**  

```
s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=
```

**Signature Version**  
The signature version, `SigV2` or `SigV4`, that was used to authenticate the request, or a `-` for unauthenticated requests.  
**Example entry**  

```
SigV4
```

**Cipher Suite**  
The Transport Layer Security (TLS) cipher that was negotiated for an HTTPS request, or a `-` for HTTP.  
**Example entry**  

```
ECDHE-RSA-AES128-GCM-SHA256
```

**Authentication Type**  
The type of request authentication used: `AuthHeader` for authentication headers, `QueryString` for query strings (presigned URLs), or a `-` for unauthenticated requests.  
**Example entry**  

```
AuthHeader
```

**Host Header**  
The endpoint that was used to connect to Amazon S3.  
**Example entry**  

```
s3.us-west-2.amazonaws.com
```
Some earlier Regions support legacy endpoints. You might see these endpoints in your server access logs or AWS CloudTrail logs. For more information, see [Legacy endpoints](VirtualHosting.md#s3-legacy-endpoints). For a complete list of Amazon S3 Regions and endpoints, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *Amazon Web Services General Reference*.

**TLS version**  
The Transport Layer Security (TLS) version negotiated by the client. The value is one of the following: `TLSv1.1`, `TLSv1.2`, `TLSv1.3`, or `-` if TLS wasn't used.  
**Example entry**  

```
TLSv1.2
```

**Access Point ARN**  
The Amazon Resource Name (ARN) of the access point of the request. If the access point ARN is malformed or not used, the field will contain a `-`. For more information about access points, see [Using Amazon S3 access points for general purpose buckets](using-access-points.md). For more information about ARNs, see [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the *AWS General Reference*.  
**Example entry**  

```
arn:aws:s3:us-east-1:123456789012:accesspoint/example-AP
```

**aclRequired**  
A string that indicates whether the request required an access control list (ACL) for authorization. If the request required an ACL for authorization, the string is `Yes`. If no ACLs were required, the string is `-`. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md). For more information about using the `aclRequired` field to disable ACLs, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).   
**Example entry**  

```
Yes
```

**Source region**  
The AWS Region from which the request originated. This field shows a dash (`-`) when the origin Region can't be determined (for example, for PrivateLink connections, Direct Connect connections, Bring your own IP addresses (BYOIP), or non-AWS IP addresses), or when the log is generated by operations triggered by customer-configured policies or actions, such as lifecycle or checksum operations.  
**Example entry**  

```
us-east-1
```

## Custom access log information


You can include custom information to be stored in the access log record for a request. To do this, add a custom query-string parameter to the URL for the request. Amazon S3 ignores query-string parameters that begin with `x-`, but includes those parameters in the access log record for the request, as part of the `Request-URI` field of the log record. 

For example, a `GET` request for `"s3.amazonaws.com/amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg?x-user=johndoe"` works the same as the request for `"s3.amazonaws.com/amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg"`, except that the `"x-user=johndoe"` string is included in the `Request-URI` field for the associated log record. This functionality is available in the REST interface only.
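As a minimal sketch, the helper below (`add_log_tag` is a hypothetical name) appends an `x-` query-string parameter to an object URL. This applies to a URL that hasn't been signed yet; appending parameters to an already-presigned URL may invalidate its signature.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_log_tag(url, **tags):
    """Append x-prefixed query parameters, which Amazon S3 ignores for
    request processing but records in the Request-URI log field."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query) + [(f"x-{k}", v) for k, v in tags.items()]
    return urlunsplit(parts._replace(query=urlencode(query)))

tagged = add_log_tag(
    "https://s3.amazonaws.com/amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg",
    user="johndoe",
)
print(tagged)
# https://s3.amazonaws.com/amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg?x-user=johndoe
```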

## Programming considerations for extensible server access log format


Occasionally, we might extend the access log record format by adding new fields to the end of each line. Therefore, make sure that any of your code that parses server access logs can handle trailing fields that it might not understand. 
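One way to honor this is to map tokens onto only the field names that your code understands and ignore any trailing extras. In Python, `zip()` does this naturally, as in this sketch (field and token values are made up for illustration):

```python
def map_known_fields(tokens, known_fields):
    """Pair tokens with the field names we understand; zip() stops at
    the shorter sequence, so trailing fields added later are ignored."""
    return dict(zip(known_fields, tokens))

rec = map_known_fields(
    ["owner123", "amzn-s3-demo-bucket1", "some-future-field"],
    ["bucket_owner", "bucket"],
)
print(rec)  # {'bucket_owner': 'owner123', 'bucket': 'amzn-s3-demo-bucket1'}
```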

# Deleting Amazon S3 log files
Deleting log files

An Amazon S3 bucket with server access logging enabled can accumulate many server log objects over time. Your application might need these access logs for a specific period after they are created, and after that, you might want to delete them. You can use Amazon S3 Lifecycle configuration to set rules so that Amazon S3 automatically queues these objects for deletion at the end of their life. 

You can define a lifecycle configuration for a subset of objects in your S3 bucket by using a shared prefix. If you specified a prefix in your server access logging configuration, you can set a lifecycle configuration rule to delete log objects that have that prefix. 

For example, suppose that your log objects have the prefix `logs/`. You can set a lifecycle configuration rule to delete all objects in the bucket that have the prefix `logs/` after a specified period of time. 
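A rule expressing that policy might look like the following sketch (the rule ID and the 30-day window are illustrative assumptions). The dictionary shape matches what the `PutBucketLifecycleConfiguration` API — for example, boto3's `put_bucket_lifecycle_configuration` — expects:

```python
# Illustrative lifecycle rule: expire access log objects under the
# "logs/" prefix 30 days after creation.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-access-logs",       # illustrative rule name
            "Filter": {"Prefix": "logs/"},    # only objects under logs/
            "Status": "Enabled",
            "Expiration": {"Days": 30},       # delete after 30 days
        }
    ]
}

# Applied with boto3, this would look like:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="amzn-s3-demo-bucket1",
#       LifecycleConfiguration=lifecycle_config,
#   )
```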

For more information about lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

For general information about server access logging, see [Logging requests with server access logging](ServerLogs.md).

# Using Amazon S3 server access logs to identify requests
Identifying S3 requests

You can identify Amazon S3 requests by using Amazon S3 server access logs. 

**Note**  
To identify Amazon S3 requests, we recommend that you use AWS CloudTrail data events instead of Amazon S3 server access logs. CloudTrail data events are easier to set up and contain more information. For more information, see [Identifying Amazon S3 requests using CloudTrail](cloudtrail-request-identification.md).
Depending on how many access requests you get, analyzing your logs might require more resources or time than using CloudTrail data events.

**Topics**
+ [Querying access logs for requests by using Amazon Athena](#querying-s3-access-logs-for-requests)
+ [Identifying Signature Version 2 requests by using Amazon S3 access logs](#using-s3-access-logs-to-identify-sigv2-requests)
+ [Identifying object access requests by using Amazon S3 access logs](#using-s3-access-logs-to-identify-objects-access)

## Querying access logs for requests by using Amazon Athena
Using Amazon Athena

You can identify Amazon S3 requests with Amazon S3 access logs by using Amazon Athena. 

Amazon S3 stores server access logs as objects in an S3 bucket. It is often easier to use a tool that can analyze the logs in Amazon S3. Athena supports analysis of S3 objects and can be used to query Amazon S3 access logs.

**Example**  
The following example shows how you can query Amazon S3 server access logs in Amazon Athena. Replace the `user input placeholders` used in the following examples with your own information.  
To specify an Amazon S3 location in an Athena query, you must provide an S3 URI for the bucket where your logs are delivered to. This URI must include the bucket name and prefix in the following format: `s3://amzn-s3-demo-bucket1-logs/prefix/` 

1. Open the Athena console at [https://console.aws.amazon.com/athena/](https://console.aws.amazon.com/athena/home).

1. In the Query Editor, run a command similar to the following. Replace `s3_access_logs_db` with the name that you want to give to your database. 

   ```
   CREATE DATABASE s3_access_logs_db
   ```
**Note**  
It's a best practice to create the database in the same AWS Region as your S3 bucket. 

1. In the Query Editor, run a command similar to the following to create a table schema in the database that you created in step 2. Replace `s3_access_logs_db.mybucket_logs` with the name that you want to give to your table. The `STRING` and `BIGINT` data type values are the access log properties. You can query these properties in Athena. For `LOCATION`, enter the S3 bucket and prefix path as noted earlier.

------
#### [ Date-based partitioning ]

   ```
   CREATE EXTERNAL TABLE s3_access_logs_db.mybucket_logs( 
    `bucketowner` STRING, 
    `bucket_name` STRING, 
    `requestdatetime` STRING, 
    `remoteip` STRING, 
    `requester` STRING, 
    `requestid` STRING, 
    `operation` STRING, 
    `key` STRING, 
    `request_uri` STRING, 
    `httpstatus` STRING, 
    `errorcode` STRING, 
    `bytessent` BIGINT, 
    `objectsize` BIGINT, 
    `totaltime` STRING, 
    `turnaroundtime` STRING, 
    `referrer` STRING, 
    `useragent` STRING, 
    `versionid` STRING, 
    `hostid` STRING, 
    `sigv` STRING, 
    `ciphersuite` STRING, 
    `authtype` STRING, 
    `endpoint` STRING, 
    `tlsversion` STRING,
    `accesspointarn` STRING,
    `aclrequired` STRING,
    `sourceregion` STRING)
    PARTITIONED BY (
      `timestamp` string)
   ROW FORMAT SERDE 
    'org.apache.hadoop.hive.serde2.RegexSerDe' 
   WITH SERDEPROPERTIES ( 
    'input.regex'='([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) (-|[0-9]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) ([^ ]*)(?: ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*))?.*$') 
   STORED AS INPUTFORMAT 
    'org.apache.hadoop.mapred.TextInputFormat' 
   OUTPUTFORMAT 
    'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
   LOCATION
    's3://bucket-name/prefix-name/account-id/region/source-bucket-name/'
    TBLPROPERTIES (
     'projection.enabled'='true', 
     'projection.timestamp.format'='yyyy/MM/dd', 
     'projection.timestamp.interval'='1', 
     'projection.timestamp.interval.unit'='DAYS', 
     'projection.timestamp.range'='2024/01/01,NOW', 
     'projection.timestamp.type'='date', 
     'storage.location.template'='s3://bucket-name/prefix-name/account-id/region/source-bucket-name/${timestamp}')
   ```

------
#### [ Non-date-based partitioning ]

   ```
   CREATE EXTERNAL TABLE `s3_access_logs_db.mybucket_logs`(
     `bucketowner` STRING, 
     `bucket_name` STRING, 
     `requestdatetime` STRING, 
     `remoteip` STRING, 
     `requester` STRING, 
     `requestid` STRING, 
     `operation` STRING, 
     `key` STRING, 
     `request_uri` STRING, 
     `httpstatus` STRING, 
     `errorcode` STRING, 
     `bytessent` BIGINT, 
     `objectsize` BIGINT, 
     `totaltime` STRING, 
     `turnaroundtime` STRING, 
     `referrer` STRING, 
     `useragent` STRING, 
     `versionid` STRING, 
     `hostid` STRING, 
     `sigv` STRING, 
     `ciphersuite` STRING, 
     `authtype` STRING, 
     `endpoint` STRING, 
     `tlsversion` STRING,
     `accesspointarn` STRING,
     `aclrequired` STRING,
     `sourceregion` STRING)
   ROW FORMAT SERDE 
     'org.apache.hadoop.hive.serde2.RegexSerDe' 
   WITH SERDEPROPERTIES ( 
     'input.regex'='([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) (-|[0-9]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) ([^ ]*)(?: ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*))?.*$') 
   STORED AS INPUTFORMAT 
     'org.apache.hadoop.mapred.TextInputFormat' 
   OUTPUTFORMAT 
     'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
   LOCATION
     's3://amzn-s3-demo-bucket1-logs/prefix/'
   ```

------

1. In the navigation pane, under **Database**, choose your database.

1. Under **Tables**, choose **Preview table** next to your table name.

   In the **Results** pane, you should see data from the server access logs, such as `bucketowner`, `bucket_name`, `requestdatetime`, and so on. This means that you successfully created the Athena table. You can now query the Amazon S3 server access logs.

**Example — Show who deleted an object and when (timestamp, IP address, and IAM user)**  

```
SELECT requestdatetime, remoteip, requester, key 
FROM s3_access_logs_db.mybucket_logs 
WHERE key = 'images/picture.jpg' AND operation like '%DELETE%';
```

**Example — Show all operations that were performed by an IAM user**  

```
SELECT * 
FROM s3_access_logs_db.mybucket_logs 
WHERE requester='arn:aws:iam::123456789123:user/user_name';
```

**Example — Show all operations that were performed on an object in a specific time period**  

```
SELECT *
FROM s3_access_logs_db.mybucket_logs
WHERE Key='prefix/images/picture.jpg' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2017-02-18:07:00:00','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2017-02-18:08:00:00','yyyy-MM-dd:HH:mm:ss');
```

**Example — Show how much data was transferred to a specific IP address in a specific time period**  

```
SELECT coalesce(SUM(bytessent), 0) AS bytessenttotal
FROM s3_access_logs_db.mybucket_logs
WHERE remoteip='192.0.2.1'
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2022-06-01','yyyy-MM-dd')
AND parse_datetime('2022-07-01','yyyy-MM-dd');
```

**Example — Find request IDs for HTTP 5xx errors in a specific time period**  

```
SELECT requestdatetime, key, httpstatus, errorcode, requestid, hostid 
FROM s3_access_logs_db.mybucket_logs
WHERE httpstatus like '5%' AND timestamp
BETWEEN '2024/01/29'
AND '2024/01/30'
```

**Note**  
To reduce the time that you retain your logs, you can create an S3 Lifecycle configuration for your server access logs bucket. Create lifecycle configuration rules to remove log files periodically. Doing so reduces the amount of data that Athena analyzes for each query. For more information, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md).

## Identifying Signature Version 2 requests by using Amazon S3 access logs
SigV2 requests

Amazon S3 support for Signature Version 2 will be turned off (deprecated). After that, Amazon S3 will no longer accept requests that use Signature Version 2, and all requests must use Signature Version 4 signing. You can identify Signature Version 2 access requests by using Amazon S3 access logs. 

**Note**  
To identify Signature Version 2 requests, we recommend that you use AWS CloudTrail data events instead of Amazon S3 server access logs. CloudTrail data events are easier to set up and contain more information than server access logs. For more information, see [Identifying Amazon S3 Signature Version 2 requests by using CloudTrail](cloudtrail-request-identification.md#cloudtrail-identification-sigv2-requests).

**Example — Show all requesters that are sending Signature Version 2 traffic**  

```
SELECT requester, sigv, Count(sigv) as sigcount 
FROM s3_access_logs_db.mybucket_logs
GROUP BY requester, sigv;
```

## Identifying object access requests by using Amazon S3 access logs
Object access

You can use queries on Amazon S3 server access logs to identify Amazon S3 object access requests, for operations such as `GET`, `PUT`, and `DELETE`, and discover further information about those requests.

The following Amazon Athena query example shows how to get all `PUT` object requests for Amazon S3 from a server access log. 

**Example — Show all requesters that are sending `PUT` object requests in a certain period**  

```
SELECT bucket_name, requester, remoteip, key, httpstatus, errorcode, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE operation='REST.PUT.OBJECT' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z') 
BETWEEN parse_datetime('2019-07-01:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2019-07-02:00:42:42','yyyy-MM-dd:HH:mm:ss')
```

The following Amazon Athena query example shows how to get all `GET` object requests for Amazon S3 from the server access log. 

**Example — Show all requesters that are sending `GET` object requests in a certain period**  

```
SELECT bucket_name, requester, remoteip, key, httpstatus, errorcode, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE operation='REST.GET.OBJECT' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z') 
BETWEEN parse_datetime('2019-07-01:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2019-07-02:00:42:42','yyyy-MM-dd:HH:mm:ss')
```

The following Amazon Athena query example shows how to get all anonymous requests to your S3 buckets from the server access log. 

**Example — Show all anonymous requesters that are making requests to a bucket during a certain period**  

```
SELECT bucket_name, requester, remoteip, key, httpstatus, errorcode, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE requester IS NULL 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z') 
BETWEEN parse_datetime('2019-07-01:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2019-07-02:00:42:42','yyyy-MM-dd:HH:mm:ss')
```

The following Amazon Athena query shows how to identify all requests to your S3 buckets that required an access control list (ACL) for authorization. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs. After you've created these bucket policies, you can disable ACLs for these buckets. For more information about disabling ACLs, see [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md). 

**Example — Identify all requests that required an ACL for authorization**  

```
SELECT bucket_name, requester, key, operation, aclrequired, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE aclrequired = 'Yes' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2022-05-10:00:00:00','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2022-08-10:00:00:00','yyyy-MM-dd:HH:mm:ss')
```

**Note**  
You can modify the date range to suit your needs.
These query examples might also be useful for security monitoring. You can review the results for `PutObject` or `GetObject` calls from unexpected or unauthorized IP addresses or requesters and identify any anonymous requests to your buckets.
These queries retrieve information only from the time at which logging was enabled. 
If you are using AWS CloudTrail logs, see [Identifying access to S3 objects by using CloudTrail](cloudtrail-request-identification.md#cloudtrail-identification-object-access). 

# Troubleshoot server access logging


The following topics can help you troubleshoot issues that you might encounter when setting up logging with Amazon S3.

**Topics**
+ [Common error messages when setting up logging](#common-errors)
+ [Troubleshooting delivery failures](#delivery-failures)

## Common error messages when setting up logging


The following common error messages can appear when you're enabling logging through the AWS Command Line Interface (AWS CLI) and AWS SDKs: 

Error: Cross S3 location logging not allowed

If the destination bucket (also known as a *target bucket*) is in a different Region than the source bucket, a `Cross S3 location logging not allowed` error occurs. To resolve this error, make sure that the destination bucket that's configured to receive the access logs is in the same AWS Region and AWS account as the source bucket.

Error: The owner for the bucket to be logged and the target bucket must be the same

When you're enabling server access logging, this error occurs if the specified destination bucket belongs to a different account. To resolve this error, make sure that the destination bucket is in the same AWS account as the source bucket.

**Note**  
We recommend that you choose a destination bucket that's different from the source bucket. When the source bucket and destination bucket are the same, additional logs are created for the logs that are written to the bucket, which can increase your storage bill. These extra logs about logs can also make it difficult to find the particular logs that you're looking for. For simpler log management, we recommend saving access logs in a different bucket. For more information, see [How do I enable log delivery?](ServerLogs.md#server-access-logging-overview).

Error: The target bucket for logging does not exist

The destination bucket must exist prior to setting the configuration. This error indicates that the destination bucket doesn't exist or can't be found. Make sure that the bucket name is spelled correctly, and then try again.

Error: Target grants not allowed for bucket owner enforced buckets

This error indicates that the destination bucket uses the Bucket owner enforced setting for S3 Object Ownership. The Bucket owner enforced setting doesn't support destination (target) grants. For more information, see [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general).

## Troubleshooting delivery failures


To avoid server access logging issues, make sure that you're following these best practices:
+ **The S3 log delivery group has write access to the destination bucket** – The S3 log delivery group delivers server access logs to the destination bucket. A bucket policy or bucket access control list (ACL) can be used to grant write access to the destination bucket. However, we recommend that you use a bucket policy instead of an ACL. For more information about how to grant write access to your destination bucket, see [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general).
**Note**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, be aware of the following:   
ACLs are disabled and no longer affect permissions. This means that you can't update your bucket ACL to grant access to the S3 log delivery group. Instead, to grant access to the logging service principal, you must update the bucket policy for the destination bucket. 
You can't include destination grants in your `PutBucketLogging` configuration. 
+ **The bucket policy for the destination bucket allows access to the logs** – Check the bucket policy of the destination bucket. Search the bucket policy for any statements that contain `"Effect": "Deny"`. Then, verify that the `Deny` statement isn't preventing access logs from being written to the bucket.
+ **S3 Object Lock isn't enabled on the destination bucket** – Check if the destination bucket has Object Lock enabled. Object Lock blocks server access log delivery. You must choose a destination bucket that doesn't have Object Lock enabled.
+ **Amazon S3 managed keys (SSE-S3) is selected if default encryption is enabled on the destination bucket** – You can use default bucket encryption on the destination bucket only if you use server-side encryption with Amazon S3 managed keys (SSE-S3). Default server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) is not supported for server access logging destination buckets. For more information about how to enable default encryption, see [Configuring default encryption](default-bucket-encryption.md).
+ **The destination bucket does not have Requester Pays enabled** – Using a Requester Pays bucket as the destination bucket for server access logging is not supported. To allow delivery of server access logs, disable the Requester Pays option on the destination bucket.
+ **Review your AWS Organizations service control policies (SCPs) and resource control policies (RCPs)** – When you're using AWS Organizations, check the service control policies and resource control policies to make sure that Amazon S3 access is allowed. These policies specify the maximum permissions for principals and resources in the affected accounts. Search the policies for any statements that contain `"Effect": "Deny"` and verify that `Deny` statements aren't preventing any access logs from being written to the bucket. For more information, see [Authorization policies in AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_authorization_policies.html) in the *AWS Organizations User Guide*.
+ **Allow some time for recent logging configuration changes to take effect** – Enabling server access logging for the first time, or changing the destination bucket for logs, requires time to fully take effect. It might take longer than an hour for all requests to be properly logged and delivered. 

  To check for log delivery failures, enable request metrics in Amazon CloudWatch. If the logs are not delivered within a few hours, look for the `4xxErrors` metric, which can indicate log delivery failures. For more information about enabling request metrics, see [Creating a CloudWatch metrics configuration for all the objects in your bucket](configure-request-metrics-bucket.md).
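As a starting point for the bucket-policy check described above, the following sketch (the policy document is a made-up example) scans a policy for `Deny` statements whose actions cover `s3:PutObject` and might therefore block log delivery:

```python
import json

# Hypothetical bucket policy, used only for illustration.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "BlockWrites", "Effect": "Deny", "Principal": "*",
     "Action": "s3:PutObject",
     "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1-logs/*"}
  ]
}
""")

def deny_statements_blocking_puts(policy_doc):
    """Return Deny statements whose Action covers s3:PutObject."""
    hits = []
    for stmt in policy_doc.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Deny" and any(
            a in ("s3:PutObject", "s3:*", "*") for a in actions
        ):
            hits.append(stmt)
    return hits

for stmt in deny_statements_blocking_puts(policy):
    print("Review Deny statement:", stmt.get("Sid", "<no Sid>"))
```

Any statement this flags isn't necessarily a problem — conditions may scope it away from the logging service principal — but each one is worth reviewing against the log delivery permissions.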