

# Viewing metrics with Amazon S3 Storage Lens
<a name="storage_lens_view_metrics"></a>

S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. You can also send a daily metrics report in CSV or Parquet format to a general purpose S3 bucket or export the metrics directly to an AWS-managed S3 table bucket.

By default, all dashboards are configured with free metrics, which include metrics that you can use to understand usage and activity across your S3 storage, optimize your storage costs, and implement data-protection and access-management best practices. Free metrics are aggregated down to the bucket level. With free metrics, data is available for queries for up to 14 days.

Advanced metrics and recommendations include the following additional features that you can use to gain further insight into usage and activity across your storage and best practices for optimizing your storage:
+ Contextual recommendations (available only in the dashboard)
+ Advanced metrics (including activity metrics aggregated by bucket)
+ Prefix aggregation
+ Storage Lens group aggregation
+ Amazon CloudWatch publishing

Advanced metrics data is available for queries for 15 months. There are additional charges for using S3 Storage Lens with advanced metrics. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing). For more information about free and advanced metrics, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).
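If you manage dashboards programmatically, the upgrade from free metrics to advanced metrics is expressed in the `StorageLensConfiguration` payload that you pass to the S3 Control `PutStorageLensConfiguration` operation. The following is a minimal sketch of such a payload with activity metrics and prefix aggregation enabled; the dashboard ID is a hypothetical placeholder, and the exact field set should be confirmed against the API reference.

```python
import json

def build_advanced_metrics_config(dashboard_id: str) -> dict:
    # Sketch of a StorageLensConfiguration that enables advanced metrics:
    # activity metrics at the account and bucket level, plus prefix-level
    # storage metrics. All values below are illustrative placeholders.
    return {
        "Id": dashboard_id,
        "IsEnabled": True,
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},
            "BucketLevel": {
                "ActivityMetrics": {"IsEnabled": True},
                "PrefixLevel": {
                    "StorageMetrics": {
                        "IsEnabled": True,
                        "SelectionCriteria": {
                            "MaxDepth": 3,
                            "MinStorageBytesPercentage": 1.0,
                        },
                    }
                },
            },
        },
    }

config = build_advanced_metrics_config("example-advanced-dashboard")
print(json.dumps(config, indent=2))
```

You would pass a document like this as the `--storage-lens-configuration` value for `aws s3control put-storage-lens-configuration`, together with your account ID.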

**Topics**
+ [Viewing S3 Storage Lens metrics on the dashboards](storage_lens_view_metrics_dashboard.md)
+ [Viewing Amazon S3 Storage Lens metrics using a data export](storage_lens_view_metrics_export.md)
+ [Monitor S3 Storage Lens metrics in CloudWatch](storage_lens_view_metrics_cloudwatch.md)
+ [Amazon S3 Storage Lens metrics use cases](storage-lens-use-cases.md)

# Viewing S3 Storage Lens metrics on the dashboards
<a name="storage_lens_view_metrics_dashboard"></a>

In the Amazon S3 console, S3 Storage Lens provides an interactive default dashboard that you can use to visualize insights and trends in your data. You can also use this dashboard to flag outliers and receive recommendations for optimizing storage costs and applying data-protection best practices. Your dashboard has drill-down options to generate insights at the account, bucket, AWS Region, prefix, or Storage Lens group level. If you've enabled S3 Storage Lens to work with AWS Organizations, you can also generate insights at the organization level (such as data for all accounts that are part of your AWS Organizations hierarchy). The dashboard always loads for the latest date that has metrics available.

The S3 Storage Lens default dashboard on the console is named **default-account-dashboard**. Amazon S3 pre-configures this dashboard to visualize the summarized insights and trends for your entire account and updates them daily in the S3 console. You can't modify the configuration scope of the default dashboard, but you can upgrade the metrics selection from the free metrics to the paid advanced metrics and recommendations. With advanced metrics and recommendations, you can access additional metrics and features. These features include advanced metric categories, prefix-level aggregation, contextual recommendations, and Amazon CloudWatch publishing.

You can disable the default dashboard, but you can't delete it. If you disable your default dashboard, it is no longer updated. You also will no longer receive any new daily metrics in S3 Storage Lens or in the **Account snapshot** section on the **Buckets** page. You can still see historic data in the default dashboard until the 14-day period for data queries expires. This period is 15 months if you've enabled advanced metrics and recommendations. To access this data, you can re-enable the default dashboard within the expiration period.

You can create additional S3 Storage Lens dashboards and scope them by AWS Regions, S3 buckets, or accounts. You can also scope your dashboards by organization if you've enabled Storage Lens to work with AWS Organizations. When you create or edit an S3 Storage Lens dashboard, you define your dashboard scope and metrics selection. 

 

You can disable or delete any additional dashboards that you create. 
+ If you disable a dashboard, it is no longer updated, and you will no longer receive any new daily metrics. You can still see historic data for free metrics until the 14-day expiration period ends. If you enabled advanced metrics and recommendations for that dashboard, this period is 15 months. To access this data, you can re-enable the dashboard within the expiration period. 
+ If you delete your dashboard, you lose all your dashboard configuration settings. You will no longer receive any new daily metrics, and you also lose access to the historical data associated with that dashboard. If you want to access the historic data for a deleted dashboard, you must create another dashboard with the same name in the same home Region.

**Topics**
+ [Viewing an Amazon S3 Storage Lens dashboard](#storage_lens_console_viewing)
+ [Understanding your S3 Storage Lens dashboard](#storage_lens_console_viewing_dashboard)

## Viewing an Amazon S3 Storage Lens dashboard
<a name="storage_lens_console_viewing"></a>

The following procedure shows how to view an S3 Storage Lens dashboard in the S3 console. For use-case based walkthroughs that show how to use your dashboard to optimize costs, implement best practices, and improve the performance of applications that access your S3 buckets, see [Amazon S3 Storage Lens metrics use cases](storage-lens-use-cases.md).

**Note**  
You can't use your account's root user credentials to view Amazon S3 Storage Lens dashboards. To access S3 Storage Lens dashboards, you must grant the required AWS Identity and Access Management (IAM) permissions to a new or existing IAM user. Then, sign in with those user credentials to access S3 Storage Lens dashboards. For more information, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md) and [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

   Your dashboard opens in S3 Storage Lens. The **Snapshot for *date*** section shows the latest date that S3 Storage Lens has collected metrics for. Your dashboard always loads the latest date that has metrics available.

1. (Optional) To change the date for your S3 Storage Lens dashboard, in the top-right date selector, choose a new date.

1. (Optional) To apply temporary filters to further limit the scope of your dashboard data, do the following:

   1. Expand the **Filters** section.

   1. To filter by specific accounts, AWS Regions, storage classes, buckets, prefixes, or Storage Lens groups, choose the options to filter by.
**Note**  
The **Prefixes** filter and the **Storage Lens groups** filter can’t be applied at the same time.

   1. To update a filter, choose **Apply**.

   1. To remove a filter, choose the **X** next to the filter.

1. In any section in your S3 Storage Lens dashboard, to see data for a specific metric, for **Metric**, choose the metric name.

1. In any chart or visualization in your S3 Storage Lens dashboard, you can drill down into deeper levels of aggregation by using the **Accounts**, **AWS Regions**, **Storage classes**, **Buckets**, **Prefixes**, or **Storage Lens groups** tabs. For an example, see [Uncover cold Amazon S3 buckets](storage-lens-optimize-storage.md#uncover-cold-buckets).

## Understanding your S3 Storage Lens dashboard
<a name="storage_lens_console_viewing_dashboard"></a>

Your S3 Storage Lens dashboard has a primary **Overview** tab, and up to five additional tabs that represent each aggregation level:
+ **Accounts**
+ **AWS Regions**
+ **Storage classes**
+ **Buckets**
+ **Prefixes**
+ **Storage Lens groups**

On the **Overview** tab, your dashboard data is aggregated into three different sections: **Snapshot for *date***, **Trends and distributions**, and **Top N overview**. 

For more information about your S3 Storage Lens dashboard, see the following sections.

### Snapshot
<a name="storage-lens-snapshot"></a>

The **Snapshot for *date*** section shows summary metrics that S3 Storage Lens has aggregated for the date selected. These summary metrics include the following metrics:
+ **Total storage** – The total amount of storage used in bytes.
+ **Object count** – The total number of objects in your AWS account.
+ **Average object size** – The average object size.
+ **Active buckets** – The total number of buckets in active usage (with storage greater than 0 bytes) in your account.
+ **Accounts** – The number of accounts whose storage is in scope. This value is **1** unless you are using AWS Organizations and your S3 Storage Lens has trusted access with a valid service-linked role. For more information, see [Using service-linked roles for Amazon S3 Storage Lens](using-service-linked-roles.md). 
+ **Buckets** – The total number of buckets in your account.

**Metric data**  
For each metric that appears in the snapshot, you can see the following data:
+ **Metric name** – The name of the metric.
+ **Metric category** – The category that the metric is organized into.
+ **Total for *date*** – The total count for the date selected.
+ **% change** – The percentage change from the last snapshot date.
+ **30-day trend** – A trend-line showing the changes for the metric over a 30-day period.
+ **Recommendation** – A contextual recommendation based on the data that's provided in the snapshot. Recommendations are available with advanced metrics and recommendations. For more information, see [Recommendations](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_recommendations).

**Metrics categories**  
You can optionally update your dashboard **Snapshot for *date*** section to display metrics for other categories. If you want to see snapshot data for additional metrics, you can choose from the following **Metrics categories**:
+ **Cost optimization** 
+ **Data protection**
+ **Activity** (available with advanced metrics)
+ **Access management**
+ **Performance**
+ **Events**

The **Snapshot for *date*** section displays only a selection of metrics for each category. To see all metrics for a specific category, choose the metric in the **Trends and distributions** or **Top N overview** sections. For more information about metric categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of S3 Storage Lens metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

### Trends and distributions
<a name="storage-lens-trends"></a>

The second section of the **Overview** tab is **Trends and distributions**. In this section, you can choose two metrics to compare over a date range that you define and see the relationship between those metrics over time. This section also displays charts that you can use to see the **Storage class** and **Region** distribution between the two trends that you're tracking. You can optionally drill down into a data point in one of the charts for deeper analysis.

 For a walkthrough that uses the **Trends and distributions** section, see [Identify buckets that don't use server-side encryption with AWS KMS for default encryption (SSE-KMS)](storage-lens-data-protection.md#storage-lens-sse-kms).

### Top N overview
<a name="storage-lens-top-n"></a>

The third section of the S3 Storage Lens dashboard is **Top N overview** (sorted in ascending or descending order). This section displays your selected metrics across your top *N* accounts, AWS Regions, buckets, prefixes, or Storage Lens groups. If you enabled S3 Storage Lens to work with AWS Organizations, you can also see your selected metrics across your organization.

For a walkthrough that uses the **Top N overview** section, see [Identify your largest S3 buckets](storage-lens-optimize-storage.md#identify-largest-s3-buckets).

### Drill down and analyze by options
<a name="storage-lens-drill-down"></a>

To provide a fluid experience for analysis, the S3 Storage Lens dashboard provides an action menu that appears when you choose any chart value. Choose a chart value to see the associated metrics values, and then choose from two options in the box that appears:
+ The **Drill down** action applies the selected value as a filter across all tabs of your dashboard. You can then drill down into that value for deeper analysis.
+ The **Analyze by** action takes you to the **Dimension** tab that you select and applies that tab value as a filter. These tabs include **Accounts**, **AWS Regions**, **Storage classes**, **Buckets**, **Prefixes** (for dashboards that have **Advanced metrics** and **Prefix aggregation** enabled), and **Storage Lens groups** (for dashboards that have **Advanced metrics** and **Storage Lens group aggregation** enabled). With **Analyze by**, you can view the data in the context of the new dimension for deeper analysis.

The **Drill down** and **Analyze by** actions might be disabled if the outcome would yield illogical results or would not have any value. Both the **Drill down** and **Analyze by** actions apply filters on top of any existing filters across all tabs of the dashboard. You can also remove the filters as needed.

### Tabs
<a name="storage-lens-dimension-tabs"></a>

The dimension-level tabs provide a detailed view of all values within a particular dimension. For example, the **AWS Regions** tab shows metrics for all AWS Regions, and the **Buckets** tab shows metrics for all buckets. Each dimension tab contains an identical layout consisting of four sections:
+ A trend chart that displays your top *N* items within the dimension over the last 30 days for the selected metric. By default, this chart displays the top 10 items, but you can adjust it to display between 3 and 50 items.
+ A histogram chart that shows a vertical bar chart for the selected date and metric. If you have a large number of items to display in this chart, you might need to scroll horizontally.
+ A bubble analysis chart that plots all items within the dimension. This chart represents the first metric on the x axis and the second metric on the y axis. The third metric is represented by the size of the bubble. 
+ A metric grid view that contains each item in the dimension listed in rows. The columns represent each available metric, arranged in metrics category tabs for easier navigation. 

# Viewing Amazon S3 Storage Lens metrics using a data export
<a name="storage_lens_view_metrics_export"></a>

Amazon S3 Storage Lens metrics are generated daily in CSV or Apache Parquet-formatted metrics export files and placed in an S3 general purpose bucket in your account. From there, you can ingest the metrics export into the analytics tools of your choice, such as Amazon QuickSight and Amazon Athena, where you can analyze storage usage and activity trends. You can also send daily metrics exports to an AWS-managed S3 table bucket for immediate querying by using AWS analytics services or third-party tools.

**Topics**
+ [Using an AWS KMS key to encrypt your metrics exports](storage_lens_encrypt_permissions.md)
+ [What is an S3 Storage Lens export manifest?](storage_lens_whatis_metrics_export_manifest.md)
+ [Understanding the Amazon S3 Storage Lens export schemas](storage_lens_understanding_metrics_export_schema.md)

# Using an AWS KMS key to encrypt your metrics exports
<a name="storage_lens_encrypt_permissions"></a>

To grant Amazon S3 Storage Lens permission to encrypt your metrics exports by using a customer managed key, you must use a key policy. To update your key policy so that you can use a KMS key to encrypt your S3 Storage Lens metrics exports, follow these steps. 

**To grant S3 Storage Lens permissions to encrypt data by using your KMS key**

1. Sign into the AWS Management Console by using the AWS account that owns the customer managed key.

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the **Region selector** in the upper-right corner of the page.

1. In the left navigation pane, choose **Customer managed keys**. 

1. Under **Customer managed keys**, choose the key that you want to use to encrypt the metrics exports. AWS KMS keys are Region-specific and must be in the same Region as the metrics export destination S3 bucket.

1. Under **Key policy**, choose **Switch to policy view**. 

1. To update the key policy, choose **Edit**. 

1. Under **Edit key policy**, add the following key policy to the existing key policy. To use this policy, replace the `user input placeholders` with your information.

   ```
   {
       "Sid": "Allow Amazon S3 Storage Lens use of the KMS key",
       "Effect": "Allow",
       "Principal": {
           "Service": "storage-lens.s3.amazonaws.com"
       },
       "Action": [
           "kms:GenerateDataKey"
       ],
       "Resource": "*",
       "Condition": {
           "StringEquals": {
               "aws:SourceArn": "arn:aws:s3:us-east-1:source-account-id:storage-lens/your-dashboard-name",
               "aws:SourceAccount": "source-account-id"
           }
       }
   }
   ```

1. Choose **Save changes**. 

For more information about creating customer managed keys and using key policies, see the following topics in the *AWS Key Management Service Developer Guide*: 
+  [Create a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) 
+  [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) 

You can also use the AWS KMS [`PutKeyPolicy`](https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html) API operation to copy the key policy to the customer managed keys that you want to use to encrypt the metrics exports by using the REST API, AWS CLI, or SDKs.
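As a sketch of that programmatic approach, the following builds the statement from the procedure above and appends it to an existing key policy document before the document is applied with `PutKeyPolicy`. The account ID, Region, and dashboard name are hypothetical placeholders.

```python
import json

def add_storage_lens_statement(key_policy: dict, account_id: str,
                               region: str, dashboard_name: str) -> dict:
    # Append the Storage Lens statement to an existing key policy document.
    statement = {
        "Sid": "Allow Amazon S3 Storage Lens use of the KMS key",
        "Effect": "Allow",
        "Principal": {"Service": "storage-lens.s3.amazonaws.com"},
        "Action": ["kms:GenerateDataKey"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:SourceArn": (
                    f"arn:aws:s3:{region}:{account_id}"
                    f":storage-lens/{dashboard_name}"
                ),
                "aws:SourceAccount": account_id,
            }
        },
    }
    key_policy.setdefault("Statement", []).append(statement)
    return key_policy

# Placeholder account ID and dashboard name for illustration.
policy = {"Version": "2012-10-17", "Statement": []}
updated = add_storage_lens_statement(
    policy, "111122223333", "us-east-1", "example-dashboard")
# The updated document could then be applied with, for example, boto3's
# kms.put_key_policy(KeyId=key_id, PolicyName="default",
#                    Policy=json.dumps(updated)).
```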

# What is an S3 Storage Lens export manifest?
<a name="storage_lens_whatis_metrics_export_manifest"></a>

Depending on the amount of data aggregated, S3 Storage Lens daily metrics exports to general purpose buckets might be split into multiple files. The manifest file `manifest.json` describes where the metrics export files for that day are located. Whenever a new export is delivered, it's accompanied by a new manifest. The manifest provides metadata and other basic information about the export. 

The manifest information includes the following properties:
+  `sourceAccountId` – The account ID of the configuration owner.
+  `configId` – A unique identifier for the dashboard.
+  `destinationBucket` – The destination bucket Amazon Resource Name (ARN) that the metrics export is placed in.
+  `reportVersion` – The version of the export.
+  `reportDate` – The date of the report.
+  `reportFormat` – The format of the report.
+  `reportSchema` – The schema of the report.
+  `reportFiles` – The actual list of the export report files that are in the destination bucket.

Manifest destination path example:

```
user-defined-prefix/StorageLens/111122223333/example-dashboard-configuration-id/V_1/manifests/dt=2025-03-18/manifest.json
```

The following example shows a `manifest.json` file for a CSV-formatted Storage Lens default metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",  
   "reportVersion": "V_1",  
   "reportDate": "2025-07-15",  
   "reportFormat": "CSV",  
   "reportSchema": "version_number,configuration_id,report_date,aws_account_number,aws_region,storage_class,record_type,record_value,bucket_name,metric_name,metric_value",  
   "reportFiles": [  
        {  
            "key": "DestinationPrefix/StorageLens/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-07-15/12345678-1234-1234-1234-123456789012.csv",  
            "size": 1603959,  
            "md5Checksum": "2177e775870def72b8d84febe1ad3574"  
        }  
   ]  
}
```
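A manifest like the example above can be parsed to locate the day's report files before downloading them. The following is a minimal sketch; the manifest text is inlined for illustration (with a shortened, hypothetical report key), whereas in practice you would read `manifest.json` from the destination bucket.

```python
import json

# Inlined manifest text for illustration; the report key below is a
# hypothetical example following the destination-path layout shown earlier.
manifest_text = """
{
  "sourceAccountId": "111122223333",
  "configId": "example-dashboard-configuration-id",
  "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
  "reportVersion": "V_1",
  "reportDate": "2025-07-15",
  "reportFormat": "CSV",
  "reportSchema": "version_number,configuration_id,report_date,aws_account_number,aws_region,storage_class,record_type,record_value,bucket_name,metric_name,metric_value",
  "reportFiles": [
    {
      "key": "DestinationPrefix/StorageLens/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-07-15/example.csv",
      "size": 1603959,
      "md5Checksum": "2177e775870def72b8d84febe1ad3574"
    }
  ]
}
"""

manifest = json.loads(manifest_text)
# The destinationBucket field is an ARN; the bucket name is its last component.
bucket = manifest["destinationBucket"].rpartition(":")[2]
report_keys = [f["key"] for f in manifest["reportFiles"]]
```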

The following example shows a `manifest.json` file for a CSV-formatted Storage Lens expanded prefixes metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",   
   "reportVersion": "V_1",  
   "reportDate": "2025-11-03",  
   "reportFormat": "CSV",  
   "reportSchema": "version_number,configuration_id,report_date,aws_account_number,aws_region,storage_class,record_type,record_value,bucket_name,metric_name,metric_value",  
   "reportFiles": [  
        {  
            "key": "DestinationPrefix/StorageLensExpandedPrefixes/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-11-03/EXAMPLE1234-56ab-78cd-90ef-EXAMPLE11111.csv",  
            "size": 1603959,  
            "md5Checksum": "2177e775870def72b8d84febe1ad3574"  
        }  
      ]  
}
```

The following example shows a `manifest.json` file for a Parquet-formatted Storage Lens default metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",  
   "reportVersion": "V_1",  
   "reportDate": "2025-11-03",  
   "reportFormat": "Parquet",  
   "reportSchema": "message s3.storage.lens { required string version_number; required string configuration_id; required string report_date; required string aws_account_number; required string aws_region; required string storage_class; required string record_type; required string record_value; required string bucket_name; required string metric_name; required long metric_value; }",  
   "reportFiles": [  
      {  
         "key": "DestinationPrefix/StorageLens/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-11-03/bd23de7c-b46a-4cf4-bcc5-b21aac5be0f5.par",  
         "size": 14714,  
         "md5Checksum": "b5c741ee0251cd99b90b3e8eff50b944"  
      }  
   ]  
}
```

The following example shows a `manifest.json` file for a Parquet-formatted Storage Lens expanded prefixes metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",  
   "reportVersion": "V_1",  
   "reportDate": "2025-11-03",  
   "reportFormat": "Parquet",  
   "reportSchema": "message s3.storage.lens { required string version_number; required string configuration_id; required string report_date; required string aws_account_number; required string aws_region; required string storage_class; required string record_type; required string record_value; required string bucket_name; required string metric_name; required long metric_value; }",  
   "reportFiles": [  
      {  
         "key": "DestinationPrefix/StorageLensExpandedPrefixes/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-11-03/bd23de7c-b46a-4cf4-bcc5-b21aac5be0f5.par",  
         "size": 14714,  
         "md5Checksum": "b5c741ee0251cd99b90b3e8eff50b944"  
      }  
   ]  
}
```

You can configure your metrics export to be generated as part of your dashboard configuration in the Amazon S3 console or by using the Amazon S3 REST API, AWS CLI, and SDKs.
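When you configure the export programmatically, the destination is described by a `DataExport` section of the Storage Lens configuration. The following is a hedged sketch of that section's shape as used by the S3 Control API; the account ID, bucket ARN, prefix, and KMS key ARN are hypothetical placeholders.

```python
# Hypothetical placeholders throughout; a dict like this would be attached as
# the "DataExport" member of a StorageLensConfiguration before calling the
# S3 Control PutStorageLensConfiguration operation.
data_export = {
    "S3BucketDestination": {
        "AccountId": "111122223333",
        "Arn": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
        "Format": "CSV",  # or "Parquet"
        "OutputSchemaVersion": "V_1",
        "Prefix": "DestinationPrefix",
        "Encryption": {
            "SSEKMS": {
                "KeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
            }
        },
    }
}
```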

# Understanding the Amazon S3 Storage Lens export schemas
<a name="storage_lens_understanding_metrics_export_schema"></a>

S3 Storage Lens export schemas vary depending on your export destination. Choose the appropriate schema based on whether you're exporting to S3 general purpose buckets or S3 tables.

**Topics**
+ [Export schema for S3 general purpose buckets](#storage_lens_general_purpose_bucket_schema)
+ [Export schemas for S3 tables](#storage_lens_s3_tables_schema)

## Export schema for S3 general purpose buckets
<a name="storage_lens_general_purpose_bucket_schema"></a>

The following table contains the schema of your S3 Storage Lens metrics export when exporting to S3 general purpose buckets.


| Attribute name  | Data type | Column name | Description | 
| --- | --- | --- | --- | 
|  VersionNumber  | String |  `version_number`  | The version of the S3 Storage Lens metrics being used. | 
|  ConfigurationId  | String |  `configuration_id`  | The configuration ID of your S3 Storage Lens configuration. | 
|  ReportDate  | String  |  `report_date`  | The date that the metrics were tracked. | 
|  AwsAccountNumber  |  String  |  `aws_account_number`  | Your AWS account number. | 
|  AwsRegion  |  String  |  `aws_region`  | The AWS Region for which the metrics are being tracked. | 
|  StorageClass  |  String  |  `storage_class`  | The storage class of the bucket in question. | 
|  RecordType  |  ENUM  |  `record_type`  |  The type of artifact that is being reported (ACCOUNT, BUCKET, or PREFIX).  | 
|  RecordValue  |  String  |  `record_value`  | The value of the RecordType artifact. The `record_value` is URL-encoded. | 
|  BucketName  |  String  |  `bucket_name`  | The name of the bucket that is being reported. | 
|  MetricName  |  String  |  `metric_name`  | The name of the metric that is being reported. | 
|  MetricValue  |  Long  |  `metric_value`  | The value of the metric that is being reported. | 
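The export files follow the column order in the schema above. The following minimal sketch reads fabricated sample rows into dictionaries keyed by those columns (assuming the files carry no header row; adjust if yours include one). The bucket and metric values are illustrative only.

```python
import csv
import io

# Column names taken from the export schema above.
FIELDS = ["version_number", "configuration_id", "report_date",
          "aws_account_number", "aws_region", "storage_class",
          "record_type", "record_value", "bucket_name",
          "metric_name", "metric_value"]

# Fabricated sample row for illustration; record_value is empty for
# bucket-level records.
sample = io.StringIO(
    "V_1,example-dashboard,2025-07-15,111122223333,us-east-1,STANDARD,"
    "BUCKET,,amzn-s3-demo-bucket,StorageBytes,1024\n"
)

rows = [dict(zip(FIELDS, r)) for r in csv.reader(sample)]
# Index bucket-level metrics by (bucket, metric) for easy lookup.
bucket_storage = {(r["bucket_name"], r["metric_name"]): int(r["metric_value"])
                  for r in rows if r["record_type"] == "BUCKET"}
```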

### Example of an S3 Storage Lens metrics export
<a name="storage_lens_sample_metrics_export"></a>

The following is an example of an S3 Storage Lens metrics export based on this schema. 

**Note**  
You can identify metrics for Storage Lens groups by looking for the `STORAGE_LENS_GROUP_BUCKET` or `STORAGE_LENS_GROUP_ACCOUNT` values in the `record_type` column. The `record_value` column will display the Amazon Resource Name (ARN) for the Storage Lens group, for example, `arn:aws:s3:us-east-1:123456789012:storage-lens-group/slg-1`. 

![\[An example S3 Storage Lens metrics export file.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/sample_storage_lens_export.png)


The following is an example of an S3 Storage Lens metrics export with Storage Lens groups data.

![\[An example S3 Storage Lens metrics export file with Storage Lens groups data.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/StorageLensGroups_metricsexport.png)


## Export schemas for S3 tables
<a name="storage_lens_s3_tables_schema"></a>

When exporting S3 Storage Lens metrics to S3 tables, the data is organized into three separate table schemas: storage metrics, bucket property metrics, and activity metrics.

**Topics**
+ [Storage metrics table schema](#storage_lens_s3_tables_storage_metrics)
+ [Bucket property metrics table schema](#storage_lens_s3_tables_bucket_property_metrics)
+ [Activity metrics table schema](#storage_lens_s3_tables_activity_metrics)

### Storage metrics table schema
<a name="storage_lens_s3_tables_storage_metrics"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\$1number  | string | Version identifier of the schema of the table | 
|  configuration\$1id  | string | S3 Storage Lens configuration name | 
|  report\$1time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\$1account\$1id  | string | Account id the entry refers to | 
|  aws\$1region  | string | Region | 
|  storage\$1class  | string | Storage Class | 
|  record\$1type  | string | Type of record, related to what is the level of aggregation of data. Values: ACCOUNT, BUCKET, PREFIX, LENS GROUP.  | 
|  record\$1value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix | 
|  bucket\$1name  | string | Bucket name | 
|  object\$1count  | long | Number of objects stored for the current referenced item | 
|  storage\$1bytes  | DECIMAL(38,0) | Number of bytes stored for the current referenced item | 
|  bucket\$1key\$1sse\$1kms\$1object\$1count  | long | Number of objects encrypted with a customer managed key stored for the current referenced item | 
|  bucket\$1key\$1sse\$1kms\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes encrypted with a customer managed key stored for the current referenced item | 
|  current\$1version\$1object\$1count  | long | Number of current version objects stored for the current referenced item | 
|  current\$1version\$1storage\$1bytes  | DECIMAL(38,0) | Number of current version bytes stored for the current referenced item | 
|  delete\$1marker\$1object\$1count  | long | Number of delete marker objects stored for the current referenced item | 
|  delete\$1marker\$1storage\$1bytes  | DECIMAL(38,0) | Number of delete marker bytes stored for the current referenced item | 
|  encrypted\$1object\$1count  | long | Number of encrypted objects stored for the current referenced item | 
|  encrypted\$1storage\$1bytes  | DECIMAL(38,0) | Number of encrypted bytes stored for the current referenced item | 
|  incomplete\$1mpu\$1object\$1older\$1than\$17\$1days\$1count  | long | Number of incomplete multipart upload objects older than 7 days stored for the current referenced item | 
|  incomplete\$1mpu\$1storage\$1older\$1than\$17\$1days\$1bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes stored older than 7 days for the current referenced item | 
|  incomplete\$1mpu\$1object\$1count  | long | Number of incomplete multipart upload objects stored for the current referenced item | 
|  incomplete\$1mpu\$1storage\$1bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes stored for the current referenced item | 
|  non\$1current\$1version\$1object\$1count  | long | Number of non-current version objects stored for the current referenced item | 
|  non\$1current\$1version\$1storage\$1bytes  | DECIMAL(38,0) | Number of non-current version bytes stored for the current referenced item | 
|  object\$1lock\$1enabled\$1object\$1count  | long | Number of objects stored for for objects with lock enabled in the current referenced item | 
|  object\$1lock\$1enabled\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes stored for objects with lock enabled in the current referenced item | 
|  replicated\$1object\$1count  | long | Number of objects replicated for the current referenced item | 
|  replicated\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes replicated for the current referenced item | 
|  replicated\$1object\$1source\$1count  | long | Number of objects replicated as source stored for the current referenced item | 
|  replicated\$1storage\$1source\$1bytes  | DECIMAL(38,0) | Number of bytes replicated as source for the current referenced item | 
|  sse\_kms\_object\_count  | long | Number of objects encrypted with SSE-KMS stored for the current referenced item | 
|  sse\_kms\_storage\_bytes  | DECIMAL(38,0) | Number of bytes encrypted with SSE-KMS stored for the current referenced item | 
|  object\_0kb\_count  | long | Number of objects with sizes equal to 0KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_0kb\_to\_128kb\_count  | long | Number of objects with sizes greater than 0KB and less than or equal to 128KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_128kb\_to\_256kb\_count  | long | Number of objects with sizes greater than 128KB and less than or equal to 256KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_256kb\_to\_512kb\_count  | long | Number of objects with sizes greater than 256KB and less than or equal to 512KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_512kb\_to\_1mb\_count  | long | Number of objects with sizes greater than 512KB and less than or equal to 1MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_1mb\_to\_2mb\_count  | long | Number of objects with sizes greater than 1MB and less than or equal to 2MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_2mb\_to\_4mb\_count  | long | Number of objects with sizes greater than 2MB and less than or equal to 4MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_4mb\_to\_8mb\_count  | long | Number of objects with sizes greater than 4MB and less than or equal to 8MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_8mb\_to\_16mb\_count  | long | Number of objects with sizes greater than 8MB and less than or equal to 16MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_16mb\_to\_32mb\_count  | long | Number of objects with sizes greater than 16MB and less than or equal to 32MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_32mb\_to\_64mb\_count  | long | Number of objects with sizes greater than 32MB and less than or equal to 64MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_64mb\_to\_128mb\_count  | long | Number of objects with sizes greater than 64MB and less than or equal to 128MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_128mb\_to\_256mb\_count  | long | Number of objects with sizes greater than 128MB and less than or equal to 256MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_256mb\_to\_512mb\_count  | long | Number of objects with sizes greater than 256MB and less than or equal to 512MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_512mb\_to\_1gb\_count  | long | Number of objects with sizes greater than 512MB and less than or equal to 1GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_1gb\_to\_2gb\_count  | long | Number of objects with sizes greater than 1GB and less than or equal to 2GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_2gb\_to\_4gb\_count  | long | Number of objects with sizes greater than 2GB and less than or equal to 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_larger\_than\_4gb\_count  | long | Number of objects with sizes greater than 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
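If you export these metrics to a table bucket, a query engine such as Amazon Athena can aggregate the columns in the schema above. The following sketch composes a query that surfaces the buckets carrying the most incomplete multipart upload bytes older than 7 days. The table name (`storage_metrics`) and the `record_type` filter value are illustrative assumptions based on this schema, not confirmed names; substitute the names registered for your own export.

```python
# Sketch: compose a SQL query over the storage metrics table schema above.
# "storage_metrics" is an assumed table name -- substitute the table name
# registered for your S3 Storage Lens export in your table bucket.

def incomplete_mpu_query(table: str = "storage_metrics", limit: int = 10) -> str:
    """Return SQL listing buckets with the most stale incomplete-MPU bytes."""
    return (
        "SELECT bucket_name, "
        "SUM(incomplete_mpu_storage_older_than_7_days_bytes) AS stale_mpu_bytes "
        f"FROM {table} "
        "WHERE record_type = 'BUCKET' "  # bucket-level rows only (assumed value)
        "GROUP BY bucket_name "
        "ORDER BY stale_mpu_bytes DESC "
        f"LIMIT {limit}"
    )

print(incomplete_mpu_query())
```

The same pattern works for any of the byte or count columns above; only the selected column and the `ORDER BY` clause change.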

### Bucket property metrics table schema
<a name="storage_lens_s3_tables_bucket_property_metrics"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\_number  | string | Version identifier of the schema of the table | 
|  configuration\_id  | string | S3 Storage Lens configuration name | 
|  report\_time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\_account\_id  | string | Account ID that the entry refers to | 
|  record\_type  | string | Type of record, indicating the level of data aggregation. Values: ACCOUNT, BUCKET, PREFIX, LENS GROUP.  | 
|  record\_value  | string | Disambiguator for record types that have more than one record under them; used to reference the prefix | 
|  aws\_region  | string | Region | 
|  storage\_class  | string | Storage Class | 
|  bucket\_name  | string | Bucket name | 
|  versioning\_enabled\_bucket\_count  | long | Number of buckets with versioning enabled for the current referenced item | 
|  mfa\_delete\_enabled\_bucket\_count  | long | Number of buckets with MFA delete enabled for the current referenced item | 
|  sse\_kms\_enabled\_bucket\_count  | long | Number of buckets with SSE-KMS enabled for the current referenced item | 
|  object\_ownership\_bucket\_owner\_enforced\_bucket\_count  | long | Number of buckets with Object Ownership bucket owner enforced for the current referenced item | 
|  object\_ownership\_bucket\_owner\_preferred\_bucket\_count  | long | Number of buckets with Object Ownership bucket owner preferred for the current referenced item | 
|  object\_ownership\_object\_writer\_bucket\_count  | long | Number of buckets with Object Ownership object writer for the current referenced item | 
|  transfer\_acceleration\_enabled\_bucket\_count  | long | Number of buckets with transfer acceleration enabled for the current referenced item | 
|  event\_notification\_enabled\_bucket\_count  | long | Number of buckets with event notification enabled for the current referenced item | 
|  transition\_lifecycle\_rule\_count  | long | Number of transition lifecycle rules for the current referenced item | 
|  expiration\_lifecycle\_rule\_count  | long | Number of expiration lifecycle rules for the current referenced item | 
|  non\_current\_version\_transition\_lifecycle\_rule\_count  | long | Number of noncurrent version transition lifecycle rules for the current referenced item | 
|  non\_current\_version\_expiration\_lifecycle\_rule\_count  | long | Number of noncurrent version expiration lifecycle rules for the current referenced item | 
|  abort\_incomplete\_multipart\_upload\_lifecycle\_rule\_count  | long | Number of abort incomplete multipart upload lifecycle rules for the current referenced item | 
|  expired\_object\_delete\_marker\_lifecycle\_rule\_count  | long | Number of expired object delete marker lifecycle rules for the current referenced item | 
|  same\_region\_replication\_rule\_count  | long | Number of Same-Region Replication rules for the current referenced item | 
|  cross\_region\_replication\_rule\_count  | long | Number of Cross-Region Replication rules for the current referenced item | 
|  same\_account\_replication\_rule\_count  | long | Number of same-account replication rules for the current referenced item | 
|  cross\_account\_replication\_rule\_count  | long | Number of cross-account replication rules for the current referenced item | 
|  invalid\_destination\_replication\_rule\_count  | long | Number of replication rules with an invalid destination for the current referenced item | 

### Activity metrics table schema
<a name="storage_lens_s3_tables_activity_metrics"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\_number  | string | Version identifier of the schema of the table | 
|  configuration\_id  | string | S3 Storage Lens configuration name | 
|  report\_time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\_account\_id  | string | Account ID that the entry refers to | 
|  aws\_region  | string | Region | 
|  storage\_class  | string | Storage Class | 
|  record\_type  | string | Type of record, indicating the level of data aggregation. Values: ACCOUNT, BUCKET, PREFIX.  | 
|  record\_value  | string | Disambiguator for record types that have more than one record under them; used to reference the prefix | 
|  bucket\_name  | string | Bucket name | 
|  all\_request\_count  | long | Number of all requests for the current referenced item | 
|  all\_sse\_kms\_encrypted\_request\_count  | long | Number of SSE-KMS encrypted requests for the current referenced item | 
|  all\_unsupported\_sig\_request\_count  | long | Number of unsupported signature version requests for the current referenced item | 
|  all\_unsupported\_tls\_request\_count  | long | Number of unsupported TLS requests for the current referenced item | 
|  bad\_request\_error\_400\_count  | long | Number of 400 bad request errors for the current referenced item | 
|  delete\_request\_count  | long | Number of delete requests for the current referenced item | 
|  downloaded\_bytes  | DECIMAL(38,0) | Number of downloaded bytes for the current referenced item | 
|  error\_4xx\_count  | long | Number of 4xx errors for the current referenced item | 
|  error\_5xx\_count  | long | Number of 5xx errors for the current referenced item | 
|  forbidden\_error\_403\_count  | long | Number of 403 forbidden errors for the current referenced item | 
|  get\_request\_count  | long | Number of get requests for the current referenced item | 
|  head\_request\_count  | long | Number of head requests for the current referenced item | 
|  internal\_server\_error\_500\_count  | long | Number of 500 internal server errors for the current referenced item | 
|  list\_request\_count  | long | Number of list requests for the current referenced item | 
|  not\_found\_error\_404\_count  | long | Number of 404 not found errors for the current referenced item | 
|  ok\_status\_200\_count  | long | Number of 200 OK requests for the current referenced item | 
|  partial\_content\_status\_206\_count  | long | Number of 206 partial content requests for the current referenced item | 
|  post\_request\_count  | long | Number of post requests for the current referenced item | 
|  put\_request\_count  | long | Number of put requests for the current referenced item | 
|  select\_request\_count  | long | Number of select requests for the current referenced item | 
|  select\_returned\_bytes  | DECIMAL(38,0) | Number of bytes returned by select requests for the current referenced item | 
|  select\_scanned\_bytes  | DECIMAL(38,0) | Number of bytes scanned by select requests for the current referenced item | 
|  service\_unavailable\_error\_503\_count  | long | Number of 503 service unavailable errors for the current referenced item | 
|  uploaded\_bytes  | DECIMAL(38,0) | Number of uploaded bytes for the current referenced item | 
|  average\_first\_byte\_latency  | long | Average per-request time between when an S3 bucket receives a complete request and when it starts returning the response, measured over the past 24 hours | 
|  average\_total\_request\_latency  | long | Average elapsed per-request time between the first byte received and the last byte sent to an S3 bucket, measured over the past 24 hours | 
|  read\_0kb\_request\_count  | long | Number of GetObject requests with data sizes of 0KB, including both range-based requests and whole object requests | 
|  read\_0kb\_to\_128kb\_request\_count  | long | Number of GetObject requests with data sizes greater than 0KB and up to 128KB, including both range-based requests and whole object requests | 
|  read\_128kb\_to\_256kb\_request\_count  | long | Number of GetObject requests with data sizes greater than 128KB and up to 256KB, including both range-based requests and whole object requests | 
|  read\_256kb\_to\_512kb\_request\_count  | long | Number of GetObject requests with data sizes greater than 256KB and up to 512KB, including both range-based requests and whole object requests | 
|  read\_512kb\_to\_1mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 512KB and up to 1MB, including both range-based requests and whole object requests | 
|  read\_1mb\_to\_2mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 1MB and up to 2MB, including both range-based requests and whole object requests | 
|  read\_2mb\_to\_4mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 2MB and up to 4MB, including both range-based requests and whole object requests | 
|  read\_4mb\_to\_8mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 4MB and up to 8MB, including both range-based requests and whole object requests | 
|  read\_8mb\_to\_16mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 8MB and up to 16MB, including both range-based requests and whole object requests | 
|  read\_16mb\_to\_32mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 16MB and up to 32MB, including both range-based requests and whole object requests | 
|  read\_32mb\_to\_64mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 32MB and up to 64MB, including both range-based requests and whole object requests | 
|  read\_64mb\_to\_128mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 64MB and up to 128MB, including both range-based requests and whole object requests | 
|  read\_128mb\_to\_256mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 128MB and up to 256MB, including both range-based requests and whole object requests | 
|  read\_256mb\_to\_512mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 256MB and up to 512MB, including both range-based requests and whole object requests | 
|  read\_512mb\_to\_1gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 512MB and up to 1GB, including both range-based requests and whole object requests | 
|  read\_1gb\_to\_2gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 1GB and up to 2GB, including both range-based requests and whole object requests | 
|  read\_2gb\_to\_4gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 2GB and up to 4GB, including both range-based requests and whole object requests | 
|  read\_larger\_than\_4gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 4GB, including both range-based requests and whole object requests | 
|  write\_0kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes of 0KB | 
|  write\_0kb\_to\_128kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 0KB and up to 128KB | 
|  write\_128kb\_to\_256kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128KB and up to 256KB | 
|  write\_256kb\_to\_512kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256KB and up to 512KB | 
|  write\_512kb\_to\_1mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512KB and up to 1MB | 
|  write\_1mb\_to\_2mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1MB and up to 2MB | 
|  write\_2mb\_to\_4mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2MB and up to 4MB | 
|  write\_4mb\_to\_8mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4MB and up to 8MB | 
|  write\_8mb\_to\_16mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 8MB and up to 16MB | 
|  write\_16mb\_to\_32mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 16MB and up to 32MB | 
|  write\_32mb\_to\_64mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 32MB and up to 64MB | 
|  write\_64mb\_to\_128mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 64MB and up to 128MB | 
|  write\_128mb\_to\_256mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128MB and up to 256MB | 
|  write\_256mb\_to\_512mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256MB and up to 512MB | 
|  write\_512mb\_to\_1gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512MB and up to 1GB | 
|  write\_1gb\_to\_2gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1GB and up to 2GB | 
|  write\_2gb\_to\_4gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2GB and up to 4GB | 
|  write\_larger\_than\_4gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4GB | 
|  concurrent\_put\_503\_error\_count  | long | Number of 503 errors that are generated due to concurrent writes to the same object | 
|  cross\_region\_request\_count  | long | Number of requests that originate from a client in a different Region than the bucket's home Region | 
|  cross\_region\_transferred\_bytes  | DECIMAL(38,0) | Number of bytes that are transferred by calls from a different Region than the bucket's home Region | 
|  cross\_region\_without\_replication\_request\_count  | long | Number of requests that originate from a client in a different Region than the bucket's home Region, excluding cross-Region replication requests | 
|  cross\_region\_without\_replication\_transferred\_bytes  | DECIMAL(38,0) | Number of bytes that are transferred by calls from a different Region than the bucket's home Region, excluding cross-Region replication bytes | 
|  inregion\_request\_count  | long | Number of requests that originate from a client in the same Region as the bucket's home Region | 
|  inregion\_transferred\_bytes  | DECIMAL(38,0) | Number of bytes that are transferred by calls from the same Region as the bucket's home Region | 
|  unique\_objects\_accessed\_daily\_count  | long | Number of objects that were accessed at least once in the last 24 hours | 
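Because request counts and error counts live in the same rows, the activity table lends itself to derived ratios. The sketch below composes a query that ranks buckets by their 4xx error rate, dividing `error_4xx_count` by `all_request_count`. The table name (`activity_metrics`) and the `record_type` value are illustrative assumptions, not confirmed names.

```python
# Sketch: derive a per-bucket 4xx error rate from the activity metrics
# table schema above. "activity_metrics" is an assumed table name --
# substitute the name registered for your export.

def error_rate_query(table: str = "activity_metrics") -> str:
    """Return SQL ranking buckets by their 4xx error rate."""
    return (
        "SELECT bucket_name, "
        "CAST(SUM(error_4xx_count) AS DOUBLE) "
        "/ NULLIF(SUM(all_request_count), 0) AS error_4xx_rate "  # avoid divide-by-zero
        f"FROM {table} "
        "WHERE record_type = 'BUCKET' "  # bucket-level rows only (assumed value)
        "GROUP BY bucket_name "
        "ORDER BY error_4xx_rate DESC"
    )

print(error_rate_query())
```

Swapping in `error_5xx_count` or `not_found_error_404_count` gives the equivalent view for other error classes.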

# Monitor S3 Storage Lens metrics in CloudWatch
<a name="storage_lens_view_metrics_cloudwatch"></a>

You can publish S3 Storage Lens metrics to Amazon CloudWatch to create a unified view of your operational health in [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. In addition, CloudWatch API operations enable applications, including third-party providers, to access your S3 Storage Lens metrics. For more information about CloudWatch features, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html).

You can enable the CloudWatch publishing option for new or existing dashboard configurations by using the Amazon S3 console, Amazon S3 REST API, AWS CLI, and AWS SDKs. Dashboards that are upgraded to S3 Storage Lens advanced metrics and recommendations can use the CloudWatch publishing option. For S3 Storage Lens advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). No additional CloudWatch metrics publishing charges apply; however, other CloudWatch charges, such as dashboards, alarms, and API calls, do apply. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch. 

After you enable the CloudWatch publishing option, you can use the following CloudWatch features to monitor and analyze your S3 Storage Lens data:
+ [Dashboards](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-dashboards) – Use CloudWatch dashboards to create customized S3 Storage Lens dashboards. Share your CloudWatch dashboard across teams, with stakeholders, and with people who don't have direct access to your AWS account, including people external to your organization. 
+ [Alarms and triggered actions](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-alarms) – Configure alarms that watch metrics and take action when a threshold is breached. For example, you can configure an alarm that sends an Amazon SNS notification when the **Incomplete Multipart Upload Bytes** metric exceeds 1 GB for three consecutive days. 
+ [Anomaly detection](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-alarms) – Enable anomaly detection to continuously analyze metrics, determine normal baselines, and surface anomalies. You can create an anomaly detection alarm based on the expected value of a metric. For example, you can monitor anomalies for the **Object Lock Enabled Bytes** metric to detect unauthorized removal of Object Lock settings.
+ [Metric math](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-metric-math) – You can also use metric math to query multiple S3 Storage Lens metrics and use math expressions to create new time series based on these metrics. For example, you can create a new metric to get the average object size by dividing `StorageBytes` by `ObjectCount`.
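The metric math example above can be sketched as a CloudWatch `GetMetricData` request. The parameter structure below follows the CloudWatch API; the dimension values are placeholders, the period is the required 86,400 seconds, and the only valid statistic for S3 Storage Lens metrics is Average.

```python
from datetime import datetime, timedelta, timezone

# Placeholder dimension values -- replace with your configuration's values.
DIMS = [
    {"Name": "configuration_id", "Value": "my-dashboard"},
    {"Name": "aws_account_number", "Value": "111122223333"},
    {"Name": "aws_region", "Value": "us-east-1"},
    {"Name": "storage_class", "Value": "STANDARD"},
]

def stat(metric_id: str, name: str) -> dict:
    """Build one MetricStat query in the AWS/S3/Storage-Lens namespace."""
    return {
        "Id": metric_id,
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/S3/Storage-Lens",
                "MetricName": name,
                "Dimensions": DIMS,
            },
            "Period": 86400,  # S3 Storage Lens metrics are daily
            "Stat": "Average",  # the only valid statistic for these metrics
        },
        "ReturnData": False,  # return only the math expression's result
    }

params = {
    "MetricDataQueries": [
        # m1 / m2 = average object size in bytes
        {"Id": "avg_object_size", "Expression": "m1 / m2", "Label": "AvgObjectSize"},
        stat("m1", "StorageBytes"),
        stat("m2", "ObjectCount"),
    ],
    "StartTime": datetime.now(timezone.utc) - timedelta(days=7),
    "EndTime": datetime.now(timezone.utc),
}

# To run against your account:
#   boto3.client("cloudwatch").get_metric_data(**params)
```

With `ReturnData` set to `False` on the component metrics, the response contains only the computed average-object-size time series.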

For more information about the CloudWatch publishing option for S3 Storage Lens metrics, see the following topics.

**Topics**
+ [S3 Storage Lens metrics and dimensions](storage-lens-cloudwatch-metrics-dimensions.md)
+ [Enabling CloudWatch publishing for S3 Storage Lens](storage-lens-cloudwatch-enable-publish-option.md)
+ [Working with S3 Storage Lens metrics in CloudWatch](storage-lens-cloudwatch-monitoring-cloudwatch.md)

# S3 Storage Lens metrics and dimensions
<a name="storage-lens-cloudwatch-metrics-dimensions"></a>

To send S3 Storage Lens metrics to CloudWatch, you must enable the CloudWatch publishing option within S3 Storage Lens advanced metrics. After advanced metrics are enabled, you can use [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) to monitor S3 Storage Lens metrics alongside other application metrics and create a unified view of your operational health. You can use dimensions to filter your S3 Storage Lens metrics in CloudWatch by organization, account, bucket, storage class, Region, and metrics configuration ID.

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch. 

For more information about S3 Storage Lens metrics and dimensions in CloudWatch, see the following topics.

**Topics**
+ [Metrics](#storage-lens-cloudwatch-metrics)
+ [Dimensions](#storage-lens-cloudwatch-dimensions)

## Metrics
<a name="storage-lens-cloudwatch-metrics"></a>

S3 Storage Lens metrics are available as metrics within CloudWatch. S3 Storage Lens metrics are published to the `AWS/S3/Storage-Lens` namespace. This namespace is only for S3 Storage Lens metrics. Amazon S3 bucket, request, and replication metrics are published to the `AWS/S3` namespace. 

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

In S3 Storage Lens, metrics are aggregated and stored only in the designated home Region. S3 Storage Lens metrics are also published to CloudWatch in the home Region that you specify in the S3 Storage Lens configuration. 

For a complete list of S3 Storage Lens metrics, including a list of those metrics available in CloudWatch, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

**Note**  
The valid statistic for S3 Storage Lens metrics in CloudWatch is Average. For more information about statistics in CloudWatch, see [CloudWatch statistics definitions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Statistics-definitions.html) in the *Amazon CloudWatch User Guide*.

### Granularity of S3 Storage Lens metrics in CloudWatch
<a name="storage-lens-cloudwatch-metrics-granularity"></a>

S3 Storage Lens offers metrics at organization, account, bucket, and prefix granularity. S3 Storage Lens publishes organization, account, and bucket-level S3 Storage Lens metrics to CloudWatch. Prefix-level S3 Storage Lens metrics are not available in CloudWatch.

For more information about the granularity of S3 Storage Lens metrics available in CloudWatch, see the following list:
+ **Organization** – Metrics aggregated across the member accounts in your organization. S3 Storage Lens publishes metrics for member accounts to CloudWatch in the management account. 
  + **Organization and account** – Metrics for the member accounts in your organization. 
  + **Organization and bucket** – Metrics for Amazon S3 buckets in the member accounts of your organization.
+ **Account** (Non-organization level) – Metrics aggregated across the buckets in your account. 
+ **Bucket** (Non-organization level) – Metrics for a specific bucket. In CloudWatch, S3 Storage Lens publishes these metrics to the AWS account that created the S3 Storage Lens configuration. S3 Storage Lens publishes these metrics only for non-organization configurations.

## Dimensions
<a name="storage-lens-cloudwatch-dimensions"></a>

When S3 Storage Lens sends data to CloudWatch, dimensions are attached to each metric. Dimensions are categories that describe the characteristics of metrics. You can use dimensions to filter the results that CloudWatch returns. 

For example, all S3 Storage Lens metrics in CloudWatch have the `configuration_id` dimension. You can use this dimension to differentiate between metrics associated with a specific S3 Storage Lens configuration. The `organization_id` dimension identifies organization-level metrics. For more information about dimensions in CloudWatch, see [Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Dimension) in the *Amazon CloudWatch User Guide*. 

Different dimensions are available for S3 Storage Lens metrics depending on the granularity of the metrics. For example, you can use the `organization_id` dimension to filter organization-level metrics by the AWS Organizations ID. However, you can't use this dimension for bucket and account-level metrics. For more information, see [Filtering metrics using dimensions](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-dimensions).

To see which dimensions are available for your S3 Storage Lens configuration, see the following table.


|  **Dimension**  |  **Description**  |  **Bucket**  | **Account** |  **Organization**  |  **Organization and bucket**  |  **Organization and account**  | 
| --- | --- | --- | --- | --- | --- | --- | 
| configuration\_id |  The dashboard name for the S3 Storage Lens configuration reported in the metrics  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| metrics\_version |  The version of the S3 Storage Lens metrics. The metrics version has a fixed value of `1.0`.  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| organization\_id |  The AWS Organizations ID for the metrics  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| aws\_account\_number | The AWS account that's associated with the metrics | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| aws\_region | The AWS Region for the metrics | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| bucket\_name |  The name of the S3 bucket that's reported in the metrics  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | 
| storage\_class |  The storage class for the bucket that's reported in the metrics  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| record\_type |  The granularity of the metrics: ORGANIZATION, ACCOUNT, or BUCKET  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) BUCKET | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) ACCOUNT | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) BUCKET | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) ACCOUNT | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) ORGANIZATION | 

# Enabling CloudWatch publishing for S3 Storage Lens
<a name="storage-lens-cloudwatch-enable-publish-option"></a>

You can publish S3 Storage Lens metrics to Amazon CloudWatch to create a unified view of your operational health in [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. In addition, CloudWatch API operations enable applications, including third-party providers, to access your S3 Storage Lens metrics. For more information about CloudWatch features, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html).

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

You can enable CloudWatch support for new or existing dashboard configurations by using the S3 console, Amazon S3 REST APIs, AWS CLI, and AWS SDKs. The CloudWatch publishing option is available for dashboards that are upgraded to S3 Storage Lens advanced metrics and recommendations. For S3 Storage Lens advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). No additional CloudWatch metrics publishing charges apply; however, other CloudWatch charges, such as dashboards, alarms, and API calls, do apply.

To enable the CloudWatch publishing option for S3 Storage Lens metrics, see the following topics.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch.   
Currently, S3 Storage Lens metrics cannot be consumed through CloudWatch streams. 
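As a rough illustration of the one-day period requirement, the following Python sketch builds a boto3-style `GetMetricData` query for a daily S3 Storage Lens metric. The configuration ID, account number, and Region values are placeholders, and the dimension set shown is illustrative only.

```python
def storage_lens_query(metric_name, configuration_id, account_id, region):
    """Build one CloudWatch GetMetricData query for a daily S3 Storage Lens metric."""
    return {
        "Id": "storageLensDaily",
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/S3/Storage-Lens",
                "MetricName": metric_name,
                "Dimensions": [
                    {"Name": "configuration_id", "Value": configuration_id},
                    {"Name": "aws_account_number", "Value": account_id},
                    {"Name": "aws_region", "Value": region},
                ],
            },
            "Period": 86400,  # S3 Storage Lens metrics are daily; shorter periods return no data
            "Stat": "Average",
        },
        "ReturnData": True,
    }

# Placeholder values for illustration only.
query = storage_lens_query("StorageBytes", "my-dashboard", "111122223333", "us-east-1")
# Pass [query] as MetricDataQueries to a CloudWatch get_metric_data call.
```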

## Using the S3 console
<a name="storage-lens-cloudwatch-enable-publish-console"></a>

When you update an S3 Storage Lens dashboard, you can't change the dashboard name or home Region. You also can't change the scope of the default dashboard, which is scoped to your entire account's storage.

**To update an S3 Storage Lens dashboard to enable CloudWatch publishing**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **S3 Storage Lens**, **Dashboards**.

1. Choose the dashboard that you want to edit, and then choose **Edit**.

1. Under **Metrics selection**, choose **Advanced metrics and recommendations**.

   Advanced metrics and recommendations are available for an additional charge. Advanced metrics and recommendations include a 15-month period for data queries, usage metrics aggregated at the prefix level, activity metrics aggregated by bucket, the CloudWatch publishing option, and contextual recommendations that help you optimize storage costs and apply data-protection best practices. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

1. Under **Select Advanced metrics and recommendations features**, select **CloudWatch publishing**.
**Important**  
If your configuration enables prefix aggregation for usage metrics, prefix-level metrics will not be published to CloudWatch. Only bucket, account, and organization-level S3 Storage Lens metrics are published to CloudWatch.

1. Choose **Save changes**.

**To create a new S3 Storage Lens dashboard that enables CloudWatch support**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **S3 Storage Lens**, **Dashboards**.

1. Choose **Create dashboard**. 

1. Under **General**, define the following configuration options:

   1. For **Dashboard name**, enter your dashboard name.

      Dashboard names must be fewer than 65 characters and must not contain special characters or spaces. You can't change the dashboard name after you create your dashboard.

   1. Choose the **Home Region** for your dashboard.

      Metrics for all Regions included in this dashboard scope are stored centrally in the designated home Region. In CloudWatch, S3 Storage Lens metrics are also available in the home Region. You can't change the home Region after you create your dashboard.

1. (Optional) To add tags, choose **Add tag** and enter the tag **Key** and **Value**.
**Note**  
You can add up to 50 tags to your dashboard configuration.

1. Define the scope for your configuration:

   1. If you're creating an organization-level configuration, choose the accounts to include in the configuration: **Include all accounts in your configuration** or **Limit the scope to your signed-in account**.
**Note**  
When you create an organization-level configuration that includes all accounts, you can include or exclude only Regions, not buckets.

   1. Choose the Regions and buckets that you want S3 Storage Lens to include in the dashboard configuration by doing the following:
      + To include all Regions, choose **Include Regions and buckets**.
      + To include specific Regions, clear **Include all Regions**. Under **Choose Regions to include**, choose the Regions that you want S3 Storage Lens to include in the dashboard.
      + To include specific buckets, clear **Include all buckets**. Under **Choose buckets to include**, choose the buckets that you want S3 Storage Lens to include in the dashboard. 
**Note**  
You can choose up to 50 buckets.

1. For **Metrics selection**, choose **Advanced metrics and recommendations**.

   For more information about advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

1. Under **Advanced metrics and recommendations features**, select the options that you want to enable:
   + **Advanced metrics** 
   + **CloudWatch publishing**
**Important**  
If you enable prefix aggregation for your S3 Storage Lens configuration, prefix-level metrics will not be published to CloudWatch. Only bucket, account, and organization-level S3 Storage Lens metrics are published to CloudWatch.
   + **Prefix aggregation**
**Note**  
For more information about advanced metrics and recommendations features, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. If you enabled **Advanced metrics**, select the **Advanced metrics categories** that you want to display in your S3 Storage Lens dashboard:
   + **Activity metrics**
   + **Detailed status code metrics**
   + **Advanced cost optimization metrics**
   + **Advanced data protection metrics**

   For more information about metrics categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

1. (Optional) Configure your metrics export.

   For more information about how to configure a metrics export, see [Using the S3 console](storage_lens_creating_dashboard.md#storage_lens_console_creating).

1. Choose **Create dashboard**.

## Using the AWS CLI
<a name="storage-lens-cloudwatch-enable-publish-cli"></a>

The following AWS CLI example enables the CloudWatch publishing option by using an S3 Storage Lens organization-level advanced metrics and recommendations configuration. To use this example, replace the `user input placeholders` with your own information.

```
aws s3control put-storage-lens-configuration --account-id=555555555555 --config-id=your-configuration-id --region=us-east-1 --storage-lens-configuration=file://./config.json

config.json
{
  "Id": "SampleS3StorageLensConfiguration",
  "AwsOrg": {
    "Arn": "arn:aws:organizations::123456789012:organization/o-abcdefgh"
  },
  "AccountLevel": {
    "ActivityMetrics": {
      "IsEnabled":true
    },
    "AdvancedCostOptimizationMetrics": {
      "IsEnabled":true
    },
    "AdvancedDataProtectionMetrics": {
      "IsEnabled":true
    },
    "DetailedStatusCodesMetrics": {
      "IsEnabled":true
    },
    "BucketLevel": {
      "ActivityMetrics": {
        "IsEnabled":true
      },
      "AdvancedCostOptimizationMetrics": {
        "IsEnabled":true
      },
      "DetailedStatusCodesMetrics": {
        "IsEnabled":true
      },
      "PrefixLevel":{
        "StorageMetrics":{
          "IsEnabled":true,
          "SelectionCriteria":{
            "MaxDepth":5,
            "MinStorageBytesPercentage":1.25,
            "Delimiter":"/"
          }
        }
      }
    }
  },
  "Exclude": {
    "Regions": [
      "eu-west-1"
    ],
    "Buckets": [
      "arn:aws:s3:::amzn-s3-demo-source-bucket"
    ]
  },
  "IsEnabled": true,
  "DataExport": {
    "S3BucketDestination": {
      "OutputSchemaVersion": "V_1",
      "Format": "CSV",
      "AccountId": "111122223333",
      "Arn": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
      "Prefix": "prefix-for-your-export-destination",
      "Encryption": {
        "SSES3": {}
      }
    },
    "CloudWatchMetrics": {
      "IsEnabled": true
    }
  }
}
```

## Using the AWS SDK for Java
<a name="storage-lens-cloudwatch-enable-publish-sdk"></a>

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.AccountLevel;
import com.amazonaws.services.s3control.model.ActivityMetrics;
import com.amazonaws.services.s3control.model.AdvancedCostOptimizationMetrics;
import com.amazonaws.services.s3control.model.AdvancedDataProtectionMetrics;
import com.amazonaws.services.s3control.model.BucketLevel;
import com.amazonaws.services.s3control.model.CloudWatchMetrics;
import com.amazonaws.services.s3control.model.DetailedStatusCodesMetrics;
import com.amazonaws.services.s3control.model.Format;
import com.amazonaws.services.s3control.model.Include;
import com.amazonaws.services.s3control.model.OutputSchemaVersion;
import com.amazonaws.services.s3control.model.PrefixLevel;
import com.amazonaws.services.s3control.model.PrefixLevelStorageMetrics;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationRequest;
import com.amazonaws.services.s3control.model.S3BucketDestination;
import com.amazonaws.services.s3control.model.SSES3;
import com.amazonaws.services.s3control.model.SelectionCriteria;
import com.amazonaws.services.s3control.model.StorageLensAwsOrg;
import com.amazonaws.services.s3control.model.StorageLensConfiguration;
import com.amazonaws.services.s3control.model.StorageLensDataExport;
import com.amazonaws.services.s3control.model.StorageLensDataExportEncryption;
import com.amazonaws.services.s3control.model.StorageLensTag;

import java.util.Arrays;
import java.util.List;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class CreateAndUpdateDashboard {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "Source Account ID";
        String exportAccountId = "Destination Account ID";
        String exportBucketArn = "arn:aws:s3:::amzn-s3-demo-destination-bucket"; // The destination bucket for your metrics export must be in the same Region as your S3 Storage Lens configuration.
        String awsOrgARN = "arn:aws:organizations::123456789012:organization/o-abcdefgh";
        Format exportFormat = Format.CSV;

        try {
            SelectionCriteria selectionCriteria = new SelectionCriteria()
                    .withDelimiter("/")
                    .withMaxDepth(5)
                    .withMinStorageBytesPercentage(10.0);
            PrefixLevelStorageMetrics prefixStorageMetrics = new PrefixLevelStorageMetrics()
                    .withIsEnabled(true)
                    .withSelectionCriteria(selectionCriteria);
            BucketLevel bucketLevel = new BucketLevel()
                    .withActivityMetrics(new ActivityMetrics().withIsEnabled(true))
                    .withAdvancedCostOptimizationMetrics(new AdvancedCostOptimizationMetrics().withIsEnabled(true))
                    .withAdvancedDataProtectionMetrics(new AdvancedDataProtectionMetrics().withIsEnabled(true))
                    .withDetailedStatusCodesMetrics(new DetailedStatusCodesMetrics().withIsEnabled(true))
                    .withPrefixLevel(new PrefixLevel().withStorageMetrics(prefixStorageMetrics));
            AccountLevel accountLevel = new AccountLevel()
                    .withActivityMetrics(new ActivityMetrics().withIsEnabled(true))
                    .withAdvancedCostOptimizationMetrics(new AdvancedCostOptimizationMetrics().withIsEnabled(true))
                    .withAdvancedDataProtectionMetrics(new AdvancedDataProtectionMetrics().withIsEnabled(true))
                    .withDetailedStatusCodesMetrics(new DetailedStatusCodesMetrics().withIsEnabled(true))
                    .withBucketLevel(bucketLevel);

            Include include = new Include()
                    .withBuckets(Arrays.asList("arn:aws:s3:::amzn-s3-demo-bucket"))
                    .withRegions(Arrays.asList("us-west-2"));

            StorageLensDataExportEncryption exportEncryption = new StorageLensDataExportEncryption()
                    .withSSES3(new SSES3());
            S3BucketDestination s3BucketDestination = new S3BucketDestination()
                    .withAccountId(exportAccountId)
                    .withArn(exportBucketArn)
                    .withEncryption(exportEncryption)
                    .withFormat(exportFormat)
                    .withOutputSchemaVersion(OutputSchemaVersion.V_1)
                    .withPrefix("Prefix");
            CloudWatchMetrics cloudWatchMetrics = new CloudWatchMetrics()
                    .withIsEnabled(true);
            StorageLensDataExport dataExport = new StorageLensDataExport()
                    .withCloudWatchMetrics(cloudWatchMetrics)
                    .withS3BucketDestination(s3BucketDestination);

            StorageLensAwsOrg awsOrg = new StorageLensAwsOrg()
                    .withArn(awsOrgARN);

            StorageLensConfiguration configuration = new StorageLensConfiguration()
                    .withId(configurationId)
                    .withAccountLevel(accountLevel)
                    .withInclude(include)
                    .withDataExport(dataExport)
                    .withAwsOrg(awsOrg)
                    .withIsEnabled(true);

            List<StorageLensTag> tags = Arrays.asList(
                    new StorageLensTag().withKey("key-1").withValue("value-1"),
                    new StorageLensTag().withKey("key-2").withValue("value-2")
            );

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfiguration(new PutStorageLensConfigurationRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withStorageLensConfiguration(configuration)
                    .withTags(tags)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

## Using the REST API
<a name="storage-lens-cloudwatch-enable-publish-api"></a>

To enable the CloudWatch publishing option by using the Amazon S3 REST API, you can use [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html).

**Next steps**  
After you enable the CloudWatch publishing option, you can access your S3 Storage Lens metrics in CloudWatch. You can also use CloudWatch features to monitor and analyze your S3 Storage Lens data. For more information, see the following topics:
+ [S3 Storage Lens metrics and dimensions](storage-lens-cloudwatch-metrics-dimensions.md)
+ [Working with S3 Storage Lens metrics in CloudWatch](storage-lens-cloudwatch-monitoring-cloudwatch.md)

# Working with S3 Storage Lens metrics in CloudWatch
<a name="storage-lens-cloudwatch-monitoring-cloudwatch"></a>

You can publish S3 Storage Lens metrics to Amazon CloudWatch to create a unified view of your operational health in [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. In addition, CloudWatch API operations enable applications, including third-party providers, to access your S3 Storage Lens metrics. For more information about CloudWatch features, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html).

You can enable the CloudWatch publishing option for new or existing dashboard configurations by using the Amazon S3 console, Amazon S3 REST APIs, AWS CLI, and AWS SDKs. The CloudWatch publishing option is available for dashboards that are upgraded to S3 Storage Lens advanced metrics and recommendations. For S3 Storage Lens advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). No additional CloudWatch metrics publishing charges apply; however, other CloudWatch charges, such as dashboards, alarms, and API calls, do apply. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch.   
Currently, S3 Storage Lens metrics cannot be consumed through CloudWatch streams. 

For more information about working with S3 Storage Lens metrics in CloudWatch, see the following topics.

**Topics**
+ [Working with CloudWatch dashboards](#storage-lens-cloudwatch-monitoring-cloudwatch-dashboards)
+ [Setting alarms, triggering actions, and using anomaly detection](#storage-lens-cloudwatch-monitoring-cloudwatch-alarms)
+ [Filtering metrics using dimensions](#storage-lens-cloudwatch-monitoring-cloudwatch-dimensions)
+ [Calculating new metrics with metric math](#storage-lens-cloudwatch-monitoring-cloudwatch-metric-math)
+ [Using search expressions in graphs](#storage-lens-cloudwatch-monitoring-cloudwatch-search-expressions)

## Working with CloudWatch dashboards
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-dashboards"></a>

You can use CloudWatch dashboards to monitor S3 Storage Lens metrics alongside other application metrics and create a unified view of your operational health. Dashboards are customizable home pages in the CloudWatch console that you can use to monitor your resources in a single view. 

CloudWatch has broad permissions control that doesn't support limiting access to a specific set of metrics or dimensions. Users in your account or organization who have access to CloudWatch will have access to metrics for all S3 Storage Lens configurations where the CloudWatch support option is enabled. You can't manage permissions for specific dashboards as you can in S3 Storage Lens. For more information about CloudWatch permissions, see [Managing access permissions to your CloudWatch resources](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-access-control-overview-cw.html) in the *Amazon CloudWatch User Guide*.

For more information about using CloudWatch dashboards and configuring permissions, see [Using Amazon CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) and [Sharing CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html) in the *Amazon CloudWatch User Guide*.

## Setting alarms, triggering actions, and using anomaly detection
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-alarms"></a>

You can configure CloudWatch alarms that watch S3 Storage Lens metrics in CloudWatch and take action when a threshold is breached. For example, you can configure an alarm that sends an Amazon SNS notification when the **Incomplete Multipart Upload Bytes** metric exceeds 1 GB for three consecutive days.
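The incomplete-multipart-upload alarm described above can be sketched as boto3-style `put_metric_alarm` parameters. This is a minimal sketch, not a definitive configuration; the alarm name, configuration ID, and SNS topic ARN are placeholders, and a real alarm would typically include additional dimensions.

```python
# Notify an SNS topic when incomplete multipart upload bytes stay above
# 1 GiB for three consecutive daily data points.
alarm_params = {
    "AlarmName": "storage-lens-incomplete-mpu-bytes",   # placeholder name
    "Namespace": "AWS/S3/Storage-Lens",
    "MetricName": "IncompleteMultipartUploadStorageBytes",
    "Dimensions": [{"Name": "configuration_id", "Value": "my-dashboard"}],  # placeholder
    "Statistic": "Average",
    "Period": 86400,            # S3 Storage Lens metrics are daily
    "EvaluationPeriods": 3,     # three consecutive days
    "Threshold": float(2**30),  # 1 GiB, in bytes
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "notBreaching",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:my-topic"],  # placeholder ARN
}
# Pass as keyword arguments to cloudwatch.put_metric_alarm(**alarm_params).
```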

You can also enable anomaly detection to continuously analyze your S3 Storage Lens metrics, determine normal baselines, and surface anomalies. You can create an anomaly detection alarm based on a metric's expected value. For example, you can monitor anomalies for the **Object Lock Enabled Bytes** metric to detect unauthorized removal of Object Lock settings.

For more information and examples, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) and [Creating an alarm from a metric on a graph](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create_alarm_metric_graph.html) in the *Amazon CloudWatch User Guide*.

## Filtering metrics using dimensions
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-dimensions"></a>

You can use dimensions to filter S3 Storage Lens metrics in the CloudWatch console. For example, you can filter by `configuration_id`, `aws_account_number`, `aws_region`, `bucket_name`, and more.

S3 Storage Lens supports multiple dashboard configurations per account. This means that different configurations can include the same bucket. When these metrics are published to CloudWatch, the bucket will have duplicate metrics within CloudWatch. To view metrics only for a specific S3 Storage Lens configuration in CloudWatch, you can use the `configuration_id` dimension. When you filter by `configuration_id`, you see only the metrics that are associated with the configuration that you identify.
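As a sketch, the following parameters for a boto3-style `list_metrics` call would return only the metrics published by a single S3 Storage Lens configuration (`my-dashboard` is a placeholder dashboard name):

```python
# Restrict the metric listing to one S3 Storage Lens configuration so that
# buckets included in several configurations aren't shown twice.
list_params = {
    "Namespace": "AWS/S3/Storage-Lens",
    "Dimensions": [{"Name": "configuration_id", "Value": "my-dashboard"}],
}
# Pass as keyword arguments to cloudwatch.list_metrics(**list_params).
```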

For more information about filtering by configuration ID, see [Searching for available metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/finding_metrics_with_cloudwatch.html) in the *Amazon CloudWatch User Guide*.

## Calculating new metrics with metric math
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-metric-math"></a>

You can use metric math to query multiple S3 Storage Lens metrics and use math expressions to create new time series based on these metrics. For example, you can create a new metric for unencrypted objects by subtracting Encrypted Objects from Object Count. You can also create a metric to get the average object size by dividing `StorageBytes` by `ObjectCount`, or the number of bytes accessed in one day by dividing `BytesDownloaded` by `StorageBytes`.
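The average-object-size calculation can be sketched as boto3-style `MetricDataQueries` for `get_metric_data`. Dimensions are omitted here for brevity; a real query would include `configuration_id` and the other required dimensions.

```python
def daily_metric(query_id, metric_name):
    """One daily S3 Storage Lens metric used as an input to a math expression."""
    return {
        "Id": query_id,
        "MetricStat": {
            "Metric": {"Namespace": "AWS/S3/Storage-Lens", "MetricName": metric_name},
            "Period": 86400,  # S3 Storage Lens metrics are daily
            "Stat": "Average",
        },
        "ReturnData": False,  # only the derived expression is returned
    }

queries = [
    daily_metric("m1", "StorageBytes"),
    daily_metric("m2", "ObjectCount"),
    {"Id": "avgObjectSize", "Expression": "m1 / m2",
     "Label": "Average object size (bytes)", "ReturnData": True},
]
# Pass queries as MetricDataQueries to a CloudWatch get_metric_data call.
```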

For more information, see [Using metric math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html) in the *Amazon CloudWatch User Guide*.

## Using search expressions in graphs
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-search-expressions"></a>

With S3 Storage Lens metrics, you can create a search expression. For example, you can create a search expression for all metrics that are named **IncompleteMultipartUploadStorageBytes** and add `SUM` to the expression. With this search expression, you can see your total incomplete multipart upload bytes across all dimensions of your storage in a single metric.

This example shows the syntax that you would use to create a search expression for all metrics named **IncompleteMultipartUploadStorageBytes**.

```
SUM(SEARCH('{AWS/S3/Storage-Lens,aws_account_number,aws_region,configuration_id,metrics_version,record_type,storage_class} MetricName="IncompleteMultipartUploadStorageBytes"', 'Average',86400))
```

For more information about this syntax, see [CloudWatch search expression syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html) in the *Amazon CloudWatch User Guide*. To create a CloudWatch graph with a search expression, see [Creating a CloudWatch graph with a search expression](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-search-expression.html) in the *Amazon CloudWatch User Guide*.

# Amazon S3 Storage Lens metrics use cases
<a name="storage-lens-use-cases"></a>

You can use your Amazon S3 Storage Lens dashboard to visualize insights and trends, flag outliers, and receive recommendations. S3 Storage Lens metrics are organized into categories that align with key use cases. You can use these metrics to do the following: 
+ Identify cost-optimization opportunities
+ Apply data-protection best practices
+ Apply access-management best practices
+ Improve the performance of application workloads

For example, with cost-optimization metrics, you can identify opportunities to reduce your Amazon S3 storage costs. You can identify buckets with multipart uploads that are more than 7 days old or buckets that are accumulating noncurrent versions.

Similarly, you can use data-protection metrics to identify buckets that aren't following data-protection best practices within your organization. For example, you can identify buckets that don’t use AWS Key Management Service keys (SSE-KMS) for default encryption or don't have S3 Versioning enabled. 

With S3 Storage Lens access-management metrics, you can identify bucket settings for S3 Object Ownership so that you can migrate access control list (ACL) permissions to bucket policies and disable ACLs.

If you have [S3 Storage Lens advanced metrics](storage_lens_basics_metrics_recommendations.md) enabled, you can use detailed status-code metrics to get counts for successful or failed requests that you can use to troubleshoot access or performance issues. 

With advanced metrics, you can also access additional cost-optimization and data-protection metrics that you can use to identify opportunities to further reduce your overall S3 storage costs and better align with best practices for protecting your data. For example, advanced cost-optimization metrics include lifecycle rule counts that you can use to identify buckets that don't have lifecycle rules to expire incomplete multipart uploads that are more than 7 days old. Advanced data-protection metrics include replication rule counts.

For more information about metrics categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of S3 Storage Lens metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

**Topics**
+ [Using Amazon S3 Storage Lens to optimize your storage costs](storage-lens-optimize-storage.md)
+ [Using S3 Storage Lens to protect your data](storage-lens-data-protection.md)
+ [Using S3 Storage Lens to audit Object Ownership settings](storage-lens-access-management.md)
+ [Using S3 Storage Lens metrics to improve performance](storage-lens-detailed-status-code.md)

# Using Amazon S3 Storage Lens to optimize your storage costs
<a name="storage-lens-optimize-storage"></a>

You can use S3 Storage Lens cost-optimization metrics to reduce the overall cost of your S3 storage. Cost-optimization metrics can help you confirm that you've configured Amazon S3 cost effectively and according to best practices. For example, you can identify the following cost-optimization opportunities: 
+ Buckets with incomplete multipart uploads older than 7 days
+ Buckets that are accumulating numerous noncurrent versions
+ Buckets that don't have lifecycle rules to abort incomplete multipart uploads
+ Buckets that don't have lifecycle rules to expire noncurrent object versions
+ Buckets that don't have lifecycle rules to transition objects to a different storage class

You can then use this data to add additional lifecycle rules to your buckets. 

The following examples show how you can use cost-optimization metrics in your S3 Storage Lens dashboard to optimize your storage costs.

**Topics**
+ [Identify your largest S3 buckets](#identify-largest-s3-buckets)
+ [Uncover cold Amazon S3 buckets](#uncover-cold-buckets)
+ [Locate incomplete multipart uploads](#locate-incomplete-mpu)
+ [Reduce the number of noncurrent versions retained](#reduce-noncurrent-versions-retained)
+ [Identify buckets that don't have lifecycle rules and review lifecycle rule counts](#identify-missing-lifecycle-rules)

## Identify your largest S3 buckets
<a name="identify-largest-s3-buckets"></a>

You pay for storing objects in S3 buckets. The rate that you're charged depends on your objects' sizes, how long you store the objects, and their storage classes. With S3 Storage Lens, you get a centralized view of all the buckets in your account. To see all the buckets in all of your organization's accounts, you can configure an AWS Organizations-level S3 Storage Lens dashboard. From this dashboard view, you can identify your largest buckets.

### Step 1: Identify your largest buckets
<a name="optimize-storage-identify-largest-buckets"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

   When the dashboard opens, you can see the latest date that S3 Storage Lens has collected metrics for. Your dashboard always loads to the latest date that has metrics available.

1. To see a ranking of your largest buckets by the **Total storage** metric for a selected date range, scroll down to the **Top N overview for *date*** section.

   You can toggle the sort order to show the smallest buckets. You can also adjust the **Metric** selection to rank your buckets by any of the available metrics. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations.
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. For more detailed insights about your buckets, scroll up to the top of the page, and then choose the **Bucket** tab. 

   On the **Bucket** tab, you can see details such as the recent growth rate, the average object size, the largest prefixes, and the number of objects.

### Step 2: Navigate to your buckets and investigate
<a name="optimize-storage-investigate"></a>

After you've identified your largest S3 buckets, you can navigate to each bucket within the S3 console to view the objects in the bucket, understand its associated workload, and identify its internal owners. You can contact the bucket owners to find out whether the growth is expected or whether the growth needs further monitoring and control.
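If you export your S3 Storage Lens metrics to a bucket in CSV format, you can also rank buckets outside the console. The following sketch assumes a simplified export with `bucket_name` and `storage_bytes` columns; a real Storage Lens metrics export has a more detailed schema, so treat the column names and bucket names here as hypothetical:

```python
import csv
import io

# Hypothetical, simplified rows from a Storage Lens CSV metrics export.
# A real export contains additional columns (record type, metric name, and so on).
export_csv = """bucket_name,storage_bytes
analytics-logs,52000000000
media-archive,870000000000
app-backups,120000000000
"""

def top_buckets_by_storage(csv_text, n=3):
    """Return the n largest buckets as (name, bytes) pairs, largest first."""
    rows = csv.DictReader(io.StringIO(csv_text))
    sized = [(row["bucket_name"], int(row["storage_bytes"])) for row in rows]
    return sorted(sized, key=lambda pair: pair[1], reverse=True)[:n]

print(top_buckets_by_storage(export_csv, n=2))
```

This mirrors the **Top N overview** ranking by the **Total storage** metric, but over your own exported data.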

## Uncover cold Amazon S3 buckets
<a name="uncover-cold-buckets"></a>

If you have [S3 Storage Lens advanced metrics](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection) enabled, you can use [activity metrics](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types) to understand how cold your S3 buckets are. A "cold" bucket is one whose storage is no longer accessed (or very rarely accessed). This lack of activity typically indicates that the bucket's objects aren't frequently accessed.

Activity metrics, such as **GET Requests** and **Download Bytes**, indicate how often your buckets are accessed each day. To understand the consistency of the access pattern and to spot buckets that are no longer being accessed at all, you can trend this data over several months. The **Retrieval rate** metric, which is computed as **Download bytes / Total storage**, indicates the proportion of storage in a bucket that is accessed daily.

**Note**  
Download bytes are counted each time an object is downloaded, so the same object downloaded multiple times during the day is counted multiple times.
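To make the **Retrieval rate** calculation concrete, the following sketch computes a daily retrieval rate from hypothetical download-bytes and total-storage values and flags buckets that look cold. The bucket names, byte counts, and the near-zero threshold are all illustrative assumptions:

```python
# Hypothetical daily metrics per bucket: (download_bytes, total_storage_bytes).
daily_metrics = {
    "media-archive": (0, 870_000_000_000),
    "analytics-logs": (4_100_000_000, 52_000_000_000),
}

def retrieval_rate(download_bytes, total_storage_bytes):
    """Retrieval rate = download bytes / total storage, as a percentage."""
    if total_storage_bytes == 0:
        return 0.0
    return 100.0 * download_bytes / total_storage_bytes

# Flag buckets whose retrieval rate is near zero (the threshold is an assumption).
cold = [name for name, (downloaded, total) in daily_metrics.items()
        if retrieval_rate(downloaded, total) < 0.01]
print(cold)  # buckets with no meaningful daily access
```

In practice you would trend this value over weeks or months, as described above, rather than judge a bucket from a single day.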

**Prerequisite**  
To see activity metrics in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations** and then select **Activity metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Identify active buckets
<a name="storage-lens-identify-active-buckets"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. Choose the **Bucket** tab, and then scroll down to the **Bubble analysis by buckets for *date*** section.

   In the **Bubble analysis by buckets for *date*** section, you can plot your buckets on multiple dimensions by using any three metrics to represent the **X-axis**, **Y-axis**, and **Size** of the bubble. 

1. To find buckets that have gone cold, for **X-axis**, **Y-axis**, and **Size**, choose the **Total storage**, **% retrieval rate**, and **Average object size** metrics.

1. In the **Bubble analysis by buckets for *date*** section, locate any buckets with retrieval rates of zero (or near zero) and a larger relative storage size, and choose the bubble that represents the bucket. 

   A box will appear with choices for more granular insights. Do one of the following:

   1. To update the **Bucket** tab to display metrics only for the selected bucket, choose **Drill down**, and then choose **Apply**. 

   1. To aggregate your bucket-level data by account, AWS Region, storage class, or bucket, choose **Analyze by**, and then make a choice for **Dimension**. For example, to aggregate by storage class, choose **Storage class** for **Dimension**.


   The **Bucket** tab of your dashboard updates to display data for your selected aggregation or filter. If you aggregated by storage class or another dimension, a new tab (for example, the **Storage class** tab) opens in your dashboard. 

### Step 2: Investigate cold buckets
<a name="storage-lens-investigate-buckets"></a>

From here, you can identify the owners of cold buckets in your account or organization and find out whether that storage is still needed. You can then optimize costs by configuring [lifecycle expiration configurations](object-lifecycle-mgmt.md) for these buckets or archiving the data in one of the [Amazon Glacier storage classes](https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html). 

To avoid the problem of cold buckets going forward, you can [automatically transition your data by using S3 Lifecycle configurations](lifecycle-configuration-examples.md) for your buckets, or you can enable [auto-archiving with S3 Intelligent-Tiering](archived-objects.md).

You can also use step 1 to identify hot buckets. Then, you can confirm that these buckets use the correct [S3 storage class](storage-class-intro.md) so that they serve their requests most effectively in terms of performance and cost.

## Locate incomplete multipart uploads
<a name="locate-incomplete-mpu"></a>

You can use multipart uploads to upload very large objects (up to 5 TB) as a set of parts for improved throughput and quicker recovery from network issues. In cases where the multipart upload process doesn't finish, the incomplete parts remain in the bucket (in an unusable state). These incomplete parts incur storage costs until the upload process is finished or the incomplete parts are removed. For more information, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md).

With S3 Storage Lens, you can identify the number of incomplete multipart upload bytes in your account or across your entire organization, including incomplete multipart uploads that are more than 7 days old. For a complete list of incomplete multipart upload metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md). 

As a best practice, we recommend configuring lifecycle rules to expire incomplete multipart uploads that are older than a specific number of days. When you create your lifecycle rule to expire incomplete multipart uploads, we recommend 7 days as a good starting point. 

### Step 1: Review overall trends for incomplete multipart uploads
<a name="locate-incomplete-mpu-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In the **Snapshot for *date*** section, under **Metrics categories**, choose **Cost optimization**.

   The **Snapshot for *date*** section updates to display **Cost optimization** metrics, which include **Incomplete multipart upload bytes greater than 7 days old**. 

   In any chart in your S3 Storage Lens dashboard, you can see metrics for incomplete multipart uploads. You can use these metrics to further assess the impact of incomplete multipart upload bytes on your storage, including their contribution to overall growth trends. You can also drill down to deeper levels of aggregation, using the **Account**, **AWS Region**, **Bucket**, or **Storage class** tabs for a deeper analysis of your data. For an example, see [Uncover cold Amazon S3 buckets](#uncover-cold-buckets).

### Step 2: Identify buckets that have the most incomplete multipart upload bytes but don't have lifecycle rules to abort incomplete multipart uploads
<a name="locate-incomplete-mpu-step2"></a>

**Prerequisite**  
To see the **Abort incomplete multipart upload lifecycle rule count** metric in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Advanced cost optimization metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. To identify specific buckets that are accumulating incomplete multipart uploads greater than 7 days old, go to the **Top N overview for *date*** section. 

   By default, the **Top N overview for *date*** section displays metrics for the top 3 buckets. You can increase or decrease the number of buckets in the **Top N** field. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. (This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations.) 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. For **Metric**, choose **Incomplete multipart upload bytes greater than 7 days old** in the **Cost optimization** category.

   Under **Top *number* buckets**, you can see the buckets with the most incomplete multipart upload storage bytes that are greater than 7 days old.

1. To view more detailed bucket-level metrics for incomplete multipart uploads, scroll to the top of the page, and then choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. For **Metrics categories**, select **Cost optimization**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Cost optimization** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific cost-optimization metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all cost-optimization metrics until only **Incomplete multipart upload bytes greater than 7 days old** and **Abort incomplete multipart upload lifecycle rule count** remain selected. 

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display bucket-level metrics for incomplete multipart uploads and lifecycle rule counts. You can use this data to identify buckets that have the most incomplete multipart upload bytes that are greater than 7 days old and are missing lifecycle rules to abort incomplete multipart uploads. Then, you can navigate to these buckets in the S3 console and add lifecycle rules to delete abandoned incomplete multipart uploads.

### Step 3: Add a lifecycle rule to delete incomplete multipart uploads after 7 days
<a name="locate-incomplete-mpu-step3"></a>

To automatically manage incomplete multipart uploads, you can use the S3 console to create a lifecycle configuration to expire incomplete multipart upload bytes from a bucket after a specified number of days. For more information, see [Configuring a bucket lifecycle configuration to delete incomplete multipart uploads](mpu-abort-incomplete-mpu-lifecycle-config.md).
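As a sketch of what such a rule looks like outside the console, the following builds the payload shape that the `PutBucketLifecycleConfiguration` API expects for aborting incomplete multipart uploads after 7 days. The rule ID and bucket name are placeholders:

```python
# Lifecycle configuration that aborts incomplete multipart uploads after
# 7 days, in the shape the PutBucketLifecycleConfiguration API expects.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "abort-incomplete-mpu-after-7-days",  # example rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix applies to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

# With boto3 (not run here), you would apply the configuration like this:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="amzn-s3-demo-bucket",  # placeholder bucket name
#     LifecycleConfiguration=lifecycle_configuration,
# )
print(lifecycle_configuration["Rules"][0]["AbortIncompleteMultipartUpload"])
```

The 7-day value matches the starting point recommended earlier in this section; adjust it to fit your workloads.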

## Reduce the number of noncurrent versions retained
<a name="reduce-noncurrent-versions-retained"></a>

When enabled, S3 Versioning retains multiple distinct copies of the same object that you can use to quickly recover data if an object is accidentally deleted or overwritten. If you've enabled S3 Versioning without configuring lifecycle rules to transition or expire noncurrent versions, a large number of noncurrent versions can accumulate, which can have storage-cost implications. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

### Step 1: Identify buckets with the most noncurrent object versions
<a name="reduce-noncurrent-versions-retained-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In the **Snapshot for *date*** section, under **Metric categories**, choose **Cost optimization**.

   The **Snapshot for *date*** section updates to display **Cost optimization** metrics, which include the metric for **% noncurrent version bytes**. The **% noncurrent version bytes** metric represents the proportion of your total storage bytes that is attributed to noncurrent versions, within the dashboard scope and for the selected date.
**Note**  
If your **% noncurrent version bytes** is greater than 10 percent of your storage at the account level, you might be storing too many object versions.

1. To identify specific buckets that are accumulating a large number of noncurrent versions:

   1. Scroll down to the **Top N overview for *date*** section. For **Top N**, enter the number of buckets that you would like to see data for. 

   1. For **Metric**, choose **% noncurrent version bytes**.

      Under **Top *number* buckets**, you can see the buckets (for the number that you specified) with the highest **% noncurrent version bytes**. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

   1. To view more detailed bucket-level metrics for noncurrent object versions, scroll to the top of the page, and then choose the **Bucket** tab.

      In any chart or visualization in your S3 Storage Lens dashboard, you can drill down to deeper levels of aggregation, using the **Account**, **AWS Region**, **Storage class**, or **Bucket** tabs. For an example, see [Uncover cold Amazon S3 buckets](#uncover-cold-buckets).

   1. In the **Buckets** section, for **Metric categories**, select **Cost optimization**. Then, clear **Summary**. 

      You can now see the **% noncurrent version bytes** metric, along with other metrics related to noncurrent versions.
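To make the 10 percent guideline from the earlier note concrete, the following sketch flags buckets whose noncurrent version bytes exceed that share of total storage. The bucket names and byte counts are hypothetical:

```python
# Hypothetical per-bucket byte counts:
# (noncurrent_version_bytes, total_storage_bytes).
buckets = {
    "app-backups": (30_000_000_000, 120_000_000_000),   # 25% noncurrent
    "analytics-logs": (1_000_000_000, 52_000_000_000),  # ~2% noncurrent
}

THRESHOLD_PERCENT = 10.0  # guideline from the dashboard note

def percent_noncurrent(noncurrent_bytes, total_bytes):
    """% noncurrent version bytes = noncurrent bytes / total bytes."""
    return 100.0 * noncurrent_bytes / total_bytes if total_bytes else 0.0

flagged = [name for name, (noncurrent, total) in buckets.items()
           if percent_noncurrent(noncurrent, total) > THRESHOLD_PERCENT]
print(flagged)  # buckets that might be storing too many object versions
```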

### Step 2: Identify buckets that are missing transition and expiration lifecycle rules for managing noncurrent versions
<a name="reduce-noncurrent-versions-retained-step2"></a>

**Prerequisite**  
To see the **Noncurrent version transition lifecycle rule count** and **Noncurrent version expiration lifecycle rule count** metrics in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Advanced cost optimization metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. For **Metrics categories**, select **Cost optimization**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Cost optimization** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific cost-optimization metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all cost-optimization metrics until only the following remain selected:
   + **% noncurrent version bytes**
   + **Noncurrent version transition lifecycle rule count**
   + **Noncurrent version expiration lifecycle rule count**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display metrics for noncurrent version bytes and noncurrent version lifecycle rule counts. You can use this data to identify buckets that have a high percentage of noncurrent version bytes but are missing transition and expiration lifecycle rules. Then, you can navigate to these buckets in the S3 console and add lifecycle rules to these buckets.

### Step 3: Add lifecycle rules to transition or expire noncurrent object versions
<a name="reduce-noncurrent-versions-retained-step3"></a>

After you've determined which buckets require further investigation, you can navigate to the buckets within the S3 console and add a lifecycle rule to expire noncurrent versions after a specified number of days. Alternatively, to reduce costs while still retaining noncurrent versions, you can configure a lifecycle rule to transition noncurrent versions to one of the Amazon Glacier storage classes. For more information, see [Specifying a lifecycle rule for a versioning-enabled bucket](lifecycle-configuration-examples.md#lifecycle-config-conceptual-ex6). 
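As a sketch, the following payload combines a noncurrent-version transition with a noncurrent-version expiration, in the shape the `PutBucketLifecycleConfiguration` API expects. The rule ID, day counts, and storage class are illustrative choices, not recommendations:

```python
# Example lifecycle rules for noncurrent versions: transition noncurrent
# versions to S3 Glacier Flexible Retrieval after 30 days, then expire
# them after 365 days. All values here are illustrative.
noncurrent_version_rules = {
    "Rules": [
        {
            "ID": "manage-noncurrent-versions",  # example rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # applies to the whole bucket
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }
    ]
}
print(noncurrent_version_rules["Rules"][0]["NoncurrentVersionExpiration"])
```

You can apply a configuration like this with the AWS CLI or an SDK, or build the equivalent rule in the S3 console.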

## Identify buckets that don't have lifecycle rules and review lifecycle rule counts
<a name="identify-missing-lifecycle-rules"></a>

S3 Storage Lens provides S3 Lifecycle rule count metrics that you can use to identify buckets that are missing lifecycle rules. To find buckets that don't have lifecycle rules, you can use the **Total buckets without lifecycle rules** metric. A bucket with no S3 Lifecycle configuration might have storage that you no longer need or can migrate to a lower-cost storage class. You can also use lifecycle rule count metrics to identify buckets that are missing specific types of lifecycle rules, such as expiration or transition rules.

**Prerequisite**  
To see lifecycle rule count metrics and the **Total buckets without lifecycle rules** metric in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Advanced cost optimization metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Identify buckets without lifecycle rules
<a name="identify-missing-lifecycle-rules-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. To identify specific buckets without lifecycle rules, scroll down to the **Top N overview for *date*** section.

   By default, the **Top N overview for *date*** section displays metrics for the top 3 buckets. In the **Top N** field, you can increase the number of buckets. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. For **Metric**, choose **Total buckets without lifecycle rules** from the **Cost optimization** category.

1. Review the following data for **Total buckets without lifecycle rules**:
   + **Top *number* accounts** ‐ See which accounts have the most buckets without lifecycle rules.
   + **Top *number* Regions** ‐ View a breakdown of buckets without lifecycle rules by Region.
   + **Top *number* buckets** ‐ See which buckets don't have lifecycle rules. 

   In any chart or visualization in your S3 Storage Lens dashboard, you can drill down to deeper levels of aggregation, using the **Account**, **AWS Region**, **Storage class**, or **Bucket** tabs. For an example, see [Uncover cold Amazon S3 buckets](#uncover-cold-buckets).

   After you identify which buckets don't have lifecycle rules, you can also review specific lifecycle rule counts for your buckets. 

### Step 2: Review lifecycle rule counts for your buckets
<a name="identify-missing-lifecycle-rules-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In your S3 Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, select **Cost optimization**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Cost optimization** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific cost-optimization metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all cost-optimization metrics until only the following remain selected:
   + **Transition lifecycle rule count**
   + **Expiration lifecycle rule count**
   + **Noncurrent version transition lifecycle rule count**
   + **Noncurrent version expiration lifecycle rule count**
   + **Abort incomplete multipart upload lifecycle rule count**
   + **Total lifecycle rule count**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display lifecycle rule count metrics for your buckets. You can use this data to identify buckets without lifecycle rules or buckets that are missing specific kinds of lifecycle rules, such as expiration or transition rules. Then, you can navigate to these buckets in the S3 console and add lifecycle rules to these buckets.

### Step 3: Add lifecycle rules
<a name="identify-missing-lifecycle-rules-step3"></a>

After you've identified buckets with no lifecycle rules, you can add lifecycle rules. For more information, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md) and [Examples of S3 Lifecycle configurations](lifecycle-configuration-examples.md).
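The rule count metrics from step 2 map directly onto the rules in a bucket's lifecycle configuration. The following sketch counts the rule types in a hypothetical configuration the way the dashboard metrics categorize them; the rule IDs, prefixes, and day counts are examples only:

```python
# Hypothetical lifecycle configuration containing one rule of each
# counted type, in the PutBucketLifecycleConfiguration payload shape.
config = {
    "Rules": [
        {"ID": "transition-old-data", "Status": "Enabled",
         "Filter": {"Prefix": ""},
         "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}]},
        {"ID": "expire-temp-data", "Status": "Enabled",
         "Filter": {"Prefix": "tmp/"},
         "Expiration": {"Days": 30}},
        {"ID": "abort-mpu", "Status": "Enabled",
         "Filter": {"Prefix": ""},
         "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}},
    ]
}

# Count rules per action type, loosely mirroring the dashboard's
# lifecycle rule count metrics.
counts = {
    "transition": sum("Transitions" in r for r in config["Rules"]),
    "expiration": sum("Expiration" in r for r in config["Rules"]),
    "abort_mpu": sum("AbortIncompleteMultipartUpload" in r for r in config["Rules"]),
    "total": len(config["Rules"]),
}
print(counts)
```

A bucket where `total` is zero would contribute to the **Total buckets without lifecycle rules** metric.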

# Using S3 Storage Lens to protect your data
<a name="storage-lens-data-protection"></a>

You can use Amazon S3 Storage Lens data-protection metrics to identify buckets where data-protection best practices haven't been applied. You can use these metrics to take action and apply standard settings that align with best practices for protecting your data across the buckets in your account or organization. For example, you can use data-protection metrics to identify buckets that don't use AWS Key Management Service (AWS KMS) keys (SSE-KMS) for default encryption or requests that use AWS Signature Version 2 (SigV2). 

The following use cases provide strategies for using your S3 Storage Lens dashboard to identify outliers and apply data-protection best practices across your S3 buckets.

**Topics**
+ [Identify buckets that don't use server-side encryption with AWS KMS for default encryption (SSE-KMS)](#storage-lens-sse-kms)
+ [Identify buckets that have S3 Versioning enabled](#storage-lens-data-protection-versioning)
+ [Identify requests that use AWS Signature Version 2 (SigV2)](#storage-lens-data-protection-sigv)
+ [Count the total number of replication rules for each bucket](#storage-lens-data-protection-replication-rule)
+ [Identify percentage of Object Lock bytes](#storage-lens-data-protection-object-lock)

## Identify buckets that don't use server-side encryption with AWS KMS for default encryption (SSE-KMS)
<a name="storage-lens-sse-kms"></a>

With Amazon S3 default encryption, you can set the default encryption behavior for an S3 bucket. For more information, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md).

You can use the **SSE-KMS enabled bucket count** and **% SSE-KMS enabled buckets** metrics to identify buckets that use server-side encryption with AWS KMS keys (SSE-KMS) for default encryption. S3 Storage Lens also provides metrics for unencrypted bytes, unencrypted objects, encrypted bytes, and encrypted objects. For a complete list of metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md). 

You can analyze SSE-KMS encryption metrics in the context of general encryption metrics to identify buckets that don't use SSE-KMS. If you want to use SSE-KMS for all the buckets in your account or organization, you can then update the default encryption settings for these buckets to use SSE-KMS. In addition to SSE-KMS, you can use server-side encryption with Amazon S3 managed keys (SSE-S3) or customer-provided keys (SSE-C). For more information, see [Protecting data with encryption](UsingEncryption.md). 

### Step 1: Identify which buckets are using SSE-KMS for default encryption
<a name="storage-lens-sse-kms-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Trends and distributions** section, choose **% SSE-KMS enabled bucket count** for the primary metric and **% encrypted bytes** for the secondary metric.

   The **Trend for *date*** chart updates to display trends for SSE-KMS and encrypted bytes. 

1. To view more granular, bucket-level insights for SSE-KMS:

   1. Choose a point on the chart. A box will appear with choices for more granular insights.

   1. Choose the **Buckets** dimension. Then choose **Apply**.

1. In the **Distribution by buckets for *date*** chart, choose the **SSE-KMS enabled bucket count** metric. 

   You can now see which buckets have SSE-KMS enabled and which don't.

### Step 2: Update bucket default encryption settings
<a name="storage-lens-sse-kms-step2"></a>

Now that you've determined which buckets use SSE-KMS in the context of your **% encrypted bytes**, you can identify buckets that don't use SSE-KMS. You can then optionally navigate to these buckets within the S3 console and update their default encryption settings to use SSE-KMS or SSE-S3. For more information, see [Configuring default encryption](default-bucket-encryption.md).
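As a sketch of the equivalent update outside the console, the following shows the payload shape that the `PutBucketEncryption` API expects when setting SSE-KMS as the default. The KMS key ARN and bucket name are placeholders:

```python
# Default encryption configuration for SSE-KMS, in the shape the
# PutBucketEncryption API expects. The KMS key ARN is a placeholder;
# substitute the ARN of your own key.
encryption_configuration = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
            },
            "BucketKeyEnabled": True,  # S3 Bucket Keys reduce SSE-KMS request costs
        }
    ]
}

# With boto3 (not run here), you would apply it like this:
# boto3.client("s3").put_bucket_encryption(
#     Bucket="amzn-s3-demo-bucket",  # placeholder bucket name
#     ServerSideEncryptionConfiguration=encryption_configuration,
# )
print(encryption_configuration["Rules"][0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
```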

## Identify buckets that have S3 Versioning enabled
<a name="storage-lens-data-protection-versioning"></a>

When enabled, the S3 Versioning feature retains multiple versions of the same object that can be used to quickly recover data if an object is accidentally deleted or overwritten. You can use the **Versioning-enabled bucket count** metric to see which buckets use S3 Versioning. Then, you can take action in the S3 console to enable S3 Versioning for other buckets.

### Step 1: Identify buckets that have S3 Versioning enabled
<a name="storage-lens-data-protection-versioning-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Trends and distributions** section, choose **Versioning-enabled bucket count** for the primary metric and **Buckets** for the secondary metric.

   The **Trend for *date*** chart updates to display trends for S3 Versioning-enabled buckets. Right below the trends line, you can see the **Storage class distribution** and **Region distribution** subsections.

1. To view more granular insights for any of the buckets that you see in the **Trend for *date*** chart so that you can perform a deeper analysis, do the following:

   1. Choose a point on the chart. A box appears with choices for more granular insights.

   1. Choose a dimension to apply to your data for deeper analysis: **Account**, **AWS Region**, **Storage class**, or **Bucket**. Then choose **Apply**.

1. In the **Bubble analysis by buckets for *date*** section, choose the **Versioning-enabled bucket count**, **Buckets**, and **Active buckets** metrics.

   The **Bubble analysis by buckets for *date*** section updates to display data for the metrics that you selected. You can use this data to see which buckets have S3 Versioning enabled in the context of your total bucket count. In the **Bubble analysis by buckets for *date*** section, you can plot your buckets on multiple dimensions by using any three metrics to represent the **X-axis**, **Y-axis**, and **Size** of the bubble. 

### Step 2: Enable S3 Versioning
<a name="storage-lens-data-protection-versioning-step2"></a>

After you've identified buckets that have S3 Versioning enabled, you can identify buckets that have never had S3 Versioning enabled or that have versioning suspended. Then, you can optionally enable versioning for these buckets in the S3 console. For more information, see [Enabling versioning on buckets](manage-versioning-examples.md).
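As a complement to the dashboard, a script can group buckets by versioning status. This is a hypothetical sketch that works on dictionaries shaped like the `GetBucketVersioning` API response; a real script would obtain each status with a call such as `s3.get_bucket_versioning(Bucket=name)`. Note that the response omits the `Status` key entirely for buckets where versioning has never been enabled.

```python
def versioning_status(responses):
    """Map each bucket to 'Enabled', 'Suspended', or 'NeverEnabled'."""
    return {
        bucket: resp.get("Status", "NeverEnabled")
        for bucket, resp in responses.items()
    }

# Sample data with made-up bucket names:
responses = {
    "prod-data": {"Status": "Enabled"},
    "legacy-archive": {"Status": "Suspended"},
    "scratch": {},  # never enabled: no Status key in the response
}
print(versioning_status(responses))
```

Buckets that report `Suspended` or `NeverEnabled` are the candidates for enabling S3 Versioning.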

## Identify requests that use AWS Signature Version 2 (SigV2)
<a name="storage-lens-data-protection-sigv"></a>

You can use the **All unsupported signature requests** metric to identify requests that use AWS Signature Version 2 (SigV2). This data can help you identify specific applications that are using SigV2. You can then migrate these applications to AWS Signature Version 4 (SigV4). 

SigV4 is the recommended signing method for all new S3 applications. SigV4 provides improved security and is supported in all AWS Regions. For more information, see [Amazon S3 update - SigV2 deprecation period extended & modified](https://aws.amazon.com/blogs/aws/amazon-s3-update-sigv2-deprecation-period-extended-modified/).
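The two signature versions are easy to tell apart in a request's `Authorization` header, which can help you confirm which clients still send SigV2 once Storage Lens has flagged the traffic. The following sketch is illustrative, not an official parser: SigV2 headers begin with `AWS ` followed by an access key and signature, while SigV4 headers begin with the `AWS4-HMAC-SHA256` algorithm name. The header values below are sample data built around the documentation's example access key.

```python
def signature_version(authorization_header):
    """Classify an S3 request's Authorization header as SigV2 or SigV4."""
    if authorization_header.startswith("AWS4-HMAC-SHA256"):
        return "SigV4"
    if authorization_header.startswith("AWS "):
        return "SigV2"
    return "unknown"

sigv2_header = "AWS AKIAIOSFODNN7EXAMPLE:frJIUN8DYpKDtOLCwo//yllqDzg="
sigv4_header = ("AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/"
                "20240101/us-east-1/s3/aws4_request, "
                "SignedHeaders=host;x-amz-date, Signature=example")

print(signature_version(sigv2_header))  # SigV2
print(signature_version(sigv4_header))  # SigV4
```

Upgrading to a current AWS SDK or CLI version is usually sufficient to move an application to SigV4, because modern SDKs sign with SigV4 by default.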

**Prerequisite**  
To see **All unsupported signature requests** in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations** and then select **Advanced data protection metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Examine SigV2 signing trends by AWS account, Region, and bucket
<a name="storage-lens-data-protection-sigv-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. To identify specific buckets, accounts, and Regions with requests that use SigV2:

   1. Under **Top N overview for *date***, in **Top N**, enter the number of buckets that you would like to see data for. 

   1. For **Metric**, choose **All unsupported signature requests** from the **Data protection** category.

      The **Top N overview for *date*** section updates to display data for SigV2 requests by account, AWS Region, and bucket. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a sparkline to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

### Step 2: Identify buckets that are accessed by applications through SigV2 requests
<a name="storage-lens-data-protection-sigv-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, choose **Data protection**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Data protection** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific data-protection metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all data-protection metrics until only the following metrics remain selected:
   + **All unsupported signature requests**
   + **% all unsupported signature requests**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display bucket-level metrics for SigV2 requests. You can use this data to identify specific buckets that have SigV2 requests. Then, you can use this information to migrate your applications to SigV4. For more information, see [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) in the *Amazon Simple Storage Service API Reference*.

## Count the total number of replication rules for each bucket
<a name="storage-lens-data-protection-replication-rule"></a>

S3 Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. For more information, see [Replicating objects within and across Regions](replication.md). 

You can use S3 Storage Lens replication rule count metrics to get detailed per-bucket information about your buckets that are configured for replication. This information includes replication rules within and across buckets and Regions.
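To see how rule counts like these are derived, consider a configuration shaped like the `GetBucketReplication` API response. This is a hypothetical sketch: a real script would fetch each configuration with a call such as `s3.get_bucket_replication(Bucket=name)`, and it approximates "cross-account" by comparing the optional `Destination.Account` field against your own account ID. The account IDs, role ARN, and bucket names below are sample values.

```python
def replication_rule_counts(config, own_account_id):
    """Return (total_rules, cross_account_rules) for one replication config."""
    rules = config.get("Rules", [])
    cross_account = sum(
        1 for rule in rules
        if rule.get("Destination", {}).get("Account", own_account_id)
        != own_account_id
    )
    return len(rules), cross_account

config = {
    "Role": "arn:aws:iam::111122223333:role/replication-role",
    "Rules": [
        {"ID": "to-dr-bucket", "Status": "Enabled",
         "Destination": {"Bucket": "arn:aws:s3:::dr-copy"}},
        {"ID": "to-partner", "Status": "Enabled",
         "Destination": {"Bucket": "arn:aws:s3:::partner-copy",
                         "Account": "444455556666"}},
    ],
}
print(replication_rule_counts(config, "111122223333"))  # (2, 1)
```

Storage Lens computes these counts for you per bucket, including the Same-Region and Cross-Region breakdown, which would otherwise require resolving each destination bucket's Region.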

**Prerequisite**  
To see replication rule count metrics in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations** and then select **Advanced data protection metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Count the total number of replication rules for each bucket
<a name="storage-lens-data-protection-replication-rule-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, choose **Data protection**. Then clear **Summary**.

1. To filter the **Buckets** list to display only replication rule count metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all data-protection metrics until only the replication rule count metrics remain selected:
   + **Same-Region Replication rule count**
   + **Cross-Region Replication rule count**
   + **Same-account replication rule count**
   + **Cross-account replication rule count**
   + **Total replication rule count**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

### Step 2: Add replication rules
<a name="storage-lens-data-protection-replication-rule-step2"></a>

After you have a per-bucket replication rule count, you can optionally create additional replication rules. For more information, see [Examples for configuring live replication](replication-example-walkthroughs.md).

## Identify percentage of Object Lock bytes
<a name="storage-lens-data-protection-object-lock"></a>

With S3 Object Lock, you can store objects by using a *write-once-read-many (WORM)* model. You can use Object Lock to help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can enable Object Lock only when you create a bucket, and you must also enable S3 Versioning on that bucket. However, you can edit the retention period for individual object versions or apply legal holds for buckets that have Object Lock enabled. For more information, see [Locking objects with Object Lock](object-lock.md).

You can use Object Lock metrics in S3 Storage Lens to see the **% Object Lock bytes** metric for your account or organization. You can use this information to identify buckets in your account or organization that aren't following your data-protection best practices. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Snapshot** section, under **Metrics categories**, choose **Data protection**.

   The **Snapshot** section updates to display data-protection metrics, including the **% Object Lock bytes** metric. You can see the overall percentage of Object Lock bytes for your account or organization. 

1. To see the **% Object Lock bytes** per bucket, scroll down to the **Top N overview** section.

   To get object-level data for Object Lock, you can also use the **Object Lock object count** and **% Object Lock objects** metrics. 

1. For **Metric**, choose **% Object Lock bytes** from the **Data protection** category.

   By default, the **Top N overview for *date*** section displays metrics for the top 3 buckets. In the **Top N** field, you can increase the number of buckets. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a sparkline to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. Review the following data for **% Object Lock bytes**:
   + **Top *number* accounts** ‐ See which accounts have the highest and lowest **% Object Lock bytes**.
   + **Top *number* Regions** ‐ View a breakdown of **% Object Lock bytes** by Region.
   + **Top *number* buckets** ‐ See which buckets have the highest and lowest **% Object Lock bytes**.
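The percentage metric itself is straightforward arithmetic: Object Lock-protected bytes divided by total bytes, aggregated over whatever scope you're viewing. The sketch below illustrates this with made-up per-bucket byte counts; Storage Lens computes the real numbers for your account or organization.

```python
def percent_object_lock_bytes(buckets):
    """buckets maps bucket name -> (object_lock_bytes, total_bytes)."""
    locked = sum(lock for lock, _ in buckets.values())
    total = sum(size for _, size in buckets.values())
    return 100.0 * locked / total if total else 0.0

# Sample data: one fully locked bucket and one with no Object Lock.
buckets = {
    "compliance-archive": (500_000_000, 500_000_000),
    "app-data": (0, 1_500_000_000),
}
print(f"{percent_object_lock_bytes(buckets):.1f}%")  # 25.0%
```

This also shows why a low account-level percentage can hide fully protected buckets: the per-bucket view in the **Top N overview** is what surfaces them.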

# Using S3 Storage Lens to audit Object Ownership settings
<a name="storage-lens-access-management"></a>

Amazon S3 Object Ownership is an S3 bucket-level setting that you can use to control ownership of the objects in your bucket and to disable [access control lists (ACLs)](acl-overview.md). If you set Object Ownership to bucket owner enforced, ACLs are disabled, and you, as the bucket owner, own every object in your bucket. This approach simplifies access management for data stored in Amazon S3. 

By default, when another AWS account uploads an object to your S3 bucket, that account (the object writer) owns the object, has access to it, and can grant other users access to it through ACLs. You can use Object Ownership to change this default behavior. 

A majority of modern use cases in Amazon S3 no longer require the use of ACLs. Therefore, we recommend that you disable ACLs, except in circumstances where you must control access for each object individually. By setting Object Ownership to bucket owner enforced, you can disable ACLs and rely on policies for access control. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

With S3 Storage Lens access-management metrics, you can identify buckets that still have ACLs enabled. After identifying these buckets, you can migrate their ACL permissions to bucket policies and disable ACLs for these buckets.
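The same identification can be scripted. The following hypothetical sketch filters buckets whose Object Ownership setting is not bucket owner enforced, using data shaped like the `GetBucketOwnershipControls` API response; a real script would fetch each setting with a call such as `s3.get_bucket_ownership_controls(Bucket=name)`. The bucket names are sample values.

```python
def buckets_with_acls_enabled(ownership_by_bucket):
    """Return buckets whose ObjectOwnership is not BucketOwnerEnforced."""
    result = []
    for bucket, controls in ownership_by_bucket.items():
        rules = (controls or {}).get("Rules", [{}])
        if rules[0].get("ObjectOwnership") != "BucketOwnerEnforced":
            result.append(bucket)
    return sorted(result)

# Sample data covering all three Object Ownership settings:
ownership = {
    "new-bucket": {"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
    "legacy-uploads": {"Rules": [{"ObjectOwnership": "ObjectWriter"}]},
    "shared-reports": {"Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]},
}
print(buckets_with_acls_enabled(ownership))
# ['legacy-uploads', 'shared-reports']
```

Buckets that this returns are the ones to review before migrating their ACL permissions to bucket policies.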

**Topics**
+ [Step 1: Identify general trends for Object Ownership settings](#storage-lens-access-management-step1)
+ [Step 2: Identify bucket-level trends for Object Ownership settings](#storage-lens-access-management-step2)
+ [Step 3: Update your Object Ownership setting to bucket owner enforced to disable ACLs](#storage-lens-access-management-step3)

## Step 1: Identify general trends for Object Ownership settings
<a name="storage-lens-access-management-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Snapshot for *date*** section, under **Metrics categories**, choose **Access management**.

   The **Snapshot for *date*** section updates to display the **% Object Ownership bucket owner enforced** metric. You can see the overall percentage of buckets in your account or organization that use the bucket owner enforced setting for Object Ownership to disable ACLs.

## Step 2: Identify bucket-level trends for Object Ownership settings
<a name="storage-lens-access-management-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. To view more detailed bucket-level metrics, choose the **Bucket** tab.

1. In the **Distribution by buckets for *date*** section, choose the **% Object Ownership bucket owner enforced** metric.

   The chart updates to show a per-bucket breakdown for **% Object Ownership bucket owner enforced**. You can see which buckets use the bucket owner enforced setting for Object Ownership to disable ACLs.

1. To view the bucket owner enforced settings in context, scroll down to the **Buckets** section. For **Metrics categories**, select **Access management**. Then clear **Summary**.

   The **Buckets** list displays data for all three Object Ownership settings: bucket owner enforced, bucket owner preferred, and object writer.

1. To filter the **Buckets** list to display metrics only for a specific Object Ownership setting, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the metrics that you don't want to see.

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

## Step 3: Update your Object Ownership setting to bucket owner enforced to disable ACLs
<a name="storage-lens-access-management-step3"></a>

After you've identified buckets that use the object writer or bucket owner preferred setting for Object Ownership, you can migrate your ACL permissions to bucket policies. When you've finished migrating your ACL permissions, you can then update your Object Ownership settings to bucket owner enforced to disable ACLs. For more information, see [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md).

# Using S3 Storage Lens metrics to improve performance
<a name="storage-lens-detailed-status-code"></a>

If you have [S3 Storage Lens advanced metrics](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection) enabled, you can use detailed status-code metrics to get counts for successful or failed requests. You can use this information to troubleshoot access or performance issues. Detailed status-code metrics show counts for HTTP status codes, such as 403 Forbidden and 503 Service Unavailable. You can examine overall trends for detailed status-code metrics across S3 buckets, accounts, and organizations. Then, you can drill down into bucket-level metrics to identify workloads that are currently accessing these buckets and causing errors. 

For example, you can look at the **403 Forbidden error count** metric to identify workloads that are accessing buckets without the correct permissions applied. After you've identified these workloads, you can do a deep dive outside of S3 Storage Lens to troubleshoot your 403 Forbidden errors.

This example shows you how to do a trend analysis for the 403 Forbidden error by using the **403 Forbidden error count** and the **% 403 Forbidden errors** metrics. You can use these metrics to identify workloads that are accessing buckets without the correct permissions applied. You can do a similar trend analysis for any of the other **Detailed status code metrics**. For more information, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).
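The relationship between the two metrics is simple: the percentage metric is the 403 Forbidden response count divided by all requests. The sketch below illustrates the arithmetic with made-up sample counts; Storage Lens reports both metrics per bucket for you.

```python
def percent_403_errors(error_403_count, all_requests):
    """Percentage of all requests that returned 403 Forbidden."""
    return 100.0 * error_403_count / all_requests if all_requests else 0.0

# A bucket that served 50,000 requests, 1,200 of which were 403 Forbidden:
print(f"{percent_403_errors(1_200, 50_000):.1f}%")  # 2.4%
```

A high percentage on a low-traffic bucket and a high raw count on a busy bucket can point to different problems, which is why the trend analysis below uses both metrics together.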

**Prerequisite**  
To see **Detailed status code metrics** in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Detailed status code metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

**Topics**
+ [Step 1: Do a trend analysis for an individual HTTP status code](#storage-lens-detailed-status-code-step1)
+ [Step 2: Analyze error counts by bucket](#storage-lens-detailed-status-code-step2)
+ [Step 3: Troubleshoot errors](#storage-lens-detailed-status-code-step3)

## Step 1: Do a trend analysis for an individual HTTP status code
<a name="storage-lens-detailed-status-code-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Trends and distributions** section, for **Primary metric**, choose **403 Forbidden error count** from the **Detailed status codes** category. For **Secondary metric**, choose **% 403 Forbidden errors**.

1. Scroll down to the **Top N overview for *date*** section. For **Metrics**, choose **403 Forbidden error count** or **% 403 Forbidden errors** from the **Detailed status codes** category.

   The **Top N overview for *date*** section updates to display the top 403 Forbidden error counts by account, AWS Region, and bucket. 

## Step 2: Analyze error counts by bucket
<a name="storage-lens-detailed-status-code-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. For **Metrics categories**, select **Detailed status code** metrics. Then clear **Summary**.

   The **Buckets** list updates to display all the available detailed status code metrics. You can use this information to see which buckets have a large proportion of certain HTTP status codes and which status codes are common across buckets. 

1. To filter the **Buckets** list to display only specific detailed status-code metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for any detailed status-code metrics that you don't want to view in the **Buckets** list.

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list displays error count metrics for the number of buckets that you specified. You can use this information to identify specific buckets that are experiencing many errors and troubleshoot errors by bucket.

## Step 3: Troubleshoot errors
<a name="storage-lens-detailed-status-code-step3"></a>

After you identify buckets with a high proportion of specific HTTP status codes, you can troubleshoot these errors. For more information, see the following:
+ [Why am I getting a 403 Forbidden error when I try to upload files in Amazon S3? ](https://aws.amazon.com/premiumsupport/knowledge-center/s3-403-forbidden-error/)
+ [Why am I getting a 403 Forbidden error when I try to modify a bucket policy in Amazon S3?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-bucket-policy/)
+ [How do I troubleshoot 403 Forbidden errors from my Amazon S3 bucket where all the resources are from the same AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403-resource-same-account/)
+ [How do I troubleshoot an HTTP 500 or 503 error from Amazon S3?](https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/)