

# Working with S3 Storage Lens data in S3 Tables
<a name="storage-lens-s3-tables"></a>

Amazon S3 Storage Lens can export your storage analytics and insights to S3 Tables, enabling you to query your Storage Lens metrics using SQL with AWS analytics services such as Amazon Athena, Amazon EMR, and Amazon SageMaker Studio. When you configure S3 Storage Lens to export to S3 Tables, your metrics are automatically stored in read-only Apache Iceberg tables in the AWS-managed `aws-s3` table bucket.

This integration provides:
+ Structured data access for querying Storage Lens metrics using standard SQL
+ Integration with AWS analytics services
+ Historical analysis capabilities
+ Cost optimization, with no additional charge for exporting to AWS-managed S3 Tables

**Topics**
+ [Exporting S3 Storage Lens metrics to S3 Tables](storage-lens-s3-tables-export.md)
+ [Table naming for S3 Storage Lens export to S3 Tables](storage-lens-s3-tables-naming.md)
+ [Understanding S3 Storage Lens table schemas](storage-lens-s3-tables-schemas.md)
+ [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md)
+ [Querying S3 Storage Lens data with analytics tools](storage-lens-s3-tables-querying.md)
+ [Using AI assistants with S3 Storage Lens tables](storage-lens-s3-tables-ai-tools.md)

# Exporting S3 Storage Lens metrics to S3 Tables
<a name="storage-lens-s3-tables-export"></a>

You can configure Amazon S3 Storage Lens to export your storage analytics and insights to S3 Tables. When you enable S3 Tables export, your metrics are automatically stored in read-only Apache Iceberg tables in the AWS-managed `aws-s3` table bucket, making them queryable using SQL with AWS analytics services like Amazon Athena, Amazon Redshift, and Amazon EMR.

**Note**  
There is no additional charge for exporting S3 Storage Lens metrics to AWS-managed S3 Tables. Standard charges apply for table storage, table management, and requests on the tables. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing). 

## Enable S3 Tables export using the console
<a name="storage-lens-s3-tables-export-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/). 

1. In the left navigation pane, choose **Storage Lens**, and then choose **Storage Lens Dashboards**. 

1. In the **Storage Lens Dashboards** list, choose the dashboard that you want to edit.

1. Choose **Edit**. 

1. On the **Dashboard** page, navigate to the **Metrics export and publishing** section.

1. To enable table export for the **Default metrics report**, select **Table bucket** as the **Bucket type**.

1. To enable table export for the **Expanded prefixes metrics report**, select **Table bucket** as the **Bucket type**.

1. Review the dashboard configuration, and then choose **Submit**. 

**Note**  
After you enable S3 Tables export, it can take up to 48 hours for the first data to be available in the tables.

**Note**  
Standard charges apply for table storage, table management, requests on the tables, and monitoring. You can enable or disable export to S3 Tables by using the Amazon S3 console, the Amazon S3 API, the AWS CLI, or the AWS SDKs.

**Note**  
By default, records in your S3 tables don't expire. To help minimize storage costs for your tables, you can enable and configure record expiration. With this option, Amazon S3 automatically removes records from a table when they expire. For more information, see [Record expiration for tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-record-expiration.html). 

## Enable S3 Tables export using the AWS CLI
<a name="storage-lens-s3-tables-export-cli"></a>

**Note**  
Before running the following commands, make sure that you have an up-to-date version of the AWS CLI. See [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). 

The following example uses the AWS CLI to enable S3 Tables export for the default metrics report of an S3 Storage Lens configuration. To use this example, replace the *user input placeholders* with your own information.

```
aws s3control put-storage-lens-configuration --account-id=555555555555 --config-id=your-configuration-id --storage-lens-configuration '{
   "Id":"your-configuration-id",
   "AccountLevel":{
      "ActivityMetrics":{
        "IsEnabled":true
      },
      "BucketLevel":{
         "ActivityMetrics":{
            "IsEnabled":true
         }
      }
   },
   "DataExport":{
      "S3BucketDestination":{
         "OutputSchemaVersion":"V_1",
         "Format":"CSV",
         "AccountId":"555555555555",
         "Arn":"arn:aws:s3:::my-export-bucket",
         "Prefix":"storage-lens-exports/"
      },
      "StorageLensTableDestination":{
         "IsEnabled":true
      }
   },
   "IsEnabled":true
}'
```

## Enable S3 Tables export using the AWS SDKs
<a name="storage-lens-s3-tables-export-sdk"></a>

The following example uses the AWS SDK for Python (Boto3) to enable S3 Tables export for the default metrics report of an S3 Storage Lens configuration. To use this example, replace the *user input placeholders* with your own information.

```
import boto3

s3control = boto3.client('s3control')

response = s3control.put_storage_lens_configuration(
    AccountId='555555555555',
    ConfigId='your-configuration-id',
    StorageLensConfiguration={
        'Id': 'your-configuration-id',
        'AccountLevel': {
            'ActivityMetrics': {
              'IsEnabled': True
            },
            'BucketLevel': {
                'ActivityMetrics': {
                    'IsEnabled': True
                }
            }
        },
        'DataExport': {
            'S3BucketDestination': {
                'OutputSchemaVersion': 'V_1',
                'Format': 'CSV',
                'AccountId': '555555555555',
                'Arn': 'arn:aws:s3:::my-export-bucket',
                'Prefix': 'storage-lens-exports/'
            },
            'StorageLensTableDestination': {
                'IsEnabled': True
            }
        },
        'IsEnabled': True
    }
)
```

For more information about using the AWS SDKs, see [AWS SDKs and tools](https://aws.amazon.com/developer/tools/). 

## Next steps
<a name="storage-lens-s3-tables-export-next-steps"></a>

After enabling S3 Tables export, you can:
+ Learn about [Table naming for S3 Storage Lens export to S3 Tables](storage-lens-s3-tables-naming.md) 
+ Learn about [Understanding S3 Storage Lens table schemas](storage-lens-s3-tables-schemas.md) 

# Table naming for S3 Storage Lens export to S3 Tables
<a name="storage-lens-s3-tables-naming"></a>

When you export S3 Storage Lens metrics to S3 Tables, the tables are organized using Apache Iceberg catalog conventions with specific naming patterns to ensure compatibility and organization.

## Table location structure
<a name="storage-lens-s3-tables-naming-location"></a>

The complete table location follows this pattern:

```
s3tablescatalog/aws-s3/<namespace>/<table-name>
```

### Table bucket name
<a name="storage-lens-s3-tables-naming-bucket"></a>

 **Table Bucket:** `aws-s3` 

The S3 Storage Lens export uses the `aws-s3` table bucket, which is the designated bucket for AWS S3-related system tables.

### Catalog name
<a name="storage-lens-s3-tables-naming-catalog"></a>

 **Catalog:** `s3tablescatalog/aws-s3` 

S3 Storage Lens tables are stored in this catalog because Storage Lens provides three categories of insights about your S3 resources:
+ Storage metrics
+ Bucket properties
+ API usage metrics

## Namespace naming convention
<a name="storage-lens-s3-tables-naming-namespace"></a>

Namespaces organize tables within the catalog. For S3 Storage Lens, the namespace is derived from your Storage Lens configuration ID.

### Standard namespace format
<a name="storage-lens-s3-tables-naming-namespace-standard"></a>

For Storage Lens configuration IDs without dots (`.`) or uppercase letters: 

```
lens_<configuration-id>_exp
```

 **Example:** If your configuration ID is `my-lens-config`, the namespace will be:

```
lens_my-lens-config_exp
```

### Namespace format for IDs with dots or uppercase letters
<a name="storage-lens-s3-tables-naming-namespace-dots"></a>

Storage Lens configuration IDs can contain dots (`.`) or uppercase letters (`A-Z`), but S3 Tables namespaces support only lowercase letters, numbers, hyphens (`-`), and underscores (`_`). When your configuration ID contains dots or uppercase letters, dots are converted to hyphens, uppercase letters are converted to lowercase, and a hash suffix is added for uniqueness:

```
lens_<configuration-id-with-dots-or-uppercase-replaced>_exp_<7-char-hash>
```

 **Example:** If your configuration ID is `my.LENS.config`, the namespace will be:

```
lens_my-lens-config_exp_a1b2c3d
```

Where `a1b2c3d` is the first 7 characters of the SHA-1 hash of the original configuration ID.
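
The convention above can be sketched in Python. This is an illustrative reconstruction of the documented rule, not AWS's actual implementation; it assumes only the dot-to-hyphen conversion, lowercasing, and 7-character SHA-1 suffix behavior described in this section:

```python
import hashlib

def storage_lens_namespace(config_id: str) -> str:
    """Derive the S3 Tables namespace for a Storage Lens configuration ID."""
    if "." not in config_id and not any(c.isupper() for c in config_id):
        # Standard format: the ID is already a valid namespace component.
        return f"lens_{config_id}_exp"
    # Dots become hyphens, uppercase letters become lowercase, and the
    # first 7 characters of the SHA-1 hash of the original ID are appended.
    normalized = config_id.replace(".", "-").lower()
    suffix = hashlib.sha1(config_id.encode("utf-8")).hexdigest()[:7]
    return f"lens_{normalized}_exp_{suffix}"

print(storage_lens_namespace("my-lens-config"))  # lens_my-lens-config_exp
print(storage_lens_namespace("my.LENS.config"))  # lens_my-lens-config_exp_<hash>
```

Note that the actual hash suffix depends on the SHA-1 digest of your original configuration ID; `a1b2c3d` in the example above is a placeholder.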

## Complete examples
<a name="storage-lens-s3-tables-naming-examples"></a>

For a Storage Lens configuration with ID `production-metrics`: 
+  **Table Bucket:** `aws-s3` 
+  **Catalog:** `s3tablescatalog/aws-s3` 
+  **Namespace:** `lens_production-metrics_exp` 
+  **Full Path:** `s3tablescatalog/aws-s3/lens_production-metrics_exp/<table-name>` 

For a Storage Lens configuration with ID `prod.us.east.metrics`: 
+  **Table Bucket:** `aws-s3` 
+  **Catalog:** `s3tablescatalog/aws-s3` 
+  **Namespace:** `lens_prod-us-east-metrics_exp_f8e9a1b` (with hash)
+  **Full Path:** `s3tablescatalog/aws-s3/lens_prod-us-east-metrics_exp_f8e9a1b/<table-name>` 

## Table types
<a name="storage-lens-s3-tables-naming-types"></a>

The following table shows the different types of tables created for S3 Storage Lens exports:


| Catalog | Namespace | S3 table name | Description | 
| --- | --- | --- | --- | 
| s3tablescatalog/aws-s3 | lens\_<conf\_name>\_exp[\_<hash>] | default\_storage\_metrics | This table contains the storage metrics for your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\_<conf\_name>\_exp[\_<hash>] | default\_activity\_metrics | This table contains the activity metrics for your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\_<conf\_name>\_exp[\_<hash>] | expanded\_prefixes\_storage\_metrics | This table contains the storage metrics for all the prefixes in your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\_<conf\_name>\_exp[\_<hash>] | expanded\_prefixes\_activity\_metrics | This table contains the activity metrics for all the prefixes in your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\_<conf\_name>\_exp[\_<hash>] | bucket\_property\_metrics | This table contains the bucket property metrics for all the buckets in your Storage Lens configuration. | 
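
Putting the naming pieces together, the full path for a given namespace and table can be assembled as in the following sketch (illustrative; the catalog and bucket names are fixed per the conventions above):

```python
def full_table_path(namespace: str, table_name: str) -> str:
    # The catalog (s3tablescatalog) and table bucket (aws-s3) are fixed
    # for all S3 Storage Lens exports.
    return f"s3tablescatalog/aws-s3/{namespace}/{table_name}"

path = full_table_path("lens_production-metrics_exp", "default_storage_metrics")
print(path)
# s3tablescatalog/aws-s3/lens_production-metrics_exp/default_storage_metrics
```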

## Next steps
<a name="storage-lens-s3-tables-naming-next-steps"></a>
+ Learn about [Understanding S3 Storage Lens table schemas](storage-lens-s3-tables-schemas.md) 
+ Learn about [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md) 

# Understanding S3 Storage Lens table schemas
<a name="storage-lens-s3-tables-schemas"></a>

When exporting S3 Storage Lens metrics to S3 tables, the data is organized into three separate table schemas: storage metrics, bucket property metrics, and activity metrics.

## Storage metrics table schema
<a name="storage-lens-s3-tables-schemas-storage"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\_number  | string | Version identifier of the schema of the table | 
|  configuration\_id  | string | S3 Storage Lens configuration name | 
|  report\_time  | timestamptz | Date that the S3 Storage Lens report refers to | 
|  aws\_account\_id  | string | Account ID that the entry refers to | 
|  aws\_region  | string | Region | 
|  storage\_class  | string | Storage class | 
|  record\_type  | string | Type of record, indicating the level of aggregation of the data. Values: ACCOUNT, BUCKET, PREFIX, STORAGE\_LENS\_GROUP\_BUCKET, STORAGE\_LENS\_GROUP\_ACCOUNT.  | 
|  record\_value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix. | 
|  bucket\_name  | string | Bucket name | 
|  object\_count  | long | Number of objects stored for the current referenced item | 
|  storage\_bytes  | DECIMAL(38,0) | Number of bytes stored for the current referenced item | 
|  bucket\_key\_sse\_kms\_object\_count  | long | Number of objects encrypted with a customer managed key stored for the current referenced item | 
|  bucket\_key\_sse\_kms\_storage\_bytes  | DECIMAL(38,0) | Number of bytes encrypted with a customer managed key stored for the current referenced item | 
|  current\_version\_object\_count  | long | Number of current version objects stored for the current referenced item | 
|  current\_version\_storage\_bytes  | DECIMAL(38,0) | Number of current version bytes stored for the current referenced item | 
|  delete\_marker\_object\_count  | long | Number of delete marker objects stored for the current referenced item | 
|  delete\_marker\_storage\_bytes  | DECIMAL(38,0) | Number of delete marker bytes stored for the current referenced item | 
|  encrypted\_object\_count  | long | Number of encrypted objects stored for the current referenced item | 
|  encrypted\_storage\_bytes  | DECIMAL(38,0) | Number of encrypted bytes stored for the current referenced item | 
|  incomplete\_mpu\_object\_older\_than\_7\_days\_count  | long | Number of incomplete multipart upload objects older than 7 days stored for the current referenced item | 
|  incomplete\_mpu\_storage\_older\_than\_7\_days\_bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes older than 7 days stored for the current referenced item | 
|  incomplete\_mpu\_object\_count  | long | Number of incomplete multipart upload objects stored for the current referenced item | 
|  incomplete\_mpu\_storage\_bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes stored for the current referenced item | 
|  non\_current\_version\_object\_count  | long | Number of noncurrent version objects stored for the current referenced item | 
|  non\_current\_version\_storage\_bytes  | DECIMAL(38,0) | Number of noncurrent version bytes stored for the current referenced item | 
|  object\_lock\_enabled\_object\_count  | long | Number of objects with Object Lock enabled stored for the current referenced item | 
|  object\_lock\_enabled\_storage\_bytes  | DECIMAL(38,0) | Number of bytes stored for objects with Object Lock enabled in the current referenced item | 
|  replicated\_object\_count  | long | Number of objects replicated for the current referenced item | 
|  replicated\_storage\_bytes  | DECIMAL(38,0) | Number of bytes replicated for the current referenced item | 
|  replicated\_object\_source\_count  | long | Number of objects replicated as source stored for the current referenced item | 
|  replicated\_storage\_source\_bytes  | DECIMAL(38,0) | Number of bytes replicated as source for the current referenced item | 
|  sse\_kms\_object\_count  | long | Number of objects encrypted with SSE-KMS stored for the current referenced item | 
|  sse\_kms\_storage\_bytes  | DECIMAL(38,0) | Number of bytes encrypted with SSE-KMS stored for the current referenced item | 
|  object\_0kb\_count  | long | Number of objects with sizes equal to 0KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_0kb\_to\_128kb\_count  | long | Number of objects with sizes greater than 0KB and less than or equal to 128KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_128kb\_to\_256kb\_count  | long | Number of objects with sizes greater than 128KB and less than or equal to 256KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_256kb\_to\_512kb\_count  | long | Number of objects with sizes greater than 256KB and less than or equal to 512KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_512kb\_to\_1mb\_count  | long | Number of objects with sizes greater than 512KB and less than or equal to 1MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_1mb\_to\_2mb\_count  | long | Number of objects with sizes greater than 1MB and less than or equal to 2MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_2mb\_to\_4mb\_count  | long | Number of objects with sizes greater than 2MB and less than or equal to 4MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_4mb\_to\_8mb\_count  | long | Number of objects with sizes greater than 4MB and less than or equal to 8MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_8mb\_to\_16mb\_count  | long | Number of objects with sizes greater than 8MB and less than or equal to 16MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_16mb\_to\_32mb\_count  | long | Number of objects with sizes greater than 16MB and less than or equal to 32MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_32mb\_to\_64mb\_count  | long | Number of objects with sizes greater than 32MB and less than or equal to 64MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_64mb\_to\_128mb\_count  | long | Number of objects with sizes greater than 64MB and less than or equal to 128MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_128mb\_to\_256mb\_count  | long | Number of objects with sizes greater than 128MB and less than or equal to 256MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_256mb\_to\_512mb\_count  | long | Number of objects with sizes greater than 256MB and less than or equal to 512MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_512mb\_to\_1gb\_count  | long | Number of objects with sizes greater than 512MB and less than or equal to 1GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_1gb\_to\_2gb\_count  | long | Number of objects with sizes greater than 1GB and less than or equal to 2GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_2gb\_to\_4gb\_count  | long | Number of objects with sizes greater than 2GB and less than or equal to 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\_larger\_than\_4gb\_count  | long | Number of objects with sizes greater than 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
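
As a quick illustration of how these columns combine in analysis, the following sketch computes average object size per bucket from rows shaped like this schema. The bucket names and values are hypothetical sample data, not output from a real export:

```python
# Hypothetical bucket-level rows (record_type BUCKET) using the schema's
# object_count and storage_bytes columns.
rows = [
    {"bucket_name": "example-logs", "object_count": 1000, "storage_bytes": 5_000_000},
    {"bucket_name": "example-media", "object_count": 200, "storage_bytes": 8_000_000},
]

def average_object_size_bytes(row):
    # Guard against division by zero for empty buckets.
    return row["storage_bytes"] / row["object_count"] if row["object_count"] else 0.0

sizes = {r["bucket_name"]: average_object_size_bytes(r) for r in rows}
print(sizes)  # {'example-logs': 5000.0, 'example-media': 40000.0}
```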

## Bucket property metrics table schema
<a name="storage-lens-s3-tables-schemas-bucket-property"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\_number  | string | Version identifier of the schema of the table | 
|  configuration\_id  | string | S3 Storage Lens configuration name | 
|  report\_time  | timestamptz | Date that the S3 Storage Lens report refers to | 
|  aws\_account\_id  | string | Account ID that the entry refers to | 
|  record\_type  | string | Type of record, indicating the level of aggregation of the data. Values: ACCOUNT, BUCKET, PREFIX, STORAGE\_LENS\_GROUP\_BUCKET, STORAGE\_LENS\_GROUP\_ACCOUNT.  | 
|  record\_value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix. | 
|  aws\_region  | string | Region | 
|  storage\_class  | string | Storage class | 
|  bucket\_name  | string | Bucket name | 
|  versioning\_enabled\_bucket\_count  | long | Number of buckets with versioning enabled for the current referenced item | 
|  mfa\_delete\_enabled\_bucket\_count  | long | Number of buckets with MFA delete enabled for the current referenced item | 
|  sse\_kms\_enabled\_bucket\_count  | long | Number of buckets with SSE-KMS enabled for the current referenced item | 
|  object\_ownership\_bucket\_owner\_enforced\_bucket\_count  | long | Number of buckets with Object Ownership bucket owner enforced for the current referenced item | 
|  object\_ownership\_bucket\_owner\_preferred\_bucket\_count  | long | Number of buckets with Object Ownership bucket owner preferred for the current referenced item | 
|  object\_ownership\_object\_writer\_bucket\_count  | long | Number of buckets with Object Ownership object writer for the current referenced item | 
|  transfer\_acceleration\_enabled\_bucket\_count  | long | Number of buckets with transfer acceleration enabled for the current referenced item | 
|  event\_notification\_enabled\_bucket\_count  | long | Number of buckets with event notification enabled for the current referenced item | 
|  transition\_lifecycle\_rule\_count  | long | Number of transition lifecycle rules for the current referenced item | 
|  expiration\_lifecycle\_rule\_count  | long | Number of expiration lifecycle rules for the current referenced item | 
|  non\_current\_version\_transition\_lifecycle\_rule\_count  | long | Number of noncurrent version transition lifecycle rules for the current referenced item | 
|  non\_current\_version\_expiration\_lifecycle\_rule\_count  | long | Number of noncurrent version expiration lifecycle rules for the current referenced item | 
|  abort\_incomplete\_multipart\_upload\_lifecycle\_rule\_count  | long | Number of abort incomplete multipart upload lifecycle rules for the current referenced item | 
|  expired\_object\_delete\_marker\_lifecycle\_rule\_count  | long | Number of expired object delete marker lifecycle rules for the current referenced item | 
|  same\_region\_replication\_rule\_count  | long | Number of Same-Region Replication rules for the current referenced item | 
|  cross\_region\_replication\_rule\_count  | long | Number of Cross-Region Replication rules for the current referenced item | 
|  same\_account\_replication\_rule\_count  | long | Number of same-account replication rules for the current referenced item | 
|  cross\_account\_replication\_rule\_count  | long | Number of cross-account replication rules for the current referenced item | 
|  invalid\_destination\_replication\_rule\_count  | long | Number of replication rules with an invalid destination for the current referenced item | 

## Activity metrics table schema
<a name="storage-lens-s3-tables-schemas-activity"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\_number  | string | Version identifier of the schema of the table | 
|  configuration\_id  | string | S3 Storage Lens configuration name | 
|  report\_time  | timestamptz | Date that the S3 Storage Lens report refers to | 
|  aws\_account\_id  | string | Account ID that the entry refers to | 
|  aws\_region  | string | Region | 
|  storage\_class  | string | Storage class | 
|  record\_type  | string | Type of record, indicating the level of aggregation of the data. Values: ACCOUNT, BUCKET, PREFIX, STORAGE\_LENS\_GROUP\_BUCKET, STORAGE\_LENS\_GROUP\_ACCOUNT.  | 
|  record\_value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix. | 
|  bucket\_name  | string | Bucket name | 
|  all\_request\_count  | long | Number of all requests for the current referenced item | 
|  all\_sse\_kms\_encrypted\_request\_count  | long | Number of SSE-KMS encrypted requests for the current referenced item | 
|  all\_unsupported\_sig\_request\_count  | long | Number of unsupported signature requests for the current referenced item | 
|  all\_unsupported\_tls\_request\_count  | long | Number of unsupported TLS requests for the current referenced item | 
|  bad\_request\_error\_400\_count  | long | Number of 400 bad request errors for the current referenced item | 
|  delete\_request\_count  | long | Number of delete requests for the current referenced item | 
|  downloaded\_bytes  | DECIMAL(38,0) | Number of downloaded bytes for the current referenced item | 
|  error\_4xx\_count  | long | Number of 4xx errors for the current referenced item | 
|  error\_5xx\_count  | long | Number of 5xx errors for the current referenced item | 
|  forbidden\_error\_403\_count  | long | Number of 403 forbidden errors for the current referenced item | 
|  get\_request\_count  | long | Number of get requests for the current referenced item | 
|  head\_request\_count  | long | Number of head requests for the current referenced item | 
|  internal\_server\_error\_500\_count  | long | Number of 500 internal server errors for the current referenced item | 
|  list\_request\_count  | long | Number of list requests for the current referenced item | 
|  not\_found\_error\_404\_count  | long | Number of 404 not found errors for the current referenced item | 
|  ok\_status\_200\_count  | long | Number of 200 OK requests for the current referenced item | 
|  partial\_content\_status\_206\_count  | long | Number of 206 partial content requests for the current referenced item | 
|  post\_request\_count  | long | Number of post requests for the current referenced item | 
|  put\_request\_count  | long | Number of put requests for the current referenced item | 
|  select\_request\_count  | long | Number of select requests for the current referenced item | 
|  select\_returned\_bytes  | DECIMAL(38,0) | Number of bytes returned by select requests for the current referenced item | 
|  select\_scanned\_bytes  | DECIMAL(38,0) | Number of bytes scanned by select requests for the current referenced item | 
|  service\_unavailable\_error\_503\_count  | long | Number of 503 service unavailable errors for the current referenced item | 
|  uploaded\_bytes  | DECIMAL(38,0) | Number of uploaded bytes for the current referenced item | 
|  average\_first\_byte\_latency  | long | Average per-request time between when an S3 bucket receives a complete request and when it starts returning the response, measured over the past 24 hours | 
|  average\_total\_request\_latency  | long | Average elapsed per-request time between the first byte received and the last byte sent to an S3 bucket, measured over the past 24 hours | 
|  read\_0kb\_request\_count  | long | Number of GetObject requests with data sizes of 0KB, including both range-based requests and whole object requests | 
|  read\_0kb\_to\_128kb\_request\_count  | long | Number of GetObject requests with data sizes greater than 0KB and up to 128KB, including both range-based requests and whole object requests | 
|  read\_128kb\_to\_256kb\_request\_count  | long | Number of GetObject requests with data sizes greater than 128KB and up to 256KB, including both range-based requests and whole object requests | 
|  read\_256kb\_to\_512kb\_request\_count  | long | Number of GetObject requests with data sizes greater than 256KB and up to 512KB, including both range-based requests and whole object requests | 
|  read\_512kb\_to\_1mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 512KB and up to 1MB, including both range-based requests and whole object requests | 
|  read\_1mb\_to\_2mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 1MB and up to 2MB, including both range-based requests and whole object requests | 
|  read\_2mb\_to\_4mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 2MB and up to 4MB, including both range-based requests and whole object requests | 
|  read\_4mb\_to\_8mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 4MB and up to 8MB, including both range-based requests and whole object requests | 
|  read\_8mb\_to\_16mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 8MB and up to 16MB, including both range-based requests and whole object requests | 
|  read\_16mb\_to\_32mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 16MB and up to 32MB, including both range-based requests and whole object requests | 
|  read\_32mb\_to\_64mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 32MB and up to 64MB, including both range-based requests and whole object requests | 
|  read\_64mb\_to\_128mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 64MB and up to 128MB, including both range-based requests and whole object requests | 
|  read\_128mb\_to\_256mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 128MB and up to 256MB, including both range-based requests and whole object requests | 
|  read\_256mb\_to\_512mb\_request\_count  | long | Number of GetObject requests with data sizes greater than 256MB and up to 512MB, including both range-based requests and whole object requests | 
|  read\_512mb\_to\_1gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 512MB and up to 1GB, including both range-based requests and whole object requests | 
|  read\_1gb\_to\_2gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 1GB and up to 2GB, including both range-based requests and whole object requests | 
|  read\_2gb\_to\_4gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 2GB and up to 4GB, including both range-based requests and whole object requests | 
|  read\_larger\_than\_4gb\_request\_count  | long | Number of GetObject requests with data sizes greater than 4GB, including both range-based requests and whole object requests | 
|  write\_0kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes of 0KB | 
|  write\_0kb\_to\_128kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 0KB and up to 128KB | 
|  write\_128kb\_to\_256kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128KB and up to 256KB | 
|  write\_256kb\_to\_512kb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256KB and up to 512KB | 
|  write\_512kb\_to\_1mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512KB and up to 1MB | 
|  write\_1mb\_to\_2mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1MB and up to 2MB | 
|  write\_2mb\_to\_4mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2MB and up to 4MB | 
|  write\_4mb\_to\_8mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4MB and up to 8MB | 
|  write\_8mb\_to\_16mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 8MB and up to 16MB | 
|  write\_16mb\_to\_32mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 16MB and up to 32MB | 
|  write\_32mb\_to\_64mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 32MB and up to 64MB | 
|  write\_64mb\_to\_128mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 64MB and up to 128MB | 
|  write\_128mb\_to\_256mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128MB and up to 256MB | 
|  write\_256mb\_to\_512mb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256MB and up to 512MB | 
|  write\_512mb\_to\_1gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512MB and up to 1GB | 
|  write\_1gb\_to\_2gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1GB and up to 2GB | 
|  write\_2gb\_to\_4gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2GB and up to 4GB | 
|  write\_larger\_than\_4gb\_request\_count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4GB | 
|  concurrent\_put\_503\_error\_count  | long | Number of 503 errors generated due to concurrent writes to the same object | 
|  cross\_region\_request\_count  | long | Number of requests that originate from a client in a different Region than the bucket's home Region | 
|  cross\$1region\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls in different Region than bucket's home Region | 
|  cross\$1region\$1without\$1replication\$1request\$1count  | long | Number of requests that originate from a client in different Region than bucket's home Region, excluding cross-region replication requests | 
|  cross\$1region\$1without\$1replication\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls in different Region than bucket's home Region, excluding cross-region replication bytes | 
|  inregion\$1request\$1count  | long | Number of requests that originate from a client in same Region as bucket's home Region | 
|  inregion\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls from same Region as bucket's home Region | 
|  unique\$1objects\$1accessed\$1daily\$1count  | long | Number of objects that were accessed at least once in last 24 hrs | 

## Next steps
<a name="storage-lens-s3-tables-schemas-next-steps"></a>
+ Learn about [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md) 
+ Start [Querying S3 Storage Lens data with analytics tools](storage-lens-s3-tables-querying.md) 
+ Review the [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md) for detailed metric definitions

# Permissions for S3 Storage Lens tables
<a name="storage-lens-s3-tables-permissions"></a>

To work with S3 Storage Lens data exported to S3 Tables, you need appropriate AWS Identity and Access Management (IAM) permissions. This topic covers the permissions required for exporting metrics and managing encryption.

## Permissions for metrics export to S3 Tables
<a name="storage-lens-s3-tables-permissions-export"></a>

To create and work with S3 Storage Lens tables and table buckets, you must have certain `s3tables` permissions. At a minimum, to configure S3 Storage Lens export to S3 Tables, you must have the following `s3tables` permissions:
+  `s3tables:CreateTableBucket` – This permission allows you to create an AWS-managed table bucket. All S3 Storage Lens metrics in your account are stored in a single AWS-managed table bucket named `aws-s3`. 
+  `s3tables:PutTableBucketPolicy` – S3 Storage Lens uses this permission to set a table bucket policy that allows `systemtables.s3.amazonaws.com` access to the bucket so that logs can be delivered.
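
To grant these permissions, you can attach an identity-based policy similar to the following sketch. The Region (`us-east-1`) and account ID (`111122223333`) are example values; replace them with your own.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowStorageLensTableBucketSetup",
            "Effect": "Allow",
            "Action": [
                "s3tables:CreateTableBucket",
                "s3tables:PutTableBucketPolicy"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3"
        }
    ]
}
```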

**Important**  
If you remove permissions for the service principal `systemtables.s3.amazonaws.com`, S3 Storage Lens will not be able to update the S3 tables with data based on your configuration. We recommend adding other access control policies in addition to the policy already provided, instead of editing the canned policy that is added when your table bucket is set up.
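
For example, instead of editing the canned table bucket policy, you might grant read-only access to the Storage Lens tables with an identity-based policy similar to the following sketch. The Region, account ID, and the exact set of read actions are example values; adjust them to your environment.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadStorageLensTables",
            "Effect": "Allow",
            "Action": [
                "s3tables:GetTableBucket",
                "s3tables:ListTables",
                "s3tables:GetTable",
                "s3tables:GetTableData"
            ],
            "Resource": [
                "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3",
                "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/*"
            ]
        }
    ]
}
```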

**Note**  
S3 Storage Lens creates a separate S3 table for each type of metric export in a Storage Lens configuration. If you have multiple Storage Lens configurations in the Region, separate tables are created for each additional configuration. For example, a configuration that exports three types of metrics produces three tables in your S3 table bucket.

## Permissions for AWS KMS encrypted tables
<a name="storage-lens-s3-tables-permissions-kms"></a>

All data in S3 tables, including S3 Storage Lens metrics, is encrypted with SSE-S3 encryption by default. You can choose to encrypt your Storage Lens metrics reports with AWS KMS keys (SSE-KMS) instead, which requires the following additional permissions.

1. The user or IAM role needs the following permissions. You can grant these permissions by using the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).
   +  `kms:DescribeKey` on the AWS KMS key used

1. On the key policy for the AWS KMS key, you need the following permissions. You can grant these permissions by using the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms). To use this policy, replace the `user input placeholders` with your own information.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "EnableSystemTablesKeyUsage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "systemtables.s3.amazonaws.com"
               },
               "Action": [
                   "kms:DescribeKey",
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-id",
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": "111122223333"
                   }
               }
           },
           {
               "Sid": "EnableKeyUsage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "maintenance.s3tables.amazonaws.com"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-id",
               "Condition": {
                   "StringLike": {
                       "kms:EncryptionContext:aws:s3:arn": "<table-bucket-arn>/*"
                   }
               }
           }
       ]
   }
   ```

## Service-linked role for S3 Storage Lens
<a name="storage-lens-s3-tables-permissions-slr"></a>

S3 Storage Lens uses a service-linked role to write metrics to S3 Tables. This role is automatically created when you enable S3 Tables export for the first time in your account. The service-linked role has the following permissions:
+  `s3tables:CreateTable` – To create tables in the `aws-s3` table bucket
+  `s3tables:PutTableData` – To write metrics data to tables
+  `s3tables:GetTable` – To retrieve table metadata

You don't need to manually create or manage this service-linked role. For more information about service-linked roles, see [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the *IAM User Guide*. 

## Best practices for permissions
<a name="storage-lens-s3-tables-permissions-best-practices"></a>

Follow these best practices when configuring permissions for S3 Storage Lens tables:
+  **Use least privilege** - Grant only the permissions required for specific tasks. For example, if users only need to query data, don't grant permissions to modify Storage Lens configurations.
+  **Use IAM roles** - Use IAM roles instead of long-term access keys for applications and services that access S3 Storage Lens tables.
+  **Enable AWS CloudTrail** - Enable CloudTrail logging to monitor access to S3 Storage Lens tables and track permission changes.
+  **Use resource-based policies** - When possible, use resource-based policies to control access to specific tables or namespaces.
+  **Regularly review permissions** - Periodically review and audit IAM policies and Lake Formation permissions to ensure they follow the principle of least privilege.

## Troubleshooting permissions
<a name="storage-lens-s3-tables-permissions-troubleshooting"></a>

### Access denied when enabling S3 Tables export
<a name="storage-lens-s3-tables-permissions-troubleshooting-export"></a>

 **Problem:** You receive an "access denied" error when trying to enable S3 Tables export.

 **Solution:** Verify that your IAM user or role has the `s3:PutStorageLensConfiguration` permission and the necessary S3 Tables permissions.

### Access denied when querying tables
<a name="storage-lens-s3-tables-permissions-troubleshooting-query"></a>

 **Problem:** You receive an "access denied" error when querying S3 Storage Lens tables in Amazon Athena.

 **Solution:** Verify that:
+ Analytics integration is enabled on the `aws-s3` table bucket
+ Lake Formation permissions are correctly configured
+ Your IAM user or role has the necessary Amazon Athena permissions

### KMS encryption errors
<a name="storage-lens-s3-tables-permissions-troubleshooting-kms"></a>

 **Problem:** You receive KMS-related errors when accessing encrypted tables.

 **Solution:** Verify that:
+ Your IAM policy includes the required KMS permissions
+ The KMS key policy grants permissions to the S3 Storage Lens service principal
+ The KMS key is in the same Region as your Storage Lens configuration

## Next steps
<a name="storage-lens-s3-tables-permissions-next-steps"></a>
+ Learn about [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md) 
+ Learn about [Querying S3 Storage Lens data with analytics tools](storage-lens-s3-tables-querying.md) 
+ Learn about [Using AI assistants with S3 Storage Lens tables](storage-lens-s3-tables-ai-tools.md) 

# Querying S3 Storage Lens data with analytics tools
<a name="storage-lens-s3-tables-querying"></a>

Before you can query S3 Storage Lens data exported to S3 Tables using AWS analytics services like Amazon Athena or Amazon EMR, you must enable analytics integration on the AWS-managed `aws-s3` table bucket and configure AWS Lake Formation permissions.

**Important**  
Enabling analytics integration on the `aws-s3` table bucket is a required step that is often missed. Without this configuration, you cannot query your S3 Storage Lens tables using AWS analytics services.

## Prerequisites
<a name="storage-lens-s3-tables-querying-prerequisites"></a>

Before you begin, ensure that you have:
+ An S3 Storage Lens configuration with S3 Tables export enabled. For more information, see [Exporting S3 Storage Lens metrics to S3 Tables](storage-lens-s3-tables-export.md).
+ Access to Amazon Athena or another analytics service.
+ Waited 24-48 hours after enabling export for the first data to be available.

## Integration overview
<a name="storage-lens-s3-tables-querying-integration-overview"></a>

For detailed information about integrating S3 Tables with AWS analytics services, including prerequisites, IAM role configuration, and step-by-step procedures, see [Integrating Amazon S3 Tables with AWS analytics services](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html).

After you enable S3 Tables export and set up analytics integration, you can query your S3 Storage Lens data using AWS analytics services such as Amazon Athena, Amazon Redshift, and Amazon EMR. This enables you to perform custom analysis, create dashboards, and derive insights from your storage data using standard SQL.

## Querying with Amazon Athena
<a name="storage-lens-s3-tables-querying-athena"></a>

Amazon Athena is a serverless interactive query service that makes it easy to analyze data using standard SQL. Use the following steps to query S3 Storage Lens data in Athena.

**Note**  
In all query examples, replace `lens_my-config_exp` with your actual Storage Lens configuration namespace. For more information about namespace naming, see [Table naming for S3 Storage Lens export to S3 Tables](storage-lens-s3-tables-naming.md).
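
If you script your queries, it can help to build the fully qualified table reference programmatically. The following helper is a hypothetical sketch (not part of any AWS SDK), assuming the `lens_<configuration-name>_exp` namespace pattern shown in the examples:

```python
def athena_table_ref(config_name: str, table: str,
                     catalog: str = "s3tablescatalog/aws-s3") -> str:
    """Build a quoted Athena table reference for a Storage Lens export table.

    Assumes the namespace for a configuration follows the
    lens_<configuration-name>_exp pattern used in the query examples.
    """
    namespace = f"lens_{config_name}_exp"
    return f'"{catalog}"."{namespace}"."{table}"'

# Example: reference the default storage metrics table for the
# configuration named "my-config"
ref = athena_table_ref("my-config", "default_storage_metrics")
print(ref)  # "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
```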

### Example: Query top storage consumers
<a name="storage-lens-s3-tables-querying-athena-top-consumers"></a>

The following query identifies the top 10 buckets by storage consumption:

```
SELECT 
    bucket_name,
    storage_class,
    SUM(storage_bytes) / POWER(1024, 3) AS storage_gb,
    SUM(object_count) AS objects
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time = (
    SELECT MAX(report_time) 
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
)
    AND record_type = 'BUCKET'
    AND bucket_name != ''
GROUP BY bucket_name, storage_class
ORDER BY storage_gb DESC
LIMIT 10;
```

### Example: Analyze storage growth over time
<a name="storage-lens-s3-tables-querying-athena-growth"></a>

The following query analyzes storage growth over the last 30 days:

```
SELECT 
    CAST(report_time AS date) AS report_date,
    SUM(storage_bytes) / POWER(1024, 3) AS total_storage_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time >= current_date - interval '30' day
    AND record_type = 'ACCOUNT'
GROUP BY CAST(report_time AS date)
ORDER BY report_date DESC;
```

### Example: Identify incomplete multipart uploads
<a name="storage-lens-s3-tables-querying-athena-mpu"></a>

The following query finds buckets with incomplete multipart uploads older than 7 days:

```
SELECT 
    bucket_name,
    SUM(incomplete_mpu_storage_older_than_7_days_bytes) / POWER(1024, 3) AS wasted_storage_gb,
    SUM(incomplete_mpu_object_older_than_7_days_count) AS wasted_objects
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time = (
    SELECT MAX(report_time) 
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
)
    AND record_type = 'BUCKET'
    AND incomplete_mpu_storage_older_than_7_days_bytes > 0
GROUP BY bucket_name
ORDER BY wasted_storage_gb DESC;
```

### Example: Find cold data candidates
<a name="storage-lens-s3-tables-querying-athena-cold-data"></a>

The following query identifies prefixes with no activity in the last 100 days that are stored in hot storage tiers:

```
WITH recent_activity AS (
    SELECT DISTINCT 
        bucket_name,
        record_value AS prefix_path
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."expanded_prefixes_activity_metrics"
    WHERE report_time >= current_date - interval '100' day
        AND record_type = 'PREFIX'
        AND all_request_count > 0
)
SELECT 
    s.bucket_name,
    s.record_value AS prefix_path,
    s.storage_class,
    SUM(s.storage_bytes) / POWER(1024, 3) AS storage_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."expanded_prefixes_storage_metrics" s
LEFT JOIN recent_activity r 
    ON s.bucket_name = r.bucket_name 
    AND s.record_value = r.prefix_path
WHERE s.report_time = (
    SELECT MAX(report_time) 
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."expanded_prefixes_storage_metrics"
)
    AND s.record_type = 'PREFIX'
    AND s.storage_class IN ('STANDARD', 'REDUCED_REDUNDANCY')
    AND s.storage_bytes > 1073741824  -- > 1GB
    AND r.prefix_path IS NULL  -- No recent activity
GROUP BY s.bucket_name, s.record_value, s.storage_class
ORDER BY storage_gb DESC
LIMIT 20;
```

### Example: Analyze request patterns
<a name="storage-lens-s3-tables-querying-athena-requests"></a>

The following query analyzes request patterns to understand access frequency:

```
SELECT 
    bucket_name,
    SUM(all_request_count) AS total_requests,
    SUM(get_request_count) AS get_requests,
    SUM(put_request_count) AS put_requests,
    ROUND(100.0 * SUM(get_request_count) / NULLIF(SUM(all_request_count), 0), 2) AS get_percentage,
    SUM(downloaded_bytes) / POWER(1024, 3) AS downloaded_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_activity_metrics"
WHERE report_time >= current_date - interval '7' day
    AND record_type = 'BUCKET'
    AND bucket_name != ''
GROUP BY bucket_name
HAVING SUM(all_request_count) > 0
ORDER BY total_requests DESC
LIMIT 10;
```

## Querying with Apache Spark on Amazon EMR
<a name="storage-lens-s3-tables-querying-emr"></a>

Amazon EMR provides a managed Hadoop framework that makes it easy to process vast amounts of data using Apache Spark. You can use the Iceberg connector to read S3 Storage Lens tables directly.

### Read S3 Tables with Spark
<a name="storage-lens-s3-tables-querying-emr-spark"></a>

Use the following Python code to read S3 Storage Lens data with Spark:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("S3StorageLensAnalysis") \
    .config("spark.sql.catalog.s3tablescatalog", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3tablescatalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog") \
    .getOrCreate()

# Read S3 Storage Lens data
df = spark.read \
    .format("iceberg") \
    .load("s3tablescatalog/aws-s3.lens_my-config_exp.default_storage_metrics")

# Analyze data
df.filter("record_type = 'BUCKET'") \
    .groupBy("bucket_name", "storage_class") \
    .sum("storage_bytes") \
    .orderBy("sum(storage_bytes)", ascending=False) \
    .show(10)
```

## Query optimization best practices
<a name="storage-lens-s3-tables-querying-optimization"></a>

Follow these best practices to optimize query performance and reduce costs:
+  **Filter by `report_time`** – Always include date filters to reduce the amount of data scanned. This is especially important for tables with long retention periods.

  ```
  WHERE report_time >= current_date - interval '7' day
  ```
+  **Use `record_type` filters** – Specify the appropriate aggregation level (ACCOUNT, BUCKET, PREFIX) to query only the data you need.

  ```
  WHERE record_type = 'BUCKET'
  ```
+  **Include LIMIT clauses** – Use LIMIT for exploratory queries to control result size and reduce query costs.

  ```
  LIMIT 100
  ```
+  **Filter empty records** – Use conditions to exclude empty or zero-value records.

  ```
  WHERE storage_bytes > 0
  ```
+  **Use the latest data** – When analyzing current state, filter for the most recent `report_time` to avoid scanning historical data.

  ```
  WHERE report_time = (SELECT MAX(report_time) FROM table_name)
  ```

### Example optimized query pattern
<a name="storage-lens-s3-tables-querying-optimization-example"></a>

The following query demonstrates best practices for optimization:

```
SELECT 
    bucket_name,
    SUM(storage_bytes) / POWER(1024, 3) AS storage_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time >= current_date - interval '7' day  -- Date filter
    AND record_type = 'BUCKET'                         -- Record type filter
    AND storage_bytes > 0                              -- Non-empty filter
    AND bucket_name != ''                              -- Non-empty filter
GROUP BY bucket_name
ORDER BY storage_gb DESC
LIMIT 100;                                             -- Result limit
```

## Troubleshooting
<a name="storage-lens-s3-tables-querying-troubleshooting"></a>

### Query returns no results
<a name="storage-lens-s3-tables-querying-troubleshooting-no-results"></a>

 **Problem:** Your query completes successfully but returns no results.

 **Solution:** 
+ Verify that data is available by checking the latest `report_time`:

  ```
  SELECT MAX(report_time) AS latest_data
  FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics";
  ```
+ Ensure that you're using the correct namespace name. Run ``SHOW TABLES IN `lens_my-config_exp`;`` to list the available tables.
+ Wait 24-48 hours after enabling S3 Tables export for the first data to be available.

### Access denied errors
<a name="storage-lens-s3-tables-querying-troubleshooting-access"></a>

 **Problem:** You receive access denied errors when running queries.

 **Solution:** Verify that AWS Lake Formation permissions are correctly configured. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html).

## Next steps
<a name="storage-lens-s3-tables-querying-next-steps"></a>
+ Learn about [Using AI assistants with S3 Storage Lens tables](storage-lens-s3-tables-ai-tools.md)
+ Review the [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md) for metric definitions
+ Explore [Amazon S3 Storage Lens metrics use cases](storage-lens-use-cases.md) for more analysis ideas
+ Learn about [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) for serverless querying

# Using AI assistants with S3 Storage Lens tables
<a name="storage-lens-s3-tables-ai-tools"></a>

You can use AI assistants and conversational AI tools to interact with your S3 Storage Lens data exported to S3 Tables through natural language. With the Model Context Protocol (MCP) and the MCP Server for Amazon S3 Tables, you can query, analyze, and gain insights from your storage data without writing SQL queries.

## Overview
<a name="storage-lens-s3-tables-ai-tools-overview"></a>

Model Context Protocol (MCP) is a standardized way for AI applications to access and utilize contextual information. The MCP Server for Amazon S3 Tables provides tools that enable AI assistants to interact with your S3 Tables data using natural language interfaces. This democratizes data access and enables individuals across technical skill levels to work with S3 Storage Lens metrics.

With the MCP Server for S3 Tables, you can use natural language to:
+ List S3 table buckets, namespaces, and tables
+ Query S3 Storage Lens metrics and get insights
+ Analyze storage trends and patterns
+ Identify cost optimization opportunities
+ Generate reports and visualizations

## Supported AI assistants
<a name="storage-lens-s3-tables-ai-tools-supported"></a>

The MCP Server for S3 Tables works with various AI assistants that support the Model Context Protocol, including:
+ **Kiro** - An AI coding assistant with built-in MCP support
+ **Amazon Q Developer** - AWS's AI-powered assistant for developers
+ **Cline** - An AI coding assistant with MCP integration
+ **Claude Desktop** - Anthropic's desktop application with MCP support
+ **Cursor** - An AI-powered code editor

**Important**  
AI-generated SQL queries and recommendations should be reviewed and validated before use. Verify that queries are appropriate for your data structure, use case, and performance requirements. Always test recommendations in a non-production environment before implementing them in production.

## Setting up Kiro with S3 Storage Lens tables
<a name="storage-lens-s3-tables-ai-tools-kiro-setup"></a>

Kiro is an AI coding assistant that provides seamless integration with S3 Tables through the MCP Server. Kiro can help you install and configure the MCP Server directly through its interface, simplifying the setup process.

For more information about Kiro, see [Kiro AI](https://kiro.ai/).

### Prerequisites
<a name="storage-lens-s3-tables-ai-tools-kiro-prerequisites"></a>

Before you begin, ensure that you have:
+ Kiro installed on your system. Download from [https://kiro.ai/](https://kiro.ai/)
+ AWS CLI configured with appropriate credentials
+ An S3 Storage Lens configuration with S3 Tables export enabled
+ Permissions to query S3 Tables. For more information, see [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md).

### Step 1: Install the S3 Tables MCP Server
<a name="storage-lens-s3-tables-ai-tools-kiro-step1"></a>

You can install the S3 Tables MCP Server in two ways:

**Option 1: Using Kiro's built-in MCP server management**  
Kiro can help you discover and install MCP servers directly through its interface:

1. Open Kiro

1. Access the MCP server management interface (typically through settings or command palette)

1. Search for "S3 Tables" or "awslabs.s3-tables-mcp-server"

1. Follow Kiro's prompts to install and configure the server

**Option 2: Manual installation using uvx**  
Alternatively, you can manually install the MCP Server using `uvx`, a Python package runner:

```
uvx awslabs.s3-tables-mcp-server@latest
```

For more information about installing the MCP Server, see the [AWS S3 Tables MCP Server documentation](https://awslabs.github.io/mcp/servers/s3-tables-mcp-server).

### Step 2: Configure Kiro MCP settings
<a name="storage-lens-s3-tables-ai-tools-kiro-step2"></a>

Create or update your Kiro MCP configuration file at `~/.kiro/settings/mcp.json` with the following content:

```
{
  "mcpServers": {
    "awslabs.s3-tables-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.s3-tables-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "your-aws-profile",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```

Replace `your-aws-profile` with your AWS CLI profile name and `us-east-1` with your AWS Region.

### Step 3: Verify the configuration
<a name="storage-lens-s3-tables-ai-tools-kiro-step3"></a>

After configuring the MCP Server, restart Kiro and verify that the S3 Tables tools are available. You can check the available MCP servers in Kiro's settings or by asking Kiro to list available tools.

## Example use cases with AI assistants
<a name="storage-lens-s3-tables-ai-tools-examples"></a>

The following examples demonstrate how to use natural language prompts with AI assistants to interact with S3 Storage Lens data.

### Example 1: Query top storage consumers
<a name="storage-lens-s3-tables-ai-tools-examples-consumers"></a>

**Prompt:** "Show me the top 10 buckets by storage consumption from my S3 Storage Lens data."

The AI assistant will use the MCP Server to query your S3 Storage Lens tables and return the results, including bucket names, storage classes, and storage amounts.

### Example 2: Analyze storage growth
<a name="storage-lens-s3-tables-ai-tools-examples-growth"></a>

**Prompt:** "Analyze my storage growth over the last 30 days and show me the trend."

The AI assistant will query the storage metrics table, calculate daily storage totals, and present the growth trend.

### Example 3: Identify cost optimization opportunities
<a name="storage-lens-s3-tables-ai-tools-examples-optimization"></a>

**Prompt:** "Find buckets with incomplete multipart uploads older than 7 days that are wasting storage."

The AI assistant will query the storage metrics table for incomplete multipart uploads and provide a list of buckets with potential cost savings.

### Example 4: Find cold data candidates
<a name="storage-lens-s3-tables-ai-tools-examples-cold-data"></a>

**Prompt:** "Identify prefixes with no activity in the last 100 days that are stored in hot storage tiers."

The AI assistant will analyze both storage and activity metrics to identify data that could be moved to colder storage tiers for cost optimization.

### Example 5: Generate storage reports
<a name="storage-lens-s3-tables-ai-tools-examples-reports"></a>

**Prompt:** "Create a summary report of my S3 storage showing total storage, object counts, and request patterns for the last week."

The AI assistant will query multiple tables, aggregate the data, and generate a comprehensive report.

## Best practices for using AI assistants
<a name="storage-lens-s3-tables-ai-tools-best-practices"></a>

Follow these best practices when using AI assistants with S3 Storage Lens data:
+ **Be specific in your prompts** - Provide clear, specific instructions about what data you want to analyze and what insights you're looking for.
+ **Verify AI-generated queries** - Always review and validate the SQL queries and recommendations that the AI assistant generates before executing them or taking action. AI assistants may occasionally produce incorrect queries or recommendations that need to be verified against your specific use case and data.
+ **Use appropriate permissions** - Ensure that the IAM credentials used by the AI assistant have only the necessary permissions. For read-only analysis, grant only SELECT permissions.
+ **Monitor usage** - Track the queries executed by AI assistants using AWS CloudTrail to maintain audit trails.
+ **Start with simple queries** - Begin with straightforward queries to understand how the AI assistant interprets your prompts, then progress to more complex analysis.

## Logging and traceability
<a name="storage-lens-s3-tables-ai-tools-logging"></a>

When using the S3 Tables MCP Server with AI assistants, you have multiple ways to audit operations:
+ **Local logs** - The MCP Server logs requests and responses locally. You can specify a log directory using the `--log-dir` configuration option.
+ **AWS CloudTrail** - S3 Tables operations that the MCP Server performs through PyIceberg use the user agent string `awslabs/mcp/s3-tables-mcp-server/<version>`. You can filter CloudTrail logs by this user agent to trace actions performed by AI assistants.
+ **AI assistant history** - AI assistants like Kiro and Cline maintain history logs that record natural language requests, LLM responses, and instructions provided to the MCP Server.
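
As a sketch of the CloudTrail approach above, the following Python snippet filters already-downloaded CloudTrail records by the documented user agent prefix. The sample records and helper function are illustrative, not part of any AWS SDK:

```python
MCP_USER_AGENT_PREFIX = "awslabs/mcp/s3-tables-mcp-server/"

def mcp_events(records):
    """Return CloudTrail records whose userAgent marks them as MCP Server calls."""
    return [r for r in records
            if r.get("userAgent", "").startswith(MCP_USER_AGENT_PREFIX)]

# Illustrative records, shaped like entries in a CloudTrail log file
sample = [
    {"eventName": "GetTable",
     "userAgent": "awslabs/mcp/s3-tables-mcp-server/0.1.0"},
    {"eventName": "GetObject", "userAgent": "aws-cli/2.15.0"},
]
print([e["eventName"] for e in mcp_events(sample)])  # ['GetTable']
```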

## Security considerations
<a name="storage-lens-s3-tables-ai-tools-security"></a>

When using AI assistants with S3 Storage Lens data, follow these security best practices:
+ **Use least privilege access** - Grant AI assistants only the minimum permissions required for their tasks.
+ **Enable MFA** - Use multi-factor authentication for AWS accounts that AI assistants access.
+ **Review permissions regularly** - Periodically audit the permissions granted to AI assistants and revoke unnecessary access.
+ **Use separate credentials** - Consider using separate AWS credentials for AI assistant access to facilitate tracking and auditing.
+ **Avoid sharing sensitive data** - Be cautious about sharing sensitive information in prompts to AI assistants, especially when using cloud-based AI services.

## Troubleshooting
<a name="storage-lens-s3-tables-ai-tools-troubleshooting"></a>

### AI assistant cannot connect to S3 Tables
<a name="storage-lens-s3-tables-ai-tools-troubleshooting-connection"></a>

**Problem:** The AI assistant reports that it cannot connect to S3 Tables or the MCP Server is not responding.

**Solution:**
+ Verify that the MCP Server is correctly installed using `uvx awslabs.s3-tables-mcp-server@latest --version`
+ Check that your AWS credentials are configured correctly
+ Ensure that the MCP configuration file has the correct AWS profile and region

### Access denied errors
<a name="storage-lens-s3-tables-ai-tools-troubleshooting-access"></a>

**Problem:** The AI assistant receives access denied errors when querying S3 Storage Lens tables.

**Solution:**
+ Verify that analytics integration is enabled on the `aws-s3` table bucket
+ Check that Lake Formation permissions are correctly configured
+ Ensure that the AWS credentials have the necessary IAM permissions

### Incorrect or unexpected results
<a name="storage-lens-s3-tables-ai-tools-troubleshooting-results"></a>

**Problem:** The AI assistant returns incorrect or unexpected results.

**Solution:**
+ Review the SQL query generated by the AI assistant
+ Verify that you're using the correct namespace name for your Storage Lens configuration
+ Check that data is available by querying the latest `report_time`
+ Refine your prompt to be more specific about what you want to analyze

## Additional resources
<a name="storage-lens-s3-tables-ai-tools-resources"></a>

For more information about using AI assistants with S3 Tables, see the following resources:
+ [Kiro AI](https://kiro.ai/) - AI coding assistant with built-in MCP support
+ [Implementing conversational AI for S3 Tables using Model Context Protocol (MCP)](https://aws.amazon.com/blogs/storage/implementing-conversational-ai-for-s3-tables-using-model-context-protocol-mcp/) - AWS Storage Blog
+ [AWS S3 Tables MCP Server documentation](https://awslabs.github.io/mcp/servers/s3-tables-mcp-server)
+ [Model Context Protocol specification](https://modelcontextprotocol.io/)