

# Use cases

You can use the AWS Cost and Usage Reports (AWS CUR) to suit your report management needs. This section goes in-depth to help you understand use cases such as tracking your Savings Plans and Reserved Instance (RI) utilization, charges, and allocations.

**Topics**
+ [Understanding Savings Plans](cur-sp.md)
+ [Understanding your reservations](understanding-ri.md)
+ [Understanding data transfer charges](cur-data-transfers-charges.md)
+ [Understanding split cost allocation data](split-cost-allocation-data.md)

# Understanding Savings Plans


You can use Cost and Usage Reports (AWS CUR) to track your Savings Plans utilization, charges, and allocations.

## Savings Plans line items


Savings Plans provide a flexible pricing model that offers low prices on Amazon EC2, AWS Fargate, AWS Lambda, and Amazon SageMaker AI in exchange for a commitment to a consistent amount of usage (measured in \$/hour) for a 1-year or 3-year term.

The following line items in AWS CUR help you track and manage your spend with Savings Plans. 

**Note**  
In the following tables, the columns and rows from AWS CUR are transposed for clarity. The values in the first column represent the headers of a report. These examples include only a few key AWS CUR columns. To learn more about other AWS CUR columns, see the [Data dictionary](data-dictionary.md).

**Upfront fee**  
The **SavingsPlanUpfrontFee** line item is added to your bill when you purchase an `All Upfront` or `Partial Upfront` Savings Plan. The following table shows how this one-time fee appears in some AWS CUR columns.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cur/latest/userguide/cur-sp.html)

**Savings Plans recurring monthly fee**  
The **SavingsPlanRecurringFee** line item describes the recurring hourly charges that correspond to `No Upfront` or `Partial Upfront` Savings Plans. The **SavingsPlanRecurringFee** is initially added to your bill on the day of purchase and hourly thereafter.  
The **SavingsPlanRecurringFee** allocated to the hour (for Hourly cost and usage) or day (for Daily cost and usage) is added to your bill at the hour of purchase, and then for every subsequent hour or day of the billing period.  
For an `All Upfront` Savings Plan, the line item indicates the portion of the Savings Plan that was unused during the billing period.  
The following table shows how the recurring hourly charges appear in some AWS CUR columns.      
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cur/latest/userguide/cur-sp.html)
  
The SavingsPlanRecurringFee is calculated differently than the recurring RI fee. The recurring RI fee is a monthly charge while the SavingsPlanRecurringFee is an hourly charge. For information on the recurring RI fee, see [Recurring monthly RI fee](regular-reserved-instances.md#recurring-monthly).

**Savings Plans discount benefits**  
The **SavingsPlanCoveredUsage** line item describes the instance usage that received Savings Plans benefits. A **SavingsPlanCoveredUsage** line item shows an unblended cost of what the On-Demand charge would have been without the Savings Plans benefit. This unblended cost is offset by the corresponding **SavingsPlanNegation** line item.   
In each **SavingsPlanCoveredUsage** line item, you can see how that usage was billed against your Savings Plans hourly commitment by using the **savingsPlan/SavingsPlanRate** and **savingsPlan/SavingsPlanEffectiveCost** fields.  
You'll see a corresponding **SavingsPlanNegation** for each **SavingsPlanCoveredUsage** line item. **SavingsPlanNegation** line items offset the unblended cost of **SavingsPlanCoveredUsage**, and are grouped at the hourly level by SavingsPlanARN, Operation, Usage Type, and Availability Zone. Therefore, one **SavingsPlanNegation** line item might correspond to multiple **SavingsPlanCoveredUsage** line items.  
The following table shows how the covered usage and the negation line items appear in some AWS CUR columns.  
      
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cur/latest/userguide/cur-sp.html)
When you have more usage than your Savings Plans commitment can cover, your uncovered usage still appears as a Usage Line Item and the covered usage appears as **SavingsPlanCoveredUsage** with the corresponding **SavingsPlanNegation** line items.
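The offsetting behavior described above can be sketched with a few hypothetical line items. The rates and costs below are illustrative only, not actual AWS prices:

```python
# Hypothetical hourly line items: two covered-usage rows and the single
# negation row that offsets them (negations are grouped by SavingsPlanARN,
# Operation, UsageType, and Availability Zone).
line_items = [
    {"lineItem/LineItemType": "SavingsPlanCoveredUsage",
     "lineItem/UnblendedCost": 0.096,            # On-Demand equivalent
     "savingsPlan/SavingsPlanEffectiveCost": 0.058},
    {"lineItem/LineItemType": "SavingsPlanCoveredUsage",
     "lineItem/UnblendedCost": 0.096,
     "savingsPlan/SavingsPlanEffectiveCost": 0.058},
    {"lineItem/LineItemType": "SavingsPlanNegation",
     "lineItem/UnblendedCost": -0.192},          # offsets both rows above
]

on_demand_equivalent = sum(
    r["lineItem/UnblendedCost"] for r in line_items
    if r["lineItem/LineItemType"] == "SavingsPlanCoveredUsage")
negation = sum(
    r["lineItem/UnblendedCost"] for r in line_items
    if r["lineItem/LineItemType"] == "SavingsPlanNegation")
effective = sum(
    r.get("savingsPlan/SavingsPlanEffectiveCost", 0.0) for r in line_items)

net = on_demand_equivalent + negation
print(round(net, 10) == 0)   # True: the covered-usage unblended cost nets to zero
print(round(effective, 3))   # 0.116, what the usage actually cost under the plan
```

The point of the sketch: summing unblended cost alone overstates what you paid, because the negation rows cancel it; the effective cost columns carry the real spend.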

# Understanding your reservations

You can use the AWS Cost and Usage Reports (AWS CUR) to track your Reserved Instance (RI) utilization, charges, and allocations. This section provides an in-depth description of your reservations.

**Topics**
+ [Understanding your reservation line items](regular-reserved-instances.md)
+ [Understanding your amortized reservation data](amortized-reservation.md)
+ [Monitoring your size flexible reservations for Amazon EC2](monitor-flexible-reservation.md)
+ [Monitoring your On-Demand capacity reservations](monitor-ondemand-reservations.md)

# Understanding your reservation line items


RIs provide you a significant discount compared to On-Demand Instance pricing. RIs aren't physical instances. They're a billing discount applied to the use of On-Demand Instances in your account. These On-Demand Instances must match certain attributes to benefit from the billing discount. 

**Topics**
+ [Upfront fee](#upfront-fee)
+ [True-up fee](#true-up-fee)
+ [Recurring monthly RI fee](#recurring-monthly)
+ [RI discount benefits](#discount-benefits)
+ [Reserved Instance type](#ri-type)
+ [Reserved Instance benefits applied to instance usage](#ri-instance-usage)

**Note**  
In the following tables, the columns and rows from AWS CUR are transposed for clarity. The values in the first column represent the headers of a report. These examples include only a few key AWS CUR columns. To learn more about other AWS CUR columns, see the [Data dictionary](data-dictionary.md).

## Upfront fee


The **Fee** line item is added to your bill when you purchase an `All Upfront` or `Partial Upfront` RI.

The following table shows how this one-time fee appears in some AWS CUR columns.


|  |  | 
| --- |--- |
| lineItem/LineItemType | Fee | 
| lineItem/ProductCode | AmazonEC2 | 
| lineItem/UsageStartDate | 2016-01-01T00:00:00Z | 
| lineItem/LineItemDescription | Sign up charge for subscription: 363836886, planId: 1026576 | 
| lineItem/UnblendedCost | 68 | 
| Reservation/ReservationARN | arn:aws:ec2:us-east-1:123456789012:reserved-instances/f8c204c1-dd48-43f1-adb8-f88aa61e0dea | 

## True-up fee


If you exchange a Convertible Reserved Instance, any cost associated with the exchange of the original Reserved Instance and the new Reserved Instance (true-up fee) is also added to your bill as a **Fee** line item. For a true-up fee, the **reservation/ReservationARN** column contains **reserved-instance-exchange/riex**.

The following table shows a true-up fee from exchanging a Convertible Reserved Instance.


| lineItem/LineItemType | lineItem/ProductCode | lineItem/UsageStartDate | lineItem/LineItemDescription | lineItem/UnblendedCost | Reservation/ReservationARN | 
| --- | --- | --- | --- | --- | --- | 
| Fee | AmazonEC2 | 2016-01-01T00:00:00Z |  |  | arn:aws:ec2:eu-west-1:012345678901:reserved-instance-exchange/riex-examplef-5d71-4215-886f-17a3f64ea972 | 

## Recurring monthly RI fee


The **RI Fee** line item describes the recurring monthly charges associated with the RIs applied that month. The **RI Fee** is initially added to your bill on the day of purchase, and on the first day of each billing period thereafter.

The **RI Fee** is calculated by multiplying your discounted hourly rate by the number of hours in the month.

The following table shows how the recurring monthly charges appear in the report.


|  |  | 
| --- |--- |
| lineItem/LineItemType | RI fee | 
| lineItem/ProductCode | AmazonEC2 | 
| lineItem/UsageStartDate | 2016-01-01T00:00:00Z | 
| lineItem/UsageType | HeavyUsage: m4.large | 
| lineItem/LineItemDescription | USD 0.0309 hourly fee per Linux/UNIX (Amazon VPC), m4.large instance | 
| lineItem/NormalizationFactor | 4 | 
| lineItem/UnblendedCost | 23 | 
| Reservation/AvailabilityZone |  | 
| Reservation/ReservationARN | arn:aws:ec2:us-east-1:123456789012:reserved-instances/f8c204c1-dd48-43f1-adb8-f88aa61e0dea | 
| Reservation/TotalReservedUnits | 744 | 
| Reservation/TotalReservedNormalizedUnits | 2976 | 

Recurring monthly charges are recorded differently for RIs that have an Availability Zone or AWS Region scope. For RIs that have an Availability Zone scope, the corresponding Availability Zone is shown in the **reservation/AvailabilityZone** column. For RIs that have a Region scope, the **reservation/AvailabilityZone** column is empty. RIs with a Region scope have values in the **lineitem/NormalizationFactor** and **reservation/TotalReservedNormalizedUnits** columns that reflect the instance size.
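As a quick check, the recurring fee in the table above works out from the hourly rate and hour count shown there:

```python
# Sketch: the monthly RI Fee is the discounted hourly rate multiplied by the
# number of hours in the month (744 hours in a 31-day month, as in the table).
hourly_rate = 0.0309    # from lineItem/LineItemDescription
hours_in_month = 744    # from Reservation/TotalReservedUnits

ri_fee = hourly_rate * hours_in_month
print(round(ri_fee))    # 23, the lineItem/UnblendedCost shown above
```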

**Note**  
The recurring RI fee is calculated differently than the SavingsPlanRecurringFee. The recurring RI fee is a monthly charge while the SavingsPlanRecurringFee is an hourly charge. For information on the SavingsPlanRecurringFee, see [Understanding Savings Plans](cur-sp.md).

## RI discount benefits


The **Discounted Usage** line item describes the instance usage that received a matching RI discount benefit, and is added to your bill when you have usage that matches one of your RIs. AWS calculates RI discount benefits based on matching usage: for example, the use of an instance that matches the instance reservation. If you have matching usage, the cost associated with the usage line item is always zero because the charges associated with RIs are already accounted for in the two other line items (the upfront fee and the recurring monthly charges).

The following table shows an example of usage that received an RI discount benefit.


|  |  | 
| --- |--- |
| lineItem/LineItemType | DiscountedUsage | 
| lineItem/ProductCode | AmazonEC2 | 
| lineItem/UsageStartDate | 2016-01-01T00:00:00Z | 
| lineItem/UsageType | BoxUsage:m4.large | 
| lineItem/LineItemDescription | Linux/UNIX (Amazon VPC), m4.large Reserved Instance applied | 
| lineItem/ResourceId | i-1bd250bc | 
| lineItem/AvailabilityZone | us-east-1b | 
| lineItem/NormalizationFactor | 4 | 
| lineItem/NormalizedUsageAmount | 4 | 
| lineItem/UnblendedRate | 0 | 
| lineItem/UnblendedCost | 0 | 
| Reservation/ReservationARN | arn:aws:ec2:us-east-1:123456789012:reserved-instances/f8c204c1-dd48-43f1-adb8-f88aa61e0dea | 

The value for **UsageAmount** in the Amazon EC2 **DiscountedUsage** line is the actual number of hours used. The value for **NormalizedUsageAmount** is the value for **UsageAmount** multiplied by the value for **NormalizationFactor**. The value for **NormalizationFactor** is determined by the instance size. When an RI benefit discount is applied to a matching line item of usage, the Amazon Resource Name (ARN) value in the **reservation/ReservationARN** column for the initial upfront fees and recurring monthly charges matches the ARN value in the discounted usage line items. 
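The relationship between these columns can be sketched as follows. The normalization factors shown are a subset of the published Amazon EC2 table, and the helper function is illustrative, not an AWS API:

```python
# Sketch: deriving NormalizedUsageAmount for a DiscountedUsage line item.
# NormalizedUsageAmount = UsageAmount * NormalizationFactor, where the
# factor depends on instance size (subset of the published table).
NORMALIZATION_FACTOR = {"large": 4, "xlarge": 8, "2xlarge": 16}

def normalized_usage_amount(instance_type: str, usage_hours: float) -> float:
    """Multiply metered hours by the size's normalization factor."""
    size = instance_type.split(".")[1]      # e.g. "m4.large" -> "large"
    return usage_hours * NORMALIZATION_FACTOR[size]

# An m4.large (factor 4) used for 1 hour, as in the table above:
print(normalized_usage_amount("m4.large", 1))   # 4
```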

For more information about mapping instance size to normalization factor, see [Support for modifying instance sizes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modification-instancemove.html) in the *Amazon EC2 User Guide*.

## Reserved Instance type


To determine if your report line items are associated with a Standard Reserved Instance or a Convertible Reserved Instance, filter the **lineItem/LineItemType** column by **Fee** or **RI fee**. Then, review the **product/OfferingClass** column, which indicates the Reserved Instance type.

To determine if your report line items are associated with a zonal or regional Reserved Instance, review the **reservation/AvailabilityZone** column. For zonal Reserved Instances, this column shows the corresponding Availability Zone. For regional Reserved Instances, this column is empty.

## Reserved Instance benefits applied to instance usage


To understand which instance usage line items benefitted from which Reserved Instances, you can filter your report by one or more of the following columns:
+ **reservation/reservationARN**: Filter this column by a reservation ARN to identify which Reserved Instance lease is associated with each line item.
+ **lineitem/ResourceId**: Review this column for the ID of the resource that's covered by the Reserved Instance.
+ **lineitem/LineItemType**: Filter this column by **Fee**, **RI fee**, or **DiscountedUsage** to determine the associated fees or benefits.
+ **lineitem/UsageType**: Filter this column by **HeavyUsage** to identify **RI fee** line items. Or, filter this column by **BoxUsage** to identify **DiscountedUsage** line items.
+ **lineitem/UsageAmount**: For **RI fee** line items, this column shows the total number of hours in the month that the Reserved Instance was applied. For **DiscountedUsage** line items, this column shows the total number of hours that the Reserved Instance was applied to a specific instance at the daily or monthly level, depending on how you configured your report.

To understand a size flexible Reserved Instance’s number of normalized units applied to instance usage, review the **lineitem/NormalizedUsageAmount** column in your report. The value in this column equals the product of the following columns:
+ **lineitem/UsageAmount**: This column shows the metered instance usage measured in hours.
+ **lineItem/NormalizationFactor**: For **DiscountedUsage** and **RI fee** line items, this column shows the associated normalization factor of the instance. For more information on the normalization factor, see [Instance size flexibility determined by normalization factor](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html#ri-normalization-factor) in the *Amazon EC2 User Guide*.

For AWS Organizations with multiple accounts, to see which accounts purchased or benefitted from a Reserved Instance, review the following columns:
+ **reservation/reservationARN**: Review the reservation ARNs to see which accounts purchased the Reserved Instance. The ARN includes the account ID.
+ **lineitem/UsageAccountId**: For **DiscountedUsage** line items, this column identifies the account IDs that received benefits from the purchased Reserved Instances.

**Note**  
A Reserved Instance is a billing subscription and not a resource like an Amazon EC2 instance. Because of this, Reserved Instances that are tagged don't populate line items like a tagged resource. For line items with **DiscountedUsage**, tags populate for the tagged resources and not for the Reserved Instance.  
To identify costs associated with a specific Reserved Instance lease, you can filter **Fee** or **RI fee** line items by the Reserved Instance ARN, which is the lease ID. To organize your cost data for Reserved Instances, consider using AWS Cost Categories. For more information, see [Managing your costs with AWS Cost Categories](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html) in the *AWS Billing User Guide*.

# Understanding your amortized reservation data


Amortizing is when you distribute one-time reservation costs across the billing period that is affected by that cost. Amortizing enables you to see your costs in accrual-based accounting as opposed to cash-based accounting. For example, if you pay \$365 for an All Upfront RI for one year and you have a matching instance that uses that RI, that instance costs you \$1 a day, amortized.
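The amortization arithmetic can be sketched as follows. The upfront fee here is a hypothetical \$730, chosen only to illustrate the division:

```python
# Sketch: amortizing a one-year All Upfront RI fee to a daily cost.
# The fee amount is hypothetical, not an actual AWS price.
upfront_fee = 730.0   # one-time Fee line item, in USD
term_days = 365       # 1-year term

daily_amortized_cost = upfront_fee / term_days
print(daily_amortized_cost)   # 2.0 USD per day, amortized
```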

You can see the data that Billing and Cost Management uses to calculate your amortized costs in the following Cost and Usage Reports columns. 

**Topics**
+ [Reserved Instance inventory](#ri-inventory)
+ [Amortization data for the billing period](#amortization-billing-period)
+ [Reserved Instance effective costs](#ri-effective-costs)

**Note**  
Not all **reservation/** columns are populated for every Reserved Instance line item. The **reservation/** columns in your report are populated based on the line item type. For example, **RI fee** line items populate the **reservation/UnusedAmortizedUpfrontFeeForBillingPeriod** column. Meanwhile, **DiscountedUsage** line items populate the **reservation/effectivecost** column.

## Reserved Instance inventory


You can use the following columns to track your RI inventory. The values for these columns appear only for RI subscription line items (also known as `RI Fee` line items) and not for the actual instances using the RIs.

For more information about column descriptions and sample values, see [Reservation details](reservation-columns.md).
+ reservation/UpfrontValue
+ reservation/startTime
+ reservation/endTime
+ reservation/modificationStatus

## Amortization data for the billing period


You can use the following columns to understand the amortized costs of your RIs for the billing period. The values for these columns appear only for RI subscription line items (also known as `RI Fee` line items) and not for the actual instances using the RIs.

For more information about column descriptions and sample values, see [Reservation details](reservation-columns.md).
+ reservation/amortizedUpfrontFeeForBillingPeriod
+ reservation/unusedQuantity
+ reservation/unusedNormalizedUnitQuantity
+ reservation/unusedRecurringFee
+ reservation/unusedAmortizedUpfrontFeeForBillingPeriod

## Reserved Instance effective costs


You can use the following columns to understand your effective cost at the instance level. The values for these columns appear only for instance usage line items (also known as `DiscountedUsage` line items).

For more information about column descriptions and sample values, see [Reservation details](reservation-columns.md).
+ reservation/amortizedUpfrontCostForUsage
+ reservation/recurringFeeForUsage
+ reservation/effectiveCost
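The relationship between these columns can be sketched as follows. The values are hypothetical; the sum reflects how the effective cost column is defined (amortized upfront portion plus recurring portion for the usage):

```python
# Sketch: reservation/effectiveCost for a DiscountedUsage line item is the
# amortized upfront portion plus the recurring-fee portion for that usage.
# The values below are hypothetical.
amortized_upfront_cost_for_usage = 0.030   # reservation/amortizedUpfrontCostForUsage
recurring_fee_for_usage = 0.025            # reservation/recurringFeeForUsage

effective_cost = amortized_upfront_cost_for_usage + recurring_fee_for_usage
print(round(effective_cost, 3))   # 0.055
```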

# Monitoring your size flexible reservations for Amazon EC2


Amazon EC2 Reserved Instances that apply to a Region provide Availability Zone flexibility and instance size flexibility. Reserved Instances that provide Availability Zone flexibility provide a discount on usage in any Availability Zone in the Region. Reserved Instances that provide instance size flexibility provide a discount on usage, regardless of instance size in that family. Size flexible Reserved Instances apply to the smallest instance sizes first. For more information, see [How Reserved Instances are applied](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html) in the *Amazon EC2 User Guide*.

To understand how instance size flexibility provided by your Reserved Instance is applied to your usage, refer to the **lineItem/NormalizationFactor** and **lineItem/NormalizedUsageAmount** columns.

**Note**  
Instance size flexibility is supported only by Linux or Unix Reserved Instances with default tenancy that are assigned to a Region. For more information on the limitations of instance size flexibility for Regional Reserved Instances, see [How regional Reserved Instances are applied](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html#apply-regional-ri) in the *Amazon EC2 User Guide*.

In a Cost and Usage Report, the Reserved Instance usage is applied by default to the account that purchased the Reserved Instance. Any available Reserved Instance benefit that the purchasing account can’t use within the hour is then applied to other linked accounts based on the available matching On-Demand Instance usage.

## Example


You purchase one `m4.xlarge` RI in a given Region. This `m4.xlarge` RI can be applied automatically to all `m4` instance usage in the same Region. In the following table, AWS applied the `m4.xlarge` to two separate `m4.large` instances.


|  |  |  |  | 
| --- |--- |--- |--- |
| lineItem/LineItemType | RIFee | DiscountedUsage | DiscountedUsage | 
| lineItem/ProductCode | AmazonEC2 | AmazonEC2 | AmazonEC2 | 
| lineItem/UsageStartDate | 2016-01-01T00:00:00Z | 2016-01-01T00:00:00Z | 2016-01-01T00:00:00Z | 
| lineItem/UsageType | HeavyUsage:m4.xlarge | BoxUsage:m4.large | BoxUsage:m4.large | 
| lineItem/LineItemDescription | USD 0.0618 hourly fee per Linux/UNIX (Amazon VPC), m4.xlarge instance | Linux/UNIX (Amazon VPC), m4.large Reserved Instance applied | Linux/UNIX (Amazon VPC), m4.large Reserved Instance applied | 
| lineItem/ResourceId |  | i-1bd250bc | i-1df340ed | 
| lineItem/UsageAmount |  | 1 | 1 | 
| lineItem/NormalizationFactor | 8 | 4 | 4 | 
| lineItem/NormalizedUsageAmount |  | 4 | 4 | 
| lineItem/UnblendedRate |  | 0 | 0 | 
| lineItem/UnblendedCost | 46 | 0 | 0 | 
| Reservation/ReservationARN | arn:aws:ec2:us-east-1:123456789012:reserved-instances/f8c204c1 | arn:aws:ec2:us-east-1:123456789012:reserved-instances/f8c204c1 | arn:aws:ec2:us-east-1:123456789012:reserved-instances/f8c204c1 | 
| Reservation/TotalReservedUnits | 744 |  |  | 
| Reservation/TotalReservedNormalizedUnits | 5952 |  |  | 

The two `m4.large` usage line items have different **ResourceId**s, and both received a discount benefit from the single `m4.xlarge` RI. This is shown by matching the **reservationARN** value across the usage and recurring monthly charge line items.
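The normalized-unit accounting behind this example can be sketched as follows (Amazon EC2 normalization factors: `large` = 4, `xlarge` = 8):

```python
# Sketch: how one regional m4.xlarge RI covers two m4.large instances for an
# hour via normalized units. Resource IDs match the table above.
ri_normalized_units_per_hour = 8                 # one m4.xlarge (factor 8)
usage = [("i-1bd250bc", 4), ("i-1df340ed", 4)]   # two m4.large, factor 4 each

covered = sum(units for _, units in usage)
print(covered <= ri_normalized_units_per_hour)   # True: both get the discount
```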

For more information about RI purchase options, see [How you are billed](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts-reserved-instances-application.html#reserved-instances-payment-options) in the *Amazon EC2 User Guide*.

# Monitoring your On-Demand capacity reservations


Capacity reservations enable you to reserve capacity for your Amazon EC2 instances for any duration in a specific Availability Zone. This enables you to create and manage capacity reservations separately from the billing discounts offered by Regional Reserved Instances (RI). To benefit from billing discounts, you can use Regional RIs in combination with capacity reservations.

## Capacity reservation line items


You can use some of the columns defined in the AWS CUR data dictionary to track your capacity reservations.

This section defines those columns with supplementary definitions specific to capacity reservations.

For more information about Cost and Usage Reports column descriptions, see [Line item details](Lineitem-columns.md).


### B


#### lineItem/BlendedRate


For capacity reservations with a **UsageType** of **Reservation** or **DedicatedRes**, the **BlendedRate** is `0`. This is because the capacity reservation costs are associated with the instance that provides the capacity, instead of with the capacity reservation itself. 

### R


#### lineItem/ResourceId


If you included `lineItem/ResourceId` when you created your Cost and Usage Reports, you can identify and track your capacity reservations using the **ResourceId** column. The capacity reservation **ResourceId** is captured only for the **UnusedBox**, **UnusedDed**, **Reservation**, and **DedicatedRes** **UsageTypes**.

Capacity reservations always include a `cr-` in their resource ID, and the resource ID has the following format:

```
arn:aws:ec2:<region>:<account id>:capacity-reservation/cr-0be443example1db6f
```

### U


#### lineItem/UnblendedCost


The `UnblendedRate` multiplied by the `UsageAmount`.

#### lineItem/UnblendedRate


For capacity reservations with a **UsageType** of **Reservation** or **DedicatedRes**, the **UnblendedRate** is `0`. This is because the costs for capacity reservations are associated with the instance that provides the capacity, instead of with the capacity reservation itself.

#### lineItem/UsageAmount


How much of a capacity reservation you've used. Each capacity reservation can have multiple slots for an hour, enabling you to run more than one instance that uses the reservation during an hour. Therefore, it's possible to use more than one instance-hour in an hour. **UsageAmount** is calculated by multiplying the number of instance slots covered by the line item by the number of hours covered by the line item.

#### lineItem/UsageType


The type of usage that the line item covers. For Amazon EC2, the options are as follows:

##### BoxUsage


For this `UsageType`, the `UsageAmount` column is the number of instance-hours that you've used.

For example, a report covers 1 hour and has a capacity reservation line item that can cover 10 instances. If you use two instance-slots during the time period covered by the report, the **BoxUsage** **UsageAmount** covers the number of instance hours that you reserved and used. In this case, this is two (the number of used instance slots) multiplied by 1 hour (the time covered by the report) for a total of two. For a report that covers 1 day, the **UsageAmount** is two multiplied by 24, for a total of 48.

##### DedicatedRes


For a **UsageType** of **DedicatedRes**, the **UsageAmount** column describes how many instance-hours of a dedicated capacity reservation you reserved.

##### Reservation


For a **UsageType** of **Reservation**, the **UsageAmount** column describes how many instance-hours of a capacity reservation you reserved.

For example, if a report covers one hour and has a capacity reservation line item that can cover 10 instances, the **Reservation** **UsageAmount** covers the number of instance slots that you reserved. In this case, that's 10 (the number of available instance slots) multiplied by 1 hour (the time covered by the report) for a total of 10. For a report that covers 1 day, the **UsageAmount** would be 10 multiplied by 24, for a total of 240.

##### UnusedBox


For a **UsageType** of **UnusedBox**, the **UsageAmount** column describes how many instance-hours of a capacity reservation you reserved, but didn't use.

For example, a report covers 1 hour and has a capacity reservation line item that can cover 10 instances. If you didn't use eight instance-slots during the time period covered by the report, the **UnusedBox** **UsageAmount** covers the number of instance hours that you reserved but didn't use. In this case, that's eight (the number of unused instance slots) multiplied by 1 hour (the time covered by the report) for a total of eight. For a report that covers 1 day, the **UsageAmount** is eight multiplied by 24, for a total of 192.

##### UnusedDed


For a **UsageType** of **UnusedDed**, the **UsageAmount** column describes how many instance-hours of a dedicated capacity reservation that you reserved, but didn't use.
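The examples above (a 10-slot capacity reservation with 2 slots used over a 1-hour report) can be summarized in one short sketch:

```python
# Sketch: UsageAmount arithmetic for a capacity reservation with 10 instance
# slots over a 1-hour report, of which 2 slots were used.
slots = 10
used_slots = 2
hours = 1   # time covered by the report; use 24 for a daily report

reservation_usage = slots * hours            # Reservation line item
box_usage = used_slots * hours               # BoxUsage line item
unused_box = (slots - used_slots) * hours    # UnusedBox line item

print(reservation_usage, box_usage, unused_box)   # 10 2 8
```

For a report covering 1 day, set `hours = 24` and the same arithmetic yields 240, 48, and 192, matching the worked examples above.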

# Understanding data transfer charges


You can identify your AWS data transfer charges using the [lineItem/UsageType](Lineitem-columns.md#Lineitem-details-U-UsageType) column of your AWS CUR.

**Note**  
Data transfer charges can vary depending on the services used and the source AWS Region. Refer to the service's pricing page for detailed pricing information. For example, see [Amazon EC2 On-Demand Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) for details about Amazon EC2 data transfer pricing.

## Data transfer within an AWS Region


Data transfer between Availability Zones in the same AWS Region has a **UsageType** of `Region-DataTransfer-Regional-Bytes`. For example, the `USE2-DataTransfer-Regional-Bytes` usage type identifies charges for data transfer between Availability Zones in the US East (Ohio) Region.

For a given resource, you're charged for both inbound and outbound traffic in a data transfer within an AWS Region. This means that for each metered resource, you'll see two `DataTransfer-Regional-Bytes` line items for each data transfer. Check the service's pricing page for details, because some services offer in-Region traffic at no cost.

## Data transfer between AWS Regions


Data transfer between different AWS Regions can have the following usage types:
+ `Source Region-Destination Region-AWS-In-Bytes`: Measures incoming data transfer TO the destination Region FROM another specific AWS Region.
+ `Source Region-Destination Region-AWS-Out-Bytes`: Measures outgoing data transfer FROM the source Region TO another specific AWS Region.
+ `Source Region-AWS-In-Bytes`: This usage type appears when traffic flows via VPC Peering.
+ `Source Region-AWS-Out-Bytes`: This usage type appears when traffic flows via VPC Peering.

For each resource, data transfer between AWS Regions corresponds to two line items in your report:
+ A line item for the data transferred into the destination Region 
+ A line item for the data transferred out from the source Region

There's no charge for the data transferred into the destination Region. The data transfer charge is determined by the data transferred out from the source Region.

For example, a data transfer from the `USE2` Region to the `APS3` Region will have both a `APS3-USE2-AWS-In-Bytes` line item and a `USE2-APS3-AWS-Out-Bytes` line item. The `APS3-USE2-AWS-In-Bytes` line item has no corresponding charge. The data transfer charge is associated with the `USE2-APS3-AWS-Out-Bytes` line item.
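A minimal sketch of this rule; `is_charged` is a hypothetical helper for inter-Region usage types, not an AWS API:

```python
# Sketch: only the -AWS-Out-Bytes side of an inter-Region transfer carries
# the charge; the matching -AWS-In-Bytes line item is free.
def is_charged(usage_type: str) -> bool:
    """Return True when an inter-Region line item is billed."""
    return usage_type.endswith("-AWS-Out-Bytes")

print(is_charged("USE2-APS3-AWS-Out-Bytes"))  # True: charged at the source
print(is_charged("APS3-USE2-AWS-In-Bytes"))   # False: no corresponding charge
```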

## Data transfer out to the internet


Data transfer from AWS to the internet has a **UsageType** of `Region-DataTransfer-Out-Bytes`. For example, the `USE2-DataTransfer-Out-Bytes` usage type identifies charges for data transfer from the `USE2` Region to the internet.

There’s no charge for data transfer from the internet to AWS.

**Note**  
Data transfer usage types that don’t have the Region prefix, such as `DataTransfer-Regional-Bytes` or `DataTransfer-Out-Bytes`, represent data transfer from the US East (N. Virginia) Region.

## Direct Connect traffic


Direct Connect data transfer over a public virtual interface has usage types that end with `DataXfer-In` or `DataXfer-Out`.

Direct Connect data transfer over a private or transit virtual interface has usage types that end with `DataXfer-In:dc.3` or `DataXfer-Out:dc.3`.

## S3 Transfer Acceleration traffic


Amazon S3 data transfer using S3 Transfer Acceleration has usage types that contain `ABytes`:
+ Between Amazon S3 and Amazon EC2: Usage types that end with `C3DataTransfer-In-ABytes` or `C3DataTransfer-Out-ABytes`
+ Between Amazon S3 and the internet: Usage types that end with `DataTransfer-In-ABytes` or `DataTransfer-Out-ABytes`
+ Between Amazon S3 and CloudFront: Usage types that end with `CloudFront-In-ABytes` or `CloudFront-Out-ABytes`
+ Between Amazon S3 buckets in different AWS Regions: Usage type of `Source Region-Destination Region-AWS-Out-ABytes`

## CloudFront traffic


CloudFront data transfer has a usage type of `Region-DataTransfer-Out-Bytes` or `Region-DataTransfer-Out-OBytes` coupled with the product code `AmazonCloudFront`. The Region prefix in the usage type refers to the CloudFront Edge location used in the data transfer. For example, the `AP-DataTransfer-Out-Bytes` usage type identifies charges for data transfer from the AP Region to the internet.

**Tip**  
Use the [lineItem/ProductCode](Lineitem-columns.md#Lineitem-details-P-ProductCode) column to distinguish CloudFront data transfer from data transfer out to the internet. The usage types for these data transfer types look similar.

# Understanding split cost allocation data


You can use Cost and Usage Reports (AWS CUR) to track your Amazon ECS and Amazon EKS container costs. Using split cost allocation data, you can allocate your container costs to individual business units and teams, based on how your container workloads consume shared compute and memory resources. Split cost allocation data introduces cost and usage data for container-level resources (that is, ECS tasks and Kubernetes pods) to AWS CUR; previously, AWS CUR supported costs only at the EC2 instance level. Container-level costs are derived from the amortized cost of each EC2 instance and the percentage of CPU and memory resources consumed by the containers that ran on that instance.

For accelerated computing instances used with Amazon EKS, split cost allocation data includes resource allocation for specialized processors alongside CPU and memory. This covers NVIDIA and AMD GPUs, AWS Trainium, and AWS Inferentia accelerators. The feature is available only for Amazon EKS environments and provides pod-level resource reservation data for these accelerated computing resources. This allows you to track and allocate costs for workloads that use these specialized processors, such as AI/ML applications and other computationally intensive tasks. For a current list of accelerated computing instances, see [Accelerated Computing](https://aws.amazon.com/ec2/instance-types/#Accelerated_Computing).

Split cost allocation data introduces new usage records and new cost metric columns for each containerized resource ID (that is, ECS task and Kubernetes pod) in AWS CUR. For more information, see [Split line item details](https://docs.aws.amazon.com/cur/latest/userguide/split-line-item-columns.html).

When you include split cost allocation data in AWS CUR, two new usage records are added for each ECS task and Kubernetes pod per hour to reflect the CPU and memory costs. To estimate the number of new line items in AWS CUR per day, use the following formula:

For ECS: `(number of tasks * average task lifetime * 2) * 24`

For EKS: `(number of pods * average pod lifetime * 2) * 24`

For example, if you have 1,000 pods running each hour across a cluster of 10 EC2 instances and each pod's lifetime is less than 1 hour, then: 

`(1000 * 1 * 2) * 24 = 48,000 new usage records in AWS CUR`

For accelerated computing instances in Amazon EKS, three new usage records are added for each Kubernetes pod per hour to reflect the accelerator, CPU, and memory costs. To estimate the number of new line items in AWS CUR per day, use the following formula:

For EKS with accelerated computing: `(number of pods * average pod lifetime * 3) * 24`

For example, if you have 1,000 pods running each hour across a cluster of 10 EC2 instances and the lifetime for each pod is less than one hour, then: `(1000 * 1 * 3) * 24 = 72,000 new usage records in AWS CUR`
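The two estimates above can be collapsed into one helper. This is a sketch; the function name and parameters are this example's own:

```python
# Estimate new split cost allocation usage records per day. records_per_hour
# is 2 (CPU + memory) for ECS tasks and EKS pods, or 3 (accelerator + CPU +
# memory) for EKS pods on accelerated computing instances.
def estimated_daily_records(num_resources, avg_lifetime_hours, records_per_hour):
    return num_resources * avg_lifetime_hours * records_per_hour * 24

print(estimated_daily_records(1000, 1, 2))  # -> 48000 (EKS, CPU + memory)
print(estimated_daily_records(1000, 1, 3))  # -> 72000 (EKS with accelerators)
```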

**Note**  
For ECS: When it comes to AWS cost allocation tags, you can use Amazon ECS-managed tags or user-added tags for your Cost and Usage Reports. These tags apply to all new ECS split cost allocation data usage records. For more information, see [Tagging your ECS resources for billing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-using-tags.html#tag-resources-for-billing).  
For EKS: Split cost allocation data creates new cost allocation tags for some Kubernetes attributes. These tags include `aws:eks:cluster-name`, `aws:eks:deployment`, `aws:eks:namespace`, `aws:eks:node`, `aws:eks:workload-name`, and `aws:eks:workload-type`.  
`aws:eks:cluster-name`, `aws:eks:namespace`, and `aws:eks:node` are populated with the name of the cluster, namespace, and node, respectively.  
`aws:eks:workload-type` is populated only if exactly one workload manages the pod and that workload is one of the built-in types: `ReplicaSet`, `StatefulSet`, `Job`, `DaemonSet`, or `ReplicationController`. In that case, `aws:eks:workload-name` contains the name of the workload. For more information, see [Workloads](https://kubernetes.io/docs/concepts/workloads/) in the *Kubernetes Documentation*.  
`aws:eks:deployment` is populated only for the workload type `ReplicaSet`. It contains the name of the deployment that created the `ReplicaSet`.
These tags apply to all new EKS split cost allocation data usage records. These tags are enabled for cost allocation by default. If you previously used and disabled the `aws:eks:cluster-name` tag, then split cost allocation data keeps this setting and doesn't enable the tag. You can enable it from the [Cost allocation tags](https://console.aws.amazon.com/billing/home#/tags) console page.

# Enabling split cost allocation data


**Note**  
Split cost allocation data is not available in Cost Explorer. It is available in legacy Cost and Usage Reports (CUR) and Cost and Usage Report 2.0 (CUR 2.0) with Data Exports.

As a prerequisite, you must opt in to split cost allocation data through the Cost Management preferences.

**To opt in to split cost allocation data**

1. Open the Billing and Cost Management console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. In the navigation pane, choose **Cost Management preferences**.

1. Under **General**, in the **Split cost allocation data** section, choose from the following:
   + **Amazon Elastic Container Service (Amazon ECS)** to opt in to Amazon ECS only.
   + **Amazon Elastic Kubernetes Service (Amazon EKS)** to opt in to Amazon EKS only. For Amazon EKS, choose from the following:
     + **Resource requests**: This allocates your Amazon EC2 costs by Kubernetes pod CPU and memory resource requests only. This encourages application teams to provision only what they need.
     + **Amazon Managed Service for Prometheus**: This allocates your Amazon EC2 costs by the higher of Kubernetes pod CPU and memory resource requests and actual utilization. This ensures each application team pays for what they use. To learn more about setting up Amazon Managed Service for Prometheus, see [Setting up](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-setting-up.html) in the *Amazon Managed Service for Prometheus user guide*. 

       Prerequisite: You must enable all features in AWS Organizations. To learn more, see [Enabling all features in your organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html) in the *Organizations user guide*.
     + **Amazon CloudWatch Container Insights**: This provides more granular cost visibility for your clusters running multiple application containers using shared EC2 instances, enabling better cost allocation for the shared costs of your EKS clusters.

**Note**  
Only regular and payer accounts have access to the AWS Cost Management preferences and can opt in to split cost allocation data. Once opted in, member accounts can view the data in the Cost and Usage Reports.
If you choose resource requests, split cost allocation data uses only the pods configured with memory and CPU requests. Pods without any requests won't have split cost data.
If you choose Amazon Managed Service for Prometheus, you need to enable all features in AWS Organizations. For more information, see [Enabling all features in your organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html). In addition, split cost allocation data creates a new service-linked role, which enables access to AWS services and resources used or managed by split cost allocation data.
For accelerated computing instances, only the Resource requests option is supported. Neither Amazon Managed Service for Prometheus nor Amazon CloudWatch Container Insights is supported for these instances. When you use accelerated computing instances, the system defaults to Resource requests to compute accelerator, CPU, and memory costs, even if other measurement options are enabled.

Once you’ve opted in, you can choose to have cost and usage data for container-level resources included in your report during step one of report creation or later by editing the report details.

**To include cost and usage data in your report**

1. Open the Billing and Cost Management console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. In the navigation pane, under **Legacy Pages**, choose **Cost and Usage Reports**.

1. Whether creating a new report or editing an existing report, in the **Specify report details** page, under **Report content**, select **Split cost allocation data**.

**Note**  
You can also use the AWS CUR API or the AWS Command Line Interface (CLI) to manage your split cost allocation data preferences.

Split cost allocation data enables cost visibility for all Amazon ECS and Amazon EKS container objects across your entire consolidated billing family (payer and linked accounts). Once activated, split cost allocation data automatically scans for tasks and containers. It ingests the telemetry usage data for your container workloads and prepares the granular cost data for the current month.

**Note**  
It can take up to 24 hours for the data to be visible in AWS CUR.

For information about managing access to Billing and Cost Management console pages, see [Overview of managing access permissions](https://docs.aws.amazon.com/cost-management/latest/userguide/control-access-billing.html).

For information about AWS Cost Management preferences and controlling access to Cost Explorer, see [Controlling access to Cost Explorer](https://docs.aws.amazon.com/cost-management/latest/userguide/ce-access.html).

# Example of split cost allocation data


The following example shows how split cost allocation data is calculated by computing the cost of individual Amazon ECS services and tasks in Amazon ECS clusters, and of Kubernetes namespaces and pods in Amazon EKS clusters. The rates used throughout the example are for illustrative purposes only.

**Note**  
The example demonstrates Kubernetes namespaces and pods running in Amazon EKS clusters. The same cost model applies to Amazon ECS services and tasks running in an Amazon ECS cluster.

You have the following usage in a single hour:
+ Single instance (m5.xlarge) shared cluster with two namespaces and four pods, running for the duration of a full hour.
+ Instance configuration is 4 vCPU and 16 GB of memory.
+ Amortized cost of the instance is \$1/hr.

Split cost allocation data uses relative unit weights for CPU and memory based on a 9:1 ratio, derived from the per-vCPU-hour and per-GB-hour prices in [AWS Fargate](https://aws.amazon.com/fargate/pricing/).

## Step 1: Compute the unit cost for CPU and memory


`Unit-cost-per-resource = Hourly-instance-cost/((Memory-weight * Memory-available) + (CPU-weight * CPU-available))`

= \$1/((1 * 16 GB) + (9 * 4 vCPU)) = \$0.02

`Cost-per-vCPU-hour = CPU-weight * Unit-cost-per-resource`

= 9 * \$0.02 = \$0.17

`Cost-per-GB-hour = Memory-weight * Unit-cost-per-resource`

= 1 * \$0.02 = \$0.02


****  

| Instance | Instance type | vCPU-available | Memory-available | Amortized-cost-per-hour | Cost-per-vCPU-hour | Cost-per-GB-hour | 
| --- | --- | --- | --- | --- | --- | --- | 
| Instance1 | m5.xlarge | 4 | 16 | \$1 | \$0.17 | \$0.02 | 
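Step 1 can be checked with a few lines of Python. This is a sketch (the variable names are this example's own); the rates match the table after rounding:

```python
# Unit cost per weighted resource for the $1/hr m5.xlarge (4 vCPU, 16 GB),
# with the 9:1 CPU:memory weights described above.
hourly_instance_cost = 1.00
cpu_weight, memory_weight = 9, 1
vcpu_available, memory_available = 4, 16

unit_cost_per_resource = hourly_instance_cost / (
    memory_weight * memory_available + cpu_weight * vcpu_available
)                                                          # 1 / 52 ~= $0.0192
cost_per_vcpu_hour = cpu_weight * unit_cost_per_resource   # ~= $0.17
cost_per_gb_hour = memory_weight * unit_cost_per_resource  # ~= $0.02
```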

## Step 2: Compute the allocated capacity and instance unused capacity

+ Allocated capacity: The memory and vCPU allocated to the Kubernetes pod from the parent EC2 instance, defined as the maximum of used and reserved capacity.
**Note**  
If memory or vCPU usage data is unavailable, reservation data will be used instead. For more information, see [Amazon ECS usage reports](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/usage-reports.html), or [Amazon EKS cost monitoring](https://docs.aws.amazon.com/eks/latest/userguide/cost-monitoring.html).
+ Instance unused capacity: The unused capacity of vCPU and memory.

`Pod1-Allocated-vCPU = Max (1 vCPU, 0.1 vCPU)` = 1 vCPU

`Pod1-Allocated-memory = Max (4 GB, 3 GB)` = 4 GB

`Instance-Unused-vCPU = Max (CPU-available - SUM(Allocated-vCPU), 0)` = Max (4 – 4.9, 0) = 0

`Instance-Unused-memory = Max (Memory-available - SUM(Allocated-memory), 0)` = Max (16 – 14, 0) = 2 GB

In this example, the instance has vCPU oversubscription, attributed to Pod2, which used more vCPU than it reserved.


****  

| Pod name | Namespace | Reserved-vCPU | Used-vCPU | Allocated-vCPU | Reserved-memory | Used-memory | Allocated-memory | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
| Pod1 | Namespace1 | 1 | 0.1 | 1 | 4 | 3 | 4 | 
| Pod2 | Namespace2 | 1 | 1.9 | 1.9 | 4 | 6 | 6 | 
| Pod3 | Namespace1 | 1 | 0.5 | 1 | 2 | 2 | 2 | 
| Pod4 | Namespace2 | 1 | 0.5 | 1 | 2 | 2 | 2 | 
| Unused | Unused |  |  | 0 |  |  | 2 | 
|  |  |  |  | 4.9 |  |  | 16 | 
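The rule in step 2 (allocated capacity is the maximum of used and reserved; unused capacity is whatever remains, floored at zero) can be sketched as follows, using the pod values from the table above:

```python
# (reserved_vcpu, used_vcpu, reserved_gb, used_gb) per pod, from the table above.
pods = {
    "Pod1": (1, 0.1, 4, 3),
    "Pod2": (1, 1.9, 4, 6),
    "Pod3": (1, 0.5, 2, 2),
    "Pod4": (1, 0.5, 2, 2),
}
allocated_vcpu = {p: max(r, u) for p, (r, u, _, _) in pods.items()}
allocated_gb = {p: max(r, u) for p, (_, _, r, u) in pods.items()}

# Unused capacity on the 4 vCPU / 16 GB instance, floored at zero.
unused_vcpu = max(4 - sum(allocated_vcpu.values()), 0)   # max(4 - 4.9, 0) = 0
unused_gb = max(16 - sum(allocated_gb.values()), 0)      # max(16 - 14, 0) = 2
```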

## Step 3: Compute the split usage ratios

+ Split usage ratio: The percentage of CPU or memory used by the Kubernetes pod compared to the overall CPU or memory available on the EC2 instance.
+ Unused ratio: The percentage of CPU or memory used by the Kubernetes pod compared to the overall CPU or memory used on the EC2 instance (that is, not factoring in the unused CPU or memory on the instance).

`Pod1-vCPU-split-usage-ratio = Allocated-vCPU / Total-vCPU`

= 1 vCPU / 4.9vCPU = 0.204

`Pod1-Memory-split-usage-ratio = Allocated-GB / Total-GB`

= 4 GB/ 16GB = 0.250

`Pod1-vCPU-unused-ratio = Pod1-vCPU-split-usage-ratio / (Total-CPU-split-usage-ratio – Instance-unused-CPU)` (set to 0 if Instance-unused-CPU is 0)

= 0 (since Instance-unused-CPU is 0)

`Pod1-Memory-unused-ratio = Pod1-Memory-split-usage-ratio / (Total-Memory-split-usage-ratio – Instance-unused-memory)` (set to 0 if Instance-unused-memory is 0)

= 0.250 / (1-0.125) = 0.286


****  

| Pod name | Namespace | vCPU-split-usage-ratio | vCPU-unused-ratio | Memory-split-usage-ratio | Memory-unused-ratio | 
| --- | --- | --- | --- | --- | --- | 
| Pod1 | Namespace1 | 0.204 | 0 | 0.250 | 0.286 | 
| Pod2 | Namespace2 | 0.388 | 0 | 0.375 | 0.429 | 
| Pod3 | Namespace1 | 0.204 | 0 | 0.125 | 0.143 | 
| Pod4 | Namespace2 | 0.204 | 0 | 0.125 | 0.143 | 
| Unused | Unused | 0 |  | 0.125 |  | 
|  |  | 1 |  | 1 |  | 
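Step 3 for Pod1 can be sketched as follows. Note that the denominators include the instance's unused capacity, so they total 4.9 vCPU and 16 GB:

```python
# Pod1's split usage ratios and its unused-capacity ratios.
pod1_alloc_vcpu, pod1_alloc_gb = 1, 4
total_vcpu, total_gb = 4.9, 16.0                 # allocated + unused capacity
instance_unused_vcpu_ratio = 0 / total_vcpu      # no unused vCPU
instance_unused_gb_ratio = 2 / total_gb          # 0.125

pod1_vcpu_split_ratio = pod1_alloc_vcpu / total_vcpu   # ~= 0.204
pod1_gb_split_ratio = pod1_alloc_gb / total_gb         # 0.250

# Unused ratio: the pod's share of the capacity that *was* used
# (set to 0 when the instance has no unused capacity of that resource).
pod1_vcpu_unused_ratio = 0.0                           # Instance-unused-CPU is 0
pod1_gb_unused_ratio = pod1_gb_split_ratio / (1 - instance_unused_gb_ratio)  # ~= 0.286
```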

## Step 4: Compute the split cost and unused costs

+ Split cost: The pay per use cost allocation of the EC2 instance cost based on allocated CPU and memory usage by the Kubernetes pod.
+ Unused instance cost: The cost of unused CPU or memory resources on the instance.

`Pod1-Split-cost = (Pod1-vCPU-split-usage-ratio * vCPU-available * Cost-per-vCPU-hour) + (Pod1-Memory-split-usage-ratio * Memory-available * Cost-per-GB-hour)`

= (0.204 * 4 vCPU * \$0.17) + (0.25 * 16 GB * \$0.02) = \$0.22

`Pod1-Unused-cost = (Pod1-vCPU-unused-ratio * Instance-vCPU-unused-ratio * vCPU-available * Cost-per-VCPU-hour) + (Pod1-Memory-unused-ratio * Instance-Memory-unused ratio * Memory-available * Cost-per-GB-hour)`

= (0 * 0 * 4 * \$0.17) + (0.286 * 0.125 * 16 * \$0.02) = \$0.01

`Pod1-Total-split-cost = Pod1-Split-cost + Pod1-Unused-cost`

= \$0.23


****  

| Pod name | Namespace | Split-cost | Unused-cost | Total-split-cost | 
| --- | --- | --- | --- | --- | 
| Pod1 | Namespace1 | \$0.22 | \$0.01 | \$0.23 | 
| Pod2 | Namespace2 | \$0.38 | \$0.02 | \$0.40 | 
| Pod3 | Namespace1 | \$0.18 | \$0.01 | \$0.19 | 
| Pod4 | Namespace2 | \$0.18 | \$0.01 | \$0.19 | 
| Unused | Unused | \$0.04 |  |  | 
|  |  | \$1 | \$0.04 | \$1 | 
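Pod1's row can be reproduced end to end with the unrounded rates. This is a sketch; the 0.286 and 0.125 ratios come from step 3:

```python
# Pod1's split cost plus its share of unused capacity on the $1/hr m5.xlarge.
unit_cost = 1.00 / (1 * 16 + 9 * 4)          # step 1, unrounded
cost_per_vcpu_hour = 9 * unit_cost
cost_per_gb_hour = 1 * unit_cost

split_cost = (1 / 4.9) * 4 * cost_per_vcpu_hour + (4 / 16) * 16 * cost_per_gb_hour
unused_cost = 0.286 * 0.125 * 16 * cost_per_gb_hour  # memory only; the CPU term is 0
total_split_cost = split_cost + unused_cost          # ~= $0.23
```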

The cost of each namespace is the sum of the costs of the pods associated with it.

Total cost of Namespace1 = \$0.23 + \$0.19 = \$0.42

Total cost of Namespace2 = \$0.40 + \$0.19 = \$0.59

## Sample AWS CUR


If you have a Savings Plan covering the entire usage for the EC2 instance in the billing period, amortized costs are computed using `savingsPlan/SavingsPlanEffectiveCost`.

![\[Table showing EC2 instance usage details with Savings Plans and cost breakdown.\]](http://docs.aws.amazon.com/cur/latest/userguide/images/savings-plan-entire-usage.png)


If you have a Savings Plan covering partial usage for the EC2 instance in the billing period, and the rest of the EC2 instance usage is billed at On-Demand rates, EC2 instance amortized costs are computed using `savingsPlan/SavingsPlanEffectiveCost` (for SavingsPlanCoveredUsage) + `lineItem/UnblendedCost` (for On-Demand usage).

![\[Table showing EC2 instance usage details, costs, and savings plan information.\]](http://docs.aws.amazon.com/cur/latest/userguide/images/savings-plan-partial-usage.png)


# Example of split cost allocation data for accelerated instances


The following example shows how split cost allocation data is calculated by computing the cost of Kubernetes namespaces and pods in Amazon EKS clusters. The rates used throughout the example are for illustrative purposes only.

You have the following usage in a single hour:
+ A single EC2 instance running four pods across two namespaces; you want to understand the costs of each namespace.
+ The EC2 instance is p3.16xlarge with 8 GPU, 64 vCPU, and 488 GB RAM.
+ Amortized cost of the instance is \$10/hr.

Split cost allocation data normalizes the cost per resource based on a relative weight ratio of 9:1 for GPU to (CPU + memory). This implies that a unit of GPU costs 9x as much as a unit of CPU and memory combined. Within the combined weight, CPU and memory are assigned a 9:1 ratio. For a non-accelerated EC2 instance, the current default behavior is kept: the CPU-to-memory weight defaults to 9:1.

## Step 1: Compute the unit cost


Based on the GPU, CPU, and memory resources on the EC2 instance, and using the ratios mentioned above, split cost allocation data first calculates the unit cost per GPU-hour, vCPU-hour, and GB-hour.

`GPU-Weight = 9`

`CPU+Memory-Weight = 1`

`CPU-Weight = 1 * 0.9 = 0.9`

`Memory-Weight = 1 * 0.1 = 0.1`

`Hourly-Instance-Cost = $10`

`GPU-Available = 8`

`Memory-Available = 488`

`CPU-Available = 64`

`UnitCostPerResource = Hourly-Instance-Cost / ((GPU-Weight * GPU-Available) + (Memory-Weight * Memory-Available) + (CPU-Weight * CPU-Available)) = $10/((9 * 8 GPU) + (0.1 * 488 GB) + (0.9 * 64 vCPU)) = $0.056`

`Cost-per-GPU-Hour = GPU-Weight * UnitCostPerResource = 9 * $0.056 = $0.504`

`Cost-per-vCPU-Hour = CPU-Weight * UnitCostPerResource = 0.9 * $0.056 = $0.05`

`Cost-per-GB-Hour = Memory-Weight * UnitCostPerResource = 0.1 * $0.056 = $0.00506`


**Table 1: Unit cost calculation**  

| Instance | Instance Type | vCPU Available | GPU Available | Memory Available | Amortized Cost per Hour | Cost per vCPU-Hour | Cost per GPU-Hour | Cost per GB-Hour | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | 
| Instance 1 | p3.16xlarge | 64 | 8 | 488 | \$10 | \$0.05 | \$0.50 | \$0.005 | 
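As a quick check of step 1 (a sketch using the weights above; the exact decimals differ slightly from the rounded figures in the text):

```python
# Unit cost for the $10/hr p3.16xlarge (8 GPU, 64 vCPU, 488 GB), with
# GPU weighted 9 and the combined CPU+memory weight of 1 split 9:1.
hourly_cost = 10.0
gpu_w, cpu_w, mem_w = 9, 0.9, 0.1
gpu_avail, vcpu_avail, mem_avail = 8, 64, 488

unit = hourly_cost / (gpu_w * gpu_avail + mem_w * mem_avail + cpu_w * vcpu_avail)
cost_per_gpu_hour = gpu_w * unit    # ~= $0.504
cost_per_vcpu_hour = cpu_w * unit   # ~= $0.05
cost_per_gb_hour = mem_w * unit     # ~= $0.0056
```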

## Step 2: Calculate allocated and unused capacity


Allocated Capacity  
The GPU, vCPU, and memory allocated to the Kubernetes pod from the parent EC2 instance, defined as the maximum of reserved and used capacity.

Instance Unused Capacity  
The unused GPU, vCPU, and memory capacity on the instance.

`Pod1-Allocated-GPU = Max (1 GPU, 1 GPU) = 1 GPU`

`Pod1-Allocated-vcpu = Max (16 vcpu, 4 vcpu) = 16 vcpu`

`Pod1-Allocated-Memory = Max (100 GB, 60 GB) = 100 GB`

`Instance-Unused-GPU = Max (GPU-Available - SUM(Allocated-GPU), 0)`

`= Max (8 – 8, 0) = 0`

`Instance-Unused-vcpu = Max (CPU-Available - SUM(Allocated-vcpu), 0)`

`= Max (64 – 66, 0) = 0`

`Instance-Unused-Memory = Max (Memory-Available - SUM(Allocated-Memory), 0)`

`= Max (488 – 440, 0) = 48 GB`

In this example, the instance has vCPU oversubscription, attributed to Pod 2, which used more GPU and vCPU than what was reserved.


**Table 2: Calculate Allocated and Unused Capacity**  

| Pod Name | Namespace | vcpu Reserved | vcpu Used | vcpu Allocated | GPU Reserved | GPU used | GPU Allocated | Memory Reserved | Memory Used | Memory Allocated | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 
| Pod 1 | Namespace 1 | 16 | 4 | 16 | 1 | 1 | 1 | 100 | 60 | 100 | 
| Pod 2 | Namespace 2 | 16 | 18 | 18 | 2 | 3 | 3 | 100 | 140 | 140 | 
| Pod 3 | Namespace 1 | 16 | 4 | 16 | 2 | 1 | 2 | 100 | 60 | 100 | 
| Pod 4 | Namespace 2 | 16 | 4 | 16 | 2 | 2 | 2 | 100 | 40 | 100 | 
| Unused | Unused | 0 | 34 | 0 | 1 | 1 | 0 | 88 | 188 | 48 | 
| Total |  | 64 | 32 | 66 | 8 | 8 | 8 | 488 | 488 | 488 | 

## Step 3: Compute the split usage and utilization ratios


Split usage ratio  
The percentage of GPU, CPU, or memory used by the Kubernetes pod compared to the overall GPU, CPU, or memory available on the EC2 instance.

Unused ratio  
The percentage of GPU, CPU, or memory used by the Kubernetes pod compared to the overall GPU, CPU, or memory used on the EC2 instance (that is, not factoring in the unused capacity on the instance).

`Pod1-GPU-Utilization-Ratio = Allocated-GPU / Total-GPU`

`= 1 gpu / 8 gpu = 0.125`

`Pod1-vcpu-Utilization-Ratio = Allocated-vcpu / Total-vcpu`

`= 16 vcpu / 66 vcpu = 0.24`

`Pod1-Memory-Utilization-Ratio = Allocated-GB / Total-GB`

`= 100 GB/ 488GB = 0.205`

`Pod1-GPU-Split-Ratio = Pod1-GPU-Utilization-Ratio / (Total-GPU-Utilization-Ratio – Instance-Unused-GPU). Set to 0 if Instance-Unused-GPU = 0`

`= 0 since Instance-Unused-GPU is 0`

`Pod1-vcpu-Split-Ratio = Pod1-CPU-Utilization-Ratio / (Total-CPU-Utilization-Ratio – Instance-Unused-CPU). Set to 0 if Instance-Unused-CPU = 0`

`= 0 since Instance-Unused-CPU is 0`

`Pod1-Memory-Split-Ratio = Pod1-Memory-Utilization-Ratio / (Total-Memory-Utilization-Ratio – Instance-Unused-Memory). Set to 0 if Instance-Unused-Memory = 0`

`= 0.205 / (1 - 0.098) = 0.227`


**Table 3: Compute Utilization ratios**  

| Pod Name | Namespace | vcpu Utilization | vcpu Split Ratio | GPU Utilization | GPU Split Ratio | Memory Utilization | Memory Split Ratio | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
| Pod 1 | Namespace 1 | 0.242 | 0 | 0.125 | 0 | 0.205 | 0.227 | 
| Pod 2 | Namespace 2 | 0.277 | 0 | 0.375 | 0 | 0.287 | 0.318 | 
| Pod 3 | Namespace 1 | 0.242 | 0 | 0.25 | 0 | 0.205 | 0.227 | 
| Pod 4 | Namespace 2 | 0.242 | 0 | 0.25 | 0 | 0.205 | 0.227 | 
| Unused | Unused | 0 |  |  |  | 0.098 |  | 
|  |  | 1 | 0 | 1 | 0 | 1 | 1 | 

## Step 4: Compute the split cost and unused costs


Split Cost  
The pay-per-use cost allocation of the EC2 instance cost, based on the GPU, CPU, and memory allocated to the Kubernetes pods.

Unused Instance Cost  
The cost of unused GPU, CPU, or memory resources on the instance.

`Pod1-Split-Cost = (Pod1-GPU-Utilization-Ratio * GPU-Available * Cost per GPU-Hour) + (Pod1-vcpu-Utilization-Ratio * vcpu-Available * Cost per vcpu-Hour) + (Pod1-Memory-Utilization-Ratio * Memory-Available * Cost per GB-Hour)`

`= (.125*8gpu*$0.504) + (0.242 * 64 vcpu * $0.05) + (0.204 * 488GB * $0.00506) = 0.504+ 0.774 + 0.503 = $1.85`

`Pod1-Unused-Cost = (Pod1-GPU-Split-Ratio * Instance-GPU-unused-ratio * GPU-Available * Cost-per-GPU-Hour) + (Pod1-vcpu-Split-Ratio * Instance-vcpu-unused-ratio * vcpu-Available * Cost-per-vcpu-Hour) + (Pod1-Memory-Split-Ratio * Instance-Memory-unused-ratio * Memory-Available * Cost-per-GB-Hour)`

`= (0 * 0 * 8 * $0.504) + (0 * $0.05) + (0.227 * 0.102 * 488 GB * $0.00506) = $0.06`

`Pod1-Total-Split-Cost = Pod1-Split-Cost + Pod1-Unused-Cost = $1.85 + $0.06 = $1.91`

Note: Unused cost = Unused-split-ratio * Instance-unused-ratio * Total-resource * Resource-hourly-cost.


**Table 4 - Summary of the Split and Unused costs calculated each hour for all Pods running within the cluster**  

| Pod Name | Namespace | Split Cost | Unused Cost | Total Cost | 
| --- | --- | --- | --- | --- | 
| Pod 1 | Namespace 1 | \$1.85 | \$0.06 | \$1.91 | 
| Pod 2 | Namespace 2 | \$3.18 | \$0.09 | \$3.26 | 
| Pod 3 | Namespace 1 | \$2.35 | \$0.06 | \$2.41 | 
| Pod 4 | Namespace 2 | \$2.35 | \$0.06 | \$2.41 | 
| Total |  |  |  | \$10 | 

# Using Kubernetes labels for cost allocation in EKS


Split cost allocation data supports Kubernetes labels as cost allocation tags for Amazon EKS clusters. While these labels are automatically imported as user-defined cost allocation tags, they require activation at the management account level. Once activated, you can use them to attribute pod-level costs in your Cost and Usage Reports (CUR) using custom attributes such as cost center, application, business unit, and environment.

This feature helps organizations accurately track and allocate costs in shared EKS environments across teams, projects, or departments. Using Kubernetes labels, you can allocate your Kubernetes costs based on your specific business requirements and organizational design.

## Prerequisites


As prerequisites for using Kubernetes labels with split cost allocation data:
+ You need to enable split cost allocation data in the AWS Billing and Cost Management console. This must be enabled at the management account level. For details, see [Enabling split cost allocation data](https://docs.aws.amazon.com/cur/latest/userguide/enabling-split-cost-allocation-data.html).
+ You need an EKS cluster for which you want to track split cost allocation data. This can be an existing cluster, or you can create a new one. For more information, see [Create an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) in the *Amazon EKS User Guide*.
+ You must have labels assigned to your pods in the EKS cluster. For more information on how to create labels in Kubernetes, see [Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) in the *Kubernetes Documentation*.

## Working with Kubernetes labels in EKS


Split cost allocation data supports up to 50 Kubernetes labels per pod, which are sorted alphabetically before being imported as cost allocation tags. Any labels beyond the first 50 are automatically discarded. If you need to add a new cost allocation tag after reaching the 50-label limit, you must first remove an existing label and ensure your new label falls within the first 50 when alphabetically sorted.

**Note**  
Some AWS managed services automatically add labels to EKS pods. These labels count toward the 50-label limit per pod and will appear on your cost allocation tags page.  
While Kubernetes label length limits differ from AWS tag limits, cost allocation tags have specific character limits: 128 characters for tag keys and 256 characters for tag values. Labels that exceed these limits are discarded and not presented as cost allocation tags. We recommend creating labels within these limits for cost allocation purposes.

The imported Kubernetes labels appear as cost allocation tags and must be activated at the payer account level. For more information on cost allocation tags and activation, see [Using user-defined cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html). The following cost allocation tag limits apply: 50 user-defined tags per resource and 500 user-defined tags per payer account. System-generated tags do not count toward these limits.

**Note**  
After you create and apply user-defined tags to your resources, it can take up to 24 hours for the tag keys to appear on your cost allocation tags page. Once you activate the tags, it can take an additional 24 hours for them to become active.

## Managing Kubernetes labels and cost allocation tags


You can add, delete, and edit Kubernetes labels in EKS, as well as deactivate the associated cost allocation tags. The following describes the expected behavior for each action.

**Adding a new label**

You can add a new Kubernetes label to a pod. If the 50-label limit has not been reached, the new label is imported and offered as a cost allocation tag, which you can then activate. However, if the limit has been reached, the new label is not imported, even if it falls within the first 50 labels in alphabetical order. You must first deactivate an existing cost allocation tag to import a new label.

**Editing a label**

Kubernetes does not allow you to edit a label key. To change a label key, you must remove the label and add a new one. However, you can edit label values, and the changes are reflected in your next CUR.

**Deleting a label**

You can remove a label from EKS pods. Note that removing a label does not automatically deactivate its associated cost allocation tag. Split cost allocation data will continue to populate in CUR until you explicitly deactivate the cost allocation tag.

**Deactivating a cost allocation tag**

You can deactivate any cost allocation tag created from Kubernetes labels. Once deactivated, data will no longer populate in the respective columns, and the column will be deleted from the next month’s CUR.

## Best practices for managing Kubernetes labels for cost allocation


Kubernetes labels provide significant flexibility in shared cost allocation modeling. To maximize the potential of this capability, we recommend following these best practices to optimize your cost management approach.

**Understanding label limits**

The 50-label-per-pod limit is based on alphabetical sorting. Only the first 50 alphabetically ordered labels will be imported for cost allocation. To ensure critical labels are included, carefully plan your label naming to ensure important labels appear within the first 50 when alphabetically sorted.

**Following character constraints**

AWS cost allocation tags have the following character limits:
+ Tag keys: 128 characters
+ Tag values: 256 characters

While Kubernetes allows labels that exceed these limits, any such labels are not imported. Design your labels within these limits to ensure successful cost allocation tracking.
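The selection rules above can be sketched as follows. This is an interpretation for illustration, not a reproduction of AWS's actual import logic, and the label names are made up:

```python
# Sketch: which pod labels survive the cost allocation tag limits.
# Interpretation: oversized labels are dropped, then the first 50
# alphabetically sorted labels are imported.
MAX_KEY, MAX_VALUE, MAX_LABELS = 128, 256, 50

def importable_labels(labels: dict) -> dict:
    sized = {k: v for k, v in labels.items()
             if len(k) <= MAX_KEY and len(v) <= MAX_VALUE}
    return dict(sorted(sized.items())[:MAX_LABELS])

labels = {f"team-{i:03d}": "payments" for i in range(60)}
labels["x" * 200] = "dropped"        # key exceeds the 128-character tag limit
kept = importable_labels(labels)
# kept contains team-000 through team-049; team-050 onward and the
# oversized key are not imported.
```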

**Adding new labels when at capacity**

When a pod has reached the 50-label limit and you need to add a new cost allocation label, follow these steps:

1. Review existing labels and identify a cost allocation tag to deactivate.

1. Deactivate the selected tag.

1. Add the new cost allocation label.

1. Verify the new label falls within the first 50 alphabetically sorted labels.

**Note**  
Remember that only the first 50 alphabetically sorted labels are used for cost allocation.

# Using split cost allocation data with Amazon Managed Service for Prometheus


Splitting the cost data for Amazon EKS requires that you collect and store metrics from your clusters, including memory and CPU usage. Amazon Managed Service for Prometheus can be used for this purpose.

Once you've opted in to split cost allocation data and your Amazon Managed Service for Prometheus workspace starts receiving the two required metrics (`container_cpu_usage_seconds_total` and `container_memory_working_set_bytes`), split cost allocation data recognizes the metrics and uses them automatically.

**Note**  
The two required metrics (`container_cpu_usage_seconds_total` and `container_memory_working_set_bytes`) are present in the default Prometheus scrape configuration and the default configuration provided with an AWS managed collector. However, if you customize these configurations, do not relabel, modify, or remove the following labels from the `container_cpu_usage_seconds_total` and `container_memory_working_set_bytes` metrics: `name`, `namespace`, and `pod`. If you relabel, modify, or remove these labels, it can impact the ingestion of your metrics.

You can use Amazon Managed Service for Prometheus to collect EKS metrics from a single usage account, in a single Region. The Amazon Managed Service for Prometheus workspace must be in that account and Region. You need one Amazon Managed Service for Prometheus instance for each usage account and Region for which you want to monitor the costs. You can collect metrics for multiple clusters in the Amazon Managed Service for Prometheus workspace, as long as they're in the same usage account and Region.

The following sections describe how to send the correct metrics from your EKS cluster to the Amazon Managed Service for Prometheus workspace.

## Prerequisites


As prerequisites for using Amazon Managed Service for Prometheus with split cost allocation data:
+ You need to enable split cost allocation data in the AWS Billing and Cost Management console. For details, see [Enabling split cost allocation data](https://docs.aws.amazon.com/cur/latest/userguide/enabling-split-cost-allocation-data.html). Opting in to split cost allocation data creates a service-linked role in each usage account to query Amazon Managed Service for Prometheus for the Amazon EKS cluster metrics in that account. For more information, see [Service-linked roles for split cost allocation data](https://docs.aws.amazon.com/cost-management/latest/userguide/split-cost-allocation-data-SLR.html).
+ You need an EKS cluster for which you want to track split cost allocation data. This can be an existing cluster, or you can create a new one. For more information, see [Create an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) in the *Amazon EKS User Guide*.
**Note**  
You will need the `EKS cluster ARN`, `security group IDs`, and at least two `subnet IDs` (in different Availability Zones) for use in later steps.  
(Optional) Set your EKS cluster’s authentication mode to either `API` or `API_AND_CONFIG_MAP`.
+ You need an Amazon Managed Service for Prometheus instance in the same account and Region as your EKS cluster. If you do not already have one, you can create one. For more information on creating an Amazon Managed Service for Prometheus instance, see [Create a workspace](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-create-workspace.html) in the *Amazon Managed Service for Prometheus User Guide*.
**Note**  
You will need the `Amazon Managed Service for Prometheus workspace ARN` for use in later steps.

## Forwarding EKS metrics to Amazon Managed Service for Prometheus


Once you have an EKS cluster and an Amazon Managed Service for Prometheus instance, you can forward the metrics from the cluster to the instance. You can send metrics in two ways.
+ [Option 1: Use an AWS managed collector.](https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data-resource-amp.html#use-managed-collector) This is the simplest way to send metrics from an EKS cluster to Amazon Managed Service for Prometheus. However, it can scrape metrics at most once every 30 seconds.
+ [Option 2: Create your own Prometheus agent.](https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data-resource-amp.html#create-prometheus-agent) In this case, you have more control over the scraping configuration, but you must manage the agent after creating it.

### Option 1: Using an AWS managed collector


Using an AWS managed collector (a *scraper*) is the simplest way to send metrics from an EKS cluster to an Amazon Managed Service for Prometheus instance. The following procedure steps you through creating an AWS managed collector. For more detailed information, see [AWS managed collectors](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-collector.html) in the *Amazon Managed Service for Prometheus User Guide*.

**Note**  
AWS managed collectors have a minimum scrape interval of 30 seconds. If you have short-lived pods, we recommend a 15-second scrape interval. To use a 15-second interval, follow option 2 to [create your own Prometheus agent](https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data-resource-amp.html#create-prometheus-agent).

There are three steps to create an AWS managed collector:

1. Create a scraper configuration.

1. Create the scraper.

1. Configure your EKS cluster to allow the scraper to access metrics.

*Step 1: Create a scraper configuration*

To create a scraper, you must have a scraper configuration. You can use the default configuration or create your own. There are three ways to get a scraper configuration:
+ Get the default configuration using the AWS CLI, by calling:

  ```
  aws amp get-default-scraper-configuration
  ```
+ Create your own configuration. For details, see the [Scraper configuration](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-collector-how-to.html#AMP-collector-configuration) instructions in the *Amazon Managed Service for Prometheus User Guide*.
+ Copy the sample configuration provided in that same [Scraper configuration](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-collector-how-to.html#AMP-collector-configuration) instructions in the *Amazon Managed Service for Prometheus User Guide*.

You can edit the scraper configuration, for example, to modify the scrape interval or to filter which metrics are scraped.

To filter the metrics that are scraped to just include the two that are needed for split cost allocation data, use the following scraper configuration:

```
global:
   scrape_interval: 30s
   #external_labels:
     #clusterArn: <REPLACE_ME>
scrape_configs:
  - job_name: kubernetes-nodes-cadvisor
    scrape_interval: 30s
    scrape_timeout: 10s
    scheme: https
    authorization:
      type: Bearer
      credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
    - role: node
    relabel_configs:
    - regex: (.+)
      replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
      source_labels:
      - __meta_kubernetes_node_name
      target_label: __metrics_path__
    - replacement: kubernetes.default.svc:443
      target_label: __address__
    metric_relabel_configs:
    - source_labels: [__name__]
      regex: 'container_cpu_usage_seconds_total|container_memory_working_set_bytes'
      action: keep
```

Once you have the scraper configuration, you must base64-encode it for use in *step 2*. The configuration is a YAML text file. To encode the file, you can use a website such as [https://www.base64encode.org/](https://www.base64encode.org/).
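If you prefer the command line, the following sketch encodes the configuration file with the standard `base64` utility. The file names (`scraper-config.yaml`, `scraper-config.b64`) and the minimal configuration content are illustrative stand-ins for your real files:

```shell
# Write a minimal illustrative scraper configuration (stand-in for your real file).
cat > scraper-config.yaml <<'EOF'
global:
  scrape_interval: 30s
EOF

# Encode the file as a single base64 line (GNU coreutils; on macOS,
# use "base64 -i scraper-config.yaml" instead of "-w0").
base64 -w0 scraper-config.yaml > scraper-config.b64

# Sanity check: decoding the blob reproduces the original file.
base64 -d scraper-config.b64 | diff - scraper-config.yaml && echo "round-trip OK"
```

Pass the contents of `scraper-config.b64` as the configuration blob value when you create the scraper in the next step.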

*Step 2: Create the scraper*

Now that you have a configuration file, you can create your scraper. Create a scraper using the following AWS CLI command, based on the variables outlined in the prerequisites section. Use information from your EKS cluster for the *<EKS-CLUSTER-ARN>*, *<SG-SECURITY-GROUP-ID>*, and *<SUBNET-ID>* fields, replace *<BASE64-CONFIGURATION-BLOB>* with the scraper configuration you created in the previous step, and replace *<AMP\_WORKSPACE\_ARN>* with your Amazon Managed Service for Prometheus workspace ARN.

```
aws amp create-scraper \
--source eksConfiguration="{clusterArn=<EKS-CLUSTER-ARN>,securityGroupIds=[<SG-SECURITY-GROUP-ID>],subnetIds=[<SUBNET-ID>]}" \
--scrape-configuration configurationBlob=<BASE64-CONFIGURATION-BLOB> \
--destination ampConfiguration={workspaceArn="<AMP_WORKSPACE_ARN>"}
```

Note down the `scraperId` that is returned for use in *step 3*.

*Step 3: Configure your EKS cluster to allow the scraper to access metrics*

If your EKS cluster’s authentication mode is set to either `API` or `API_AND_CONFIG_MAP`, your scraper automatically has the correct in-cluster access policy and can access your cluster. No further configuration is required, and metrics should be flowing to Amazon Managed Service for Prometheus.

If your EKS cluster’s authentication mode is not set to `API` or `API_AND_CONFIG_MAP`, you will need to manually configure the cluster to allow the scraper to access your metrics through a ClusterRole and ClusterRoleBinding. To learn how to enable these permissions, see [Manually configuring an EKS cluster for scraper access](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-collector-how-to.html#AMP-collector-eks-setup) in the *Amazon Managed Service for Prometheus User Guide*.

Once the scraper is active, verify that both metrics (`container_cpu_usage_seconds_total` and `container_memory_working_set_bytes`) are being pushed to your Amazon Managed Service for Prometheus workspace.

```
awscurl --service="aps" --region="<REGION>" "https://aps-workspaces.<REGION>.amazonaws.com/workspaces/<WorkSpace_ID>/api/v1/label/__name__/values"
```

Output:

```
{
"status": "success",
"data": [
"container_cpu_usage_seconds_total",
"container_memory_working_set_bytes",
"scrape_duration_seconds",
"scrape_samples_post_metric_relabeling",
"scrape_samples_scraped",
"scrape_series_added",
"up"
]
}
```

### Option 2: Creating your own Prometheus agent


If you can’t use the AWS managed collector, or already have your own Prometheus server, you can use your own Prometheus instance as an agent to scrape metrics from your EKS cluster and send them to Amazon Managed Service for Prometheus.

For detailed instructions on how to use your own Prometheus instance as an agent, see [Using a Prometheus instance as a collector](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-ingest-with-prometheus.html) in the *Amazon Managed Service for Prometheus User Guide*.

The following is a sample Prometheus scrape configuration that includes the Prometheus server scrape interval and the container metrics required for split cost allocation data. If you have short-lived pods, the recommendation is to lower the default scrape interval from 30 seconds to 15 seconds. Note that this can increase memory usage on the Prometheus server.

```
global:
   scrape_interval: 30s
   #external_labels:
     #clusterArn: <REPLACE_ME>
scrape_configs:
  - job_name: kubernetes-nodes-cadvisor
    scrape_interval: 30s
    scrape_timeout: 10s
    scheme: https
    authorization:
      type: Bearer
      credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
    - role: node
    relabel_configs:
    - regex: (.+)
      replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
      source_labels:
      - __meta_kubernetes_node_name
      target_label: __metrics_path__
    - replacement: kubernetes.default.svc:443
      target_label: __address__
    metric_relabel_configs:
    - source_labels: [__name__]
      regex: 'container_cpu_usage_seconds_total|container_memory_working_set_bytes'
      action: keep
```

If you followed [Set up ingestion from a new Prometheus server using Helm](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-ingest-metrics-new-Prometheus.html) in the *Amazon Managed Service for Prometheus User Guide*, then you can update your scrape configuration.

**To update your scrape configuration**

1. Edit `my_prometheus_values_yaml` from the guide and include the sample scrape config in the `server` block.

1. Run the following command, using `prometheus-chart-name` and `prometheus-namespace` from the *Amazon Managed Service for Prometheus User Guide*.

```
helm upgrade prometheus-chart-name prometheus-community/prometheus -n prometheus-namespace -f my_prometheus_values_yaml
```

To learn more about `scrape_interval` or how to use a non-global `scrape_interval`, refer to [Prometheus scrape configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config).

Alternatively, you can use the AWS Distro for OpenTelemetry collector that has a Prometheus Receiver, a Prometheus Remote Write Exporter, and the AWS Sigv4 Authentication Extension to achieve remote write access to Amazon Managed Service for Prometheus.

**Note**  
Unlike with AWS managed collectors, once you have set up your own Prometheus agent, you are responsible for keeping the agent up to date and running so that it continues to collect metrics.

## Estimating your Amazon Managed Service for Prometheus costs


You can use AWS Pricing Calculator to estimate the cost of using Amazon Managed Service for Prometheus for split cost allocation data.

**To configure Amazon Managed Service for Prometheus for your estimate**

1. Open AWS Pricing Calculator at [https://calculator.aws/#/](https://calculator.aws/#/).

1. Choose **Create estimate**.

1. On the **Add service** page, enter **Amazon Managed Service for Prometheus** in the search field, and then choose **Configure**.

1. In the **Description** field, enter a description for your estimate.

1. Choose a **Region**.

1. Select **Calculate the cost using your infrastructure details**. This option allows you to estimate your ingestion, storage, and query sample costs based on your current or proposed infrastructure setup.

1. For **Number of EC2 instances**, enter the total number of EC2 instances across all your clusters for your entire consolidated billing family (including all accounts and Regions). If you use AWS Fargate, use the number of Fargate tasks as a proxy for your EC2 instance count.

1. Split cost allocation data requires two metrics: `container_cpu_usage_seconds_total` and `container_memory_working_set_bytes`. For **Prometheus metrics per EC2 instances**, enter 2.

1. Split cost allocation data suggests a scrape interval of 15 seconds. For **Metric collection interval (in seconds)**, enter 15. If you used a different interval (for example, 30 seconds), change this to the interval you set up.

1. Split cost allocation data does not impose specific requirements for the other parameters, so enter values for the remaining inputs based on your business requirements.

1. Choose **Save and add service**.
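As a rough cross-check of the ingestion volume behind such an estimate, samples ingested per month are approximately instances × metrics per instance × (seconds per month ÷ scrape interval). The input numbers below are illustrative assumptions, not AWS figures:

```shell
# Back-of-the-envelope estimate of Prometheus samples ingested per month.
INSTANCES=100            # assumed EC2 instances across the billing family
METRICS_PER_INSTANCE=2   # the two metrics split cost allocation data needs
INTERVAL=15              # assumed scrape interval, in seconds
SECONDS_PER_MONTH=$((730 * 3600))   # ~730 hours in a month

SAMPLES=$((INSTANCES * METRICS_PER_INSTANCE * SECONDS_PER_MONTH / INTERVAL))
echo "estimated samples per month: $SAMPLES"   # prints 35040000 for these inputs
```

Lowering the scrape interval from 30 to 15 seconds doubles this sample count, which is why the interval you enter in the calculator matters.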

# Using split cost allocation data with Amazon CloudWatch Container Insights


Splitting the cost data for Amazon EKS requires that you collect and store metrics from your clusters, including memory and CPU usage. You can use Amazon CloudWatch Container Insights for this purpose.

Once you've opted in to split cost allocation data and set up the CloudWatch agent with EKS observability add-on on your EKS cluster, split cost allocation data starts receiving the two required metrics (`pod_cpu_usage_total` and `pod_memory_working_set`) in the `ContainerInsights` namespace and uses them automatically. To view the full set of container metrics for EKS, see [Amazon EKS and Kubernetes Container Insights metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-EKS.html) in the *Amazon CloudWatch User Guide*.

The following sections describe how to send the correct metrics from your EKS cluster to split cost allocation data.

## Prerequisites


As prerequisites for using Amazon CloudWatch Container Insights with split cost allocation data:
+ You need to enable split cost allocation data in the AWS Billing and Cost Management console. For details, see [Enabling split cost allocation data](https://docs.aws.amazon.com/cur/latest/userguide/enabling-split-cost-allocation-data.html).
+ You need an EKS cluster for which you want to track split cost allocation data. This can be an existing cluster, or you can create a new one. For more information, see [Create an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) in the *Amazon EKS User Guide*.

## Setting up Amazon CloudWatch Container Insights to forward EKS metrics


You need to set up and configure the CloudWatch agent in order to forward EKS metrics. You can use either the [Amazon CloudWatch Observability EKS add-on or the Amazon CloudWatch Observability Helm chart](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Observability-EKS-addon.html) to install the CloudWatch agent and the Fluent-bit agent on an EKS cluster. For more information on how to install and set up the CloudWatch agent, see [Install the Amazon CloudWatch Observability EKS add-on](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-EKS-addon.html) in the *Amazon CloudWatch User Guide*.

The following are the minimum versions required for the CloudWatch agent and EKS add-on:
+ CloudWatch agent version: v1.300045.0
+ CloudWatch Observability EKS add-on version: v2.0.1-eksbuild.1

## Estimating your Amazon CloudWatch costs


Enabling the feature to use Amazon CloudWatch Container Insights with split cost allocation data adds two new metrics to Amazon CloudWatch Container Insights: `pod_cpu_usage_total` and `pod_memory_working_set`. For details on these metrics, see [Amazon EKS and Kubernetes Container Insights metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-EKS.html) in the *Amazon CloudWatch User Guide*.

**To understand the costs associated with the feature**

1. Open Amazon CloudWatch Pricing at [https://aws.amazon.com/cloudwatch/pricing/](https://aws.amazon.com/cloudwatch/pricing/).

1. Navigate to the **Paid tier** section.

1. Choose the **Container Insights** tab.

1. For a detailed calculation of the costs, navigate to the **Pricing examples** section, and refer to **Example 13 - Container Insights for Amazon EKS and Kubernetes**.