

# Tagging use cases
<a name="tagging-use-cases"></a>

**Topics**
+ [Tags for cost allocation and financial management](tags-for-cost-allocation-and-financial-management.md)
+ [Tags for operations and support](tags-for-operations-and-support.md)
+ [Tags for data security, risk management, and access control](tags-for-data-security-risk-management-and-access-control.md)

# Tags for cost allocation and financial management
<a name="tags-for-cost-allocation-and-financial-management"></a>

 One of the first tagging use cases organizations often tackle is visibility and management of cost and usage. There are usually a few reasons for this: 
+  **It's typically a well-understood scenario, and the requirements are well known.** For example, finance teams want to see the total cost of workloads and infrastructure that span multiple services, features, accounts, or teams. One way to achieve this cost visibility is through consistent tagging of resources. 
+  **Tags and their values are clearly defined.** Usually, cost allocation mechanisms already exist in an organization’s finance systems, for example, tracking by cost center, business unit, team, or organization function. 
+  **Rapid, demonstrable return on investment.** It’s possible to track cost optimization trends over time when resources are tagged consistently, for example, for resources that were rightsized, auto-scaled, or put on a schedule. 

 Understanding how you incur costs in AWS allows you to make informed financial decisions. Knowing where costs are incurred at the resource, workload, team, or organization level helps you compare the value delivered at that level against the business outcomes achieved. 

 The engineering teams might not have experience with financial management of their resources. Attaching a person with specialized skills in AWS financial management, who can train engineering and development teams on the basics, helps build a relationship between finance and engineering and fosters a FinOps culture. This in turn achieves measurable outcomes for the business and encourages teams to build with cost in mind. Establishing good financial practices is covered in depth by the [Cost Optimization Pillar](https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/welcome.html) of the Well-Architected Framework, but we will touch on a few of the fundamental principles in this whitepaper. 

# Cost allocation tags
<a name="cost-allocation-tags"></a>

 Cost allocation refers to the assignment or distribution of incurred costs to the users or beneficiaries of those costs following a defined process. In the context of this whitepaper, we divide cost allocation into two types: *showback* and *chargeback*. 

 *Showback* is the presentation, calculation, and reporting of charges incurred by a specific entity, such as a business unit, application, user, or cost center; its tools and mechanisms help increase cost awareness. For example: “the infrastructure engineering team was responsible for \$1X of AWS spend last month”. *Chargeback* is the actual charging of incurred costs to those entities through an organization’s internal accounting processes, such as financial systems or journal vouchers; it helps with cost recovery and further drives cost awareness. For example: “\$1X was deducted from the infrastructure engineering team's AWS budget.” In both cases, tagging resources appropriately helps allocate cost to an entity; the only difference is whether someone is expected to make a payment. 
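A showback report of this kind reduces to grouping tagged spend by entity. The sketch below uses hypothetical line items and a hypothetical cost-center tag; it only illustrates the grouping logic, not any AWS API.

```python
from collections import defaultdict

# Hypothetical monthly line items: (account_id, cost_center_tag, unblended_cost_usd).
line_items = [
    ("111111111111", "infra-eng", 1200.50),
    ("111111111111", "data-platform", 310.00),
    ("222222222222", "infra-eng", 845.25),
    ("222222222222", None, 99.99),  # untagged spend
]

def showback_by_cost_center(items):
    """Sum spend per cost center; untagged spend falls into an 'unallocated' bucket."""
    totals = defaultdict(float)
    for _account, cost_center, cost in items:
        totals[cost_center or "unallocated"] += cost
    return dict(totals)

report = showback_by_cost_center(line_items)
print(report)
```

A chargeback process would take the same per-entity totals and feed them into the organization's internal accounting systems.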

 Your organization's financial governance might require transparent accounting of costs incurred at the application, business unit, cost center, and team level. Performing cost attribution supported by [Cost Allocation Tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) provides the data necessary to accurately attribute the costs incurred by an entity from appropriately tagged resources, and supports: 
+  **Accountability** — Ensure that cost is allocated to those who are responsible for resource usage. A single point of service or group can be accountable for spend reviews and reporting. 
+  **Financial transparency** — Show a clear view into cash allocations towards IT by creating effective dashboards and meaningful cost analysis for leadership. 
+  **Informed IT investments** — Track ROI based on project, application, or business line, and empower teams to make better business decisions, for example, allocate more funding to revenue generating applications. 

 In summary, cost allocation tags can help to tell you: 
+  Who owns the spend and is responsible for optimizing it? 
+  What workload, application, or product is incurring the spend? Which environment or stage? 
+  What spend areas are growing fastest? 
+  How much spend can be deducted from an AWS budget based on past trends? 
+  What was the impact of cost optimization efforts within particular workloads, applications, or products? 

 Activating resource tags for cost allocation helps define measurement practices within the organization that provide visibility of AWS usage and increase transparency and accountability for spend. It also helps you set an appropriate level of granularity in cost and usage visibility, and influence cloud consumption behaviors through cost allocation reporting and KPI tracking. 

# Building a cost allocation strategy
<a name="building-a-cost-allocation-strategy"></a>

## Defining and implementing a cost allocation model
<a name="defining-and-implementing-a-cost-allocation-model"></a>

Create an account and cost structure for the resources being deployed in AWS. Establish the relationship between costs from AWS spend, how those costs were incurred, and who or what incurred them. Common cost structures are based on AWS Organizations, AWS accounts, environments, and entities within your organization, such as a line of business or workload. Cost structures can be based on multiple attributes to permit examining costs in different ways or at different levels of granularity, such as rolling up the costs of individual workloads to the line of business they serve.

 When choosing a cost structure that aligns with the desired outcomes, evaluate the cost allocation mechanisms on the *ease of implementation* versus *desired accuracy*. This might include considerations in regards to accountability, tooling availability, and cultural changes. Three popular cost allocation models that AWS customers usually start from are: 
+  **Account-based** — This model requires the least amount of effort and provides high accuracy for showbacks and chargebacks, and is suitable for organizations that have a defined account structure (and is consistent with the recommendations of the [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html) whitepaper). This provides clear cost visibility on a per-account basis. For cost visibility and allocation, you can use [AWS Cost Explorer](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-what-is.html), [Cost and Usage Reports](https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html), as well as [AWS Budgets](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-managing-costs.html) for cost monitoring and tracking. These tools provide filtering and grouping options by AWS accounts. From a cost allocation perspective, this model doesn’t have to rely on accurate tagging of individual resources. 
+  **Business Unit or Team-based** — Costs are allocated to teams, business units, or organizations within an enterprise. This model requires a moderate amount of effort, provides high accuracy for showbacks and chargebacks, and is suitable for organizations that have a defined account structure (typically using AWS Organizations) with separation between various teams, applications, and workload types. This provides clear cost visibility across teams and applications and, as an additional benefit, reduces the risk of hitting [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) within a single AWS account. For example, each team may have five accounts (`prod`, `staging`, `test`, `dev`, `sandbox`), and no two teams or applications will share the same account. With such a structure, [AWS Cost Categories](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html) provides the functionality to group accounts or other tags (“meta-tagging”) into categories, which can be tracked in the tools mentioned in the previous example. It’s important to note that AWS Organizations allows tagging of accounts and organizational units (OUs); however, these tags are not applicable for cost allocation and billing reporting (that is, you cannot group or filter your cost in AWS Cost Explorer by OU). Use AWS Cost Categories for this purpose. 
+  **Tag-based** — This model requires more effort compared to the previous two and provides high accuracy for showbacks and chargebacks, depending on the requirements and end goal. While we strongly recommend that you adopt the practices outlined in the [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html) whitepaper, realistically customers often find themselves with mixed and complex account structures that take time to migrate away from. Implementing a rigorous and effective tagging strategy is key in this scenario, followed by [activating relevant tags for cost allocation](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html) in the Billing and Cost Management console (in AWS Organizations, tags can be activated for cost allocation only from the management, or payer, account). After tags are activated for cost allocation, the cost visibility and allocation tools mentioned in the previous methods can be used for showbacks and chargebacks. Note that cost allocation tags are not retrospective, and only appear in billing reporting and cost tracking tools after they have been activated. 

 To summarize, if you need to track costs by business unit, you can use [AWS Cost Categories](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-cost-categories.html) to group linked accounts within AWS Organization accordingly and view this grouping in billing reports. When you create separate accounts for production and non-production environments, you can also filter the costs related to environments in tools such as [AWS Cost Explorer](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-what-is.html), or track those costs using [AWS Budgets](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-managing-costs.html). Finally, if your use case requires more granular cost tracking, for example, by individual workloads or applications, you can tag resources within those accounts accordingly, [activate those tag keys for cost allocation](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html) on the management account, and then filter that cost by tag keys in the billing reporting tools. 
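In CUR data, each activated cost allocation tag appears as its own `resourceTags/user:<TagKey>` column, so granular tag-based tracking reduces to grouping rows by that column. A minimal sketch over a simplified, hypothetical CUR-style extract:

```python
import csv
import io
from collections import defaultdict

# Simplified CUR-style extract. Activated tag keys appear as
# resourceTags/user:<TagKey> columns; account IDs, costs, and
# cost-center values here are hypothetical.
cur_csv = """lineItem/UsageAccountId,lineItem/UnblendedCost,resourceTags/user:CostCenter
111111111111,10.00,123
111111111111,2.50,456
222222222222,7.25,123
222222222222,1.00,
"""

totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(cur_csv)):
    # An empty tag column means the resource was untagged (or the
    # tag was not yet activated when the usage occurred).
    key = row["resourceTags/user:CostCenter"] or "untagged"
    totals[key] += float(row["lineItem/UnblendedCost"])

print(dict(totals))
```

The size of the `untagged` bucket over time is also a useful measure of how well the tagging strategy is being followed.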

## Establishing cost reporting and monitoring processes
<a name="establing-cost-reporting-and-monitoring-processes"></a>

 Start by identifying the types of costs that are important for internal stakeholders (for example, daily spend, cost by account, cost by X, amortized costs). By doing so, you can mitigate budgetary risks associated with unexpected or anomalous spend faster than by waiting for the finalized AWS invoice. Tags provide the attribution that enables these reporting scenarios. Insights gained from reporting can inform your actions to mitigate the impact of anomalous and unexpected spend on financial budgets. When there is an unexpected surge in costs, it's important to evaluate whether there has been a corresponding surge in the value delivered, so that you can determine if and what action is required. 

 When developing a tagging strategy to support cost allocation, keep in mind the following elements: 
+  **AWS Organizations** - Cost allocation across multiple accounts can be performed by account, by groups of accounts, or by groups of tags created for resources in those accounts. Tags created for resources residing in individual accounts in AWS Organizations can be used for cost allocation only from the management account. 
+  **AWS Account** - Cost allocation within one AWS account can be performed by additional dimensions such as services or Regions. It’s possible to further tag resources within an account and work with groups of such resource tags. 
+  **Cost Allocation Tags** - Both user-created tags and AWS-generated tags can be activated for cost allocation, if necessary. Enabling tags for cost allocation in the billing console (of the management account in AWS Organizations) helps with showbacks and chargebacks. 
+  **Cost Categories** - AWS Cost Categories allows grouping accounts and grouping tags (“meta-tagging”) within an AWS Organization, which further provides the capability to analyze the cost related to these categories through tools such as AWS Cost Explorer, AWS Budgets, and AWS Cost and Usage Report. 

## Performing showback and chargeback for business units, teams, or organizations within the enterprise
<a name="performing-showback-and-chargeback-for-business-units"></a>

 Attribute costs using your cost allocation process, supported by your cost structure and cost allocation tags. Tags can be used to provide showback to teams that are not directly responsible for paying costs but are responsible for having incurred them. This approach provides awareness of their contribution to spend and how those costs are incurred. Perform chargeback to the teams that are directly responsible for costs to recover the expense of the resources they have consumed, and to provide them with awareness of those costs and how they were incurred. 

## Measuring and circulating efficiency or value KPIs
<a name="measuring-and-circulating-kpis"></a>

 Agree on a set of unit cost or KPI metrics to measure the impact of your cloud financial management investments. This exercise creates a common language across technology and business stakeholders, and tells an efficiency-based story, rather than a story focused solely on absolute, aggregate spend. For additional information, see this blog post on [how unit metrics can help create alignment between business functions](https://aws.amazon.com/blogs/aws-cloud-financial-management/unit-metrics-help-create-alignment-between-business-functions/). 

## Allocating unallocatable spend
<a name="allocating-unallocatable-spend"></a>

 Depending on the organization’s accounting practices, different charge types might require different treatment. Identify the resources or cost categories that cannot be tagged. Depending on the services used and those planned to be used, agree on mechanisms for how to treat and measure such unallocatable spend. For reference, check the list of resources that are supported by [AWS Resource Groups and Tag Editor](https://docs.aws.amazon.com/ARG/latest/userguide/supported-resources.html) in the *AWS Resource Groups and Tags User Guide*. 

 A common example of a cost category that cannot be tagged is fees for commitment-based discounts such as Reserved Instances (RIs) and Savings Plans (SPs). While subscription fees and unused SP and RI fees cannot be tagged in advance of appearing in billing reporting tools, you can track how RI and SP discounts apply to accounts, resources, and their tags in AWS Organizations after the fact. For example, in AWS Cost Explorer it’s possible to look at the amortized cost, group that spend by the relevant tag keys, and apply filters relevant to your use case. In AWS Cost and Usage Report (CUR), you can filter out lines that correspond to usage covered by RI and SP discounts (read more in the use cases section of the [CUR documentation](https://docs.aws.amazon.com/cur/latest/userguide/use-cases.html)) and select only the columns relevant to you. Each tag key activated for cost allocation is presented in its own column at the end of the CUR report, similar to how it's presented in other legacy billing reports, such as the [monthly cost allocation report](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/configurecostallocreport.html). For additional reference, check the [AWS Well-Architected Labs](https://www.wellarchitectedlabs.com/cost/300_labs/300_cur_queries/) for examples of gaining cost and usage insights from CUR data. 
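One common treatment for such unallocatable spend is to distribute it proportionally to each entity's tagged spend. This is only one possible model (your finance team may prefer even splits or other drivers), and the entities and amounts below are hypothetical:

```python
def allocate_shared_cost(tagged_spend, shared_cost):
    """Distribute an unallocatable charge (for example, a support fee)
    across entities in proportion to their share of tagged spend.
    One common approach, not the only valid one."""
    total = sum(tagged_spend.values())
    return {
        entity: round(spend + shared_cost * spend / total, 2)
        for entity, spend in tagged_spend.items()
    }

# Hypothetical tagged spend per team, plus $50 of untaggable shared cost.
tagged = {"team-a": 600.0, "team-b": 300.0, "team-c": 100.0}
print(allocate_shared_cost(tagged, 50.0))
```

Because team-a incurred 60% of the tagged spend, it absorbs 60% of the shared charge, and so on for the other teams.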

## Reporting
<a name="reporting-charges"></a>

 In addition to the AWS tools available to assist with showbacks and chargebacks, there is a range of other AWS-created and third-party solutions that can help monitor the cost of tagged resources and measure the effectiveness of the tagging strategy. Depending on both the requirements and the end objective of the organization, one could either invest time and resources into building customized solutions or purchase tools provided by one of the [AWS Cloud Management Tools Competency Partners](https://aws.amazon.com/products/management-tools/partner-solutions/?partner-solutions-cards.sort-by=item.additionalFields.partnerNameLower&partner-solutions-cards.sort-order=asc&awsf.partner-solutions-filter-partner-type=%2Aall&awsf.Filter%20Name%3A%20partner-solutions-filter-partner-use-case=%2Aall&awsf.partner-solutions-filter-partner-location=%2Aall). If you decide to create your own *single source of truth* cost allocation tool with controlled parameters relevant for the business, the AWS Cost and Usage Report (CUR) provides the most detailed cost and usage data and enables the creation of customized optimization dashboards, allowing filtering and grouping by accounts, services, cost categories, cost allocation tags, and multiple other dimensions. Among the CUR-based solutions developed by AWS, check [Cloud Intelligence Dashboards](https://www.wellarchitectedlabs.com/cost/200_labs/200_cloud_intelligence/) on the AWS Well-Architected Labs website. 

# Tags for operations and support
<a name="tags-for-operations-and-support"></a>

 An AWS environment will have multiple accounts, resources, and workloads with differing operational requirements. Tags can be used to provide context and guidance that help operations teams better manage your services. Tags can also provide transparency into the operational governance of managed resources. 

 Some of the main factors driving consistent definition of operational tags are: 
+  **To filter resources during automated infrastructure activities.** For example, when deploying, updating, or deleting resources. Another example is scaling resources for cost optimization and out-of-hours usage reduction. See the [AWS Instance Scheduler](https://aws.amazon.com/solutions/implementations/instance-scheduler/) solution for a working example. 
+  **Identifying isolated or deprecated resources.** Resources that have exceeded their defined lifespan or have been flagged for isolation by internal mechanisms should be appropriately tagged to assist support personnel in their investigation. Deprecated resources should be tagged before isolation, archival, and deletion. 
+  **Support requirements for a group of resources.** Resources often have different support requirements; for example, these requirements could be negotiated between teams or set as part of an application's criticality. Further guidance on operating models can be found in the [Operational Excellence Pillar](https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/operating-model.html). 
+  **Enhance the incident management process.** By tagging resources with tags that offer greater transparency in the incident management process, support teams, engineers, and Major Incident Management (MIM) teams can more effectively manage events. 
+  **Backups.** Tags can also be used to identify how frequently your resources need to be backed up, where the backup copies need to go, and where to restore the backups. See the [prescriptive guidance for backup and recovery approaches on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/welcome.html). 
+  **Patching.** Patching mutable instances running in AWS is crucial both for your overarching patching strategy and for patching zero-day vulnerabilities. Deeper guidance on the wider patching strategy can be found in the [prescriptive guidance](https://docs.aws.amazon.com/prescriptive-guidance/latest/patch-management-hybrid-cloud/welcome.html). Patching of zero-day vulnerabilities is discussed in this [blog post](https://aws.amazon.com/blogs/mt/avoid-zero-day-vulnerabilities-same-day-security-patching-aws-systems-manager/). 
+  **Operational observability**. Translating an operational KPI strategy into resource tags helps operations teams better track whether the targets that support business requirements are being met. Developing a KPI strategy is a separate topic, but it tends to focus either on a business operating in a steady state or on measuring the impact and outcomes of change. The [KPI Dashboards](https://wellarchitectedlabs.com/cost/200_labs/200_cloud_intelligence/cost-usage-report-dashboards/dashboards/2c_kpi_dashboard/) (AWS Well-Architected Labs) and the Operations KPI Workshop (an [AWS Enterprise Support proactive service](https://aws.amazon.com/premiumsupport/technology-and-programs/proactive-services/)) both address measuring performance in a steady state. The AWS Enterprise Strategy blog article [Measuring the Success of Your Transformation](https://aws.amazon.com/blogs/enterprise-strategy/measuring-the-success-of-your-transformation/) explores KPI measurement for a transformation program, such as IT modernization or migrating from on premises to AWS. 

# Automated infrastructure activities
<a name="automated-infrastructure-activities"></a>

 Tags can be used in a wide range of automation activities when managing infrastructure. [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/index.html), for example, allows you to run automations and runbooks against resources specified by the key-value pairs you define. For managed nodes, you could define a set of tags to track or target nodes by operating system and environment. You could then run an update script for all nodes in a group, or review the status of those nodes. [Systems Manager resources](https://docs.aws.amazon.com/systems-manager/latest/userguide/taggable-resources.html) can also be tagged to further refine and track your automated activities. 
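The targeting logic behind this pattern is simple tag matching. A minimal sketch, with a hypothetical node inventory and tag keys (not Systems Manager's actual API shapes):

```python
# Hypothetical inventory of managed nodes and their tags.
nodes = [
    {"id": "i-0aaa", "tags": {"os": "amazon-linux-2", "environment": "dev"}},
    {"id": "i-0bbb", "tags": {"os": "windows-2019", "environment": "dev"}},
    {"id": "i-0ccc", "tags": {"os": "amazon-linux-2", "environment": "prod"}},
]

def target_nodes(inventory, **required_tags):
    """Return IDs of nodes whose tags match every required key-value pair."""
    return [
        node["id"]
        for node in inventory
        if all(node["tags"].get(k) == v for k, v in required_tags.items())
    ]

# Select only the development Amazon Linux 2 nodes for an update run.
print(target_nodes(nodes, os="amazon-linux-2", environment="dev"))
```

In practice, Systems Manager applies an equivalent tag filter server-side when you target an automation at resources by tag.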

 Automating the start and stop lifecycle of environment resources can provide a significant cost reduction to any organization. [Instance Scheduler on AWS](https://aws.amazon.com/solutions/implementations/instance-scheduler/) is an example of a solution that can start and stop Amazon EC2 and Amazon RDS instances when they are not required. For example, developer environments with Amazon EC2 or Amazon RDS instances that don't need to run on weekends miss the cost savings that shutting those instances down can provide. By analyzing the needs of teams and their environments, and properly tagging these resources to automate their management, you can use your budget effectively. 

 *An example schedule tag used by instance scheduler on an Amazon EC2 instance:* 

```
{
    "Tags": [
        {
            "Key": "Schedule",
            "ResourceId": "i-1234567890abcdef8",
            "ResourceType": "instance",
            "Value": "mon-9am-fri-5pm"
        }
    ]
}
```

# Workload lifecycle
<a name="workload-lifecycle"></a>

**Review accuracy of supporting operational data.** Make sure that there are periodic reviews of the tags associated with your workload lifecycle, and that the appropriate stakeholders are involved in these reviews. 

 *Table 7 – Review operational tags as part of the workload lifecycle* 


|  Use Case  |  Tag Key  |  Rationale  |  Example Values  | 
| --- | --- | --- | --- | 
|  Account Owner  | example-inc:account-owner:owner  |  The owner of the account and its contained resources.  | ops-center, dev-ops, app-team  | 
|  Account Owner Review  | example-inc:account-owner:review  |  Confirms that account ownership details are up to date and correct.  | <review date in the correct format defined in your tagging library>  | 
|  Data Owner  | example-inc:data-owner:owner  |  The owner of the data residing in the account.  | bi-team, logistics, security  | 
|  Data Owner Review  | example-inc:data-owner:review  |  Confirms that data ownership details are up to date and correct.  | <review date in the correct format defined in your tagging library>  | 

## Assigning tags to accounts being suspended before migrating them to the suspended OU
<a name="assigning-tags-to-suspending-accounts"></a>

 Before suspending an account and moving it into the suspended OU, as detailed in the [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html) whitepaper, tags should be added to the account to aid your internal tracking and auditing of the account’s lifecycle. For example, a relative URL or ticket reference in an organization’s ITSM ticketing system that shows the audit trail for an application being suspended. 

 *Table 8 - Add operational tags when workload lifecycle enters new stage* 


|  Use Case  |  Tag Key  |  Rationale  |  Example Values  | 
| --- | --- | --- | --- | 
|  Account Owner  | example-inc:account-owner:owner  |  The owner of the account and its contained resources.  | ops-center, dev-ops, app-team  | 
|  Data Owner  | example-inc:data-owner:owner  |  The owner of the data residing in the account.  | bi-team, logistics, security  | 
|  Suspended Date  | example-inc:suspension:date  |  The date that the account was suspended.  |  <suspended date in the correct format defined in your tagging library>  | 
|  Approval for suspension  | example-inc:suspension:approval  |  The link to the approval of the account suspension.  | workload/deprecation  | 

# Incident management
<a name="incident-management"></a>

 Tags can play a vital part in all phases of incident management, from incident logging and prioritization through investigation, communication, resolution, and closure. 

 Tags can detail where an incident should be logged, the team or teams that should be informed of the incident, and the defined escalation priority. It’s important to remember that tags are not encrypted, so consider what information you store in them. Also, responsibilities change as organizations, teams, and reporting lines evolve, so consider storing a link to a secure portal where this information can be managed more effectively. These tags don’t need to be exclusive. For instance, the application ID could be used to look up the escalation paths in an IT service management portal. Make sure it is clear in your operational definitions that such a tag is being used for multiple purposes. 

 Operational requirement tags can be detailed as well, to help incident managers and operations personnel further refine their objectives in response to an incident or event. 

 Relative links (to the knowledge system base URL) for [runbooks](https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.concept.runbook.en.html) and [playbooks](https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.concept.playbook.en.html) can be included as tags to assist the responding teams in identifying the corresponding processes, procedures, and documentation. 

 *Table 9 - Use operational tags to inform incident management* 


|  Use Case  |  Tag Key  |  Rationale  |  Example Values  | 
| --- | --- | --- | --- | 
|  Incident Management  | example-inc:incident-management:escalationlog  |  The system in use by the supporting team to log incidents.  | jira, servicenow, zendesk  | 
|  Incident Management  | example-inc:incident-management:escalationpath  |  Path of escalation.  | ops-center, dev-ops, app-team  | 
|  Cost Allocation and Incident Management  | example-inc:cost-allocation:CostCenter |  Monitor costs by cost center. This is an example of a dual-use tag, where the cost center is also used as an application code for incident logging.  | 123-\$1  | 
|  Backup Schedule  | example-inc:backup:schedule  |  Backup schedule of the resource.  | Daily  | 
|  Playbook / Incident Management  | example-inc:incident-management:playbook  |  Documented playbook.  | webapp/incident/playbook  | 
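The routing described above can be sketched as a lookup over a resource's tags. The tag keys below come from Table 9; the knowledge-base URL and fallback values are hypothetical assumptions for illustration:

```python
# Hypothetical escalation tags read from an affected resource (keys from Table 9).
resource_tags = {
    "example-inc:incident-management:escalationlog": "servicenow",
    "example-inc:incident-management:escalationpath": "ops-center",
    "example-inc:incident-management:playbook": "webapp/incident/playbook",
}

# Assumed internal knowledge-base root; the playbook tag holds a relative path.
KB_BASE_URL = "https://kb.example-inc.internal/"

def route_incident(tags):
    """Decide where to log the incident, who to notify, and which playbook applies."""
    return {
        "log_to": tags.get("example-inc:incident-management:escalationlog", "default-itsm"),
        "notify": tags.get("example-inc:incident-management:escalationpath", "ops-center"),
        "playbook_url": KB_BASE_URL + tags["example-inc:incident-management:playbook"],
    }

print(route_incident(resource_tags))
```

Keeping only a relative path in the tag (and the base URL elsewhere) also avoids re-tagging every resource if the knowledge-base host changes.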

# Patching
<a name="patching"></a>

 Organizations can automate their patching strategy for mutable compute environments, and keep mutable instances in line with the defined patch baseline of each application environment, by using AWS Systems Manager Patch Manager and AWS Lambda. A tagging strategy for mutable instances within these environments can be managed by assigning those instances to **Patch Groups** and **Maintenance Windows**. See the following examples for a Dev → Test → Prod split. AWS prescriptive guidance is available for the [patch management of mutable instances](https://docs.aws.amazon.com/prescriptive-guidance/latest/patch-management-hybrid-cloud/welcome.html). 

 *Table 10 - Operational tags can be environment specific* 


|  Development  |  Staging  |  Production  | 
| --- | --- | --- | 
|  <pre>{<br />"Tags": [<br />{<br />"Key": "Maintenance Window",<br />"ResourceId": "i-012345678ab9ab111",<br />"ResourceType": "instance",<br />"Value": "cron(30 23 ? * TUE#1 *)"<br />},<br />{<br />"Key": "Name",<br />"ResourceId": "i-012345678ab9ab222",<br />"ResourceType": "instance",<br />"Value": "WEBAPP"<br />},<br />{<br />"Key": "Patch Group",<br />"ResourceId": "i-012345678ab9ab333",<br />"ResourceType": "instance",<br />"Value": "WEBAPP-DEV-AL2"<br />}<br />]<br />}<br /></pre>  |  <pre>{<br />"Tags": [<br />{<br />"Key": "Maintenance Window",<br />"ResourceId": "i-012345678ab9ab444",<br />"ResourceType": "instance",<br />"Value": "cron(30 23 ? * TUE#2 *)"<br />},<br />{<br />"Key": "Name",<br />"ResourceId": "i-012345678ab9ab555",<br />"ResourceType": "instance",<br />"Value": "WEBAPP"<br />},<br />{<br />"Key": "Patch Group",<br />"ResourceId": "i-012345678ab9ab666",<br />"ResourceType": "instance",<br />"Value": "WEBAPP-TEST-AL2"<br />}<br />]<br />}<br /></pre>  |  <pre>{<br />"Tags": [<br />{<br />"Key": "Maintenance Window",<br />"ResourceId": "i-012345678ab9ab777",<br />"ResourceType": "instance",<br />"Value": "cron(30 23 ? * TUE#3 *)"<br />},<br />{<br />"Key": "Name",<br />"ResourceId": "i-012345678ab9ab888",<br />"ResourceType": "instance",<br />"Value": "WEBAPP"<br />},<br />{<br />"Key": "Patch Group",<br />"ResourceId": "i-012345678ab9ab999",<br />"ResourceType": "instance",<br />"Value": "WEBAPP-PROD-AL2"<br />}<br />]<br />}<br /></pre>  | 

 Zero-day vulnerabilities can also be managed by having tags defined to complement your patching strategy. Refer to [Avoid zero-day vulnerabilities with same-day security patching using AWS Systems Manager](https://aws.amazon.com/blogs/mt/avoid-zero-day-vulnerabilities-same-day-security-patching-aws-systems-manager/) for detailed guidance. 

# Operational observability
<a name="operational-observability"></a>

 Observability is required to gain actionable insights into the performance of your environments and to help you detect and investigate problems. It also allows you to define and measure key performance indicators (KPIs) and service level objectives (SLOs), such as uptime. For most organizations, important operations KPIs are mean time to detect (MTTD) and mean time to recover (MTTR) from an incident. 

In observability, context is important: as data is collected, its associated tags are gathered with it, so regardless of the service, application, or application tier you are focusing on, you can filter and analyze that specific dataset. Tags can also be used to automate onboarding to CloudWatch alarms so that the right teams are alerted when metric thresholds are breached. For example, the presence of a tag key `example-inc:ops:alarm-tag` and its value could drive the creation of a CloudWatch alarm. A solution demonstrating this is described in [Use tags to create and maintain Amazon CloudWatch alarms for Amazon EC2 instances](https://aws.amazon.com/blogs/mt/use-tags-to-create-and-maintain-amazon-cloudwatch-alarms-for-amazon-ec2-instances-part-1/).
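
The onboarding logic can be sketched as a pure function over instance tags. In this sketch, the convention that the tag's value carries the desired CPU threshold, and the alarm-name format, are illustrative assumptions; the linked blog post defines its own schema:

```python
# Decide which instances need a CloudWatch alarm based on a tag.
# The "value = CPU threshold percentage" convention is an illustrative
# assumption, not part of the linked solution's contract.

ALARM_TAG = "example-inc:ops:alarm-tag"

def alarms_to_create(instances: list[dict]) -> list[dict]:
    """Return alarm specs for instances carrying the alarm tag."""
    specs = []
    for inst in instances:
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        if ALARM_TAG in tags:
            specs.append({
                "AlarmName": f"cpu-high-{inst['InstanceId']}",
                "MetricName": "CPUUtilization",
                "Threshold": float(tags[ALARM_TAG]),
            })
    return specs

fleet = [
    {"InstanceId": "i-0aaa", "Tags": [{"Key": ALARM_TAG, "Value": "80"}]},
    {"InstanceId": "i-0bbb", "Tags": []},  # untagged: no alarm created
]
specs = alarms_to_create(fleet)
```

Each resulting spec could then be passed to the CloudWatch PutMetricAlarm API; untagged instances are simply skipped, which is what makes the tag the onboarding switch.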

 Having too many alarms configured can easily create an alert storm: a large number of alarms or notifications rapidly overwhelms operators and reduces their overall effectiveness as they manually triage and prioritize individual alarms. Tags can provide additional context for alarms, which means that rules can be defined within Amazon EventBridge to help ensure that focus is given to the upstream issue rather than its downstream dependencies. 

 The role of operations alongside DevOps is often overlooked, but for many organizations, central operations teams still provide a critical first response outside of normal business hours. (More details about this model can be found in the [Operational Excellence whitepaper](https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/separated-aeo-ieo-with-cent-gov-and-partner.html).) Unlike the DevOps team that owns the workload, these teams typically do not have the same depth of knowledge, so the context that tags provide within dashboards and alerts can direct them to the correct runbook for the issue, or initiate an automated runbook (refer to the blog post [Automating Amazon CloudWatch Alarms with AWS Systems Manager](https://aws.amazon.com/blogs/mt/automating-amazon-cloudwatch-alarms-with-aws-systems-manager/)). 

# Tags for data security, risk management, and access control
<a name="tags-for-data-security-risk-management-and-access-control"></a>

 Organizations have varying needs and obligations to meet regarding the appropriate handling of data storage and processing. Data classification is an important precursor for several use cases, such as access control, data retention, data analysis, and compliance. 

# Data security and risk management
<a name="data-security-and-risk-management"></a>

Within an AWS environment, you will probably have accounts with differing compliance and security requirements. For example, you might have a developer sandbox, and an account hosting the production environment for a highly regulated workload, such as processing payments. By isolating them into different accounts, you can [apply distinct security controls](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/benefits-of-using-multiple-aws-accounts.html#apply-distinct-security-controls-by-environment), [constrain access to sensitive data](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/benefits-of-using-multiple-aws-accounts.html#constrain-access-to-sensitive-data), and reduce the scope of audit for regulated workloads. 

 Adopting a single standard for all workloads can lead to challenges. While many controls apply equally across an environment, some controls are excessive or irrelevant for accounts that don't need to meet specific regulatory frameworks, and for accounts where no personally identifiable data will ever be present (for example, a developer sandbox, or workload development accounts). This typically leads to false positive security findings that must be triaged and closed with no action, which takes effort away from findings that should be investigated. 

 *Table 11 – Example data security and risk management tags* 


|  Use case  |  Tag Key  |  Rationale  |  Example Values  | 
| --- | --- | --- | --- | 
| Incident management  | example-inc:incident-management:escalationlog |  The system in use by the supporting team to log incidents  |  jira, servicenow, zendesk  | 
| Incident management  | example-inc:incident-management:escalationpath |  Path of escalation  | ops-center, dev-ops, app-team  | 
| Data classification  | example-inc:data:classification |  Classify data for compliance and governance  | Public, Private, Confidential, Restricted  | 
| Compliance  | example-inc:compliance:framework |  Identifies the compliance framework the workload is subject to  | PCI-DSS, HIPAA  | 

 Manually managing different controls across an AWS environment is both time consuming and error prone. The next step is to automate the deployment of appropriate security controls, and configure inspection of resources, based on the classification of that account. By applying tags to the accounts and the resources within them, the deployment of controls can be automated and configured appropriately for the workload. 

**Example:**

 A workload includes an Amazon S3 bucket with the tag `example-inc:data:classification` with the value `Private`. The security tooling automation deploys AWS Config rule `s3-bucket-public-read-prohibited`, which checks the Amazon S3 bucket’s Block Public Access settings, the bucket policy, and the bucket access control list (ACL), confirming the bucket’s configuration is appropriate for its data classification. To ensure the content of the bucket is consistent with the classification, [Amazon Macie can be configured to check for personally identifiable information (PII)](https://aws.amazon.com/about-aws/whats-new/2021/05/amazon-macie-supports-criteria-based-bucket-selection-sensitive-data-discovery-jobs/). The blog [Using Amazon Macie to Validate S3 Bucket Data Classification](https://aws.amazon.com/blogs/architecture/using-amazon-macie-to-validate-s3-bucket-data-classification/) explores this pattern in greater depth. 
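
The selection step in that automation can be reduced to a lookup from the classification tag to the controls to deploy. In this sketch, the classification-to-rule mapping is an illustrative assumption (`s3-bucket-public-read-prohibited` and `s3-bucket-server-side-encryption-enabled` are real AWS Config managed rule names, but which classifications require them is a policy decision for each organization):

```python
# Map a data-classification tag value to the AWS Config managed rules that
# security automation should deploy. The mapping itself is an illustrative
# assumption; each organization defines its own.

RULES_BY_CLASSIFICATION = {
    "Public": [],
    "Private": ["s3-bucket-public-read-prohibited"],
    "Confidential": ["s3-bucket-public-read-prohibited",
                     "s3-bucket-server-side-encryption-enabled"],
}

def required_rules(tags: dict) -> list[str]:
    """Return the Config rules for a bucket, based on its classification tag."""
    classification = tags.get("example-inc:data:classification", "Confidential")
    # Unknown or missing classifications fall back to the strictest rule set,
    # so an untagged bucket is treated as sensitive rather than ignored.
    return RULES_BY_CLASSIFICATION.get(classification,
                                       RULES_BY_CLASSIFICATION["Confidential"])

rules = required_rules({"example-inc:data:classification": "Private"})
```

Failing closed (strictest set by default) is the safer design choice here: a missing tag produces extra findings rather than an unmonitored bucket.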

 Certain regulatory environments, such as insurance and healthcare, might be subject to mandatory data retention policies. Data retention using tags, combined with Amazon S3 Lifecycle policies, can be an effective and simple way to scope object transitions to a different storage tier. Amazon S3 Lifecycle rules can also be used to expire objects for data deletion after the mandatory hold period expires. Refer to [Simplify your data lifecycle by using object tags with Amazon S3 Lifecycle](https://aws.amazon.com/blogs/storage/simplify-your-data-lifecycle-by-using-object-tags-with-amazon-s3-lifecycle/) for an in-depth guide to this process. 
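
A tag-scoped lifecycle rule of the kind described above can be expressed as a small payload. In this sketch, the tag key `example-inc:data:retention`, its value, and the transition/expiration periods are illustrative assumptions; the dict follows the shape accepted by the S3 PutBucketLifecycleConfiguration API (for example, boto3's `put_bucket_lifecycle_configuration`):

```python
# An S3 Lifecycle rule scoped by an object tag: objects carrying the
# (hypothetical) retention tag transition to Glacier after 90 days and
# expire after roughly 7 years. Tag key, value, and periods are
# illustrative assumptions.

lifecycle_rule = {
    "ID": "retain-regulated-records",
    "Status": "Enabled",
    # Only objects with this tag are affected by the rule.
    "Filter": {"Tag": {"Key": "example-inc:data:retention", "Value": "7y"}},
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 7 * 365},
}

lifecycle_config = {"Rules": [lifecycle_rule]}
```

Because the filter is a tag rather than a key prefix, the retention behavior follows the data's classification wherever the object lives in the bucket.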

 Additionally, when triaging or addressing security findings, tags can provide the investigator with important context that helps qualify the risk and aids in engaging the appropriate teams to investigate or mitigate the finding. 

# Tags for identity management and access control
<a name="tags-for-identity-management-and-access-control"></a>

 When managing access control across an AWS environment with AWS IAM Identity Center, tags can enable several patterns for scaling. There are several delegation patterns that can be applied, some of which are based on tagging. We’ll address them individually and provide links to further reading on each. 

# ABAC for individual resources
<a name="abac-for-individual-resources"></a>

 IAM Identity Center users and IAM roles support attribute-based access control (ABAC), which allows you to define access to operations and resources based on tags. ABAC helps reduce the need to update permission policies and lets you base access on employee attributes from your corporate directory. If you are already using a multi-account strategy, ABAC can be used in addition to role-based access control (RBAC) to give multiple teams operating in the same account granular access to different resources. For example, policies for IAM Identity Center users or IAM roles can include conditions that limit access to specific Amazon EC2 instances that would otherwise have to be explicitly listed in each policy. 
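
Such a condition can be expressed by matching a resource tag against a principal tag. The following policy sketch (shown as a Python dict for readability) uses the standard `aws:ResourceTag` and `aws:PrincipalTag` condition keys; the tag key `team` and the chosen actions are illustrative assumptions:

```python
# An identity policy sketch: allow starting/stopping only those EC2
# instances whose "team" tag matches the caller's "team" principal tag.
# The tag key "team" is an illustrative assumption.

abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {
                # Policy variable: resolved at request time per caller.
                "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
            }
        },
    }],
}
```

Because the condition is evaluated at request time, onboarding a new instance to a team requires only tagging it correctly, not editing the policy.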

 Since an ABAC authorization model depends on tags for access to operations and resources, it is important to provide guardrails that prevent unintended access. SCPs can be used to protect tags across your organization by only allowing them to be modified under certain conditions. The blog post [Securing resource tags used for authorization using a service control policy in AWS Organizations](https://aws.amazon.com/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/) and the documentation on [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) provide information on how to implement this. 
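
One way to build such a guardrail is an SCP that denies tag mutation on the authorization tag unless the request comes from a designated role. The following sketch (as a Python dict) is a minimal illustration of that pattern, not the full solution from the linked blog post; the tag key `team` and the role name `TagAdmin` are illustrative assumptions:

```python
# An SCP sketch: deny creating or deleting the authorization tag "team"
# on EC2 resources unless the caller is the (hypothetical) TagAdmin role.

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
        "Resource": "*",
        "Condition": {
            # Fires only when the request touches the protected tag key...
            "ForAnyValue:StringEquals": {"aws:TagKeys": ["team"]},
            # ...and the caller is not the designated tag-admin role.
            "StringNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/TagAdmin"
            },
        },
    }],
}
```

An explicit Deny in an SCP cannot be overridden by account-level permissions, which is what makes it a reliable guardrail for ABAC tags.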

 Where long-lived Amazon EC2 instances are used to support more traditional operations practices, this approach can also be applied; the blog post [Configure IAM Identity Center ABAC for Amazon EC2 instances and Systems Manager Session Manager](https://aws.amazon.com/blogs/security/configure-aws-sso-abac-for-ec2-instances-and-systems-manager-session-manager/) discusses this form of attribute-based access control in more detail. As mentioned earlier, not all resource types support tagging, and of those that do, not all support enforcement using tag policies, so it’s a good idea to evaluate this before implementing this strategy in an AWS account.

To learn about services that support ABAC, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html). 