

# Planning your CloudWatch deployment
<a name="planning-cloudwatch-deployment"></a>

The complexity and scope of a logging and monitoring solution depends on several factors, including:
+ How many environments, Regions, and accounts are used and how this number might increase.
+ The variety and types of your existing workloads and architectures.
+ The compute types and OSs that must be logged and monitored.
+ Whether there are both on-premises locations and AWS infrastructure.
+ The aggregation and analytic requirements of multiple systems and applications.
+ Security requirements that prevent unauthorized exposure of logs and metrics.
+ Products and solutions that must integrate with your logging and monitoring solution to support operational processes.

Regularly review and update your logging and monitoring solution as you deploy new or updated workloads. When issues are observed, identify and apply updates to your logging, monitoring, and alarming so that similar issues can be detected proactively, or prevented, in the future. 

You must make sure that you consistently install and configure software and services for capturing and ingesting logs and metrics. An established logging and monitoring approach uses multiple AWS or independent software vendor (ISV) services and solutions for different domains (for example, security, performance, networking, or analytics). Each domain has its own deployment and configuration requirements. 

We recommend using CloudWatch to capture and ingest logs and metrics for multiple OSs and compute types. Many AWS services use CloudWatch to log, monitor, and publish logs and metrics, without requiring further configuration. CloudWatch provides a [software agent](https://docs.aws.amazon.com//AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html) that can be installed and configured for different OSs and environments. The following sections outline how to deploy, install, and configure the CloudWatch agent for multiple accounts, Regions, and configurations:
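For example, a minimal agent configuration captures a basic metric set and one log file. The following sketch follows the agent's configuration file schema; the log file path and log group name are illustrative and should be adapted to your workload:

```json
{
  "agent": {
    "metrics_collection_interval": 60
  },
  "metrics": {
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent"] },
      "disk": { "measurement": ["used_percent"], "resources": ["*"] }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/messages",
            "log_group_name": "/ec2/var/log/messages"
          }
        ]
      }
    }
  }
}
```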

**Topics**
+ [Using CloudWatch in centralized or distributed accounts](cloudwatch-centralized-distributed-accounts.md)
+ [Managing CloudWatch agent configuration files](create-store-cloudwatch-configurations.md)

# Using CloudWatch in centralized or distributed accounts
<a name="cloudwatch-centralized-distributed-accounts"></a>

Although CloudWatch is designed to monitor AWS services or resources in one account and Region, you can use a central account to capture logs and metrics from multiple accounts and Regions. If you use more than one account or Region, you should evaluate whether to use the centralized account approach or an individual account to capture logs and metrics. Typically, a hybrid approach is required for multi-account and multi-Region deployments to support the requirements of security, analytics, operations, and workload owners. 

The following table provides areas to consider when choosing to use a centralized, distributed, or hybrid approach.


| Consideration | Description |
| --- | --- |
| Account structures | Your organization might have several separate accounts (for example, accounts for non-production and production workloads) or thousands of accounts for single applications in specific environments. We recommend that you maintain application logs and metrics in the account in which the workload runs, which gives workload owners access to the logs and metrics and enables them to take an active role in logging and monitoring. We also recommend that you use a separate logging account to aggregate all workload logs for analysis, aggregation, trends, and centralized operations. Separate logging accounts can also be used for security, archiving, monitoring, and analytics.  | 
| Access requirements | Team members (for example, workload owners or developers) require access to logs and metrics to troubleshoot and make improvements. Logs should be maintained in the workload's account to make access and troubleshooting easier. If logs and metrics are maintained in a separate account from the workload, users might need to regularly alternate between accounts. Using a centralized account provides log information to authorized users without granting access to the workload account. This can simplify access requirements for analytic workloads where aggregation is required from workloads running in multiple accounts. The centralized logging account can also have alternative search and aggregation options, such as an Amazon OpenSearch Service cluster. Amazon OpenSearch Service [provides fine-grained access control](https://docs.aws.amazon.com//opensearch-service/latest/developerguide/fgac.html) down to the field level for your logs. Fine-grained access control is important when you have sensitive or confidential data that requires specialized access and permissions. | 
| Operations | Many organizations have a centralized operations and security team or an external organization for operational support that requires access to logs for monitoring. Centralized logging and monitoring can make it easier to identify trends, search, aggregate, and perform analytics across all accounts and workloads. If your organization uses the “[you build it, you run it](https://aws.amazon.com//blogs/enterprise-strategy/enterprise-devops-why-you-should-run-what-you-build/)” approach for DevOps, then workload owners require logging and monitoring information in their account. A hybrid approach might be required to satisfy central operations and analytics, in addition to distributed workload ownership. | 
| Environment |  You can choose to host logs and metrics in a central location for production accounts and keep logs and metrics for other environments (for example, development or testing) in the same or separate accounts, depending on security requirements and account architecture. This helps prevent sensitive data created during production from being accessed by a broader audience.   | 

CloudWatch provides [multiple options](https://docs.aws.amazon.com//AmazonCloudWatch/latest/logs/Subscriptions.html) to process logs in real time with CloudWatch subscription filters. You can use subscription filters to stream logs in real time to AWS services for custom processing, analysis, and loading to other systems. This can be particularly helpful if you take a hybrid approach where your logs and metrics are available in individual accounts and Regions, in addition to a centralized account and Region. The following list provides examples of AWS services that can be used for this:
+ [Amazon Data Firehose](https://docs.aws.amazon.com//firehose/latest/dev/what-is-this-service.html) – Firehose provides a streaming solution that automatically scales and resizes based on the data volume being produced. You don’t need to manage the number of shards in an Amazon Kinesis data stream and you can directly connect to Amazon Simple Storage Service (Amazon S3), Amazon OpenSearch Service, or Amazon Redshift with no additional coding. Firehose is an effective solution if you want to centralize your logs in those AWS services.
+ [Amazon Kinesis Data Streams](https://docs.aws.amazon.com//streams/latest/dev/introduction.html) – Kinesis Data Streams is an appropriate solution if you need to integrate with a service that Firehose doesn't support and implement additional processing logic. You can create an Amazon CloudWatch Logs destination in your accounts and Regions that specifies a Kinesis data stream in a central account and an AWS Identity and Access Management (IAM) role that grants it permission to place records in the stream. Kinesis Data Streams provides a flexible, open-ended landing zone for your log data that can then be consumed by different options. You can read the Kinesis Data Streams log data into your account, perform preprocessing, and send the data to your chosen destination. 

  However, you must configure the stream's shards so that it is appropriately sized for the volume of log data produced. Kinesis Data Streams acts as a temporary intermediary or queue for your log data, and you can retain data in the stream for between 1 and 365 days. Kinesis Data Streams also supports replay, which means you can reprocess data that has not yet been consumed.
+ [Amazon OpenSearch Service](https://docs.aws.amazon.com//opensearch-service/latest/developerguide/what-is.html) – CloudWatch Logs can stream logs in a log group to an OpenSearch cluster in an individual or centralized account. When you configure a log group to stream data to an OpenSearch cluster, a Lambda function is created in the same account and Region as your log group. The Lambda function must have a network connection with the OpenSearch cluster. You can customize the Lambda function to perform additional preprocessing, in addition to customizing the ingestion into Amazon OpenSearch Service. Centralized logging with Amazon OpenSearch Service makes it easier to analyze, search, and troubleshoot issues across multiple components in your cloud architecture.
+ [Lambda](https://docs.aws.amazon.com//lambda/latest/dg/welcome.html) – If you use Kinesis Data Streams, you need to provision and manage compute resources that consume data from your stream. To avoid this, you can stream log data directly to Lambda for processing and send it to a destination based on your logic. This means that you don't need to provision and manage compute resources to process incoming data. If you choose to use Lambda, make sure that your solution is compatible with [Lambda quotas](https://docs.aws.amazon.com//lambda/latest/dg/gettingstarted-limits.html).
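If you stream a log group to Lambda, the function receives the log events as a base64-encoded, gzipped JSON payload. The following Python sketch (the log group name and message are illustrative) decodes that payload into plain messages:

```python
import base64
import gzip
import json

def decode_log_events(event):
    """Decode the base64-encoded, gzipped payload that CloudWatch Logs
    delivers to a Lambda subscription target and return the messages."""
    payload = base64.b64decode(event["awslogs"]["data"])
    batch = json.loads(gzip.decompress(payload))
    # The batch also carries owner, logGroup, logStream, and subscriptionFilters.
    return [e["message"] for e in batch.get("logEvents", [])]

# Simulated event for local testing; a real invocation receives this
# structure directly from CloudWatch Logs.
sample = {"logGroup": "/app/demo",
          "logEvents": [{"id": "1", "timestamp": 0, "message": "hello"}]}
event = {"awslogs": {"data": base64.b64encode(
    gzip.compress(json.dumps(sample).encode())).decode()}}
print(decode_log_events(event))  # → ['hello']
```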

You might need to process or share log data stored in CloudWatch Logs in file format. You can create an export task to [export a log group to Amazon S3](https://docs.aws.amazon.com//AmazonCloudWatch/latest/logs/S3Export.html) for a specific date or time range. For example, you might choose to export logs on a daily basis to Amazon S3 for analytics and auditing. Lambda can be used to automate this solution. You can also combine this solution with Amazon S3 replication to ship and centralize your logs from multiple accounts and Regions to one centralized account and Region. 

The CloudWatch agent configuration can also specify a `credentials` field in the [`agent` section](https://docs.aws.amazon.com//AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html#CloudWatch-Agent-Configuration-File-Agentsection). This specifies an IAM role to use when sending metrics and logs to a different account. If specified, this field contains the `role_arn` parameter. This field can be used when you only need centralized logging and monitoring in a specific centralized account and Region. 
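For example, an agent configuration that sends logs and metrics to a central account might include a section like the following (the account ID and role name are placeholders):

```json
{
  "agent": {
    "credentials": {
      "role_arn": "arn:aws:iam::111122223333:role/CentralMonitoringRole"
    }
  }
}
```

The role must exist in the central account and trust the account in which the agent runs.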

You can also use an [AWS SDK](https://aws.amazon.com/developer/tools/) to write your own custom processing application in the language of your choice, read logs and metrics from your accounts, and send the data to a centralized account or another destination for further processing and monitoring.

# Managing CloudWatch agent configuration files
<a name="create-store-cloudwatch-configurations"></a>

We recommend that you create a standard Amazon CloudWatch agent configuration that includes the system logs and metrics that you want to capture across all your Amazon Elastic Compute Cloud (Amazon EC2) instances and on-premises servers. You can use the CloudWatch agent [configuration file wizard](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html) to help you create the configuration file. You can run the configuration wizard multiple times to generate unique configurations for different systems and environments. You can also modify the configuration file or create variations by [using the configuration file schema](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html). The CloudWatch agent configuration file can be stored in [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) parameters.  You can create separate Parameter Store parameters if you have [multiple CloudWatch agent configuration files](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-common-scenarios.html#CloudWatch-Agent-multiple-config-files). If you are using multiple AWS accounts or AWS Regions, you must manage and update the Parameter Store parameters in each account and Region. Alternatively, you can centrally manage your CloudWatch configurations as files in Amazon S3 or a version-control tool of your choice. 

The `amazon-cloudwatch-agent-ctl` script included with the CloudWatch agent allows you to specify a configuration file, Parameter Store parameter, or the agent's default configuration. The default configuration aligns to the basic, predefined metric set and configures the agent to report memory and disk space metrics to CloudWatch. However, it doesn't include any log file configurations. The default configuration is also applied if you use [Systems Manager Quick Setup](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-quick-setup.html) for the CloudWatch agent.
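For example, on a Linux EC2 instance you might apply a configuration stored in Parameter Store with a command like the following (the parameter name is illustrative):

```shell
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -s \
  -c ssm:AmazonCloudWatch-linux
```

Here `-a fetch-config` retrieves and applies the configuration, `-m ec2` indicates the instance runs in Amazon EC2, and `-s` restarts the agent afterward.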

Because the default configuration doesn't include logging, we recommend that you create and apply your own CloudWatch configurations that are customized to your requirements.

## Managing CloudWatch configurations
<a name="store-cloudwatch-configuration-s3"></a>

CloudWatch configurations can be stored and applied as Parameter Store parameters or as CloudWatch configuration files. The best choice depends on your requirements. In this section, we discuss the pros and cons of these two options, and we detail a representative solution for managing CloudWatch configuration files across multiple AWS accounts and AWS Regions.

**Systems Manager Parameter Store parameters**

Using Parameter Store parameters to manage CloudWatch configurations works well if you have a single, standard CloudWatch agent configuration file that you want to apply and manage in a small set of AWS accounts and Regions. When you store your CloudWatch configurations as Parameter Store parameters, you can use the CloudWatch agent configuration tool (`amazon-cloudwatch-agent-ctl` on Linux) to read and apply the configuration from Parameter Store without copying the configuration file to your instance. You can use the **AmazonCloudWatch-ManageAgent** Systems Manager Command document to update the CloudWatch configuration on multiple EC2 instances in a single run. Because Parameter Store parameters are Regional, you must update and maintain your CloudWatch Parameter Store parameters in each AWS Region and AWS account. If you have multiple CloudWatch configurations that you want to apply to each instance, you must customize the **AmazonCloudWatch-ManageAgent** Command document to include these parameters.
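For example, a Run Command invocation or State Manager association for the AmazonCloudWatch-ManageAgent document might pass parameters such as the following (the Parameter Store parameter name is illustrative):

```json
{
  "action": "configure",
  "mode": "ec2",
  "optionalConfigurationSource": "ssm",
  "optionalConfigurationLocation": "AmazonCloudWatch-linux",
  "optionalRestart": "yes"
}
```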

**CloudWatch configuration files**

Managing your CloudWatch configurations as files might work well if you have many AWS accounts and Regions and you manage multiple CloudWatch configuration files. With this approach, you can browse, organize, and manage the files in a folder structure. You can apply security controls to individual folders or files to grant or limit access, such as update and read permissions. You can share and transfer the files outside of AWS for collaboration, and you can version control them to track and manage changes. You can apply CloudWatch configurations collectively by copying the configuration files to the CloudWatch agent configuration directory instead of applying each configuration file individually. For Linux, the CloudWatch configuration directory is `/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d`. For Windows, it is `C:\ProgramData\Amazon\AmazonCloudWatchAgent\Configs`.

When you start the CloudWatch agent, the agent automatically appends each file found in these directories to create a CloudWatch composite configuration file. The configuration files should be stored in a central location (for example, an S3 bucket) that can be accessed by your required accounts and Regions. An example solution using this approach is provided.
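As a rough illustration of that composite behavior, the following Python sketch appends every JSON fragment in a directory into one configuration (the real agent performs schema-aware merging, and the file names here are illustrative):

```python
import json
import tempfile
from pathlib import Path

def compose_configs(config_dir):
    """Combine every JSON file in a directory into one configuration,
    roughly as the agent builds its composite configuration file."""
    composite = {}
    for path in sorted(Path(config_dir).glob("*.json")):
        fragment = json.loads(path.read_text())
        for section, body in fragment.items():
            # Later files extend earlier ones, section by section.
            composite.setdefault(section, {}).update(body)
    return composite

# Demo: one fragment defines metrics, a second adds log collection.
with tempfile.TemporaryDirectory() as d:
    Path(d, "01-metrics.json").write_text(json.dumps({"metrics": {"namespace": "Standard"}}))
    Path(d, "02-logs.json").write_text(json.dumps({"logs": {"logs_collected": {"files": {}}}}))
    merged = compose_configs(d)

print(sorted(merged))  # → ['logs', 'metrics']
```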

**Organizing CloudWatch configurations**

Regardless of the approach you use, organize your CloudWatch configurations consistently. You can organize them into file or Parameter Store paths by using an approach such as the following.


| Path | Purpose |
| --- | --- |
| */config/standard/windows/ec2* | Store standard Windows-specific CloudWatch configuration files for Amazon EC2. You can further categorize your standard operating system (OS) configurations for different Windows versions, EC2 instance types, and environments under this folder. | 
| */config/standard/windows/onpremises* | Store standard Windows-specific CloudWatch configuration files for on-premises servers. You can also further categorize your standard OS configurations for different Windows versions, server types, and environments under this folder. | 
| */config/standard/linux/ec2* | Store your standard Linux-specific CloudWatch configuration files for Amazon EC2. You can further categorize your standard OS configuration for different Linux distributions, EC2 instance types, and environments under this folder. | 
| */config/standard/linux/onpremises* | Store your standard Linux-specific CloudWatch configuration files for on-premises servers. You can further categorize your standard OS configuration for different Linux distributions, server types, and environments under this folder. | 
| */config/ecs* | Store CloudWatch configuration files that are specific to Amazon Elastic Container Service (Amazon ECS) if you use Amazon ECS container instances. These configurations can be appended to the standard Amazon EC2 configurations for Amazon ECS-specific system-level logging and monitoring. | 
| */config/<application name>* | Store your application-specific CloudWatch configuration files. You can further categorize your applications with additional folders and prefixes for environments and versions. | 
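A small helper can keep these paths consistent when your scripts upload or retrieve configurations. The following Python sketch simply mirrors the folder scheme in the preceding table and is not part of any AWS tooling:

```python
def config_path(kind, os_name=None, compute=None, application=None):
    """Return the storage path for a CloudWatch configuration,
    following the folder scheme described above (illustrative helper)."""
    if kind == "standard":
        # e.g. /config/standard/linux/ec2 or /config/standard/windows/onpremises
        return f"/config/standard/{os_name}/{compute}"
    if kind == "ecs":
        return "/config/ecs"
    if kind == "application":
        return f"/config/{application}"
    raise ValueError(f"unknown configuration kind: {kind}")

print(config_path("standard", os_name="linux", compute="ec2"))  # → /config/standard/linux/ec2
print(config_path("application", application="order-service"))  # → /config/order-service
```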

## Example: Storing CloudWatch configuration files in an S3 bucket
<a name="example"></a>

This section provides an example using Amazon S3 to store CloudWatch configuration files and a custom Systems Manager runbook to retrieve and apply the CloudWatch configuration files. This approach can address some of the challenges of using Systems Manager Parameter Store parameters for CloudWatch configuration at scale:
+ If you use multiple Regions, you must synchronize CloudWatch configuration updates in each Region's Parameter Store. Parameter Store is a Regional service and the same parameter must be updated in each Region that uses the CloudWatch agent.
+ If you have multiple CloudWatch configurations, you must initiate the retrieval and application of each Parameter Store configuration. You must individually retrieve each CloudWatch configuration from the Parameter Store and also update the retrieval method whenever you add a new configuration. In contrast, CloudWatch provides a configuration directory for storing configuration files and applies each configuration in the directory, without requiring them to be individually specified.
+ If you use multiple accounts, you must ensure that each new account has the required CloudWatch configurations in its Parameter Store. You also need to make sure that any configuration changes are applied to these accounts and their Regions in the future.

You can store CloudWatch configurations in an S3 bucket that is accessible from all your accounts and Regions. You can then copy these configurations from the S3 bucket to the CloudWatch configuration directory by using Systems Manager Automation runbooks and Systems Manager State Manager. You can use the [cloudwatch-config-s3-bucket.yaml](https://github.com/aws-samples/logging-monitoring-apg-guide-examples/blob/main/cloudwatch-config-s3-bucket.yaml) AWS CloudFormation template to create an S3 bucket that is accessible from multiple accounts within an organization in AWS Organizations. The template includes an `OrganizationID` parameter that grants read access to all accounts within your [organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html).
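As a sketch of the access that such a template grants, an S3 bucket policy can allow read access to every account in an organization by using the `aws:PrincipalOrgID` condition key. The bucket name and organization ID below are placeholders, and the actual template may differ in detail:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOrganizationRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::cloudwatch-config-bucket",
        "arn:aws:s3:::cloudwatch-config-bucket/*"
      ],
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-exampleorgid" }
      }
    }
  ]
}
```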

The augmented sample Systems Manager runbook, provided in the [Set up State Manager and Distributor for CloudWatch agent deployment and configuration](https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/install-cloudwatch-systems-manager.html#set-up-systems-manager-distributor) section of this guide, is configured to retrieve files using the S3 bucket created by the [cloudwatch-config-s3-bucket.yaml](https://github.com/aws-samples/logging-monitoring-apg-guide-examples/blob/main/cloudwatch-config-s3-bucket.yaml) AWS CloudFormation template.

Alternatively, you can use a version control system (for example, GitHub) to store your configuration files. If you want to automatically retrieve configuration files stored in a version control system, you have to manage or centralize the credential storage and update the Systems Manager Automation runbook that is used to retrieve the credentials across your accounts and AWS Regions.