

# Install the CloudWatch agent with Prometheus metrics collection on Amazon EKS and Kubernetes clusters
Set up on Amazon EKS and Kubernetes clusters

This section explains how to set up the CloudWatch agent with Prometheus monitoring in a cluster running Amazon EKS or Kubernetes. After you do this, the agent automatically scrapes and imports metrics for the following workloads running in that cluster.
+ AWS App Mesh
+ NGINX
+ Memcached
+ Java/JMX
+ HAProxy
+ Fluent Bit

You can also configure the agent to scrape and import additional Prometheus workloads and sources.

Before following these steps to install the CloudWatch agent for Prometheus metric collection, you must have a cluster running on Amazon EKS or a Kubernetes cluster running on an Amazon EC2 instance.

**VPC security group requirements**

The ingress rules of the security groups for the Prometheus workloads must open the Prometheus ports to the CloudWatch agent, so that the agent can scrape the metrics over the workloads' private IP addresses.

The egress rules of the security group for the CloudWatch agent must allow the agent to connect to the Prometheus workloads' ports over their private IP addresses. 

**Topics**
+ [Install the CloudWatch agent with Prometheus metrics collection on Amazon EKS and Kubernetes clusters](#ContainerInsights-Prometheus-Setup-roles)
+ [Scraping additional Prometheus sources and importing those metrics](ContainerInsights-Prometheus-Setup-configure.md)
+ [(Optional) Set up sample containerized Amazon EKS workloads for Prometheus metric testing](ContainerInsights-Prometheus-Sample-Workloads.md)

## Install the CloudWatch agent with Prometheus metrics collection on Amazon EKS and Kubernetes clusters

**Topics**
+ [Setting up IAM roles](#ContainerInsights-Prometheus-Setup-roles)
+ [Installing the CloudWatch agent to collect Prometheus metrics](#ContainerInsights-Prometheus-Setup-install-agent)

### Setting up IAM roles


The first step is to set up the necessary IAM role in the cluster. There are two methods:
+ Set up an IAM role for a service account, also known as a *service role*. This method works for both the EC2 launch type and the Fargate launch type.
+ Add an IAM policy to the IAM role used for the cluster. This works only for the EC2 launch type.

**Set up a service role (EC2 launch type and Fargate launch type)**

To set up a service role, enter the following command. Replace *MyCluster* with the name of the cluster.

```
eksctl create iamserviceaccount \
  --name cwagent-prometheus \
  --namespace amazon-cloudwatch \
  --cluster MyCluster \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --approve \
  --override-existing-serviceaccounts
```

**Add a policy to the node group's IAM role (EC2 launch type only)**

**To set up the IAM policy in a node group for Prometheus support**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. You need to find the prefix of the IAM role name for the cluster. To do this, select the check box next to the name of an instance that is in the cluster, and choose **Actions**, **Security**, **Modify IAM Role**. Then copy the prefix of the IAM role, such as `eksctl-dev303-workshop-nodegroup`.

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Use the search box to find the prefix that you copied earlier in this procedure, and choose that role.

1. Choose **Attach policies**.

1. Use the search box to find **CloudWatchAgentServerPolicy**. Select the check box next to **CloudWatchAgentServerPolicy**, and choose **Attach policy**.

### Installing the CloudWatch agent to collect Prometheus metrics


You must install the CloudWatch agent in the cluster to collect the metrics. How to install the agent differs for Amazon EKS clusters and Kubernetes clusters.

**Delete previous versions of the CloudWatch agent with Prometheus support**

If you have already installed a version of the CloudWatch agent with Prometheus support in your cluster, you must delete that version by entering the following command. This is necessary only for previous versions of the agent with Prometheus support. You do not need to delete the CloudWatch agent that enables Container Insights without Prometheus support.

```
kubectl delete deployment cwagent-prometheus -n amazon-cloudwatch
```

#### Installing the CloudWatch agent on Amazon EKS clusters with the EC2 launch type


To install the CloudWatch agent with Prometheus support on an Amazon EKS cluster, follow these steps.

**To install the CloudWatch agent with Prometheus support on an Amazon EKS cluster**

1. Enter the following command to check whether the `amazon-cloudwatch` namespace has already been created:

   ```
   kubectl get namespace
   ```

1. If `amazon-cloudwatch` is not displayed in the results, create it by entering the following command:

   ```
   kubectl create namespace amazon-cloudwatch
   ```

1. To deploy the agent with the default configuration and have it send data to the AWS Region that it is installed in, enter the following command:

   ```
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks.yaml
   ```

   To have the agent send data to a different Region instead, follow these steps:

   1. Download the YAML file for the agent by entering the following command:

      ```
      curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks.yaml
      ```

   1. Open the file with a text editor, and search for the `cwagentconfig.json` block of the file.

   1. Add the highlighted lines, specifying the Region that you want:

      ```
      cwagentconfig.json: |
          {
            "agent": {
              "region": "us-east-2"
            },
            "logs": { ...
      ```

   1. Save the file and deploy the agent using your updated file.

      ```
      kubectl apply -f prometheus-eks.yaml
      ```

#### Installing the CloudWatch agent on Amazon EKS clusters with the Fargate launch type


To install the CloudWatch agent with Prometheus support on an Amazon EKS cluster with the Fargate launch type, follow these steps.

**To install the CloudWatch agent with Prometheus support on an Amazon EKS cluster with the Fargate launch type**

1. Enter the following command to create a Fargate profile for the CloudWatch agent so that it can run inside the cluster. Replace *MyCluster* with the name of the cluster.

   ```
   eksctl create fargateprofile --cluster MyCluster \
   --name amazon-cloudwatch \
   --namespace amazon-cloudwatch
   ```

1. To install the CloudWatch agent, enter the following command. Replace *MyCluster* with the name of the cluster. This name is used in the log group name that stores the log events collected by the agent, and is also used as a dimension for the metrics collected by the agent.

   Replace *region* with the name of the Region where you want the metrics to be sent. For example, `us-west-1`. 

   ```
   curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks-fargate.yaml | 
   sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" | 
   kubectl apply -f -
   ```

#### Installing the CloudWatch agent on a Kubernetes cluster


To install the CloudWatch agent with Prometheus support on a cluster running Kubernetes, enter the following command:

```
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-k8s.yaml | 
sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" | 
kubectl apply -f -
```

Replace *MyCluster* with the name of the cluster. This name is used in the log group name that stores the log events collected by the agent, and is also used as a dimension for the metrics collected by the agent.

Replace *region* with the name of the AWS Region where you want the metrics to be sent. For example, **us-west-1**.
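
The `sed` command above simply fills in the `{{cluster_name}}` and `{{region_name}}` placeholders in the template before it reaches `kubectl`. You can preview the substitution on a sample line before applying the manifest; the line below is an illustration, not taken from the actual file:

```shell
echo "log_group_name: /aws/containerinsights/{{cluster_name}}/prometheus" \
| sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/us-west-2/"
# prints: log_group_name: /aws/containerinsights/MyCluster/prometheus
```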

#### Verify that the agent is running


On both Amazon EKS and Kubernetes clusters, you can enter the following command to confirm that the agent is running.

```
kubectl get pod -l "app=cwagent-prometheus" -n amazon-cloudwatch
```

If the results include a single CloudWatch agent pod in the `Running` state, the agent is running and collecting Prometheus metrics. By default, the CloudWatch agent collects metrics for App Mesh, NGINX, Memcached, Java/JMX, and HAProxy every minute. For more information about those metrics, see [Prometheus metrics collected by the CloudWatch agent](ContainerInsights-Prometheus-metrics.md). For instructions on how to see your Prometheus metrics in CloudWatch, see [Viewing your Prometheus metrics](ContainerInsights-Prometheus-viewmetrics.md).

You can also configure the CloudWatch agent to collect metrics from other Prometheus exporters. For more information, see [Scraping additional Prometheus sources and importing those metrics](ContainerInsights-Prometheus-Setup-configure.md).

# Scraping additional Prometheus sources and importing those metrics


The CloudWatch agent with Prometheus monitoring needs two configurations to scrape the Prometheus metrics. One is for the standard Prometheus configurations as documented in [scrape_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) in the Prometheus documentation. The other is for the CloudWatch agent configuration.

For Amazon EKS clusters, the configurations are defined in `prometheus-eks.yaml` (for the EC2 launch type) or `prometheus-eks-fargate.yaml` (for the Fargate launch type) as two config maps:
+ The `name: prometheus-config` section contains the settings for Prometheus scraping.
+ The `name: prometheus-cwagentconfig` section contains the configuration for the CloudWatch agent. You can use this section to configure how the Prometheus metrics are collected by CloudWatch. For example, you specify which metrics are to be imported into CloudWatch, and define their dimensions. 

For Kubernetes clusters running on Amazon EC2 instances, the configurations are defined in the `prometheus-k8s.yaml` YAML file as two config maps:
+ The `name: prometheus-config` section contains the settings for Prometheus scraping.
+ The `name: prometheus-cwagentconfig` section contains the configuration for the CloudWatch agent. 

To scrape additional Prometheus metrics sources and import those metrics to CloudWatch, you modify both the Prometheus scrape configuration and the CloudWatch agent configuration, and then re-deploy the agent with the updated configuration.

**VPC security group requirements**

The ingress rules of the security groups for the Prometheus workloads must open the Prometheus ports to the CloudWatch agent, so that the agent can scrape the metrics over the workloads' private IP addresses.

The egress rules of the security group for the CloudWatch agent must allow the agent to connect to the Prometheus workloads' ports over their private IP addresses. 

## Prometheus scrape configuration


The CloudWatch agent supports the standard Prometheus scrape configurations as documented in [scrape_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) in the Prometheus documentation. You can edit this section to update the configurations that are already in this file, and add additional Prometheus scraping targets. By default, the sample configuration file contains the following global configuration lines:

```
global:
  scrape_interval: 1m
  scrape_timeout: 10s
```
+ **scrape_interval**— Defines how frequently to scrape targets.
+ **scrape_timeout**— Defines how long to wait before a scrape request times out.

You can also define different values for these settings at the job level, to override the global configurations.
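
For example, a job-level override for a single exporter looks like the following fragment. The job name and target are hypothetical; the `scrape_interval` and `scrape_timeout` set here apply only to that job:

```
scrape_configs:
  - job_name: 'my-frequent-exporter'  # hypothetical job name
    scrape_interval: 30s              # overrides the global 1m for this job
    scrape_timeout: 5s                # overrides the global 10s for this job
    static_configs:
      - targets: ['localhost:9100']   # example target address
```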

### Prometheus scraping jobs


The CloudWatch agent YAML files already have some default scraping jobs configured. For example, in `prometheus-eks.yaml`, the default scraping jobs are configured in the `job_name` lines in the `scrape_configs` section. In this file, the following default `kubernetes-pod-jmx` section scrapes JMX exporter metrics.

```
   - job_name: 'kubernetes-pod-jmx'
      sample_limit: 10000
      metrics_path: /metrics
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__address__]
        action: keep
        regex: '.*:9404$'
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: Namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container_name
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: pod_controller_name
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_controller_kind
        target_label: pod_controller_kind
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_phase
        target_label: pod_phase
```
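
In this job, the first relabel rule keeps only pods whose `__address__` ends in port `9404`, the port that the JMX exporter listens on in this configuration; all other pods are dropped. You can see the effect of that `keep` regular expression with a quick shell check (the addresses below are made up):

```shell
# only the address ending in :9404 survives the keep rule
for addr in 192.0.2.11:8080 192.0.2.10:9404; do
  echo "$addr" | grep -Eq '.*:9404$' && echo "keep $addr"
done
# prints: keep 192.0.2.10:9404
```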

Each of these default targets is scraped, and the metrics are sent to CloudWatch in log events using embedded metric format. For more information, see [Embedding metrics within logs](CloudWatch_Embedded_Metric_Format.md).

Log events from Amazon EKS and Kubernetes clusters are stored in the **/aws/containerinsights/*cluster_name*/prometheus** log group in CloudWatch Logs. Log events from Amazon ECS clusters are stored in the **/aws/ecs/containerinsights/*cluster_name*/prometheus** log group.

Each scraping job is contained in a different log stream in this log group. For example, the Prometheus scraping job `kubernetes-pod-appmesh-envoy` is defined for App Mesh. All App Mesh Prometheus metrics from Amazon EKS and Kubernetes clusters are sent to the log stream named **/aws/containerinsights/*cluster_name*/prometheus/kubernetes-pod-appmesh-envoy/**.

To add a new scraping target, you add a new `job_name` section to the `scrape_configs` section of the YAML file, and restart the agent. For an example of this process, see [Tutorial for adding a new Prometheus scrape target: Prometheus API Server metrics](#ContainerInsights-Prometheus-Setup-new-exporters).

## CloudWatch agent configuration for Prometheus


The CloudWatch agent configuration file has a `prometheus` section under `metrics_collected` for the Prometheus scraping configuration. It includes the following configuration options:
+ **cluster_name**— Specifies the cluster name to be added as a label in the log event. This field is optional. If you omit it, the agent detects the Amazon EKS or Kubernetes cluster name.
+ **log_group_name**— Specifies the log group name for the scraped Prometheus metrics. This field is optional. If you omit it, CloudWatch uses **/aws/containerinsights/*cluster_name*/prometheus** for logs from Amazon EKS and Kubernetes clusters.
+ **prometheus_config_path**— Specifies the Prometheus scrape configuration file path. If the value of this field starts with `env:`, the Prometheus scrape configuration file contents are retrieved from the container's environment variable. Do not change this field.
+ **ecs_service_discovery**— Specifies the configuration for Amazon ECS Prometheus service discovery. For more information, see [Detailed guide for autodiscovery on Amazon ECS clusters](ContainerInsights-Prometheus-Setup-autodiscovery-ecs.md).

  The `ecs_service_discovery` section can contain the following fields:
  + `sd_frequency` is the frequency to discover the Prometheus exporters. Specify a number and a unit suffix. For example, `1m` for once per minute or `30s` for once per 30 seconds. Valid unit suffixes are `ns`, `us`, `ms`, `s`, `m`, and `h`.

    This field is optional. The default is 60 seconds (1 minute).
  + `sd_target_cluster` is the target Amazon ECS cluster name for auto-discovery. This field is optional. The default is the name of the Amazon ECS cluster where the CloudWatch agent is installed. 
  + `sd_cluster_region` is the target Amazon ECS cluster's Region. This field is optional. The default is the Region of the Amazon ECS cluster where the CloudWatch agent is installed.
  + `sd_result_file` is the path of the YAML file for the Prometheus target results. The Prometheus scrape configuration will refer to this file.
  + `docker_label` is an optional section that you can use to specify the configuration for docker label-based service discovery. If you omit this section, docker label-based discovery is not used. This section can contain the following fields:
    + `sd_port_label` is the container's docker label name that specifies the container port for Prometheus metrics. The default value is `ECS_PROMETHEUS_EXPORTER_PORT`. If the container does not have this docker label, the CloudWatch agent will skip it.
    + `sd_metrics_path_label` is the container's docker label name that specifies the Prometheus metrics path. The default value is `ECS_PROMETHEUS_METRICS_PATH`. If the container does not have this docker label, the agent assumes the default path `/metrics`.
    + `sd_job_name_label` is the container's docker label name that specifies the Prometheus scrape job name. The default value is `job`. If the container does not have this docker label, the CloudWatch agent uses the job name in the Prometheus scrape configuration.
  + `task_definition_list` is an optional section that you can use to specify the configuration of task definition-based service discovery. If you omit this section, task definition-based discovery is not used. This section can contain the following fields:
    + `sd_task_definition_arn_pattern` is the pattern to use to specify the Amazon ECS task definitions to discover. This is a regular expression.
    + `sd_metrics_ports` lists the containerPort for the Prometheus metrics. Separate the containerPorts with semicolons.
    + `sd_container_name_pattern` specifies the Amazon ECS task container names. This is a regular expression.
    + `sd_metrics_path` specifies the Prometheus metric path. If you omit this, the agent assumes the default path `/metrics`.
    + `sd_job_name` specifies the Prometheus scrape job name. If you omit this field, the CloudWatch agent uses the job name in the Prometheus scrape configuration.
+ **metric_declaration**— Sections that specify the array of logs with embedded metric format to be generated. There is a `metric_declaration` section for each Prometheus source that the CloudWatch agent imports from by default. These sections each include the following fields:
  + `label_matcher` is a regular expression that checks the value of the labels listed in `source_labels`. The metrics that match are enabled for inclusion in the embedded metric format sent to CloudWatch. 

    If you have multiple labels specified in `source_labels`, we recommend that you do not use `^` or `$` characters in the regular expression for `label_matcher`.
  + `source_labels` specifies the value of the labels that are checked by the `label_matcher` line.
  + `label_separator` specifies the separator to be used in the `label_matcher` line if multiple `source_labels` are specified. The default is `;`. You can see this default used in the `label_matcher` line in the following example.
  + `metric_selectors` is a regular expression that specifies the metrics to be collected and sent to CloudWatch.
  + `dimensions` is the list of labels to be used as CloudWatch dimensions for each selected metric.
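
For orientation, these options come together in the agent's `cwagentconfig.json` roughly as follows. This is an illustrative sketch, not the shipped file; in particular, the `emf_processor` wrapper around `metric_declaration` and all values shown here are assumptions modeled on the sample YAML files:

```
{
  "logs": {
    "metrics_collected": {
      "prometheus": {
        "cluster_name": "MyCluster",
        "log_group_name": "/aws/containerinsights/MyCluster/prometheus",
        "prometheus_config_path": "env:PROMETHEUS_CONFIG_CONTENT",
        "emf_processor": {
          "metric_declaration": [
            {
              "source_labels": ["job"],
              "label_matcher": "^kubernetes-pod-jmx$",
              "dimensions": [["ClusterName", "Namespace"]],
              "metric_selectors": ["^jvm_threads_current$"]
            }
          ]
        }
      }
    }
  }
}
```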

See the following `metric_declaration` example.

```
"metric_declaration": [
  {
     "source_labels":[ "Service", "Namespace"],
     "label_matcher":"(.*node-exporter.*|.*kube-dns.*);kube-system",
     "dimensions":[
        ["Service", "Namespace"]
     ],
     "metric_selectors":[
        "^coredns_dns_request_type_count_total$"
     ]
  }
]
```

This example configures an embedded metric format section to be sent as a log event if the following conditions are met:
+ The value of `Service` contains either `node-exporter` or `kube-dns`.
+ The value of `Namespace` is `kube-system`.
+ The Prometheus metric `coredns_dns_request_type_count_total` contains both `Service` and `Namespace` labels.
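
Conceptually, the agent joins the values of the `source_labels` with the `label_separator` (`;` by default) and tests the result against `label_matcher`. For the example above, a target with `Service=kube-dns` and `Namespace=kube-system` matches; the following shell check mimics that logic:

```shell
# join the source label values with ';' and test against label_matcher
printf '%s;%s\n' "kube-dns" "kube-system" \
| grep -Eq '(.*node-exporter.*|.*kube-dns.*);kube-system' && echo "match"
# prints: match
```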

The log event that is sent includes the following highlighted section:

```
{
   "CloudWatchMetrics":[
      {
         "Metrics":[
            {
               "Name":"coredns_dns_request_type_count_total"
            }
         ],
         "Dimensions":[
            [
               "Namespace",
               "Service"
            ]
         ],
         "Namespace":"ContainerInsights/Prometheus"
      }
   ],
   "Namespace":"kube-system",
   "Service":"kube-dns",
   "coredns_dns_request_type_count_total":2562,
   "eks_amazonaws_com_component":"kube-dns",
   "instance":"192.168.61.254:9153",
   "job":"kubernetes-service-endpoints",
   ...
}
```

## Tutorial for adding a new Prometheus scrape target: Prometheus API Server metrics


The Kubernetes API Server exposes Prometheus metrics on endpoints by default. The official example for the Kubernetes API Server scraping configuration is available on [GitHub](https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml).

The following tutorial shows how to perform these steps to begin importing Kubernetes API Server metrics into CloudWatch:
+ Add the Prometheus scraping configuration for the Kubernetes API Server to the CloudWatch agent YAML file.
+ Configure the embedded metric format metric definitions in the CloudWatch agent YAML file.
+ (Optional) Create a CloudWatch dashboard for the Kubernetes API Server metrics.

**Note**  
The Kubernetes API Server exposes gauge, counter, histogram, and summary metrics. In this release of Prometheus metrics support, CloudWatch imports only the metrics with gauge, counter, and summary types.

**To start collecting Kubernetes API Server Prometheus metrics in CloudWatch**

1. Download the latest version of the `prometheus-eks.yaml`, `prometheus-eks-fargate.yaml`, or `prometheus-k8s.yaml` file by entering one of the following commands.

   For an Amazon EKS cluster with the EC2 launch type, enter the following command:

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks.yaml
   ```

   For an Amazon EKS cluster with the Fargate launch type, enter the following command:

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks-fargate.yaml
   ```

   For a Kubernetes cluster running on an Amazon EC2 instance, enter the following command:

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-k8s.yaml
   ```

1. Open the file with a text editor, find the `prometheus-config` section, and add the following section inside of that section. Then save the changes:

   ```
       # Scrape config for API servers
       - job_name: 'kubernetes-apiservers'
         kubernetes_sd_configs:
           - role: endpoints
             namespaces:
               names:
                 - default
         scheme: https
         tls_config:
           ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
           insecure_skip_verify: true
         bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
         relabel_configs:
         - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
           action: keep
           regex: kubernetes;https
         - action: replace
           source_labels:
           - __meta_kubernetes_namespace
           target_label: Namespace
         - action: replace
           source_labels:
           - __meta_kubernetes_service_name
           target_label: Service
   ```

1. While you still have the YAML file open in the text editor, find the `cwagentconfig.json` section. Add the following subsection and save the changes. This section puts the API server metrics onto the CloudWatch agent allow list. Three types of API Server metrics are added to the allow list:
   + etcd object counts
   + API Server registration controller metrics
   + API Server request metrics

   ```
   {"source_labels": ["job", "resource"],
     "label_matcher": "^kubernetes-apiservers;(services|daemonsets.apps|deployments.apps|configmaps|endpoints|secrets|serviceaccounts|replicasets.apps)",
     "dimensions": [["ClusterName","Service","resource"]],
     "metric_selectors": [
     "^etcd_object_counts$"
     ]
   },
   {"source_labels": ["job", "name"],
      "label_matcher": "^kubernetes-apiservers;APIServiceRegistrationController$",
      "dimensions": [["ClusterName","Service","name"]],
      "metric_selectors": [
      "^workqueue_depth$",
      "^workqueue_adds_total$",
      "^workqueue_retries_total$"
     ]
   },
   {"source_labels": ["job","code"],
     "label_matcher": "^kubernetes-apiservers;2[0-9]{2}$",
     "dimensions": [["ClusterName","Service","code"]],
     "metric_selectors": [
      "^apiserver_request_total$"
     ]
   },
   {"source_labels": ["job"],
     "label_matcher": "^kubernetes-apiservers",
     "dimensions": [["ClusterName","Service"]],
     "metric_selectors": [
     "^apiserver_request_total$"
     ]
   },
   ```

1. If you already have the CloudWatch agent with Prometheus support deployed in the cluster, you must delete it by entering the following command:

   ```
   kubectl delete deployment cwagent-prometheus -n amazon-cloudwatch
   ```

1. Deploy the CloudWatch agent with your updated configuration by entering one of the following commands. For an Amazon EKS cluster with the EC2 launch type, enter:

   ```
   kubectl apply -f prometheus-eks.yaml
   ```

   For an Amazon EKS cluster with the Fargate launch type, enter the following command. Replace *MyCluster* and *region* with values to match your deployment.

   ```
   cat prometheus-eks-fargate.yaml \
   | sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" \
   | kubectl apply -f -
   ```

   For a Kubernetes cluster, enter the following command. Replace *MyCluster* and *region* with values to match your deployment.

   ```
   cat prometheus-k8s.yaml \
   | sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" \
   | kubectl apply -f -
   ```
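
The third `metric_declaration` entry added in step 3 keeps only metrics whose `code` label is an HTTP 2xx status; its `2[0-9]{2}` pattern can be checked in the shell (the codes below are examples):

```shell
# only 2xx codes match the anchored pattern
for code in 301 404 200 204; do
  printf 'kubernetes-apiservers;%s\n' "$code" \
  | grep -Eq '^kubernetes-apiservers;2[0-9]{2}$' && echo "2xx: $code"
done
# prints: 2xx: 200 and 2xx: 204
```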

Once you have done this, you should see a new log stream named **kubernetes-apiservers** in the **/aws/containerinsights/*cluster_name*/prometheus** log group. This log stream should include log events with an embedded metric format definition like the following:

```
{
   "CloudWatchMetrics":[
      {
         "Metrics":[
            {
               "Name":"apiserver_request_total"
            }
         ],
         "Dimensions":[
            [
               "ClusterName",
               "Service"
            ]
         ],
         "Namespace":"ContainerInsights/Prometheus"
      }
   ],
   "ClusterName":"my-cluster-name",
   "Namespace":"default",
   "Service":"kubernetes",
   "Timestamp":"1592267020339",
   "Version":"0",
   "apiserver_request_count":0,
   "apiserver_request_total":0,
   "code":"0",
   "component":"apiserver",
   "contentType":"application/json",
   "instance":"192.0.2.0:443",
   "job":"kubernetes-apiservers",
   "prom_metric_type":"counter",
   "resource":"pods",
   "scope":"namespace",
   "verb":"WATCH",
   "version":"v1"
}
```

You can view your metrics in the CloudWatch console in the **ContainerInsights/Prometheus** namespace. You can also optionally create a CloudWatch dashboard for your Prometheus Kubernetes API Server metrics.

### (Optional) Creating a dashboard for Kubernetes API Server metrics


To see Kubernetes API Server metrics in your dashboard, you must have first completed the steps in the previous sections to start collecting these metrics in CloudWatch.

**To create a dashboard for Kubernetes API Server metrics**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. Make sure you have the correct AWS Region selected.

1. In the navigation pane, choose **Dashboards**.

1. Choose **Create Dashboard**. Enter a name for the new dashboard, and choose **Create dashboard**.

1. In **Add to this dashboard**, choose **Cancel**.

1. Choose **Actions**, **View/edit source**.

1. Download the following JSON file: [Kubernetes API Dashboard source](https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/sample_cloudwatch_dashboards/kubernetes_api_server/cw_dashboard_kubernetes_api_server.json).

1. Open the JSON file that you downloaded with a text editor, and make the following changes:
   + Replace all the `{{YOUR_CLUSTER_NAME}}` strings with the exact name of your cluster. Be sure not to add any white space before or after the text.
   + Replace all the `{{YOUR_AWS_REGION}}` strings with the name of the Region where the metrics are collected. For example, `us-west-2`. Be sure not to add any white space before or after the text.

1. Copy the entire JSON blob and paste it into the text box in the CloudWatch console, replacing what is already in the box.

1. Choose **Update**, **Save dashboard**.

# (Optional) Set up sample containerized Amazon EKS workloads for Prometheus metric testing


To test the Prometheus metric support in CloudWatch Container Insights, you can set up one or more of the following containerized workloads. The CloudWatch agent with Prometheus support automatically collects metrics from each of these workloads. To see the metrics that are collected by default, see [Prometheus metrics collected by the CloudWatch agent](ContainerInsights-Prometheus-metrics.md).

Before you can install any of these workloads, you must install Helm 3.x. For example, on macOS with Homebrew, enter the following command:

```
brew install helm
```

For more information, see [Helm](https://helm.sh).

**Topics**
+ [

# Set up AWS App Mesh sample workload for Amazon EKS and Kubernetes
](ContainerInsights-Prometheus-Sample-Workloads-appmesh.md)
+ [

# Set up NGINX with sample traffic on Amazon EKS and Kubernetes
](ContainerInsights-Prometheus-Sample-Workloads-nginx.md)
+ [

# Set up memcached with a metric exporter on Amazon EKS and Kubernetes
](ContainerInsights-Prometheus-Sample-Workloads-memcached.md)
+ [

# Set up Java/JMX sample workload on Amazon EKS and Kubernetes
](ContainerInsights-Prometheus-Sample-Workloads-javajmx.md)
+ [

# Set up HAProxy with a metric exporter on Amazon EKS and Kubernetes
](ContainerInsights-Prometheus-Sample-Workloads-haproxy.md)
+ [

# Tutorial for adding a new Prometheus scrape target: Redis OSS on Amazon EKS and Kubernetes clusters
](ContainerInsights-Prometheus-Setup-redis-eks.md)

# Set up AWS App Mesh sample workload for Amazon EKS and Kubernetes


Prometheus support in CloudWatch Container Insights includes AWS App Mesh. The following sections explain how to set up App Mesh.

**Topics**
+ [

# Set up AWS App Mesh sample workload on an Amazon EKS cluster with the EC2 launch type or a Kubernetes cluster
](ContainerInsights-Prometheus-Sample-Workloads-appmesh-EKS.md)
+ [

# Set up AWS App Mesh sample workload on an Amazon EKS cluster with the Fargate launch type
](ContainerInsights-Prometheus-Sample-Workloads-appmesh-Fargate.md)

# Set up AWS App Mesh sample workload on an Amazon EKS cluster with the EC2 launch type or a Kubernetes cluster


Use these instructions if you are setting up App Mesh on a cluster running Amazon EKS with the EC2 launch type, or a Kubernetes cluster.

## Configure IAM permissions


You must add the **AWSAppMeshFullAccess** policy to the IAM role for your Amazon EKS or Kubernetes node group. On Amazon EKS, this node group name looks similar to `eksctl-integ-test-eks-prometheus-NodeInstanceRole-ABCDEFHIJKL`. On Kubernetes, it might look similar to `nodes.integ-test-kops-prometheus.k8s.local`.

## Install App Mesh


To install the App Mesh Kubernetes controller, follow the instructions in [App Mesh Controller](https://github.com/aws/eks-charts/tree/master/stable/appmesh-controller#app-mesh-controller).

## Install a sample application


[aws-app-mesh-examples](https://github.com/aws/aws-app-mesh-examples) contains several Kubernetes App Mesh walkthroughs. For this tutorial, you install a sample color application that shows how HTTP routes can use headers to match incoming requests.

**To use a sample App Mesh application to test Container Insights**

1. Install the application using these instructions: [https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-http-headers](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-http-headers). 

1. Launch a curler pod to generate traffic:

   ```
   kubectl -n default run -it curler --image=tutum/curl /bin/bash
   ```

1. Curl different endpoints by changing HTTP headers. Run the curl command multiple times, as shown:

   ```
   curl -H "color_header: blue" front.howto-k8s-http-headers.svc.cluster.local:8080/; echo;
   
   curl -H "color_header: red" front.howto-k8s-http-headers.svc.cluster.local:8080/; echo;
   
   curl -H "color_header: yellow" front.howto-k8s-http-headers.svc.cluster.local:8080/; echo;
   ```

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the AWS Region where your cluster is running, choose **Metrics** in the navigation pane. The metrics are in the **ContainerInsights/Prometheus** namespace.

1. To see the CloudWatch Logs events, choose **Log groups** in the navigation pane. The events are in the log group `/aws/containerinsights/your_cluster_name/prometheus` in the log stream `kubernetes-pod-appmesh-envoy`.

## Deleting the App Mesh test environment


When you have finished using App Mesh and the sample application, use the following commands to delete the unnecessary resources. Delete the sample application by entering the following command:

```
cd aws-app-mesh-examples/walkthroughs/howto-k8s-http-headers/
kubectl delete -f _output/manifest.yaml
```

Delete the App Mesh controller by entering the following command:

```
helm delete appmesh-controller -n appmesh-system
```

# Set up AWS App Mesh sample workload on an Amazon EKS cluster with the Fargate launch type


Use these instructions if you are setting up App Mesh on a cluster running Amazon EKS with the Fargate launch type.

## Configure IAM permissions


To set up IAM permissions, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create iamserviceaccount --cluster MyCluster \
 --namespace howto-k8s-fargate \
 --name appmesh-pod \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshEnvoyAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapDiscoverInstanceAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess \
 --override-existing-serviceaccounts \
 --approve
```

## Install App Mesh


To install the App Mesh Kubernetes controller, follow the instructions in [App Mesh Controller](https://github.com/aws/eks-charts/tree/master/stable/appmesh-controller#app-mesh-controller). Be sure to follow the instructions for Amazon EKS with the Fargate launch type.

## Install a sample application


[aws-app-mesh-examples](https://github.com/aws/aws-app-mesh-examples) contains several Kubernetes App Mesh walkthroughs. For this tutorial, you install a sample color application that works for Amazon EKS clusters with the Fargate launch type.

**To use a sample App Mesh application to test Container Insights**

1. Install the application using these instructions: [https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-fargate](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-fargate). 

   Those instructions assume that you are creating a new cluster with the correct Fargate profile. If you want to use an Amazon EKS cluster that you've already set up, you can use the following commands to set up that cluster for this demonstration. Replace *MyCluster* with the name of your cluster.

   ```
   eksctl create iamserviceaccount --cluster MyCluster \
    --namespace howto-k8s-fargate \
    --name appmesh-pod \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshEnvoyAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapDiscoverInstanceAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess \
    --override-existing-serviceaccounts \
    --approve
   ```

   ```
   eksctl create fargateprofile --cluster MyCluster \
   --namespace howto-k8s-fargate --name howto-k8s-fargate
   ```

1. Port forward the front application deployment:

   ```
   kubectl -n howto-k8s-fargate port-forward deployment/front 8080:8080
   ```

1. Curl the front app:

   ```
   while true; do  curl -s http://localhost:8080/color; sleep 0.1; echo ; done
   ```

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the AWS Region where your cluster is running, choose **Metrics** in the navigation pane. The metrics are in the **ContainerInsights/Prometheus** namespace.

1. To see the CloudWatch Logs events, choose **Log groups** in the navigation pane. The events are in the log group `/aws/containerinsights/your_cluster_name/prometheus` in the log stream `kubernetes-pod-appmesh-envoy`.

## Deleting the App Mesh test environment


When you have finished using App Mesh and the sample application, use the following commands to delete the unnecessary resources. Delete the sample application by entering the following command:

```
cd aws-app-mesh-examples/walkthroughs/howto-k8s-fargate/
kubectl delete -f _output/manifest.yaml
```

Delete the App Mesh controller by entering the following command:

```
helm delete appmesh-controller -n appmesh-system
```

# Set up NGINX with sample traffic on Amazon EKS and Kubernetes


NGINX is a web server that can also be used as a load balancer and reverse proxy. For more information about how Kubernetes uses NGINX for ingress, see [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx).

**To install Ingress-NGINX with a sample traffic service to test Container Insights Prometheus support**

1. Enter the following command to add the Helm ingress-nginx repo:

   ```
   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
   ```

1. Enter the following commands:

   ```
   kubectl create namespace nginx-ingress-sample
   
   helm install my-nginx ingress-nginx/ingress-nginx \
   --namespace nginx-ingress-sample \
   --set controller.metrics.enabled=true \
   --set-string controller.metrics.service.annotations."prometheus\.io/port"="10254" \
   --set-string controller.metrics.service.annotations."prometheus\.io/scrape"="true"
   ```

1. Check whether the services started correctly by entering the following command:

   ```
   kubectl get service -n nginx-ingress-sample
   ```

   The output of this command should display several columns, including an `EXTERNAL-IP` column.

1. Set an `EXTERNAL_IP` shell variable to the value shown in the `EXTERNAL-IP` column for the NGINX ingress controller.

   ```
   EXTERNAL_IP=your-nginx-controller-external-ip
   ```

1. Start some sample NGINX traffic by entering the following command. 

   ```
   SAMPLE_TRAFFIC_NAMESPACE=nginx-sample-traffic
   curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/sample_traffic/nginx-traffic/nginx-traffic-sample.yaml \
   | sed "s/{{external_ip}}/$EXTERNAL_IP/g" \
   | sed "s/{{namespace}}/$SAMPLE_TRAFFIC_NAMESPACE/g" \
   | kubectl apply -f -
   ```
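   The two `sed` filters in that pipeline perform plain template substitution: they fill the `{{external_ip}}` and `{{namespace}}` placeholders before the manifest reaches `kubectl`. The substitution chain can be exercised on its own with a stand-in snippet (the address shown is an example value):

   ```
   EXTERNAL_IP=203.0.113.10                    # example controller address
   SAMPLE_TRAFFIC_NAMESPACE=nginx-sample-traffic

   # Stand-in for two templated lines of the manifest.
   printf 'namespace: {{namespace}}\nhost: {{external_ip}}\n' \
   | sed "s/{{external_ip}}/$EXTERNAL_IP/g" \
   | sed "s/{{namespace}}/$SAMPLE_TRAFFIC_NAMESPACE/g"
   # → namespace: nginx-sample-traffic
   # → host: 203.0.113.10
   ```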

1. Enter the following command to confirm that all three pods are in the `Running` status.

   ```
   kubectl get pod -n $SAMPLE_TRAFFIC_NAMESPACE
   ```

   If they are running, you should soon see metrics in the **ContainerInsights/Prometheus** namespace.

**To uninstall NGINX and the sample traffic application**

1. Delete the sample traffic service by entering the following command:

   ```
   kubectl delete namespace $SAMPLE_TRAFFIC_NAMESPACE
   ```

1. Delete the NGINX ingress controller by its Helm release name.

   ```
   helm uninstall my-nginx --namespace nginx-ingress-sample
   kubectl delete namespace nginx-ingress-sample
   ```

# Set up memcached with a metric exporter on Amazon EKS and Kubernetes


Memcached is an open-source memory object caching system. For more information, see [What is Memcached?](https://www.memcached.org).

If you are running memcached on a cluster with the Fargate launch type, you need to set up a Fargate profile before doing the steps in this procedure. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create fargateprofile --cluster MyCluster \
--namespace memcached-sample --name memcached-sample
```

**To install memcached with a metric exporter to test Container Insights Prometheus support**

1. Enter the following command to add the repo:

   ```
   helm repo add bitnami https://charts.bitnami.com/bitnami
   ```

1. Enter the following command to create a new namespace:

   ```
   kubectl create namespace memcached-sample
   ```

1. Enter the following command to install memcached:

   ```
   helm install my-memcached bitnami/memcached --namespace memcached-sample \
   --set metrics.enabled=true \
   --set-string serviceAnnotations.prometheus\\.io/port="9150" \
   --set-string serviceAnnotations.prometheus\\.io/scrape="true"
   ```

1. Enter the following command to confirm the annotation of the running service:

   ```
   kubectl describe service my-memcached-metrics -n memcached-sample
   ```

   You should see the following two annotations:

   ```
   Annotations:   prometheus.io/port: 9150
                  prometheus.io/scrape: true
   ```

**To uninstall memcached**
+ Enter the following commands:

  ```
  helm uninstall my-memcached --namespace memcached-sample
  kubectl delete namespace memcached-sample
  ```

# Set up Java/JMX sample workload on Amazon EKS and Kubernetes


JMX Exporter is an official Prometheus exporter that can scrape and expose JMX mBeans as Prometheus metrics. For more information, see [prometheus/jmx_exporter](https://github.com/prometheus/jmx_exporter).

Container Insights can collect predefined Prometheus metrics from Java Virtual Machine (JVM), Java, and Tomcat (Catalina) using the JMX Exporter.

## Default Prometheus scrape configuration


By default, the CloudWatch agent with Prometheus support scrapes the Java/JMX Prometheus metrics from `http://CLUSTER_IP:9404/metrics` on each pod in an Amazon EKS or Kubernetes cluster. This is done by `role: pod` discovery of Prometheus `kubernetes_sd_config`. Port 9404 is the default port allocated to JMX Exporter by Prometheus. For more information about `role: pod` discovery, see [pod](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#pod). You can configure the JMX Exporter to expose the metrics on a different port or metrics path. If you change the port or path, update the default `jmx` scrape configuration in the CloudWatch agent config map. Run the following command to get the current CloudWatch agent Prometheus configuration:

```
kubectl describe cm prometheus-config -n amazon-cloudwatch
```

The fields to change are the `metrics_path: /metrics` and `regex: '.*:9404$'` fields shown in the following example.

```
job_name: 'kubernetes-jmx-pod'
sample_limit: 10000
metrics_path: /metrics
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__address__]
  action: keep
  regex: '.*:9404$'
- action: replace
  regex: (.+)
  source_labels:
```
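The `keep` action in this relabel configuration drops every pod target whose `__address__` does not end in `:9404`. You can check the regex itself with `grep -E`, which uses the same extended-regex syntax; the pod addresses below are hypothetical:

```
# Only addresses matching the scrape config's keep regex survive.
printf '192.168.1.10:9404\n192.168.1.11:8080\n192.168.2.5:9404\n' \
| grep -E '.*:9404$'
# → 192.168.1.10:9404
# → 192.168.2.5:9404
```

If you change the JMX Exporter port, the regex must change with it, or every target is dropped.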

## Other Prometheus scrape configuration


If you expose your application running on a set of pods with Java/JMX Prometheus exporters by a Kubernetes service, you can instead use `role: service` discovery or `role: endpoints` discovery of Prometheus `kubernetes_sd_config`. For more information about these discovery methods, see [service](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#service), [endpoints](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#endpoints), and [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config).

These two service discovery modes provide more meta labels, which can be useful for building your CloudWatch metric dimensions. For example, you can relabel `__meta_kubernetes_service_name` to `Service` and include it in your metrics' dimensions. For more information about customizing your CloudWatch metrics and their dimensions, see [CloudWatch agent configuration for Prometheus](ContainerInsights-Prometheus-Setup-configure-ECS.md#ContainerInsights-Prometheus-Setup-cw-agent-config).

## Docker image with JMX Exporter


Next, build a Docker image. The following sections provide two example Dockerfiles.

If you are running JMX Exporter on a cluster with the Fargate launch type, you also need to set up a Fargate profile before doing the steps in this procedure. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create fargateprofile --cluster MyCluster \
--namespace $JAR_SAMPLE_TRAFFIC_NAMESPACE \
--name $JAR_SAMPLE_TRAFFIC_NAMESPACE
```

When you have built the image, load it into Amazon EKS or Kubernetes, and then run the following command to verify that the Prometheus metrics are exposed by JMX Exporter on port 9404. Replace *$JAR_SAMPLE_TRAFFIC_POD* with the name of the running pod, and replace *$JAR_SAMPLE_TRAFFIC_NAMESPACE* with your application namespace.

```
kubectl exec $JAR_SAMPLE_TRAFFIC_POD -n $JAR_SAMPLE_TRAFFIC_NAMESPACE -- curl http://localhost:9404
```

## Example: Apache Tomcat Docker image with Prometheus metrics


The Apache Tomcat server exposes JMX mBeans by default. You can integrate JMX Exporter with Tomcat to expose those JMX mBeans as Prometheus metrics. The following example Dockerfile shows the steps to build a test image:

```
# From Tomcat 9.0 JDK8 OpenJDK 
FROM tomcat:9.0-jdk8-openjdk 

RUN mkdir -p /opt/jmx_exporter

COPY ./jmx_prometheus_javaagent-0.12.0.jar /opt/jmx_exporter
COPY ./config.yaml /opt/jmx_exporter
COPY ./setenv.sh /usr/local/tomcat/bin 
COPY ./your_web_application.war /usr/local/tomcat/webapps/

RUN chmod  o+x /usr/local/tomcat/bin/setenv.sh

ENTRYPOINT ["catalina.sh", "run"]
```

The following list explains the four `COPY` lines in this Dockerfile.
+ Download the latest JMX Exporter jar file from [https://github.com/prometheus/jmx_exporter](https://github.com/prometheus/jmx_exporter).
+ `config.yaml` is the JMX Exporter configuration file. For more information, see [https://github.com/prometheus/jmx_exporter#Configuration](https://github.com/prometheus/jmx_exporter#Configuration).

  Here is a sample configuration file for Java and Tomcat:

  ```
  lowercaseOutputName: true
  lowercaseOutputLabelNames: true
  
  rules:
  - pattern: 'java.lang<type=OperatingSystem><>(FreePhysicalMemorySize|TotalPhysicalMemorySize|FreeSwapSpaceSize|TotalSwapSpaceSize|SystemCpuLoad|ProcessCpuLoad|OpenFileDescriptorCount|AvailableProcessors)'
    name: java_lang_OperatingSystem_$1
    type: GAUGE
  
  - pattern: 'java.lang<type=Threading><>(TotalStartedThreadCount|ThreadCount)'
    name: java_lang_threading_$1
    type: GAUGE
  
  - pattern: 'Catalina<type=GlobalRequestProcessor, name=\"(\w+-\w+)-(\d+)\"><>(\w+)'
    name: catalina_globalrequestprocessor_$3_total
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina global $3
    type: COUNTER
  
  - pattern: 'Catalina<j2eeType=Servlet, WebModule=//([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), name=([-a-zA-Z0-9+/$%~_-|!.]*), J2EEApplication=none, J2EEServer=none><>(requestCount|maxTime|processingTime|errorCount)'
    name: catalina_servlet_$3_total
    labels:
      module: "$1"
      servlet: "$2"
    help: Catalina servlet $3 total
    type: COUNTER
  
  - pattern: 'Catalina<type=ThreadPool, name="(\w+-\w+)-(\d+)"><>(currentThreadCount|currentThreadsBusy|keepAliveCount|pollerThreadCount|connectionCount)'
    name: catalina_threadpool_$3
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina threadpool $3
    type: GAUGE
  
  - pattern: 'Catalina<type=Manager, host=([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), context=([-a-zA-Z0-9+/$%~_-|!.]*)><>(processingTime|sessionCounter|rejectedSessions|expiredSessions)'
    name: catalina_session_$3_total
    labels:
      context: "$2"
      host: "$1"
    help: Catalina session $3 total
    type: COUNTER
  
  - pattern: ".*"
  ```
+ `setenv.sh` is a Tomcat startup script to start the JMX exporter along with Tomcat and expose Prometheus metrics on port 9404 of the localhost. It also provides the JMX Exporter with the `config.yaml` file path.

  ```
  $ cat setenv.sh 
  export JAVA_OPTS="-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent-0.12.0.jar=9404:/opt/jmx_exporter/config.yaml $JAVA_OPTS"
  ```
+ `your_web_application.war` is your web application's `.war` file to be loaded by Tomcat. Replace it with your own application.

Build a Docker image with this configuration and upload it to an image repository.

## Example: Java Jar Application Docker image with Prometheus metrics


The following example Dockerfile shows the steps to build a testing image: 

```
# Alpine Linux with OpenJDK JRE
FROM openjdk:8-jre-alpine

RUN mkdir -p /opt/jmx_exporter

COPY ./jmx_prometheus_javaagent-0.12.0.jar /opt/jmx_exporter
COPY ./SampleJavaApplication-1.0-SNAPSHOT.jar /opt/jmx_exporter
COPY ./start_exporter_example.sh /opt/jmx_exporter
COPY ./config.yaml /opt/jmx_exporter

RUN chmod -R o+x /opt/jmx_exporter
RUN apk add curl

ENTRYPOINT exec /opt/jmx_exporter/start_exporter_example.sh
```

The following list explains the four `COPY` lines in this Dockerfile.
+ Download the latest JMX Exporter jar file from [https://github.com/prometheus/jmx_exporter](https://github.com/prometheus/jmx_exporter).
+ `config.yaml` is the JMX Exporter configuration file. For more information, see [https://github.com/prometheus/jmx_exporter#Configuration](https://github.com/prometheus/jmx_exporter#Configuration).

  Here is a sample configuration file for Java and Tomcat:

  ```
  lowercaseOutputName: true
  lowercaseOutputLabelNames: true
  
  rules:
  - pattern: 'java.lang<type=OperatingSystem><>(FreePhysicalMemorySize|TotalPhysicalMemorySize|FreeSwapSpaceSize|TotalSwapSpaceSize|SystemCpuLoad|ProcessCpuLoad|OpenFileDescriptorCount|AvailableProcessors)'
    name: java_lang_OperatingSystem_$1
    type: GAUGE
  
  - pattern: 'java.lang<type=Threading><>(TotalStartedThreadCount|ThreadCount)'
    name: java_lang_threading_$1
    type: GAUGE
  
  - pattern: 'Catalina<type=GlobalRequestProcessor, name=\"(\w+-\w+)-(\d+)\"><>(\w+)'
    name: catalina_globalrequestprocessor_$3_total
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina global $3
    type: COUNTER
  
  - pattern: 'Catalina<j2eeType=Servlet, WebModule=//([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), name=([-a-zA-Z0-9+/$%~_-|!.]*), J2EEApplication=none, J2EEServer=none><>(requestCount|maxTime|processingTime|errorCount)'
    name: catalina_servlet_$3_total
    labels:
      module: "$1"
      servlet: "$2"
    help: Catalina servlet $3 total
    type: COUNTER
  
  - pattern: 'Catalina<type=ThreadPool, name="(\w+-\w+)-(\d+)"><>(currentThreadCount|currentThreadsBusy|keepAliveCount|pollerThreadCount|connectionCount)'
    name: catalina_threadpool_$3
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina threadpool $3
    type: GAUGE
  
  - pattern: 'Catalina<type=Manager, host=([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), context=([-a-zA-Z0-9+/$%~_-|!.]*)><>(processingTime|sessionCounter|rejectedSessions|expiredSessions)'
    name: catalina_session_$3_total
    labels:
      context: "$2"
      host: "$1"
    help: Catalina session $3 total
    type: COUNTER
  
  - pattern: ".*"
  ```
+ `start_exporter_example.sh` is the script to start the JAR application with the Prometheus metrics exported. It also provides the JMX Exporter with the `config.yaml` file path.

  ```
  $ cat start_exporter_example.sh 
  java -javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent-0.12.0.jar=9404:/opt/jmx_exporter/config.yaml -cp  /opt/jmx_exporter/SampleJavaApplication-1.0-SNAPSHOT.jar com.gubupt.sample.app.App
  ```
+ `SampleJavaApplication-1.0-SNAPSHOT.jar` is the sample Java application JAR file. Replace it with the Java application that you want to monitor.

Build a Docker image with this configuration and upload it to an image repository.

# Set up HAProxy with a metric exporter on Amazon EKS and Kubernetes


HAProxy is an open-source proxy application. For more information, see [HAProxy](https://www.haproxy.org).

If you are running HAProxy on a cluster with the Fargate launch type, you need to set up a Fargate profile before doing the steps in this procedure. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create fargateprofile --cluster MyCluster \
--namespace haproxy-ingress-sample --name haproxy-ingress-sample
```

**To install HAProxy with a metric exporter to test Container Insights Prometheus support**

1. Enter the following command to add the Helm incubator repo:

   ```
   helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
   ```

1. Enter the following command to create a new namespace:

   ```
   kubectl create namespace haproxy-ingress-sample
   ```

1. Enter the following commands to install HAProxy:

   ```
   helm install haproxy haproxy-ingress/haproxy-ingress \
   --namespace haproxy-ingress-sample \
   --set defaultBackend.enabled=true \
   --set controller.stats.enabled=true \
   --set controller.metrics.enabled=true \
   --set-string controller.metrics.service.annotations."prometheus\.io/port"="9101" \
   --set-string controller.metrics.service.annotations."prometheus\.io/scrape"="true"
   ```

1. Enter the following command to confirm the annotation of the service:

   ```
   kubectl describe service haproxy-haproxy-ingress-metrics -n haproxy-ingress-sample
   ```

   You should see the following annotations.

   ```
   Annotations:   prometheus.io/port: 9101
                  prometheus.io/scrape: true
   ```

**To uninstall HAProxy**
+ Enter the following commands:

  ```
  helm uninstall haproxy --namespace haproxy-ingress-sample
  kubectl delete namespace haproxy-ingress-sample
  ```

# Tutorial for adding a new Prometheus scrape target: Redis OSS on Amazon EKS and Kubernetes clusters


This tutorial provides a hands-on introduction to scraping the Prometheus metrics of a sample Redis OSS application on Amazon EKS and Kubernetes. Redis OSS is an open-source (BSD licensed), in-memory data structure store used as a database, cache, and message broker. For more information, see [Redis](https://redis.io/).

redis_exporter (MIT licensed) is used to expose the Redis OSS Prometheus metrics on the specified port (default: `0.0.0.0:9121`). For more information, see [redis_exporter](https://github.com/oliver006/redis_exporter).

The Docker images in the following two Docker Hub repositories are used in this tutorial: 
+ [redis](https://hub.docker.com/_/redis?tab=description)
+ [redis_exporter](https://hub.docker.com/r/oliver006/redis_exporter)

**To install a sample Redis OSS workload which exposes Prometheus metrics**

1. Set the namespace for the sample Redis OSS workload.

   ```
   REDIS_NAMESPACE=redis-sample
   ```

1. If you are running Redis OSS on a cluster with the Fargate launch type, you need to set up a Fargate profile. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

   ```
   eksctl create fargateprofile --cluster MyCluster \
   --namespace $REDIS_NAMESPACE --name $REDIS_NAMESPACE
   ```

1. Enter the following command to install the sample Redis OSS workload.

   ```
   curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/sample_traffic/redis/redis-traffic-sample.yaml \
   | sed "s/{{namespace}}/$REDIS_NAMESPACE/g" \
   | kubectl apply -f -
   ```

1. The installation includes a service named `my-redis-metrics`, which exposes the Redis OSS Prometheus metrics on port 9121. Enter the following command to get the details of the service:

   ```
   kubectl describe service/my-redis-metrics  -n $REDIS_NAMESPACE
   ```

   In the `Annotations` section of the results, you'll see two annotations which match the Prometheus scrape configuration of the CloudWatch agent, so that it can auto-discover the workloads:

   ```
   prometheus.io/port: 9121
   prometheus.io/scrape: true
   ```

   The related Prometheus scrape configuration can be found in the `- job_name: kubernetes-service-endpoints` section of `prometheus-eks.yaml` or `prometheus-k8s.yaml`.

**To start collecting Redis OSS Prometheus metrics in CloudWatch**

1. Download the latest version of the `prometheus-eks.yaml`, `prometheus-eks-fargate.yaml`, or `prometheus-k8s.yaml` file by entering one of the following commands. For an Amazon EKS cluster with the EC2 launch type, enter this command.

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks.yaml
   ```

   For an Amazon EKS cluster with the Fargate launch type, enter this command.

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks-fargate.yaml
   ```

   For a Kubernetes cluster running on an Amazon EC2 instance, enter this command.

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-k8s.yaml
   ```

1. Open the file with a text editor, and find the `cwagentconfig.json` section. Add the following subsection and save the changes. Be sure that the indentation follows the existing pattern.

   ```
   {
     "source_labels": ["pod_name"],
     "label_matcher": "^redis-instance$",
     "dimensions": [["Namespace","ClusterName"]],
     "metric_selectors": [
       "^redis_net_(in|out)put_bytes_total$",
       "^redis_(expired|evicted)_keys_total$",
       "^redis_keyspace_(hits|misses)_total$",
       "^redis_memory_used_bytes$",
       "^redis_connected_clients$"
     ]
   },
   {
     "source_labels": ["pod_name"],
     "label_matcher": "^redis-instance$",
     "dimensions": [["Namespace","ClusterName","cmd"]],
     "metric_selectors": [
       "^redis_commands_total$"
     ]
   },
   {
     "source_labels": ["pod_name"],
     "label_matcher": "^redis-instance$",
     "dimensions": [["Namespace","ClusterName","db"]],
     "metric_selectors": [
       "^redis_db_keys$"
     ]
   },
   ```

   The section you added puts the Redis OSS metrics onto the CloudWatch agent allow list. For a list of these metrics, see the following section.
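   Each entry in `metric_selectors` is an anchored regular expression matched against scraped metric names. You can check which names a selector admits with `grep -E`, which uses the same extended-regex syntax; the metric names below are a small sample:

   ```
   # Names matching the first selector pass the allow list.
   printf 'redis_net_input_bytes_total\nredis_net_output_bytes_total\nredis_uptime_in_seconds\n' \
   | grep -E '^redis_net_(in|out)put_bytes_total$'
   # → redis_net_input_bytes_total
   # → redis_net_output_bytes_total
   ```

   Metrics that match no selector, such as `redis_uptime_in_seconds` here, are scraped but not published to CloudWatch.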

1. If you already have the CloudWatch agent with Prometheus support deployed in this cluster, you must delete it by entering the following command.

   ```
   kubectl delete deployment cwagent-prometheus -n amazon-cloudwatch
   ```

1. Deploy the CloudWatch agent with your updated configuration by entering one of the following commands. Replace *MyCluster* and *region* to match your settings.

   For an Amazon EKS cluster with the EC2 launch type, enter this command.

   ```
   kubectl apply -f prometheus-eks.yaml
   ```

   For an Amazon EKS cluster with the Fargate launch type, enter this command.

   ```
   cat prometheus-eks-fargate.yaml \
   | sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" \
   | kubectl apply -f -
   ```

   For a Kubernetes cluster, enter this command.

   ```
   cat prometheus-k8s.yaml \
   | sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" \
   | kubectl apply -f -
   ```

## Viewing your Redis OSS Prometheus metrics


This tutorial sends the following metrics to the **ContainerInsights/Prometheus** namespace in CloudWatch. You can use the CloudWatch console to see the metrics in that namespace.


| Metric name | Dimensions | 
| --- | --- | 
|  `redis_net_input_bytes_total` |  ClusterName, `Namespace`  | 
|  `redis_net_output_bytes_total` |  ClusterName, `Namespace`  | 
|  `redis_expired_keys_total` |  ClusterName, `Namespace`  | 
|  `redis_evicted_keys_total` |  ClusterName, `Namespace`  | 
|  `redis_keyspace_hits_total` |  ClusterName, `Namespace`  | 
|  `redis_keyspace_misses_total` |  ClusterName, `Namespace`  | 
|  `redis_memory_used_bytes` |  ClusterName, `Namespace`  | 
|  `redis_connected_clients` |  ClusterName, `Namespace`  | 
|  `redis_commands_total` |  ClusterName, `Namespace`, cmd  | 
|  `redis_db_keys` |  ClusterName, `Namespace`, db  | 

**Note**  
The value of the **cmd** dimension can be: `append`, `client`, `command`, `config`, `dbsize`, `flushall`, `get`, `incr`, `info`, `latency`, or `slowlog`.  
The value of the **db** dimension can be `db0` to `db15`. 

You can also create a CloudWatch dashboard for your Redis OSS Prometheus metrics.

**To create a dashboard for Redis OSS Prometheus metrics**

1. Create environment variables, replacing the values below to match your deployment.

   ```
   DASHBOARD_NAME=your_cw_dashboard_name
   REGION_NAME=your_metric_region_such_as_us-east-1
   CLUSTER_NAME=your_k8s_cluster_name_here
   NAMESPACE=your_redis_service_namespace_here
   ```

1. Enter the following command to create the dashboard.

   ```
   curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/sample_cloudwatch_dashboards/redis/cw_dashboard_redis.json \
   | sed "s/{{YOUR_AWS_REGION}}/${REGION_NAME}/g" \
   | sed "s/{{YOUR_CLUSTER_NAME}}/${CLUSTER_NAME}/g" \
   | sed "s/{{YOUR_NAMESPACE}}/${NAMESPACE}/g" \
   ```