

# (Optional) Set up sample containerized Amazon EKS workloads for Prometheus metric testing
<a name="ContainerInsights-Prometheus-Sample-Workloads"></a>

To test the Prometheus metric support in CloudWatch Container Insights, you can set up one or more of the following containerized workloads. The CloudWatch agent with Prometheus support automatically collects metrics from each of these workloads. To see the metrics that are collected by default, see [Prometheus metrics collected by the CloudWatch agent](ContainerInsights-Prometheus-metrics.md).

Before you can install any of these workloads, you must install Helm 3.x. For example, with Homebrew, enter the following command:

```
brew install helm
```

For more information, see [Helm](https://helm.sh).
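These workloads require Helm 3.x specifically; `helm version --short` prints a string such as `v3.9.0+g7ceeda6` that you can check. The following is a minimal sketch of that check as a shell function (the sample version string is hypothetical):

```shell
# Minimal sketch: decide whether a 'helm version --short' string is Helm 3.x.
is_helm3() {
  case "$1" in
    v3.*) return 0 ;;   # Helm 3 -- OK for these workloads
    *)    return 1 ;;   # Helm 2 or not installed -- install Helm 3 first
  esac
}

# In practice you would pass the live output:  is_helm3 "$(helm version --short)"
is_helm3 "v3.9.0+g7ceeda6" && echo "Helm 3 detected"
```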

**Topics**
+ [Set up AWS App Mesh sample workload for Amazon EKS and Kubernetes](ContainerInsights-Prometheus-Sample-Workloads-appmesh.md)
+ [Set up NGINX with sample traffic on Amazon EKS and Kubernetes](ContainerInsights-Prometheus-Sample-Workloads-nginx.md)
+ [Set up memcached with a metric exporter on Amazon EKS and Kubernetes](ContainerInsights-Prometheus-Sample-Workloads-memcached.md)
+ [Set up Java/JMX sample workload on Amazon EKS and Kubernetes](ContainerInsights-Prometheus-Sample-Workloads-javajmx.md)
+ [Set up HAProxy with a metric exporter on Amazon EKS and Kubernetes](ContainerInsights-Prometheus-Sample-Workloads-haproxy.md)
+ [Tutorial for adding a new Prometheus scrape target: Redis OSS on Amazon EKS and Kubernetes clusters](ContainerInsights-Prometheus-Setup-redis-eks.md)

# Set up AWS App Mesh sample workload for Amazon EKS and Kubernetes
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh"></a>

CloudWatch Container Insights Prometheus support includes AWS App Mesh. The following sections explain how to set up App Mesh.

**Topics**
+ [Set up AWS App Mesh sample workload on an Amazon EKS cluster with the EC2 launch type or a Kubernetes cluster](ContainerInsights-Prometheus-Sample-Workloads-appmesh-EKS.md)
+ [Set up AWS App Mesh sample workload on an Amazon EKS cluster with the Fargate launch type](ContainerInsights-Prometheus-Sample-Workloads-appmesh-Fargate.md)

# Set up AWS App Mesh sample workload on an Amazon EKS cluster with the EC2 launch type or a Kubernetes cluster
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-EKS"></a>

Use these instructions if you are setting up App Mesh on a cluster running Amazon EKS with the EC2 launch type, or a Kubernetes cluster.

## Configure IAM permissions
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-iam"></a>

You must add the **AWSAppMeshFullAccess** policy to the IAM role for your Amazon EKS or Kubernetes node group. On Amazon EKS, this node group name looks similar to `eksctl-integ-test-eks-prometheus-NodeInstanceRole-ABCDEFHIJKL`. On Kubernetes, it might look similar to `nodes.integ-test-kops-prometheus.k8s.local`.

## Install App Mesh
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-install"></a>

To install the App Mesh Kubernetes controller, follow the instructions in [App Mesh Controller](https://github.com/aws/eks-charts/tree/master/stable/appmesh-controller#app-mesh-controller).

## Install a sample application
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-application"></a>

[aws-app-mesh-examples](https://github.com/aws/aws-app-mesh-examples) contains several Kubernetes App Mesh walkthroughs. For this tutorial, you install a sample color application that shows how HTTP routes can use headers to match incoming requests.

**To use a sample App Mesh application to test Container Insights**

1. Install the application using these instructions: [https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-http-headers](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-http-headers). 

1. Launch a curler pod to generate traffic:

   ```
   kubectl -n default run -it curler --image=tutum/curl -- /bin/bash
   ```

1. Curl different endpoints by changing HTTP headers. Run the curl command multiple times, as shown:

   ```
   curl -H "color_header: blue" front.howto-k8s-http-headers.svc.cluster.local:8080/; echo;
   
   curl -H "color_header: red" front.howto-k8s-http-headers.svc.cluster.local:8080/; echo;
   
   curl -H "color_header: yellow" front.howto-k8s-http-headers.svc.cluster.local:8080/; echo;
   ```

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the AWS Region where your cluster is running, choose **Metrics** in the navigation pane. The metrics are in the **ContainerInsights/Prometheus** namespace.

1. To see the CloudWatch Logs events, choose **Log groups** in the navigation pane. The events are in the log group `/aws/containerinsights/your_cluster_name/prometheus` in the log stream `kubernetes-pod-appmesh-envoy`.
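To generate steadier traffic from inside the curler pod, you can rotate through the three header values in a loop instead of typing each request by hand. This sketch only prints the commands it would run; remove the leading `echo` to actually send the requests:

```shell
# Print one curl command per color header; drop 'echo' to send real traffic.
for color in blue red yellow; do
  echo curl -H "color_header: ${color}" front.howto-k8s-http-headers.svc.cluster.local:8080/
done
```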

## Deleting the App Mesh test environment
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-delete"></a>

When you have finished using App Mesh and the sample application, use the following commands to delete the unnecessary resources. Delete the sample application by entering the following command:

```
cd aws-app-mesh-examples/walkthroughs/howto-k8s-http-headers/
kubectl delete -f _output/manifest.yaml
```

Delete the App Mesh controller by entering the following command:

```
helm delete appmesh-controller -n appmesh-system
```

# Set up AWS App Mesh sample workload on an Amazon EKS cluster with the Fargate launch type
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-Fargate"></a>

Use these instructions if you are setting up App Mesh on a cluster running Amazon EKS with the Fargate launch type.

## Configure IAM permissions
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh--fargate-iam"></a>

To set up IAM permissions, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create iamserviceaccount --cluster MyCluster \
 --namespace howto-k8s-fargate \
 --name appmesh-pod \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshEnvoyAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapDiscoverInstanceAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess \
 --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess \
 --override-existing-serviceaccounts \
 --approve
```

## Install App Mesh
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-fargate-install"></a>

To install the App Mesh Kubernetes controller, follow the instructions in [App Mesh Controller](https://github.com/aws/eks-charts/tree/master/stable/appmesh-controller#app-mesh-controller). Be sure to follow the instructions for Amazon EKS with the Fargate launch type.

## Install a sample application
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-fargate-application"></a>

[aws-app-mesh-examples](https://github.com/aws/aws-app-mesh-examples) contains several Kubernetes App Mesh walkthroughs. For this tutorial, you install a sample color application that works for Amazon EKS clusters with the Fargate launch type.

**To use a sample App Mesh application to test Container Insights**

1. Install the application using these instructions: [https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-fargate](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-fargate). 

   Those instructions assume that you are creating a new cluster with the correct Fargate profile. If you want to use an Amazon EKS cluster that you've already set up, you can use the following commands to set up that cluster for this demonstration. Replace *MyCluster* with the name of your cluster.

   ```
   eksctl create iamserviceaccount --cluster MyCluster \
    --namespace howto-k8s-fargate \
    --name appmesh-pod \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshEnvoyAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapDiscoverInstanceAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess \
    --override-existing-serviceaccounts \
    --approve
   ```

   ```
   eksctl create fargateprofile --cluster MyCluster \
   --namespace howto-k8s-fargate --name howto-k8s-fargate
   ```

1. Port forward the front application deployment:

   ```
   kubectl -n howto-k8s-fargate port-forward deployment/front 8080:8080
   ```

1. Curl the front app:

   ```
   while true; do  curl -s http://localhost:8080/color; sleep 0.1; echo ; done
   ```

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the AWS Region where your cluster is running, choose **Metrics** in the navigation pane. The metrics are in the **ContainerInsights/Prometheus** namespace.

1. To see the CloudWatch Logs events, choose **Log groups** in the navigation pane. The events are in the log group `/aws/containerinsights/your_cluster_name/prometheus` in the log stream `kubernetes-pod-appmesh-envoy`.

## Deleting the App Mesh test environment
<a name="ContainerInsights-Prometheus-Sample-Workloads-appmesh-fargate-delete"></a>

When you have finished using App Mesh and the sample application, use the following commands to delete the unnecessary resources. Delete the sample application by entering the following command:

```
cd aws-app-mesh-examples/walkthroughs/howto-k8s-fargate/
kubectl delete -f _output/manifest.yaml
```

Delete the App Mesh controller by entering the following command:

```
helm delete appmesh-controller -n appmesh-system
```

# Set up NGINX with sample traffic on Amazon EKS and Kubernetes
<a name="ContainerInsights-Prometheus-Sample-Workloads-nginx"></a>

NGINX is a web server that can also be used as a load balancer and reverse proxy. For more information about how Kubernetes uses NGINX for ingress, see [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx).

**To install Ingress-NGINX with a sample traffic service to test Container Insights Prometheus support**

1. Enter the following command to add the Helm ingress-nginx repo:

   ```
   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
   ```

1. Enter the following commands:

   ```
   kubectl create namespace nginx-ingress-sample
   
   helm install my-nginx ingress-nginx/ingress-nginx \
   --namespace nginx-ingress-sample \
   --set controller.metrics.enabled=true \
   --set-string controller.metrics.service.annotations."prometheus\.io/port"="10254" \
   --set-string controller.metrics.service.annotations."prometheus\.io/scrape"="true"
   ```

1. Check whether the services started correctly by entering the following command:

   ```
   kubectl get service -n nginx-ingress-sample
   ```

   The output of this command should display several columns, including an `EXTERNAL-IP` column.

1. Set an `EXTERNAL_IP` variable to the value of the `EXTERNAL-IP` column in the row of the NGINX ingress controller.

   ```
   EXTERNAL_IP=your-nginx-controller-external-ip
   ```

1. Start some sample NGINX traffic by entering the following command. 

   ```
   SAMPLE_TRAFFIC_NAMESPACE=nginx-sample-traffic
   curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/sample_traffic/nginx-traffic/nginx-traffic-sample.yaml | 
   sed "s/{{external_ip}}/$EXTERNAL_IP/g" | 
   sed "s/{{namespace}}/$SAMPLE_TRAFFIC_NAMESPACE/g" | 
   kubectl apply -f -
   ```

1. Enter the following command to confirm that all three pods are in the `Running` status.

   ```
   kubectl get pod -n $SAMPLE_TRAFFIC_NAMESPACE
   ```

   If they are running, you should soon see metrics in the **ContainerInsights/Prometheus** namespace.
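The traffic-install pipeline above works by plain text templating: the sample manifest contains `{{external_ip}}` and `{{namespace}}` placeholders that `sed` fills in before the result reaches `kubectl apply`. The following self-contained sketch shows that substitution; the two-line manifest and the IP address here are stand-ins, not the real file:

```shell
EXTERNAL_IP=203.0.113.10                  # example address, not a real controller
SAMPLE_TRAFFIC_NAMESPACE=nginx-sample-traffic

# Same substitution the install pipeline performs on the downloaded manifest.
printf 'namespace: {{namespace}}\ntarget: http://{{external_ip}}/\n' \
| sed "s/{{external_ip}}/$EXTERNAL_IP/g" \
| sed "s/{{namespace}}/$SAMPLE_TRAFFIC_NAMESPACE/g"
```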

**To uninstall NGINX and the sample traffic application**

1. Delete the sample traffic service by entering the following command:

   ```
   kubectl delete namespace $SAMPLE_TRAFFIC_NAMESPACE
   ```

1. Delete the NGINX ingress controller by its Helm release name.

   ```
   helm uninstall my-nginx --namespace nginx-ingress-sample
   kubectl delete namespace nginx-ingress-sample
   ```

# Set up memcached with a metric exporter on Amazon EKS and Kubernetes
<a name="ContainerInsights-Prometheus-Sample-Workloads-memcached"></a>

memcached is an open-source memory object caching system. For more information, see [What is Memcached?](https://www.memcached.org).

If you are running memcached on a cluster with the Fargate launch type, you need to set up a Fargate profile before doing the steps in this procedure. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create fargateprofile --cluster MyCluster \
--namespace memcached-sample --name memcached-sample
```

**To install memcached with a metric exporter to test Container Insights Prometheus support**

1. Enter the following command to add the repo:

   ```
   helm repo add bitnami https://charts.bitnami.com/bitnami
   ```

1. Enter the following command to create a new namespace:

   ```
   kubectl create namespace memcached-sample
   ```

1. Enter the following command to install memcached:

   ```
   helm install my-memcached bitnami/memcached --namespace memcached-sample \
   --set metrics.enabled=true \
   --set-string serviceAnnotations.prometheus\\.io/port="9150" \
   --set-string serviceAnnotations.prometheus\\.io/scrape="true"
   ```

1. Enter the following command to confirm the annotation of the running service:

   ```
   kubectl describe service my-memcached-metrics -n memcached-sample
   ```

   You should see the following two annotations:

   ```
   Annotations:   prometheus.io/port: 9150
                  prometheus.io/scrape: true
   ```
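If you script this check rather than reading the `kubectl describe` output by eye, a `grep` over that output is enough. The snippet below simulates the output with a captured string so that it is self-contained; in a live cluster you would pipe `kubectl describe service my-memcached-metrics -n memcached-sample` into the same greps:

```shell
# Simulated 'kubectl describe' fragment; replace with the real command's output.
describe_output='Annotations:   prometheus.io/port: 9150
               prometheus.io/scrape: true'

echo "$describe_output" | grep -q 'prometheus.io/port: 9150'  && echo "port annotation present"
echo "$describe_output" | grep -q 'prometheus.io/scrape: true' && echo "scrape annotation present"
```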

**To uninstall memcached**
+ Enter the following commands:

  ```
  helm uninstall my-memcached --namespace memcached-sample
  kubectl delete namespace memcached-sample
  ```

# Set up Java/JMX sample workload on Amazon EKS and Kubernetes
<a name="ContainerInsights-Prometheus-Sample-Workloads-javajmx"></a>

JMX Exporter is an official Prometheus exporter that can scrape and expose JMX mBeans as Prometheus metrics. For more information, see [prometheus/jmx_exporter](https://github.com/prometheus/jmx_exporter).

Container Insights can collect predefined Prometheus metrics from Java Virtual Machine (JVM), Java, and Tomcat (Catalina) using the JMX Exporter.

## Default Prometheus scrape configuration
<a name="ContainerInsights-Prometheus-Sample-Workloads-javajmx-default"></a>

By default, the CloudWatch agent with Prometheus support scrapes the Java/JMX Prometheus metrics from `http://CLUSTER_IP:9404/metrics` on each pod in an Amazon EKS or Kubernetes cluster. This is done by `role: pod` discovery of the Prometheus `kubernetes_sd_config`. Port 9404 is the default port allocated to JMX Exporter by Prometheus. For more information about `role: pod` discovery, see [pod](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#pod). You can configure the JMX Exporter to expose the metrics on a different port or metrics path. If you change the port or path, update the default JMX scrape configuration in the CloudWatch agent config map. Run the following command to get the current CloudWatch agent Prometheus configuration:

```
kubectl describe cm prometheus-config -n amazon-cloudwatch
```

The fields to change are the `metrics_path: /metrics` and `regex: '.*:9404$'` fields, shown in the following example.

```
job_name: 'kubernetes-jmx-pod'
sample_limit: 10000
metrics_path: /metrics
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__address__]
  action: keep
  regex: '.*:9404$'
- action: replace
  regex: (.+)
  source_labels:
```
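For example, if you configure the JMX Exporter to listen on port 9405 instead of 9404, the `regex` line above must change to match, or pod discovery will drop the target. A minimal sketch of rewriting just that line (the port value is hypothetical):

```shell
NEW_PORT=9405   # hypothetical alternate JMX Exporter port

# Rewrite the keep-filter regex so pod discovery matches the new port.
echo "  regex: '.*:9404\$'" | sed "s/9404/${NEW_PORT}/"
```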

## Other Prometheus scrape configuration
<a name="ContainerInsights-Prometheus-Sample-Workloads-javajmx-other"></a>

If you expose your application running on a set of pods with Java/JMX Prometheus exporters through a Kubernetes service, you can instead use `role: service` or `role: endpoints` discovery of the Prometheus `kubernetes_sd_config`. For more information about these discovery methods, see [service](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#service), [endpoints](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#endpoints), and [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config).

These two service discovery modes provide more meta labels, which can be useful for building your CloudWatch metric dimensions. For example, you can relabel `__meta_kubernetes_service_name` to `Service` and include it in your metrics' dimensions. For more information about customizing your CloudWatch metrics and their dimensions, see [CloudWatch agent configuration for Prometheus](ContainerInsights-Prometheus-Setup-configure-ECS.md#ContainerInsights-Prometheus-Setup-cw-agent-config).

## Docker image with JMX Exporter
<a name="ContainerInsights-Prometheus-Sample-Workloads-javajmx-docker"></a>

Next, build a Docker image. The following sections provide two example Dockerfiles.

If you are running JMX Exporter on a cluster with the Fargate launch type, you also need to set up a Fargate profile before doing the steps in this procedure. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create fargateprofile --cluster MyCluster \
--namespace $JAR_SAMPLE_TRAFFIC_NAMESPACE \
--name $JAR_SAMPLE_TRAFFIC_NAMESPACE
```

When you have built the image, load it into Amazon EKS or Kubernetes, and then run the following command to verify that Prometheus metrics are exposed by the JMX Exporter on port 9404. Replace *$JAR_SAMPLE_TRAFFIC_POD* with the running pod name and replace *$JAR_SAMPLE_TRAFFIC_NAMESPACE* with your application namespace.

```
kubectl exec $JAR_SAMPLE_TRAFFIC_POD -n $JAR_SAMPLE_TRAFFIC_NAMESPACE -- curl http://localhost:9404
```

## Example: Apache Tomcat Docker image with Prometheus metrics
<a name="ContainerInsights-Prometheus-Sample-Workloads-javajmx-tomcat"></a>

Apache Tomcat server exposes JMX mBeans by default. You can integrate JMX Exporter with Tomcat to expose JMX mBeans as Prometheus metrics. The following example Dockerfile shows the steps to build a testing image: 

```
# From Tomcat 9.0 JDK8 OpenJDK 
FROM tomcat:9.0-jdk8-openjdk 

RUN mkdir -p /opt/jmx_exporter

COPY ./jmx_prometheus_javaagent-0.12.0.jar /opt/jmx_exporter
COPY ./config.yaml /opt/jmx_exporter
COPY ./setenv.sh /usr/local/tomcat/bin 
COPY ./your_web_application.war /usr/local/tomcat/webapps/

RUN chmod  o+x /usr/local/tomcat/bin/setenv.sh

ENTRYPOINT ["catalina.sh", "run"]
```

The following list explains the four `COPY` lines in this Dockerfile.
+ Download the latest JMX Exporter jar file from [https://github.com/prometheus/jmx_exporter](https://github.com/prometheus/jmx_exporter).
+ `config.yaml` is the JMX Exporter configuration file. For more information, see [https://github.com/prometheus/jmx_exporter#Configuration](https://github.com/prometheus/jmx_exporter#Configuration).

  Here is a sample configuration file for Java and Tomcat:

  ```
  lowercaseOutputName: true
  lowercaseOutputLabelNames: true
  
  rules:
  - pattern: 'java.lang<type=OperatingSystem><>(FreePhysicalMemorySize|TotalPhysicalMemorySize|FreeSwapSpaceSize|TotalSwapSpaceSize|SystemCpuLoad|ProcessCpuLoad|OpenFileDescriptorCount|AvailableProcessors)'
    name: java_lang_OperatingSystem_$1
    type: GAUGE
  
  - pattern: 'java.lang<type=Threading><>(TotalStartedThreadCount|ThreadCount)'
    name: java_lang_threading_$1
    type: GAUGE
  
  - pattern: 'Catalina<type=GlobalRequestProcessor, name=\"(\w+-\w+)-(\d+)\"><>(\w+)'
    name: catalina_globalrequestprocessor_$3_total
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina global $3
    type: COUNTER
  
  - pattern: 'Catalina<j2eeType=Servlet, WebModule=//([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), name=([-a-zA-Z0-9+/$%~_-|!.]*), J2EEApplication=none, J2EEServer=none><>(requestCount|maxTime|processingTime|errorCount)'
    name: catalina_servlet_$3_total
    labels:
      module: "$1"
      servlet: "$2"
    help: Catalina servlet $3 total
    type: COUNTER
  
  - pattern: 'Catalina<type=ThreadPool, name="(\w+-\w+)-(\d+)"><>(currentThreadCount|currentThreadsBusy|keepAliveCount|pollerThreadCount|connectionCount)'
    name: catalina_threadpool_$3
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina threadpool $3
    type: GAUGE
  
  - pattern: 'Catalina<type=Manager, host=([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), context=([-a-zA-Z0-9+/$%~_-|!.]*)><>(processingTime|sessionCounter|rejectedSessions|expiredSessions)'
    name: catalina_session_$3_total
    labels:
      context: "$2"
      host: "$1"
    help: Catalina session $3 total
    type: COUNTER
  
  - pattern: ".*"
  ```
+ `setenv.sh` is a Tomcat startup script that starts the JMX Exporter along with Tomcat and exposes Prometheus metrics on localhost port 9404. It also provides the JMX Exporter with the `config.yaml` file path.

  ```
  $ cat setenv.sh 
  export JAVA_OPTS="-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent-0.12.0.jar=9404:/opt/jmx_exporter/config.yaml $JAVA_OPTS"
  ```
+ `your_web_application.war` is the web application `.war` file for Tomcat to load.
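The `-javaagent` option that `setenv.sh` builds has the general form `-javaagent:<agent jar>=<listen port>:<config path>`. This sketch assembles the same string from its parts, which can be handy if you template the port or paths in your own build:

```shell
AGENT_JAR=/opt/jmx_exporter/jmx_prometheus_javaagent-0.12.0.jar
PORT=9404                                 # JMX Exporter listen port
CONFIG=/opt/jmx_exporter/config.yaml      # rules file shown above

echo "-javaagent:${AGENT_JAR}=${PORT}:${CONFIG}"
```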

Build a Docker image with this configuration and upload it to an image repository.

## Example: Java Jar Application Docker image with Prometheus metrics
<a name="ContainerInsights-Prometheus-Sample-Workloads-javajmx-jar"></a>

The following example Dockerfile shows the steps to build a testing image: 

```
# Alpine Linux with OpenJDK JRE
FROM openjdk:8-jre-alpine

RUN mkdir -p /opt/jmx_exporter

COPY ./jmx_prometheus_javaagent-0.12.0.jar /opt/jmx_exporter
COPY ./SampleJavaApplication-1.0-SNAPSHOT.jar /opt/jmx_exporter
COPY ./start_exporter_example.sh /opt/jmx_exporter
COPY ./config.yaml /opt/jmx_exporter

RUN chmod -R o+x /opt/jmx_exporter
RUN apk add curl

ENTRYPOINT exec /opt/jmx_exporter/start_exporter_example.sh
```

The following list explains the four `COPY` lines in this Dockerfile.
+ Download the latest JMX Exporter jar file from [https://github.com/prometheus/jmx_exporter](https://github.com/prometheus/jmx_exporter).
+ `config.yaml` is the JMX Exporter configuration file. For more information, see [https://github.com/prometheus/jmx_exporter#Configuration](https://github.com/prometheus/jmx_exporter#Configuration).

  Here is a sample configuration file for Java and Tomcat:

  ```
  lowercaseOutputName: true
  lowercaseOutputLabelNames: true
  
  rules:
  - pattern: 'java.lang<type=OperatingSystem><>(FreePhysicalMemorySize|TotalPhysicalMemorySize|FreeSwapSpaceSize|TotalSwapSpaceSize|SystemCpuLoad|ProcessCpuLoad|OpenFileDescriptorCount|AvailableProcessors)'
    name: java_lang_OperatingSystem_$1
    type: GAUGE
  
  - pattern: 'java.lang<type=Threading><>(TotalStartedThreadCount|ThreadCount)'
    name: java_lang_threading_$1
    type: GAUGE
  
  - pattern: 'Catalina<type=GlobalRequestProcessor, name=\"(\w+-\w+)-(\d+)\"><>(\w+)'
    name: catalina_globalrequestprocessor_$3_total
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina global $3
    type: COUNTER
  
  - pattern: 'Catalina<j2eeType=Servlet, WebModule=//([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), name=([-a-zA-Z0-9+/$%~_-|!.]*), J2EEApplication=none, J2EEServer=none><>(requestCount|maxTime|processingTime|errorCount)'
    name: catalina_servlet_$3_total
    labels:
      module: "$1"
      servlet: "$2"
    help: Catalina servlet $3 total
    type: COUNTER
  
  - pattern: 'Catalina<type=ThreadPool, name="(\w+-\w+)-(\d+)"><>(currentThreadCount|currentThreadsBusy|keepAliveCount|pollerThreadCount|connectionCount)'
    name: catalina_threadpool_$3
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina threadpool $3
    type: GAUGE
  
  - pattern: 'Catalina<type=Manager, host=([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), context=([-a-zA-Z0-9+/$%~_-|!.]*)><>(processingTime|sessionCounter|rejectedSessions|expiredSessions)'
    name: catalina_session_$3_total
    labels:
      context: "$2"
      host: "$1"
    help: Catalina session $3 total
    type: COUNTER
  
  - pattern: ".*"
  ```
+ `start_exporter_example.sh` is the script to start the JAR application with the Prometheus metrics exported. It also provides the JMX Exporter with the `config.yaml` file path.

  ```
  $ cat start_exporter_example.sh 
  java -javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent-0.12.0.jar=9404:/opt/jmx_exporter/config.yaml -cp  /opt/jmx_exporter/SampleJavaApplication-1.0-SNAPSHOT.jar com.gubupt.sample.app.App
  ```
+ `SampleJavaApplication-1.0-SNAPSHOT.jar` is the sample Java application JAR file. Replace it with the Java application that you want to monitor.

Build a Docker image with this configuration and upload it to an image repository.

# Set up HAProxy with a metric exporter on Amazon EKS and Kubernetes
<a name="ContainerInsights-Prometheus-Sample-Workloads-haproxy"></a>

HAProxy is an open-source proxy application. For more information, see [HAProxy](https://www.haproxy.org).

If you are running HAProxy on a cluster with the Fargate launch type, you need to set up a Fargate profile before doing the steps in this procedure. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

```
eksctl create fargateprofile --cluster MyCluster \
--namespace haproxy-ingress-sample --name haproxy-ingress-sample
```

**To install HAProxy with a metric exporter to test Container Insights Prometheus support**

1. Enter the following command to add the haproxy-ingress Helm repo:

   ```
   helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
   ```

1. Enter the following command to create a new namespace:

   ```
   kubectl create namespace haproxy-ingress-sample
   ```

1. Enter the following commands to install HAProxy:

   ```
   helm install haproxy haproxy-ingress/haproxy-ingress \
   --namespace haproxy-ingress-sample \
   --set defaultBackend.enabled=true \
   --set controller.stats.enabled=true \
   --set controller.metrics.enabled=true \
   --set-string controller.metrics.service.annotations."prometheus\.io/port"="9101" \
   --set-string controller.metrics.service.annotations."prometheus\.io/scrape"="true"
   ```

1. Enter the following command to confirm the annotation of the service:

   ```
   kubectl describe service haproxy-haproxy-ingress-metrics -n haproxy-ingress-sample
   ```

   You should see the following annotations.

   ```
   Annotations:   prometheus.io/port: 9101
                  prometheus.io/scrape: true
   ```

**To uninstall HAProxy**
+ Enter the following commands:

  ```
  helm uninstall haproxy --namespace haproxy-ingress-sample
  kubectl delete namespace haproxy-ingress-sample
  ```

# Tutorial for adding a new Prometheus scrape target: Redis OSS on Amazon EKS and Kubernetes clusters
<a name="ContainerInsights-Prometheus-Setup-redis-eks"></a>

This tutorial provides a hands-on introduction to scraping the Prometheus metrics of a sample Redis OSS application on Amazon EKS and Kubernetes. Redis OSS is an open source (BSD licensed), in-memory data structure store used as a database, cache, and message broker. For more information, see [redis](https://redis.io/).

redis_exporter (MIT licensed) is used to expose the Redis OSS Prometheus metrics on the specified port (default: `0.0.0.0:9121`). For more information, see [redis_exporter](https://github.com/oliver006/redis_exporter).

The Docker images in the following two Docker Hub repositories are used in this tutorial: 
+ [redis](https://hub.docker.com/_/redis?tab=description)
+ [redis_exporter](https://hub.docker.com/r/oliver006/redis_exporter)

**To install a sample Redis OSS workload which exposes Prometheus metrics**

1. Set the namespace for the sample Redis OSS workload.

   ```
   REDIS_NAMESPACE=redis-sample
   ```

1. If you are running Redis OSS on a cluster with the Fargate launch type, you need to set up a Fargate profile. To set up the profile, enter the following command. Replace *MyCluster* with the name of your cluster.

   ```
   eksctl create fargateprofile --cluster MyCluster \
   --namespace $REDIS_NAMESPACE --name $REDIS_NAMESPACE
   ```

1. Enter the following command to install the sample Redis OSS workload.

   ```
   curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/sample_traffic/redis/redis-traffic-sample.yaml \
   | sed "s/{{namespace}}/$REDIS_NAMESPACE/g" \
   | kubectl apply -f -
   ```

1. The installation includes a service named `my-redis-metrics`, which exposes the Redis OSS Prometheus metrics on port 9121. Enter the following command to get the details of the service:

   ```
   kubectl describe service/my-redis-metrics  -n $REDIS_NAMESPACE
   ```

   In the `Annotations` section of the results, you see two annotations that match the Prometheus scrape configuration of the CloudWatch agent, so that it can auto-discover the workloads:

   ```
   prometheus.io/port: 9121
   prometheus.io/scrape: true
   ```

   The related Prometheus scrape configuration can be found in the `- job_name: kubernetes-service-endpoints` section of `prometheus-eks.yaml` or `prometheus-k8s.yaml`.

**To start collecting Redis OSS Prometheus metrics in CloudWatch**

1. Download the latest version of the `prometheus-eks.yaml`, `prometheus-eks-fargate.yaml`, or `prometheus-k8s.yaml` file by entering one of the following commands. For an Amazon EKS cluster with the EC2 launch type, enter this command.

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks.yaml
   ```

   For an Amazon EKS cluster with the Fargate launch type, enter this command.

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-eks-fargate.yaml
   ```

   For a Kubernetes cluster running on an Amazon EC2 instance, enter this command.

   ```
   curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/prometheus-k8s.yaml
   ```

1. Open the file with a text editor, and find the `cwagentconfig.json` section. Add the following subsection and save the changes. Be sure that the indentation follows the existing pattern.

   ```
   {
     "source_labels": ["pod_name"],
     "label_matcher": "^redis-instance$",
     "dimensions": [["Namespace","ClusterName"]],
     "metric_selectors": [
       "^redis_net_(in|out)put_bytes_total$",
       "^redis_(expired|evicted)_keys_total$",
       "^redis_keyspace_(hits|misses)_total$",
       "^redis_memory_used_bytes$",
       "^redis_connected_clients$"
     ]
   },
   {
     "source_labels": ["pod_name"],
     "label_matcher": "^redis-instance$",
     "dimensions": [["Namespace","ClusterName","cmd"]],
     "metric_selectors": [
       "^redis_commands_total$"
     ]
   },
   {
     "source_labels": ["pod_name"],
     "label_matcher": "^redis-instance$",
     "dimensions": [["Namespace","ClusterName","db"]],
     "metric_selectors": [
       "^redis_db_keys$"
     ]
   },
   ```

   The section you added puts the Redis OSS metrics onto the CloudWatch agent allow list. For a list of these metrics, see the following section.

1. If you already have the CloudWatch agent with Prometheus support deployed in this cluster, you must delete it by entering the following command.

   ```
   kubectl delete deployment cwagent-prometheus -n amazon-cloudwatch
   ```

1. Deploy the CloudWatch agent with your updated configuration by entering one of the following commands. Replace *MyCluster* and *region* to match your settings.

   For an Amazon EKS cluster with the EC2 launch type, enter this command.

   ```
   kubectl apply -f prometheus-eks.yaml
   ```

   For an Amazon EKS cluster with the Fargate launch type, enter this command.

   ```
   cat prometheus-eks-fargate.yaml \
   | sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" \
   | kubectl apply -f -
   ```

   For a Kubernetes cluster, enter this command.

   ```
   cat prometheus-k8s.yaml \
   | sed "s/{{cluster_name}}/MyCluster/;s/{{region_name}}/region/" \
   | kubectl apply -f -
   ```
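The `metric_selectors` entries added to `cwagentconfig.json` above are anchored regular expressions; a metric reaches CloudWatch only if its name matches one of them. You can sanity-check a selector against candidate metric names with `grep -E`. The metric list below is illustrative; `redis_uptime_in_seconds` is a hypothetical example of a metric that the allow list would drop:

```shell
# Which of these exporter metrics survive the allow list?
printf '%s\n' \
  redis_net_input_bytes_total \
  redis_uptime_in_seconds \
  redis_db_keys \
| grep -E '^redis_net_(in|out)put_bytes_total$|^redis_db_keys$'
```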

## Viewing your Redis OSS Prometheus metrics
<a name="ContainerInsights-Prometheus-Setup-redis-eks-view"></a>

This tutorial sends the following metrics to the **ContainerInsights/Prometheus** namespace in CloudWatch. You can use the CloudWatch console to see the metrics in that namespace.


| Metric name | Dimensions |
| --- | --- |
| `redis_net_input_bytes_total` | `ClusterName`, `Namespace` |
| `redis_net_output_bytes_total` | `ClusterName`, `Namespace` |
| `redis_expired_keys_total` | `ClusterName`, `Namespace` |
| `redis_evicted_keys_total` | `ClusterName`, `Namespace` |
| `redis_keyspace_hits_total` | `ClusterName`, `Namespace` |
| `redis_keyspace_misses_total` | `ClusterName`, `Namespace` |
| `redis_memory_used_bytes` | `ClusterName`, `Namespace` |
| `redis_connected_clients` | `ClusterName`, `Namespace` |
| `redis_commands_total` | `ClusterName`, `Namespace`, `cmd` |
| `redis_db_keys` | `ClusterName`, `Namespace`, `db` |

**Note**  
The value of the **cmd** dimension can be: `append`, `client`, `command`, `config`, `dbsize`, `flushall`, `get`, `incr`, `info`, `latency`, or `slowlog`.  
The value of the **db** dimension can be `db0` to `db15`. 

You can also create a CloudWatch dashboard for your Redis OSS Prometheus metrics.

**To create a dashboard for Redis OSS Prometheus metrics**

1. Create environment variables, replacing the values below to match your deployment.

   ```
   DASHBOARD_NAME=your_cw_dashboard_name
   REGION_NAME=your_metric_region_such_as_us-east-1
   CLUSTER_NAME=your_k8s_cluster_name_here
   NAMESPACE=your_redis_service_namespace_here
   ```

1. Enter the following command to create the dashboard.

   ```
   curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/service/cwagent-prometheus/sample_cloudwatch_dashboards/redis/cw_dashboard_redis.json \
   | sed "s/{{YOUR_AWS_REGION}}/${REGION_NAME}/g" \
   | sed "s/{{YOUR_CLUSTER_NAME}}/${CLUSTER_NAME}/g" \
   | sed "s/{{YOUR_NAMESPACE}}/${NAMESPACE}/g" \
   | xargs -0 aws cloudwatch put-dashboard --dashboard-name ${DASHBOARD_NAME} --dashboard-body
   ```