

# Monitoring EMR Serverless
<a name="metrics"></a>

This section covers the ways that you can monitor your Amazon EMR Serverless applications and jobs.

**Topics**
+ [Monitoring EMR Serverless applications and jobs](app-job-metrics.md)
+ [Monitor Spark metrics with Amazon Managed Service for Prometheus](monitor-with-prometheus.md)
+ [EMR Serverless usage metrics](monitoring-usage.md)

# Monitoring EMR Serverless applications and jobs
<a name="app-job-metrics"></a>

With Amazon CloudWatch metrics for EMR Serverless, you can receive 1-minute CloudWatch metrics and use CloudWatch dashboards to get a near-real-time view of the operations and performance of your EMR Serverless applications.

EMR Serverless sends metrics to CloudWatch every minute. EMR Serverless emits these metrics at the application level as well as the job, worker-type, and capacity-allocation-type levels.

To get started, use the EMR Serverless CloudWatch dashboard template provided in the [EMR Serverless GitHub repository](https://github.com/aws-samples/emr-serverless-samples/tree/main/cloudformation/emr-serverless-cloudwatch-dashboard/) and deploy it.

**Note**  
[EMR Serverless interactive workloads](interactive-workloads.md) have only application-level monitoring enabled, and have a new worker type dimension, `Spark_Kernel`. To monitor and debug your interactive workloads, access the logs and Apache Spark UI from [within your EMR Studio Workspace](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-studio-debug.html#emr-studio-debug-serverless).

## Monitoring metrics
<a name="app-job-metrics-versions"></a>

**Important**  
We are restructuring our metrics display to add `ApplicationName` and `JobName` as dimensions. For Amazon EMR releases 7.10 and later, the older metrics are no longer updated. For releases earlier than 7.10, the older metrics are still available.

**Current dimensions**

The table below describes the EMR Serverless dimensions available within the `AWS/EMRServerless` namespace.


**Dimensions for EMR Serverless metrics**  

| Dimension | Description | 
| --- | --- | 
| ApplicationId | Filters for all metrics of an EMR Serverless application using the application ID. | 
| ApplicationName | Filters for all metrics of an EMR Serverless application using the name. If the name isn't provided, or contains non-ASCII characters, it is published as **[Unspecified]**. | 
| JobId | Filters for all metrics of an EMR Serverless job run using the job run ID. | 
| JobName | Filters for all metrics of an EMR Serverless job run using the name. If the name isn't provided, or contains non-ASCII characters, it is published as **[Unspecified]**. | 
| WorkerType | Filters for all metrics of a given worker type. For example, you can filter for `SPARK_DRIVER` and `SPARK_EXECUTORS` for Spark jobs. | 
| CapacityAllocationType | Filters for all metrics of a given capacity allocation type. For example, you can filter for `PreInitCapacity` for pre-initialized capacity and `OnDemandCapacity` for everything else. | 
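
As a sketch of how these dimensions are used in practice, the following hypothetical parameters for a CloudWatch `GetMetricStatistics` call filter a metric by application ID, worker type, and capacity allocation type. The application ID is a placeholder, and the exact dimension combination must match what EMR Serverless publishes for the metric.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical GetMetricStatistics parameters: running Spark drivers for
# one application over the last hour. "00example1234" is a placeholder ID.
end = datetime.now(timezone.utc)
params = {
    "Namespace": "AWS/EMRServerless",
    "MetricName": "RunningWorkerCount",
    "Dimensions": [
        {"Name": "ApplicationId", "Value": "00example1234"},
        {"Name": "WorkerType", "Value": "SPARK_DRIVER"},
        {"Name": "CapacityAllocationType", "Value": "OnDemandCapacity"},
    ],
    "StartTime": end - timedelta(hours=1),
    "EndTime": end,
    "Period": 60,  # EMR Serverless emits metrics every minute
    "Statistics": ["Average"],
}
```

With boto3, these parameters would be passed as `cloudwatch.get_metric_statistics(**params)`.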

## Application-level monitoring
<a name="app-level-metrics"></a>

You can monitor capacity usage at the EMR Serverless application level with Amazon CloudWatch metrics. You can also set up a single display to monitor application capacity usage in a CloudWatch dashboard.


**EMR Serverless application metrics**  

| Metric | Description | Unit | Dimension | 
| --- | --- | --- | --- | 
| MaxCPUAllowed |  The maximum CPU allowed for the application.  | vCPU | ApplicationId, ApplicationName | 
| MaxMemoryAllowed |  The maximum memory in GB allowed for the application.  | Gigabytes (GB) | ApplicationId, ApplicationName | 
| MaxStorageAllowed |  The maximum storage in GB allowed for the application.  | Gigabytes (GB) | ApplicationId, ApplicationName | 
| CPUAllocated |  The total number of vCPUs allocated.  | vCPU | ApplicationId, ApplicationName, WorkerType, CapacityAllocationType | 
| IdleWorkerCount |  The total number of idle workers.  | Count | ApplicationId, ApplicationName, WorkerType, CapacityAllocationType | 
| MemoryAllocated |  The total memory in GB allocated.  | Gigabytes (GB) | ApplicationId, ApplicationName, WorkerType, CapacityAllocationType | 
| PendingCreationWorkerCount |  The total number of workers pending creation.  | Count | ApplicationId, ApplicationName, WorkerType, CapacityAllocationType | 
| RunningWorkerCount |  The total number of workers in use by the application.  | Count | ApplicationId, ApplicationName, WorkerType, CapacityAllocationType | 
| StorageAllocated |  The total disk storage in GB allocated.  | Gigabytes (GB) | ApplicationId, ApplicationName, WorkerType, CapacityAllocationType | 
| TotalWorkerCount |  The total number of workers available.  | Count | ApplicationId, ApplicationName, WorkerType, CapacityAllocationType | 
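
To put the application-level metrics together, one possible CloudWatch metric-math sketch compares allocated vCPUs against the application maximum. The application ID and the single-dimension filter are assumptions: CloudWatch only matches a metric when the full dimension set is supplied, so you may need to add `ApplicationName` (and, for `CPUAllocated`, `WorkerType` and `CapacityAllocationType`) to match the published metrics.

```python
# Hypothetical GetMetricData queries: vCPUs allocated as a percentage of
# the application's maximum. All IDs below are placeholders.
def metric(name, stat, dims):
    return {
        "Metric": {
            "Namespace": "AWS/EMRServerless",
            "MetricName": name,
            "Dimensions": dims,
        },
        "Period": 60,
        "Stat": stat,
    }

app = [{"Name": "ApplicationId", "Value": "00example1234"}]
queries = [
    {"Id": "allowed", "MetricStat": metric("MaxCPUAllowed", "Maximum", app), "ReturnData": False},
    {"Id": "allocated", "MetricStat": metric("CPUAllocated", "Sum", app), "ReturnData": False},
    {"Id": "pct", "Expression": "100 * allocated / allowed", "Label": "vCPU allocation (%)"},
]
```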

## Job-level monitoring
<a name="job-level-metrics"></a>

Amazon EMR Serverless sends the following job-level metrics to Amazon CloudWatch every minute. You can access metric values aggregated by job run state. The unit for each of these metrics is *count*.


**EMR Serverless job-level metrics**  

| Metric | Description | Dimension | 
| --- | --- | --- | 
| SubmittedJobs | The number of jobs in a Submitted state. | ApplicationId, ApplicationName | 
| PendingJobs | The number of jobs in a Pending state. | ApplicationId, ApplicationName | 
| ScheduledJobs | The number of jobs in a Scheduled state. | ApplicationId, ApplicationName | 
| RunningJobs | The number of jobs in a Running state. | ApplicationId, ApplicationName | 
| SuccessJobs | The number of jobs in a Success state. | ApplicationId, ApplicationName | 
| FailedJobs | The number of jobs in a Failed state. | ApplicationId, ApplicationName | 
| CancellingJobs | The number of jobs in a Cancelling state. | ApplicationId, ApplicationName | 
| CancelledJobs | The number of jobs in a Cancelled state. | ApplicationId, ApplicationName | 

You can monitor engine-specific metrics for running and completed EMR Serverless jobs with engine-specific application UIs. When you access the UI for a running job, the live application UI displays with real-time updates. When you access the UI for a completed job, the persistent application UI displays.

**Running jobs**

For your running EMR Serverless jobs, access a real-time interface that provides engine-specific metrics. You can use either the Apache Spark UI or the Hive Tez UI to monitor and debug your jobs. To access these UIs, use the EMR Studio console or request a secure URL endpoint with the AWS Command Line Interface.

**Completed jobs**

For your completed EMR Serverless jobs, use the Spark History Server or the Persistent Hive Tez UI to access job details, stages, tasks, and metrics for Spark or Hive job runs. To access these UIs, use the EMR Studio console, or request a secure URL endpoint with the AWS Command Line Interface.

## Job worker-level monitoring
<a name="job-worker-level-metrics"></a>

Amazon EMR Serverless sends the following job worker-level metrics, available in the `AWS/EMRServerless` namespace and the `Job Worker Metrics` metric group, to Amazon CloudWatch. EMR Serverless collects data points from individual workers during job runs at the job, worker-type, and capacity-allocation-type levels. You can use `ApplicationId` as a dimension to monitor multiple jobs that belong to the same application.

**Note**  
To view the total CPU and memory used by an EMR Serverless job in the Amazon CloudWatch console, set the **Statistic** to **Sum** and the **Period** to **1 minute**.


**EMR Serverless job worker-level metrics**  

| Metric | Description | Unit | Dimension | 
| --- | --- | --- | --- | 
| WorkerCpuAllocated | The total number of vCPU cores allocated for workers in a job run. | vCPU | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
| WorkerCpuUsed | The total number of vCPU cores utilized by workers in a job run. | vCPU | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
| WorkerMemoryAllocated | The total memory in GB allocated for workers in a job run. | Gigabytes (GB) | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
| WorkerMemoryUsed | The total memory in GB utilized by workers in a job run. | Gigabytes (GB) | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
| WorkerEphemeralStorageAllocated | The ephemeral storage in GB allocated for workers in a job run. | Gigabytes (GB) | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
| WorkerEphemeralStorageUsed | The ephemeral storage in GB used by workers in a job run. | Gigabytes (GB) | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
| WorkerStorageReadBytes | The number of bytes read from storage by workers in a job run. | Bytes | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
| WorkerStorageWriteBytes | The number of bytes written to storage from workers in a job run. | Bytes | JobId, JobName, ApplicationId, ApplicationName, WorkerType, and CapacityAllocationType | 
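
Following the Sum and 1-minute guidance in the note above, a hypothetical query for the total CPU used by one job run might look like the following. The job and application IDs are placeholders, and the dimension combination must match what EMR Serverless publishes.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical GetMetricStatistics parameters for total vCPU used by a
# job run: Sum statistic with a 60-second period. IDs are placeholders.
end = datetime.now(timezone.utc)
params = {
    "Namespace": "AWS/EMRServerless",
    "MetricName": "WorkerCpuUsed",
    "Dimensions": [
        {"Name": "ApplicationId", "Value": "00example1234"},
        {"Name": "JobId", "Value": "00examplejobrun"},
    ],
    "StartTime": end - timedelta(hours=1),
    "EndTime": end,
    "Period": 60,
    "Statistics": ["Sum"],
}
```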

The steps below describe how to access these application UIs from the console or the AWS CLI.

------
#### [ Console ]

**To access your application UI with the console**

1. Navigate to your EMR Serverless application in EMR Studio with the instructions in [Getting started from the console](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/getting-started.html#gs-console). 

1. To access engine-specific application UIs and logs for a running job: 

   1. Choose a job with a `RUNNING` status.

   1. Select the job on the **Application details** page, or navigate to the **Job details** page for your job.

   1. Under the **Display UI** dropdown menu, choose either **Spark UI** or **Hive Tez UI** to navigate to the application UI for your job type. 

   1. To access Spark engine logs, navigate to the **Executors** tab in the Spark UI, and choose the **Logs** link for the driver. To access Hive engine logs, choose the **Logs** link for the appropriate DAG in the Hive Tez UI.

1. To access engine-specific application UIs and logs for a completed job: 

   1. Choose a job with a `SUCCESS` status.

   1. Select the job on your application's **Application details** page or navigate to the job's **Job details** page.

   1. Under the **Display UI** dropdown menu, choose either **Spark History Server** or **Persistent Hive Tez UI** to navigate to the application UI for your job type. 

   1. To access Spark engine logs, navigate to the **Executors** tab in the Spark UI, and choose the **Logs** link for the driver. To access Hive engine logs, choose the **Logs** link for the appropriate DAG in the Hive Tez UI.

------
#### [ AWS CLI ]

**To access your application UI with the AWS CLI**
+ To generate a URL that you can use to access your application UI for running and completed jobs, call the `GetDashboardForJobRun` API. 

  ```
  aws emr-serverless get-dashboard-for-job-run \
  --application-id <application-id> \
  --job-run-id <job-id>
  ```

  The URL that you generate is valid for one hour.

------

# Monitor Spark metrics with Amazon Managed Service for Prometheus
<a name="monitor-with-prometheus"></a>

With Amazon EMR releases 7.1.0 and higher, you can integrate EMR Serverless with Amazon Managed Service for Prometheus to collect Apache Spark metrics for EMR Serverless jobs and applications. This integration is available when you submit a job or create an application using either the AWS console, the EMR Serverless API, or the AWS CLI.

## Prerequisites
<a name="monitoring-with-prometheus-prereqs"></a>

Before you can deliver your Spark metrics to Amazon Managed Service for Prometheus, complete the following prerequisites.
+ [Create an Amazon Managed Service for Prometheus workspace.](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-create-workspace.html) This workspace serves as an ingestion endpoint. Make a note of the URL displayed for **Endpoint - remote write URL**. You'll need to specify the URL when you create your EMR Serverless application.
+ To grant your jobs access to Amazon Managed Service for Prometheus for monitoring purposes, add the following policy to your job execution role.

  ```
  {
      "Sid": "AccessToPrometheus",
      "Effect": "Allow",
      "Action": ["aps:RemoteWrite"],
      "Resource": "arn:aws:aps:<AWS_REGION>:<AWS_ACCOUNT_ID>:workspace/<WORKSPACE_ID>"
  }
  ```
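
The statement above belongs inside a complete IAM policy document. A minimal sketch of the full policy follows; the Region, account ID, and workspace ID are placeholders.

```python
import json

# Wrap the AccessToPrometheus statement in a full IAM policy document.
# The Region, account ID, and workspace ID below are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessToPrometheus",
            "Effect": "Allow",
            "Action": ["aps:RemoteWrite"],
            "Resource": "arn:aws:aps:us-east-1:111122223333:workspace/ws-example",
        }
    ],
}
print(json.dumps(policy, indent=4))
```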

## Setup
<a name="monitoring-with-prometheus-setup"></a>

**To use the AWS console to create an application that's integrated with Amazon Managed Service for Prometheus**

1. See [Getting started with Amazon EMR Serverless](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/getting-started.html) to create an application.

1. While you're creating an application, choose **Use custom settings**, and then configure your application by entering information in the fields that you want to configure.

1. Under **Application logs and metrics**, choose **Deliver engine metrics to Amazon Managed Service for Prometheus**, and then specify your remote write URL.

1. Specify any other configuration settings you want, and then choose **Create and start application**.

**Use the AWS CLI or EMR Serverless API**

You can also use the AWS CLI or EMR Serverless API to integrate your EMR Serverless application with Amazon Managed Service for Prometheus when you're running the `create-application` or the `start-job-run` commands.

------
#### [ create-application ]

```
aws emr-serverless create-application \
--release-label emr-7.1.0 \
--type "SPARK" \
--monitoring-configuration '{ 
    "prometheusMonitoringConfiguration": {
        "remoteWriteUrl": "https://aps-workspaces.<AWS_REGION>.amazonaws.com/workspaces/<WORKSPACE_ID>/api/v1/remote_write"
    }
}'
```

------
#### [ start-job-run ]

```
aws emr-serverless start-job-run \
--application-id <APPLICATION_ID> \
--execution-role-arn <JOB_EXECUTION_ROLE> \
--job-driver '{
    "sparkSubmit": {
        "entryPoint": "local:///usr/lib/spark/examples/src/main/python/pi.py",
        "entryPointArguments": ["10000"],
        "sparkSubmitParameters": "--conf spark.dynamicAllocation.maxExecutors=10"
    }
}' \
--configuration-overrides '{
     "monitoringConfiguration": {
        "prometheusMonitoringConfiguration": {
            "remoteWriteUrl": "https://aps-workspaces.<AWS_REGION>.amazonaws.com/workspaces/<WORKSPACE_ID>/api/v1/remote_write"
        }
    }
}'
```

------

Including `prometheusMonitoringConfiguration` in your command indicates that EMR Serverless must run the Spark job with an agent that collects the Spark metrics and writes them to your `remoteWriteUrl` endpoint for Amazon Managed Service for Prometheus. You can then use the Spark metrics in Amazon Managed Service for Prometheus for visualization, alerts, and analysis.

## Advanced configuration properties
<a name="monitoring-with-prometheus-config-options"></a>

EMR Serverless uses a component within Spark named `PrometheusServlet` to collect Spark metrics and translate performance data into a format that's compatible with Amazon Managed Service for Prometheus. By default, EMR Serverless sets default values in Spark and parses driver and executor metrics when you submit a job with `PrometheusMonitoringConfiguration`. 

The following table describes all of the properties that you can configure when you submit a Spark job that sends metrics to Amazon Managed Service for Prometheus.


| Spark property | Default value | Description | 
| --- | --- | --- | 
| spark.metrics.conf.\$1.sink.prometheusServlet.class | org.apache.spark.metrics.sink.PrometheusServlet | The class that Spark uses to send metrics to Amazon Managed Service for Prometheus. To override the default behavior, specify your own custom class. | 
| spark.metrics.conf.\$1.source.jvm.class | org.apache.spark.metrics.source.JvmSource | The class Spark uses to collect and send crucial metrics from the underlying Java virtual machine. To stop collecting JVM metrics, disable this property by setting it to an empty string, such as `""`. To override the default behavior, specify your own custom class.  | 
| spark.metrics.conf.driver.sink.prometheusServlet.path | /metrics/prometheus | The distinct URL that Amazon Managed Service for Prometheus uses to collect metrics from the driver. To override the default behavior, specify your own path. To stop collecting driver metrics, disable this property by setting it to an empty string, such as `""`. | 
| spark.metrics.conf.executor.sink.prometheusServlet.path | /metrics/executor/prometheus | The distinct URL that Amazon Managed Service for Prometheus uses to collect metrics from the executor. To override the default behavior, specify your own path. To stop collecting executor metrics, disable this property by setting it to an empty string, such as `""`. | 
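
As a sketch, the overrides in the table can be passed through `sparkSubmitParameters` when you start a job run. The custom driver path below is hypothetical, and the empty executor path disables executor metric collection as described above.

```python
# Build a sparkSubmitParameters string that overrides the defaults in the
# table above. "/metrics/custom/prometheus" is a hypothetical path.
overrides = {
    "spark.metrics.conf.driver.sink.prometheusServlet.path": "/metrics/custom/prometheus",
    # An empty string disables executor metric collection
    "spark.metrics.conf.executor.sink.prometheusServlet.path": "",
}
spark_submit_parameters = " ".join(
    f"--conf {key}={value}" for key, value in overrides.items()
)
print(spark_submit_parameters)
```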

For more information about the Spark metrics, refer to [Apache Spark metrics](https://spark.apache.org/docs/3.5.0/monitoring.html#metrics).

## Considerations and limitations
<a name="monitoring-with-prometheus-limitations"></a>

When you use Amazon Managed Service for Prometheus to collect metrics from EMR Serverless, keep in mind the following considerations and limitations.
+ Support for using Amazon Managed Service for Prometheus with EMR Serverless is available only in the [AWS Regions where Amazon Managed Service for Prometheus is generally available.](https://docs.aws.amazon.com/general/latest/gr/prometheus-service.html)
+ Running the agent that collects Spark metrics for Amazon Managed Service for Prometheus requires more resources from workers. If you choose a smaller worker size, such as a one-vCPU worker, your job run time might increase.
+ Support for using Amazon Managed Service for Prometheus with EMR Serverless is available only for Amazon EMR releases 7.1.0 and higher.
+ Amazon Managed Service for Prometheus must be deployed in the same account where you run EMR Serverless to collect metrics.

# EMR Serverless usage metrics
<a name="monitoring-usage"></a>

Amazon CloudWatch usage metrics provide visibility into the resources that your account uses. Use these metrics to visualize your service usage on CloudWatch graphs and dashboards.

EMR Serverless usage metrics correspond to Service Quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information, refer to [Service Quotas and Amazon CloudWatch alarms](https://docs.aws.amazon.com/servicequotas/latest/userguide/configure-cloudwatch.html) in the *Service Quotas User Guide*.

For more information about EMR Serverless service quotas, refer to [Endpoints and quotas for EMR Serverless](endpoints-quotas.md).

## Service quota usage metrics for EMR Serverless
<a name="usage-metrics"></a>

EMR Serverless publishes the following service quota usage metrics in the `AWS/Usage` namespace.



| Metric | Description | 
| --- | --- | 
| `ResourceCount`  | The total number of the specified resource that is running on your account. The resource is defined by the [dimensions](#usage-metrics-dimensions) that are associated with the metric. | 

## Dimensions for EMR Serverless service quota usage metrics
<a name="usage-metrics-dimensions"></a>

You can use the following dimensions to refine the usage metrics that EMR Serverless publishes.



| Dimension | Value | Description | 
| --- | --- | --- | 
|  `Service`  |  EMR Serverless  | The name of the AWS service that contains the resource. | 
|  `Type`  |  Resource  | The type of entity that EMR Serverless is reporting. | 
|  `Resource`  |  vCPU  | The type of resource that EMR Serverless is tracking. | 
|  `Class`  |  None  | The class of resource that EMR Serverless is tracking. | 
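
Putting the quota usage metric and its dimensions together, a hypothetical CloudWatch alarm on vCPU usage might look like the following sketch. The quota value of 8000 vCPUs is a placeholder; substitute the applied quota for your account.

```python
# Hypothetical PutMetricAlarm parameters that fire when vCPU usage
# exceeds 80% of a placeholder quota of 8000 vCPUs.
QUOTA_VCPUS = 8000  # placeholder; use your account's applied quota
alarm = {
    "AlarmName": "emr-serverless-vcpu-usage-approaching-quota",
    "Namespace": "AWS/Usage",
    "MetricName": "ResourceCount",
    "Dimensions": [
        {"Name": "Service", "Value": "EMR Serverless"},
        {"Name": "Type", "Value": "Resource"},
        {"Name": "Resource", "Value": "vCPU"},
        {"Name": "Class", "Value": "None"},
    ],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": QUOTA_VCPUS * 0.8,
    "ComparisonOperator": "GreaterThanThreshold",
}
```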