

# Monitoring AWS Elemental Inference with Amazon CloudWatch
<a name="monitoring-cloudwatch"></a>

You can monitor AWS Elemental Inference using CloudWatch. CloudWatch collects raw data and processes it into readable, near real-time metrics. These statistics are kept for 15 months, so that you can access historical information and gain a better perspective on how your web application or service is performing. You can also set alarms that watch for certain thresholds, and send notifications or take actions when those thresholds are met. 

For more information about using CloudWatch with AWS Elemental Inference data, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/).

For information about the metrics that CloudWatch produces from Elemental Inference data, see the sections that follow.

**Topics**
+ [Components of a metric](inference-metrics-gen-info.md)
+ [Pricing to view Elemental Inference metrics](inference-metrics-pricing.md)
+ [Service availability and API metrics](inference-metrics-global.md)
+ [Elemental Inference performance metrics](inference-metrics-performance.md)

# Components of a metric
<a name="inference-metrics-gen-info"></a>

AWS Elemental Inference collects data that is the basis for metrics. It collects these *datapoints* every second and sends them immediately to Amazon CloudWatch. You can use CloudWatch to generate *metrics* for these datapoints.

A metric is a collection of datapoints that has had an aggregation (a *statistic*) applied and that has a *period* and a *time range*. For example, you can request the Dropped frames metric as a sum (the statistic) for a 1-minute period over 10 minutes (the time range). The result of this request is 10 datapoints (because the time range divided by the period is 10). 
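The arithmetic in the example above can be sketched locally. This is a minimal illustration of how a statistic and a period turn raw per-second datapoints into metric values; the dropped-frames numbers are made up for illustration.

```python
def aggregate(datapoints, period_seconds, statistic=sum):
    """Group per-second datapoints into periods and apply a statistic."""
    results = []
    for start in range(0, len(datapoints), period_seconds):
        window = datapoints[start:start + period_seconds]
        results.append(statistic(window))
    return results

# 10 minutes of per-second "dropped frames" samples (600 datapoints):
# one dropped frame every 2 minutes, zero otherwise.
raw = [1 if s % 120 == 0 else 0 for s in range(600)]

# Sum statistic over a 1-minute period across a 10-minute range.
per_minute = aggregate(raw, period_seconds=60)
print(len(per_minute))  # 10: the time range divided by the period
```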

Elemental Inference supports all the statistics offered by CloudWatch. However, some statistics aren't useful for Elemental Inference metrics. In the description of metrics later in this chapter, we include the recommended statistics for each metric.

Each Elemental Inference metric includes one or two specific sets of dimensions.
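As a sketch of how a dimension set shapes a metric request, the following builds the parameters for a CloudWatch `GetMetricStatistics` call with boto3. The namespace `AWS/ElementalInference`, the feed name, and the StatusCode value are assumptions for illustration, not confirmed values; check the actual namespace in the CloudWatch console under **Metrics**.

```python
from datetime import datetime, timedelta, timezone

end = datetime.now(timezone.utc)
request = {
    "Namespace": "AWS/ElementalInference",  # assumed namespace
    "MetricName": "ApiRequestCount",
    "Dimensions": [
        {"Name": "Feed", "Value": "my-feed"},    # hypothetical feed name
        {"Name": "StatusCode", "Value": "4xx"},  # dimension set: Feed, StatusCode
    ],
    "StartTime": end - timedelta(minutes=10),  # the time range
    "EndTime": end,
    "Period": 60,              # 1-minute period
    "Statistics": ["Sum"],     # recommended statistic for counts
}

# To run the query (requires AWS credentials and boto3):
# import boto3
# response = boto3.client("cloudwatch").get_metric_statistics(**request)
```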

# Pricing to view Elemental Inference metrics
<a name="inference-metrics-pricing"></a>

For information about charges to view metrics on the CloudWatch console or to retrieve metrics using a CloudWatch API, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/). 

# Service availability and API metrics
<a name="inference-metrics-global"></a>

## API Latency
<a name="inference-metrics-api-latency"></a>

The duration of API calls from request initiation to response completion. 

**Details**
+ Name: ApiLatency
+ Supported dimension sets: Feed
+ Recommended statistics: p50, p95, p99, Average
+ Units: Milliseconds
+ Meaning of zero: Elemental Inference throttled requests because the request volume was too high. Typically, throttled API requests fail with error code 429.
+ Meaning of no datapoints: There were no API calls on the specified feed.

## API Request Count
<a name="inference-metrics-api-count"></a>

The number of API calls made. 

**Details**
+ Name: ApiRequestCount
+ Supported dimension sets:
  + Feed
  + Feed, StatusCode: to monitor API requests that result in a specific category of HTTP response codes.
+ Recommended statistic: Sum
+ Units: Count
+ Meaning of zero: No API requests were made to the specified feed during the time period.
+ Meaning of no datapoints: There were no API calls on the specified feed.
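Building on the Feed, StatusCode dimension set above, one way to act on error spikes is a CloudWatch alarm on ApiRequestCount. The following is a sketch of `PutMetricAlarm` parameters; the namespace, feed name, StatusCode category value, and threshold are illustrative assumptions, not confirmed values.

```python
alarm = {
    "AlarmName": "elemental-inference-api-errors",  # hypothetical name
    "Namespace": "AWS/ElementalInference",          # assumed namespace
    "MetricName": "ApiRequestCount",
    "Dimensions": [
        {"Name": "Feed", "Value": "my-feed"},    # hypothetical feed name
        {"Name": "StatusCode", "Value": "4xx"},  # assumed category value
    ],
    "Statistic": "Sum",          # recommended statistic for this metric
    "Period": 60,                # evaluate 1-minute sums
    "EvaluationPeriods": 5,      # over 5 consecutive minutes
    "Threshold": 10.0,           # illustrative threshold
    "ComparisonOperator": "GreaterThanThreshold",
    # No datapoints means no API calls were made, so treat gaps as OK.
    "TreatMissingData": "notBreaching",
}

# To create the alarm (requires AWS credentials and boto3):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```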

# Elemental Inference performance metrics
<a name="inference-metrics-performance"></a>

## GetMetadata request count
<a name="inference-metrics-getmetadata-count"></a>

The number of GetMetadata API requests made to retrieve metadata. 

**Details**
+ Name: GetMetadataRequestCount
+ Supported dimension sets:
  + Feed
  + Feed, StatusCode: to monitor API requests that result in a specific category of HTTP response codes.
+ Recommended statistic: Sum
+ Units: Count
+ Meaning of zero: This metric will never emit zero.
+ Meaning of no datapoints: There were no GetMetadata API calls on the specified feed.

## Processing fault count
<a name="inference-metrics-processing-fault"></a>

The total number of processing failures.

**Details**
+ Name: ProcessingFaultCount
+ Supported dimension sets: Feed
+ Recommended statistic: Sum
+ Units: Count
+ Meaning of zero: All video segments submitted to the feed were processed successfully without any failures.
+ Meaning of no datapoints: No video segments were submitted. (PutMedia hasn’t been called.) 

## Processing latency
<a name="inference-metrics-processing-latency"></a>

The end-to-end time from when a video segment is submitted using PutMedia until the metadata is available to be retrieved using GetMetadata.

**Details**
+ Name: ProcessingLatency
+ Supported dimension sets:
  + Feed
  + Feed, Feature
+ Recommended statistics: p50, p95, p99, Average
+ Units: Milliseconds
+ Meaning of zero: There is probably an issue with the source media. For example, if the timestamp is wrong, it's possible that the latency will seem to be 0.
+ Meaning of no datapoints: No video segments were submitted. (PutMedia hasn't been called.)
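To see why percentile statistics are recommended over a plain average for latency, the following sketch computes nearest-rank percentiles over made-up latency samples. The numbers are illustrative only; a few slow segments can dominate the average while most requests stay fast.

```python
def percentile(values, p):
    """Nearest-rank percentile over sorted values (simple sketch)."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# 95 fast segments and 5 slow outliers, in milliseconds (made-up data).
latencies = [120] * 95 + [2500] * 5

average = sum(latencies) / len(latencies)
print(percentile(latencies, 50))  # p50 stays at 120 ms
print(percentile(latencies, 99))  # p99 exposes the 2500 ms outliers
print(round(average))             # the average is pulled up by the outliers
```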

## PutMedia request count
<a name="inference-metrics-putmedia-count"></a>

The number of PutMedia API requests made to submit video segments for processing.

**Details**
+ Name: PutMediaRequestCount
+ Supported dimension sets:
  + Feed
  + Feed, StatusCode: to monitor API requests that result in a specific category of HTTP response codes.
+ Recommended statistic: Sum
+ Units: Count
+ Meaning of zero: This metric will never emit zero.
+ Meaning of no datapoints: There were no PutMedia API calls on the specified feed.