

# Model Monitor schedules and alerts


Using the Python SDK, you can create a model monitor for data quality, model quality, bias drift, or feature attribution drift. For more information about using SageMaker Model Monitor, see [Data and model quality monitoring with Amazon SageMaker Model Monitor](model-monitor.md). The Model Dashboard populates information from all the monitors you create on all of the models in your account. You can track the status of each monitor, which indicates whether the monitor is running as expected or has failed due to an internal error. You can also activate or deactivate any monitor from the model details page. For instructions about how to view scheduled monitors for a model, see [View scheduled monitors](model-dashboard-schedule-view.md). For instructions about how to activate or deactivate model monitors, see [Activate or deactivate a model monitor](model-dashboard-schedule-activate.md).
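The four monitor types correspond to the `MonitoringType` values accepted by the SageMaker `CreateMonitoringSchedule` API, which the Python SDK calls on your behalf. The following sketch assembles a request body of that shape; the schedule and job definition names are placeholders, not values from this guide:

```python
# Sketch of a CreateMonitoringSchedule request body (boto3-style dict).
# Schedule and job definition names below are placeholders.
MONITORING_TYPES = ("DataQuality", "ModelQuality", "ModelBias", "ModelExplainability")

def build_schedule_request(schedule_name, job_definition_name, monitoring_type,
                           cron="cron(0 * ? * * *)"):
    """Assemble the request dict a sagemaker client would pass to
    create_monitoring_schedule(). Defaults to an hourly cron expression."""
    if monitoring_type not in MONITORING_TYPES:
        raise ValueError(f"unknown monitoring type: {monitoring_type}")
    return {
        "MonitoringScheduleName": schedule_name,
        "MonitoringScheduleConfig": {
            "ScheduleConfig": {"ScheduleExpression": cron},
            "MonitoringJobDefinitionName": job_definition_name,
            "MonitoringType": monitoring_type,
        },
    }

req = build_schedule_request("my-data-quality-schedule",
                             "my-data-quality-job-definition",
                             "DataQuality")
```

A live call would then pass this dict to `boto3.client("sagemaker").create_monitoring_schedule(**req)` after creating the referenced job definition.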

A properly configured and actively running model monitor might raise alerts, in which case the monitoring executions produce violation reports. For details about how alerts work and how to view alert results, alert history, and links to job reports for debugging, see [View and edit alerts](model-dashboard-alerts.md).

# View scheduled monitors


Use SageMaker Model Monitor to continuously monitor your machine learning models for data drift, model quality, bias, and other issues that might impact model performance. After you've set up monitoring schedules, you can view the details of these scheduled monitors through the SageMaker AI console. The following procedure outlines the steps to access and review the scheduled monitors for a particular model, including their current status:

**To view a model’s scheduled monitors**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the name of the model whose scheduled monitors you want to view.

1. View the scheduled monitors in the **Monitor schedule** section. You can review the status of each monitor in the **Status schedule** column, which shows one of the following values:
   + **Failed**: The monitoring schedule failed due to a problem with the configuration or settings (such as incorrect user permissions).
   + **Pending**: The monitor is in the process of being scheduled.
   + **Stopped**: The schedule was stopped by the user.
   + **Scheduled**: The schedule is created and runs at the frequency you specified.
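The same statuses appear as `MonitoringScheduleStatus` in the response of the SageMaker `ListMonitoringSchedules` API, so a quick check for broken monitors can also be scripted. A sketch, using a hypothetical response in place of a live API call:

```python
def schedules_with_status(response, status):
    """Return the names of schedules in a ListMonitoringSchedules-shaped
    response whose MonitoringScheduleStatus matches `status`."""
    return [s["MonitoringScheduleName"]
            for s in response.get("MonitoringScheduleSummaries", [])
            if s["MonitoringScheduleStatus"] == status]

# Hypothetical response for illustration; a live call would use
# boto3.client("sagemaker").list_monitoring_schedules().
response = {
    "MonitoringScheduleSummaries": [
        {"MonitoringScheduleName": "data-quality-hourly",
         "MonitoringScheduleStatus": "Scheduled"},
        {"MonitoringScheduleName": "bias-drift-daily",
         "MonitoringScheduleStatus": "Failed"},
        {"MonitoringScheduleName": "model-quality-hourly",
         "MonitoringScheduleStatus": "Pending"},
    ]
}
failed = schedules_with_status(response, "Failed")
```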

# Activate or deactivate a model monitor


Use the following procedure to activate or deactivate a model monitor.

**To activate or deactivate a model monitor, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the name of the model whose monitor you want to modify.

1. Select the radio button next to the monitor schedule you want to activate or deactivate.

1. (optional) To deactivate the monitor schedule, choose **Deactivate monitor schedule**.

1. (optional) To activate the monitor schedule, choose **Activate monitor schedule**.
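The console toggle corresponds to the `StartMonitoringSchedule` and `StopMonitoringSchedule` API actions. The following sketch takes the client as a parameter so the example can use a stand-in recorder; in practice you would pass `boto3.client("sagemaker")` and a real schedule name:

```python
def set_monitor_active(client, schedule_name, active):
    """Activate or deactivate a monitoring schedule via the SageMaker API."""
    if active:
        client.start_monitoring_schedule(MonitoringScheduleName=schedule_name)
    else:
        client.stop_monitoring_schedule(MonitoringScheduleName=schedule_name)

# Stand-in client that records calls, for illustration only.
class RecordingClient:
    def __init__(self):
        self.calls = []
    def start_monitoring_schedule(self, **kwargs):
        self.calls.append(("start", kwargs))
    def stop_monitoring_schedule(self, **kwargs):
        self.calls.append(("stop", kwargs))

client = RecordingClient()
set_monitor_active(client, "data-quality-hourly", active=False)
```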

# View and edit alerts


The Model Dashboard displays the alerts you configured in Amazon CloudWatch, and you can modify the alert criteria directly within the dashboard. The alert criteria depend on two parameters:
+ **Datapoints to alert**: The number of failed executions within the evaluation period that raises an alert.
+ **Evaluation period**: The number of most recent monitoring executions to consider when evaluating alert status.

The following image shows an example scenario of a series of Model Monitor executions with a hypothetical **Evaluation period** of 3 and a **Datapoints to alert** value of 2. After every monitoring execution, the number of failures is counted within the **Evaluation period** of 3. If the number of failures meets or exceeds the **Datapoints to alert** value of 2, the monitor raises an alert and remains in alert status until the number of failures within the **Evaluation period** drops below 2 in subsequent executions. In the image, the evaluation windows are red when the monitor raises an alert or remains in alert status, and green otherwise.

Note that even if the evaluation window has not yet grown to the full **Evaluation period** of 3, as shown in the first two rows of the image, the monitor still raises an alert if the number of failures meets or exceeds the **Datapoints to alert** value of 2.
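This trailing-window logic can be expressed as a short sketch in plain Python (`True` marks a failed execution; the function is an illustration, not SageMaker AI code):

```python
def alert_states(executions, evaluation_period, datapoints_to_alert):
    """For each execution (True = failure), report whether the monitor is in
    alert status, counting failures over the trailing evaluation window.
    Early on, the window may hold fewer executions than evaluation_period."""
    states = []
    for i in range(len(executions)):
        window = executions[max(0, i + 1 - evaluation_period): i + 1]
        states.append(sum(window) >= datapoints_to_alert)
    return states

# Evaluation period of 3 and Datapoints to alert of 2, as in the scenario.
history = [True, True, False, False, False, True, True]
states = alert_states(history, evaluation_period=3, datapoints_to_alert=2)
# Second execution already alerts: 2 failures, even though only 2 executions
# have run. The alert clears once the window holds fewer than 2 failures.
```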

![A sequence of seven example monitoring executions.](http://docs.aws.amazon.com/sagemaker/latest/dg/images/model_monitor/model-dashboard-alerts-window.png)


Within the monitor details page, you can view your alert history, edit existing alert criteria, and view job reports to help you debug alert failures. For instructions about how to view alert history or job reports for failed monitoring executions, see [View alert history or job reports](model-dashboard-alerts-view.md). For instructions about how to edit alert criteria, see [Edit alert criteria](model-dashboard-alerts-edit.md).

# View alert history or job reports


**To view alert history or job reports of failed executions, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the name of the model whose alert history you want to view.

1. In the **Schedule name** column, select the name of the monitor whose alert history you want to view.

1. To view alert history, select the **Alert history** tab.

1. (optional) To view job reports of monitoring executions, complete the following steps:

   1. In the **Alert history** tab, choose **View executions** for the alert you want to investigate.

   1. In the **Execution history** table, choose **View report** of the monitoring execution you want to investigate.

      The report displays the following information:
      + **Feature**: The user-defined ML feature being monitored.
      + **Constraint**: The specific check within the monitor.
      + **Violation details**: Information about why the constraint was violated.
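The same fields appear in the `constraint_violations.json` file that a monitoring job writes to Amazon S3, so report rows can also be read programmatically. A sketch using a hypothetical violations payload (the feature name and description are invented for illustration):

```python
import json

def summarize_violations(report_json):
    """Flatten a constraint_violations.json payload into
    (feature, constraint, details) tuples, mirroring the report columns."""
    report = json.loads(report_json)
    return [(v["feature_name"], v["constraint_check_type"], v["description"])
            for v in report.get("violations", [])]

# Hypothetical report content for illustration; in practice, download the
# file from the monitoring schedule's S3 output location.
report_json = json.dumps({
    "violations": [{
        "feature_name": "loan_amount",
        "constraint_check_type": "baseline_drift_check",
        "description": "Baseline drift distance 0.4 exceeds threshold 0.1",
    }]
})
rows = summarize_violations(report_json)
```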

# Edit alert criteria


**To edit an alert in the Model Dashboard, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the name of the model whose alert you want to modify.

1. Select the radio button next to the monitor schedule of the alert you want to modify.

1. Choose **Edit Alert** in the **Monitor schedule** section.

1. (optional) Change **Datapoints to alert** if you want to change the number of failures within the **Evaluation period** that initiate an alert.

1. (optional) Change **Evaluation period** if you want to change the number of most recent monitoring executions to consider when evaluating alert status.
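Because these alerts are Amazon CloudWatch alarms, the two fields above map to the `DatapointsToAlarm` and `EvaluationPeriods` parameters of the CloudWatch `PutMetricAlarm` API. The sketch below only assembles the window-related kwargs; the alarm name is a placeholder, and a real `put_metric_alarm` call also requires the alarm's metric and comparison settings:

```python
def build_alarm_update(alarm_name, datapoints_to_alert, evaluation_period):
    """Assemble the alerting-window kwargs for cloudwatch.put_metric_alarm().
    Only the two fields covered on this page are shown; a complete call also
    needs the metric, threshold, and comparison settings."""
    if datapoints_to_alert > evaluation_period:
        raise ValueError("Datapoints to alert cannot exceed the evaluation period")
    return {
        "AlarmName": alarm_name,  # placeholder name
        "DatapointsToAlarm": datapoints_to_alert,
        "EvaluationPeriods": evaluation_period,
    }

kwargs = build_alarm_update("my-monitor-alert",
                            datapoints_to_alert=2, evaluation_period=3)
# A live update would merge these into boto3.client("cloudwatch")
# .put_metric_alarm(**kwargs, ...) along with the existing alarm settings.
```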