

# Alerts in Grafana version 12
<a name="v12-alerts"></a>

This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Grafana Alerting allows you to learn about problems in your systems moments after they occur.

Set up your Alerting system to monitor your incoming metrics data or log entries, watch for specific events or circumstances, and send notifications when those things are found.

In this way, you eliminate the need for manual monitoring and provide a first line of defense against system outages or changes that could turn into major incidents.

Using Grafana Alerting, you create queries and expressions from multiple data sources — no matter where your data is stored — giving you the flexibility to combine your data and alert on your metrics and logs in new and unique ways. You can then create, manage, and take action on your alerts from a single, consolidated view, and improve your team’s ability to identify and resolve issues quickly.

With Mimir and Loki alert rules you can run alert expressions closer to your data and at massive scale, all managed by the Grafana UI you are already familiar with.

**Note**  
The `$value` variable in alert notification templates now returns the query value when querying a single data source. Review alert templates that use `$value` and update formatting if needed.

## Key features and benefits
<a name="v12-alerting-key-features"></a>

**One page for all alerts**

A single Grafana Alerting page consolidates both Grafana-managed alerts and alerts that reside in your Prometheus-compatible data sources in one place.

**Multi-dimensional alerts**

Alert rules can create multiple individual alert instances per rule, known as multi-dimensional alerts, giving you the power and flexibility to gain visibility into your entire system with just a single alert rule. You do this by adding labels to your query that specify which component is being monitored, generating multiple alert instances from a single alert rule. For example, if you want to monitor each server in a cluster, a multi-dimensional rule alerts on each CPU individually, whereas a standard rule alerts on the server as a whole.
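For instance, a query that returns one series per labeled component yields one alert instance per series from a single rule. The metric and label names below follow common node_exporter conventions and are illustrative:

```
# One series per instance/mountpoint combination, so one alert instance each:
100 * node_filesystem_avail_bytes / node_filesystem_size_bytes < 5
```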

**Route alerts**

Route each alert instance to a specific contact point based on labels you define. Notification policies are the set of rules for where, when, and how the alerts are routed to contact points.

**Silence alerts**

Silences stop notifications from being created, and last only for a specified window of time. Silences allow you to stop receiving persistent notifications from one or more alert rules. You can also partially pause an alert based on certain criteria. You can create silences that apply to specific alert rules with granular permissions, providing more targeted alert suppression. Silences have their own dedicated section for better organization and visibility, so that you can scan your paused alert rules without cluttering the main alerting view.

**Mute timings**

A mute timing is a recurring interval of time when no new notifications for a policy are generated or sent. Use them to prevent alerts from firing during a specific, recurring period, for example, a regular maintenance period.

Similar to silences, mute timings do not prevent alert rules from being evaluated, nor do they stop alert instances from being shown in the user interface. They only prevent notifications from being created.
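As a sketch of the alerts-as-code approach, a mute timing could be provisioned with the Grafana Terraform provider roughly like this (the resource name follows the provider's `grafana_mute_timing` resource; treat the exact schema and the timing name as assumptions to verify against your provider version):

```
resource "grafana_mute_timing" "weekend_maintenance" {
  name = "Weekend maintenance"

  # Suppress notifications every Saturday and Sunday, 02:00-04:00.
  intervals {
    weekdays = ["saturday", "sunday"]
    times {
      start = "02:00"
      end   = "04:00"
    }
  }
}
```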

## Design your Alerting system
<a name="v12-alerting-design"></a>

Monitoring complex IT systems and understanding whether everything is up and running correctly is a difficult task. Setting up an effective alert management system is therefore essential to inform you when things are going wrong before they start to impact your business outcomes.

Designing and configuring an alert management setup that works takes time.

Here are some tips on how to create an effective alert management setup for your business:

**Which are the key metrics for your business that you want to monitor and alert on?**
+ Find events that are important to know about and not so trivial or frequent that recipients ignore them.
+ Alerts should only be created for big events that require immediate attention or intervention.
+ Consider quality over quantity.

**Which type of Alerting do you want to use?**
+ Choose Grafana-managed alerting, Grafana Mimir or Loki-managed alerting, or both.

**How do you want to organize your alerts and notifications?**
+ Be selective about who receives alerts. Consider sending them to whoever is on call or to a specific Slack channel.
+ Automate as much as possible using the Alerting API or alerts as code (Terraform).

**How can you reduce alert fatigue?**
+ Avoid noisy, unnecessary alerts by using silences, mute timings, or pausing alert rule evaluation.
+ Continually tune your alert rules for effectiveness. Remove duplicate or ineffective alert rules.
+ Think carefully about priority and severity levels.
+ Continually review your thresholds and evaluation rules.

## Grafana alerting limitations
<a name="v12-alerting-limitations"></a>
+ When aggregating rules from other systems, the Grafana alerting system can retrieve rules from all available Amazon Managed Service for Prometheus, Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch rules from other supported data sources.

**Important**  
Amazon Managed Grafana has an alert evaluation timeout of 30 seconds. Queries made by alerts have a maximum duration of 30 seconds due to the high volume of queries the alert engine can generate. This timeout is not configurable. For more information, see [Amazon Managed Grafana service quotas](https://docs.aws.amazon.com/general/latest/gr/grafana-service.html#grafana-quotas) in the *AWS General Reference*.

**Topics**
+ [Key features and benefits](#v12-alerting-key-features)
+ [Design your Alerting system](#v12-alerting-design)
+ [Grafana alerting limitations](#v12-alerting-limitations)
+ [Overview](v12-alerting-overview.md)
+ [Set up Alerting](v12-alerting-setup.md)
+ [Configure alerting](v12-alerting-configure.md)
+ [Manage your alerts](v12-alerting-manage.md)

# Overview
<a name="v12-alerting-overview"></a>


Whether you’re just starting out or you’re a more experienced user of Grafana Alerting, learn more about the fundamentals and available features that help you create, manage, and respond to alerts, and improve your team’s ability to resolve issues quickly.

## Principles
<a name="v12-alerting-overview-principles"></a>

In Prometheus-based alerting systems, you have an alert generator that creates alerts and an alert receiver that receives alerts. For example, Prometheus is an alert generator and is responsible for evaluating alert rules, while Alertmanager is an alert receiver and is responsible for grouping, inhibiting, silencing, and sending notifications about firing and resolved alerts.

Grafana Alerting is built on the Prometheus model of designing alerting systems. It has an internal alert generator responsible for scheduling and evaluating alert rules, as well as an internal alert receiver responsible for grouping, inhibiting, silencing, and sending notifications. Grafana doesn’t use Prometheus as its alert generator because Grafana Alerting needs to work with many other data sources in addition to Prometheus. However, it does use Alertmanager as its alert receiver.

Alerts are sent to the alert receiver, where they are routed, grouped, inhibited, silenced, and turned into notifications. In Grafana Alerting, the default alert receiver is the Alertmanager embedded inside Grafana, referred to as the Grafana Alertmanager. However, you can use other Alertmanagers too; these are referred to as [External Alertmanagers](v12-alerting-setup-alertmanager.md).

## Fundamentals
<a name="v12-alerting-overview-fundamentals"></a>

The following provides an overview of the different parts of Grafana alerting.

### Alert rules
<a name="v12-alerting-overview-alert-rules"></a>

An alert rule is a set of criteria that determine when an alert should fire. It consists of one or more queries and expressions, a condition which needs to be met, an interval which determines how often the alert rule is evaluated, and a duration over which the condition must be met for an alert to fire.

Alert rules are evaluated over their interval, and each alert rule can have zero, one, or any number of alerts firing at a time. The state of the alert rule is determined by its most severe alert, and can be one of Normal, Pending, or Firing. For example, if at least one of an alert rule’s alerts is firing, then the alert rule is also firing. The health of an alert rule is determined by the status of its most recent evaluation, which can be OK, Error, or NoData.
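As a minimal illustration of this "most severe wins" semantics (a sketch, not Grafana's implementation), the rule state can be derived from its instance states like so:

```python
# Severity order for the alert rule states described above.
SEVERITY = {"Normal": 0, "Pending": 1, "Firing": 2}

def rule_state(instance_states):
    """Return the state of a rule: that of its most severe alert instance."""
    if not instance_states:
        return "Normal"  # no instances means nothing is pending or firing
    return max(instance_states, key=SEVERITY.get)

print(rule_state(["Normal", "Pending", "Firing"]))  # Firing
print(rule_state(["Normal", "Normal"]))             # Normal
```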

Alert rules can be configured to keep their last state when a query returns no data or an error, preventing unnecessary alert state changes. Alert rules also maintain a version history, allowing you to track changes and revert to previous configurations. Alert rules support a Recovering state that indicates when an alert condition is no longer firing but has not yet returned to normal.

A very important feature of alert rules is that they support custom annotations and labels. These allow you to instrument alerts with additional metadata such as summaries and descriptions, and add additional labels to route alerts to specific notification policies.

### Alerts
<a name="v12-alerting-overview-alerts"></a>

Alerts are uniquely identified by sets of key/value pairs called Labels. Each key is a label name and each value is a label value. For example, one alert might have the labels `foo=bar` and another alert might have the labels `foo=baz`. An alert can have many labels such as `foo=bar,bar=baz` but it cannot have the same label twice such as `foo=bar,foo=baz`. Two alerts cannot have the same labels either, and if two alerts have the same labels such as `foo=bar,bar=baz` and `foo=bar,bar=baz` then one of the alerts will be discarded. Alerts are resolved when the condition in the alert rule is no longer met, or the alert rule is deleted.
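To make the identity rule concrete, here is a small Python sketch (purely illustrative) showing that alerts with identical label sets collapse to one:

```python
def dedupe_alerts(alerts):
    """Keep one alert per unique label set; later duplicates replace earlier ones."""
    by_label_set = {}
    for labels in alerts:
        by_label_set[frozenset(labels.items())] = labels
    return list(by_label_set.values())

alerts = [
    {"foo": "bar", "bar": "baz"},
    {"foo": "bar", "bar": "baz"},  # same label set: discarded as a duplicate
    {"foo": "baz"},
]
print(len(dedupe_alerts(alerts)))  # 2
```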

In Grafana-managed alerts, alerts can be in Normal, Pending, Alerting, NoData, or Error states. In data source-managed alerts, such as Mimir and Loki, alerts can be in Normal, Pending, and Alerting states, but not NoData or Error.

### Contact points
<a name="v12-alerting-overview-contact-points"></a>

Contact points determine where notifications are sent. For example, you might have a contact point that sends notifications to an email address, to Slack, to an incident management system (IRM) such as Grafana OnCall or PagerDuty, or to a webhook.

The notifications that are sent from contact points can be customized using notification templates. You can use notification templates to change the title, message, and structure of the notification. Notification templates are not specific to individual integrations or contact points.

### Notification policies
<a name="v12-alerting-overview-notification-policies"></a>

Notification policies group alerts and then route them to contact points. They determine when notifications are sent, and how often notifications should be repeated.

Alerts are matched to notification policies using label matchers. These are human-readable expressions that assert whether the alert’s labels exactly match, do not exactly match, contain, or do not contain some expected text. For example, the matcher `foo=bar` matches alerts with the label `foo=bar`, while the matcher `foo=~[a-zA-Z]+` matches alerts with any label called `foo` whose value matches the regular expression `[a-zA-Z]+`.

By default, an alert can only match one notification policy. However, with the `continue` feature alerts can be made to match any number of notification policies at the same time. For more information about notification policies, see [Notification Policies](v12-alerting-explore-notifications-policies-details.md).
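As an illustration of routing with `continue`, a policy tree might be defined as code with the Grafana Terraform provider along these lines (the `grafana_notification_policy` resource schema and the contact point names here are assumptions for the sketch; verify against your provider version):

```
resource "grafana_notification_policy" "root" {
  contact_point = "default-email"   # fallback contact point, assumed to exist
  group_by      = ["alertname"]

  policy {
    contact_point = "ops-slack"     # assumed contact point
    continue      = true            # keep evaluating later policies as well
    matcher {
      label = "team"
      match = "="
      value = "ops"
    }
  }
}
```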

### Silences and mute timings
<a name="v12-alerting-overview-silences-and-mute-timings"></a>

Silences and mute timings allow you to pause notifications for specific alerts or even entire notification policies. Use a silence to pause notifications on an ad-hoc basis, such as while working on a fix for an alert; and use mute timings to pause notifications at regular intervals, such as during regularly scheduled maintenance windows.

**Topics**
+ [Principles](#v12-alerting-overview-principles)
+ [Fundamentals](#v12-alerting-overview-fundamentals)
+ [Data sources and Grafana alerting](v12-alerting-overview-datasources.md)
+ [Alerting on numeric data](v12-alerting-overview-numeric.md)
+ [Labels and annotations](v12-alerting-overview-labels.md)
+ [About alert rules](v12-alerting-explore-rules.md)
+ [Alertmanager](v12-alerting-explore-alertmanager.md)
+ [Contact points](v12-alerting-explore-contacts.md)
+ [Notifications](v12-alerting-explore-notifications.md)
+ [Alerting high availability](v12-alerting-explore-high-availability.md)

# Data sources and Grafana alerting
<a name="v12-alerting-overview-datasources"></a>


There are a number of data sources that are compatible with Grafana Alerting. Each data source is supported by a plugin. Because the alert evaluation engine runs on the backend, Grafana alerting requires *backend* data source plugins to evaluate rules. Plugins must also specify that they are compatible with Grafana alerting.

Data sources are added and updated over time. The following data sources are known to be compatible with Grafana alerting.
+ [Connect to an Amazon CloudWatch data source](using-amazon-cloudwatch-in-AMG.md)
+ [Connect to an Azure Monitor data source](using-azure-monitor-in-AMG.md)
+ [Connect to an Amazon OpenSearch Service data source](using-Amazon-OpenSearch-in-AMG.md)
+ [Connect to a Google Cloud Monitoring data source](using-google-cloud-monitoring-in-grafana.md)
+ [Connect to a Graphite data source](using-graphite-in-AMG.md)
+ [Connect to an InfluxDB data source](using-influxdb-in-AMG.md)
+ [Connect to a Loki data source](using-loki-in-AMG.md)
+ [Connect to a Microsoft SQL Server data source](using-microsoft-sql-server-in-AMG.md)
+ [Connect to a MySQL data source](using-mysql-in-AMG.md)
+ [Connect to an OpenTSDB data source](using-opentsdb-in-AMG.md)
+ [Connect to a PostgreSQL data source](using-postgresql-in-AMG.md)
+ [Connect to Amazon Managed Service for Prometheus and open-source Prometheus data sources](prometheus-data-source.md)
+ [Connect to a Jaeger data source](jaeger-data-source.md)
+ [Connect to a Zipkin data source](zipkin-data-source.md)
+ [Connect to a Tempo data source](tempo-data-source.md)
+ [Configure a TestData data source for testing](testdata-data-source.md)

For more detailed information about data sources and data source plugins in Amazon Managed Grafana, see [Connect to data sources](AMG-data-sources.md).

# Alerting on numeric data
<a name="v12-alerting-overview-numeric"></a>


This topic describes how Grafana handles alerting on numeric rather than time series data.

With certain data sources, numeric data that is not time series can be alerted on directly, or passed into Server Side Expressions (SSE). This allows for more processing within the data source and resulting efficiency, and it can also simplify alert rules. When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number; instead, labeled numbers are returned to Grafana.

## Tabular data
<a name="v12-alerting-numeric-tabular"></a>

This feature is supported with backend data sources that query tabular data:
+ SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
+ The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.

A query with Grafana-managed alerts or SSE is considered numeric with these data sources if:
+ The “Format as” option is set to “Table” in the data source query.
+ The table response returned to Grafana from the query includes only one numeric column (for example, int, double, or float), and optionally additional string columns.

If there are string columns, those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, each row must be uniquely identified by its labels.

## Example
<a name="v12-alerting-numeric-tabexample"></a>

For a MySQL table called “DiskSpace”:


| Time | Host | Disk | PercentFree | 
| --- | --- | --- | --- | 
| 2021-June-7 | web1 | /etc | 3 | 
| 2021-June-7 | web2 | /var | 4 | 
| 2021-June-7 | web3 | /var | 8 | 
| ... | ... | ... | ... | 

You can query the data, filtering on time, but without returning the time series to Grafana. For example, an alert that triggers per Host and Disk when there is less than 5% free space:

```
SELECT Host, Disk, CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree
FROM (
    SELECT
        Host,
        Disk,
        AVG(PercentFree) AS PercentFree
    FROM DiskSpace
    -- $__timeFilter is a Grafana macro that expands to a time range predicate
    WHERE $__timeFilter(Time)
    GROUP BY
        Host,
        Disk
) AS avg_disk
```

This query returns the following Table response to Grafana:


| Host | Disk | PercentFree | 
| --- | --- | --- | 
| web1 | /etc | 3 | 
| web2 | /var | 4 | 
| web3 | /var | 0 | 

When this query is used as the **condition** in an alert rule, the rows with non-zero values will be alerting. As a result, three alert instances are produced:


| Labels | Status | 
| --- | --- | 
| `Host=web1,disk=/etc` | Alerting | 
| `Host=web2,disk=/var` | Alerting | 
| `Host=web3,disk=/var` | Normal | 

# Labels and annotations
<a name="v12-alerting-overview-labels"></a>


Labels and annotations contain information about an alert. Both labels and annotations have the same structure: a set of named values; however, their intended uses are different. An example of a label, or the equivalent annotation, might be `alertname="test"`.

The main difference between a label and an annotation is that labels are used to differentiate an alert from all other alerts, while annotations are used to add additional information to an existing alert.

For example, consider two high CPU alerts: one for `server1` and another for `server2`. In such an example, we might have a label called `server` where the first alert has the label `server="server1"` and the second alert has the label `server="server2"`. However, we might also want to add a description to each alert such as `"The CPU usage for server1 is above 75%."`, where `server1` and `75%` are replaced with the name and CPU usage of the server (please refer to the documentation on [Templating labels and annotations](v12-alerting-overview-labels-templating.md) for how to do this). This kind of description would be more suitable as an annotation.

## Labels
<a name="v12-alerting-overview-labels-labels"></a>

Labels contain information that identifies an alert. An example of a label might be `server=server1`. Each alert can have more than one label, and the complete set of labels for an alert is called its label set. It is this label set that identifies the alert.

For example, an alert might have the label set `{alertname="High CPU usage",server="server1"}` while another alert might have the label set `{alertname="High CPU usage",server="server2"}`. These are two separate alerts because although their `alertname` labels are the same, their `server` labels are different.

The label set for an alert is a combination of the labels from the data source, custom labels from the alert rule, and a number of reserved labels such as `alertname`.

**Custom Labels**

Custom labels are additional labels from the alert rule. Like annotations, custom labels must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. Documentation on how to template custom labels can be found [here](v12-alerting-overview-labels-templating.md).

When using custom labels with templates, make sure that the label value does not change between consecutive evaluations of the alert rule, as this will create large numbers of distinct alerts. However, it is OK for the template to produce different label values for different alerts. For example, do not put the value of the query in a custom label, as this will create a new set of alerts each time the value changes. Instead, use annotations.

It is also important to make sure that the label set for an alert does not have two or more labels with the same name. If a custom label has the same name as a label from the data source, it replaces that label. However, if a custom label has the same name as a reserved label, the custom label is omitted from the alert.

## Annotations
<a name="v12-alerting-overview-labels-annotations"></a>

Annotations are named pairs that add additional information to existing alerts. There are a number of suggested annotations in Grafana, such as `description`, `summary`, `runbook_url`, `dashboardUid`, and `panelId`. Like custom labels, annotations must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. If an annotation contains template code, the template is evaluated once when the alert is fired. It is not re-evaluated, even when the alert is resolved. Documentation on how to template annotations can be found [here](v12-alerting-overview-labels-templating.md).

**Topics**
+ [Labels](#v12-alerting-overview-labels-labels)
+ [Annotations](#v12-alerting-overview-labels-annotations)
+ [How label matching works](v12-alerting-overview-labels-matching.md)
+ [Labels in Grafana Alerting](v12-alerting-overview-labels-alerting.md)
+ [Templating labels and annotations](v12-alerting-overview-labels-templating.md)

# How label matching works
<a name="v12-alerting-overview-labels-matching"></a>


Use labels and label matchers to link alert rules to notification policies and silences. This allows for a very flexible way to manage your alert instances, specify which policy should handle them, and which alerts to silence.

A label matcher consists of three distinct parts: the **label**, the **value**, and the **operator**.
+ The **Label** field is the name of the label to match. It must exactly match the label name.
+ The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.
+ The **Operator** field is the operator to match against the label value. The available operators are:


| Operator | Description | 
| --- | --- | 
| `=` | Select labels that are exactly equal to the value. | 
| `!=` | Select labels that are not equal to the value. | 
| `=~` | Select labels that regex-match the value. | 
| `!~` | Select labels that do not regex-match the value. | 

If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.

## Example
<a name="v12-alerting-overview-labels-matching-ex"></a>

If you define the following set of labels for your alert:

```
{ foo=bar, baz=qux, id=12 }
```

then:
+ A label matcher defined as `foo=bar` matches this alert rule.
+ A label matcher defined as `foo!=bar` does *not* match this alert rule.
+ A label matcher defined as `id=~[0-9]+` matches this alert rule.
+ A label matcher defined as `baz!~[0-9]+` matches this alert rule.
+ Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
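To make these semantics concrete, here is a small Python sketch of the four operators (purely illustrative, not Grafana's implementation; it assumes regex matchers must match the entire label value):

```python
import re

def matches(labels, label, op, value):
    """Evaluate one label matcher against an alert's label set."""
    actual = labels.get(label, "")
    if op == "=":
        return actual == value
    if op == "!=":
        return actual != value
    if op == "=~":
        return re.fullmatch(value, actual) is not None
    if op == "!~":
        return re.fullmatch(value, actual) is None
    raise ValueError(f"unknown operator: {op}")

labels = {"foo": "bar", "baz": "qux", "id": "12"}

# Multiple matchers are combined with logical AND:
print(all([
    matches(labels, "foo", "=", "bar"),
    matches(labels, "id", "=~", "[0-9]+"),
]))  # True
```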

## Exclude labels
<a name="v12-alerting-overview-labels-matching-exclude"></a>

You can also write label matchers to exclude labels.

Here is an example that shows how to exclude the label `team`. You can use any of these matchers to exclude the label.
+ `team=""`
+ `team!~.+`
+ `team=~^$`

# Labels in Grafana Alerting
<a name="v12-alerting-overview-labels-alerting"></a>


This topic explains why labels are a fundamental component of alerting.
+ The complete set of labels for an alert is what uniquely identifies an alert within Grafana alerts.
+ The Alertmanager uses labels to match alerts for silences and alert groups in notification policies.
+ The alerting UI shows labels for every alert instance generated during evaluation of that rule.
+ Contact points can access labels to dynamically generate notifications that contain information specific to the alert that is resulting in a notification.
+ You can add labels to an [alerting rule](v12-alerting-configure.md). Labels are manually configurable, use template functions, and can reference other labels. Labels added to an alerting rule take precedence in the event of a collision between labels (except in the case of Grafana reserved labels, see below for more information).

## External Alertmanager compatibility
<a name="v12-alerting-overview-labels-alerting-external"></a>

Grafana’s built-in Alertmanager supports both Unicode label keys and values. If you are using an external Prometheus Alertmanager, label keys must be compatible with its [data model](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels). This means that label keys must contain only **ASCII letters**, **numbers**, and **underscores**, and match the regex `[a-zA-Z_][a-zA-Z0-9_]*`. Any invalid characters are removed or replaced by the Grafana alerting engine before being sent to the external Alertmanager, according to the following rules:
+ Whitespace will be removed.
+ Invalid ASCII characters will be replaced with `_`.
+ All other characters will be replaced with their lower-case hex representation. If this is the first character, it will be prefixed with `_`.
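The following Python sketch illustrates the rules above (an illustration of the documented behavior, not Grafana's exact implementation):

```python
import re

def sanitize_label_key(key: str) -> str:
    """Sanitize a label key for an external Prometheus Alertmanager."""
    out = []
    first_is_hex = False
    for ch in key:
        if ch.isspace():
            continue  # whitespace is removed
        if re.fullmatch(r"[a-zA-Z0-9_]", ch):
            out.append(ch)  # valid character, kept as-is
        elif ch.isascii():
            out.append("_")  # invalid ASCII character -> underscore
        else:
            if not out:
                first_is_hex = True
            out.append(format(ord(ch), "x"))  # other chars -> lower-case hex
    result = "".join(out)
    if first_is_hex:
        result = "_" + result  # hex at the first position gets a "_" prefix
    return result

print(sanitize_label_key("host name"))  # hostname
print(sanitize_label_key("team-µ"))     # team_b5
```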

**Note**  
If multiple label keys are sanitized to the same value, the duplicates will have a short hash of the original label appended as a suffix.

## Grafana reserved labels
<a name="v12-alerting-overview-labels-alerting-reserved"></a>

**Note**  
Labels prefixed with `grafana_` are reserved by Grafana for special use. If a manually configured label beginning with `grafana_` is added, it will be overwritten in case of collision.

Grafana reserved labels can be used in the same way as manually configured labels. The current list of available reserved labels is:


| Label | Description | 
| --- | --- | 
| `grafana_folder` | Title of the folder containing the alert. | 

# Templating labels and annotations
<a name="v12-alerting-overview-labels-templating"></a>


You can use templates to include data from queries and expressions in labels and annotations. For example, you might want to set the severity label for an alert based on the value of the query, or use the instance label from the query in a summary annotation so you know which server is experiencing high CPU usage.

All templates should be written in [text/template](https://pkg.go.dev/text/template). Regardless of whether you are templating a label or an annotation, you should write each template inline inside the label or annotation that you are templating. This means you cannot share templates between labels and annotations, and instead you will need to copy templates wherever you want to use them.

Each template is evaluated whenever the alert rule is evaluated, and is evaluated for every alert separately. For example, if your alert rule has a templated summary annotation, and the alert rule has 10 firing alerts, then the template will be executed 10 times, once for each alert. You should try to avoid doing expensive computations in your templates as much as possible.

## Examples
<a name="v12-alerting-overview-labels-templating-examples"></a>

Rather than write a complete tutorial on text/template, the following examples attempt to show the most common use-cases we have seen for templates. You can use these examples verbatim, or adapt them as necessary for your use case. For more information about how to write text/template see the [text/template](https://pkg.go.dev/text/template) documentation.

**Print all labels, comma separated**

To print all labels, comma separated, print the `$labels` variable:

```
{{ $labels }}
```

For example, given an alert with the labels `alertname=High CPU usage`, `grafana_folder=CPU alerts` and `instance=server1`, this would print: 

```
alertname=High CPU usage, grafana_folder=CPU alerts, instance=server1
```

**Note**  
If you are using classic conditions then `$labels` will not contain any labels from the query. Refer to [the $labels variable](#v12-alerting-overview-labels-templating-the-labels-variable) for more information.

**Print all labels, one per line**

To print all labels, one per line, use a `range` to iterate over each key/value pair and print them individually. Here `$k` refers to the name and `$v` refers to the value of the current label: 

```
{{ range $k, $v := $labels -}}
{{ $k }}={{ $v }}
{{ end }}
```

For example, given an alert with the labels `alertname=High CPU usage`, `grafana_folder=CPU alerts` and `instance=server1`, this would print:

```
alertname=High CPU usage
grafana_folder=CPU alerts
instance=server1
```
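Outside Grafana, the same template can be exercised with Go's text/template directly. In this sketch (the `printLabels` helper is illustrative) the labels are passed as the template's dot instead of `$labels`; note that text/template iterates maps in sorted key order, which is why the output above is stable.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// printLabels renders the "one label per line" template. text/template
// iterates maps in sorted key order, so the output is deterministic.
func printLabels(labels map[string]string) string {
	const src = `{{ range $k, $v := . -}}
{{ $k }}={{ $v }}
{{ end }}`
	var buf bytes.Buffer
	template.Must(template.New("labels").Parse(src)).Execute(&buf, labels)
	return buf.String()
}

func main() {
	fmt.Print(printLabels(map[string]string{
		"alertname":      "High CPU usage",
		"grafana_folder": "CPU alerts",
		"instance":       "server1",
	}))
}
```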

**Note**  
If you are using classic conditions then `$labels` will not contain any labels from the query. Refer to [the $labels variable](#v12-alerting-overview-labels-templating-the-labels-variable) for more information.

**Print an individual label**

To print an individual label use the `index` function with the `$labels` variable: 

```
The host {{ index $labels "instance" }} has exceeded 80% CPU usage for the last 5 minutes
```

For example, given an alert with the label `instance=server1`, this would print:

```
The host server1 has exceeded 80% CPU usage for the last 5 minutes
```

**Note**  
If you are using classic conditions then `$labels` will not contain any labels from the query. Refer to [the $labels variable](#v12-alerting-overview-labels-templating-the-labels-variable) for more information.

**Print the value of a query**

To print the value of an instant query you can print its Ref ID using the `index` function and the `$values` variable: 

```
{{ index $values "A" }}
```

For example, given an instant query that returns the value 81.2345, this will print:

```
81.2345
```

To print the value of a range query you must first reduce it from a time series to an instant vector with a reduce expression. You can then print the result of the reduce expression by using its Ref ID instead. For example, if the reduce expression takes the average of A and has the Ref ID B you would write: 

```
{{ index $values "B" }}
```

**Print the humanized value of a query**

To print the humanized value of an instant query use the `humanize` function:

```
{{ humanize (index $values "A").Value }}
```

For example, given an instant query that returns the value 81.2345, this will print: 

```
81.234
```

To print the humanized value of a range query you must first reduce it from a time series to an instant vector with a reduce expression. You can then print the result of the reduce expression by using its Ref ID instead. For example, if the reduce expression takes the average of A and has the Ref ID B you would write: 

```
{{ humanize (index $values "B").Value }}
```

**Print the value of a query as a percentage**

To print the value of an instant query as a percentage use the `humanizePercentage` function:

```
{{ humanizePercentage (index $values "A").Value }}
```

This function expects the value to be a decimal number between 0 and 1. If the value is instead a decimal number between 0 and 100 you can divide it by 100 either in your query or using a math expression. If the query is a range query you must first reduce it from a time series to an instant vector with a reduce expression.

**Set a severity from the value of a query**

To set a severity label from the value of a query use an if statement and the greater than comparison function. Make sure to use decimals (`80.0`, `50.0`, `0.0`, etc) when doing comparisons against `$values` as text/template does not support type coercion. You can find a list of all the supported comparison functions [here](https://pkg.go.dev/text/template#hdr-Functions).

```
{{ if (gt $values.A.Value 80.0) -}}
high
{{ else if (gt $values.A.Value 50.0) -}}
medium
{{ else -}}
low
{{- end }}
```
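The severity template can be tested outside Grafana with a small Go sketch; here the query value is passed as the template's dot rather than `$values.A.Value`, and the `severity` helper name is illustrative.

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// severity renders the severity template against a single query value.
// The comparisons use float literals (80.0, 50.0) because text/template
// does not coerce types.
func severity(value float64) string {
	const src = `{{ if (gt . 80.0) -}}
high
{{ else if (gt . 50.0) -}}
medium
{{ else -}}
low
{{- end }}`
	var buf bytes.Buffer
	template.Must(template.New("severity").Parse(src)).Execute(&buf, value)
	return strings.TrimSpace(buf.String())
}

func main() {
	fmt.Println(severity(81.2345)) // high
	fmt.Println(severity(60.0))    // medium
	fmt.Println(severity(10.0))    // low
}
```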

**Print all labels from a classic condition**

You cannot use `$labels` to print labels from the query if you are using classic conditions, and must use `$values` instead. The reason for this is classic conditions discard these labels to enforce uni-dimensional behavior (at most one alert per alert rule). If classic conditions didn’t discard these labels, then queries that returned many time series would cause alerts to flap between firing and resolved constantly as the labels would change every time the alert rule was evaluated.

Instead, the `$values` variable contains the reduced values of all time series for all conditions that are firing. For example, if you have an alert rule with a query A that returns two time series, and a classic condition B with two conditions, then `$values` would contain `B0`, `B1`, `B2` and `B3`. If the classic condition B had just one condition, then `$values` would contain just `B0` and `B1`.

To print all labels of all firing time series use the following template (make sure to replace `B` in the regular expression with the Ref ID of the classic condition if it’s different): 

```
{{ range $k, $v := $values -}}
{{ if (match "B[0-9]+" $k) -}}
{{ $k }}: {{ $v.Labels }}{{ end }}
{{ end }}
```

For example, a classic condition for two time series exceeding a single condition would print: 

```
B0: instance=server1
B1: instance=server2
```

If the classic condition has two or more conditions, and a time series exceeds multiple conditions at the same time, then its labels will be duplicated for each condition that is exceeded: 

```
B0: instance=server1
B1: instance=server2
B2: instance=server1
B3: instance=server2
```

If you need to print unique labels you should consider changing your alert rules from uni-dimensional to multi-dimensional instead. You can do this by replacing your classic condition with reduce and math expressions.
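To experiment with this template outside Grafana, the `match` function can be wired up with a `FuncMap` on top of Go's regexp package. The `refValue` struct and `firingLabels` helper below are illustrative stand-ins for the entries of `$values`, not Grafana types.

```go
package main

import (
	"bytes"
	"fmt"
	"regexp"
	"text/template"
)

// refValue stands in for one entry of $values: the labels and reduced
// value of a single time series under a classic condition.
type refValue struct {
	Labels string
	Value  float64
}

// firingLabels renders the classic-condition template; "match" is
// registered via a FuncMap so the sketch runs outside Grafana.
func firingLabels(values map[string]refValue) string {
	const src = `{{ range $k, $v := . -}}
{{ if (match "B[0-9]+" $k) -}}
{{ $k }}: {{ $v.Labels }}{{ end }}
{{ end }}`
	funcs := template.FuncMap{
		"match": func(pattern, s string) (bool, error) {
			return regexp.MatchString(pattern, s)
		},
	}
	var buf bytes.Buffer
	template.Must(template.New("firing").Funcs(funcs).Parse(src)).Execute(&buf, values)
	return buf.String()
}

func main() {
	fmt.Print(firingLabels(map[string]refValue{
		"A":  {Labels: "instance=server1", Value: 81.2345},
		"B0": {Labels: "instance=server1", Value: 1},
		"B1": {Labels: "instance=server2", Value: 1},
	}))
}
```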

**Print all values from a classic condition**

To print all values from a classic condition take the previous example and replace `$v.Labels` with `$v.Value`: 

```
{{ range $k, $v := $values -}}
{{ if (match "B[0-9]+" $k) -}}
{{ $k }}: {{ $v.Value }}{{ end }}
{{ end }}
```

For example, a classic condition for two time series exceeding a single condition would print: 

```
B0: 81.2345
B1: 84.5678
```

If the classic condition has two or more conditions, and a time series exceeds multiple conditions at the same time, then `$values` will contain the values of all conditions: 

```
B0: 81.2345
B1: 92.3456
B2: 84.5678
B3: 95.6789
```

## Variables
<a name="v12-alerting-overview-labels-templating-variables"></a>

The following variables are available to you when templating labels and annotations:

### The labels variable
<a name="v12-alerting-overview-labels-templating-the-labels-variable"></a>

The `$labels` variable contains all labels from the query. For example, suppose you have a query that returns CPU usage for all of your servers, and you have an alert rule that fires when any of your servers have exceeded 80% CPU usage for the last 5 minutes. You want to add a summary annotation to the alert that tells you which server is experiencing high CPU usage. With the `$labels` variable you can write a template that prints a human-readable sentence such as: 

```
CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes
```

**Note**  
If you are using a classic condition then `$labels` will not contain any labels from the query. Classic conditions discard these labels in order to enforce uni-dimensional behavior (at most one alert per alert rule). If you want to use labels from the query in your template then follow the previous *Print all labels from a classic condition* example.

### The value variable
<a name="v12-alerting-overview-labels-templating-the-value-variable"></a>

The `$value` variable is a string containing the labels and values of all instant queries; threshold, reduce and math expressions, and classic conditions in the alert rule. It does not contain the results of range queries, as these can return anywhere from 10s to 10,000s of rows or metrics. If it did, for especially large queries a single alert could use 10s of MBs of memory and Grafana would run out of memory very quickly.

To print the `$value` variable in the summary you would write something like this: 

```
CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes: {{ $value }}
```

And it would look something like this:

```
CPU usage for instance1 has exceeded 80% for the last 5 minutes: [ var='A' labels={instance=instance1} value=81.234 ]
```

Here `var='A'` refers to the instant query with Ref ID A, `labels={instance=instance1}` refers to the labels, and `value=81.234` refers to the average CPU usage over the last 5 minutes.

If you want to print just some of the string instead of the full string then use the `$values` variable. It contains the same information as `$value`, but in a structured table, and is much easier to use than writing a regular expression to match just the text you want.

### The values variable
<a name="v12-alerting-overview-labels-templating-the-values-variable"></a>

The `$values` variable is a table containing the labels and floating point values of all instant queries and expressions, indexed by their Ref IDs.

To print the value of the instant query with Ref ID A:

```
CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes: {{ index $values "A" }}
```

For example, given an alert with the labels `instance=server1` and an instant query with the value `81.2345`, this would print:

```
CPU usage for instance1 has exceeded 80% for the last 5 minutes: 81.2345
```

If the query in Ref ID A is a range query rather than an instant query then add a reduce expression with Ref ID B and replace `(index $values "A")` with `(index $values "B")`:

```
CPU usage for {{ index $labels "instance" }} has exceeded 80% for the last 5 minutes: {{ index $values "B" }}
```

## Functions
<a name="v12-alerting-overview-labels-templating-functions"></a>

The following functions are available to you when templating labels and annotations:

**args**

The `args` function translates a list of objects to a map with keys arg0, arg1 etc. This is intended to allow multiple arguments to be passed to templates.

```
{{define "x"}}{{.arg0}} {{.arg1}}{{end}}{{template "x" (args 1 "2")}}
```

```
1 2
```
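The documented behavior of `args` can be reproduced outside Grafana by registering an equivalent function in a `FuncMap`. The implementation below is a sketch, not Grafana's source:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// args packs its arguments into a map keyed arg0, arg1, ... so that a
// sub-template can receive several inputs through a single dot.
func args(vals ...interface{}) map[string]interface{} {
	m := make(map[string]interface{}, len(vals))
	for i, v := range vals {
		m[fmt.Sprintf("arg%d", i)] = v
	}
	return m
}

// renderArgs executes the example template from the documentation.
func renderArgs() string {
	const src = `{{define "x"}}{{.arg0}} {{.arg1}}{{end}}{{template "x" (args 1 "2")}}`
	var buf bytes.Buffer
	tmpl := template.Must(template.New("demo").
		Funcs(template.FuncMap{"args": args}).Parse(src))
	tmpl.Execute(&buf, nil)
	return buf.String()
}

func main() {
	fmt.Println(renderArgs()) // 1 2
}
```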

**externalURL**

The `externalURL` function returns the external URL of the Grafana server.

```
{{ externalURL }}
```

```
https://example.com/grafana
```

**graphLink**

The `graphLink` function returns the path to the graphical view in [Explore in Grafana version 12](v12-explore.md) for the given expression and data source.

```
{{ graphLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}
```

```
/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":false,"range":true}]
```

**humanize**

The `humanize` function humanizes decimal numbers.

```
{{ humanize 1000.0 }}
```

```
1k
```
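A simplified sketch of SI humanization (not Grafana's exact implementation, which may format edge cases differently) repeatedly divides by 1000 and attaches the matching suffix:

```go
package main

import (
	"fmt"
	"math"
)

// humanizeSI is an illustrative sketch: divide by 1000 until the value
// is small enough, then print it with 4 significant digits and the
// matching SI suffix.
func humanizeSI(v float64) string {
	for _, suffix := range []string{"", "k", "M", "G", "T", "P"} {
		if math.Abs(v) < 1000 {
			return fmt.Sprintf("%.4g%s", v, suffix)
		}
		v /= 1000
	}
	return fmt.Sprintf("%.4g", v)
}

func main() {
	fmt.Println(humanizeSI(1000.0))    // 1k
	fmt.Println(humanizeSI(1500000.0)) // 1.5M
}
```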

**humanize1024**

The `humanize1024` function works similarly to `humanize` but uses 1024 as the base rather than 1000.

```
{{ humanize1024 1024.0 }}
```

```
1ki
```

**humanizeDuration**

The `humanizeDuration` function humanizes a duration in seconds.

```
{{ humanizeDuration 60.0 }}
```

```
1m 0s
```

**humanizePercentage**

The `humanizePercentage` function humanizes a ratio value to a percentage.

```
{{ humanizePercentage 0.2 }}
```

```
20%
```

**humanizeTimestamp**

The `humanizeTimestamp` function humanizes a Unix timestamp.

```
{{ humanizeTimestamp 1577836800.0 }}
```

```
2020-01-01 00:00:00 +0000 UTC
```

**match**

The `match` function matches the text against a regular expression pattern.

```
{{ match "a.*" "abc" }}
```

```
true
```

**pathPrefix**

The `pathPrefix` function returns the path of the Grafana server.

```
{{ pathPrefix }}
```

```
/grafana
```

**tableLink**

The `tableLink` function returns the path to the tabular view in [Explore in Grafana version 12](v12-explore.md) for the given expression and data source.

```
{{ tableLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}
```

```
/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":true,"range":false}]
```

**title**

The `title` function capitalizes the first character of each word.

```
{{ title "hello, world!" }}
```

```
Hello, World!
```

**toLower**

The `toLower` function returns all text in lowercase.

```
{{ toLower "Hello, world!" }}
```

```
hello, world!
```

**toUpper**

The `toUpper` function returns all text in uppercase.

```
{{ toUpper "Hello, world!" }}
```

```
HELLO, WORLD!
```

**reReplaceAll**

The `reReplaceAll` function replaces text matching the regular expression.

```
{{ reReplaceAll "localhost:(.*)" "example.com:$1" "localhost:8080" }}
```

```
example.com:8080
```
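The same transformation is available in plain Go via the regexp package; the sketch below mirrors the documented argument order (pattern, replacement, text):

```go
package main

import (
	"fmt"
	"regexp"
)

// reReplaceAll applies a regular-expression replacement with the same
// argument order as the template function: pattern, replacement, text.
func reReplaceAll(pattern, repl, text string) string {
	return regexp.MustCompile(pattern).ReplaceAllString(text, repl)
}

func main() {
	fmt.Println(reReplaceAll("localhost:(.*)", "example.com:$1", "localhost:8080"))
	// example.com:8080
}
```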

# About alert rules
<a name="v12-alerting-explore-rules"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

An alerting rule is a set of evaluation criteria that determines whether an alert instance will fire. The rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and the duration over which the condition needs to be met to start firing.

While queries and expressions select the data set to evaluate, a *condition* sets the threshold that the data must meet or exceed to create an alert.

An *interval* specifies how frequently an alerting rule is evaluated. *Duration*, when configured, indicates how long a condition must be met. Alert rules can also define alerting behavior in the absence of data.

**Topics**
+ [Alert rule types](v12-alerting-explore-rules-types.md)
+ [Recording rules](v12-alerting-explore-rule-recording.md)
+ [Queries and conditions](v12-alerting-explore-rules-queries.md)
+ [Alert instances](v12-alerting-rules-instances.md)
+ [Namespaces, folders and groups](v12-alerting-rules-grouping.md)
+ [Alert rule evaluation](v12-alerting-rules-evaluation.md)
+ [State and health of alerting rules](v12-alerting-explore-state.md)
+ [Notification templating](v12-alerting-rules-notification-templates.md)

# Alert rule types
<a name="v12-alerting-explore-rules-types"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Grafana supports several alert rule types. Learn more about each of the alert rule types, how they work, and decide which one is best for your use case.

## Grafana managed rules
<a name="v12-alerting-explore-rule-types-grafana"></a>

Grafana managed rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of your existing data sources.

In addition to supporting multiple data sources, you can add [expressions](v12-panels-query-xform-expressions.md) to transform your data and express alert conditions.

In Grafana managed alerting:
+ Alert rules are created within Grafana, based on one or more data sources.
+ Alert rules are evaluated by the alert rule evaluation engine from within Grafana.
+ Alerts are delivered using the internal Grafana Alertmanager.

**Note**  
You can also configure alerts to be delivered using an external Alertmanager, or use both internal and external Alertmanagers. For more information, see [Add an external alertmanager](v12-alerting-setup-alertmanager.md).

## Data source managed rules
<a name="v12-alerting-explore-rule-types-datasource"></a>

To create data source managed alert rules you must have a compatible Prometheus or Loki data source. You can check if your data source supports rule creation via Grafana by testing the data source and observing if the Ruler API is supported.

In data source managed alerting:
+ Alert rules are created and stored within the data source itself.
+ Alert rules can only be created based on Prometheus data.
+ Alert rule evaluation and delivery is distributed across multiple nodes for high availability and fault tolerance.

## Choose an alert rule type
<a name="v12-alerting-explore-rule-types-choose"></a>

When choosing which alert rule type to use, consider the following comparison between Grafana managed alert rules and data source managed alert rules.


| Feature | Grafana-managed alert rule | Loki/Mimir-managed alert rule | 
| --- | --- | --- | 
| Create alert rules based on data from any of our supported data sources | Yes | No: You can only create alert rules that are based on Prometheus data. The data source must have the Ruler API enabled.  | 
| Mix and match data sources | Yes | No | 
| Includes support for recording rules | No | Yes | 
| Add expressions to transform your data and set alert conditions | Yes | No | 
| Use images in alert notifications | Yes | No | 
| Scaling | More resource intensive: they depend on the Grafana database, are likely to suffer from transient errors, and only scale vertically. | Store alert rules within the data source itself and allow for “infinite” scaling. Generate and send alert notifications from the location of your data. | 
| Alert rule evaluation and delivery | Alert rule evaluation and delivery is done from within Grafana, by an external Alertmanager, or both. | Alert rule evaluation and alert delivery is distributed, meaning there is no single point of failure. | 

# Recording rules
<a name="v12-alerting-explore-rule-recording"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Recording rules are available for compatible Prometheus or Loki data sources, and for Grafana-managed alerts. Recording rules allow you to precompute frequently used queries for better performance.

A recording rule allows you to pre-compute frequently needed or computationally expensive expressions and save their result as a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query computationally expensive expressions repeatedly.

Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh.

Read more about [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) in Prometheus.

# Queries and conditions
<a name="v12-alerting-explore-rules-queries"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

In Grafana, queries play a vital role in fetching and transforming data from supported data sources, which include databases like MySQL and PostgreSQL, time series databases like Prometheus, InfluxDB and Graphite, and services like OpenSearch, Amazon CloudWatch, Azure Monitor and Google Cloud Monitoring.

For more information on supported data sources, see [Data sources and Grafana alerting](v12-alerting-overview-datasources.md).

The process of executing a query involves defining the data source, specifying the desired data to retrieve, and applying relevant filters or transformations. Query languages or syntaxes specific to the chosen data source are utilized for constructing these queries.

In Alerting, you define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.

An alert rule consists of one or more queries and expressions that select the data you want to measure.

For more information on queries and expressions, see [Query and transform data](v12-panels-query-xform.md).

## Data source queries
<a name="v12-alerting-explore-rules-queries-data-source-queries"></a>

Queries in Grafana can be applied in various ways, depending on the data source and query language being used. Each data source’s query editor provides a customized user interface that helps you write queries that take advantage of its unique capabilities.

Because of the differences between query languages, each data source query editor looks and functions differently. Depending on your data source, the query editor might provide auto-completion features, metric names, variable suggestions, or a visual query-building interface.

Some common types of query components include: 

**Metrics or data fields** – Specify the specific metrics or data fields you want to retrieve, such as CPU usage, network traffic, or sensor readings.

**Time range** – Define the time range for which you want to fetch data, such as the last hour, a specific day, or a custom time range.

**Filters** – Apply filters to narrow down the data based on specific criteria, such as filtering data by a specific tag, host, or application.

**Aggregations** – Perform aggregations on the data to calculate metrics like averages, sums, or counts over a given time period.

**Grouping** – Group the data by specific dimensions or tags to create aggregated views or breakdowns.

**Note**  
Grafana does not support alert queries with template variables. More information is available [here](https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514) in the Grafana Labs forums.

## Expression queries
<a name="v12-alerting-explore-rules-queries-expression-queries"></a>

In Grafana, an expression is used to perform calculations, transformations, or aggregations on the data source queried data. It allows you to create custom metrics or modify existing metrics based on mathematical operations, functions, or logical expressions.

By leveraging expression queries, users can perform tasks such as calculating the percentage change between two values, applying functions like logarithmic or trigonometric functions, aggregating data over specific time ranges or dimensions, and implementing conditional logic to handle different scenarios.

In Alerting, you can only use expressions for Grafana-managed alert rules. For each expression, you can choose from the math, reduce, and resample expressions. These are called multi-dimensional rules, because they generate a separate alert for each series.

You can also use a classic condition, which creates an alert rule that triggers a single alert when its condition is met. As a result, Grafana sends only a single alert even when alert conditions are met for multiple series.

**Note**  
Classic conditions exist mainly for compatibility reasons and should be avoided if possible.

**Reduce**

Aggregates time series values in the selected time range into a single value.

**Math**

Performs free-form math functions/operations on time series and number data. Can be used to preprocess time series data or to define an alert condition for number data.

**Resample**

Realigns a time range to a new set of timestamps, this is useful when comparing time series data from different data sources where the timestamps would otherwise not align.

**Threshold**

Checks if any time series data matches the threshold condition.

The threshold expression allows you to compare two single values. It returns `0` when the condition is false and `1` if the condition is true. The following threshold functions are available:
+ Is above (x > y)
+ Is below (x < y)
+ Is within range (x > y1 AND x < y2)
+ Is outside range (x < y1 OR x > y2)
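The semantics of these functions can be sketched as plain comparisons that yield `1` or `0` (the function names here are illustrative, not Grafana identifiers):

```go
package main

import "fmt"

// isAbove returns 1 when x > y, otherwise 0, matching the 0/1 output
// of a threshold expression.
func isAbove(x, y float64) float64 {
	if x > y {
		return 1
	}
	return 0
}

// isWithinRange returns 1 when x lies strictly between y1 and y2.
func isWithinRange(x, y1, y2 float64) float64 {
	if x > y1 && x < y2 {
		return 1
	}
	return 0
}

// isOutsideRange returns 1 when x lies outside (y1, y2). A value cannot
// be below y1 and above y2 at once, so "outside" is an OR.
func isOutsideRange(x, y1, y2 float64) float64 {
	if x < y1 || x > y2 {
		return 1
	}
	return 0
}

func main() {
	fmt.Println(isAbove(81.2, 80))          // 1
	fmt.Println(isWithinRange(5, 1, 10))    // 1
	fmt.Println(isOutsideRange(0.5, 1, 10)) // 1
}
```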

**Classic condition**

Checks if any time series data matches the alert condition.

**Note**  
Classic condition expression queries always produce one alert instance only, no matter how many time series meet the condition. Classic conditions exist mainly for compatibility reasons and should be avoided if possible.

## Aggregations
<a name="v12-alerting-explore-rules-queries-aggregations"></a>

Grafana Alerting provides the following aggregation functions to enable you to further refine your query.

These functions are available for **Reduce** and **Classic condition** expressions only.


| Function | Expression | What it does | 
| --- | --- | --- | 
| avg | Reduce / Classic | Displays the average of the values | 
| min | Reduce / Classic | Displays the lowest value | 
| max | Reduce / Classic | Displays the highest value | 
| sum | Reduce / Classic | Displays the sum of all values | 
| count | Reduce / Classic | Counts the number of values in the result | 
| last | Reduce / Classic | Displays the last value | 
| median | Reduce / Classic | Displays the median value | 
| diff | Classic | Displays the difference between the newest and oldest value | 
| diff_abs | Classic | Displays the absolute value of diff | 
| percent_diff | Classic | Displays the percentage value of the difference between newest and oldest value | 
| percent_diff_abs | Classic | Displays the absolute value of percent_diff | 
| count_non_null | Classic | Displays a count of values in the result set that aren’t null | 

## Alert condition
<a name="v12-alerting-explore-rules-queries-alert-condition"></a>

An alert condition is the query or expression that determines whether the alert fires, based on the value it yields. Each alert rule can have only one condition.

After you have defined your queries and/or expressions, choose one of them as the alert rule condition.

When the queried data satisfies the defined condition, Grafana triggers the associated alert, which can be configured to send notifications through various channels like email, Slack, or PagerDuty. The notifications inform you about the condition being met, allowing you to take appropriate actions or investigate the underlying issue.

By default, the last expression added is used as the alert condition.

## Recovery threshold
<a name="v12-alerting-explore-rules-queries-recovery-threshold"></a>

To reduce the noise of flapping alerts, you can set a recovery threshold different to the alert threshold.

Flapping alerts occur when a metric hovers around the alert threshold condition and may lead to frequent state changes, resulting in too many notifications being generated.

Grafana-managed alert rules are evaluated for a specific interval of time. During each evaluation, the result of the query is checked against the threshold set in the alert rule. If the value of a metric is above the threshold, an alert rule fires and a notification is sent. When the value goes below the threshold and there is an active alert for this metric, the alert is resolved, and another notification is sent.

It can be tricky to create an alert rule for a noisy metric. That is, when the value of a metric continually goes above and below a threshold. This is called flapping and results in a series of firing - resolved - firing notifications and a noisy alert state history.

For example, if you have an alert for latency with a threshold of 1000ms and the value fluctuates around 1000 (say 980 -> 1010 -> 990 -> 1020, and so on), then each crossing triggers a notification.

To solve this problem, you can set a custom recovery threshold, which means having two thresholds instead of one. An alert fires when the first threshold is crossed and resolves only when the second threshold is crossed.

For example, you could set a threshold of 1000ms and a recovery threshold of 900ms. This way, an alert rule will only stop firing when it goes under 900ms and flapping is reduced.
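This recovery-threshold behavior is classic hysteresis, and can be sketched as follows (the type and field names are illustrative, not Grafana internals):

```go
package main

import "fmt"

// hysteresisAlert sketches the recovery-threshold behavior: the alert
// fires above `threshold` and resolves only below `recovery`.
type hysteresisAlert struct {
	threshold, recovery float64
	firing              bool
}

// observe feeds one evaluation result and reports the firing state.
// Values between the two thresholds leave the state unchanged.
func (a *hysteresisAlert) observe(v float64) bool {
	switch {
	case v > a.threshold:
		a.firing = true
	case v < a.recovery:
		a.firing = false
	}
	return a.firing
}

func main() {
	a := &hysteresisAlert{threshold: 1000, recovery: 900}
	// Latency hovering near 1000ms no longer flaps the alert state:
	for _, v := range []float64{980, 1010, 990, 1020, 890} {
		fmt.Printf("%v -> firing=%v\n", v, a.observe(v))
	}
}
```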

# Alert instances
<a name="v12-alerting-rules-instances"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Grafana managed alerts support multi-dimensional alerting. Each alert rule can create multiple alert instances. This is powerful if you are observing multiple series in a single expression.

Consider the following PromQL expression:

```
sum by(cpu) (
  rate(node_cpu_seconds_total{mode!="idle"}[1m])
)
```

A rule using this expression creates one alert instance for each CPU observed after the first evaluation, allowing a single rule to report the status of every CPU.

# Namespaces, folders and groups
<a name="v12-alerting-rules-grouping"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Alerts can be organized using folders for Grafana managed rules, and using namespaces and group names for Mimir, Loki, or Prometheus rules.

**Namespaces and folders**

When creating Grafana-managed rules, the folder can be used to perform access control and grant or deny access to all rules within a specific folder.

A namespace contains one or more groups. The rules within a group are run sequentially at a regular interval. The default interval is one minute.

**Groups**

The rules within a group are run sequentially at a regular interval, meaning no rules are evaluated at the same time and they run in order of appearance. The default interval is one minute. You can rename Grafana Mimir or Loki rule namespaces and groups, and edit group evaluation intervals.

**Tip**  
If you want rules to be evaluated concurrently and with different intervals, consider storing them in different groups.

**Note**  
Grafana managed alert rules are evaluated concurrently instead of sequentially.

# Alert rule evaluation
<a name="v12-alerting-rules-evaluation"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.

To do this, you need to make sure that your alert rule is in the right evaluation group and set a pending period time that works best for your use case.

## Evaluation group
<a name="v12-alerting-rules-evaluation-group"></a>

Every alert rule is part of an evaluation group. Each evaluation group contains an evaluation interval that determines how frequently the alert rule is checked.

**Data-source managed** alert rules within the same group are evaluated one after the other, while alert rules in different groups can be evaluated simultaneously. This feature is especially useful when you want to ensure that recording rules are evaluated before any alert rules.

**Grafana-managed** alert rules are evaluated at the same time, regardless of alert rule group. The default evaluation interval is set at 10 seconds, which means that Grafana-managed alert rules are evaluated every 10 seconds to the closest 10-second window on the clock, for example, 10:00:00, 10:00:10, 10:00:20, and so on. You can also configure your own evaluation interval, if required.

**Note**  
Evaluation groups and alert grouping in notification policies are two separate things. Grouping in notification policies allows multiple alerts sharing the same labels to be sent in the same notification.

## Pending period
<a name="v12-alerting-rules-evaluation-pending-period"></a>

By setting a pending period, you can avoid unnecessary alerts for temporary problems.

The pending period is the amount of time an alert rule can be in breach of its condition before it starts firing.

**Example**

Imagine you have an alert rule with the evaluation interval set to every 30 seconds and the pending period set to 90 seconds.

Evaluation will occur as follows:

[00:30] First evaluation - condition not met.

[01:00] Second evaluation - condition breached. Pending counter starts. **Alert starts pending.**

[01:30] Third evaluation - condition breached. Pending counter = 30s. **Pending state.**

[02:00] Fourth evaluation - condition breached. Pending counter = 60s. **Pending state.**

[02:30] Fifth evaluation - condition breached. Pending counter = 90s. **Alert starts firing.**

If the alert rule has a condition that needs to be in breach for a certain amount of time before it takes action, then its state changes as follows:
+ When the condition is first breached, the rule goes into a "pending" state.
+ The rule stays in the "pending" state until the condition has been broken for the required amount of time - pending period.
+ Once the required time has passed, the rule goes into a "firing" state.
+ If the condition is no longer broken during the pending period, the rule goes back to its normal state.

**Note**  
If you want to skip the pending state, set the pending period to 0. Your alert rule then starts firing as soon as the condition is breached.

When an alert rule fires, alert instances are produced, which are then sent to the Alertmanager.
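For data-source managed rules, the example above corresponds to the `for` field of a Prometheus-style alerting rule (the rule name and expression here are illustrative):

```yaml
groups:
  - name: latency-alerts      # illustrative group name
    interval: 30s             # evaluation interval from the example above
    rules:
      - alert: HighLatency    # illustrative rule name
        expr: avg_request_latency_seconds > 0.5
        for: 90s              # pending period: condition must hold for 90s before firing
```

Setting `for: 0s` skips the pending state, as described in the note above.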

# State and health of alerting rules
<a name="v12-alerting-explore-state"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

The state and health of alerting rules help you understand several key status indicators about your alerts.

There are three key components: *alert rule state*, *alert instance state*, and *alert rule health*. Although related, each component conveys subtly different information.

**Alert rule state**

An alert rule can be in one of the following states:


| State | Description | 
| --- | --- | 
| Normal | None of the time series returned by the evaluation engine is in a `Pending` or `Firing` state. | 
| Pending | At least one time series returned by the evaluation engine is `Pending`. | 
| Firing | At least one time series returned by the evaluation engine is `Firing`. | 
| Recovering | The alert condition is no longer firing but has not yet returned to normal. | 

**Note**  
Alerts transition first to `pending` and then to `firing`; it therefore takes at least two evaluation cycles before an alert fires.

**Alert instance state**

An alert instance can be in one of the following states:


| State | Description | 
| --- | --- | 
| Normal | The state of an alert that is neither firing nor pending, everything is working correctly. | 
| Pending | The state of an alert that has been active for less than the configured threshold duration. | 
| Alerting | The state of an alert that has been active for longer than the configured threshold duration. | 
| Recovering | The state of an alert that was previously firing but the alert condition is no longer met. The alert has not yet returned to normal. | 
| NoData | No data has been received for the configured time window. | 
| Error | The error that occurred when attempting to evaluate an alerting rule. | 

**Keep last state**

An alert rule can be configured to keep the last state when a `NoData` or `Error` state is encountered. This prevents alerts from firing, and also from resolving and re-firing. Just like normal evaluation, the alert rule transitions from `Pending` to `Firing` after the pending period has elapsed.

**Alert rule health**

An alert rule can have one of the following health statuses:


| State | Description | 
| --- | --- | 
| Ok | No error when evaluating an alerting rule. | 
| Error | An error occurred when evaluating an alerting rule. | 
| NoData | The absence of data in at least one time series returned during a rule evaluation. | 

**Special alerts for `NoData` and `Error`**

When evaluation of an alerting rule produces state `NoData` or `Error`, Grafana Alerting will generate alert instances that have the following additional labels:


| Label | Description | 
| --- | --- | 
| alertname | Either `DatasourceNoData` or `DatasourceError` depending on the state. | 
| datasource_uid | The UID of the data source that caused the state. |

You can handle these alerts the same way as regular alerts, for example by adding a silence or routing them to a contact point.
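For example, a notification policy expressed in Alertmanager-style YAML that routes these data source health alerts to a dedicated contact point might look like this (the receiver names are illustrative):

```yaml
route:
  receiver: default-contact-point       # placeholder default receiver
  routes:
    - matchers:
        - alertname =~ "DatasourceNoData|DatasourceError"
      receiver: data-source-health-team # hypothetical contact point for health alerts
```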

# Notification templating
<a name="v12-alerting-rules-notification-templates"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Notifications sent via contact points are built using notification templates. Grafana’s default templates are based on the [Go templating system](https://golang.org/pkg/text/template) where some fields are evaluated as text, while others are evaluated as HTML (which can affect escaping).

The default template [default_template.go](https://github.com/grafana/alerting/blob/main/templates/default_template.go) is a useful reference for custom templates.

Since most of the contact point fields can be templated, you can create reusable custom templates and use them in multiple contact points. To learn about custom notifications using templates, see [Customize notifications](v12-alerting-manage-notifications.md).

**Nested templates**

You can embed templates within other templates.

For example, you can define a template fragment using the `define` keyword.

```
{{ define "mytemplate" }}
  {{ len .Alerts.Firing }} firing. {{ len .Alerts.Resolved }} resolved.
{{ end }}
```

You can then embed custom templates within this fragment using the `template` keyword. For example:

```
Alert summary:
{{ template "mytemplate" . }}
```

You can use any of the following built-in template options to embed custom templates.


| Name | Notes | 
| --- | --- | 
| `default.title` | Displays high-level status information. | 
| `default.message` | Provides a formatted summary of firing and resolved alerts. | 
| `teams.default.message` | Similar to `default.message`, formatted for Microsoft Teams. |

**HTML in notification templates**

HTML in alerting notification templates is escaped. We do not support rendering of HTML in the resulting notification.

Some notifiers support alternative methods of changing the look and feel of the resulting notification. For example, Grafana installs the base template for alerting emails to `<grafana-install-dir>/public/emails/ng_alert_notification.html`. You can edit this file to change the appearance of all alerting emails.

# Alertmanager
<a name="v12-alerting-explore-alertmanager"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Alertmanager enables you to quickly and efficiently manage and respond to alerts. It receives alerts; handles silencing, inhibition, grouping, and routing; and sends notifications out via your channel of choice, for example, email or Slack.

In Grafana, you can use the Grafana Alertmanager or an external Alertmanager. You can also run multiple Alertmanagers; your decision depends on your setup and where your alerts are generated.

**Grafana Alertmanager**

Grafana Alertmanager is an internal Alertmanager that is pre-configured and available for selection by default.

The Grafana Alertmanager can receive alerts from Grafana, but it cannot receive alerts from outside Grafana, for example, from Mimir or Loki.

**Note**  
Inhibition rules are not supported in the Grafana Alertmanager.

**External Alertmanager**

If you want to use a single alertmanager to receive all your Grafana, Loki, Mimir, and Prometheus alerts, you can set up Grafana to use an external Alertmanager. This external Alertmanager can be configured and administered from within Grafana itself.

Here are two examples of when you might want to configure your own external alertmanager and send your alerts there instead of the Grafana Alertmanager:

1. You already have Alertmanagers on-premises or in your own Cloud infrastructure that you have set up and still want to use, because you have other alert generators, such as Prometheus.

1. You want to use both Prometheus on-premise and hosted Grafana to send alerts to the same alertmanager that runs in your Cloud infrastructure.

Alertmanagers are visible from the dropdown menu on the Alerting Contact points and Notification policies pages.

If you are provisioning your data source, set the flag `handleGrafanaManagedAlerts` in the `jsonData` field to `true` to send Grafana-managed alerts to this Alertmanager.
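A minimal data source provisioning sketch for this setup, assuming a reachable external Alertmanager (the name and URL are placeholders):

```yaml
apiVersion: 1
datasources:
  - name: external-alertmanager               # placeholder data source name
    type: alertmanager
    access: proxy
    url: http://alertmanager.example.com:9093 # placeholder URL of your Alertmanager
    jsonData:
      # Send Grafana-managed alerts to this Alertmanager
      handleGrafanaManagedAlerts: true
```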

# Contact points
<a name="v12-alerting-explore-contacts"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Contact points contain the configuration for sending notifications. A contact point is a list of integrations, each of which sends a notification to a particular email address, service, or URL. Contact points can have multiple integrations of the same kind, or a combination of integrations of different kinds. For example, a contact point could contain a PagerDuty integration; an Amazon SNS and Slack integration; or a PagerDuty integration, a Slack integration, and two Amazon SNS integrations. You can also configure a contact point with no integrations, in which case no notifications are sent.
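As a sketch of a contact point with two integrations, using Grafana's file provisioning format (all names, UIDs, and credentials here are placeholders):

```yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: ops-oncall                # placeholder contact point name
    receivers:
      - uid: ops-slack              # placeholder UID; a Slack integration
        type: slack
        settings:
          url: https://hooks.slack.com/services/PLACEHOLDER  # placeholder webhook URL
      - uid: ops-pagerduty          # a PagerDuty integration in the same contact point
        type: pagerduty
        settings:
          integrationKey: PLACEHOLDER-KEY                    # placeholder key
```

When an alert reaches this contact point, a notification is sent to each integration in the list.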

A contact point cannot send notifications until it has been added to a notification policy. A notification policy can only send alerts to one contact point, but a contact point can be added to a number of notification policies at the same time. When an alert matches a notification policy, the alert is sent to the contact point in that notification policy, which then sends a notification to each integration in its configuration.

Contact points can be configured for the Grafana Alertmanager as well as external alertmanagers.

You can also use notification templating to customize notification messages for contact point types.

**Supported contact point types**

The following table lists the contact point types supported by Grafana.


| Name | Type | 
| --- | --- | 
| Amazon SNS | `sns` | 
| OpsGenie | `opsgenie` | 
| Pager Duty | `pagerduty` | 
| Slack | `slack` | 
| VictorOps | `victorops` | 

For more information about contact points, see [Configure contact points](v12-alerting-configure-contactpoints.md) and [Customize notifications](v12-alerting-manage-notifications.md).

# Notifications
<a name="v12-alerting-explore-notifications"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Choosing how, when, and where to send your alert notifications is an important part of setting up your alerting system. These decisions will have a direct impact on your ability to resolve issues quickly and not miss anything important.

As a first step, define your [contact points](v12-alerting-explore-contacts.md), which define where to send your alert notifications. A contact point is a set of one or more integrations that are used to deliver notifications. Add notification templates to contact points for reuse and consistent messaging in your notifications.

Next, create a notification policy which is a set of rules for where, when and how your alerts are routed to contact points. In a notification policy, you define where to send your alert notifications by choosing one of the contact points you created.

## Alertmanagers
<a name="v12-alerting-explore-notifications-alertmanager"></a>

Grafana uses Alertmanagers to send notifications for firing and resolved alerts. Grafana has its own Alertmanager, referred to as **Grafana** in the user interface, but also supports sending notifications from other Alertmanagers too, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/). The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent; how often a notification should be sent; and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or as separate notifications.

## Notification policies
<a name="v12-alerting-explore-notifications-policies"></a>

Notification policies control when and where notifications are sent. A notification policy can choose to send all alerts together in the same notification, send alerts in grouped notifications based on a set of labels, or send alerts as separate notifications. You can configure each notification policy to control how often notifications should be sent as well as having one or more mute timings to inhibit notifications at certain times of the day and on certain days of the week.

Notification policies are organized in a tree structure where at the root of the tree there is a notification policy called the default policy. There can be only one default policy and the default policy cannot be deleted.

Specific routing policies are children of the root policy and can be used to match either all alerts or a subset of alerts based on a set of matching labels. A notification policy matches an alert when its matching labels match the labels in the alert.

A nested policy can have its own nested policies, which allow for additional matching of alerts. An example of a nested policy could be sending infrastructure alerts to the Ops team; while a child policy might send high priority alerts to Pagerduty and low priority alerts to Slack.

All alerts, irrespective of their labels, match the default policy. However, when the default policy receives an alert it looks at each nested policy and sends the alert to the first nested policy that matches the alert. If the nested policy has further nested policies, then it can attempt to match the alert against one of its nested policies. If no nested policies match the alert then the policy itself is the matching policy. If there are no nested policies, or no nested policies match the alert, then the default policy is the matching policy.

For more detailed information about notification policies, see [Notification policies](v12-alerting-explore-notifications-policies-details.md).

## Notification templates
<a name="v12-alerting-explore-notifications-templating"></a>

You can customize notifications with templates. For example, templates can be used to change the title and message of notifications sent to Slack.

Templates are not limited to an individual integration or contact point, but instead can be used in a number of integrations in the same contact point and even integrations across different contact points. For example, a Grafana user can create a template called `custom_subject_or_title` and use it for both templating subjects in Pager Duty and titles of Slack messages without having to create two separate templates.

All notifications templates are written in [Go’s templating language](https://pkg.go.dev/text/template), and are in the Contact points tab on the Alerting page.

For more detailed information about customizing notifications, see [Customize notifications](v12-alerting-manage-notifications.md).

## Silences
<a name="v12-alerting-explore-notifications-silences"></a>

You can use silences to mute notifications from one or more firing rules. Silences do not stop alerts from firing or being resolved, nor do they hide firing alerts in the user interface. A silence lasts for its configured duration, which can be set in minutes, hours, days, months, or years.

For more detailed information about using silences, see [Silencing alert notifications](v12-alerting-silences.md).

# Notification policies
<a name="v12-alerting-explore-notifications-policies-details"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Notification policies provide you with a flexible way of routing alerts to various different receivers. Using label matchers, you can modify alert notification delivery without having to update every individual alert rule.

In this section, you will learn more about how notification policies work and are structured, so that you can make the most out of setting up your notification policies.

## Policy tree
<a name="v12-alerting-explore-notifications-policy-tree"></a>

Notification policies are *not* a list, but rather are structured according to a tree structure. This means that each policy can have child policies, and so on. The root of the notification policy tree is called the **Default notification policy**.

Each policy consists of a set of label matchers (0 or more) that specify which labels it is or isn't interested in handling.

For more information about label matching, see [How label matching works](v12-alerting-overview-labels-matching.md).

**Note**  
If you haven’t configured any label matchers for your notification policy, your notification policy will match *all* alert instances. This may prevent child policies from being evaluated unless you have enabled **Continue matching siblings** on the notification policy.

## Routing
<a name="v12-alerting-explore-notifications-routing"></a>

To determine which notification policy will handle which alert instances, you have to start by looking at the existing set of notification policies, starting with the default notification policy.

If no policies other than the default policy are configured, the default policy will handle the alert instance.

If policies other than the default policy are defined, Grafana evaluates those notification policies in the order they are displayed.

If a notification policy has label matchers that match the labels of the alert instance, it descends into its child policies and, if there are any, continues to look for child policies whose label matchers further narrow down the set of labels, and so forth until no more child policies are found.

If no child policies are defined in a notification policy or if none of the child policies have any label matchers that match the alert instance’s labels, the parent notification policy is used.

As soon as a matching policy is found, the system does not continue to look for other matching policies. If you want to continue to look for other policies that may match, enable **Continue matching siblings** on that particular policy.

Lastly, if none of the notification policies are selected, the default notification policy is used.

### Routing example
<a name="v12-alerting-explore-notifications-routing-example"></a>

Here is an example of a relatively simple notification policy tree and some alert instances.

![\[An image showing a set of notification policies in a tree structure, and a set of alert instances with different labels to match to the policies.\]](http://docs.aws.amazon.com/grafana/latest/userguide/images/notification-routing.png)


Here’s a breakdown of how these policies are selected:

**Pod stuck in CrashLoop** does not have a `severity` label, so none of its child policies are matched. It does have a `team=operations` label, so the first policy is matched.

The `team=security` policy is not evaluated since we already found a match and **Continue matching siblings** was not configured for that policy.

**Disk Usage – 80%** has both a `team` and `severity` label, and matches a child policy of the operations team.

**Unauthorized log entry** has a `team` label but does not match the first policy (`team=operations`) since the values are not the same, so it will continue searching and match the `team=security` policy. It does not have any child policies, so the additional `severity=high` label is ignored.
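In Alertmanager-style YAML, the policy tree from this example could be sketched as follows (the receiver names are illustrative):

```yaml
route:
  receiver: default-receiver            # the default notification policy
  routes:
    - matchers: [ 'team = "operations"' ]
      receiver: ops-receiver
      routes:
        - matchers: [ 'severity = "high"' ]
          receiver: ops-high-priority   # e.g. a paging integration
    - matchers: [ 'team = "security"' ]
      receiver: security-receiver       # no child policies
```

Following the breakdown above: an alert with only `team=operations` stops at `ops-receiver`, one with `team=operations` and a matching `severity` descends into the child policy, and one with `team=security` matches the second branch regardless of any other labels.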

## Inheritance
<a name="v12-alerting-explore-notifications-inheritance"></a>

In addition to child policies being a useful concept for routing alert instances, they also inherit properties from their parent policy. This also applies to any policies that are child policies of the default notification policy.

The following properties are inherited by child policies:
+ Contact point
+ Grouping options
+ Timing options
+ Mute timings

Each of these properties can be overwritten by an individual policy should you wish to override the inherited properties.

To inherit a contact point from the parent policy, leave it blank. To override the inherited grouping options, enable **Override grouping**. To override the inherited timing options, enable **Override general timings**.

### Inheritance example
<a name="v12-alerting-explore-notifications-inheritance-example"></a>

The example below shows how the notification policy tree from our previous example allows the child policies of the `team=operations` to inherit its contact point.

In this way, we can avoid having to specify the same contact point multiple times for each child policy.

![\[An image showing a set of notification policies in a tree structure, with contact points assigned to some of the policies, but with some child policies inheriting the contact points of their parents, rather than defining their own.\]](http://docs.aws.amazon.com/grafana/latest/userguide/images/notification-inheritance.png)


## Additional configuration options
<a name="v12-alerting-explore-notifications-additional-configuration-options"></a>

### Grouping
<a name="v12-alerting-explore-notifications-grouping"></a>

Grouping is an important feature of Grafana Alerting as it allows you to batch relevant alerts together into a smaller number of notifications. This is particularly important if notifications are delivered to first responders, such as on-call engineers, where receiving lots of notifications in a short period of time can be overwhelming and in some cases can negatively impact a first responder's ability to respond to an incident. For example, consider a large outage where many of your systems are down. In this case, grouping can be the difference between receiving 1 phone call and 100 phone calls.

You choose how alerts are grouped together using the Group by option in a notification policy. By default, notification policies in Grafana group alerts together by alert rule using the `alertname` and `grafana_folder` labels (since alert names are not unique across multiple folders). Should you wish to group alerts by something other than the alert rule, change the grouping to any other combination of labels.

#### Disable grouping
<a name="v12-alerting-explore-notifications-disable-grouping"></a>

Should you wish to receive every alert as a separate notification, you can do so by grouping by a special label called `...`. This is useful when your alerts are being delivered to an automated system instead of a first-responder.

#### A single group for all alerts
<a name="v12-alerting-explore-notifications-a-single-group-for-all-alerts"></a>

Should you wish to receive all alerts together in a single notification, you can do so by leaving Group by empty.

### Timing options
<a name="v12-alerting-explore-notifications-timing-options"></a>

The timing options decide how often notifications are sent for each group of alerts. There are three timers that you need to know about: Group wait, Group interval, and Repeat interval.

#### Group wait
<a name="v12-alerting-explore-notifications-group-wait"></a>

Group wait is the amount of time Grafana waits before sending the first notification for a new group of alerts. The longer Group wait is, the more time you have for other alerts to arrive. The shorter Group wait is, the earlier the first notification is sent, but at the risk of sending incomplete notifications. Choose a Group wait that makes the most sense for your use case.

**Default** 30 seconds

#### Group interval
<a name="v12-alerting-explore-notifications-group-interval"></a>

Once the first notification has been sent for a new group of alerts, Grafana starts the Group interval timer. This is the amount of time Grafana waits before sending notifications about changes to the group. For example, another firing alert might have just been added to the group while an existing alert might have resolved. If an alert was too late to be included in the first notification due to Group wait, it will be included in subsequent notifications after Group interval. Once Group interval has elapsed, Grafana resets the Group interval timer. This repeats until there are no more alerts in the group after which the group is deleted.

**Default** 5 minutes

#### Repeat interval
<a name="v12-alerting-explore-notifications-repeat-interval"></a>

Repeat interval decides how often notifications are repeated if the group has not changed since the last notification. You can think of these as reminders that some alerts are still firing. Repeat interval is closely related to Group interval: your Repeat interval must not only be greater than or equal to Group interval, but must also be a multiple of it. If Repeat interval is not a multiple of Group interval, it is coerced into one. For example, if your Group interval is 5 minutes and your Repeat interval is 9 minutes, the Repeat interval is rounded up to the nearest multiple of 5, which is 10 minutes.

**Default** 4 hours
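Putting the three timers together, an Alertmanager-style route with the default values described above looks like this (the receiver name is a placeholder):

```yaml
route:
  receiver: default-receiver
  group_by: [alertname, grafana_folder]  # default grouping by alert rule
  group_wait: 30s        # wait before the first notification for a new group
  group_interval: 5m     # wait before notifying about changes to the group
  repeat_interval: 4h    # resend reminders while the group is unchanged
```

Setting `group_by: ['...']` instead disables grouping, so every alert is delivered as a separate notification, while leaving `group_by` empty puts all alerts in a single group.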

# Alerting high availability
<a name="v12-alerting-explore-high-availability"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Amazon Managed Grafana is configured for high availability, including running multiple instances across multiple availability zones for each workspace that you create.

Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivery of notifications. In this model, the evaluation of alert rules is done in the alert generator and the delivery of notifications is done in the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.

With high availability configurations, all alert rules are evaluated on all instances. You can think of the evaluation of alert rules as being duplicated. This is how Grafana Alerting makes sure that as long as at least one Grafana instance is working, alert rules are still evaluated and notifications for alerts are still sent. You will see this duplication in state history, and it is a good way to tell whether you are using high availability.

# Set up Alerting
<a name="v12-alerting-setup"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Configure the features and integrations that you need to create and manage your alerts.

**Prerequisites**

Before you set up alerting, you must do the following.
+ Configure your [data sources](AMG-data-sources.md).
+ Ensure that the data sources you choose are compatible with and supported by [Grafana alerting](v12-alerting-overview-datasources.md).

**To set up alerting**

1. Configure [alert rules](v12-alerting-configure.md).
   + Create Grafana-managed or data-source managed alert rules and recording rules.

1. Configure [contact points](v12-alerting-configure-contactpoints.md).
   + Check the default contact point, and update the contact for your system.
   + Optionally, add new contact points and integrations.

1. Configure [notification policies](v12-alerting-explore-notifications-policies-details.md)
   + Check the default notification policy, and update for your system.
   + Optionally, add additional nested policies.
   + Optionally, add labels and label matchers to control alert routing.

The following topics give you more information about additional configuration options, including configuring external alert managers and routing Grafana-managed alerts outside of Grafana.

**Topics**
+ [Adding an external Alertmanager](v12-alerting-setup-alertmanager.md)
+ [Provisioning Grafana Alerting resources](v12-alerting-setup-provision.md)

# Adding an external Alertmanager
<a name="v12-alerting-setup-alertmanager"></a>


Set up Grafana to use an external Alertmanager as a single Alertmanager to receive all of your alerts. This external Alertmanager can then be configured and administered from within Grafana itself.

**Note**  
You can't use Amazon Managed Service for Prometheus as an external Alertmanager.

Once you have added the Alertmanager, you can use the Grafana Alerting UI to manage silences, contact points, and notification policies. A dropdown option on these pages allows you to switch between Alertmanagers.

External Alertmanagers are configured as data sources using Grafana Configuration from the main Grafana navigation menu. This enables you to manage the contact points and notification policies of external Alertmanagers from within Grafana, and it encrypts HTTP basic authentication credentials that were previously visible when external Alertmanagers were configured by URL.

**Note**  
Starting with Grafana 9.2, the URL configuration of external alertmanagers from the Admin tab on the Alerting page is deprecated. It will be removed in a future release.

**To add an external Alertmanager**

1. Choose **Connections** from the main left menu.

1. Search for `Alertmanager`.

1. Choose the **Create a new data source** button.

1. Fill out the fields on the page, as required.

   If you are provisioning your data source, set the flag `handleGrafanaManagedAlerts` in the `jsonData` field to `true` to send Grafana-managed alerts to this Alertmanager.
**Note**  
Prometheus, Grafana Mimir, and Cortex implementations of Alertmanager are supported. For Prometheus, contact points and notification policies are read-only in the Grafana Alerting UI.

1. Choose **Save & test**.
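If you manage your Grafana configuration as code, the data source from the preceding procedure can also be provisioned with the Terraform provider instead of through the UI. The following is a minimal sketch, assuming the provider's `grafana_data_source` resource and a provider version that supports `json_data_encoded`; the resource name, URL placeholder, and `implementation` value are illustrative.

```
resource "grafana_data_source" "external_alertmanager" {
    name = "My External Alertmanager"
    type = "alertmanager"
    url  = "<YOUR_ALERTMANAGER_URL>"

    // `implementation` identifies the Alertmanager flavor (Prometheus, Mimir, or Cortex).
    // `handleGrafanaManagedAlerts` sends Grafana-managed alerts to this Alertmanager.
    json_data_encoded = jsonencode({
        implementation             = "prometheus"
        handleGrafanaManagedAlerts = true
    })
}
```

After running `terraform apply`, the Alertmanager appears as a data source just as if it had been created through **Connections**.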

# Provisioning Grafana Alerting resources
<a name="v12-alerting-setup-provision"></a>


Alerting infrastructure is often complex, with many pieces of the pipeline that often live in different places. Scaling this across multiple teams and organizations is an especially challenging task. Grafana Alerting provisioning makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.

There are two options to choose from:

1. Provision your alerting resources using the Alerting Provisioning HTTP API.
**Note**  
Typically, you cannot edit API-provisioned alert rules from the Grafana UI.  
To enable editing, add the `X-Disable-Provenance` header to the following requests when creating or editing your alert rules in the API:  

   ```
   POST /api/v1/provisioning/alert-rules
   PUT /api/v1/provisioning/alert-rules/{UID}
   ```

1. Provision your alerting resources using Terraform.

**Note**  
Currently, provisioning for Grafana Alerting supports alert rules, contact points, mute timings, and templates. Provisioned alerting resources using file provisioning or Terraform can only be edited in the source that created them and not from within Grafana or any other source. For example, if you provision your alerting resources using files from disk, you cannot edit the data in Terraform or from within Grafana.

**Topics**
+ [Create and manage alerting resources using Terraform](v12-alerting-setup-provision-terraform.md)
+ [Viewing provisioned alerting resources in Grafana](v12-alerting-setup-provision-view.md)

# Create and manage alerting resources using Terraform
<a name="v12-alerting-setup-provision-terraform"></a>


Use Terraform’s Grafana Provider to manage your alerting resources and provision them into your Grafana system. Terraform provider support for Grafana Alerting makes it easy to create, manage, and maintain your entire Grafana Alerting stack as code.

For more information on managing your alerting resources using Terraform, refer to the [Grafana Provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs) documentation in the Terraform documentation.

Complete the following tasks to create and manage your alerting resources using Terraform.

1. Create an API key for provisioning.

1. Configure the Terraform provider.

1. Define your alerting resources in Terraform.

1. Run `terraform apply` to provision your alerting resources.

## Prerequisites
<a name="v12-alerting-setup-provision-tf-prerequisites"></a>
+ Ensure you have the grafana/grafana [Terraform provider](https://registry.terraform.io/providers/grafana/grafana/1.28.0) 1.28.0 or higher.
+ Ensure you are using Grafana 9.1 or higher. If you created your Amazon Managed Grafana workspace with Grafana version 9 or later, this requirement is met.

## Create an API key for provisioning
<a name="v12-alerting-setup-provision-tf-apikey"></a>

You can [create a normal Grafana API key](Using-Grafana-APIs.md) to authenticate Terraform with Grafana. Most existing tooling using API keys should automatically work with the new Grafana Alerting support. For information specifically about creating keys for use with Terraform, see [Using Terraform for Amazon Managed Grafana automation](https://aws-observability.github.io/observability-best-practices/recipes/recipes/amg-automation-tf/).

**To create an API key for provisioning**

1. Create a new service account for your CI pipeline.

1. Assign the role “Access the alert rules Provisioning API.”

1. Create a new service account token.

1. Name and save the token for use in Terraform.

Alternatively, you can use basic authentication. To view all the supported authentication formats, see [Grafana authentication](https://registry.terraform.io/providers/grafana/grafana/latest/docs#authentication) in the Terraform documentation.

## Configure the Terraform provider
<a name="v12-alerting-setup-provision-tf-configure"></a>

Grafana Alerting support is included as part of the [Grafana Terraform provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs).

The following is an example you can use to configure the Terraform provider.

```
terraform {
    required_providers {
        grafana = {
            source = "grafana/grafana"
            version = ">= 1.28.2"
        }
    }
}

provider "grafana" {
    url = "<YOUR_GRAFANA_URL>"
    auth = "<YOUR_GRAFANA_API_KEY>"
}
```

## Provision contact points and templates
<a name="v12-alerting-setup-provision-tf-contacts"></a>

Contact points connect an alerting stack to the outside world. They tell Grafana how to connect to your external systems and where to deliver notifications. There are over fifteen different [integrations](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/contact_point#optional) to choose from. This example uses a Slack contact point.

**To provision contact points and templates**

1. Copy this code block into a .tf file on your local machine. Replace *<slack-webhook-url>* with your Slack webhook URL (or other contact point details).

   This example creates a contact point that sends alert notifications to Slack.

   ```
   resource "grafana_contact_point" "my_slack_contact_point" {
       name = "Send to My Slack Channel"
   
       slack {
           url = "<slack-webhook-url>"
           text = <<EOT
   {{ len .Alerts.Firing }} alerts are firing!
   
   Alert summaries:
   {{ range .Alerts.Firing }}
   {{ template "Alert Instance Template" . }}
   {{ end }}
   EOT
       }
   }
   ```

1. Enter text for your notification in the text field.

   The `text` field supports [Go-style templating](https://pkg.go.dev/text/template). This enables you to manage your Grafana Alerting notification templates directly in Terraform.

1. Run the command `terraform apply`.

1. Go to the Grafana UI and check the details of your contact point.

   You cannot edit resources provisioned via Terraform from the UI. This ensures that your alerting stack always stays in sync with your code.

1. Click **Test** to verify that the contact point works correctly.

**Note**  
You can reuse the same templates across many contact points. In the example above, a shared template is embedded using the statement `{{ template "Alert Instance Template" . }}`.  
This fragment can then be managed separately in Terraform:  

```
resource "grafana_message_template" "my_alert_template" {
    name = "Alert Instance Template"

    template = <<EOT
{{ define "Alert Instance Template" }}
Firing: {{ .Labels.alertname }}
Silence: {{ .SilenceURL }}
{{ end }}
EOT
}
```

## Provision notification policies and routing
<a name="v12-alerting-setup-provision-tf-notifications"></a>

Notification policies tell Grafana how to route alert instances, as opposed to where. They connect firing alerts to your previously defined contact points using a system of labels and matchers.

**To provision notification policies and routing**

1. Copy this code block into a .tf file on your local machine.

   In this example, the alerts are grouped by `alertname`, which means that any notifications coming from alerts that share the same name are grouped into the same Slack message.

   If you want to route specific notifications differently, you can add sub-policies. Sub-policies allow you to apply routing to different alerts based on label matching. In this example, we apply a mute timing to all alerts with the label `a=b`.

   ```
   resource "grafana_notification_policy" "my_policy" {
       group_by = ["alertname"]
       contact_point = grafana_contact_point.my_slack_contact_point.name
   
       group_wait = "45s"
       group_interval = "6m"
       repeat_interval = "3h"
   
       policy {
           matcher {
               label = "a"
               match = "="
               value = "b"
           }
           group_by = ["..."]
           contact_point = grafana_contact_point.a_different_contact_point.name
           mute_timings = [grafana_mute_timing.my_mute_timing.name]
   
           policy {
               matcher {
                   label = "sublabel"
                   match = "="
                   value = "subvalue"
               }
               contact_point = grafana_contact_point.a_third_contact_point.name
               group_by = ["..."]
           }
       }
   }
   ```

1. In the `mute_timings` field, link a mute timing to your notification policy.

1. Run the command `terraform apply`.

1. Go to the Grafana UI and check the details of your notification policy.
**Note**  
You cannot edit resources provisioned from Terraform from the UI. This ensures that your alerting stack always stays in sync with your code.

1. Click **Test** to verify that the notification policy is working correctly.

## Provision mute timings
<a name="v12-alerting-setup-provision-tf-mutetiming"></a>

Mute timings provide the ability to mute alert notifications for defined time periods.

**To provision mute timings**

1. Copy this code block into a .tf file on your local machine.

   In this example, alert notifications are muted during the defined intervals, including weekends.

   ```
   resource "grafana_mute_timing" "my_mute_timing" {
       name = "My Mute Timing"
   
       intervals {
           times {
             start = "04:56"
             end = "14:17"
           }
           weekdays = ["saturday", "sunday", "tuesday:thursday"]
           months = ["january:march", "12"]
           years = ["2025:2027"]
       }
   }
   ```

1. Run the command `terraform apply`.

1. Go to the Grafana UI and check the details of your mute timing.

1. Reference your newly created mute timing in a notification policy using the `mute_timings` field. This applies your mute timing to some or all of your notifications.
**Note**  
You cannot edit resources provisioned from Terraform from the UI. This ensures that your alerting stack always stays in sync with your code.

1. Click **Test** to verify that the mute timing is working correctly.
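The reference in step 4 is the `mute_timings` field on a notification policy. The following fragment of a `grafana_notification_policy` resource is a sketch; the matcher and contact point are illustrative.

```
policy {
    matcher {
        label = "a"
        match = "="
        value = "b"
    }
    contact_point = grafana_contact_point.my_slack_contact_point.name

    // Notifications routed by this policy are suppressed during the muted intervals.
    mute_timings = [grafana_mute_timing.my_mute_timing.name]
}
```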

## Provision alert rules
<a name="v12-alerting-setup-provision-tf-rules"></a>

[Alert rules](v12-alerting-configure.md) enable you to alert against any Grafana data source. This can be a data source that you already have configured, or you can [define your data sources in Terraform](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/data_source) alongside your alert rules.

**To provision alert rules**

1. Create a data source to query and a folder to store your rules in.

   In this example, the [TestData](testdata-data-source.md) data source is used.

   Alerts can be defined against any backend data source in Grafana.

   ```
   resource "grafana_data_source" "testdata_datasource" {
       name = "TestData"
       type = "testdata"
   }
   
   resource "grafana_folder" "rule_folder" {
       title = "My Rule Folder"
   }
   ```

1. Define an alert rule.

   For more information on alert rules, refer to [how to create Grafana-managed alerts](https://grafana.com/blog/2022/08/01/grafana-alerting-video-how-to-create-alerts-in-grafana-9/).

1. Create a rule group containing one or more rules.

   In this example, the `grafana_rule_group` resource is used.

   ```
   resource "grafana_rule_group" "my_rule_group" {
       name = "My Alert Rules"
       folder_uid = grafana_folder.rule_folder.uid
       interval_seconds = 60
       org_id = 1
   
       rule {
           name = "My Random Walk Alert"
           condition = "C"
           for = "0s"
   
           // Query the datasource.
           data {
               ref_id = "A"
               relative_time_range {
                   from = 600
                   to = 0
               }
               datasource_uid = grafana_data_source.testdata_datasource.uid
               // `model` is a JSON blob that sends datasource-specific data.
               // It's different for every datasource. The alert's query is defined here.
               model = jsonencode({
                   intervalMs = 1000
                   maxDataPoints = 43200
                   refId = "A"
               })
           }
   
           // The query was configured to obtain data from the last 60 seconds. Let's alert on the average value of that series using a Reduce stage.
           data {
               datasource_uid = "__expr__"
               // You can also create a rule in the UI, then GET that rule to obtain the JSON.
               // This can be helpful when using more complex reduce expressions.
               model = <<EOT
   {"conditions":[{"evaluator":{"params":[0,0],"type":"gt"},"operator":{"type":"and"},"query":{"params":["A"]},"reducer":{"params":[],"type":"last"},"type":"avg"}],"datasource":{"name":"Expression","type":"__expr__","uid":"__expr__"},"expression":"A","hide":false,"intervalMs":1000,"maxDataPoints":43200,"reducer":"last","refId":"B","type":"reduce"}
   EOT
               ref_id = "B"
               relative_time_range {
                   from = 0
                   to = 0
               }
           }
   
           // Now, let's use a math expression as our threshold.
           // We want to alert when the value of stage "B" above exceeds 70.
           data {
               datasource_uid = "__expr__"
               ref_id = "C"
               relative_time_range {
                   from = 0
                   to = 0
               }
               model = jsonencode({
                   expression = "$B > 70"
                   type = "math"
                   refId = "C"
               })
           }
       }
   }
   ```

1. Go to the Grafana UI and check your alert rule.

   You can see whether the alert rule is firing. You can also see a visualization of each of the alert rule’s query stages.

   When the alert fires, Grafana routes a notification through the policy you defined.

   For example, if you chose Slack as a contact point, Grafana’s embedded [Alertmanager](https://github.com/prometheus/alertmanager) automatically posts a message to Slack.

# Viewing provisioned alerting resources in Grafana
<a name="v12-alerting-setup-provision-view"></a>


You can verify that your alerting resources were created in Grafana.

**To view your provisioned resources in Grafana**

1. Open your Grafana instance.

1. Navigate to Alerting.

1. Click an alerting resource folder, for example, Alert rules.

   Provisioned resources are labeled **Provisioned**, so that it is clear that they were not created manually.

**Note**  
You cannot edit provisioned resources from Grafana. You can only change the resource properties by changing the provisioning file and restarting Grafana or carrying out a hot reload. This prevents changes being made to the resource that would be overwritten if a file is provisioned again or a hot reload is carried out.

# Configure alerting
<a name="v12-alerting-configure"></a>


Configure the features and integrations that you need to create and manage your alerts.

**Topics**
+ [Configure Grafana managed alert rules](v12-alerting-configure-grafanamanaged.md)
+ [Configure data source managed alert rules](v12-alerting-configure-datasourcemanaged.md)
+ [Configure recording rules](v12-alerting-configure-recordingrules.md)
+ [Configure contact points](v12-alerting-configure-contactpoints.md)
+ [Configure notification policies](v12-alerting-configure-notification-policies.md)

# Configure Grafana managed alert rules
<a name="v12-alerting-configure-grafanamanaged"></a>


Grafana-managed rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of our supported data sources. In addition to supporting multiple data sources, you can also add expressions to transform your data and set alert conditions. Using images in alert notifications is also supported. This is the only type of rule that allows alerting from multiple data sources in a single rule definition.

Multiple alert instances can be created as a result of one alert rule (also known as multi-dimensional alerting).

Grafana managed alert rules can only be edited or deleted by users with Edit permissions for the folder storing the rules.

If you delete an alerting resource created in the UI, you can no longer retrieve it. To make a backup of your configuration and to be able to restore deleted alerting resources, create your alerting resources using Terraform, or the Alerting API.

In the following procedures, we’ll go through the process of creating your Grafana-managed alert rules.

To create a Grafana-managed alert rule, use the in-workspace alert creation flow and follow these steps to help you.

**Set alert rule name**

1. Choose **Alerting** -> **Alert rules** -> **+ New alert rule**.

1. Enter a name to identify your alert rule.

   This name is displayed in the alert rule list. It is also the `alertname` label for every alert instance that is created from this rule.

Next, define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.

**To define the query and condition**

1. Select a data source.

1. From the **Options** dropdown, specify a [time range](v12-dash-using-dashboards.md#v12-dash-setting-dashboard-time-range).
**Note**  
Grafana Alerting only supports fixed relative time ranges, for example, `now-24hr: now`.  
It does not support absolute time ranges: `2021-12-02 00:00:00 to 2021-12-05 23:59:59` or semi-relative time ranges: `now/d to: now`.

1. Add a query.

   To add multiple [queries](v12-panels-query-xform.md#v12-panels-query-xform-add), choose **Add query**.

   All alert rules are managed by Grafana by default. If you want to switch to a data source-managed alert rule, click **Switch to data source-managed alert rule**.

1. Add one or more [expressions](v12-panels-query-xform-expressions.md).

   1. For each expression, select either **Classic condition** to create a single alert rule, or choose from the **Math**, **Reduce**, and **Resample** options to generate a separate alert for each series.
**Note**  
When using Prometheus, you can use an instant vector and built-in functions, so you don’t need to add additional expressions.

   1. Choose **Preview** to verify that the expression is successful.

1. [Optional] To add a recovery threshold, turn the **Custom recovery threshold** toggle on and fill in a value for when your alert rule should stop firing.

   You can only add one recovery threshold in a query and it must be the alert condition.

1. Choose **Set as alert condition** on the query or expression you want to set as your alert condition.

Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.

To do this, you need to make sure that your alert rule is in the right evaluation group and set a pending period time that works best for your use case.

**To set alert evaluation behavior**

1. Select a folder or choose **+ New folder**.

1. Select an evaluation group or click **+ New evaluation group**.

   If you are creating a new evaluation group, specify the interval for the group.

   All rules within the same group are evaluated concurrently over the same time interval.

1. Enter a pending period.

   The pending period is the period in which an alert rule can be in breach of the condition until it fires.

   Once the condition is met, the alert goes into the **Pending** state. If the condition remains active for the duration specified, the alert transitions to the **Firing** state; otherwise, it reverts to the **Normal** state.

1. Turn on pause alert notifications, if required.
**Note**  
You can pause alert rule evaluation to prevent noisy alerting while tuning your alerts. Pausing stops alert rule evaluation and does not create any alert instances. This is different from mute timings, which stop notifications from being delivered, but still allow for alert rule evaluation and the creation of alert instances.

1. In **Configure no data and error handling**, configure alerting behavior in the absence of data.

   Use the guidelines later in this section.

Add labels to your alert rules to set which notification policy should handle your firing alert instances.

All alert rules and instances, irrespective of their labels, match the default notification policy. If there are no nested policies, or no nested policies match the labels in the alert rule or alert instance, then the default notification policy is the matching policy.

**To configure notifications**

1. Add labels if you want to change the way your notifications are routed.

   Add custom labels by selecting existing key-value pairs from the drop down, or add new labels by entering the new key or value.

1. Preview your alert instance routing set up.

   Based on the labels added, alert instances are routed to the notification policies displayed.

   Expand each notification policy to view more details.

1. Choose **See details** to view alert routing details and a preview.

Add [annotations](v12-alerting-overview-labels.md#v12-alerting-overview-labels-annotations) to provide more context on the alert in your alert notification message.

Annotations add metadata to provide more information on the alert in your alert notification message. For example, add a **Summary** annotation to tell you which value caused the alert to fire or which server it happened on.

**To add annotations**

1. [Optional] Add a summary.

   Short summary of what happened and why.

1. [Optional] Add a description.

   Description of what the alert rule does.

1. [Optional] Add a Runbook URL.

   The webpage where you keep your runbook for the alert.

1. [Optional] Add a custom annotation.

1. [Optional] Add a dashboard and panel link.

   Links alerts to panels in a dashboard.

1. Choose **Save rule**.
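When rules are provisioned with Terraform, the labels and annotations described in the procedures above map to per-rule fields in the `grafana_rule_group` resource. The following fragment is a sketch with illustrative values; the query `data` blocks are omitted for brevity.

```
rule {
    name      = "My Random Walk Alert"
    condition = "C"
    for       = "0s"

    // Labels determine which notification policies match this rule's alert instances.
    labels = {
        severity = "critical"
    }

    // Annotations add context to alert notification messages.
    annotations = {
        summary     = "The random walk exceeded the threshold."
        runbook_url = "https://example.com/runbooks/random-walk"
    }
}
```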

**Single and multi-dimensional rule**

For Grafana managed alerts, you can create a rule with a classic condition or you can create a multi-dimensional rule.
+ **Rule with classic condition**

  Use the classic condition expression to create a rule that triggers a single alert when its condition is met. For a query that returns multiple series, Grafana does not track the alert state of each series. As a result, Grafana sends only a single alert even when alert conditions are met for multiple series.
+ **Multi-dimensional rule**

  To generate a separate alert for each series, create a multi-dimensional rule. Use `Math`, `Reduce`, or `Resample` expressions to create a multi-dimensional rule. For example:
  + Add a `Reduce` expression for each query to aggregate values in the selected time range into a single value (not needed for [rules using numeric data](v12-alerting-overview-numeric.md)).
  + Add a `Math` expression with the condition for the rule. This is not needed if a query or reduce expression already returns `0` when the rule should not fire, or a positive number when it should fire. For example, `$B > 70` fires when the value of query or expression B is greater than 70, and `$B < $C * 100` fires when the value of B is less than the value of C multiplied by 100. If the queries being compared return multiple series, series from different queries are matched when they have the same labels or when the labels of one are a subset of the labels of the other.

**Note**  
Grafana does not support alert queries with template variables. More information is available at [https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514](https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514).

**Configure no data and error handling**

Configure alerting behavior when your alert rule evaluation returns no data or an error.

**Note**  
Alert rules that are configured to fire when an evaluation returns no data or error only fire when the entire duration of the evaluation period has finished. This means that rather than immediately firing when the alert rule condition is breached, the alert rule waits until the time set as the **For** field has finished and then fires, reducing alert noise and allowing for temporary data availability issues.

If your alert rule evaluation returns no data, you can set the state on your alert rule to appear as follows:


| No Data | Description | 
| --- | --- | 
| No Data | Creates a new alert instance called `DatasourceNoData`, with the name and UID of the alert rule and the UID of the data source that returned no data as labels. | 
| Alerting | Sets the alert rule state to Alerting. The alert rule waits until the time set in the **For** field has finished before firing. | 
| Ok | Sets the alert rule state to Normal. | 

If your evaluation returns an error, you can set the state on your alert rule to appear as follows:


| Error | Description | 
| --- | --- | 
| Error | Creates a new alert instance called `DatasourceError`, with the name and UID of the alert rule and the UID of the data source that returned the error as labels. | 
| Alerting | Sets the alert rule state to Alerting. The alert rule waits until the time set in the **For** field has finished before firing. | 
| Ok | Sets the alert rule state to Normal. | 
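If you provision alert rules with Terraform, these no data and error behaviors correspond to per-rule attributes in the `grafana_rule_group` resource. The following fragment is a sketch assuming the provider's `no_data_state` and `exec_err_state` attributes; the query `data` blocks are omitted.

```
rule {
    name      = "My Random Walk Alert"
    condition = "C"
    for       = "0s"

    // State to set when the query returns no data.
    no_data_state = "NoData"

    // State to set when rule evaluation returns an error.
    exec_err_state = "Alerting"
}
```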

**Resolve stale alert instances**

An alert instance is considered stale if its dimension or series has disappeared from the query results entirely for two evaluation intervals.

Stale alert instances that are in the `Alerting`/`NoData`/`Error` states are automatically marked as `Resolved` and the `grafana_state_reason` annotation is added to the alert instance with the reason `MissingSeries`.

**Create alerts from panels**

Create alerts from any panel type. This means you can reuse the queries in the panel and create alerts based on them.

1. Navigate to a dashboard in the **Dashboards** section.

1. In the top-right corner of the panel, choose the three dots (ellipsis).

1. From the dropdown menu, select **More…** and then choose **New alert rule**.

This will open the alert rule form, allowing you to configure and create your alert based on the current panel’s query.

# Configure data source managed alert rules
<a name="v12-alerting-configure-datasourcemanaged"></a>


Create alert rules for an external Grafana Mimir or Loki instance that has ruler API enabled; these are called data source managed alert rules.

**Note**  
Alert rules for an external Grafana Mimir or Loki instance can be edited or deleted by users with Editor or Admin roles.  
If you delete an alerting resource created in the UI, you can no longer retrieve it. To make a backup of your configuration and to be able to restore deleted alerting resources, create your alerting resources using Terraform, or the Alerting API.

**Prerequisites**
+ Verify that you have write permission to the Prometheus or Loki data source. Otherwise, you cannot create or update data source managed alert rules.
+ For Grafana Mimir and Loki data sources, enable the Ruler API by configuring their respective services.
  + **Loki** - The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other rule storage types.
  + **Grafana Mimir** - use the `/prometheus` prefix. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the [Query API](https://grafana.com/docs/mimir/latest/operators-guide/reference-http-api/#querier--query-frontend) and [Ruler API](https://grafana.com/docs/mimir/latest/operators-guide/reference-http-api/#ruler) are under the same URL. You cannot provide a separate URL for the Ruler API.
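
For example, a provisioning-style sketch of a Prometheus data source pointing at Mimir with the `/prometheus` prefix (the hostname and values are hypothetical, for illustration only):

```
apiVersion: 1
datasources:
  - name: Mimir
    type: prometheus
    url: https://mimir.example.com/prometheus   # query and ruler APIs share this base URL
    jsonData:
      manageAlerts: true   # corresponds to the "Manage alerts via alerting UI" toggle
```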

**Note**  
If you do not want to manage alert rules for a particular Loki or Prometheus data source, go to its settings and clear the **Manage alerts via alerting UI** checkbox.

The following procedures guide you through creating a data source managed alert rule using the in-workspace alert creation flow.

**To set the alert rule name**

1. Choose **Alerting** -> **Alert rules** -> **+ New alert rule**.

1. Enter a name to identify your alert rule.

   This name is displayed in the alert rule list. It is also the `alertname` label for every alert instance that is created from this rule.

Define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.

**To define query and condition**

1. All alert rules are managed by Grafana by default. To switch to a data source managed alert rule, choose **Switch to data source-managed alert rule**.

1. Select a data source from the drop-down list.

   You can also choose **Open advanced data source picker** to see more options, including adding a data source (Admins only).

1. Enter a PromQL or LogQL query.

1. Choose **Preview alerts**.

Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.

**To set alert evaluation behavior**

1. Select a namespace or choose **+ New namespace**.

1. Select an evaluation group or choose **+ New evaluation group**.

   If you are creating a new evaluation group, specify the interval for the group.

   All rules within the same group are evaluated sequentially over the same time interval.

1. Enter a pending period.

   The pending period is how long an alert rule can be in breach of the condition before it fires.

   Once the condition is met, the alert goes into the `Pending` state. If the condition remains breached for the duration of the pending period, the alert transitions to the `Firing` state; otherwise it reverts to the `Normal` state.
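
In Prometheus rule-file terms, the pending period corresponds to the `for` field; a hedged sketch (the metric, threshold, and names are assumptions for illustration):

```
groups:
  - name: example-group
    rules:
      - alert: HighRequestLatency     # becomes the alertname label
        expr: request_latency_seconds > 0.5
        for: 5m                       # pending period before Firing
        labels:
          severity: warning
        annotations:
          summary: Request latency is above 500ms
```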

Add labels to your alert rules to set which notification policy should handle your firing alert instances.

All alert rules and instances, irrespective of their labels, match the default notification policy. If there are no nested policies, or no nested policies match the labels in the alert rule or alert instance, then the default notification policy is the matching policy.

**Configure notifications**
+ Add labels if you want to change the way your notifications are routed.

  Add custom labels by selecting existing key-value pairs from the drop-down, or add new labels by entering the new key or value.

Add [annotations](v12-alerting-overview-labels.md#v12-alerting-overview-labels-annotations) to provide more context on the alert in your alert notifications.

Annotations add metadata to provide more information on the alert in your alert notifications. For example, add a `Summary` annotation to tell you which value caused the alert to fire or which server it happened on.

**To add annotations**

1. [Optional] Add a summary.

   Short summary of what happened and why.

1. [Optional] Add a description.

   Description of what the alert rule does.

1. [Optional] Add a Runbook URL.

   Webpage where you keep your runbook for the alert.

1. [Optional] Add a custom annotation.

1. [Optional] Add a dashboard and panel link.

   Links alerts to panels in a dashboard.

1. Choose **Save rule**.

# Configure recording rules
<a name="v12-alerting-configure-recordingrules"></a>

You can create and manage recording rules for an external Grafana Mimir or Loki instance. Recording rules calculate frequently needed expressions or computationally expensive expressions in advance and save the result as a new set of time series. Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh.
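
In Prometheus or Mimir rule-file terms, a recording rule saves the result of an expression under a new metric name; a minimal sketch (the metric name and expression are assumptions for illustration):

```
groups:
  - name: example-recording-rules
    rules:
      - record: job:http_requests:rate5m    # new time series to write
        expr: sum(rate(http_requests_total[5m])) by (job)
```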

**Note**  
Recording rules are run as instant rules, which means they run every 10 seconds.

**Prerequisites**
+ Verify that you have write permissions to the Prometheus or Loki data source. You will be creating or updating alerting rules in your data source.
+ For Grafana Mimir and Loki data sources, enable the ruler API by configuring their respective services.
  + **Loki** – The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other storage types.
  + **Grafana Mimir** – Use the `/prometheus` prefix. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the Query API and Ruler API are under the same URL. You cannot provide a separate URL for the Ruler API.

**Note**  
If you do not want to manage alerting rules for a particular Loki or Prometheus data source, go to its settings and clear the **Manage alerts via Alerting UI** check box.

**To create recording rules**

1. From your Grafana console, in the Grafana menu, choose **Alerting**, **Alert rules**.

1. Choose **New recording rule**.

1. Set the rule name.

   The recording rule name must be a Prometheus metric name and contain no whitespace.

1. Define the query.
   + Select your Loki or Prometheus data source.
   + Enter a query.

1. Add namespace and group.
   + From the **Namespace** dropdown, select an existing rule namespace or add a new one. Namespaces can contain one or more rule groups and only have an organizational purpose.
   + From the **Group** dropdown, select an existing group within the selected namespace or add a new one. Newly created rules are appended to the end of the group. Rules within a group are run sequentially at a regular interval, with the same evaluation time.

1. Add labels.
   + Add custom labels by selecting existing key-value pairs from the dropdown, or add new labels by entering the new key or value.

1. Choose **Save rule** to save the rule, or **Save rule and exit** to save the rule and go back to the Alerting page.

# Configure contact points
<a name="v12-alerting-configure-contactpoints"></a>

Use contact points to define how your contacts are notified when an alert rule fires.

**Note**  
You can create and edit contact points for Grafana managed alerts. Contact points for data source managed alerts are read-only.

## Working with contact points
<a name="v12-alerting-configure-contactpoints-working"></a>

The following procedures show how to add, edit, delete, and test a contact point.

**To add a contact point**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points**.

1. From the **Choose Alertmanager** dropdown, select an Alertmanager. The Grafana Alertmanager is selected by default.

1. On the **Contact Points** tab, choose **+ Add contact point**.

1. Enter a **Name** for the contact point.

1. From **Integration**, choose a type, and fill out the mandatory fields based on that type. For example, if you choose Slack, enter the Slack channels and users who should be contacted.

1. If available for the contact point you selected, choose any desired **Optional settings** to specify additional settings.

1. Under **Notification settings**, optionally select **Disable resolved message** if you do not want to be notified when an alert resolves.

1. To add another contact point integration, choose **Add contact point integration** and repeat the steps for each contact point type needed.

1. Save your changes.

**To edit a contact point**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points** to see a list of existing contact points.

1. Select the contact point to edit, then choose **Edit**.

1. Update the contact point, and then save your changes.

You can delete contact points that are not in use by a notification policy.

**To delete a contact point**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points** to open the list of existing contact points.

1. On the **Contact points** tab, select the contact point to delete, then choose **More**, **Delete**.

1. In the confirmation dialog box, choose **Yes, delete**.

**Note**  
If the contact point is in use by a notification policy, you must delete the notification policy or edit it to use a different contact point before deleting the contact point.

After your contact point is created, you can send a test notification to verify that it is configured properly.

**To send a test notification**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points** to open the list of existing contact points.

1. On the **Contact points** tab, select the contact point to test, then choose **Edit**. You can also create a new contact point if needed.

1. Choose **Test** to open the contact point testing dialog.

1. Choose whether to send a predefined test notification or choose **Custom** to add your own custom annotations and labels in the test notification.

1. Choose **Send test notification** to test the alert with the given contact points.

## Configure contact point integrations
<a name="v12-alerting-configure-contactpoints-integration"></a>

Configure contact point integrations in Grafana to select your preferred communication channel for receiving notifications when your alert rules are firing. Each integration has its own configuration options and setup process. In most cases, this involves providing an API key or a Webhook URL.

Once configured, you can use integrations as part of your contact points to receive notifications whenever your alert changes its state. In this section, we’ll cover the basic steps to configure an integration, using PagerDuty as an example, so you can start receiving real-time alerts and stay on top of your monitoring data.

**List of supported integrations**

The following table lists the contact point types supported by Grafana.


| Name | Type | 
| --- | --- | 
| Amazon SNS | `sns` | 
| OpsGenie | `opsgenie` | 
| PagerDuty | `pagerduty` | 
| Slack | `slack` | 
| VictorOps | `victorops` | 

**Configuring PagerDuty for alerting**

To set up PagerDuty, you must provide an integration key. Provide the following details.


| Setting | Description | 
| --- | --- | 
| Integration Key | Integration key for PagerDuty | 
| Severity | Severity level of the event. The default is `critical`. | 
| Custom Details | Additional details about the event | 

The `CustomDetails` field is an object containing arbitrary key-value pairs. The user-defined details are merged with the ones used by default.

The default values for `CustomDetails` are:

```
{
	"firing":       `{{ template "__text_alert_list" .Alerts.Firing }}`,
	"resolved":     `{{ template "__text_alert_list" .Alerts.Resolved }}`,
	"num_firing":   `{{ .Alerts.Firing | len }}`,
	"num_resolved": `{{ .Alerts.Resolved | len }}`,
}
```

In case of duplicate keys, the user-defined details overwrite the default ones.
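
The merge semantics, where user-defined keys overwrite the defaults, can be sketched as a plain map merge (an illustration of the behavior, not Grafana's actual implementation):

```
package main

import "fmt"

// mergeDetails combines the default CustomDetails with user-defined ones;
// on duplicate keys the user-defined value overwrites the default.
func mergeDetails(defaults, user map[string]string) map[string]string {
	merged := make(map[string]string, len(defaults)+len(user))
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range user {
		merged[k] = v // duplicate keys overwrite the defaults
	}
	return merged
}

func main() {
	defaults := map[string]string{
		"firing":   "<default firing alert list>",
		"resolved": "<default resolved alert list>",
	}
	user := map[string]string{
		"firing": "custom firing text", // overwrites the default
		"region": "eu-west-1",          // added alongside the defaults
	}
	merged := mergeDetails(defaults, user)
	// prints: custom firing text <default resolved alert list> eu-west-1
	fmt.Println(merged["firing"], merged["resolved"], merged["region"])
}
```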

# Configure notification policies
<a name="v12-alerting-configure-notification-policies"></a>

Notification policies determine how alerts are routed to contact points.

Policies have a tree structure, where each policy can have one or more nested policies. Each policy, except for the default policy, can also match specific alert labels.

Each alert is evaluated by the default policy and subsequently by each nested policy.

If you enable the `Continue matching subsequent sibling nodes` option for a nested policy, then evaluation continues even after one or more matches. A parent policy’s configuration settings and contact point information govern the behavior of an alert that does not match any of the child policies. A default policy governs any alert that does not match a nested policy.

For more information on notification policies, see [Notifications](v12-alerting-explore-notifications.md).

The following procedures show you how to create and manage notification policies.

**To edit the default notification policy**

1. In the left-side menu, choose **Alerting**.

1. Choose **Notification policies**.

1. From the **Choose Alertmanager** dropdown, select the Alertmanager you want to edit.

1. In the **Default policy** section, choose **...**, **Edit**.

1. In **Default contact point**, update the contact point where notifications are sent when alert rules do not match any specific policy.

1. In **Group by**, choose the labels to group alerts by. If multiple alerts are matched for this policy, then they are grouped by these labels. A notification is sent per group. If the field is empty (the default), then all notifications are sent in a single group. Use the special label `...` to group alerts by all labels, which effectively disables grouping.

1. In **Timing options**, select from the following options.
   + **Group wait** – Time to wait to buffer alerts of the same group before sending an initial notification. The default is 30 seconds.
   + **Group interval** – Minimum time interval between two notifications for a group. The default is 5 minutes.
   + **Repeat interval** – Minimum time interval before resending a notification if no new alerts were added to the group. The default is 4 hours.

1. Choose **Save** to save your changes.
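
For reference, the **Group by** and **Timing options** above mirror the standard Alertmanager route settings; a hedged sketch of an equivalent configuration (the receiver name and grouping labels are hypothetical):

```
route:
  receiver: default-contact-point     # hypothetical contact point name
  group_by: ["alertname", "cluster"]  # labels to group alerts by
  group_wait: 30s      # buffer before the first notification for a new group
  group_interval: 5m   # minimum time between notifications for a group
  repeat_interval: 4h  # minimum time before resending an unchanged group
```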

To create a new notification policy, you need to follow its tree structure. New policies created on the trunk of the tree (the default policy) are the tree branches, and each branch can have its own nested policies. This is why you always add a new **nested** policy, either under the default policy or under an already nested policy.

**To add a new nested policy**

1. In the left-side menu, choose **Alerting**.

1. Choose **Notification policies**.

1. From the **Choose Alertmanager** dropdown, select the Alertmanager you want to edit.

1. To add a top-level specific policy, go to the **Specific routing** section (either under the default policy, or under another existing policy to which you want to add a nested policy) and choose **+ New nested policy**.

1. In the matching labels section, add one or more rules for matching alert labels.

1. In the **Contact point** dropdown, select the contact point to send notifications to if an alert matches only this specific policy and not any of the nested policies.

1. Optionally, enable **Continue matching subsequent sibling nodes** to continue matching sibling policies even after the alert matched the current policy. When this option is enabled, you can get more than one notification for one alert.

1. Optionally, enable **Override grouping** to specify grouping different from the default policy. If the option is not enabled, the default policy grouping is used.

1. Optionally, enable **Override general timings** to override the timing options configured in the parent notification policy.

1. Choose **Save policy** to save your changes.

**To edit a nested policy**

1. In the left-side menu, choose **Alerting**.

1. Choose **Notification policies**.

1. Select the policy that you want to edit, then choose **...**, **Edit**.

1. Make any changes (as when adding a nested policy).

1. Save your changes.

**Searching for policies**

You can search within the tree of policies by *Label matchers* or *contact points*.
+ To search by contact point, enter a partial or full name of a contact point in the **Search by contact point** field. The policies that use that contact point will be highlighted in the user interface.
+ To search by label, enter a valid label matcher in the **Search by matchers** input field. Multiple matchers can be entered, separated by a comma. For example, a valid matcher input could be `severity=high, region=~EMEA|NA`.

**Note**  
When searching by label, all matched policies are exact matches. Partial matches and regex-style matches are not supported.

# Manage your alerts
<a name="v12-alerting-manage"></a>

Once you have set up your alert rules, contact points, and notification policies, you can use Grafana Alerting to manage your alerts in practice.

**Topics**
+ [Customize notifications](v12-alerting-manage-notifications.md)
+ [Manage contact points](v12-alerting-manage-contactpoints.md)
+ [Silencing alert notifications](v12-alerting-silences.md)
+ [View and filter alert rules](v12-alerting-manage-rules-viewfilter.md)
+ [Mute timings](v12-alerting-manage-muting.md)
+ [View the state and health of alert rules](v12-alerting-manage-rulestate.md)
+ [View and filter by alert groups](v12-alerting-manage-viewfiltergroups.md)
+ [View notification errors](v12-alerting-manage-viewnotificationerrors.md)

# Customize notifications
<a name="v12-alerting-manage-notifications"></a>

Customize your notifications with notification templates.

You can use notification templates to change the title, message, and format of the message in your notifications.

Notification templates are not tied to specific contact point integrations, such as Amazon SNS or Slack. However, you can choose to create separate notification templates for different contact point integrations.

You can use notification templates to:
+ Add, remove, or re-order information in the notification including the summary, description, labels and annotations, values, and links
+ Format text in bold and italic, and add or remove line breaks

You cannot use notification templates to:
+ Change the design of notifications in instant messaging services such as Slack and Microsoft Teams

**Topics**
+ [Using Go’s templating language](v12-alerting-notifications-go-templating.md)
+ [Create notification templates](v12-alerting-create-templates.md)
+ [Using notification templates](#v12-alerting-use-notification-templates)
+ [Template reference](v12-alerting-template-reference.md)

# Using Go’s templating language
<a name="v12-alerting-notifications-go-templating"></a>

You write notification templates in Go’s templating language, [text/template](https://pkg.go.dev/text/template).

This section provides an overview of Go’s templating language and writing templates in text/template.

## Dot
<a name="v12-go-dot"></a>

In text/template there is a special cursor, called dot, written as `.`. You can think of this cursor as a variable whose value changes depending on where in the template it is used. For example, at the start of a notification template `.` refers to the `ExtendedData` object, which contains a number of fields including `Alerts`, `Status`, `GroupLabels`, `CommonLabels`, `CommonAnnotations` and `ExternalURL`. However, dot might refer to something else when used in a `range` over a list, when used inside a `with`, or when writing feature templates to be used in other templates. You can see examples of this in [Create notification templates](v12-alerting-create-templates.md), and all data and functions in the [Template reference](v12-alerting-template-reference.md).

## Opening and closing tags
<a name="v12-go-openclosetags"></a>

In text/template, templates start with `{{` and end with `}}` irrespective of whether the template prints a variable or runs control structures such as if statements. This is different from other templating languages such as Jinja where printing a variable uses `{{` and `}}` and control structures use `{%` and `%}`.

## Print
<a name="v12-go-print"></a>

To print the value of something use `{{` and `}}`. You can print the value of dot, a field of dot, the result of a function, and the value of a [variable](#v12-go-variables). For example, to print the `Alerts` field where dot refers to `ExtendedData` you would write the following:

```
{{ .Alerts }}
```

## Iterate over alerts
<a name="v12-go-iterate-alerts"></a>

To print just the labels of each alert, rather than all information about the alert, you can use a `range` to iterate over the alerts in `ExtendedData`:

```
{{ range .Alerts }}
{{ .Labels }}
{{ end }}
```

Inside the range dot no longer refers to `ExtendedData`, but to an `Alert`. You can use `{{ .Labels }}` to print the labels of each alert. This works because `{{ range .Alerts }}` changes dot to refer to the current alert in the list of alerts. When the range is finished dot is reset to the value it had before the start of the range, which in this example is `ExtendedData`:

```
{{ range .Alerts }}
{{ .Labels }}
{{ end }}
{{/* does not work, .Labels does not exist here */}}
{{ .Labels }}
{{/* works, cursor was reset */}}
{{ .Status }}
```
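
This behavior can be checked with a small Go program that executes a template like the one above against sample data. The `Alert` and `ExtendedData` structs and the `render` helper here are minimal stand-ins for illustration, not Grafana's real types:

```
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Minimal stand-ins for the alerting payload, assumed for illustration only.
type Alert struct {
	Labels map[string]string
}

type ExtendedData struct {
	Status string
	Alerts []Alert
}

// render executes a notification-style template against sample data.
func render(text string, data ExtendedData) (string, error) {
	t, err := template.New("demo").Parse(text)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	data := ExtendedData{
		Status: "firing",
		Alerts: []Alert{{Labels: map[string]string{"alertname": "HighCPU"}}},
	}
	// Inside the range dot is an Alert; after {{ end }} dot is ExtendedData
	// again, so {{ .Status }} is valid but {{ .Labels }} would not be.
	out, err := render(`{{ range .Alerts }}{{ .Labels }} {{ end }}{{ .Status }}`, data)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints: map[alertname:HighCPU] firing
}
```

Running it prints the labels map followed by `firing`, confirming that dot is reset once the range ends.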

## Iterate over annotations and labels
<a name="v12-go-iterate-labels"></a>

Let’s write a template to print the labels of each alert in the format `The name of the label is $name, and the value is $value`, where `$name` and `$value` contain the name and value of each label.

Like in the previous example, use a range to iterate over the alerts in `.Alerts` such that dot refers to the current alert in the list of alerts, and then use a second range on the sorted labels so dot is updated a second time to refer to the current label. Inside the second range use `.Name` and `.Value` to print the name and value of each label:

```
{{ range .Alerts }}
{{ range .Labels.SortedPairs }}
The name of the label is {{ .Name }}, and the value is {{ .Value }}
{{ end }}
{{ range .Annotations.SortedPairs }}
The name of the annotation is {{ .Name }}, and the value is {{ .Value }}
{{ end }}
{{ end }}
```

## The index functions
<a name="v12-go-index"></a>

To print a specific annotation or label use the `index` function.

```
{{ range .Alerts }}
The name of the alert is {{ index .Labels "alertname" }}
{{ end }}
```

## If statements
<a name="v12-go-if"></a>

You can use if statements in templates. For example, to print `There are no alerts` if there are no alerts in `.Alerts` you would write the following:

```
{{ if .Alerts }}
There are alerts
{{ else }}
There are no alerts
{{ end }}
```

## With
<a name="v12-go-with"></a>

`with` is similar to if statements; however, unlike if statements, `with` updates dot to refer to the value of the with expression:

```
{{ with .Alerts }}
There are {{ len . }} alert(s)
{{ else }}
There are no alerts
{{ end }}
```

## Variables
<a name="v12-go-variables"></a>

Variables in text/template must be created within the template. For example, to create a variable called `$variable` with the current value of dot you would write the following:

```
{{ $variable := . }}
```

You can use `$variable` inside a range or `with` and it will refer to the value of dot at the time the variable was defined, not the current value of dot.

For example, you cannot write a template that uses `{{ .Labels }}` in the second range because here dot refers to the current label, not the current alert:

```
{{ range .Alerts }}
{{ range .Labels.SortedPairs }}
{{ .Name }} = {{ .Value }}
{{/* does not work because in the second range . is a label not an alert */}}
There are {{ len .Labels }}
{{ end }}
{{ end }}
```

You can fix this by defining a variable called `$alert` in the first range and before the second range:

```
{{ range .Alerts }}
{{ $alert := . }}
{{ range .Labels.SortedPairs }}
{{ .Name }} = {{ .Value }}
{{/* works because $alert refers to the value of dot inside the first range */}}
There are {{ len $alert.Labels }}
{{ end }}
{{ end }}
```

## Range with index
<a name="v12-go-rangeindex"></a>

You can get the index of each alert within a range by defining index and value variables at the start of the range:

```
{{ $num_alerts := len .Alerts }}
{{ range $index, $alert := .Alerts }}
This is alert {{ $index }} out of {{ $num_alerts }}
{{ end }}
```

## Define templates
<a name="v12-go-define"></a>

You can define templates that can be used within other templates, using `define` and the name of the template in double quotes. You should not define templates with the same name as other templates, including default templates such as `__subject`, `__text_values_list`, `__text_alert_list`, `default.title` and `default.message`. Where a template has been created with the same name as a default template, or a template in another notification template, Grafana might use either template. Grafana does not prevent, or show an error message, when there are two or more templates with the same name.

```
{{ define "print_labels" }}
{{ end }}
```

## Execute templates
<a name="v12-go-execute"></a>

You can execute a defined template within your template using `template`, the name of the template in double quotes, and the cursor that should be passed to the template:

```
{{ template "print_labels" . }}
```

## Pass data to templates
<a name="v12-go-passdata"></a>

Within a template dot refers to the value that is passed to the template.

For example, if a template is passed a list of firing alerts then dot refers to that list of firing alerts:

```
{{ template "print_alerts" .Alerts }}
```

If the template is passed the sorted labels for an alert then dot refers to the list of sorted labels:

```
{{ template "print_labels" .SortedLabels }}
```

This is useful when writing reusable templates. For example, to print all alerts you might write the following:

```
{{ template "print_alerts" .Alerts }}
```

Then to print just the firing alerts you could write this:

```
{{ template "print_alerts" .Alerts.Firing }}
```

This works because both `.Alerts` and `.Alerts.Firing` are lists of alerts, and `print_alerts` ranges over whatever list it is passed:

```
{{ define "print_alerts" }}
{{ range . }}
{{ template "print_labels" .SortedLabels }}
{{ end }}
{{ end }}
```

## Comments
<a name="v12-go-comments"></a>

You can add comments with `{{/*` and `*/}}`:

```
{{/* This is a comment */}}
```

To prevent comments from adding line breaks use:

```
{{- /* This is a comment with no leading or trailing line breaks */ -}}
```

## Indentation
<a name="v12-go-indentation"></a>

You can use indentation, both tabs and spaces, and line breaks, to make templates more readable:

```
{{ range .Alerts }}
  {{ range .Labels.SortedPairs }}
    {{ .Name }} = {{ .Value }}
  {{ end }}
{{ end }}
```

However, indentation in the template will also be present in the text. Next we will see how to remove it.

## Remove spaces and line breaks
<a name="v12-go-removespace"></a>

In text/template use `{{-` and `-}}` to remove leading and trailing spaces and line breaks.

For example, when using indentation and line breaks to make a template more readable:

```
{{ range .Alerts }}
  {{ range .Labels.SortedPairs }}
    {{ .Name }} = {{ .Value }}
  {{ end }}
{{ end }}
```

The indentation and line breaks will also be present in the text:

```
    alertname = "Test"

    grafana_folder = "Test alerts"
```

You can remove the indentation and line breaks from the text by changing `}}` to `-}}` at the start of each range:

```
{{ range .Alerts -}}
  {{ range .Labels.SortedPairs -}}
    {{ .Name }} = {{ .Value }}
  {{ end }}
{{ end }}
```

The indentation and line breaks in the template are now absent from the text:

```
alertname = "Test"
grafana_folder = "Test alerts"
```

# Create notification templates
<a name="v12-alerting-create-templates"></a>

Create reusable notification templates to send to your contact points.

You can add one or more templates to your notification template.

Your notification template name must be unique. You cannot have two templates with the same name in the same notification template or in different notification templates. Avoid defining templates with the same name as default templates, such as: `__subject`, `__text_values_list`, `__text_alert_list`, `default.title` and `default.message`.

In the Contact points tab, you can see a list of your notification templates.

## Creating notification templates
<a name="v12-alerting-creating-templates"></a>

**To create a notification template**

1. Choose **Alerting**, **Contact points**.

1. Choose the **Notification Templates** tab, and then **+ Add notification template**.

1. Choose a name for the notification template, such as `email.subject`.

1. Write the content of the template in the content field.

   For example:

   ```
   {{ if .Alerts.Firing -}}
   {{ len .Alerts.Firing }} firing alerts
   {{ end }}
   {{ if .Alerts.Resolved -}}
   {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. Save your changes.

   `{{ define "email.subject" }}` (where `email.subject` is the name of your template) and `{{ end }}` is automatically added to the start and end of the content.

**To create a notification template that contains more than one template**

1. Choose **Alerting**, **Contact points**.

1. Choose the **Notification Templates** tab, and then **+ Add notification template**.

1. Enter a name for the overall notification template. For example, `email`.

1. Write each template in the Content field, including `{{ define "name-of-template" }}` and `{{ end }}` at the start and end of each template. You can use descriptive names for each of the templates in the notification template, for example, `email.subject` or `email.message`. In this case, do not reuse the name of the notification template you entered above.

   Later sections show detailed examples for templates you might create.

1. Choose **Save**.

## Preview notification templates
<a name="v12-alerting-preview-templates"></a>

Preview how your notification templates will look before using them in your contact points. This helps you understand the result of the template you are creating and gives you a chance to fix any errors before saving the template.

**Note**  
Notification previews are only available for Grafana Alertmanager.

**To preview your notification templates**

1. Choose **Alerting**, **Contact points**.

1. Choose the **Notification Templates** tab, and then **+ Add notification template**, or edit an existing template.

1. Add or update your template content.

   Default data is provided, and you can add or edit alert data and alert instances. You can add alert data directly in the Payload data window, or choose **Select alert instances** or **Add custom alerts**.

1. [Optional] To add alert data from existing alert instances:

   1. Choose **Select alert instances**.

   1. Hover over the alert instances to view more information about each alert instance.

   1. Choose **Confirm** to add the alert instance to the payload.

1. [Optional] To add alert data using the Alert data editor, choose **Add custom data**:

   1. Add annotations, custom labels, or set a dashboard or panel.

   1. Toggle **Firing** or **Resolved**, depending on whether you want to add firing or resolved alerts to your notification.

   1. Choose **Add alert data**.

   1. Choose **Refresh preview** to see what your template content will look like and the corresponding payload data.

   If there are any errors in your template, they are displayed in the Preview and you can correct them before saving.

1. Save your changes.

## Creating a template for the subject of message
<a name="v12-alerting-create-template-subject"></a>

Create a template for the subject of an email that contains the number of firing and resolved alerts, as in this example:

```
1 firing alerts, 0 resolved alerts
```

**To create a template for the subject of an email**

1. Create a template called `email.subject` with the following content:

   ```
   {{ define "email.subject" }}
   {{ len .Alerts.Firing }} firing alerts, {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. Use the template when creating your contact point integration by putting it into the **Subject** field with the `template` keyword.

   ```
   {{ template "email.subject" . }}
   ```

## Creating a template for the message of an email
<a name="v12-alerting-create-template-message"></a>

Create a template for the message of an email that contains a summary of all firing and resolved alerts, as in this example:

```
There are 2 firing alerts, and 1 resolved alerts

Firing alerts:

- alertname=Test 1 grafana_folder=GrafanaCloud has value(s) B=1
- alertname=Test 2 grafana_folder=GrafanaCloud has value(s) B=2

Resolved alerts:

- alertname=Test 3 grafana_folder=GrafanaCloud has value(s) B=0
```

**To create a template for the message of an email**

1. Create a notification template called `email` with two templates in the content: `email.message_alert` and `email.message`.

   The `email.message_alert` template is used to print the labels and values for each firing and resolved alert while the `email.message` template contains the structure of the email.

   ```
   {{- define "email.message_alert" -}}
   {{- range .Labels.SortedPairs }}{{ .Name }}={{ .Value }} {{ end }} has value(s)
   {{- range $k, $v := .Values }} {{ $k }}={{ $v }}{{ end }}
   {{- end -}}
   
   {{ define "email.message" }}
   There are {{ len .Alerts.Firing }} firing alerts, and {{ len .Alerts.Resolved }} resolved alerts
   
   {{ if .Alerts.Firing -}}
   Firing alerts:
   {{- range .Alerts.Firing }}
   - {{ template "email.message_alert" . }}
   {{- end }}
   {{- end }}
   
   {{ if .Alerts.Resolved -}}
   Resolved alerts:
   {{- range .Alerts.Resolved }}
   - {{ template "email.message_alert" . }}
   {{- end }}
   {{- end }}
   
   {{ end }}
   ```

1. Use the template when creating your contact point integration by putting it into the **Text Body** field with the `template` keyword.

   ```
   {{ template "email.message" . }}
   ```

## Creating a template for the title of a Slack message
<a name="v12-alerting-create-template-slack-title"></a>

Create a template for the title of a Slack message that contains the number of firing and resolved alerts, as in the following example:

```
1 firing alerts, 0 resolved alerts
```

**To create a template for the title of a Slack message**

1. Create a template called `slack.title` with the following content:

   ```
   {{ define "slack.title" }}
   {{ len .Alerts.Firing }} firing alerts, {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. Execute the template from the title field in your contact point integration.

   ```
   {{ template "slack.title" . }}
   ```

## Creating a template for the content of a Slack message
<a name="v12-alerting-create-template-slack-message"></a>

Create a template for the content of a Slack message that contains a description of all firing and resolved alerts, including their labels, annotations, and Dashboard URL.

**Note**  
This template is for Grafana managed alerts only. To use the template for data source managed alerts, delete the references to DashboardURL and SilenceURL. For more information about configuring Prometheus notifications, see the [Prometheus documentation on notifications](https://prometheus.io/docs/alerting/latest/notifications/).

```
1 firing alerts:

[firing] Test1
Labels:
- alertname: Test1
- grafana_folder: GrafanaCloud
Annotations:
- description: This is a test alert
Go to dashboard: https://example.com/d/dlhdLqF4z?orgId=1

1 resolved alerts:

[resolved] Test2
Labels:
- alertname: Test2
- grafana_folder: GrafanaCloud
Annotations:
- description: This is another test alert
Go to dashboard: https://example.com/d/dlhdLqF4z?orgId=1
```

**To create a template for the content of a Slack message**

1. Create a template called `slack` with two templates in the content: `slack.print_alert` and `slack.message`.

   The `slack.print_alert` template is used to print the labels, annotations, and DashboardURL while the `slack.message` template contains the structure of the notification.

   ```
   {{ define "slack.print_alert" -}}
   [{{.Status}}] {{ .Labels.alertname }}
   Labels:
   {{ range .Labels.SortedPairs -}}
   - {{ .Name }}: {{ .Value }}
   {{ end -}}
   {{ if .Annotations -}}
   Annotations:
   {{ range .Annotations.SortedPairs -}}
   - {{ .Name }}: {{ .Value }}
   {{ end -}}
   {{ end -}}
   {{ if .DashboardURL -}}
     Go to dashboard: {{ .DashboardURL }}
   {{- end }}
   {{- end }}
   
   {{ define "slack.message" -}}
   {{ if .Alerts.Firing -}}
   {{ len .Alerts.Firing }} firing alerts:
   {{ range .Alerts.Firing }}
   {{ template "slack.print_alert" . }}
   {{ end -}}
   {{ end }}
   {{ if .Alerts.Resolved -}}
   {{ len .Alerts.Resolved }} resolved alerts:
   {{ range .Alerts.Resolved }}
   {{ template "slack.print_alert" .}}
   {{ end -}}
   {{ end }}
   {{- end }}
   ```

1. Execute the template from the text body field in your contact point integration:

   ```
   {{ template "slack.message" . }}
   ```

## Template both email and Slack with shared templates
<a name="v12-alerting-create-shared-templates"></a>

Instead of creating separate notification templates for each contact point, such as email and Slack, you can share the same template.

For example, if you want to send an email with this subject and Slack message with this title `1 firing alerts, 0 resolved alerts`, you can create a shared template.

**To create a shared template**

1. Create a template called `common.subject_title` with the following content:

   ```
   {{ define "common.subject_title" }}
   {{ len .Alerts.Firing }} firing alerts, {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. For email, run the template from the subject field in your email contact point integration:

   ```
   {{ template "common.subject_title" . }}
   ```

1. For Slack, run the template from the title field in your Slack contact point integration:

   ```
   {{ template "common.subject_title" . }}
   ```

## Using notification templates
<a name="v12-alerting-use-notification-templates"></a>

Use templates in contact points to customize your notifications.

**To use a template when creating a contact point**

1. From the **Alerting** menu, choose the **Contact points** tab to see a list of existing contact points.

1. Choose **New**. Alternately, you can edit an existing contact point by choosing the **Edit** icon.

1. Enter the templates you wish to use in a field, such as **Message** or **Subject**. To enter a template, use the form `{{ template "template_name" . }}`, replacing *template_name* with the name of the template you want to use.

1. Choose **Save contact point**.

# Template reference
<a name="v12-alerting-template-reference"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

This section provides reference information for creating your templates.

**Alert (type)**

The alert type contains the following data.


| Name | Kind | Description | Example | 
| --- | --- | --- | --- | 
|  Status  |  string  |  `firing` or `resolved`.  |  `{{ .Status }}`  | 
|  Labels  |  KeyValue  |  A set of labels attached to the alert.  |  `{{ .Labels }}`  | 
|  Annotations  |  KeyValue  |  A set of annotations attached to the alert.  |  `{{ .Annotations }}`  | 
|  Values  |  KeyValue  |  The values of all expressions, including Classic Conditions.  |  `{{ .Values }}`  | 
|  StartsAt  |  time.Time  |  Time the alert started firing.  |  `{{ .StartsAt }}`  | 
|  EndsAt  |  time.Time  |  Only set if the end time of an alert is known. Otherwise set to a configurable timeout period from the time since the last alert was received.  |  `{{ .EndsAt }}`  | 
|  GeneratorURL  |  string  |  A back link to Grafana or the external Alertmanager.  |  `{{ .GeneratorURL }}`  | 
|  SilenceURL  |  string  |  A link to silence the alert (with labels for this alert pre-filled). Only for Grafana managed alerts.  |  `{{ .SilenceURL }}`  | 
|  DashboardURL  |  string  |  A link to the Grafana dashboard, if the alert rule belongs to one. Only for Grafana managed alerts.  |  `{{ .DashboardURL }}`  | 
|  PanelURL  |  string  |  A link to the Grafana dashboard panel, if the alert rule belongs to one. Only for Grafana managed alerts.  |  `{{ .PanelURL }}`  | 
|  Fingerprint  |  string  |  A fingerprint that can be used to identify the alert.  |  `{{ .Fingerprint }}`  | 
|  ValueString  |  string  |  A string that contains the labels and value of each reduced expression in the alert.  |  `{{ .ValueString }}`  | 
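As an illustration, several of the per-alert fields above can be combined in a named template (the template name and layout here are illustrative, not a Grafana default):

```
{{ define "alert.summary" -}}
[{{ .Status }}] {{ .Labels.alertname }}
Started: {{ .StartsAt }}
Values: {{ .Values }}
{{- end }}
```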

**ExtendedData**

The ExtendedData object contains the following properties.


| Name | Kind | Description | Example | 
| --- | --- | --- | --- | 
|  Receiver  |  `string`  |  The name of the contact point sending the notification.  |  `{{ .Receiver }}`  | 
|  Status  |  `string`  |  The status is `firing` if at least one alert is firing, otherwise `resolved`.  |  `{{ .Status }}`  | 
|  Alerts  |  `[]Alert`  |  List of all firing and resolved alerts in this notification.  |  `There are {{ len .Alerts }} alerts`  | 
|  Firing alerts  |  `[]Alert`  |  List of all firing alerts in this notification.  |  `There are {{ len .Alerts.Firing }} firing alerts`  | 
|  Resolved alerts  |  `[]Alert`  |  List of all resolved alerts in this notification.  |  `There are {{ len .Alerts.Resolved }} resolved alerts`  | 
|  GroupLabels  |  `KeyValue`  |  The labels that group these alerts in this notification.  |  `{{ .GroupLabels }}`  | 
|  CommonLabels  |  `KeyValue`  |  The labels common to all alerts in this notification.  |  `{{ .CommonLabels }}`  | 
|  CommonAnnotations  |  `KeyValue`  |  The annotations common to all alerts in this notification.  |  `{{ .CommonAnnotations }}`  | 
|  ExternalURL  |  `string`  |  A link to the Grafana workspace or Alertmanager that sent this notification.  |  `{{ .ExternalURL }}`  | 
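For example, a one-line notification overview could combine several of these properties (the template name is illustrative):

```
{{ define "notification.overview" -}}
{{ .Receiver }}: {{ .Status }} ({{ len .Alerts.Firing }} firing, {{ len .Alerts.Resolved }} resolved)
Grouped by: {{ .GroupLabels }}
{{- end }}
```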

**KeyValue type**

The `KeyValue` type is a set of key/value string pairs that represent labels and annotations.

In addition to direct access of the data stored as a `KeyValue`, there are also methods for sorting, removing, and transforming the data.


| Name | Arguments | Returns | Notes | Example | 
| --- | --- | --- | --- | --- | 
|  SortedPairs  |    |  Sorted list of key and value string pairs  |    | `{{ .Annotations.SortedPairs }}` | 
|  Remove  |  []string  |  KeyValue  |  Returns a copy of the Key/Value map without the given keys.  | `{{ .Annotations.Remove "summary" }}` | 
|  Names  |    |  []string  |  List of names  | `{{ .Names }}` | 
|  Values  |    |  []string  |  List of values  | `{{ .Values }}` | 
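As a sketch, assuming the method signatures in the table, these methods can be mixed within a single alert's context:

```
Label names: {{ .Labels.Names }}
{{ range .Labels.SortedPairs -}}
{{ .Name }} = {{ .Value }}
{{ end }}
```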

**Time**

Time is Go's [time.Time](https://pkg.go.dev/time#Time) type from the standard library. You can print a time in a number of different formats. For example, to print the time that an alert fired in the format `Monday, 1 January 2022 at 10:00AM`, you write the following template:

```
{{ .StartsAt.Format "Monday, 2 January 2006 at 3:04PM" }}
```

You can find a reference for Go’s time format [here](https://pkg.go.dev/time#pkg-constants).
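A few more layouts, each written against Go's reference time (`Mon Jan 2 15:04:05 MST 2006`); these are standard Go layout strings, shown here as a sketch:

```
{{ .StartsAt.Format "2006-01-02 15:04:05" }}
{{ .StartsAt.Format "15:04 MST" }}
{{ .StartsAt.Format "Mon, 02 Jan 2006" }}
```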

# Manage contact points
<a name="v12-alerting-manage-contactpoints"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

The **Contact points** list view lists all existing contact points and notification templates.

On the **Contact points** tab, you can:
+ Search for names and types of contact points and integrations.
+ View all existing contact points and integrations.
+ View how many notification policies each contact point is being used for, and navigate directly to the linked notification policies.
+ View the status of notification deliveries.
+ Export individual contact points or all contact points in JSON, YAML, or Terraform format.
+ Delete contact points that are not in use by a notification policy.

On the **Notification templates** tab, you can:
+ View, edit, copy, or delete existing notification templates.

# Silencing alert notifications
<a name="v12-alerting-silences"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

You can suppress alert notifications with a *silence*. A silence only stops notifications from being created: Silences do not prevent alert rules from being evaluated, and they do not stop alerting instances from being shown in the user interface. When you silence an alert, you specify a window of time for it to be suppressed.

**Note**  
To suppress alert notifications at regular time intervals, for example, during regular maintenance periods, use [Mute timings](v12-alerting-manage-muting.md) rather than silences.

**To add a silence**

1. From your Grafana console, in the Grafana menu, choose **Alerting**.

1. Choose **Silences**.

1. Choose an Alertmanager from the **Alertmanager** dropdown.

1. Choose **Create Silence**.

1. Select the start and end date in **Silence start and end** to indicate when the silence should go into effect and when it should end.

1. As an alternative to setting an end time, in **Duration**, specify how long the silence is enforced. This automatically updates the end time in the **Silence start and end** field.

1. In the **Label** and **Value** fields, enter one or more *Matching Labels*. Matchers determine which alerts the silence applies to. Any matching alerts (in the firing state) are shown in the **Affected alert instances** field.

1. Optionally, add a **Comment** describing the silence.

1. Choose **Submit**.

**To edit a silence**

1. From your Grafana console, in the Grafana menu, choose **Alerting**.

1. Choose **Silences** to view the list of existing silences.

1. Find the silence you want to edit, then choose **Edit** (pen icon).

1. Make any desired changes, then choose **Submit** to save your changes.


**To create a URL link to a silence form**

When linking to a silence form, provide the default matching labels and comment via `matcher` and `comment` query parameters. The `matcher` parameter should be in the following format `[label][operator][value]` where the `operator` parameter can be one of the following: `=` (equals, not regex), `!=` (not equals, not regex), `=~` (equals, regex), `!~` (not equals, regex). The URL can contain many query parameters with the key `matcher`. For example, to link to silence form with matching labels `severity=critical` & `cluster!~europe-.*` and comment `Silence critical EU alerts`, create a URL `https://mygrafana/alerting/silence/new?matcher=severity%3Dcritical&matcher=cluster!~europe-*&comment=Silence%20critical%20EU%20alert`.

To link to a new silence page for an external Alertmanager, add an `alertmanager` query parameter.

**To remove a silence**

1. From your Grafana console, in the Grafana menu, choose **Alerting**.

1. Choose **Silences** to view the list of existing silences.

1. Select the silence that you want to end, and choose **Unsilence**. This ends the alert suppression.

**Note**  
Unsilencing ends the alert suppression, as if the end time were set to the current time. Silences that have ended (automatically or manually) are retained and listed for five days. You cannot remove a silence from the list manually.

# View and filter alert rules
<a name="v12-alerting-manage-rules-viewfilter"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

The **Alerting** page lists alerting rules. By default, rules are grouped by types of data sources. The **Grafana** section lists rules managed by Grafana. Alert rules for Prometheus-compatible data sources are also listed here. You can view alerting rules for Prometheus-compatible data sources, but you cannot edit them.

The Mimir/Cortex/Loki rules section lists all rules for Mimir, Cortex, or Loki data sources. Cloud alert rules are also listed in this section.

When managing large volumes of alerts, you can use extended alert rule search capabilities to filter on folders, evaluation groups, and rules. Additionally, you can filter alert rules by their properties like labels, state, type, and health.

## View alert rules
<a name="v12-alerting-manage-rules-view"></a>

Using Grafana alerts, you can view all of your alerts on a single page.

**To view alerting details**

1. From your Grafana console, in the Grafana menu, choose **Alerting**, **Alert rules**. By default, the list view is displayed.

1. In **View as**, you can toggle between the Grouped, List, and State views by choosing the option you prefer.

1. Expand the rule row to view the rule labels, annotations, data sources, the rule queries, and a list of alert instances resulting from the rule.

From this page, you can also make copies of an alert rule to help you reuse existing rules.

## Export alert rules
<a name="v12-alerting-manage-rules-export"></a>

You can export rules to YAML or JSON in the Grafana workspace.
+ Choose the **Export rule group** icon next to each alert rule group to export to YAML, JSON, or Terraform.
+ Choose **Export rules** to export all Grafana managed alert rules to YAML, JSON, or Terraform.
+ Choose **More**, **Modify export** next to each individual alert rule within a group to edit provisioned alert rules and export a modified version.

## View query definitions for provisioned alerts
<a name="v12-alerting-manage-rules-querydef"></a>

View read-only query definitions for provisioned alerts. Check quickly if your alert rule queries are correct, without diving into your "as-code" repository for rule definitions.

**Grouped view**

Grouped view shows Grafana alert rules grouped by folder and Loki or Prometheus alert rules grouped by `namespace` + `group`. This is the default rule list view, intended for managing rules. You can expand each group to view a list of rules in this group. Expand a rule further to view its details. You can also expand action buttons and alerts resulting from the rule to view their details.

**State view**

State view shows alert rules grouped by state. Use this view to get an overview of which rules are in what state. Each rule can be expanded to view its details, action buttons, and any alerts generated by it, and each alert can be further expanded to view its details.

## Filtering alert rules
<a name="v12-alerting-manage-rules-filter"></a>

You can filter the alerting rules that appear on the **Alerting** page in several ways.

**To filter alert rules**

1. From **Select data sources**, select a data source. The list shows alert rules that query the selected data source.

1. In **Search by label**, enter search criteria using label selectors. For example, `environment=production;region=~US|EU,severity!=warning`.

1. From **Filter alerts by state**, select an alerting state you want to see. You can see alerting rules that match that state. Rules matching other states are hidden.

# Mute timings
<a name="v12-alerting-manage-muting"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

A mute timing is a recurring interval of time when no new notifications for a policy are generated or sent. Use them to suppress notifications during a specific, recurring period, for example, a regular maintenance period.

Similar to silences, mute timings do not prevent alert rules from being evaluated, nor do they stop alert instances from being shown in the user interface. They only prevent notifications from being created.

You can configure Grafana managed mute timings as well as mute timings for an external Alertmanager data source.

## Mute timings vs Silences
<a name="v12-alerting-manage-muting-compare"></a>

The following table highlights the differences between mute timings and silences.


| Mute timing | Silence | 
| --- | --- | 
| Uses time interval definitions that can reoccur. | Has a fixed start and end time. | 
| Is created and then added to notification policies. | Uses labels to match against an alert to determine whether to silence or not. | 

## Adding a mute timing
<a name="v12-alerting-manage-muting-add"></a>

You can create mute timings in your Grafana workspace.

**To add a mute timing**

1. From your Grafana console, in the Grafana menu, choose **Alerting**.

1. Choose **Notification policies**, and then select the **Mute Timings** tab.

1. From the **Alertmanager** dropdown, select the Alertmanager you want to edit.

1. Choose the **+ Add mute timing** button.

1. Fill out the form to create a [time interval](#v12-alerting-manage-muting-interval) to match against for your mute timing.

1. Save your mute timing.

## Adding a mute timing to a notification policy
<a name="v12-alerting-manage-muting-add-notif"></a>

Once you have a mute timing, you use it by adding it to a notification policy that you want to mute at regular intervals.

**To add a mute timing to a notification policy**

1. From your Grafana console, in the Grafana menu, choose **Alerting**.

1. Choose **Notification policies**, and then select the **Notification Policies** tab.

1. Select the notification policy you would like to add the mute timing to, and choose **...**, **Edit**.

1. From the **Mute timings** dropdown, select the mute timings you would like to add to the policy.

1. Save your changes.

## Time intervals
<a name="v12-alerting-manage-muting-interval"></a>

A time interval is a specific duration during which alerts are suppressed. The duration typically consists of a specific time range and the days of the week, month, or year. 

Supported time interval options are:
+ **Time range** – The time range, inclusive of the start time and exclusive of the end time (in UTC if no location has been selected, otherwise local time).
+ **Location** – Sets the location for the timing. The time range is displayed in local time for that location.
+ **Days of the week** – The day or range of days of the week. For example, `monday:thursday`.
+ **Days of the month** – The dates within a month. Values can range from `1`-`31`. Negative values specify days of the month in reverse order, so `-1` represents the last day of the month.
+ **Months** – The months of the year, in either numerical or full calendar month form. For example, `1, may:august`.
+ **Years** – The year or years for the interval. For example, `2023:2024`.

Each of these elements can be a list, and at least one item in the element must be satisfied to be a match. Fields also support ranges, using `:`. For example, `monday:thursday`.

If a field is left blank, any moment of time matches it. For an instant of time to match a complete time interval, all fields must match. A mute timing can contain multiple time intervals.

If you want to specify an exact duration, specify all the options needed for that duration. For example, if you want to create a time interval for the first Monday of the month, for March, June, September, and December, between the hours of 12:00 and 24:00 UTC, your time interval specification could be:
+ Time range:
  + Start time: `12:00`
  + End time: `24:00`
+ Days of the week: `monday`
+ Months: `3, 6, 9, 12`
+ Days of the month: `1:7`

# View the state and health of alert rules
<a name="v12-alerting-manage-rulestate"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

The state and health of alert rules gives you several key status indicators about your alerts.

There are three components:
+ [Alert rule state](#v12-alerting-manage-rulestate-state)
+ [Alert instance state](#v12-alerting-manage-rulestate-instance)
+ [Alert rule health](#v12-alerting-manage-rulestate-health)

Although related, each component conveys subtly different information.

**To view the state and health of your alert rules**

1. From your Grafana console, in the Grafana menu, choose **Alerting**.

1. Choose **Alert rules** to view the list of existing alerts.

1. Choose an alert rule to view its state and health.

## Alert rule state
<a name="v12-alerting-manage-rulestate-state"></a>

An alert rule can be in any of the following states:


| State | Description | 
| --- | --- | 
| Normal | None of the time series returned by the evaluation engine is in a pending or firing state. | 
| Pending | At least one time series returned by the evaluation engine is pending. | 
| Firing | At least one time series returned by the evaluation engine is firing. | 

**Note**  
Alerts transition first to `pending` and then `firing`, thus it takes at least two evaluation cycles before an alert is fired.

## Alert instance state
<a name="v12-alerting-manage-rulestate-instance"></a>

An alert instance can be in any of the following states:


| State | Description | 
| --- | --- | 
| Normal | The state of an alert that is neither pending nor firing. Everything is working as expected. | 
| Pending | The state of an alert that has been active for less than the configured threshold duration. | 
| Alerting | The state of an alert that has been active for longer than the configured threshold duration. | 
| No data | No data has been received for the configured time window. | 
| Error | An error occurred when attempting to evaluate an alerting rule. | 

## Keep last state
<a name="v12-alerting-manage-rulestate-keepstate"></a>

An alert rule can be configured to keep the last state when a `NoData` or `Error` state is encountered. This prevents alerts both from firing and from resolving and re-firing. Just like normal evaluation, the alert rule transitions from `pending` to `firing` after the pending period has elapsed.

## Alert rule health
<a name="v12-alerting-manage-rulestate-health"></a>

An alert rule can have one of the following health statuses.


| State | Description | 
| --- | --- | 
| Ok | No errors when evaluating the alert rule. | 
| Error | An error occurred when evaluating the alert rule. | 
| NoData | The absence of data in at least one time series returned during a rule evaluation. | 
| *status*, KeepLast | The rule would have received another status, but was configured to keep the last state of the alert rule. | 

## Special alerts for NoData and Error
<a name="v12-alerting-manage-rulestate-special"></a>

When evaluation of an alert rule produces the state `NoData` or `Error`, Grafana alerting will generate alert instances that have the following additional labels.


| Label | Description | 
| --- | --- | 
| alertname | Either DatasourceNoData or DatasourceError, depending on the state. | 
| datasource_uid | The UID of the data source that caused the state. | 

**Note**  
To generate the additional labels, set the rule's no data or error handling to `NoData` or `Error`, as described in the [Configure Grafana managed alert rules](v12-alerting-configure-grafanamanaged.md) topic.

You can handle these alerts the same way as regular alerts, including adding silences, routing to a contact point, and so on.
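
The labels in the table above could be constructed like this. The helper function itself is hypothetical and shown only to make the label scheme concrete; the label names and values match the table.

```python
def special_alert_labels(state, datasource_uid):
    """Build the additional labels attached to NoData/Error alert instances."""
    if state not in ("NoData", "Error"):
        raise ValueError("special alerts exist only for NoData and Error")
    alertname = "DatasourceNoData" if state == "NoData" else "DatasourceError"
    return {"alertname": alertname, "datasource_uid": datasource_uid}
```

A silence or notification policy that matches on `alertname=DatasourceError`, for example, would then catch every data source error alert regardless of which rule produced it.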

# View and filter by alert groups
<a name="v12-alerting-manage-viewfiltergroups"></a>


Alert groups show grouped alerts from an Alertmanager instance. By default, alerts are grouped by the label keys of the default policy in notification policies. Grouping related alerts into a single alert group prevents duplicate notifications from being sent for them.

You can view alert groups and also filter for alert rules that match specific criteria.

**To view alert groups**

1. From your Grafana console, in the Grafana menu, choose **Alerting**.

1. Choose **Groups** to view existing groups.

1. From the **Alertmanager** dropdown, select an external Alertmanager as your data source.

1. From the **Custom group by** dropdown, select a combination of labels to view a grouping other than the default. This is useful for debugging and verifying your grouping of notification policies.

If an alert does not contain the labels specified in either the root policy grouping or the custom grouping, the alert is added to a catch-all group with the header `No grouping`.
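
The grouping behavior, including the `No grouping` catch-all, can be sketched as follows. This is an illustrative sketch, not Grafana's implementation; the function and the sample label sets are hypothetical.

```python
from collections import defaultdict

def group_alerts(alerts, group_by):
    """Group alert label-sets by the configured group-by label keys.

    Alerts carrying none of the group-by labels fall into the
    "No grouping" catch-all, mirroring the behavior described above.
    """
    groups = defaultdict(list)
    for labels in alerts:
        if not any(key in labels for key in group_by):
            groups["No grouping"].append(labels)
            continue
        key = tuple((k, labels.get(k, "")) for k in group_by)
        groups[key].append(labels)
    return dict(groups)

alerts = [
    {"alertname": "HighCPU", "cluster": "prod"},
    {"alertname": "HighCPU", "cluster": "dev"},
    {"alertname": "Heartbeat"},                 # no "cluster" label
]
groups = group_alerts(alerts, ["cluster"])
# Three groups: ("cluster", "prod"), ("cluster", "dev"), and "No grouping"
```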

You can filter alerts by label or by state.

**To filter by label**
+ In **Search**, enter an existing label to view alerts matching the label.

  For example, `environment=production,region=~US|EU,severity!=warning`.
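
The matcher syntax in the example above supports four operators: `=` (equals), `!=` (not equals), `=~` (regex match), and `!~` (regex non-match). The following is a simplified sketch of how such an expression could be evaluated, not the exact parser Grafana uses; it does not handle commas or quoting inside regex values.

```python
import re

def matches(labels, expr):
    """Evaluate a comma-separated matcher expression against a label set."""
    for matcher in expr.split(","):
        m = re.match(r"^\s*(\w+)\s*(=~|!~|!=|=)\s*(.*)$", matcher)
        if not m:
            raise ValueError(f"bad matcher: {matcher!r}")
        name, op, value = m.groups()
        actual = labels.get(name, "")
        ok = {
            "=":  actual == value,
            "!=": actual != value,
            "=~": re.fullmatch(value, actual) is not None,
            "!~": re.fullmatch(value, actual) is None,
        }[op]
        if not ok:
            return False        # every matcher must hold
    return True

labels = {"environment": "production", "region": "US", "severity": "critical"}
matches(labels, "environment=production,region=~US|EU,severity!=warning")  # True
```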

**To filter by state**
+ In **States**, select from Active, Suppressed, or Unprocessed states to view alerts matching your selected state. All other alerts are hidden.

# View notification errors
<a name="v12-alerting-manage-viewnotificationerrors"></a>


View notification errors and understand why they failed to be sent or were not received.

**Note**  
This feature is only supported for Grafana Alertmanager.

**To view notification errors**

1. From the left menu, choose **Alerting** then **Contact points**.

   If any contact points are failing, a message in the right-hand corner of the workspace tells you that there are errors, and how many.

1. Select a contact point to view the details of errors for that contact point.

   Error details are displayed if you hover over the Error icon.

   If a contact point has more than one integration, you see all errors for each of the integrations listed.

1. In the Health column, check the status of the notification.

   This can be OK, No attempts, or Error.