

# Configure alerting
<a name="v12-alerting-configure"></a>

This documentation topic is designed for Grafana workspaces that support **Grafana version 12.x**.  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 9.x, see [Working in Grafana version 9](using-grafana-v9.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Configure the features and integrations that you need to create and manage your alerts.

**Topics**
+ [Configure Grafana managed alert rules](v12-alerting-configure-grafanamanaged.md)
+ [Configure data source managed alert rules](v12-alerting-configure-datasourcemanaged.md)
+ [Configure recording rules](v12-alerting-configure-recordingrules.md)
+ [Configure contact points](v12-alerting-configure-contactpoints.md)
+ [Configure notification policies](v12-alerting-configure-notification-policies.md)

# Configure Grafana managed alert rules
<a name="v12-alerting-configure-grafanamanaged"></a>


Grafana-managed rules are the most flexible alert rule type. They can act on data from any supported data source, and you can add expressions that transform your data and set alert conditions. Images in alert notifications are also supported. This is the only rule type that allows alerting on multiple data sources in a single rule definition.

Multiple alert instances can be created as a result of one alert rule (also known as multi-dimensional alerting).

Grafana managed alert rules can only be edited or deleted by users with Edit permissions for the folder storing the rules.

If you delete an alerting resource created in the UI, you can no longer retrieve it. To make a backup of your configuration and to be able to restore deleted alerting resources, create your alerting resources using Terraform, or the Alerting API.

The following procedures walk through creating a Grafana-managed alert rule with the in-workspace alert creation flow.

**Set alert rule name**

1. Choose **Alerting** -> **Alert rules** -> **+ New alert rule**.

1. Enter a name to identify your alert rule.

   This name is displayed in the alert rule list. It is also the `alertname` label for every alert instance that is created from this rule.

Next, define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.

**To define the query and condition**

1. Select a data source.

1. From the **Options** dropdown, specify a [time range](v12-dash-using-dashboards.md#v12-dash-setting-dashboard-time-range).
**Note**  
Grafana Alerting only supports fixed relative time ranges, for example, `now-24hr: now`.  
It does not support absolute time ranges: `2021-12-02 00:00:00 to 2021-12-05 23:59:59`, or semi-relative time ranges: `now/d to: now`.

1. Add a query.

   To add multiple [queries](v12-panels-query-xform.md#v12-panels-query-xform-add), choose **Add query**.

   All alert rules are managed by Grafana by default. If you want to switch to a data source-managed alert rule, click **Switch to data source-managed alert rule**.

1. Add one or more [expressions](v12-panels-query-xform-expressions.md).

   1. For each expression, select either **Classic condition** to create a single alert rule, or choose from the **Math**, **Reduce**, and **Resample** options to generate a separate alert for each series.
**Note**  
When using Prometheus, you can use an instant vector and built-in functions, so you don’t need to add additional expressions.

   1. Choose **Preview** to verify that the expression is successful.

1. [Optional] To add a recovery threshold, turn the **Custom recovery threshold** toggle on and fill in a value for when your alert rule should stop firing.

   You can only add one recovery threshold in a query and it must be the alert condition.

1. Choose **Set as alert condition** on the query or expression you want to set as your alert condition.
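The time-range restriction in the note above can be illustrated with a small sketch. This is a hypothetical helper, not part of Grafana: it resolves a fixed relative range such as `now-24hr` into concrete timestamps at evaluation time, and rejects anything else.

```python
import re
from datetime import datetime, timedelta, timezone

# Hypothetical helper (not Grafana code): resolve a fixed relative time
# range like "now-24hr" into a concrete (start, end) pair ending at now.
# Absolute and semi-relative ranges are rejected, mirroring the note above.
_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "hr": 3600, "d": 86400}

def resolve_relative_range(expr, now=None):
    """Resolve 'now-<N><unit>' to (start, end), where end is always now."""
    now = now or datetime.now(timezone.utc)
    match = re.fullmatch(r"now-(\d+)(hr|[smhd])", expr)
    if match is None:
        raise ValueError("only fixed relative ranges like 'now-24hr' are supported")
    amount, unit = match.groups()
    return now - timedelta(seconds=int(amount) * _UNIT_SECONDS[unit]), now
```

Because the range is re-resolved at every evaluation, the queried window slides forward with the clock, which is why fixed absolute ranges are not meaningful for alert rules.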

Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.

To do this, you need to make sure that your alert rule is in the right evaluation group and set a pending period time that works best for your use case.

**To set alert evaluation behavior**

1. Select a folder or choose **+ New folder**.

1. Select an evaluation group or click **+ New evaluation group**.

   If you are creating a new evaluation group, specify the interval for the group.

   All rules within the same group are evaluated concurrently over the same time interval.

1. Enter a pending period.

   The pending period is the period in which an alert rule can be in breach of the condition until it fires.

   Once a condition is met, the alert goes into the **Pending** state. If the condition remains active for the specified duration, the alert transitions to the **Firing** state; otherwise it reverts to the **Normal** state.

1. Turn on pause alert notifications, if required.
**Note**  
You can pause alert rule evaluation to prevent noisy alerting while tuning your alerts. Pausing stops alert rule evaluation and does not create any alert instances. This is different from mute timings, which stop notifications from being delivered but still allow alert rule evaluation and the creation of alert instances.

1. In **Configure no data and error handling**, configure alerting behavior in the absence of data.

   Use the guidelines later in this section.
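The pending-period behavior described in the steps above can be sketched as a small state function. This is an assumed model, not Grafana source code:

```python
# Minimal sketch (assumed model, not Grafana source) of how the pending
# period drives the Normal -> Pending -> Firing transitions.
def next_state(condition_met, breach_seconds, pending_period_seconds):
    """Return the alert state after one evaluation.
    breach_seconds: how long the condition has been continuously met."""
    if not condition_met:
        return "Normal"   # condition cleared: revert to Normal
    if breach_seconds >= pending_period_seconds:
        return "Firing"   # in breach for the whole pending period
    return "Pending"      # in breach, but not yet long enough
```

A longer pending period reduces alert noise from brief spikes, at the cost of delaying the first notification.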

Add labels to your alert rules to set which notification policy should handle your firing alert instances.

All alert rules and instances, irrespective of their labels, match the default notification policy. If there are no nested policies, or no nested policies match the labels in the alert rule or alert instance, then the default notification policy is the matching policy.

**To configure notifications**

1. Add labels if you want to change the way your notifications are routed.

   Add custom labels by selecting existing key-value pairs from the dropdown, or add new labels by entering the new key or value.

1. Preview your alert instance routing setup.

   Based on the labels added, alert instances are routed to the notification policies displayed.

   Expand each notification policy to view more details.

1. Choose **See details** to view alert routing details and a preview.

Add [annotations](v12-alerting-overview-labels.md#v12-alerting-overview-labels-annotations) to provide more context on the alert in your alert notification message.

Annotations add metadata to provide more information on the alert in your alert notification message. For example, add a **Summary** annotation to tell you which value caused the alert to fire or which server it happened on.

**To add annotations**

1. [Optional] Add a summary.

   Short summary of what happened and why.

1. [Optional] Add a description.

   Description of what the alert rule does.

1. [Optional] Add a Runbook URL.

   Webpage where you keep your runbook for the alert.

1. [Optional] Add a custom annotation.

1. [Optional] Add a dashboard and panel link.

   Links alerts to panels in a dashboard.

1. Choose **Save rule**.

**Single and multi-dimensional rules**

For Grafana managed alerts, you can create a rule with a classic condition or you can create a multi-dimensional rule.
+ **Rule with classic condition**

  Use the classic condition expression to create a rule that triggers a single alert when its condition is met. For a query that returns multiple series, Grafana does not track the alert state of each series. As a result, Grafana sends only a single alert even when alert conditions are met for multiple series.
+ **Multi-dimensional rule**

  To generate a separate alert for each series, create a multi-dimensional rule. Use `Math`, `Reduce`, or `Resample` expressions to create a multi-dimensional rule. For example:
  + Add a `Reduce` expression for each query to aggregate values in the selected time range into a single value (not needed for [rules using numeric data](v12-alerting-overview-numeric.md)).
  + Add a `Math` expression with the condition for the rule. This is not needed if a query or reduce expression already returns `0` when the rule should not fire, or a positive number when it should fire. For example, `$B > 70` fires when the value of query or expression B is greater than 70, and `$B < $C * 100` fires when the value of B is less than the value of C multiplied by 100. If the queries being compared return multiple series, series from different queries are matched when they have the same labels or when the labels of one are a subset of the other.
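The Reduce and Math steps above can be sketched under assumed data shapes (series keyed by their label sets); this is an illustration, not Grafana's evaluation engine:

```python
# Sketch of multi-dimensional evaluation: a Reduce expression collapses
# each labelled series to one number, then a Math condition such as
# $B > 70 yields one firing/normal result per series.
def reduce_last(series):
    """Reduce: keep the last value of each series, keyed by its labels."""
    return {labels: values[-1] for labels, values in series.items()}

def math_greater_than(reduced, threshold):
    """Math: one firing (True) or normal (False) result per series."""
    return {labels: value > threshold for labels, value in reduced.items()}
```

Each label set that evaluates to True becomes its own alert instance, which is what makes the rule multi-dimensional.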

**Note**  
Grafana does not support alert queries with template variables. More information is available at [https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514](https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514).

**Configure no data and error handling**

Configure alerting behavior when your alert rule evaluation returns no data or an error.

**Note**  
Alert rules that are configured to fire when an evaluation returns no data or error only fire when the entire duration of the evaluation period has finished. This means that rather than immediately firing when the alert rule condition is breached, the alert rule waits until the time set as the **For** field has finished and then fires, reducing alert noise and allowing for temporary data availability issues.

If your alert rule evaluation returns no data, you can set the state on your alert rule to appear as follows:


| Option | Description |
| --- | --- | 
| No Data | Creates a new alert DatasourceNoData with the name and UID of the alert rule, and UID of the datasource that returned no data as labels. | 
| Alerting | Sets alert rule state to Alerting. The alert rule waits until the time set in the For field has finished before firing. | 
| Ok | Sets alert rule state to Normal. | 

If your evaluation returns an error, you can set the state on your alert rule to appear as follows:


| Option | Description |
| --- | --- | 
| Error | Creates an alert instance DatasourceError with the name and UID of the alert rule, and UID of the datasource that returned the error, as labels. | 
| Alerting | Sets alert rule state to Alerting. The alert rule waits until the time set in the For field has finished before firing. | 
| Ok | Sets alert rule state to Normal. | 

**Resolve stale alert instances**

An alert instance is considered stale if its dimension or series has disappeared from the query results entirely for two evaluation intervals.

Stale alert instances that are in the `Alerting`/`NoData`/`Error` states are automatically marked as `Resolved` and the `grafana_state_reason` annotation is added to the alert instance with the reason `MissingSeries`.
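The staleness rule above can be sketched with assumed bookkeeping (a per-instance missed-evaluation counter); this is not Grafana source code:

```python
# Sketch (assumed bookkeeping, not Grafana source): series absent from
# the query results for two consecutive evaluations are marked Resolved
# with the grafana_state_reason annotation set to MissingSeries.
def resolve_stale(instances, current_series):
    for fingerprint, inst in instances.items():
        if fingerprint in current_series:
            inst["missed"] = 0          # series is back: reset the counter
            continue
        inst["missed"] = inst.get("missed", 0) + 1
        if inst["missed"] >= 2 and inst["state"] in ("Alerting", "NoData", "Error"):
            inst["state"] = "Resolved"
            inst["grafana_state_reason"] = "MissingSeries"
```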

**Create alerts from panels**

Create alerts from any panel type. This means you can reuse the queries in the panel and create alerts based on them.

1. Navigate to a dashboard in the **Dashboards** section.

1. In the top right corner of the panel, choose the three dots (ellipsis).

1. From the dropdown menu, select **More…** and then choose **New alert rule**.

This will open the alert rule form, allowing you to configure and create your alert based on the current panel’s query.

# Configure data source managed alert rules
<a name="v12-alerting-configure-datasourcemanaged"></a>


Create alert rules for an external Grafana Mimir or Loki instance that has ruler API enabled; these are called data source managed alert rules.

**Note**  
Alert rules for an external Grafana Mimir or Loki instance can be edited or deleted by users with Editor or Admin roles.  
If you delete an alerting resource created in the UI, you can no longer retrieve it. To make a backup of your configuration and to be able to restore deleted alerting resources, create your alerting resources using Terraform, or the Alerting API.

**Prerequisites**
+ Verify that you have write permission to the Prometheus or Loki data source. Otherwise, you will not be able to create or update Grafana Mimir managed alert rules.
+ For Grafana Mimir and Loki data sources, enable the Ruler API by configuring their respective services.
  + **Loki** - The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other rule storage types.
  + **Grafana Mimir** - use the `/prometheus` prefix. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the [Query API](https://grafana.com/docs/mimir/latest/operators-guide/reference-http-api/#querier--query-frontend) and [Ruler API](https://grafana.com/docs/mimir/latest/operators-guide/reference-http-api/#ruler) are under the same URL. You cannot provide a separate URL for the Ruler API.

**Note**  
If you do not want to manage alert rules for a particular Loki or Prometheus data source, go to its settings and clear the **Manage alerts via alerting UI** checkbox.

The following procedures walk through creating a data source managed alert rule with the in-workspace alert creation flow.

**To set the alert rule name**

1. Choose **Alerting** -> **Alert rules** -> **+ New alert rule**.

1. Enter a name to identify your alert rule.

   This name is displayed in the alert rule list. It is also the `alertname` label for every alert instance that is created from this rule.

Define a query to get the data you want to measure and a condition that needs to be met before an alert rule fires.

**To define query and condition**

1. All alert rules are managed by Grafana by default. To switch to a data source managed alert rule, choose **Switch to data source-managed alert rule**.

1. Select a data source from the drop-down list.

   You can also choose **Open advanced data source picker** to see more options, including adding a data source (Admins only).

1. Enter a PromQL or LogQL query.

1. Choose **Preview alerts**.

Use alert rule evaluation to determine how frequently an alert rule should be evaluated and how quickly it should change its state.

**To set alert evaluation behavior**

1. Select a namespace or choose **+ New namespace**.

1. Select an evaluation group or choose **+ New evaluation group**.

   If you are creating a new evaluation group, specify the interval for the group.

   All rules within the same group are evaluated sequentially over the same time interval.

1. Enter a pending period.

   The pending period is the period in which an alert rule can be in breach of the condition until it fires.

   Once a condition is met, the alert goes into the `Pending` state. If the condition remains active for the specified duration, the alert transitions to the `Firing` state; otherwise it reverts to the `Normal` state.

Add labels to your alert rules to set which notification policy should handle your firing alert instances.

All alert rules and instances, irrespective of their labels, match the default notification policy. If there are no nested policies, or no nested policies match the labels in the alert rule or alert instance, then the default notification policy is the matching policy.

**Configure notifications**
+ Add labels if you want to change the way your notifications are routed.

  Add custom labels by selecting existing key-value pairs from the dropdown, or add new labels by entering the new key or value.

Add [annotations](v12-alerting-overview-labels.md#v12-alerting-overview-labels-annotations) to provide more context on the alert in your alert notifications.

Annotations add metadata to provide more information on the alert in your alert notifications. For example, add a `Summary` annotation to tell you which value caused the alert to fire or which server it happened on.

**To add annotations**

1. [Optional] Add a summary.

   Short summary of what happened and why.

1. [Optional] Add a description.

   Description of what the alert rule does.

1. [Optional] Add a Runbook URL.

   Webpage where you keep your runbook for the alert.

1. [Optional] Add a custom annotation.

1. [Optional] Add a dashboard and panel link.

   Links alerts to panels in a dashboard.

1. Choose **Save rule**.

# Configure recording rules
<a name="v12-alerting-configure-recordingrules"></a>


You can create and manage recording rules for an external Grafana Mimir or Loki instance. Recording rules calculate frequently needed expressions or computationally expensive expressions in advance and save the result as a new set of time series. Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh.

**Note**  
Recording rules are run as instant rules, and run every 10 seconds.

**Prerequisites**
+ Verify that you have write permissions to the Prometheus or Loki data source. You will be creating or updating alerting rules in your data source.
+ For Grafana Mimir and Loki data sources, enable the ruler API by configuring their respective services.
  + **Loki** – The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other storage types.
  + **Grafana Mimir** – Use the `/prometheus` prefix. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the Query API and Ruler API are under the same URL. You cannot provide a separate URL for the Ruler API.

**Note**  
If you do not want to manage alerting rules for a particular Loki or Prometheus data source, go to its settings and clear the **Manage alerts via Alerting UI** check box.

**To create recording rules**

1. From your Grafana console, in the Grafana menu, choose **Alerting**, **Alert rules**.

1. Choose **New recording rule**.

1. Set rule name.

   The recording rule name must be a Prometheus metric name and contain no whitespace.

1. Define query
   + Select your Loki or Prometheus data source.
   + Enter a query.

1. Add namespace and group.
   + From the **Namespace** dropdown, select an existing rule namespace or add a new one. Namespaces can contain one or more rule groups and only have an organizational purpose.
   + From the **Group** dropdown, select an existing group within the selected namespace or add a new one. Newly created rules are appended to the end of the group. Rules within a group are run sequentially at a regular interval, with the same evaluation time.

1. Add labels.
   + Add custom labels by selecting existing key-value pairs from the dropdown, or add new labels by entering the new key or value.

1. Choose **Save rule** to save the rule, or **Save rule and exit** to save the rule and go back to the Alerting page.
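The rule-name requirement in step 3 can be checked with the standard metric-name pattern from the Prometheus data model; this small validator is an illustration, not part of Grafana:

```python
import re

# A recording rule name must be a valid Prometheus metric name with no
# whitespace; this uses the metric-name pattern from the Prometheus
# data model: [a-zA-Z_:][a-zA-Z0-9_:]*
_METRIC_NAME = re.compile(r"^[a-zA-Z_:][a-zA-Z0-9_:]*$")

def is_valid_recording_rule_name(name):
    return bool(_METRIC_NAME.match(name))
```

By convention, recording rule names use colon-separated parts such as `level:metric:operations`, for example `job:http_errors:rate5m`.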

# Configure contact points
<a name="v12-alerting-configure-contactpoints"></a>


Use contact points to define how your contacts are notified when an alert rule fires.

**Note**  
You can create and edit contact points for Grafana managed alerts. Contact points for data source managed alerts are read-only.

## Working with contact points
<a name="v12-alerting-configure-contactpoints-working"></a>

The following procedures show how to add, edit, delete, and test a contact point.

**To add a contact point**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points**.

1. From the **Choose Alertmanager** dropdown, select an Alertmanager. The Grafana Alertmanager is selected by default.

1. On the **Contact Points** tab, choose **+ Add contact point**.

1. Enter a **Name** for the contact point.

1. From **Integration**, choose a type, and fill out the mandatory fields based on that type. For example, if you choose Slack, enter the Slack channels and users who should be contacted.

1. If available for the contact point you selected, choose any desired **Optional settings** to specify additional settings.

1. Under **Notification settings**, optionally select **Disable resolved message** if you do not want to be notified when an alert resolves.

1. To add another contact point integration, choose **Add contact point integration** and repeat the steps for each contact point type needed.

1. Save your changes.

**To edit a contact point**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points** to see a list of existing contact points.

1. Select the contact point to edit, then choose **Edit**.

1. Update the contact point, and then save your changes.

You can delete contact points that are not in use by a notification policy.

**To delete a contact point**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points** to open the list of existing contact points.

1. On the **Contact points** tab, select the contact point to delete, then choose **More**, **Delete**.

1. In the confirmation dialog box, choose **Yes, delete**.

**Note**  
If the contact point is in use by a notification policy, you must delete the notification policy or edit it to use a different contact point before deleting the contact point.

After your contact point is created, you can send a test notification to verify that it is configured properly.

**To send a test notification**

1. In the left-side menu, choose **Alerting**.

1. Choose **Contact points** to open the list of existing contact points.

1. On the **Contact points** tab, select the contact point to test, then choose **Edit**. You can also create a new contact point if needed.

1. Choose **Test** to open the contact point testing dialog.

1. Choose whether to send a predefined test notification or choose **Custom** to add your own custom annotations and labels in the test notification.

1. Choose **Send test notification** to test the alert with the given contact points.

## Configure contact point integrations
<a name="v12-alerting-configure-contactpoints-integration"></a>

Configure contact point integrations in Grafana to select your preferred communication channel for receiving notifications when your alert rules are firing. Each integration has its own configuration options and setup process. In most cases, this involves providing an API key or a Webhook URL.

Once configured, you can use integrations as part of your contact points to receive notifications whenever your alert changes its state. In this section, we’ll cover the basic steps to configure an integration, using PagerDuty as an example, so you can start receiving real-time alerts and stay on top of your monitoring data.

**List of supported integrations**

The following table lists the contact point types supported by Grafana.


| Name | Type | 
| --- | --- | 
| Amazon SNS | `sns` | 
| OpsGenie | `opsgenie` | 
| PagerDuty | `pagerduty` | 
| Slack | `slack` | 
| VictorOps | `victorops` | 

**Configuring PagerDuty for alerting**

To set up PagerDuty, you must provide an integration key. Provide the following details.


| Setting | Description | 
| --- | --- | 
| Integration Key | Integration key for PagerDuty | 
| Severity | Level for dynamic notifications. Default is critical. | 
| Custom Details | Additional details about the event | 

The `CustomDetails` field is an object containing arbitrary key-value pairs. The user-defined details are merged with the ones used by default.

The default values for `CustomDetails` are:

```
{
	"firing":       `{{ template "__text_alert_list" .Alerts.Firing }}`,
	"resolved":     `{{ template "__text_alert_list" .Alerts.Resolved }}`,
	"num_firing":   `{{ .Alerts.Firing | len }}`,
	"num_resolved": `{{ .Alerts.Resolved | len }}`,
}
```

In case of duplicate keys, the user-defined details overwrite the default ones.
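The merge behavior can be sketched in a few lines; user-supplied details are layered over the defaults shown above, so duplicate keys take the user-defined value:

```python
# Sketch of the Custom Details merge: user-defined keys win on collision
# with the defaults shown above.
DEFAULT_DETAILS = {
    "firing":       '{{ template "__text_alert_list" .Alerts.Firing }}',
    "resolved":     '{{ template "__text_alert_list" .Alerts.Resolved }}',
    "num_firing":   '{{ .Alerts.Firing | len }}',
    "num_resolved": '{{ .Alerts.Resolved | len }}',
}

def merge_custom_details(user_details):
    return {**DEFAULT_DETAILS, **user_details}
```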

# Configure notification policies
<a name="v12-alerting-configure-notification-policies"></a>


Notification policies determine how alerts are routed to contact points.

Policies have a tree structure, where each policy can have one or more nested policies. Each policy, except for the default policy, can also match specific alert labels.

Each alert is evaluated by the default policy and subsequently by each nested policy.

If you enable the `Continue matching subsequent sibling nodes` option for a nested policy, then evaluation continues even after one or more matches. A parent policy’s configuration settings and contact point information govern the behavior of an alert that does not match any of the child policies. A default policy governs any alert that does not match a nested policy.

For more information on notification policies, see [Notifications](v12-alerting-explore-notifications.md).
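The routing behavior described above can be sketched as a recursive walk over the policy tree. The policy shape here (`contact_point`, `matchers`, `routes`, `continue`) is an assumed model for illustration, not Grafana's actual data structure:

```python
# Minimal sketch of policy-tree routing (assumed data model, not
# Grafana source): an alert is matched against nested policies;
# "continue" keeps matching siblings; if no nested policy matches,
# the current policy handles the alert itself.
def route(alert_labels, policy):
    """Return the contact points that receive this alert."""
    matched = []
    for child in policy.get("routes", []):
        if all(alert_labels.get(k) == v for k, v in child.get("matchers", {}).items()):
            matched += route(alert_labels, child)
            if not child.get("continue", False):
                break   # stop unless "continue matching siblings" is set
    # no nested policy matched: fall back to this policy's contact point
    return matched or [policy["contact_point"]]
```

Calling `route` with the tree's root (the default policy) shows why every alert always has at least one matching policy.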

The following procedures show you how to create and manage notification policies.

**To edit the default notification policy**

1. In the left-side menu, choose **Alerting**.

1. Choose **Notification policies**.

1. From the **Choose Alertmanager** dropdown, select the Alertmanager you want to edit.

1. In the **Default policy** section, choose **...**, **Edit**.

1. In **Default contact point**, update the contact point where notifications are sent when an alert rule does not match any specific policy.

1. In **Group by**, choose the labels to group alerts by. If multiple alerts are matched for this policy, then they are grouped by these labels. A notification is sent per group. If the field is empty (the default), then all notifications are sent in a single group. Use the special label `...` to group alerts by all labels (which effectively disables grouping).

1. In **Timing options**, select from the following options.
   + **Group wait** – Time to wait to buffer alerts of the same group before sending an initial notification. The default is 30 seconds.
   + **Group interval** – Minimum time interval between two notifications for a group. The default is 5 minutes.
   + **Repeat interval** – Minimum time interval before resending a notification if no new alerts were added to the group. The default is 4 hours.

1. Choose **Save** to save your changes.
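The **Group by** behavior in step 6 can be sketched as follows; alerts are plain label dictionaries here for illustration:

```python
from collections import defaultdict

# Sketch of "Group by": alerts sharing values for the chosen labels land
# in one notification group; with no labels, everything is one group.
def group_alerts(alerts, group_by):
    groups = defaultdict(list)
    for alert in alerts:
        key = tuple(alert.get(label) for label in group_by)  # () when group_by is empty
        groups[key].append(alert)
    return dict(groups)
```

One notification is then sent per key in the resulting mapping, subject to the **Group wait**, **Group interval**, and **Repeat interval** timings.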

To create a new notification policy, follow the tree structure. New policies created under the trunk of the tree (the default policy) are its branches, and each branch can have its own nested policies. This is why you always add a new **nested** policy, either under the default policy or under an existing nested policy.

**To add a new nested policy**

1. In the left-side menu, choose **Alerting**.

1. Choose **Notification policies**.

1. From the **Choose Alertmanager** dropdown, select the Alertmanager you want to edit.

1. To add a nested policy, go to the **Specific routing** section (either under the default policy, or under another existing policy where you want to add a new nested policy) and choose **+ New nested policy**.

1. In the matching labels section, add one or more rules for matching alert labels.

1. In the **Contact point** dropdown, select the contact point to send notifications to if an alert matches only this specific policy and not any of the nested policies.

1. Optionally, enable **Continue matching subsequent sibling nodes** to continue matching sibling policies even after the alert matched the current policy. When this option is enabled, you can get more than one notification for one alert.

1. Optionally, enable **Override grouping** to specify a grouping different from the default policy. If this option is not enabled, the default policy grouping is used.

1. Optionally, enable **Override general timings** to override the timing options configured in the group notification policy.

1. Choose **Save policy** to save your changes.

**To edit a nested policy**

1. In the left-side menu, choose **Alerting**.

1. Choose **Notification policies**.

1. Select the policy that you want to edit, then choose **...**, **Edit**.

1. Make any changes (as when adding a nested policy).

1. Save your changes.

**Searching for policies**

You can search within the tree of policies by *Label matchers* or *contact points*.
+ To search by contact point, enter a partial or full name of a contact point in the **Search by contact point** field. The policies that use that contact point will be highlighted in the user interface.
+ To search by label, enter a valid label matcher in the **Search by matchers** input field. Multiple matchers can be entered, separated by a comma. For example, a valid matcher input could be `severity=high, region=~EMEA|NA`.
**Note**  
When searching by label, all matched policies will be exact matches. Partial matches and regex-style matches are not supported.
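The matcher syntax accepted by the search field can be sketched with a small parser; this is an illustration of the input format, not Grafana's implementation. Note that, per the note above, the search itself only finds policies whose matchers exactly match what you enter:

```python
import re

# Sketch of parsing the search-field matcher syntax: "=" is the exact
# operator and "=~" is the regex-style operator. The search finds only
# policies whose matchers are exact matches for the entered ones.
def parse_matchers(query):
    matchers = []
    for part in query.split(","):
        label, op, value = re.match(r"\s*([^=~\s]+)\s*(=~|=)\s*(.*?)\s*$", part).groups()
        matchers.append((label, op, value))
    return matchers

def policy_found(policy_matchers, query):
    """True if every entered matcher appears verbatim on the policy."""
    return all(m in policy_matchers for m in parse_matchers(query))
```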