

 Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Monitoring queries and workloads with Amazon Redshift Serverless
<a name="serverless-monitoring"></a>

You can monitor your Amazon Redshift Serverless queries and workload with the provided system views. 

*Monitoring views* are system views in Amazon Redshift Serverless that are used to monitor query and workload usage. These views are located in the `pg_catalog` schema. They are designed to give you the information needed to monitor Amazon Redshift Serverless, which requires far less monitoring detail than a provisioned cluster. The SYS system views are designed to work with Amazon Redshift Serverless. To display the information provided by these views, run SQL SELECT statements.
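
For example, the following query lists recent queries from the `SYS_QUERY_HISTORY` monitoring view (the columns shown are typical of this view; adjust them to your needs):

```
SELECT query_id, status, start_time, elapsed_time
FROM sys_query_history
ORDER BY start_time DESC
LIMIT 10;
```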

System views are defined to support the following monitoring objectives.

**Workload monitoring**  
You can monitor your query activities over time to:  
+ Understand workload patterns, so you know what is normal (baseline) and what is within business service level agreements (SLAs).
+ Rapidly identify deviation from normal, which might be a transient issue or something that warrants further action.
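
As a sketch, a query like the following against `SYS_QUERY_HISTORY` can establish an hourly baseline (the `elapsed_time` column in this view is reported in microseconds):

```
SELECT DATE_TRUNC('hour', start_time) AS query_hour,
       COUNT(*)                       AS query_count,
       AVG(elapsed_time) / 1000000.0  AS avg_elapsed_seconds
FROM sys_query_history
WHERE start_time >= DATEADD(day, -7, GETDATE())
GROUP BY 1
ORDER BY 1;
```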

**Data load and unload monitoring**  
Data movement in and out of Amazon Redshift Serverless is a critical function. You use COPY and UNLOAD to load or unload data, and you must monitor progress closely in terms of bytes/rows transferred and files completed to track adherence to business SLAs. This is normally done by running system table queries frequently (that is, every minute) to track progress and raise alerts for investigation/corrective action if significant deviations are detected.
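
As a sketch, a tracking query against the `SYS_LOAD_HISTORY` view might look like the following (the column names here are illustrative of that view; verify them against the view's documentation before relying on them):

```
SELECT query_id, table_name, status,
       loaded_rows, loaded_bytes, source_file_count
FROM sys_load_history
WHERE start_time >= DATEADD(hour, -1, GETDATE())
ORDER BY start_time DESC;
```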

**Failure and problem diagnostics**  
There are cases where you must take action for query or runtime failures. Developers rely on system tables to self-diagnose issues and determine correct remedies.
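
For example, a common first diagnostic step is to list recent failed queries from `SYS_QUERY_HISTORY` (this sketch assumes its `status` and `error_message` columns):

```
SELECT query_id, start_time, error_message,
       LEFT(query_text, 80) AS query_snippet
FROM sys_query_history
WHERE status = 'failed'
  AND start_time >= DATEADD(day, -1, GETDATE())
ORDER BY start_time DESC;
```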

**Performance tuning**  
You might need to tune queries that either fail to meet SLA requirements from the start or have degraded over time. To tune them, you must have runtime details, including the run plan, statistics, duration, and resource consumption. You need baseline data for offending queries to determine the cause of the deviation and to guide how to improve performance.
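
As a starting point, a sketch like the following ranks the slowest recent queries by elapsed time (the time columns in `SYS_QUERY_HISTORY` are reported in microseconds):

```
SELECT query_id,
       elapsed_time   / 1000000.0 AS elapsed_seconds,
       queue_time     / 1000000.0 AS queue_seconds,
       execution_time / 1000000.0 AS execution_seconds,
       returned_rows
FROM sys_query_history
WHERE start_time >= DATEADD(day, -1, GETDATE())
ORDER BY elapsed_time DESC
LIMIT 10;
```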

**User objects event monitoring**  
You need to monitor actions and activities on user objects, such as refreshing materialized views, vacuum, and analyze. This includes system-managed events like auto-refresh for materialized views. You want to monitor when an event ends if it is user initiated, or the last successful run if system initiated.
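
For example, you might check recent vacuum activity with a query like this sketch against the `SYS_VACUUM_HISTORY` view (the column names are illustrative; verify them against the view's documentation):

```
SELECT table_id, vacuum_type, status, start_time, end_time
FROM sys_vacuum_history
WHERE start_time >= DATEADD(day, -1, GETDATE())
ORDER BY start_time DESC;
```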

**Usage tracking for billing**  
You can monitor your usage trends over time to:  
+ Inform budget planning and business expansion estimates.
+ Identify potential cost-saving opportunities like removing cold data.
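
For example, the `SYS_SERVERLESS_USAGE` view records compute usage over time; a sketch like the following sums charged RPU-seconds per day (this assumes the view's `charged_seconds` column):

```
SELECT TRUNC(start_time)             AS usage_day,
       SUM(charged_seconds) / 3600.0 AS charged_rpu_hours
FROM sys_serverless_usage
GROUP BY 1
ORDER BY 1;
```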

Use the SYS system views to monitor Amazon Redshift Serverless. For more information about the SYS monitoring views, see [SYS monitoring views](https://docs.aws.amazon.com//redshift/latest/dg/serverless_views-monitoring.html) in the *Amazon Redshift Database Developer Guide*.

# Adding a query monitoring policy
<a name="serverless-monitor-access"></a>

A superuser can provide access to users who aren't superusers so that they can perform query monitoring for all users. First, you add a policy for a user or a role to provide query monitoring access. Then, you grant query monitoring permission to the user or role. 

**To add the query monitoring policy**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Under **Access management**, choose **Policies**.

1. Choose **Create Policy**.

1. Choose **JSON** and paste the following policy definition.

------
#### [ JSON ]


   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "redshift-data:ExecuteStatement",
           "redshift-data:DescribeStatement",
           "redshift-data:GetStatementResult",
           "redshift-data:ListDatabases"
         ],
         "Resource": "*"
       },
       {
         "Effect": "Allow",
         "Action": "redshift-serverless:GetCredentials",
         "Resource": "*"
       }
     ]
   }
   ```

------

1. Choose **Review policy**.

1. For **Name**, enter a name for the policy, such as `query-monitoring`.

1. Choose **Create policy**.

After you create the policy, you can grant the appropriate permissions.

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

# Granting query monitoring permissions for a user
<a name="serverless-monitor-access-user"></a>

Users with `sys:monitor` permission can view all queries. In addition, users with `sys:operator` permission can cancel queries, analyze query history, and perform vacuum operations.

**To grant query monitoring permission for a user**

1. Enter the following command to provide system monitor access, where *user-name* is the name of the user for whom you want to provide access.

   ```
   grant role sys:monitor to "IAM:user-name";
   ```

1. (Optional) Enter the following command to provide system operator access, where *user-name* is the name of the user for whom you want to provide access.

   ```
   grant role sys:operator to "IAM:user-name";
   ```
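
After running the grants, you can confirm them with a query like the following sketch against the `SVV_USER_GRANTS` view (this assumes its `user_name` and `role_name` columns):

```
SELECT user_name, role_name
FROM svv_user_grants
WHERE role_name IN ('sys:monitor', 'sys:operator');
```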

# Granting query monitoring permissions for a role
<a name="serverless-monitor-access-role"></a>

Users with a role that has `sys:monitor` permission can view all queries. In addition, users with a role that has `sys:operator` permission can cancel queries, analyze query history, and perform vacuum operations.

**To grant query monitoring permission for a role**

1. Enter the following command to provide system monitor access, where *role-name* is the name of the role for which you want to provide access.

   ```
   grant role sys:monitor to "IAMR:role-name";
   ```

1. (Optional) Enter the following command to provide system operator access, where *role-name* is the name of the role for which you want to provide access.

   ```
   grant role sys:operator to "IAMR:role-name";
   ```

# Setting usage limits, including setting RPU limits
<a name="serverless-workgroup-max-rpu"></a>

Under the **Limits** tab for a workgroup, you can add one or more usage limits to control the maximum RPUs you use in a given time period, or to set a data sharing usage limit.

1. Choose **Manage usage limits**. The limits section appears at the bottom of the **Compute usage by period** panel.

1. Choose a **Frequency**, either **Daily**, **Weekly**, or **Monthly**. This sets the time period for the usage limit. Choosing **Daily** gives you the most granular control.

1. Set the usage limit, in RPU hours.

1. Set the action. The available actions are the following:
   + **Log to system table** - Adds a record to the system view [SYS\_QUERY\_HISTORY](https://docs.aws.amazon.com/redshift/latest/dg/SYS_QUERY_HISTORY.html). You can query the `usage_limit` column in this view to determine if a query exceeded the limit.
   + **Alert** - Uses Amazon SNS to set up notification subscriptions and send notifications if a limit is breached. You can choose an existing Amazon SNS topic or create a new one.
   + **Turn off user queries** - Disables queries to stop use of Amazon Redshift Serverless. It also sends a notification.

   The first two actions are informational, but the last turns off query processing.

1. Optionally, you can set a **Cross-Region data sharing usage limit**, which limits how much data consumers can query from the producer Region. To do this, choose **Add limit** and follow the steps.

1. Choose **Save changes** at the bottom of the page to save the limit.

1. Set up to three more limits, as necessary.
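
If you chose the **Log to system table** action, you can later find the affected queries with a query like the following (the `usage_limit` column is described above; the `IS NOT NULL` filter is an assumption about how unaffected rows appear):

```
SELECT query_id, start_time, usage_limit
FROM sys_query_history
WHERE usage_limit IS NOT NULL
ORDER BY start_time DESC;
```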

For more conceptual information about RPUs and billing, see [Billing for Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-billing.html).

# Setting query limits
<a name="serverless-workgroup-query-limits"></a>

Under the **Limits** tab for a workgroup, you can add limits to monitor and control query performance. For more information about query monitoring limits, see [WLM query monitoring rules](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html).

1. Choose **Manage query limits**. Then choose **Add new limit** in the **Manage query limits** dialog.

1. Choose the limit type you want to set and enter a value for its corresponding limit.

1. Choose **Save changes** to save the limit.

When you change your query limit and configuration parameters, your database will restart.

# Setting query queues
<a name="serverless-workgroup-query-queues"></a>

Amazon Redshift Serverless supports queue-based query resource management. You can create dedicated query queues with customized monitoring rules for different workloads. This feature provides granular control over resource usage.

Query monitoring rules (QMR) apply only at the Amazon Redshift Serverless workgroup level, affecting all queries run in the workgroup uniformly. The queue-based approach lets you create queues with distinct monitoring rules and assign them to specific user roles and query groups. Each queue operates independently, with rules affecting only the queries within that queue.

Queues let you set metrics-based predicates and automated responses. For example, you can configure rules to automatically abort queries that exceed time limits or consume too many resources.

## Considerations
<a name="serverless-workgroup-query-queues-considerations"></a>

Consider the following when using serverless queues:
+ The following Workload Management (WLM) configuration keys used in Amazon Redshift provisioned clusters are not supported in Redshift Serverless queues: `max_execution_time`, `short_query_queue`, `auto_wlm`, `concurrency_scaling`, `priority`, `queue_type`, `query_concurrency`, `memory_percent_to_use`, `user_group`, `user_group_wild_card`.

  Additionally, the `hop` action (moving a query between queues) and the `change_query_priority` action are not supported in Amazon Redshift Serverless.
+ Queue priorities are supported only for Amazon Redshift provisioned clusters.
+ Amazon Redshift Serverless automatically manages scaling and resource allocation for optimal performance, so you don't need to manually configure queue priorities.

## Setting up query queues
<a name="serverless-workgroup-query-queues-setup"></a>

You can create queues under the Limits tab for a serverless workgroup using the AWS Management Console, AWS CLI, or Redshift Serverless API.

------
#### [ Console ]

Follow these steps to create a queue for your serverless workgroup.

1. Navigate to your Redshift Serverless workgroup.

1. Select the Limits tab.

1. Under **Query Queues**, choose **Enable Queues**.
**Important**  
Enabling query queues is a permanent change. You cannot revert to queue-less monitoring once enabled.

1. Configure your queues using the following parameters:

   **Queue level parameters**
   + `name` - Queue identifier (required, unique, non-empty)
   + `user_role` - Array of user roles (optional)
   + `query_group` - Array of query groups (optional)
   + `query_group_wild_card` - 0 or 1 to enable wildcard matching (optional)
   + `user_group_wild_card` - 0 or 1 to enable wildcard matching (optional)
   + `rules` - Array of monitoring rules (optional)

   **Rule level parameters**
   + `rule_name` - Unique identifier, max 32 chars (required)
   + `predicate` - Array of conditions, 1-3 predicates (required)
   + `action` - "abort" or "log" (required)

   **Predicate level parameters**
   + `metric_name` - Metric to monitor (required)
   + `operator` - "=", "<", or ">" (required)
   + `value` - Numeric threshold (required)

   **Limits**
   + Max 8 queues
   + Max 25 rules across all queues
   + Max 3 predicates per rule
   + Rule names must be unique globally

**Example Configuration**

The following example defines three queues: one for dashboard queries with a short timeout, one for ETL queries with a long timeout, and an admin queue:

```
[
  {
    "name": "dashboard",
    "user_role": ["analyst", "viewer"],
    "query_group": ["reporting"],
    "query_group_wild_card": 1,
    "rules": [
      {
        "rule_name": "short_timeout",
        "predicate": [
          {
            "metric_name": "query_execution_time",
            "operator": ">",
            "value": 60
          }
        ],
        "action": "abort"
      }
    ]
  },
  {
    "name": "ETL",
    "user_role": ["data_scientist"],
    "query_group": ["analytics", "ml"],
    "rules": [
      {
        "rule_name": "long_timeout",
        "predicate": [
          {
            "metric_name": "query_execution_time",
            "operator": ">",
            "value": 3600
          }
        ],
        "action": "log"
      },
      {
        "rule_name": "memory_limit",
        "predicate": [
          {
            "metric_name": "query_temp_blocks_to_disk",
            "operator": ">",
            "value": 100000
          }
        ],
        "action": "abort"
      }
    ]
  },
  {
    "name": "admin_queue",
    "user_role": ["admin"],
    "query_group": ["admin"]
  }
]
```

In this example:
+ Dashboard queries are aborted if they run for more than 60 seconds
+ ETL queries are logged if they run for more than an hour
+ The admin queue has no resource limits

------
#### [ CLI ]

You can manage queues using the `CreateWorkgroup` or `UpdateWorkgroup` API operations with the `wlm_json_configuration` configuration parameter, which specifies queues in JSON format.

```
aws redshift-serverless create-workgroup \
  --workgroup-name test-workgroup \
  --namespace-name test-namespace \
  --config-parameters '[{"parameterKey": "wlm_json_configuration", "parameterValue": "[{\"name\":\"dashboard\",\"user_role\":[\"analyst\",\"viewer\"],\"query_group\":[\"reporting\"],\"query_group_wild_card\":1,\"rules\":[{\"rule_name\":\"short_timeout\",\"predicate\":[{\"metric_name\":\"query_execution_time\",\"operator\":\">\",\"value\":60}],\"action\":\"abort\"}]},{\"name\":\"ETL\",\"user_role\":[\"data_scientist\"],\"query_group\":[\"analytics\",\"ml\"],\"rules\":[{\"rule_name\":\"long_timeout\",\"predicate\":[{\"metric_name\":\"query_execution_time\",\"operator\":\">\",\"value\":3600}],\"action\":\"log\"},{\"rule_name\":\"memory_limit\",\"predicate\":[{\"metric_name\":\"query_temp_blocks_to_disk\",\"operator\":\">\",\"value\":100000}],\"action\":\"abort\"}]},{\"name\":\"admin_queue\",\"user_role\":[\"admin\"],\"query_group\":[\"admin\"]}]"}]'
```

------

## Best practices
<a name="serverless-workgroup-query-queues-best-practices"></a>

Keep the following best practices in mind when you use serverless queues.
+ Use separate queues for workloads with distinct limit requirements (e.g., ETL, reporting, or ad-hoc analysis).
+ Start with simple thresholds and adjust based on query behavior and usage patterns. You can monitor query usage patterns using the tables and views documented in [System tables and views for query monitoring rules](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html#cm-c-wlm-qmr-tables-and-views).
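
For example, before tightening an `abort` threshold, you might review how many recent queries would have crossed it. This sketch counts queries over a candidate 60-second limit using `SYS_QUERY_HISTORY` (times in this view are in microseconds):

```
SELECT COUNT(*) AS queries_over_60s
FROM sys_query_history
WHERE elapsed_time > 60 * 1000000
  AND start_time >= DATEADD(day, -7, GETDATE());
```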

# Checking Amazon Redshift Serverless summary data using the dashboard
<a name="serverless-dashboard"></a>

The Amazon Redshift Serverless dashboard contains a collection of panels that show at-a-glance metrics and information about your workgroup and namespace. These panels include the following: 
+ **Resources summary** - Displays high-level information about Amazon Redshift Serverless, such as the storage used and other metrics.
+ **Query summary** - Displays information about queries, including completed queries and running queries. Choose **View details** to go to a screen that has additional filters.
+ **RPU capacity used** - Displays the overall capacity used over a given time period, such as the previous 10 hours.
+ **Datashares** - Shows the count of datashares, which are used to share data between, for example, AWS accounts. The metrics show which datashares require authorization, among other information.
+ **Total compute usage** - Shows your total consumed RPU hours for the selected workgroup over a selected time range, up to the last 7 days.

From the dashboard, you can drill into the available metrics to check details about Amazon Redshift Serverless, review queries, or track work items.