

# Migrating management features to Aurora MySQL
<a name="chap-sql-server-aurora-mysql.management"></a>

This chapter provides reference information for migrating databases that use management features from Microsoft SQL Server 2019 to Amazon Aurora MySQL. You can gain insights into how these two database systems handle tasks such as job scheduling, event notifications, email functionality, ETL processes, logging, maintenance, monitoring, resource management, linked servers, and scripting.

**Topics**
+ [SQL Server Agent and MySQL Agent](chap-sql-server-aurora-mysql.management.agent.md)
+ [Alerting features](chap-sql-server-aurora-mysql.management.alerting.md)
+ [Database mail features](chap-sql-server-aurora-mysql.management.databasemail.md)
+ [ETL features](chap-sql-server-aurora-mysql.management.etl.md)
+ [Viewing server logs](chap-sql-server-aurora-mysql.management.serverlogs.md)
+ [Maintenance plans](chap-sql-server-aurora-mysql.management.maintenanceplans.md)
+ [Monitoring features](chap-sql-server-aurora-mysql.management.monitoring.md)
+ [Resource governor features](chap-sql-server-aurora-mysql.management.resourcegovernor.md)
+ [Linked servers](chap-sql-server-aurora-mysql.management.linkedservers.md)
+ [Scripting features](chap-sql-server-aurora-mysql.management.scripting.md)

# SQL Server Agent and MySQL Agent
<a name="chap-sql-server-aurora-mysql.management.agent"></a>

This topic provides reference information about the differences between SQL Server Agent functionality in Microsoft SQL Server 2019 and comparable features in Amazon Aurora MySQL. You can understand the limitations and alternatives available when migrating from SQL Server to Aurora MySQL, particularly regarding scheduling, automation, and alerting capabilities.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[No feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-0.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)   |   [SQL Server Agent](chap-sql-server-aurora-mysql.tools.actioncode.md#chap-sql-server-aurora-mysql.tools.actioncode.agent)   |  For more information, see [Alerting](chap-sql-server-aurora-mysql.management.alerting.md) and [Maintenance Plans](chap-sql-server-aurora-mysql.management.maintenanceplans.md).  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.agent.sqlserver"></a>

SQL Server Agent provides two main functions: scheduling automated maintenance and backup jobs, and alerting.

**Note**  
Other SQL Server built-in frameworks, such as replication, also use SQL Server Agent jobs behind the scenes.

Maintenance plans, backups, and alerting are covered in separate sections.

For more information, see [SQL Server Agent](https://docs.microsoft.com/en-us/sql/ssms/agent/sql-server-agent?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.agent.mysql"></a>

Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) provides a native, in-database scheduler. It is limited to the cluster scope and can’t be used to manage multiple clusters. There are no native alerting capabilities in Aurora MySQL similar to SQL Server Agent alerts.

Although Amazon Relational Database Service (Amazon RDS) doesn’t currently provide an external scheduling agent like SQL Server Agent, CloudWatch Events provides the ability to specify a cron-like schedule to run Lambda functions. This approach requires writing custom code in C#, Node.js, Java, or Python. Additionally, any task that runs longer than five minutes will not work because of the AWS Lambda timeout limit. For example, this limit may pose a challenge for index rebuild operations. Other options include:

1. Running a SQL Server instance for the sole purpose of using SQL Server Agent.

1. Using a t2 instance or a container to schedule your code (C#, Node.js, Java, Python) with cron. A t2.nano is simple to deploy and can run tasks indefinitely at a very modest cost. For most scheduling applications, the limited resources shouldn’t be an issue.

### Aurora MySQL Database Events
<a name="chap-sql-server-aurora-mysql.management.agent.mysql.databaseevents"></a>

 Aurora MySQL also provides a native, in-database scheduling framework that can be used to trigger scheduled operations including maintenance tasks.

Events are run by a dedicated thread, which can be seen in the process list. The global `event_scheduler` parameter must be turned on explicitly from its default state of `OFF` for the event thread to run. Event errors are written to the error log. Event metadata can be viewed using the `INFORMATION_SCHEMA.EVENTS` view.
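
The scheduler state and event metadata can be checked with the following statements. This is a minimal sketch; in Aurora MySQL, the `event_scheduler` parameter is typically set in the DB cluster parameter group rather than with `SET GLOBAL`.

```
-- Check whether the event scheduler thread is running.
SHOW VARIABLES LIKE 'event_scheduler';

-- In a self-managed MySQL instance, the scheduler can be turned on directly.
-- In Aurora MySQL, set event_scheduler = ON in the DB cluster
-- parameter group instead.
SET GLOBAL event_scheduler = ON;

-- View metadata for all scheduled events.
SELECT EVENT_SCHEMA, EVENT_NAME, STATUS, LAST_EXECUTED
FROM INFORMATION_SCHEMA.EVENTS;
```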

### Syntax
<a name="chap-sql-server-aurora-mysql.management.agent.mysql.syntax"></a>

```
CREATE EVENT <Event Name>
    ON SCHEDULE <Schedule>
    [ON COMPLETION [NOT] PRESERVE][ENABLE | DISABLE | DISABLE ON SLAVE]
    [COMMENT 'string']
    DO <Event Body>;

<Schedule>:
    AT <Time Stamp> [+ INTERVAL <Interval>] ...
    | EVERY <Interval>
    [STARTS <Time Stamp> [+ INTERVAL <Interval>] ...]
    [ENDS <Time Stamp> [+ INTERVAL <Interval>] ...]

<Interval>:
    quantity {YEAR | QUARTER | MONTH | DAY | HOUR | MINUTE |
        WEEK | SECOND | YEAR_MONTH | DAY_HOUR | DAY_MINUTE |
        DAY_SECOND | HOUR_MINUTE | HOUR_SECOND | MINUTE_SECOND}
```

### Examples
<a name="chap-sql-server-aurora-mysql.management.agent.mysql.examples"></a>

Create an event to collect login data statistics that runs once five hours after creation.

```
CREATE EVENT Update_T1_In_5_Hours
    ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 5 HOUR
    DO
        INSERT INTO LoginStatistics
        SELECT UserID,
            COUNT(*) AS LoginAttempts
        FROM Logins AS L
        WHERE LoginData = '20180502'
        GROUP BY UserID;
```

Create an event that runs every four hours and deletes session information older than four hours.

```
CREATE EVENT Clear_Old_Sessions
    ON SCHEDULE
        EVERY 4 HOUR
    DO
        DELETE FROM Sessions
        WHERE LastCommandTime < CURRENT_TIMESTAMP - INTERVAL 4 HOUR;
```

Schedule weekly index rebuilds and pass parameters.

```
CREATE EVENT Rebuild_Indexes
    ON SCHEDULE
        EVERY 1 WEEK
    DO
        CALL IndexRebuildProcedure(1, 80);
```
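
Events created this way can be suspended, resumed, or removed with `ALTER EVENT` and `DROP EVENT`. The following statements, which reuse the `Rebuild_Indexes` event from the previous example, are a sketch:

```
-- Temporarily suspend the weekly rebuild without losing its definition.
ALTER EVENT Rebuild_Indexes DISABLE;

-- Resume the schedule.
ALTER EVENT Rebuild_Indexes ENABLE;

-- Remove the event entirely; IF EXISTS avoids an error
-- if the event was already dropped.
DROP EVENT IF EXISTS Rebuild_Indexes;
```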

## Summary
<a name="chap-sql-server-aurora-mysql.management.agent.summary"></a>

For more information, see [CREATE EVENT Statement](https://dev.mysql.com/doc/refman/5.7/en/create-event.html) and [Event Scheduler Configuration](https://dev.mysql.com/doc/refman/5.7/en/events-configuration.html) in the *MySQL documentation*; [Amazon CloudWatch](https://aws.amazon.com/cloudwatch) and [AWS Lambda](https://aws.amazon.com/lambda).

# Alerting features
<a name="chap-sql-server-aurora-mysql.management.alerting"></a>

This topic provides reference information about event notifications and alerts in SQL Server and Amazon Aurora MySQL. You can use event notifications to monitor and respond to various database events, performance conditions, and system changes.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[One star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-1.png)   |  N/A  |  N/A  |  Use event notifications subscription with Amazon SNS. For more information, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) and [Amazon Simple Notification Service](https://aws.amazon.com/sns).  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.alerting.sqlserver"></a>

SQL Server provides SQL Server Agent to generate alerts. When running, SQL Server Agent constantly monitors SQL Server Windows application log messages, performance counters, and Windows Management Instrumentation (WMI) objects. When a new error event is detected, the agent checks the MSDB database for configured alerts and runs the specified action.

You can define SQL Server Agent alerts for the following categories:
+ SQL Server events
+ SQL Server performance conditions
+ WMI events

For SQL Server events, the alert options include the following settings:
+  **Error number** — Alert when a specific error is logged.
+  **Severity level** — Alert when any error in the specified severity level is logged.
+  **Database** — Filter the database list for which the event will generate an alert.
+  **Event text** — Filter specific text in the event message.

**Note**  
SQL Server Agent is preconfigured with several high severity alerts. We highly recommend that you enable these alerts.

To generate an alert in response to a specific performance condition, specify the performance counter to be monitored, the threshold values for the alert, and the predicate for the alert to occur. The following list identifies the performance alert settings:
+  **Object** — The performance counter *category* or the monitoring area of performance.
+  **Counter** — A counter is a specific attribute value of the object.
+  **Instance** — Filter by SQL Server instance (multiple instances can share logs).
+  **Alert if counter and Value** — The threshold for the alert and the predicate. The threshold is a number. Predicates are *falls below*, *becomes equal to*, or *rises above* the threshold.

WMI events require the WMI namespace and the WMI Query Language (WQL) query for specific events.

Alerts can be assigned to specific operators with schedule limitations and multiple response types including:
+ Run an SQL Server Agent job.
+ Send email, net send command, or a pager notification.

You can configure alerts and responses with SQL Server Management Studio or with a set of system stored procedures.

### Examples
<a name="chap-sql-server-aurora-mysql.management.alerting.sqlserver.examples"></a>

Configure an alert for all errors with severity 20.

```
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 20 Error Alert',
    @severity = 20,
    @notification_message = N'A severity 20 Error has occurred. Initiating emergency procedure',
    @job_name = N'Error 20 emergency response';
```
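
To have the alert notify a person in addition to running a job, associate it with an operator. The operator name and email address below are hypothetical placeholders:

```
-- Create an operator to receive notifications.
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba-team@MyDomain.com';

-- Send an email notification to the operator when the alert fires
-- (@notification_method = 1 specifies email).
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 20 Error Alert',
    @operator_name = N'DBA Team',
    @notification_method = 1;
```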

For more information, see [Alerts](https://docs.microsoft.com/en-us/sql/ssms/agent/alerts?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.alerting.mysql"></a>

 Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) doesn’t support direct configuration of engine alerts.

Use the event notifications infrastructure to collect history logs or receive event notifications in near real-time.

 Amazon Relational Database Service (Amazon RDS) uses Amazon Simple Notification Service (Amazon SNS) to provide notifications for events. SNS can send notifications in any form supported by the region including email, text messages, or calls to HTTP endpoints for response automation.

Events are grouped into categories. You can only subscribe to event categories, not individual events. SNS sends notifications when any event in a category occurs.

You can subscribe to alerts for database instances, database clusters, database snapshots, database cluster snapshots, database security groups, and database parameter groups. For example, a subscription to the backup category for a specific database instance sends notifications when backup related events occur on that instance.

A subscription to a configuration change category for a database security group sends notifications when the security group changes.

**Note**  
For Amazon Aurora, some events occur at the cluster rather than instance level. You will not receive those events if you subscribe to an Aurora DB instance.

SNS sends event notifications to the address specified when the subscription was created. Typically, administrators create several subscriptions. For example, one subscription to receive logging events and another to receive only critical events for a production environment requiring immediate responses.

You can turn off notifications without deleting a subscription by setting the `Enabled` radio button to `No` in the Amazon RDS console. Alternatively, use the Command Line Interface (CLI) or Amazon RDS API to change the `Enabled` setting.

Subscriptions are identified by the Amazon Resource Name (ARN) of an Amazon SNS topic. The Amazon RDS console creates ARNs when subscriptions are created. When using the CLI or API, make sure that you create the ARN using the Amazon SNS console or the Amazon SNS API.

### Examples
<a name="chap-sql-server-aurora-mysql.management.alerting.mysql.examples"></a>

The following walkthrough demonstrates how to create an event notification subscription:

1. Sign in to your AWS account, and choose **RDS**.

1. Choose **Events** in the left navigation pane. This screen presents the relevant Amazon RDS events that occurred.

1. Choose **Event subscriptions** and then choose **Create event subscription**.

1. Enter the **Name of the subscription** and select a **Target of ARN** or **Email**. For email subscriptions, enter values for **Topic** name and **With these recipients**.

1. Select the event source, choose specific event categories to be monitored, and choose **Create**.

1. On the Amazon RDS dashboard, choose **Recent events**.

For more information, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the *Amazon Relational Database Service User Guide*.

# Database mail features
<a name="chap-sql-server-aurora-mysql.management.databasemail"></a>

This topic provides reference information about migrating the Database Mail feature from Microsoft SQL Server 2019 to Amazon Aurora MySQL. You can understand the key differences in email functionality between these two database systems and learn about alternative approaches for sending emails from Aurora MySQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[One star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-1.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)   |   [SQL Server Database Mail](chap-sql-server-aurora-mysql.tools.actioncode.md#chap-sql-server-aurora-mysql.tools.actioncode.mail)   |  Use AWS Lambda integration. For more information, see [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html).  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.databasemail.sqlserver"></a>

The Database Mail framework is an email client solution for sending messages directly from SQL Server. Email capabilities and APIs within the database server provide easy management of the following messages:
+ Server administration messages such as alerts, logs, status reports, and process confirmations.
+ Application messages such as user registration confirmation and action verifications.

**Note**  
Database Mail is turned off by default.

The main features of the Database Mail framework are:
+ Database Mail sends messages using the standard and secure Simple Mail Transfer Protocol (SMTP).
+ The email client engine runs asynchronously and sends messages in a separate process to minimize dependencies.
+ Database Mail supports multiple SMTP Servers for redundancy.
+ Full support and awareness of Windows Server Failover Cluster for high availability environments.
+ Multi-profile support with multiple failover accounts in each profile.
+ Enhanced security management with separate roles in MSDB.
+ Security is enforced for mail profiles.
+ Attachment sizes are monitored and can be capped by the administrator.
+ Attachment file types can be added to the deny list.
+ Email activity can be logged to SQL Server, the Windows application event log, and to a set of system tables in MSDB.
+ Supports full auditing capabilities with configurable retention policies.
+ Supports both plain text and HTML messages.

### Architecture
<a name="chap-sql-server-aurora-mysql.management.databasemail.sqlserver.architecture"></a>

Database Mail is built on top of the Microsoft SQL Server Service Broker queue management framework.

The system stored procedure `sp_send_dbmail` sends email messages. When this stored procedure runs, it inserts a row into the mail queue and records the email message.

The queue insert operation triggers the run of the Database Mail process (DatabaseMail.exe). The Database Mail process then reads the email information and sends the message to the SMTP servers.

When the SMTP servers acknowledge or reject the message, the Database Mail process inserts a status row into the status queue, including the result of the send attempt. This insert operation triggers the run of a system stored procedure that updates the status of the email message send attempt.

Database Mail records all email attachments in the system tables. SQL Server provides a set of system views and stored procedures for troubleshooting and administration of the Database Mail queue.

### Deprecated SQL Mail Framework
<a name="chap-sql-server-aurora-mysql.management.databasemail.sqlserver.deprecated"></a>

The old SQL Mail framework using `xp_sendmail` has been deprecated as of SQL Server 2008 R2. For more information, see [Deprecated Database Engine Features in SQL Server 2008 R2](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms143729(v=sql.105)) in the *SQL Server documentation*.

The legacy mail system has been completely replaced by the greatly enhanced Database Mail framework described here. The old system has been out of use for many years because it was prone to synchronous run issues and Windows mail profile quirks.

### Syntax
<a name="chap-sql-server-aurora-mysql.management.databasemail.sqlserver.syntax"></a>

```
EXECUTE sp_send_dbmail
    [[@profile_name =] '<Profile Name>']
    [,[@recipients =] '<Recipients>']
    [,[@copy_recipients =] '<CC Recipients>']
    [,[@blind_copy_recipients =] '<BCC Recipients>']
    [,[@from_address =] '<From Address>']
    [,[@reply_to =] '<Reply-to Address>']
    [,[@subject =] '<Subject>']
    [,[@body =] '<Message Body>']
    [,[@body_format =] '<Message Body Format>']
    [,[@importance =] '<Importance>']
    [,[@sensitivity =] '<Sensitivity>']
    [,[@file_attachments =] '<Attachments>']
    [,[@query =] '<SQL Query>']
    [,[@execute_query_database =] '<Execute Query Database>']
    [,[@attach_query_result_as_file =] <Attach Query Result as File>]
    [,[@query_attachment_filename =] <Query Attachment Filename>]
    [,[@query_result_header =] <Query Result Header>]
    [,[@query_result_width =] <Query Result Width>]
    [,[@query_result_separator =] '<Query Result Separator>']
    [,[@exclude_query_output =] <Exclude Query Output>]
    [,[@append_query_error =] <Append Query Error>]
    [,[@query_no_truncate =] <Query No Truncate>]
    [,[@query_result_no_padding =] <Query Result No Padding>]
    [,[@mailitem_id =] <Mail Item ID> OUTPUT]
```

### Examples
<a name="chap-sql-server-aurora-mysql.management.databasemail.sqlserver.examples"></a>

Create a Database Mail account.

```
EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = 'MailAccount1',
    @description = 'Mail account for testing DB Mail',
    @email_address = 'Address@MyDomain.com',
    @replyto_address = 'ReplyAddress@MyDomain.com',
    @display_name = 'Mailer for registration messages',
    @mailserver_name = 'smtp.MyDomain.com' ;
```

Create a Database Mail profile.

```
EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'MailAccount1 Profile',
    @description = 'Mail Profile for testing DB Mail' ;
```

Associate the account with the profile.

```
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'MailAccount1 Profile',
    @account_name = 'MailAccount1',
    @sequence_number = 1;
```

Grant the profile access to the `DBMailUsers` role.

```
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = 'MailAccount1 Profile',
    @principal_name = 'ApplicationUser',
    @is_default = 1 ;
```

Send a message with `sp_send_dbmail`.

```
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MailAccount1 Profile',
    @recipients = 'Recipient@Mydomain.com',
    @query = 'SELECT * FROM fn_WeeklySalesReport(GETDATE())',
    @subject = 'Weekly Sales Report',
    @attach_query_result_as_file = 1 ;
```

For more information, see [Database Mail](https://docs.microsoft.com/en-us/sql/relational-databases/database-mail/database-mail?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.databasemail.mysql"></a>

Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) doesn’t provide native support for sending mail from the database.

For alerting purposes, use the event notification subscription feature to send email notifications to operators. For more information, see [Alerting](chap-sql-server-aurora-mysql.management.alerting.md).

For application email requirements, consider using a dedicated email framework. If the code generating email messages must be in the database, consider using a queue table. Replace all occurrences of `sp_send_dbmail` with an `INSERT` into the queue table. Design external applications to periodically connect, read the queue, send the email messages, and then update the status. With this approach, messages can be populated with a query result, similar to using `sp_send_dbmail` with the query option.
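
As an illustration of the queue-table pattern, the following hypothetical table and `INSERT` could stand in for a call to `sp_send_dbmail`; the table and column names are assumptions, not a prescribed schema:

```
CREATE TABLE MailQueue
(
    MailID     BIGINT AUTO_INCREMENT PRIMARY KEY,
    Recipients VARCHAR(1000) NOT NULL,
    Subject    VARCHAR(255),
    Body       TEXT,
    Status     VARCHAR(10) NOT NULL DEFAULT 'Pending',  -- Pending / Sent / Failed
    CreatedAt  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Replaces EXEC sp_send_dbmail ... in migrated code; an external application
-- polls for Status = 'Pending', sends the message, and updates Status.
INSERT INTO MailQueue (Recipients, Subject, Body)
VALUES ('Recipient@Mydomain.com', 'Weekly Sales Report', 'Report body text');
```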

The only way to send email directly from the database is to use the AWS Lambda integration.

For more information, see [AWS Lambda](https://aws.amazon.com/lambda).

### Examples
<a name="chap-sql-server-aurora-mysql.management.databasemail.mysql.examples"></a>

You can send emails from Aurora MySQL using AWS Lambda integration. For more information, see [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html).
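
For example, assuming a cluster that is associated with an IAM role permitting Lambda invocation and a hypothetical `SendEmailFunction` Lambda function, a statement can queue a message with the native `lambda_async` function:

```
-- The function ARN and payload fields are hypothetical; the cluster must be
-- associated with an IAM role that allows invoking the function, and the
-- invoking user needs the appropriate Lambda invocation privilege.
SELECT lambda_async(
    'arn:aws:lambda:us-east-1:123456789012:function:SendEmailFunction',
    JSON_OBJECT(
        'to', 'Recipient@Mydomain.com',
        'subject', 'Registration confirmed',
        'body', 'Thank you for registering.')
);
```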

# ETL features
<a name="chap-sql-server-aurora-mysql.management.etl"></a>

This topic provides reference content comparing SQL Server’s ETL capabilities with those of Amazon Aurora MySQL. You can understand the evolution of SQL Server’s ETL tools from DTS to SSIS, and learn about AWS Glue as the recommended ETL solution for Aurora MySQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[One star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-1.png)   |  N/A  |  N/A  |  Use AWS Glue for ETL.  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.etl.sqlserver"></a>

SQL Server offers a native Extract, Transform, and Load (ETL) framework of tools and services to support enterprise ETL requirements. The legacy Data Transformation Services (DTS) has been deprecated as of SQL Server 2008 and replaced with SQL Server Integration Services (SSIS), which was introduced with SQL Server 2005. For more information, see [Deprecated Database Engine Features in SQL Server 2008 R2](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms143729(v=sql.105)) in the *SQL Server documentation*.

### DTS
<a name="chap-sql-server-aurora-mysql.management.etl.sqlserver.dts"></a>

DTS was introduced in SQL Server version 7 in 1998. It was significantly expanded in SQL Server 2000 with features such as FTP, database level operations, and Microsoft Message Queuing (MSMQ) integration. It included a set of objects, utilities, and services that enabled easy, visual construction of complex ETL operations across heterogeneous data sources and targets.

DTS supported OLE DB, ODBC, and text file drivers. It allowed transformations to be scheduled using SQL Server Agent. DTS also provided version control and backup capabilities with version control systems such as Microsoft Visual SourceSafe.

The fundamental entity in DTS was the DTS Package. Packages were the logical containers for DTS objects such as connections, data transfers, transformations, and notifications. The DTS framework also included the following tools:
+ DTS Wizards
+ DTS Package Designers
+ DTS Query Designer
+ DTS Run Utility

### SSIS
<a name="chap-sql-server-aurora-mysql.management.etl.sqlserver.ssis"></a>

The SSIS framework was introduced in SQL Server 2005, but was limited to the top-tier editions only, unlike DTS which was available with all editions.

SSIS has evolved over DTS to offer a true modern, enterprise class, heterogeneous platform for a broad range of data migration and processing tasks. It provides a rich workflow oriented design with features for all types of enterprise data warehousing. It also supports scheduling capabilities for multi-dimensional cubes management.

SSIS provides the following tools:
+ SSIS Import/Export Wizard is an SQL Server Management Studio extension that enables quick creation of packages for moving data between a wide array of sources and destinations. However, it has limited transformation capabilities.
+ SQL Server Business Intelligence Development Studio (BIDS) is a developer tool for creating complex packages and transformations. It provides the ability to integrate procedural code into package transformations and provides a scripting environment. Recently, BIDS has been replaced by SQL Server Data Tools — Business Intelligence (SSDT-BI).

SSIS objects include:
+ Connections
+ Event handlers
+ Workflows
+ Error handlers
+ Parameters (Beginning with SQL Server 2012)
+ Precedence constraints
+ Tasks
+ Variables

SSIS packages are constructed as XML documents and can be saved to the file system or stored within a SQL Server instance using a hierarchical name space.

For more information, see [SQL Server Integration Services](https://docs.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver15) in the *SQL Server documentation* and [Data Transformation Services](https://en.wikipedia.org/wiki/Data_Transformation_Services) on *Wikipedia*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.etl.mysql"></a>

Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) provides AWS Glue for enterprise class Extract, Transform, and Load (ETL). It is a fully managed service that performs data cataloging, cleansing, enriching, and movement between heterogeneous data sources and destinations. Because the service is fully managed, you don’t need to be concerned with infrastructure management.

 AWS Glue key features include the following.

### Integrated Data Catalog
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.datacatalog"></a>

The AWS Glue Data Catalog is a persistent metadata store that can be used to store all data assets, whether in the cloud or on-premises. It stores table schemas, job steps, and additional metadata for managing these processes. AWS Glue can automatically calculate statistics and register partitions to make queries more efficient. It maintains a comprehensive schema version history for tracking changes over time.

### Automatic Schema Discovery
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.schemadiscovery"></a>

AWS Glue provides automatic crawlers that can connect to source or target data providers. The crawler uses a prioritized list of classifiers to determine the schema for your data and then generates and stores the metadata in the AWS Glue Data Catalog. You can schedule crawlers or run them on demand. You can also trigger a crawler when an event occurs to keep metadata current.

### Code Generation
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.codegeneration"></a>

 AWS Glue automatically generates the code to extract, transform, and load data. All you need to do is point Glue to your data source and target. The ETL scripts to transform, flatten, and enrich data are created automatically. AWS Glue scripts can be generated in Scala or Python and are written for Apache Spark.

### Developer Endpoints
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.developerendpoints"></a>

When interactively developing Glue ETL code, AWS Glue provides development endpoints for editing, debugging, and testing. You can use any IDE or text editor for ETL development. Custom readers, writers, and transformations can be imported into Glue ETL jobs as libraries. You can also use and share code with other developers in the AWS Glue GitHub repository. For more information, see [this repository](https://github.com/awslabs/aws-glue-libs) on *GitHub*.

### Flexible Job Scheduler
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.jobscheduler"></a>

AWS Glue jobs can be triggered to run on a predefined schedule, on demand, or in response to an event.

Multiple jobs can be started in parallel and dependencies can be explicitly defined across jobs to build complex ETL pipelines. Glue handles all inter-job dependencies, filters bad data, and retries failed jobs. All logs and notifications are pushed to Amazon CloudWatch; you can monitor and get alerts from a central service.

### Migration Considerations
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.considerations"></a>

Currently, there are no automatic tools for migrating ETL packages from DTS or SSIS into AWS Glue. Migration from SQL Server to Aurora MySQL requires rewriting ETL processes to use AWS Glue.

Alternatively, consider running SQL Server on an Amazon EC2 instance to host the SSIS service as an interim solution. The connectors and tasks must be revised to support Aurora MySQL instead of SQL Server, but this approach allows gradual migration to AWS Glue.

### Examples
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.examples"></a>

The following walkthrough describes how to create an AWS Glue job to upload a comma-separated values (CSV) file from Amazon S3 to Aurora MySQL.

The source file for this walkthrough is a simple Visits table in CSV format. The objective is to upload this file to an Amazon S3 bucket and create an AWS Glue job to discover and copy it into an Aurora MySQL database.

#### Step 1 — Create a Bucket in Amazon S3 and Upload the CSV File
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.examples.step1"></a>

1. In the AWS console, choose **S3**, and then choose **Create bucket**.
**Note**  
This walkthrough demonstrates how to create the bucket and upload the file manually; for production ETL processes, these steps are typically automated using the Amazon S3 API. Walking through the settings manually in the console will help you get familiar with the terminology, concepts, and workflow.

1. Enter a unique name for the bucket, select a region, and define the level of access.

1. Turn on versioning, add tags, turn on server-side encryption, and choose **Create bucket**.

1. On the Amazon S3 Management Console, choose the newly created bucket.

1. On the bucket page, choose **Upload**.

1. Choose **Add files**, select your CSV file, and choose **Upload**.

#### Step 2 — Add an Amazon Glue Crawler to Discover and Catalog the Visits File
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.examples.step2"></a>

1. In the AWS console, choose **AWS Glue**.

1. Choose **Tables**, and then choose **Add tables using a crawler**.

1. Enter the name of the crawler and choose **Next**.

1. On the **Specify crawler source type** page, leave the default values, and choose **Next**.

1. On the **Add a data store** page, specify a valid Amazon S3 path, and choose **Next**.

1. On the **Choose an IAM role** page, choose an existing IAM role, or create a new IAM role. Choose **Next**.

1. On the **Create a schedule for this crawler** page, choose **Run on demand**, and choose **Next**.

1. On the **Configure the crawler’s output** page, choose a database for the crawler’s output, enter an optional table prefix for easy reference, and choose **Next**.

1. Review the information that you provided and choose **Finish** to create the crawler.

    ![\[Crawler\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-crawler.png) 

#### Step 3 — Run the AWS Glue Crawler
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.examples.step3"></a>

1. In the AWS console, choose **AWS Glue**, and then choose **Crawlers**.

1. Choose the crawler that you created in the previous step, and choose **Run crawler**.

After the crawler completes, the table should be discovered and recorded in the catalog in the database that you specified.

Choose the link to the newly discovered table, and then choose the table name.

Verify that the crawler identified the table’s properties and schema correctly.

**Note**  
You can manually adjust the properties and schema JSON files using the buttons on the top right.

If you don’t want to add a crawler, you can add tables manually.

1. In the AWS console, choose **AWS Glue**.

1. Choose **Tables**, and then choose **Add table manually**.

#### Step 4 — Create an ETL Job to Copy the Visits Table to an Aurora MySQL Database
<a name="chap-sql-server-aurora-mysql.management.etl.mysql.examples.step4"></a>

1. In the AWS console, choose **AWS Glue**.

1. Choose **Jobs (legacy)**, and then choose **Add job**.

1. Enter a name for the ETL job and pick a role for the security context. For this example, use the same role created for the crawler. The job may consist of a pre-existing ETL script, a manually authored script, or an automatic script generated by AWS Glue. For this example, use AWS Glue to generate the script. Enter a name for the script file or accept the default, which is also the job’s name. Configure advanced properties and parameters if needed and choose **Next**.

1. Select the data source for the job and choose **Next**.

1. On the **Choose a transform type** page, choose **Change schema**.

1. On the **Choose a data target** page, choose **Create tables in your data target**, use the JDBC Data store, and the `gluerds` connection type. Choose **Add connection**.

1. On the **Add connection** page, enter the access details for the Amazon Aurora instance and choose **Add**.

1. Choose **Next** to display the column mapping between the source and target. Leave the default mapping and data types, and choose **Next**.

1. Review the job properties and choose **Save job and edit script**.

1. Review the generated script and make manual changes if needed. You can use the built-in templates for source, target, target location, transform, and spigot using the buttons at the top right section of the screen.

1. Choose **Run job**.

1. In the AWS console, choose **AWS Glue**, and then choose **Jobs (legacy)**.

1. On the **History** tab, verify that the job status is **Succeeded**.

1. Open your query IDE, connect to the Aurora MySQL cluster, and query the visits database to make sure the data has been transferred successfully.
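For example, assuming the job created a table named `visits` in a database also named `visits` (both names are hypothetical and depend on your crawler and job configuration), a quick verification might look like this:

```
USE visits;
-- Confirm that rows arrived from the CSV file
SELECT COUNT(*) AS row_count FROM visits;
-- Spot-check a few rows
SELECT * FROM visits LIMIT 10;
```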

For more information, see [AWS Glue Developer Guide](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html) and [AWS Glue resources](https://aws.amazon.com/glue/resources).

# Viewing server logs
<a name="chap-sql-server-aurora-mysql.management.serverlogs"></a>

This topic provides reference information about logging capabilities in SQL Server and Amazon Aurora MySQL. You can gain insights into how these database systems handle error logging, slow query logging, and general logging.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  View logs from the Amazon RDS console, the Amazon RDS API, the AWS CLI, or the AWS SDKs.  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.serverlogs.sqlserver"></a>

SQL Server logs system and user-generated events to the *SQL Server Error Log* and to the *Windows Application Log*. It logs recovery messages, kernel messages, security events, maintenance events, and other general server-level error and informational messages. The Windows Application Log contains events from all Windows applications, including SQL Server and SQL Server Agent.

SQL Server Management Studio Log Viewer unifies all logs into a single consolidated view. You can also view the logs with any text editor.

Administrators typically use the SQL Server Error Log to confirm successful completion of processes, such as backup or batches, and to investigate the cause of run time errors. These logs can help detect current risks or potential future problem areas.

To view the log for SQL Server, SQL Server Agent, Database Mail, and Windows applications, open the SQL Server Management Studio Object Explorer pane, navigate to **Management**, **SQL Server Logs**, and choose the current log.

The following table identifies some common error codes database administrators typically look for in the error logs:


| Error code | Error message | 
| --- | --- | 
|  1105  |  Couldn’t allocate space.  | 
|  3041  |  Backup failed.  | 
|  9002  |  Transaction log full.  | 
|  14151  |  Replication agent failed.  | 
|  17053  |  Operating system error.  | 
|  18452  |  Login failed.  | 
|  9003  |  Possible database corruption.  | 

### Examples
<a name="chap-sql-server-aurora-mysql.management.serverlogs.sqlserver.examples"></a>

The following screenshot shows the typical log file viewer content:

![\[Log file viewer\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-log-file-viewer.png)


For more information, see [Monitoring the Error Logs](https://docs.microsoft.com/en-us/sql/tools/configuration-manager/monitoring-the-error-logs?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.serverlogs.mysql"></a>

 Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) provides administrators with access to the MySQL error log, slow query log, and the general log.

The MySQL Error Log is generated by default. To generate the slow query and general logs, set the corresponding parameters in the database parameter group. For more information, see [Server Options](chap-sql-server-aurora-mysql.configuration.serveroptions.md).

You can view Aurora MySQL logs directly from the Amazon RDS console, the Amazon RDS API, the AWS CLI, or the AWS SDKs. You can also direct the logs to a database table in the main database and use SQL queries to view the data. To download a binary log, use the `mysqlbinlog` utility.

The system writes error events to the `mysql-error.log` file, which you can view using the Amazon RDS console. Alternatively, you can use the Amazon RDS API, the AWS CLI, or the AWS SDKs to retrieve the log.

The `mysql-error.log` file buffers are flushed every five minutes and are appended to the `mysql-error-running.log` file. The `mysql-error-running.log` file is rotated every hour and retained for 24 hours.

 Aurora MySQL writes to the error log only on server startup, server shutdown, or when an error occurs. A database instance may run for long periods without generating log entries.

You can turn on and configure the Aurora MySQL slow query and general logs to write log entries to a file or a database table by setting the corresponding parameters in the database parameter group. The following list identifies the parameters that control the log options:
+  `slow_query_log` — Set to 1 to create the Slow Query Log. The default is 0.
+  `general_log` — Set to 1 to create the General Log. The default is 0.
+  `long_query_time` — Specify a value in seconds for the shortest query run time to be logged. The default is 10 seconds; the minimum is 0.
+  `log_queries_not_using_indexes` — Set to 1 to log all queries not using indexes to the slow query log. The default is 0. Queries that don’t use indexes are logged even if their run time is less than the value of the `long_query_time` parameter.
+  `log_output` — Specify one of the following options:
  +  **TABLE** — Write general queries to the `mysql.general_log` table and slow queries to the `mysql.slow_log` table. This option is set by default.
  +  **FILE** — Write both general and slow query logs to the file system. Log files are rotated hourly.
  +  **NONE** — Disable logging.
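With `log_output` set to `TABLE`, you can inspect the logs with ordinary SQL. The following sketch (column lists abbreviated) lists the most recent slow query and general log entries:

```
-- Most recent slow queries
SELECT start_time, user_host, query_time, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;

-- Most recent general log entries
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 10;
```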

### Examples
<a name="chap-sql-server-aurora-mysql.management.serverlogs.mysql.examples"></a>

The following walkthrough demonstrates how to view the Aurora MySQL error logs in the Amazon RDS console.

1. In the AWS console, choose **RDS**, and then choose **Databases**.

1. Choose the instance for which you want to view the error log.

    ![\[Log file viewer\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-view-error-log.png) 

1. Scroll down to the logs section and choose the log name. The log viewer displays the log content.

For more information, see [MySQL database log files](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MySQL.html) in the *Amazon Relational Database Service User Guide*.

# Maintenance plans
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans"></a>

This topic provides reference information about migrating maintenance tasks from Microsoft SQL Server 2019 to Amazon Aurora MySQL. You can understand the key differences in how routine database maintenance is handled between these two systems.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  Use Amazon RDS for backups. Use SQL for table maintenance.  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.sqlserver"></a>

A *maintenance plan* is a set of automated tasks used to optimize a database, perform regular backups, and ensure the database is free of inconsistencies. Maintenance plans are implemented as SQL Server Integration Services (SSIS) packages and are run by SQL Server Agent jobs. You can run them manually or automatically at scheduled time intervals.

SQL Server provides a variety of pre-configured maintenance tasks. You can create custom tasks using T-SQL scripts or operating system batch files.

Maintenance plans are typically used for the following tasks:
+ Backing up database and transaction log files.
+ Performing cleanup of database backup files in accordance with retention policies.
+ Performing database consistency checks.
+ Rebuilding or reorganizing indexes.
+ Decreasing data file size by removing empty pages (shrink a database).
+ Updating statistics to help the query optimizer obtain updated data distributions.
+ Running SQL Server Agent jobs for custom actions.
+ Running a T-SQL task.

Maintenance plans can include tasks for operator notifications and history or maintenance cleanup. They can also generate reports and output the contents to a text file or the maintenance plan tables in the `msdb` database.

You can create and manage maintenance plans using the maintenance plan wizard in SQL Server Management Studio, Maintenance Plan Design Surface (provides enhanced functionality over the wizard), Management Studio Object Explorer, and T-SQL system stored procedures.

For more information, see [SQL Server Agent and MySQL Agent](chap-sql-server-aurora-mysql.management.agent.md).

### Deprecated DBCC Index and Table Maintenance Commands
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.sqlserver.dbccindex"></a>

The DBCC DBREINDEX, INDEXDEFRAG, and SHOWCONTIG commands have been deprecated as of SQL Server 2008 R2. For more information, see [Deprecated Database Engine Features in SQL Server 2008 R2](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms143729(v=sql.105)) in the *SQL Server documentation*.

In place of the deprecated DBCC commands, SQL Server provides newer syntax alternatives as detailed in the following table.


| Deprecated DBCC command | Use instead | 
| --- | --- | 
|   `DBCC DBREINDEX`   |   `ALTER INDEX … REBUILD`   | 
|   `DBCC INDEXDEFRAG`   |   `ALTER INDEX … REORGANIZE`   | 
|   `DBCC SHOWCONTIG`   |   `sys.dm_db_index_physical_stats`   | 

For the Aurora MySQL alternatives to these maintenance commands, see [Aurora MySQL Maintenance Plans](#chap-sql-server-aurora-mysql.management.maintenanceplans.mysql).

### Examples
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.sqlserver.examples"></a>

Enable Agent XPs, which are turned off by default.

```
EXEC [sys].[sp_configure] @configname = 'show advanced options', @configvalue = 1;
RECONFIGURE;
```

```
EXEC [sys].[sp_configure] @configname = 'agent xps', @configvalue = 1;
RECONFIGURE;
```

Create a T-SQL maintenance plan for a single index rebuild.

```
USE msdb;
```

Add the Index Maintenance `IDX1` job to SQL Server Agent.

```
EXEC dbo.sp_add_job @job_name = N'Index Maintenance IDX1', @enabled = 1, @description = N'Optimize IDX1 for INSERT' ;
```

Add the T-SQL job step `Rebuild IDX1 to 50 percent fill`.

```
EXEC dbo.sp_add_jobstep @job_name = N'Index Maintenance IDX1', @step_name = N'Rebuild IDX1 to 50 percent fill', @subsystem = N'TSQL',
@command = N'USE MyDatabase; ALTER INDEX IDX1 ON Schema.Table REBUILD WITH (FILL_FACTOR = 50);',
@retry_attempts = 5, @retry_interval = 5;

Add a schedule to run every day at 01:00 AM.

```
EXEC dbo.sp_add_schedule @schedule_name = N'Daily0100', @freq_type = 4, @freq_interval = 1, @active_start_time = 010000;
```

Associate the schedule `Daily0100` with the job index maintenance `IDX1`.

```
EXEC sp_attach_schedule @job_name = N'Index Maintenance IDX1', @schedule_name = N'Daily0100';
```

For more information, see [Maintenance Plans](https://docs.microsoft.com/en-us/sql/relational-databases/maintenance-plans/maintenance-plans?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.mysql"></a>

 Amazon Relational Database Service (Amazon RDS) performs automated database backups by creating storage volume snapshots that back up entire instances, not individual databases.

 Amazon RDS creates snapshots during the backup window for individual database instances and retains snapshots in accordance with the backup retention period. You can use the snapshots to restore a database to any point in time within the backup retention period.

**Note**  
The state of a database instance must be ACTIVE for automated backups to occur.

You can back up database instances manually by creating an explicit database snapshot. Use the AWS console, the AWS CLI, or the AWS API to take manual snapshots.

### Examples
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.mysql.examples"></a>

 **Create a manual database snapshot using the Amazon RDS console** 

1. In the AWS console, choose **RDS**, and then choose **Databases**.

1. Choose your Aurora MySQL instance, and for **Instance actions** choose **Take snapshot**.

    ![\[Take snapshot\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-take-snapshot.png) 

 **Restore a database from a snapshot** 

1. In the AWS console, choose **RDS**, and then choose **Snapshots**.

1. Choose the snapshot to restore, and for **Actions** choose **Restore snapshot**.

   This action creates a new instance.

1. Enter the required configuration options in the wizard for creating a new Amazon Aurora database instance. Choose **Restore DB Instance**.

You can also restore a database instance to a point-in-time. For more information, see [Backup and Restore](chap-sql-server-aurora-mysql.hadr.backuprestore.md).

For all other tasks, use a third-party or a custom application scheduler.

 **Rebuild and reorganize an index** 

 Aurora MySQL supports the `OPTIMIZE TABLE` command, which is similar to the `REORGANIZE` option of SQL Server indexes.

```
OPTIMIZE TABLE MyTable;
```

To perform a full table rebuild with all secondary indexes, perform a no-op `ALTER TABLE` using either `ALTER TABLE <table> FORCE` or `ALTER TABLE <table> ENGINE = <current engine>`.

```
ALTER TABLE MyTable FORCE;
```

```
ALTER TABLE MyTable ENGINE = InnoDB;
```

### Perform Database Consistency Checks
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.mysql.consistencychecks"></a>

Use the `CHECK TABLE` command to perform a database consistency check.

```
CHECK TABLE <table name> [FOR UPGRADE | QUICK]
```

The `FOR UPGRADE` option checks if the table is compatible with the current version of MySQL to determine whether there have been any incompatible changes in any of the table’s data types or indexes since the table was created. The `QUICK` option doesn’t scan the rows to check for incorrect links.

For routine checks of a table, use the `QUICK` option.
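For example, checks of a hypothetical `MyTable` might look like this:

```
-- Routine check that skips scanning rows for incorrect links
CHECK TABLE MyTable QUICK;

-- Check compatibility with the current MySQL version
CHECK TABLE MyTable FOR UPGRADE;
```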

**Note**  
In most cases, Aurora MySQL will find all errors in the data file. When an error is found, the table is marked as corrupted and can’t be used until it is repaired.

### Converting Deprecated DBCC Index and Table Maintenance Commands
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.mysql.dbcc"></a>


| Deprecated DBCC command |  Aurora MySQL equivalent | 
| --- | --- | 
|   `DBCC DBREINDEX`   |   `ALTER TABLE … FORCE`   | 
|   `DBCC INDEXDEFRAG`   |   `OPTIMIZE TABLE`   | 
|   `DBCC SHOWCONTIG`   |   `CHECK TABLE`   | 
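As a sketch, the following statements show the Aurora MySQL equivalents applied to a hypothetical `MyTable`:

```
-- Instead of DBCC DBREINDEX: rebuild the table and all secondary indexes
ALTER TABLE MyTable FORCE;

-- Instead of DBCC INDEXDEFRAG: reorganize the table and refresh index statistics
OPTIMIZE TABLE MyTable;

-- Instead of DBCC SHOWCONTIG: check the table for errors
CHECK TABLE MyTable;
```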

### Decrease Data File Size by Removing Empty Pages
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.mysql.datafilesize"></a>

Unlike SQL Server, which uses a single set of files for an entire database, Aurora MySQL uses one file for each database table. Therefore, you don’t need to shrink an entire database.

### Update Statistics to Help the Query Optimizer Get Updated Data Distribution
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.mysql.updatestatistics"></a>

 Aurora MySQL uses both persistent and non-persistent table statistics. Non-persistent statistics are deleted on server restart and after some operations, and are then recomputed on the next table access. Therefore, recomputing statistics can produce different estimates, leading to different run plan choices and variations in query performance.

Persistent optimizer statistics survive server restarts and provide better plan stability resulting in more consistent query performance. Persistent optimizer statistics provide the following control and flexibility options:
+ Set the `innodb_stats_auto_recalc` configuration option to control whether statistics are updated automatically when changes to a table cross a threshold.
+ Set the `STATS_PERSISTENT`, `STATS_AUTO_RECALC`, and `STATS_SAMPLE_PAGES` clauses with `CREATE TABLE` and `ALTER TABLE` statements to configure custom statistics settings for individual tables.
+ View optimizer statistics in the `mysql.innodb_table_stats` and `mysql.innodb_index_stats` tables.
+ View the `last_update` column of the `mysql.innodb_table_stats` and `mysql.innodb_index_stats` tables to see when statistics were last updated.
+ Modify the `mysql.innodb_table_stats` and `mysql.innodb_index_stats` tables to force a specific query optimization plan or to test alternative plans without modifying the database.

For more information, see [Managing Statistics](chap-sql-server-aurora-mysql.tsql.managingstatistics.md).

## Summary
<a name="chap-sql-server-aurora-mysql.management.maintenanceplans.summary"></a>

The following table summarizes the key tasks that use SQL Server maintenance plans and comparable Aurora MySQL solutions.


| Task | SQL Server |  Aurora MySQL  | Comments | 
| --- | --- | --- | --- | 
|  Rebuild or reorganize indexes  |   `ALTER INDEX` / `ALTER TABLE`   |   `OPTIMIZE TABLE` / `ALTER TABLE`   |  | 
|  Decrease data file size by removing empty pages  |   `DBCC SHRINKDATABASE` / `DBCC SHRINKFILE`   |  Not needed  |  Aurora MySQL uses one file per table, not per database. Rebuilding a table optimizes file size.  | 
|  Update statistics to help the query optimizer get updated data distribution  |   `UPDATE STATISTICS` / `sp_updatestats`   |  Set `innodb_stats_auto_recalc` to `ON` in the instance global parameter group.  |  | 
|  Perform database consistency checks  |   `DBCC CHECKDB` / `DBCC CHECKTABLE`   |   `CHECK TABLE`   |  | 
|  Back up the database and transaction log files  |   `BACKUP DATABASE` / `BACKUP LOG`   |  Automated backups and snapshots  |  For more information, see [Backup and Restore](chap-sql-server-aurora-mysql.hadr.backuprestore.md).  | 
|  Run SQL Server Agent jobs for custom actions  |   `sp_start_job`, scheduled jobs   |  Not supported  |  | 

For more information, see [CHECK TABLE Statement](https://dev.mysql.com/doc/refman/5.7/en/check-table.html) in the *MySQL documentation* and [Working with backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html) in the *Amazon Relational Database Service User Guide*.

# Monitoring features
<a name="chap-sql-server-aurora-mysql.management.monitoring"></a>

This topic provides reference information about monitoring and performance management for Microsoft SQL Server and Amazon Aurora MySQL databases. You can learn about the different monitoring capabilities and tools available for each database system, including SQL Server’s dynamic management views and integration with Amazon CloudWatch and Performance Insights.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  Use Amazon CloudWatch service. For more information, see [Monitoring metrics in an Amazon RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html) in the *Amazon Relational Database Service User Guide*.  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver"></a>

Monitoring server performance and behavior is a critical aspect of maintaining service quality and includes ad-hoc data collection, ongoing data collection, root cause analysis, preventative actions, and reactive actions. SQL Server provides an array of interfaces to monitor and collect server data.

SQL Server 2017 introduces several new dynamic management views:
+  `sys.dm_db_log_stats` exposes summary level attributes and information on transaction log files, helpful for monitoring transaction log health.
+  `sys.dm_tran_version_store_space_usage` tracks version store usage for each database, useful for proactively planning `tempdb` sizing based on the version store usage for each database.
+  `sys.dm_db_log_info` exposes VLF information to monitor, alert, and avert potential transaction log issues.
+  `sys.dm_db_stats_histogram` is a new dynamic management view for examining statistics.
+  `sys.dm_os_host_info` provides operating system information for both Windows and Linux.

SQL Server 2019 adds a new configuration parameter, `LIGHTWEIGHT_QUERY_PROFILING`. It turns the lightweight query profiling infrastructure on or off. The lightweight query profiling infrastructure (LWP) provides query performance data more efficiently than standard profiling mechanisms and is enabled by default. For more information, see [Query Profiling Infrastructure](https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-profiling-infrastructure?view=sql-server-ver15) in the *SQL Server documentation*.

### Windows Operating System Level Tools
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver.ostools"></a>

You can use the Windows Task Scheduler to trigger runs of script files, such as CMD or PowerShell scripts, to collect, store, and process performance data.

System Monitor is a graphical tool for measuring and recording performance of SQL Server and other Windows-related metrics using the Windows Management Instrumentation (WMI) performance objects.

**Note**  
Performance objects can also be accessed directly from T-SQL using the SQL Server Operating System Related DMVs. For a full list of the DMVs, see [SQL Server Operating System Related Dynamic Management Views (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sql-server-operating-system-related-dynamic-management-views-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

Performance counters exist for real-time measurements, such as CPU utilization, and for aggregated history, such as average active transactions. For a full list of the object hierarchy, see [Use SQL Server Objects](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/use-sql-server-objects?view=sql-server-ver15) in the *SQL Server documentation*.
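The same counters can be read from T-SQL. For example, the following sketch reads a couple of counters directly from the performance objects DMV:

```
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'User Connections', N'Transactions/sec');
```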

### SQL Server Extended Events
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver.extendedevents"></a>

SQL Server’s latest tracing framework, Extended Events, provides lightweight and robust event collection and storage. SQL Server Management Studio features the New Session Wizard and New Session graphical user interfaces for managing and analyzing captured data. SQL Server Extended Events consists of the following items:
+ SQL Server Extended Events Package is a logical container for Extended Events objects.
+ SQL Server Extended Events Targets are consumers of events. Targets include the Event File, which writes data to a file; the Ring Buffer, which retains data in memory; and targets that process aggregates, such as Event Counters and Histograms.
+ SQL Server Extended Events Engine is a collection of services and tools that comprise the framework.
+ SQL Server Extended Events Sessions are logical containers mapped many-to-many with packages, events, and filters.

The following example creates a session that logs lock escalations and lock timeouts to a file.

```
CREATE EVENT SESSION Locking_Demo
ON SERVER
    ADD EVENT sqlserver.lock_escalation,
    ADD EVENT sqlserver.lock_timeout
    ADD TARGET package0.etw_classic_sync_target
        (SET default_etw_session_logfile_path = N'C:\ExtendedEvents\Locking\Demo_20180502.etl')
    WITH (MAX_MEMORY=8MB, MAX_EVENT_SIZE=8MB);
GO
```

### SQL Server Tracing Framework and the SQL Server Profiler Tool
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver.tracingframework"></a>

The SQL Server trace framework is the predecessor to the Extended Events framework and remains popular among database administrators. The lighter and more flexible Extended Events Framework is recommended for development of new monitoring functionality. For more information, see [SQL Server Profiler](https://docs.microsoft.com/en-us/sql/tools/sql-server-profiler/sql-server-profiler?view=sql-server-ver15) in the *SQL Server documentation*.

### SQL Server Management Studio
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver.ssms"></a>

SQL Server Management Studio (SSMS) provides several monitoring extensions:
+  **SQL Server Activity Monitor** is an in-process, real-time, basic high-level information graphical tool.
+  **Query Graphical Show Plan** provides easy exploration of estimated and actual query run plans.
+  **Query Live Statistics** displays query run progress in real time.
+  **Replication Monitor** presents a publisher-focused view or distributor-focused view of all replication activity. For more information, see [Overview of the Replication Monitor Interface](https://docs.microsoft.com/en-us/sql/relational-databases/replication/monitor/overview-of-the-replication-monitor-interface?view=sql-server-ver15) in the *SQL Server documentation*.
+  **Log Shipping Monitor** displays the status of any log shipping activity whose status is available from the server instance to which you are connected. For more information, see [View the Log Shipping Report (SQL Server Management Studio)](https://docs.microsoft.com/en-us/sql/database-engine/log-shipping/view-the-log-shipping-report-sql-server-management-studio?view=sql-server-ver15) in the *SQL Server documentation*.
+  **Standard Performance Reports** is a set of reports that show the most important performance metrics, such as change history, memory usage, activity, transactions, HA, and more.

### T-SQL
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver.tsql"></a>

From the T-SQL interface, SQL Server provides many system stored procedures, system views, and functions for monitoring data.

System stored procedures such as `sp_who` and `sp_lock` provide real-time information. The `sp_monitor` procedure provides aggregated data.

Built-in functions such as `@@CONNECTIONS`, `@@IO_BUSY`, `@@TOTAL_ERRORS`, and others provide high-level server information.

A rich set of system dynamic management functions and views is provided for monitoring almost every aspect of the server. These functions and views reside in the `sys` schema and are prefixed with `dm_`. For more information, see [System Dynamic Management Views](https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views?view=sql-server-ver15) in the *SQL Server documentation*.
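For example, the following sketch combines two of these DMVs to list currently running user requests:

```
SELECT s.session_id, s.login_name, r.status, r.command,
       r.cpu_time, r.total_elapsed_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
  ON r.session_id = s.session_id
WHERE s.is_user_process = 1;
```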

### Trace Flags
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver.traceflags"></a>

You can set trace flags to log events. For example, set trace flag 1204 to log deadlock information. For more information, see [DBCC TRACEON - Trace Flags (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.
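For example, the following statements turn deadlock logging on globally, check the flag’s status, and turn it off again:

```
DBCC TRACEON (1204, -1);    -- turn on deadlock reporting globally
DBCC TRACESTATUS (1204);    -- verify that the trace flag is active
DBCC TRACEOFF (1204, -1);   -- turn it off again
```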

### SQL Server Query Store
<a name="chap-sql-server-aurora-mysql.management.monitoring.sqlserver.ssqs"></a>

Query Store is a database-level framework supporting automatic collection of queries, run plans, and run time statistics. This data is stored in system tables. You can use this data to diagnose performance issues, understand patterns, and understand trends. It can also be set to automatically revert plans when a performance regression is detected.
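For example, the following sketch turns on the Query Store for a hypothetical database and lists the queries with the highest average duration:

```
ALTER DATABASE MyDatabase SET QUERY_STORE = ON;

-- Top queries by average duration
SELECT TOP 10 qt.query_sql_text, rs.avg_duration
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
  ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```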

For more information, see [Monitoring performance by using the Query Store](https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.monitoring.mysql"></a>

The native features for monitoring MySQL databases, such as InnoDB logging and the Performance Schema, are turned off by default for Aurora MySQL. Most third-party tools that rely on these features can’t be used. Some vendors provide monitoring services specifically for Aurora MySQL.

However, Amazon RDS provides a rich monitoring infrastructure for Aurora MySQL clusters and instances through the native Amazon CloudWatch service, and these services are improved frequently.

Amazon RDS Performance Insights is an advanced database performance monitoring feature that makes it easy to diagnose and solve performance challenges on Amazon RDS databases. Performance Insights supports additional counter metrics on Amazon RDS for MySQL and Amazon Aurora MySQL-Compatible Edition (Aurora MySQL). With counter metrics, you can customize the Performance Insights dashboard to include up to 10 additional graphs drawn from dozens of operating system and database performance metrics. You can correlate counter metrics with the database load chart to identify and analyze performance issues. For more information, see [Performance Insights](https://aws.amazon.com/rds/performance-insights/).

![\[Performance Insights\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-sql-server-aurora-mysql-performance-insights.png)


To turn on Performance Insights for your instance, use the step-by-step walkthrough. For more information, see [Turning Performance Insights on and off](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.Enabling.html#USER_PerfInsights.Enabling.Console.Modifying) in the *Amazon Relational Database Service User Guide*.

When the Performance Schema is turned on for Aurora MySQL, Performance Insights provides more detailed information. For example, Performance Insights displays DB load categorized by detailed wait events. When the Performance Schema is turned off, Performance Insights displays DB load categorized by the process list state of MySQL.

The Performance Schema stores many useful metrics that help you analyze and solve performance-related issues.

You have the following options for enabling the Performance Schema:
+ Allow Performance Insights to manage required parameters automatically. When you create an Aurora MySQL DB instance with Performance Insights enabled, Performance Schema is turned on automatically. In this case, Performance Insights automatically manages your parameters.
**Note**  
In this scenario, Performance Insights changes schema-related parameters on the DB instance. These changes aren’t visible in the parameter group associated with the DB instance. However, these changes are visible in the output of the `SHOW GLOBAL VARIABLES` command.
+ Set the required parameters yourself. For Performance Insights to list wait events, you must set all parameters as shown in the following table.


| Parameter name | Value | 
| --- | --- | 
|   `performance_schema`   |  1 (the Source column has the value engine-default)  | 
|   `performance-schema-consumer-events-waits-current`   |  ON  | 
|   `performance-schema-instrument`   |   `wait/%=ON`   | 
|   `performance-schema-consumer-global-instrumentation`   |  ON  | 
|   `performance-schema-consumer-thread-instrumentation`   |  ON  | 

For more information, see [Server Options](chap-sql-server-aurora-mysql.configuration.serveroptions.md) and [Performance Schema Quick Start](https://dev.mysql.com/doc/refman/5.7/en/performance-schema-quick-start.html) in the *MySQL documentation*, [Monitoring metrics in an Amazon RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html) and [Monitoring OS metrics with Enhanced Monitoring](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html) in the *Amazon Relational Database Service User Guide*.
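After the instance restarts with the new parameters, you can verify from any client connection that the Performance Schema is active and that the wait-event consumers are collecting data. For example:

```
SHOW GLOBAL VARIABLES LIKE 'performance_schema';

SELECT * FROM performance_schema.setup_consumers
WHERE NAME LIKE 'events_waits%';
```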

# Resource governor features
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor"></a>

This topic provides reference information about resource management and workload isolation capabilities in SQL Server 2019 and Amazon Aurora MySQL. You can understand the differences in how these database systems handle resource limits and workload management.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[One star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-1.png)   |  N/A  |  N/A  |  Use the resource limit for each user.  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.sqlserver"></a>

SQL Server Resource Governor provides the capability to control and manage resource consumption. Administrators can specify and enforce workload limits on CPU, physical I/O, and memory. Resource configurations are dynamic, and you can change them in real time.

In SQL Server 2019, the configurable value for the `REQUEST_MAX_MEMORY_GRANT_PERCENT` option of `CREATE WORKLOAD GROUP` and `ALTER WORKLOAD GROUP` changed from an integer to a float data type to allow more granular control of memory limits. For more information, see [ALTER WORKLOAD GROUP (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-workload-group-transact-sql?view=sql-server-ver15) and [CREATE WORKLOAD GROUP (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-workload-group-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.
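For example, with the float data type you can express a fractional memory limit (`ReportingWorkloadGroup` is a hypothetical group name):

```
ALTER WORKLOAD GROUP ReportingWorkloadGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 12.5);

ALTER RESOURCE GOVERNOR RECONFIGURE;
```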

### Use Cases
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.sqlserver.usecases"></a>

The following list identifies typical Resource Governor use cases:
+  **Minimize performance bottlenecks and inconsistencies** to better support Service Level Agreements (SLA) for multiple workloads and users.
+  **Protect against runaway queries** that consume a large amount of resources, or explicitly throttle I/O-intensive operations. For example, consistency checks with DBCC may bottleneck the I/O subsystem and negatively impact concurrent workloads.
+  **Allow tracking and control for resource-based pricing scenarios** to improve predictability of user charges.

### Concepts
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.sqlserver.concepts"></a>

The three basic concepts in Resource Governor are Resource Pools, Workload Groups, and Classification.
+  **Resource Pools** represent physical resources. Two built-in resource pools, internal and default, are created when SQL Server is installed. You can create custom user-defined resource pools for specific workload types.
+  **Workload Groups** are logical containers for session requests with similar characteristics. Workload Groups allow aggregate resource monitoring of multiple sessions. Resource limit policies are defined for a Workload Group. Each Workload Group belongs to a Resource Pool.
+  **Classification** is a process that inspects incoming connections and assigns them to a specific Workload Group based on the common attributes. User-defined functions are used to implement Classification. For more information, see [User-Defined Functions](chap-sql-server-aurora-mysql.tsql.udf.md).

### Examples
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.sqlserver.examples"></a>

Turn on the Resource Governor.

```
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Create a Resource Pool.

```
CREATE RESOURCE POOL ReportingWorkloadPool
    WITH (MAX_CPU_PERCENT = 20);
```

```
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Create a Workload Group.

```
CREATE WORKLOAD GROUP ReportingWorkloadGroup USING ReportingWorkloadPool;
```

```
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Create a classifier function.

```
CREATE FUNCTION dbo.WorkloadClassifier()
RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE
        WHEN HOST_NAME() = 'ReportServer'
        THEN 'ReportingWorkloadGroup'
        ELSE 'Default'
    END)
END;
```

Register the classifier function.

```
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.WorkloadClassifier);
```

```
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

For more information, see [Resource Governor](https://docs.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.mysql"></a>

Amazon Aurora MySQL-Compatible Edition (Aurora MySQL) doesn’t support a server-wide, granular, resource-based workload isolation and management capability similar to SQL Server Resource Governor. However, Aurora MySQL does support User Resource Limit Options, which you can use to achieve similar high-level functionality by limiting the resource consumption of user connections.

You can specify User Resource Limit Options as part of the `CREATE USER` statement to place the following limits on users:
+ The total number of queries per hour an account is allowed to issue.
+ The number of updates per hour an account is allowed to issue.
+ The number of times per hour an account can establish a server connection.
+ The total number of concurrent server connections allowed for the account.

For more information, see [Users and Roles](chap-sql-server-aurora-mysql.security.usersroles.md).

### Syntax
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.mysql.syntax"></a>

```
CREATE USER <User Name> ...
WITH
MAX_QUERIES_PER_HOUR count |
MAX_UPDATES_PER_HOUR count |
MAX_CONNECTIONS_PER_HOUR count |
MAX_USER_CONNECTIONS count
```

### Migration Considerations
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.mysql.considerations"></a>

Although both SQL Server Resource Governor and Aurora MySQL User Resource Limit Options provide the same basic function, limiting the amount of resources available for distinct types of workloads, they differ significantly in scope and flexibility.

SQL Server Resource Governor is a dynamically configured, independent framework based on actual run-time resource consumption. User Resource Limit Options are defined as part of the security objects and require application connection changes to map sessions to limited users. To modify these limits, you must alter the user object.
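As a sketch, changing a limit means altering the user itself rather than a separate resource policy object (`ReportUsers` matches the example user created later in this topic):

```
ALTER USER 'ReportUsers'@'localhost'
WITH MAX_QUERIES_PER_HOUR 120;
```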

User Resource Limit Options don’t limit workload activity based on actual resource consumption; instead, they provide quantitative limits on the number of queries, updates, or connections. A runaway query that consumes a large amount of resources can still slow down the server.

Another important difference is how exceeded resource limits are handled. SQL Server Resource Governor throttles the run; Aurora MySQL raises errors.

### Example
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.mysql.examples"></a>

Create a resource-limited user.

```
CREATE USER 'ReportUsers'@'localhost'
IDENTIFIED BY 'ReportPassword'
WITH
MAX_QUERIES_PER_HOUR 60
MAX_UPDATES_PER_HOUR 0
MAX_CONNECTIONS_PER_HOUR 5
MAX_USER_CONNECTIONS 2;
```

## Summary
<a name="chap-sql-server-aurora-mysql.management.resourcegovernor.summary"></a>


| Feature | SQL Server Resource Governor |  Aurora MySQL User Resource Limit Options | Comments | 
| --- | --- | --- | --- | 
|  Scope  |  Dynamic workload pools and workload groups, mapped to a classifier function.  |  For each user.  |  Application connection strings need to use specific limited users.  | 
|  Limited resources  |  IO, CPU, and memory.  |  Number of queries, number of connections.  |  | 
|  Modifying limits  |   `ALTER RESOURCE POOL`   |   `ALTER USER`   |  Application may use a dynamic connection string.  | 
|  Behavior when a resource limit is reached  |  Throttles and queues runs.  |  Raises an error.  |  Application retry logic may need to be added.  | 

For more information, see [CREATE USER Resource-Limit Options](https://dev.mysql.com/doc/refman/5.7/en/create-user.html#create-user-resource-limits) and [Setting Account Resource Limits](https://dev.mysql.com/doc/refman/5.7/en/user-resources.html) in the *MySQL documentation*.

# Linked servers
<a name="chap-sql-server-aurora-mysql.management.linkedservers"></a>

This topic provides reference information about linked servers in Microsoft SQL Server and their absence in Amazon Aurora MySQL-Compatible Edition. You can understand the functionality and benefits of linked servers in SQL Server, including their ability to connect to external data sources and run distributed queries.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[No feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-0.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-automation-0.png)   |   [Linked Servers](chap-sql-server-aurora-mysql.tools.actioncode.md#chap-sql-server-aurora-mysql.tools.actioncode.linkedservers)   |  Data transfer across schemas only, use a custom application solution to access remote instances.  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.linkedservers.sqlserver"></a>

Linked servers enable the database engine to connect to external Object Linking and Embedding Database (OLE-DB) sources. They are typically used to run T-SQL commands that reference tables in other instances of SQL Server, or in other RDBMS engines such as Oracle. SQL Server supports multiple types of OLE-DB sources as linked servers, including Microsoft Access, Microsoft Excel, text files, and others.

The main benefits of using linked servers are:
+ Reading external data for import or processing.
+ Running distributed queries, data modifications, and transactions for enterprise-wide data sources.
+ Querying heterogeneous data sources using the familiar T-SQL API.

You can configure linked servers using either SQL Server Management Studio or the system stored procedure `sp_addlinkedserver`. The available functionality and the specific requirements vary significantly between the various OLE-DB sources. Some sources may allow read-only access, others may require specific security context settings, and so on.

The linked server definition contains the linked server alias, the OLE DB provider, and all the parameters needed to connect to a specific OLE-DB data source.

The OLE-DB provider is a Dynamic Link Library (DLL) that handles the interaction of SQL Server with all data sources of its type, for example, the OLE-DB Provider for Oracle. The OLE-DB data source is the specific data source to be accessed, using the specified OLE-DB provider.

**Note**  
You can use SQL Server distributed queries with any custom OLE DB provider as long as the required interfaces are implemented correctly.

SQL Server parses the T-SQL commands that access the linked server and sends the appropriate requests to the OLE-DB provider. There are several access methods for remote data, including opening the base table for read or issuing SQL queries against the remote data source.

You can manage linked servers using SQL Server Management Studio graphical user interface or T-SQL system stored procedures.
+  `EXECUTE sp_addlinkedserver` to add new server definitions.
+  `EXECUTE sp_addlinkedsrvlogin` to define security context.
+  `EXECUTE sp_linkedservers` or `SELECT * FROM sys.servers` system catalog view to retrieve metadata.
+  `EXECUTE sp_dropserver` to delete a linked server.

You can access linked server data sources from T-SQL using a fully qualified, four-part naming scheme: `<Server Name>.<Database Name>.<Schema Name>.<Object Name>`.

Additionally, you can use the `OPENQUERY` row set function to explicitly invoke pass-through queries on the remote linked server. Also, you can use the `OPENROWSET` and `OPENDATASOURCE` row set functions for one-time remote data access without defining the linked server in advance.
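For example, the following sketch contrasts a four-part name query with an equivalent `OPENQUERY` pass-through against a hypothetical linked server named `OracleLinkedServer`:

```
-- Four-part name: SQL Server parses the query and requests remote data
SELECT * FROM OracleLinkedServer..HR.EMPLOYEES;

-- Pass-through: the query text runs as-is on the remote server
SELECT * FROM OPENQUERY(OracleLinkedServer, 'SELECT * FROM HR.EMPLOYEES');
```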

### Syntax
<a name="chap-sql-server-aurora-mysql.management.linkedservers.sqlserver.syntax"></a>

```
EXECUTE sp_addlinkedserver
    [ @server= ] <Linked Server Name>
    [ , [ @srvproduct= ] <Product Name>]
    [ , [ @provider= ] <OLE DB Provider>]
    [ , [ @datasrc= ] <Data Source>]
    [ , [ @location= ] <Data Source Address>]
    [ , [ @provstr= ] <Provider Connection String>]
    [ , [ @catalog= ] <Database>];
```

### Examples
<a name="chap-sql-server-aurora-mysql.management.linkedservers.sqlserver.examples"></a>

Create a linked server to a local text file.

```
EXECUTE sp_addlinkedserver MyTextLinkedServer, N'Jet 4.0',
    N'Microsoft.Jet.OLEDB.4.0',
    N'D:\TextFiles\MyFolder',
    NULL,
    N'Text';
```

Define security context.

```
EXECUTE sp_addlinkedsrvlogin MyTextLinkedServer, FALSE, Admin, NULL;
```

Use `sp_tables_ex` to list tables in a folder.

```
EXEC sp_tables_ex MyTextLinkedServer;
```

Issue a `SELECT` query using a four-part name.

```
SELECT *
FROM MyTextLinkedServer...[FileName#text];
```

For more information, see [sp\_addlinkedserver (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-addlinkedserver-transact-sql?view=sql-server-ver15) and [Distributed Queries Stored Procedures (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/distributed-queries-stored-procedures-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.linkedservers.mysql"></a>

Amazon Aurora MySQL-Compatible Edition doesn’t support remote data access.

Connectivity between schemas is trivial; connectivity to other instances requires a custom application solution.

# Scripting features
<a name="chap-sql-server-aurora-mysql.management.scripting"></a>

This topic provides reference information about the differences in scripting and automation capabilities between Microsoft SQL Server 2019 and Amazon Aurora MySQL. You can understand the contrasting tool sets and scripting languages used in these database systems.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[No feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-mysql-migration-playbook/images/pb-compatibility-0.png)   |  N/A  |  N/A  |  Non-compatible tool sets and scripting languages. Use MySQL Workbench, Amazon RDS API, AWS Management Console, and AWS CLI.  | 

## SQL Server Usage
<a name="chap-sql-server-aurora-mysql.management.scripting.sqlserver"></a>

SQL Server supports T-SQL and XQuery scripting within multiple run frameworks such as SQL Server Agent and stored procedures.

The `SQLCMD` command line utility can also be used to run T-SQL scripts. However, the most extensive and feature-rich scripting environment is PowerShell.

SQL Server provides two PowerShell snap-ins that implement a provider exposing the entire SQL Server Management Object Model (SMO) as PowerShell paths. Additionally, you can use cmdlets in SQL Server to run specific SQL Server commands.

**Note**  
You can use `Invoke-Sqlcmd` to run scripts using the SQLCMD utility.

The `sqlps` utility launches the PowerShell scripting environment and automatically loads the SQL Server modules. You can launch `sqlps` from a command prompt or from the Object Explorer pane of SQL Server Management Studio. You can run one-time PowerShell commands and script files (for example, `.\SomeFolder\SomeScript.ps1`).

**Note**  
SQL Server Agent supports running PowerShell scripts in job steps. For more information, see [SQL Server Agent and MySQL Agent](chap-sql-server-aurora-mysql.management.agent.md).

SQL Server also supports three types of direct database engine queries: T-SQL, XQuery, and the SQLCMD utility. You can call T-SQL and XQuery from stored procedures, SQL Server Management Studio (or another IDE), and SQL Server Agent jobs. The SQLCMD utility also supports commands and variables.

### Examples
<a name="chap-sql-server-aurora-mysql.management.scripting.sqlserver.examples"></a>

Back up a database with PowerShell using the default backup options.

```
PS C:\> Backup-SqlDatabase -ServerInstance "MyServer\SQLServerInstance" -Database "MyDB"
```

Get all rows from the `MyTable` table in the `MyDB` database.

```
PS C:\> Read-SqlTableData -ServerInstance "MyServer\SQLServerInstance" -DatabaseName "MyDB" -TableName "MyTable"
```

For more information, see [SQL Server PowerShell](https://docs.microsoft.com/en-us/sql/powershell/sql-server-powershell?view=sql-server-ver15), [Database Engine Scripting](https://docs.microsoft.com/en-us/sql/ssms/scripting/database-engine-scripting?view=sql-server-ver15), and [sqlcmd Utility](https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-ver15) in the *SQL Server documentation*.

## MySQL Usage
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql"></a>

As a Platform as a Service (PaaS), Aurora MySQL accepts connections from any compatible client, but you can’t access the server’s host operating system to run command line utilities locally. However, you can use MySQL tools installed on a network host and the Amazon RDS API. The most common tools for Aurora MySQL scripting and automation include MySQL Workbench, MySQL Utilities, and the Amazon RDS API. The following sections describe each tool.

### MySQL Workbench
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql.workbench"></a>

MySQL Workbench is the most commonly used tool for development and administration of MySQL servers. It is available as a free Community Edition and a paid Commercial Edition that adds enterprise features such as database documentation. MySQL Workbench is an integrated IDE with the following features:
+  **SQL Development** — Manage and configure connections to Aurora MySQL clusters and run SQL queries using the SQL editor.
+  **Data Modeling** — Reverse and forward engineer graphical database schema models and manage schemas with the Table Editor.
+  **Server Administration** — Not applicable to Aurora MySQL. Use the Amazon RDS console to administer servers.

The MySQL Workbench also supports a Python scripting shell that you can use interactively and programmatically.

### MySQL Utilities
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql.utilities"></a>

MySQL Utilities are a set of Python command line tools for common MySQL server maintenance and administration tasks. They can reduce the need to write custom code for common tasks and can be easily customized.

The following tools are included in the MySQL Utilities set. Note that some tools don’t work with Aurora MySQL because you don’t have root access to the underlying server.
+  **Admin utilities** — Clone, Copy, Compare, Diff, Export, Import, and User Management.
+  **Replication utilities** — Setup, Configuration, and Verification.
+  **General utilities** — Disk Usage, Redundant Indexes, Manage Metadata, and Manage Audit Data.

### Amazon RDS API
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql.rdsapi"></a>

The Amazon RDS API is a web service for managing and maintaining Aurora MySQL and other relational databases. You can use the Amazon RDS API to set up, operate, scale, back up, and perform many common administration tasks. The Amazon RDS API supports multiple database platforms and can integrate administration seamlessly for heterogeneous environments.

**Note**  
The Amazon RDS API is asynchronous. Some interfaces may require polling or callback functions to receive command status and results.

You can access Amazon RDS using the AWS Management Console, the AWS Command Line Interface (CLI), and the Amazon RDS Programmatic API as described in the following sections.

### AWS Management Console
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql.managementconsole"></a>

The AWS Management Console is a simple web-based set of tools for interactive management of Aurora MySQL and other Amazon RDS services. To access the AWS Management Console, sign in to your AWS account, and choose **RDS**.

### AWS Command Line Interface
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql.cli"></a>

The AWS Command Line Interface is an open source tool that runs on Linux, Windows, or macOS systems that have Python 2 version 2.6.5 or later, or Python 3 version 3.3 or later.

The AWS CLI is built on top of the AWS SDK for Python (Boto), which provides commands for interacting with AWS services. With minimal configuration, you can start using all AWS Management Console functionality from your favorite terminal application.
+  **Linux shells** — Use common shell programs such as Bash, Zsh, or tcsh.
+  **Windows command line** — Run commands in PowerShell or the Windows Command Processor.
+  **Remotely** — Run commands on Amazon EC2 instances through a remote terminal such as PuTTY or SSH.

The AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are PowerShell modules built on the functionality exposed by the AWS SDK for .NET. These tools enable scripting operations on AWS resources from the PowerShell command line.

**Note**  
You can’t use SQL Server cmdlets in PowerShell.

### Amazon RDS Programmatic API
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql.programmaticapi"></a>

You can use the Amazon RDS API to automate management of database instances and other Amazon RDS objects.

For more information, see [Actions](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Operations.html), [Data Types](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Types.html), [Common Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonParameters.html), and [Common Errors](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html) in the *Amazon Relational Database Service API Reference*.

### Examples
<a name="chap-sql-server-aurora-mysql.management.scripting.mysql.examples"></a>

The following walkthrough describes how to connect to an Aurora MySQL database instance using the mysql command line utility.

1. Sign in to your AWS account, choose **RDS**, and then choose **Databases**.

1. Choose the MySQL database you want to connect to and copy the cluster endpoint address.
**Note**  
You can also connect to individual database instances. For more information, see [High Availability Essentials](chap-sql-server-aurora-mysql.hadr.essentials.md).

1. In the command shell, enter the following:

   ```
   mysql -h <mysql-instance-endpoint-address> -P 3306 -u MasterUser
   ```

   In the preceding example, the `-h` parameter is the endpoint Domain Name System (DNS) name of the Aurora MySQL DB cluster, and the `-P` parameter is the port number.

1. Provide the password when prompted. The system displays the following (or similar) message.

   ```
   Welcome to the MySQL monitor. Commands end with ; or \g.
   Your MySQL connection id is 350
   Server version: 5.6.27-log MySQL Community Server (GPL)
   Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
   mysql>
   ```

For more information, see [MySQL Product Archives](https://downloads.mysql.com/archives/utilities/), [MySQL Workbench 8.0.29](https://dev.mysql.com/downloads/workbench/), [Command Line Interface](https://docs.aws.amazon.com/cli/latest/reference/), and [Amazon Relational Database Service API Reference](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/Welcome.html).