

# Migrating ANSI SQL features

This topic provides conceptual content comparing various database features and functionalities between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. You can gain valuable insights into the similarities and differences in areas such as object naming conventions, SQL constraints, table creation, Common Table Expressions (CTEs), data type compatibility, derived tables, grouping operations, join operations, temporal tables, views, and window functions. Understanding these concepts is crucial for database administrators and developers planning a migration from SQL Server to Aurora PostgreSQL. By familiarizing yourself with these comparisons, you can anticipate potential challenges, make informed decisions about database migration strategies, and ensure a smooth transition of your database operations and applications to the new PostgreSQL environment.

**Topics**
+ [Case sensitivity differences for ANSI SQL](chap-sql-server-aurora-pg.sql.casesensitivity.md)
+ [Constraints for ANSI SQL](chap-sql-server-aurora-pg.sql.constraints.md)
+ [Creating tables for ANSI SQL](chap-sql-server-aurora-pg.sql.tables.md)
+ [Common table expressions for ANSI SQL](chap-sql-server-aurora-pg.sql.cte.md)
+ [Data types for ANSI SQL](chap-sql-server-aurora-pg.sql.datatypes.md)
+ [Derived tables for ANSI SQL](chap-sql-server-aurora-pg.sql.derivedtables.md)
+ [GROUP BY for ANSI SQL](chap-sql-server-aurora-pg.sql.groupby.md)
+ [Table JOIN for ANSI SQL](chap-sql-server-aurora-pg.sql.tablejoin.md)
+ [Temporal tables for ANSI SQL](chap-sql-server-aurora-pg.sql.temporaltables.md)
+ [Views for ANSI SQL](chap-sql-server-aurora-pg.sql.views.md)
+ [Window functions for ANSI SQL](chap-sql-server-aurora-pg.sql.windowfunctions.md)

# Case sensitivity differences for ANSI SQL


This topic provides reference information on handling object name case sensitivity when migrating from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL. You can use this information to ensure proper naming conventions and avoid potential conflicts during the migration process.

Object name case sensitivity differs between SQL Server and PostgreSQL. By default, SQL Server names are case insensitive. However, you can create a case-sensitive SQL Server database by changing the `COLLATION` property. In PostgreSQL, unquoted object names are case insensitive and are folded to lowercase; names enclosed in double quotation marks are case sensitive.

By default, the AWS Schema Conversion Tool (AWS SCT) uses object names in lowercase for PostgreSQL. If your source code includes objects with identical names in different case, make sure that you keep unique names in your converted code. You can enclose object names in double quotation marks or change the names manually.

In addition to this, you can use AWS Database Migration Service transformation actions to change schema, table, and column names to lowercase. For more information, see [Transformation rules and actions](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.html).

To use an uppercase name, enclose object names with double quotation marks. The following code example shows how to create the `EMPLOYEES` table in uppercase.

```
CREATE TABLE "EMPLOYEES" (
  EMP_ID NUMERIC PRIMARY KEY,
  EMP_FULL_NAME VARCHAR(60) NOT NULL,
  AVG_SALARY NUMERIC NOT NULL);
```

The following PostgreSQL command creates the `employees` table in lowercase.

```
CREATE TABLE EMPLOYEES (
  EMP_ID NUMERIC PRIMARY KEY,
  EMP_FULL_NAME VARCHAR(60) NOT NULL,
  AVG_SALARY NUMERIC NOT NULL);
```

If you don’t use double quotation marks, then PostgreSQL creates objects with lowercase names. To create, query, or manage PostgreSQL database objects with names in uppercase or mixed case, use double quotation marks.
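The same folding rule applies at query time. A minimal sketch, assuming both tables created above exist:

```
-- Folded to lowercase; matches the employees table.
SELECT * FROM EMPLOYEES;

-- Quoted name is used verbatim; matches the uppercase "EMPLOYEES" table.
SELECT * FROM "EMPLOYEES";
```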

# Constraints for ANSI SQL


This topic provides reference information about SQL constraints in both Microsoft SQL Server and Amazon Aurora PostgreSQL. You can understand the similarities and differences in constraint implementation between these two database systems. The topic covers various types of constraints, including check, unique, primary key, and foreign key constraints, as well as cascaded referential actions and indexing requirements.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Five star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-5.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |   [Constraints](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.constraints)   |  The `SET DEFAULT` option is missing. Check constraint with subquery.  | 

## SQL Server Usage


Column and table constraints are defined by the SQL standard and enforce relational data consistency. You can use four types of SQL constraints: check, unique, primary key, and foreign key.

### Check Constraints


Check constraints enforce domain integrity by limiting the data values stored in table columns. They are logical Boolean expressions that evaluate to one of three values: `TRUE`, `FALSE`, or `UNKNOWN`.

```
CHECK (<Logical Expression>)
```

**Note**  
Check constraint expressions behave differently than predicates in other query clauses. For example, in a `WHERE` clause, a logical expression that evaluates to `UNKNOWN` is functionally equivalent to `FALSE` and the row is filtered out. For check constraints, an expression that evaluates to `UNKNOWN` is functionally equivalent to `TRUE` because the value is permitted by the constraint.
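The following hypothetical table sketches this behavior. The `NULL` insert succeeds because the check expression evaluates to `UNKNOWN`, which the constraint permits:

```
CREATE TABLE Orders (
  Quantity INT CHECK (Quantity > 0)
);

INSERT INTO Orders VALUES (10);    -- TRUE: permitted
INSERT INTO Orders VALUES (NULL);  -- UNKNOWN: permitted
INSERT INTO Orders VALUES (-1);    -- FALSE: rejected with a constraint violation
```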

You can assign multiple check constraints to a column. Also, you can apply a single check constraint to multiple columns. In this case, it works as a table-level check constraint.

In ANSI SQL, check constraints can’t access other rows as part of the expression. In SQL Server, you can use user-defined functions in constraints to access other rows, tables, or databases.
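As a sketch of this SQL Server extension (the table and function names here are hypothetical), a scalar user-defined function can consult other rows from within a check constraint:

```
CREATE FUNCTION dbo.EmpProjectCount(@EmpId INT)
RETURNS INT
AS
BEGIN
  -- Counts rows in another part of the table, which a plain
  -- check expression can't reference.
  RETURN (SELECT COUNT(*) FROM Projects WHERE EmpId = @EmpId);
END;
GO

ALTER TABLE Projects
  ADD CONSTRAINT CHK_MaxFiveProjects CHECK (dbo.EmpProjectCount(EmpId) <= 5);
```

Note that this technique isn't portable; in PostgreSQL a similar effect requires a Boolean function or a trigger.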

### Unique Constraints


You can use unique constraints for all candidate keys. A candidate key is an attribute or a set of attributes or columns that uniquely identify each row in the relation (table data).

```
UNIQUE [CLUSTERED | NONCLUSTERED] (<Column List>)
```

Unique constraints guarantee that no rows with duplicate column values exist in a table.

A unique constraint can be simple or composite. Simple constraints are composed of a single column. Composite constraints are composed of multiple columns. A column may be a part of more than one constraint.

According to the ANSI SQL standard, you can have multiple rows with `NULL` values for unique constraints. However, in SQL Server, you can use a `NULL` value only for a single row. You can use a `NOT NULL` constraint in addition to a unique constraint to address this limitation.

To improve efficiency, SQL Server creates a unique index to support each unique constraint. Otherwise, every `INSERT` and `UPDATE` would require a full table scan to verify that the table doesn’t include duplicates. The default index type for unique constraints is non-clustered.

### Primary Key Constraints


A primary key is a candidate key that serves as the unique identifier of a table row. A primary key might consist of one or more columns. All columns that comprise a primary key must also have a `NOT NULL` constraint. A table can have only one primary key.

```
PRIMARY KEY [CLUSTERED | NONCLUSTERED] (<Column List>)
```

The default index type for primary keys is a clustered index.

### Foreign Key Constraints


Foreign key constraints enforce domain referential integrity. Similar to check constraints, foreign keys limit the values stored in a column or set of columns.

```
FOREIGN KEY (<Referencing Column List>)
REFERENCES <Referenced Table>(<Referenced Column List>)
```

Foreign keys reference columns in other tables, which must be either primary keys or have unique constraints. The set of values that you can use for the referencing table is the set of values that exist in the referenced table.

Although the columns referenced in the parent table are indexed because they have either a primary key or unique constraint, no indexes are automatically created for the referencing columns in the child table. A best practice is to create appropriate indexes to support joins and constraint enforcement.

Foreign key constraints impose DML limitations on the referencing child and parent tables. The purpose of a constraint is to guarantee that the referencing table contains no orphan rows, which are rows that have no corresponding matching values in the parent table. The constraint limits `INSERT` and `UPDATE` on the child table and `UPDATE` and `DELETE` on the parent table. For example, you can’t delete an order that has associated order items.

Foreign keys support Cascading Referential Integrity (CRI). You can use CRI to enforce constraints and define action paths for DML statements that violate the constraints. There are four CRI options:
+  **NO ACTION**. When the constraint is violated due to a DML operation, an error is raised and the operation is rolled back.
+  **CASCADE**. Rows in the child table are updated or deleted along with the corresponding rows in the parent table.
+  **SET NULL**. All columns that are part of the foreign key are set to NULL when the parent is deleted or updated.
+  **SET DEFAULT**. All columns that are part of the foreign key are set to their DEFAULT value when the parent is deleted or updated.

You can customize these actions independently of others in the same constraint. For example, a cascading constraint may have `CASCADE` for `UPDATE`, but `NO ACTION` for `DELETE`.

### Examples


Create a composite non-clustered primary key.

```
CREATE TABLE MyTable
(
Col1 INT NOT NULL,
Col2 INT NOT NULL,
Col3 VARCHAR(20) NULL,
CONSTRAINT PK_MyTable
PRIMARY KEY NONCLUSTERED (Col1, Col2)
);
```

Create a table-level check constraint.

```
CREATE TABLE MyTable
(
Col1 INT NOT NULL,
Col2 INT NOT NULL,
Col3 VARCHAR(20) NULL,
CONSTRAINT PK_MyTable
PRIMARY KEY NONCLUSTERED (Col1, Col2),
CONSTRAINT CK_MyTableCol1Col2
CHECK (Col2 >= Col1)
);
```

Create a simple non-null unique constraint.

```
CREATE TABLE MyTable
(
Col1 INT NOT NULL,
Col2 INT NOT NULL,
Col3 VARCHAR(20) NULL,
CONSTRAINT PK_MyTable
PRIMARY KEY NONCLUSTERED (Col1, Col2),
CONSTRAINT UQ_Col2Col3
UNIQUE (Col2, Col3)
);
```

Create a foreign key with multiple cascade actions.

```
CREATE TABLE MyParentTable
(
Col1 INT NOT NULL,
Col2 INT NOT NULL,
Col3 VARCHAR(20) NULL,
CONSTRAINT PK_MyTable
PRIMARY KEY NONCLUSTERED (Col1, Col2)
);
```

```
CREATE TABLE MyChildTable
(
Col1 INT NOT NULL PRIMARY KEY,
Col2 INT NOT NULL,
Col3 INT NOT NULL,
CONSTRAINT FK_MyChildTable_MyParentTable
FOREIGN KEY (Col2, Col3)
REFERENCES MyParentTable (Col1, Col2)
ON DELETE NO ACTION
ON UPDATE CASCADE
);
```

For more information, see [Unique Constraints and Check Constraints](https://docs.microsoft.com/en-us/sql/relational-databases/tables/unique-constraints-and-check-constraints?view=sql-server-ver15) and [Primary and Foreign Key Constraints](https://docs.microsoft.com/en-us/sql/relational-databases/tables/primary-and-foreign-key-constraints?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL supports the following types of table constraints:
+  `PRIMARY KEY`.
+  `FOREIGN KEY`.
+  `UNIQUE`.
+  `NOT NULL`.
+  `EXCLUDE` (unique to PostgreSQL).

Similar to constraint declaration in SQL Server, you can create constraints inline or out-of-line when you specify table columns in PostgreSQL.

You can specify PostgreSQL constraints using `CREATE TABLE` or `ALTER TABLE`. Constraints on views aren’t supported.

Make sure that you have the `CREATE` and `ALTER` privileges on the table for which you create constraints. For foreign key constraints, make sure that you have the `REFERENCES` privilege.

### Primary Key Constraints

+ Uniquely identify each row and can’t contain NULL values.
+ Use the same ANSI SQL syntax as SQL Server.
+ You can create primary key constraints on a single column or on multiple columns (composite primary keys) as the only primary key in a table.
+ Creating a primary key constraint automatically creates a unique B-Tree index on the column or group of columns marked as the primary key of the table.
+ Constraint names can be generated automatically by PostgreSQL or specified explicitly during constraint creation.

Create an inline primary key constraint with a system-generated constraint name.

```
CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID NUMERIC PRIMARY KEY,
  FIRST_NAME VARCHAR(20),
  LAST_NAME VARCHAR(25),
  EMAIL VARCHAR(25));
```

Create an inline primary key constraint with a user-specified constraint name.

```
CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID NUMERIC CONSTRAINT PK_EMP_ID PRIMARY KEY,
  FIRST_NAME VARCHAR(20),
  LAST_NAME VARCHAR(25),
  EMAIL VARCHAR(25));
```

Create an out-of-line primary key constraint.

```
CREATE TABLE EMPLOYEES(
  EMPLOYEE_ID NUMERIC,
  FIRST_NAME VARCHAR(20),
  LAST_NAME VARCHAR(25),
  EMAIL VARCHAR(25),
  CONSTRAINT PK_EMP_ID PRIMARY KEY (EMPLOYEE_ID));
```

Add a primary key constraint to an existing table.

```
ALTER TABLE SYSTEM_EVENTS
  ADD CONSTRAINT PK_EMP_ID PRIMARY KEY (EVENT_CODE, EVENT_TIME);
```

Drop the primary key.

```
ALTER TABLE SYSTEM_EVENTS DROP CONSTRAINT PK_EMP_ID;
```

### Foreign Key Constraints

+ Enforce referential integrity in the database. Values in specific columns or a group of columns must match the values from another table or column.
+ Creating a foreign key constraint in PostgreSQL uses the same ANSI SQL syntax as SQL Server.
+ You can create foreign key constraints in-line or out-of-line during table creation.
+ Use the `REFERENCES` clause to specify the table referenced by the foreign key constraint.
+ When specifying `REFERENCES` in the absence of a column list in the referenced table, the primary key of the referenced table is used as the referenced column or columns.
+ A table can have multiple foreign key constraints.
+ Use the `ON DELETE` clause to handle deletions of foreign key parent records, such as cascading deletes.
+ Foreign key constraint names are generated automatically by the database or specified explicitly during constraint creation.

### ON DELETE Clause


PostgreSQL provides three main options for handling cases where data is deleted from a parent table that a child table references through a `FOREIGN KEY` constraint. By default, without any additional options, PostgreSQL uses the `NO ACTION` method and raises an error if the referencing rows still exist when the constraint is verified.
+  `ON DELETE CASCADE`. Any dependent foreign key values in the child table are removed along with the referenced values from the parent table.
+  `ON DELETE RESTRICT`. Prevents the deletion of referenced values from the parent table and the deletion of dependent foreign key values in the child table.
+  `ON DELETE NO ACTION`. Performs no action (the default). The fundamental difference between `RESTRICT` and `NO ACTION` is that `NO ACTION` allows the check to be postponed until later in the transaction; `RESTRICT` doesn’t.
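The deferral difference matters inside a transaction. As a sketch (assuming a `DEPARTMENTS` parent table referenced by a foreign key that was created as `DEFERRABLE`), `NO ACTION` lets a parent row be deleted and re-inserted before the check runs, whereas `RESTRICT` would reject the `DELETE` statement immediately:

```
BEGIN;
SET CONSTRAINTS ALL DEFERRED;

-- Temporarily orphans any referencing child rows.
DELETE FROM DEPARTMENTS WHERE DEPARTMENT_ID = 10;
INSERT INTO DEPARTMENTS (DEPARTMENT_ID) VALUES (10);

COMMIT;  -- the deferred NO ACTION check runs here and succeeds
```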

### ON UPDATE Clause


Handling updates on `FOREIGN KEY` columns is also available using the `ON UPDATE` clause, which shares the same options as the `ON DELETE` clause:
+  `ON UPDATE CASCADE`.
+  `ON UPDATE RESTRICT`.
+  `ON UPDATE NO ACTION`.

Create an inline foreign key with a user-specified constraint name.

```
CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID NUMERIC PRIMARY KEY,
  FIRST_NAME VARCHAR(20),
  LAST_NAME VARCHAR(25),
  EMAIL VARCHAR(25),
  DEPARTMENT_ID NUMERIC REFERENCES DEPARTMENTS(DEPARTMENT_ID));
```

Create an out-of-line foreign key constraint with a system-generated constraint name.

```
CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID NUMERIC PRIMARY KEY,
  FIRST_NAME VARCHAR(20),
  LAST_NAME VARCHAR(25),
  EMAIL VARCHAR(25),
  DEPARTMENT_ID NUMERIC,
  CONSTRAINT FK_FEP_ID
  FOREIGN KEY(DEPARTMENT_ID) REFERENCES DEPARTMENTS(DEPARTMENT_ID));
```

Create a foreign key using the `ON DELETE CASCADE` clause.

```
CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID NUMERIC PRIMARY KEY,
  FIRST_NAME VARCHAR(20),
  LAST_NAME VARCHAR(25),
  EMAIL VARCHAR(25),
  DEPARTMENT_ID NUMERIC,
  CONSTRAINT FK_FEP_ID
  FOREIGN KEY(DEPARTMENT_ID) REFERENCES DEPARTMENTS(DEPARTMENT_ID)
  ON DELETE CASCADE);
```

Add a foreign key to an existing table.

```
ALTER TABLE EMPLOYEES ADD CONSTRAINT FK_DEPT
  FOREIGN KEY (department_id)
  REFERENCES DEPARTMENTS (department_id) NOT VALID;

ALTER TABLE EMPLOYEES VALIDATE CONSTRAINT FK_DEPT;
```

### UNIQUE Constraints

+ Ensure that values in a column, or a group of columns, are unique across the entire table.
+ PostgreSQL unique constraint syntax is ANSI SQL compatible.
+ Creating a `UNIQUE` constraint automatically creates a B-Tree index on the respective column or group of columns.
+ If duplicate values already exist in the column for which you create the unique constraint, the operation fails and returns an error message.
+ Unique constraints in PostgreSQL accept multiple NULL values. This behavior differs from SQL Server, which permits only a single NULL value.
+ You can use system-generated or explicitly specified naming for unique constraints.

Create an inline unique constraint ensuring uniqueness of values in the email column.

```
CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID NUMERIC PRIMARY KEY,
  FIRST_NAME VARCHAR(20),
  LAST_NAME VARCHAR(25),
  EMAIL VARCHAR(25) CONSTRAINT UNIQ_EMP_EMAIL UNIQUE,
  DEPARTMENT_ID NUMERIC);
```
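Given the table above, PostgreSQL accepts multiple rows with a `NULL` email while still rejecting duplicate non-NULL values:

```
INSERT INTO EMPLOYEES (EMPLOYEE_ID, EMAIL) VALUES (1, NULL);
INSERT INTO EMPLOYEES (EMPLOYEE_ID, EMAIL) VALUES (2, NULL);             -- succeeds
INSERT INTO EMPLOYEES (EMPLOYEE_ID, EMAIL) VALUES (3, 'a@example.com');
INSERT INTO EMPLOYEES (EMPLOYEE_ID, EMAIL) VALUES (4, 'a@example.com');  -- fails: duplicate value
```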

### CHECK Constraints

+ Enforce that values in a column satisfy a specific requirement.
+ Check constraints in PostgreSQL use the same ANSI SQL syntax as SQL Server.
+ Must be defined as a Boolean expression that evaluates the values of a column.
+ Check constraints naming can be system-generated or explicitly specified by the user during constraint creation.

Because check constraint expressions must be simple Boolean expressions, you can’t use subqueries in a check constraint. As a workaround, you can create a Boolean function that checks the query results and returns `TRUE` or `FALSE` accordingly.
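A sketch of this workaround, using hypothetical `EMPLOYEES` and `DEPARTMENTS` tables. Note that, unlike a true referential constraint, the check runs only when the `EMPLOYEES` row is written:

```
-- Wraps the subquery in a Boolean function.
CREATE OR REPLACE FUNCTION dept_exists(dept NUMERIC)
RETURNS BOOLEAN AS $$
  SELECT EXISTS (SELECT 1 FROM DEPARTMENTS WHERE DEPARTMENT_ID = dept);
$$ LANGUAGE SQL;

ALTER TABLE EMPLOYEES
  ADD CONSTRAINT CHK_DEPT_EXISTS CHECK (dept_exists(DEPARTMENT_ID));
```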

### NOT NULL Constraints

+ Enforce that a column can’t accept NULL values. This behavior is different from the default column behavior in PostgreSQL where columns can accept NULL values.
+  `NOT NULL` constraints can only be defined inline during table creation.
+ You can explicitly specify names for `NOT NULL` constraints when used with a `CHECK` constraint.

Define two not null constraints on the `FIRST_NAME` and `LAST_NAME` columns. Define a check constraint with an explicitly user-specified name to enforce not null behavior on the `EMAIL` column.

```
CREATE TABLE EMPLOYEES (
  EMPLOYEE_ID NUMERIC PRIMARY KEY,
  FIRST_NAME VARCHAR(20) NOT NULL,
  LAST_NAME VARCHAR(25) NOT NULL,
  EMAIL VARCHAR(25) CONSTRAINT CHK_EMAIL
    CHECK(EMAIL IS NOT NULL));
```

### SET Constraints Syntax


```
SET CONSTRAINTS { ALL | name [, ...] } { DEFERRED | IMMEDIATE }
```

PostgreSQL provides controls for certain aspects of constraint behavior:
+  `DEFERRABLE` or `NOT DEFERRABLE`. Determines whether you can use the `SET CONSTRAINTS` statement to control constraint checking. You can define constraints as:
  +  `DEFERRABLE`. Allows you to use the `SET CONSTRAINTS` statement to defer constraint checking within the current transaction until transaction commit.
  +  `IMMEDIATE`. Constraints are enforced at the end of each statement. Note that each constraint has its own `IMMEDIATE` or `DEFERRED` mode.
  +  `NOT DEFERRABLE`. The constraint always runs as `IMMEDIATE` and isn’t affected by the `SET CONSTRAINTS` command.
+  `VALIDATE CONSTRAINT` or `NOT VALID`.
  +  `VALIDATE CONSTRAINT`. Validates foreign key or check constraints that were previously created as `NOT VALID`. This action performs a validation check by scanning the table to ensure that all records satisfy the constraint definition.
  +  `NOT VALID`. You can use this option only for foreign key or check constraints. When specified, existing records aren’t validated when the constraint is created. The constraint is enforced on all records only after `VALIDATE CONSTRAINT` is applied.
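A sketch combining these controls; the table and constraint names are hypothetical:

```
ALTER TABLE EMPLOYEES
  ADD CONSTRAINT FK_DEPT
  FOREIGN KEY (DEPARTMENT_ID) REFERENCES DEPARTMENTS (DEPARTMENT_ID)
  DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS FK_DEPT DEFERRED;
-- Statements here can temporarily violate FK_DEPT.
COMMIT;  -- FK_DEPT is checked at commit time
```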

### Using Existing Indexes During Constraint Creation


PostgreSQL can create a new primary key or unique constraint based on an existing unique index. PostgreSQL includes all index columns in the constraint. When you create constraints using this method, the index is owned by the constraint. If you delete the constraint, then PostgreSQL deletes the index.

Use an existing unique index to create a primary key constraint.

```
CREATE UNIQUE INDEX IDX_EMP_ID ON EMPLOYEES(EMPLOYEE_ID);

ALTER TABLE EMPLOYEES
  ADD CONSTRAINT PK_CON_UNIQ PRIMARY KEY USING INDEX IDX_EMP_ID;
```

## Summary


The following table identifies similarities, differences, and key migration considerations.


| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  Check constraints  |  CHECK  |  CHECK  | 
|  Unique constraints  |  UNIQUE  |  UNIQUE  | 
|  Primary key constraints  |  PRIMARY KEY  |  PRIMARY KEY  | 
|  Foreign key constraints  |  FOREIGN KEY  |  FOREIGN KEY  | 
|  Cascaded referential actions  |  NO ACTION, CASCADE, SET NULL, SET DEFAULT  |  RESTRICT, CASCADE, SET NULL, NO ACTION  | 
|  Indexing of referencing columns  |  Not required  |  N/A  | 
|  Indexing of referenced columns  |  PRIMARY KEY or UNIQUE  |  PRIMARY KEY or UNIQUE  | 

For more information, see [Constraints](https://www.postgresql.org/docs/13/ddl-constraints.html), [SET CONSTRAINTS](https://www.postgresql.org/docs/13/sql-set-constraints.html), and [ALTER TABLE](https://www.postgresql.org/docs/13/sql-altertable.html) in the *PostgreSQL documentation*.

# Creating tables for ANSI SQL


This topic provides reference information comparing the creation of tables in Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. You can understand the similarities and differences in table creation syntax, features, and capabilities between these two database systems. The topic highlights key aspects such as table and column naming, data types, constraints, and auto-generated values.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |   [Creating Tables](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.tables)   |  Auto generated value column is different. Can’t use physical attribute `ON`. Missing table variable and memory-optimized table.  | 

## SQL Server Usage


### ANSI Syntax Conformity


You can create tables in SQL Server using the `CREATE TABLE` statement and conform to the ANSI/ISO entry level standard. The basic features of `CREATE TABLE` are similar for most relational database management engines and are well defined in the ANSI/ISO standards.

In its most basic form, the `CREATE TABLE` statement in SQL Server is used to define:
+ Table names, the containing security schema, and database.
+ Column names.
+ Column data types.
+ Column and table constraints.
+ Column default values.
+ Primary, candidate (UNIQUE), and foreign keys.

### T-SQL Extensions


SQL Server extends the basic syntax and provides many additional options for the `CREATE TABLE` or `ALTER TABLE` statements. The most often used options are:
+ Supporting index types for primary keys and unique constraints, clustered or non-clustered, and index properties such as `FILLFACTOR`.
+ Physical table data storage containers using the `ON <File Group>` clause.
+ Defining `IDENTITY` auto-enumerator columns.
+ Encryption.
+ Compression.
+ Indexes.

For more information, see [Data Types](chap-sql-server-aurora-pg.sql.datatypes.md), [Column Encryption](chap-sql-server-aurora-pg.security.columnencryption.md), and [Databases and Schemas](chap-sql-server-aurora-pg.tsql.schemas.md).

### Table Scope


SQL Server provides five scopes for tables:
+ Standard tables are created on disk, globally visible, and persist through connection resets and server restarts.
+ Temporary tables are designated with the "#" prefix. They are persisted in TempDB and are visible to the run scope and any sub-scopes where they were created. Temporary tables are cleaned up by the server when the run scope terminates and when the server restarts.
+ Global temporary tables are designated by the "##" prefix. They are similar in scope to temporary tables, but are also visible to concurrent scopes.
+ Table variables are defined with the `DECLARE` statement, not with `CREATE TABLE`. They are visible only to the run scope where they were created.
+ Memory-optimized tables are special types of tables used by the In-Memory Online Transaction Processing (OLTP) engine. They use a non-standard `CREATE TABLE` syntax.
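A brief T-SQL sketch of the first four scopes (memory-optimized tables require additional filegroup setup and are omitted here):

```
CREATE TABLE MyTable (Col1 INT);          -- standard table
CREATE TABLE #MyTempTable (Col1 INT);     -- temporary table, visible to this scope
CREATE TABLE ##MyGlobalTemp (Col1 INT);   -- global temporary table, visible to concurrent scopes
DECLARE @MyTableVar TABLE (Col1 INT);     -- table variable, visible to this run scope only
```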

### Creating a Table Based on an Existing Table or Query


In SQL Server, you can create new tables based on `SELECT` queries as an alternate to the `CREATE TABLE` statement. You can use a `SELECT` statement that returns a valid set with unique column names to create a new table and populate data.

 `SELECT INTO` is a combination of DML and DDL. The simplified syntax for `SELECT INTO` is shown following.

```
SELECT <Expression List>
INTO <Table Name>
[FROM <Table Source>]
[WHERE <Filter>]
[GROUP BY <Grouping Expressions>...];
```

When you create a new table using `SELECT INTO`, the only attributes created for the new table are column names, column order, and the data types of the expressions. Even a straightforward statement such as `SELECT * INTO <New Table> FROM <Source Table>` doesn’t copy constraints, keys, indexes, the identity property, default values, or any other related objects.
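For example, the following hypothetical statement creates and populates a new table from matching rows, copying column definitions only:

```
SELECT Col1, Col2
INTO MyTableCopy
FROM MyTable
WHERE Col1 > 10;
```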

### TIMESTAMP Syntax for ROWVERSION Deprecated Syntax


The `TIMESTAMP` syntax synonym for `ROWVERSION` has been deprecated as of SQL Server 2008R2 in accordance with [Deprecated Database Engine Features in SQL Server 2008 R2](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms143729(v=sql.105)).

Previously, you could use either the `TIMESTAMP` or the `ROWVERSION` keywords to denote a special data type that exposes an auto-enumerator. The auto-enumerator generates unique eight-byte binary numbers typically used to version-stamp table rows. Clients read the row, process it, and check the `ROWVERSION` value against the current row in the table before modifying it. If they are different, the row has been modified since the client read it. The client can then apply different processing logic.

Note that when migrating to Aurora PostgreSQL using the AWS Schema Conversion Tool, neither `ROWVERSION` nor `TIMESTAMP` is supported. You must add custom logic, potentially in the form of a trigger, to maintain this functionality.

### Syntax


Simplified syntax for `CREATE TABLE` is shown following.

```
CREATE TABLE [<Database Name>.<Schema Name>.]<Table Name> (<Column Definitions>)
  [ON {<Partition Scheme Name> (<Partition Column Name>) | <File Group>}];
```

```
<Column Definition>:
<Column Name> <Data Type>
[CONSTRAINT <Column Constraint>
[DEFAULT <Default Value>]]
[IDENTITY [(<Seed Value>, <Increment Value>)]
[NULL | NOT NULL]
[ENCRYPTED WITH (<Encryption Specifications>)
[<Column Constraints>]
[<Column Index Specifications>]
```

```
<Column Constraint>:
[CONSTRAINT <Constraint Name>]
{{PRIMARY KEY | UNIQUE} [CLUSTERED | NONCLUSTERED]
[WITH FILLFACTOR = <Fill Factor>]
| [FOREIGN KEY]
REFERENCES <Referenced Table> (<Referenced Columns>)]
```

```
<Column Index Specifications>:
INDEX <Index Name> [CLUSTERED | NONCLUSTERED]
[WITH(<Index Options>]
```

### Examples


Create a basic table.

```
CREATE TABLE MyTable
(
Col1 INT NOT NULL PRIMARY KEY,
Col2 VARCHAR(20) NOT NULL
);
```

Create a table with column constraints and an identity.

```
CREATE TABLE MyTable
(
Col1 INT NOT NULL PRIMARY KEY IDENTITY (1,1),
Col2 VARCHAR(20) NOT NULL CHECK (Col2 <> ''),
Col3 VARCHAR(100) NULL
REFERENCES MyOtherTable (Col3)
);
```

Create a table with an additional index.

```
CREATE TABLE MyTable
(
Col1 INT NOT NULL PRIMARY KEY,
Col2 VARCHAR(20) NOT NULL
INDEX IDX_Col2 NONCLUSTERED
);
```

For more information, see [CREATE TABLE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


Like SQL Server, Aurora PostgreSQL provides ANSI/ISO entry-level syntax conformity for `CREATE TABLE` and custom extensions to support Aurora PostgreSQL specific functionality.

In its most basic form, and very similar to SQL Server, the `CREATE TABLE` statement in Aurora PostgreSQL is used to define:
+ Table names containing security schema and/or database.
+ Column names.
+ Column data types.
+ Column and table constraints.
+ Column default values.
+ Primary, candidate (UNIQUE), and foreign keys.

PostgreSQL 12 added support for generated columns. A generated column is computed from the values of other columns. PostgreSQL supports stored generated columns, which are computed when the row is written and stored on disk like regular columns.

```
CREATE TABLE tst_gen(
  n NUMERIC,
  n_gen NUMERIC GENERATED ALWAYS AS (n*0.01) STORED
);
```
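Assuming the `tst_gen` table above, inserting into the base column produces the generated value automatically; you can't insert into the generated column directly:

```
INSERT INTO tst_gen (n) VALUES (100);

SELECT n, n_gen FROM tst_gen;  -- n_gen is n * 0.01
```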

### Aurora PostgreSQL Extensions


 Aurora PostgreSQL extends the basic syntax and allows many additional options to be defined as part of the `CREATE TABLE` or `ALTER TABLE` statements. The most often used option is in-line index definition.

### Table Scope


 Aurora PostgreSQL provides two table scopes:
+ Standard tables are created on disk, visible globally, and persist through connection resets and server restarts.
+ Temporary tables are created using the `CREATE TEMPORARY TABLE` statement. A temporary table is visible only to the session that creates it and is dropped automatically when the session is closed.
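A minimal sketch; the optional `ON COMMIT` clause controls row lifetime within the session:

```
CREATE TEMPORARY TABLE session_scratch (
  id INT,
  payload VARCHAR(100)
) ON COMMIT PRESERVE ROWS;  -- the default: rows persist until the session ends
```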

### Creating a Table Based on an Existing Table or Query


 Aurora PostgreSQL provides two ways to create standard or temporary tables based on existing tables and queries: `CREATE TABLE <New Table> (LIKE <Source Table>)` and `CREATE TABLE ... AS <Query Expression>`.

 `CREATE TABLE <New Table> (LIKE <Source Table> INCLUDING ALL)` creates an empty table based on the definition of another table, including any column attributes and indexes defined in the original table.

 `CREATE TABLE ... AS <Query Expression>` is very similar to `SELECT INTO` in SQL Server. You can use this query to create a new table and populate data in a single step.
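A hypothetical example of the `LIKE` form, copying column definitions, defaults, constraints, and indexes into an empty table:

```
CREATE TABLE EmployeesArchive (LIKE Employees INCLUDING ALL);
```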

The following code example creates a new empty table based on the definition of the SourceTable table.

```
CREATE TABLE SourceTable(Col1 INT);

INSERT INTO SourceTable VALUES (1);

CREATE TABLE NewTable AS SELECT Col1 AS Col2 FROM SourceTable;

INSERT INTO NewTable (Col2) VALUES (2);

SELECT * FROM NewTable;
Col2
1
2
```

### Converting TIMESTAMP and ROWVERSION Columns


The following code example shows how SQL Server provides an automatic mechanism for stamping row versions for application concurrency control.

```
CREATE TABLE WorkItems
(
WorkItemID INT IDENTITY(1,1) PRIMARY KEY,
WorkItemDescription XML NOT NULL,
Status VARCHAR(10) NOT NULL DEFAULT ('Pending'),
-- other columns...
VersionNumber ROWVERSION
);
```

The VersionNumber column automatically updates when a row is modified. The actual value is meaningless. Just the fact that it changed is what indicates a row modification. The client can now read a work item row, process it, and ensure no other clients updated the row before updating the status.

```
SELECT @WorkItemDescription = WorkItemDescription,
@Status = Status,
@VersionNumber = VersionNumber
FROM WorkItems
WHERE WorkItemID = @WorkItemID;

EXECUTE ProcessWorkItem @WorkItemID, @WorkItemDescription, @Status OUTPUT;

IF (
  SELECT VersionNumber
  FROM WorkItems
  WHERE WorkItemID = @WorkItemID
  ) = @VersionNumber
EXECUTE UpdateWorkItems @WorkItemID, 'Completed'; -- Success
ELSE
EXECUTE ConcurrencyExceptionWorkItem; -- Row updated while processing
```

In Aurora PostgreSQL, you can add a trigger to maintain the updated stamp for each row.

```
CREATE OR REPLACE FUNCTION IncByOne()
  RETURNS TRIGGER
  AS $$
  BEGIN
    -- Increment the version stamp on the row being updated
    NEW.VersionNumber := OLD.VersionNumber + 1;
    RETURN NEW;
  END; $$
  LANGUAGE PLPGSQL;

CREATE TRIGGER MaintainWorkItemVersionNumber
  BEFORE UPDATE ON WorkItems
  FOR EACH ROW
  EXECUTE PROCEDURE IncByOne();
```

For more information, see [Triggers](chap-sql-server-aurora-pg.tsql.triggers.md).

### Syntax


```
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ]
table_name ( [
{ column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
| table_constraint
| LIKE source_table [ like_option ... ] }
[, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation
] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]
```

```
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ]
table_name
OF type_name [ (
{ column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ]
| table_constraint }
[, ... ]
) ]
[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation
] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]
```

```
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ]
table_name
PARTITION OF parent_table [ (
{ column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ]
| table_constraint }
[, ... ]
) ] FOR VALUES partition_bound_spec
[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation
] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]
```

The `column_constraint` is:

```
[ CONSTRAINT constraint_name ]
{ NOT NULL |
NULL |
CHECK ( expression ) [ NO INHERIT ] |
DEFAULT default_expr |
GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] |
UNIQUE index_parameters |
PRIMARY KEY index_parameters |
REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
[ ON DELETE action ] [ ON UPDATE action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
```

The `table_constraint` is:

```
[ CONSTRAINT constraint_name ]
{ CHECK ( expression ) [ NO INHERIT ] |
UNIQUE ( column_name [, ... ] ) index_parameters |
PRIMARY KEY ( column_name [, ... ] ) index_parameters |
EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters
[ WHERE ( predicate ) ] |
FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ]
[ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE
action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
```

The `like_option` is:

```
{ INCLUDING | EXCLUDING } { COMMENTS | CONSTRAINTS | DEFAULTS | IDENTITY |
INDEXES | STATISTICS | STORAGE | ALL }
```

The `partition_bound_spec` is:

```
IN ( { numeric_literal | string_literal | TRUE | FALSE | NULL } [, ...] ) |
FROM ( { numeric_literal | string_literal | TRUE | FALSE | MINVALUE | MAXVALUE } [,
...] )
TO ( { numeric_literal | string_literal | TRUE | FALSE | MINVALUE | MAXVALUE } [,
...] )
```

The `index_parameters` in `UNIQUE`, `PRIMARY KEY`, and `EXCLUDE` constraints are:

```
[ WITH ( storage_parameter [= value] [, ... ] ) ]
[ USING INDEX TABLESPACE tablespace_name ]
```

The `exclude_element` in an `EXCLUDE` constraint is:

```
{ column_name | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ]
```

### Examples


Create a basic table.

```
CREATE TABLE MyTable
(
Col1 INT PRIMARY KEY,
Col2 VARCHAR(20) NOT NULL
);
```

Create a table with column constraints.

```
CREATE TABLE MyTable
(
Col1 INT PRIMARY KEY,
Col2 VARCHAR(20) NOT NULL
  CHECK (Col2 <> ''),
Col3 VARCHAR(100) NULL
  REFERENCES MyOtherTable (Col3)
);
```

## Summary



| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  ANSI compliance  |  Entry level  |  Entry level  | 
|  Auto generated enumerator  |   `IDENTITY`   |   `SERIAL`   | 
|  Reseed auto generated value  |   `DBCC CHECKIDENT`   |  N/A  | 
|  Index types  |   `CLUSTERED` or `NONCLUSTERED`   |  See [Indexes](chap-sql-server-aurora-pg.indexes.md).  | 
|  Physical storage location  |   `ON <File Group>`   |  Not supported  | 
|  Temporary tables  |   `#TempTable`   |   `CREATE TEMPORARY TABLE`   | 
|  Global temporary tables  |   `##GlobalTempTable`   |  Not supported; the `GLOBAL` keyword is accepted but has no effect  | 
|  Table variables  |   `DECLARE @Table`   |  Not supported  | 
|  Create table as query  |   `SELECT ... INTO`   |   `CREATE TABLE ... AS`   | 
|  Copy table structure  |  Not supported  |   `CREATE TABLE ... (LIKE ...)`   | 
|  Memory-optimized tables  |  Supported  |  N/A  | 
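Although the summary lists reseeding as not available, a `SERIAL` column in PostgreSQL is backed by a sequence that can be restarted. A hedged sketch, assuming a table named MyTable with a serial column Col1 (the default sequence name `mytable_col1_seq` is an assumption that holds for unquoted identifiers):

```
-- Restart the backing sequence so the next generated value is 100
ALTER SEQUENCE mytable_col1_seq RESTART WITH 100;

-- Alternatively, look up the sequence name dynamically
SELECT setval(pg_get_serial_sequence('mytable', 'col1'), 100, false);
```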

For more information, see [CREATE TABLE](https://www.postgresql.org/docs/13/sql-createtable.html) in the *PostgreSQL documentation*.

# Common table expressions for ANSI SQL


This topic provides reference information about Common Table Expressions (CTEs) in both SQL Server and PostgreSQL. It explains that CTEs are part of the ANSI SQL standard and are used to simplify queries and improve readability by defining temporary views or derived tables. The topic highlights the similarities between SQL Server and PostgreSQL implementations of CTEs, including their support for recursive functionality.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Five star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-5.png)   |   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-5.png)   |  N/A  |  Use `RECURSIVE` keyword for recursive CTE queries.  | 

## SQL Server Usage


Common Table Expressions (CTEs) have been part of the ANSI SQL standard since SQL:1999. They simplify queries and make them more readable by defining a temporary view, or derived table, that a subsequent query can reference. You can use SQL Server CTEs as the target of DML modification statements. They have similar restrictions as updateable views.
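As a sketch of using a CTE as the target of a DML statement in SQL Server (table and column names are illustrative):

```
-- Update through a CTE; the modification applies to the underlying table
WITH RecentOrders AS
(
  SELECT OrderID, Status
  FROM Orders
  WHERE OrderDate >= '20180501'
)
UPDATE RecentOrders SET Status = 'Reviewed';
```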

SQL Server CTEs provide recursive functionality in accordance with the SQL:1999 standard. Recursive CTEs can reference themselves and rerun queries until the data set is exhausted or the maximum number of iterations is exceeded.

### CTE Syntax


```
WITH <CTE NAME>
AS
(
SELECT ....
)
SELECT ...
FROM CTE
```

### Recursive CTE Syntax


```
WITH <CTE NAME>
AS (
<Anchor SELECT query>
UNION ALL
<Recursive SELECT query with reference to <CTE NAME>>
)
SELECT ... FROM <CTE NAME>...
```

### Examples


Create and populate an OrderItems table.

```
CREATE TABLE OrderItems
(
OrderID INT NOT NULL,
Item VARCHAR(20) NOT NULL,
Quantity SMALLINT NOT NULL,
PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200),
(3, 'M6 Washer', 100);
```

Define a CTE to calculate the total quantity in every order and then join to the OrderItems table to obtain the relative quantity for each item.

```
WITH AggregatedOrders
AS
( SELECT OrderID, SUM(Quantity) AS TotalQty
FROM OrderItems
GROUP BY OrderID
)
SELECT O.OrderID, O.Item,
O.Quantity,
(O.Quantity / AO.TotalQty) * 100 AS PercentOfOrder
FROM OrderItems AS O
INNER JOIN
AggregatedOrders AS AO
ON O.OrderID = AO.OrderID;
```

The preceding example produces the following results.

```
OrderID  Item       Quantity  PercentOfOrder
1        M8 Bolt    100       100.0000000000
2        M8 Nut     100       100.0000000000
3        M8 Washer  200       66.6666666600
3        M6 Washer  100       33.3333333300
```

Create and populate an `Employees` table with the `DirectManager` for each employee. This table is used in the following recursive CTE example.

```
CREATE TABLE Employees
(
Employee VARCHAR(5) NOT NULL PRIMARY KEY,
DirectManager VARCHAR(5) NULL
);
```

```
INSERT INTO Employees(Employee, DirectManager)
VALUES
('John', 'Dave'),
('Jose', 'Dave'),
('Fred', 'John'),
('Dave', NULL);
```

Use a recursive CTE to display the employee-management hierarchy.

```
WITH EmpHierarchyCTE AS
(
  -- Anchor query retrieves the top manager
  SELECT 0 AS LVL,
    Employee,
    DirectManager
  FROM Employees AS E
  WHERE DirectManager IS NULL
  UNION ALL
-- Recursive query gets all Employees managed by the previous level
SELECT LVL + 1 AS LVL,
  E.Employee,
  E.DirectManager
FROM EmpHierarchyCTE AS EH
INNER JOIN
Employees AS E
ON E.DirectManager = EH.Employee
)
SELECT *
FROM EmpHierarchyCTE;
```

The preceding example produces the following results.

```
LVL  Employee  DirectManager
0    Dave      NULL
1    John      Dave
1    Jose      Dave
2    Fred      John
```

For more information, see [Recursive Queries Using Common Table Expressions](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms186243(v=sql.105)?redirectedfrom=MSDN) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL conforms to the ANSI SQL:1999 standard, and implementing CTEs in PostgreSQL is similar to SQL Server.

A CTE is also known as a `WITH` query. This type of query helps you simplify long queries; it is similar to defining temporary tables that exist only for the duration of the query. The statement in a `WITH` clause can be a `SELECT`, `INSERT`, `UPDATE`, or `DELETE`, and the `WITH` clause itself is attached to a primary statement that can also be a `SELECT`, `INSERT`, `UPDATE`, or `DELETE`.
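For example, a data-modifying `WITH` query can move rows between tables in a single statement. A minimal sketch, assuming OrderItems and OrderItemsArchive tables with matching columns:

```
-- Delete rows and insert them into an archive table in one statement
WITH MovedRows AS (
  DELETE FROM OrderItems
  WHERE OrderID = 1
  RETURNING *
)
INSERT INTO OrderItemsArchive
SELECT * FROM MovedRows;
```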

### CTE Syntax


```
WITH <CTE NAME>
AS
(
<SELECT or DML statement>
)
<SELECT or DML statement>
```

### Recursive CTE Syntax


```
WITH RECURSIVE <CTE NAME>
AS (
<Anchor SELECT query>
UNION ALL
<Recursive SELECT query with reference to <CTE NAME>>
)
SELECT OR DML
```

### Examples


Create and populate an OrderItems table.

```
CREATE TABLE OrderItems
(
OrderID INT NOT NULL,
Item VARCHAR(20) NOT NULL,
Quantity SMALLINT NOT NULL,
PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200),
(3, 'M6 Washer', 100);
```

Create a CTE. The following example counts employees per department; it assumes an EMPLOYEES table with DEPARTMENT_ID, FIRST_NAME, and LAST_NAME columns.

```
WITH DEPT_COUNT
  (DEPARTMENT_ID, DEPT_COUNT) AS (
    SELECT DEPARTMENT_ID, COUNT(*) FROM EMPLOYEES GROUP BY DEPARTMENT_ID)
    SELECT E.FIRST_NAME ||' '|| E.LAST_NAME AS EMP_NAME,
    D.DEPT_COUNT AS EMP_DEPT_COUNT
    FROM EMPLOYEES E JOIN DEPT_COUNT D USING (DEPARTMENT_ID) ORDER BY 2;
```

PostgreSQL provides the `RECURSIVE` modifier for `WITH` queries. The following example uses a recursive `WITH` query to reference its own result set.

```
WITH RECURSIVE t(n) AS (
  VALUES (0)
  UNION ALL
  SELECT n+1 FROM t WHERE n < 5)
SELECT * FROM t;

n
0
...
5
```

Note that running the SQL Server recursive CTE example as-is, without the `RECURSIVE` keyword, fails in PostgreSQL.

Define a CTE to calculate the total quantity in every order and then join to the OrderItems table to obtain the relative quantity for each item.

```
WITH AggregatedOrders
AS
( SELECT OrderID, SUM(Quantity) AS TotalQty
  FROM OrderItems
  GROUP BY OrderID
)
SELECT O.OrderID, O.Item,
  O.Quantity,
    (O.Quantity / AO.TotalQty) * 100 AS PercentOfOrder
FROM OrderItems AS O
  INNER JOIN
  AggregatedOrders AS AO
  ON O.OrderID = AO.OrderID;
```

The preceding example produces the following results.

```
OrderID  Item       Quantity  PercentOfOrder
1        M8 Bolt    100       100
2        M8 Nut     100       100
3        M8 Washer  200       0
3        M6 Washer  100       0
```

This happens because dividing an `INT` by an `INT` in PostgreSQL performs integer division and truncates the result. If you use another data type, such as `DECIMAL`, the division returns a fractional result. To fix the current issue, cast the columns using `::decimal`.

```
WITH AggregatedOrders
AS
( SELECT OrderID, SUM(Quantity) AS TotalQty
  FROM OrderItems
  GROUP BY OrderID
)
SELECT O.OrderID, O.Item,
  O.Quantity,
  trunc((O.Quantity::decimal / AO.TotalQty::decimal)*100,2) AS PercentOfOrder
  FROM OrderItems AS O
  INNER JOIN
  AggregatedOrders AS AO
  ON O.OrderID = AO.OrderID;
```

The preceding example produces the following results.

```
OrderID  Item       Quantity  PercentOfOrder
1        M8 Bolt    100       100
2        M8 Nut     100       100
3        M8 Washer  200       66.66
3        M6 Washer  100       33.33
```

Unlike SQL Server, PostgreSQL requires the `RECURSIVE` keyword for recursive `WITH` queries.

Use a recursive CTE to display the employee-management hierarchy.

```
WITH RECURSIVE EmpHierarchyCTE AS
(
-- Anchor query retrieves the top manager
SELECT 0 AS LVL,
  Employee,
  DirectManager
FROM Employees AS E
WHERE DirectManager IS NULL
UNION ALL

-- Recursive query gets all Employees managed by the previous level
SELECT LVL + 1 AS LVL,
  E.Employee,
  E.DirectManager
FROM EmpHierarchyCTE AS EH
INNER JOIN
Employees AS E
ON E.DirectManager = EH.Employee
)
SELECT *
FROM EmpHierarchyCTE;
```

The preceding example produces the following results.

```
LVL  Employee  DirectManager
0    Dave
1    John      Dave
1    Jose      Dave
2    Fred      John
```

For more information, see [WITH Queries (Common Table Expressions)](https://www.postgresql.org/docs/13/queries-with.html) in the *PostgreSQL documentation*.

# Data types for ANSI SQL


This topic provides reference information about data type compatibility between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. You can use this information to understand how various SQL Server data types map to their PostgreSQL equivalents during migration.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |   [Data Types](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.types)   |  Syntax and handling differences.  | 

## SQL Server Usage


In SQL Server, each table column, variable, expression, and parameter has an associated data type. SQL Server provides a rich set of built-in data types as summarized in the following table.


| Category | Data types | 
| --- | --- | 
|  Numeric  |   `BIT`, `TINYINT`, `SMALLINT`, `INT`, `BIGINT`, `NUMERIC`, `DECIMAL`, `MONEY`, `SMALLMONEY`, `FLOAT`, `REAL`   | 
|  String and Character  |   `CHAR`, `VARCHAR`, `NCHAR`, `NVARCHAR`   | 
|  Temporal  |   `DATE`, `TIME`, `SMALLDATETIME`, `DATETIME`, `DATETIME2`, `DATETIMEOFFSET`   | 
|  Binary  |   `BINARY`, `VARBINARY`   | 
|  Large Object (LOB)  |   `TEXT`, `NTEXT`, `IMAGE`, `VARCHAR(MAX)`, `NVARCHAR(MAX)`, `VARBINARY(MAX)`   | 
|  Cursor  |   `CURSOR`   | 
|  GUID  |   `UNIQUEIDENTIFIER`   | 
|  Hierarchical identifier  |   `HIERARCHYID`   | 
|  Spatial  |   `GEOMETRY`, `GEOGRAPHY`   | 
|  Sets (table type)  |   `TABLE`   | 
|  XML  |   `XML`   | 
|  Other specialty types  |   `ROW VERSION`, `SQL_VARIANT`   | 

You can create custom user defined data types using T-SQL, and the .NET Framework. Custom data types are based on the built-in system data types and are used to simplify development. For more information, see [User-Defined Types](chap-sql-server-aurora-pg.tsql.udt.md).
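As a minimal sketch, a user-defined alias type based on a built-in type can be created with T-SQL (the type and table names are illustrative):

```
-- Create an alias type for social security numbers and use it in a table
CREATE TYPE SSN FROM VARCHAR(11) NOT NULL;

CREATE TABLE Employees
(
  EmployeeID INT PRIMARY KEY,
  EmployeeSSN SSN
);
```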

### TEXT, NTEXT, and IMAGE Deprecated Data Types


The `TEXT`, `NTEXT`, and `IMAGE` data types have been deprecated as of SQL Server 2008R2. For more information, see [Deprecated Database Engine Features in SQL Server 2008 R2](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms143729(v=sql.105)) in the *SQL Server documentation*.

These data types are legacy types for storing BLOB and CLOB data. The `TEXT` data type was used to store ASCII text CLOBs, the `NTEXT` data type to store Unicode CLOBs, and `IMAGE` was used as a generic data type for storing all BLOB data. In SQL Server 2005, Microsoft introduced the new and improved `VARCHAR(MAX)`, `NVARCHAR(MAX)`, and `VARBINARY(MAX)` data types as the new BLOB and CLOB standard. These new types support a wider range of functions and operations and provide enhanced performance over the legacy types.

If your code uses the `TEXT`, `NTEXT`, or `IMAGE` data types, AWS SCT automatically converts them to the appropriate Aurora PostgreSQL data types: `TEXT` and `NTEXT` convert to `TEXT`, and `IMAGE` converts to `BYTEA`. Make sure you use the proper collations. For more information, see [SQL Server Collations and PostgreSQL Encoding](chap-sql-server-aurora-pg.tsql.collations.md).

### Examples


Define table columns.

```
CREATE TABLE MyTable
(
Col1 INTEGER NOT NULL PRIMARY KEY,
Col2 NVARCHAR(100) NOT NULL
);
```

Define variable types.

```
DECLARE @MyXMLType AS XML,
  @MyTemporalType AS DATETIME2
```

```
DECLARE @MyTableType
AS TABLE
(
Col1 BINARY(16) NOT NULL PRIMARY KEY,
Col2 XML NULL
);
```

For more information, see [Data types (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL provides multiple data types equivalent to certain SQL Server data types. The following tables include the full list of PostgreSQL data types.

 **Character data types** 


| SQL Server data type | SQL Server data type characteristic | PostgreSQL identical compatibility | PostgreSQL corresponding data type | 
| --- | --- | --- | --- | 
|  CHAR  |  Fixed length 1-8,000  |  Yes  |  CHAR  | 
|  VARCHAR  |  Variable length 1-8,000  |  Yes  |  VARCHAR  | 
|  NCHAR  |  Fixed length 1-4,000  |  Yes  |  CHAR (n)  | 
|  NVARCHAR  |  Variable length 1-4,000  |  Yes  |  VARCHAR (n)  | 

 **Numeric data types** 


| SQL Server data type | SQL Server data type characteristic | PostgreSQL identical compatibility | PostgreSQL corresponding data type | 
| --- | --- | --- | --- | 
|  BIT  |  The first 8 `BIT` columns in a table consume 1 byte, columns 9 through 16 consume 2 bytes, and so on.  |  Yes  |  BIT  | 
|  TINYINT  |  8-bit unsigned integer, 0 to 255  |  No  |  SMALLINT  | 
|  SMALLINT  |  16-bit integer  |  Yes  |  SMALLINT  | 
|  INT, INTEGER  |  32-bit integer  |  Yes  |  INT, INTEGER  | 
|  BIGINT  |  64-bit integer  |  Yes  |  BIGINT  | 
|  NUMERIC  |  Fixed-point number  |  Yes  |  NUMERIC  | 
|  DECIMAL  |  Fixed-point number  |  Yes  |  DECIMAL  | 
|  MONEY  |  64-bit currency amount  |  Yes  |  MONEY  | 
|  SMALLMONEY  |  32-bit currency amount  |  No  |  MONEY  | 
|  FLOAT  |  Floating-point number  |  Yes  |  FLOAT  | 
|  REAL  |  Single-precision floating-point number  |  Yes  |  REAL  | 

 **Temporal data types** 


| SQL Server data type | SQL Server data type characteristic | PostgreSQL identical compatibility | PostgreSQL corresponding data type | 
| --- | --- | --- | --- | 
|  DATE  |  Date (year, month and day)  |  Yes  |  DATE  | 
|  TIME  |  Time (hour, minute, second and fraction)  |  Yes  |  TIME  | 
|  SMALLDATETIME  |  Date and time  |  No  |  TIMESTAMP(0)  | 
|  DATETIME  |  Date and time with fraction  |  No  |  TIMESTAMP(3)  | 
|  DATETIME2  |  Date and time with fraction  |  No  |  TIMESTAMP(p)  | 
|  DATETIMEOFFSET  |  Date and time with fraction and time zone  |  No  |  TIMESTAMP(p) WITH TIME ZONE  | 

 **Binary data types** 


| SQL Server data type | SQL Server data type characteristic | PostgreSQL identical compatibility | PostgreSQL corresponding data type | 
| --- | --- | --- | --- | 
|  BINARY  |  Fixed-length byte string  |  No  |  BYTEA  | 
|  VARBINARY  |  Variable length 1-8,000  |  No  |  BYTEA  | 

 **LOB data types** 


| SQL Server data type | SQL Server data type characteristic | PostgreSQL identical compatibility | PostgreSQL corresponding data type | 
| --- | --- | --- | --- | 
|  TEXT  |  Variable-length character data up to 2 GB  |  Yes  |  TEXT  | 
|  NTEXT  |  Variable-length Unicode UCS-2 data up to 2 GB  |  No  |  TEXT  | 
|  IMAGE  |  Variable-length character data up to 2 GB  |  No  |  BYTEA  | 
|  VARCHAR(MAX)  |  Variable-length character data up to 2 GB  |  Yes  |  TEXT  | 
|  NVARCHAR(MAX)  |  Variable-length Unicode UCS-2 data up to 2 GB  |  No  |  TEXT  | 
|  VARBINARY(MAX)  |  Variable-length character data up to 2 GB  |  No  |  BYTEA  | 

 **Spatial data types** 


| SQL Server data type | SQL Server data type characteristic | PostgreSQL identical compatibility | PostgreSQL corresponding data type | 
| --- | --- | --- | --- | 
|  GEOMETRY  |  Euclidean (flat) coordinate system  |  Yes  |  GEOMETRY  | 
|  GEOGRAPHY  |  Round-earth coordinate system  |  Yes  |  GEOGRAPHY  | 
|  SQL_VARIANT  |  Maximum length of 8016  |  No  |  No equivalent  | 

 **Other data types** 


| SQL Server data type | SQL Server data type characteristic | PostgreSQL identical compatibility | PostgreSQL corresponding data type | 
| --- | --- | --- | --- | 
|  XML  |  XML data  |  Yes  |  XML  | 
|  UNIQUEIDENTIFIER  |  16-byte GUID (UUID)  |  No  |  CHAR(16)  | 
|  HIERARCHYID  |  Approximately 5 bytes  |  No  |  VARCHAR (n)  | 
|  ROWVERSION  |  8 bytes  |  No  |  TIMESTAMP(p)  | 

### PostgreSQL Character Column Semantics


PostgreSQL uses character semantics, not byte semantics, for column sizes. If you define a column as `VARCHAR(10)`, PostgreSQL can store 10 characters regardless of how many bytes it takes to store each non-English character. `VARCHAR(n)` stores strings up to n characters, not bytes, in length.
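The following sketch illustrates character semantics: the 10-character string fits in `VARCHAR(10)` even though its UTF-8 encoding is longer than 10 bytes (the table name is illustrative; byte counts assume a UTF-8 database):

```
CREATE TABLE CharSemantics (Val VARCHAR(10));

-- 10 characters, but 20 bytes in UTF-8; accepted by PostgreSQL
INSERT INTO CharSemantics VALUES ('áéíóúàèìòù');

-- Compare the character count with the byte count
SELECT char_length(Val), octet_length(Val) FROM CharSemantics;
```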

### Migration of SQL Server Data Types to PostgreSQL Data Types


You can use AWS Schema Conversion Tool (AWS SCT) for automatic migration and conversion of SQL Server tables and data types.

### Examples


To demonstrate AWS SCT capability for migrating SQL Server tables to their PostgreSQL equivalents, a table containing columns representing the majority of SQL Server data types was created and converted using AWS SCT.

Source SQL Server-compatible DDL for creating the `DATATYPES` table.

```
CREATE TABLE "DataTypes"(
  "BINARY_FLOAT" REAL,
  "BINARY_DOUBLE" FLOAT,
  "BLOB" VARBINARY(4000),
  "CHAR" CHAR(10),
  "CHARACTER" CHAR(10),
  "CLOB" VARCHAR(4000),
  "DATE" DATE,
  "DECIMAL" NUMERIC(3,2),
  "DOUBLE_PRECISION" FLOAT(52),
  "FLOAT" FLOAT(3),
  "INTEGER" INTEGER,
  "LONG" TEXT,
  "NCHAR" NCHAR(10),
  "NUMBER" NUMERIC(9,9),
  "NUMBER1" NUMERIC(9,0),
  "NUMERIC" NUMERIC(9,9),
  "RAW" BINARY(10),
  "REAL" FLOAT(52),
  "SMALLINT" SMALLINT,
  "TIMESTAMP" TIMESTAMP,
  "TIMESTAMP_WITH_TIME_ZONE" DATETIMEOFFSET(5),
  "VARCHAR" VARCHAR(10),
  "VARCHAR2" VARCHAR(10),
  "XMLTYPE" XML
);
```

Target PostgreSQL compatible DDL for creating the DATATYPES table migrated from SQL Server with AWS SCT.

```
CREATE TABLE IF NOT EXISTS datatypes(
  binary_float real DEFAULT NULL,
  binary_double double precision DEFAULT NULL,
  blob bytea DEFAULT NULL,
  char character(10) DEFAULT NULL,
  character character(10) DEFAULT NULL,
  clob text DEFAULT NULL,
  date TIMESTAMP(0) without time zone DEFAULT NULL,
  decimal numeric(3,2) DEFAULT NULL,
  dec numeric(3,2) DEFAULT NULL,
  double_precision double precision DEFAULT NULL,
  float double precision DEFAULT NULL,
  integer numeric(38,0) DEFAULT NULL,
  long text DEFAULT NULL,
  nchar character(10) DEFAULT NULL,
  number numeric(9,9) DEFAULT NULL,
  number1 numeric(9,0) DEFAULT NULL,
  numeric numeric(9,9) DEFAULT NULL,
  raw bytea DEFAULT NULL,
  real double precision DEFAULT NULL,
  smallint numeric(38,0) DEFAULT NULL,
  timestamp TIMESTAMP(5) without time zone DEFAULT NULL,
  timestamp_with_time_zone TIMESTAMP(5) with time zone DEFAULT NULL,
  varchar character varying(10) DEFAULT NULL,
  varchar2 character varying(10) DEFAULT NULL,
  xmltype xml DEFAULT NULL
)
WITH (
  OIDS=FALSE
);
```

## Summary


 AWS SCT converts all incompatible data types.

SQL Server CREATE TABLE command:

```
CREATE TABLE scttest(
SMALLDATETIMEcol SMALLDATETIME,
datetimecol DATETIME,
datetime2col DATETIME2,
datetimeoffsetcol DATETIMEOFFSET,
binarycol BINARY,
varbinarycol VARBINARY,
ntextcol NTEXT,
imagecol IMAGE,
nvarcharmaxcol NVARCHAR(MAX),
varbinarymaxcol VARBINARY(MAX),
uniqueidentifiercol UNIQUEIDENTIFIER,
hierarchyiDcol HIERARCHYID,
sql_variantcol SQL_VARIANT,
rowversioncol ROWVERSION);
```

The equivalent command that was created by AWS SCT:

```
CREATE TABLE scttest(
smalldatetimecol TIMESTAMP WITHOUT TIME ZONE,
datetimecol TIMESTAMP WITHOUT TIME ZONE,
datetime2col TIMESTAMP(6) WITHOUT TIME ZONE,
datetimeoffsetcol TIMESTAMP(6) WITH TIME ZONE,
binarycol BYTEA,
varbinarycol BYTEA,
ntextcol TEXT,
imagecol BYTEA,
nvarcharmaxcol TEXT,
varbinarymaxcol BYTEA,
uniqueidentifiercol UUID,
hierarchyidcol VARCHAR(8000),
sql_variantcol VARCHAR(8000),
rowversioncol VARCHAR(8000) NOT NULL);
```

For more information, see [System Columns](https://www.postgresql.org/docs/13/ddl-system-columns.html) and [Data Types](https://www.postgresql.org/docs/13/datatype.html) in the *PostgreSQL documentation*, and [Schema Conversion Tool Documentation](https://docs.aws.amazon.com/SchemaConversionTool/index.html).

# Derived tables for ANSI SQL


This topic provides reference information about derived tables in SQL Server and PostgreSQL, focusing on their compatibility in the context of migrating from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL. You can understand how derived tables function similarly in both database systems, enabling you to write complex join queries.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Five star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-5.png)   |   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-5.png)   |  N/A  |  N/A  | 

## SQL Server Usage


SQL Server implements derived tables as specified in ANSI SQL:2011. Derived tables are similar to CTEs, but the reference to another query is used inside the `FROM` clause of a query.

This feature enables you to write more sophisticated, complex join queries.

### Examples


```
SELECT name, salary, average_salary
FROM (SELECT AVG(salary)
  FROM employee) AS workers (average_salary), employee
WHERE salary > average_salary
ORDER BY salary DESC;
```

For more information, see [FROM clause plus JOIN, APPLY, PIVOT (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/queries/from-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL implements derived tables and is fully compatible with SQL Server derived tables.

### Examples


```
SELECT name, salary, average_salary
FROM (SELECT AVG(salary)
  FROM employee) AS workers (average_salary), employee
WHERE salary > average_salary
ORDER BY salary DESC;
```

For more information, see [Table Expressions ](https://www.postgresql.org/docs/13/queries-table-expressions.html) in the *PostgreSQL documentation*.

# GROUP BY for ANSI SQL


This topic provides reference information about migrating from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL, focusing on GROUP BY, CUBE, ROLLUP, and GROUPING SETS functionalities. You can use this guide to understand the similarities and differences between these database systems when working with aggregate functions and grouping operations.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Five star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-5.png)   |   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-5.png)   |  N/A  |  N/A  | 

## SQL Server Usage


 `GROUP BY` is an ANSI SQL query clause used to group individual rows that have passed the `WHERE` filter clause into groups to be passed on to the `HAVING` filter and then to the `SELECT` list. This grouping supports the use of aggregate functions such as `SUM`, `MAX`, `AVG`, and others.
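As a minimal sketch of the `WHERE` → `GROUP BY` → `HAVING` pipeline (table and column names are illustrative):

```
-- Count orders per customer, keeping only customers with more than two orders
SELECT Customer, COUNT(*) AS NumOrders
FROM Orders
WHERE OrderDate >= '20180101'  -- filters individual rows before grouping
GROUP BY Customer
HAVING COUNT(*) > 2;           -- filters groups after aggregation
```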

### Syntax


ANSI compliant `GROUP BY` Syntax.

```
GROUP BY
  [ ROLLUP | CUBE ]
  <Column Expression> ...n
  [ GROUPING SETS ( <Grouping Set> ) ...n ]
```

Backward compatibility syntax.

```
GROUP BY
  [ ALL ] <Column Expression> ...n
  [ WITH CUBE | ROLLUP ]
```

The basic ANSI syntax for `GROUP BY` supports multiple grouping expressions, the `CUBE` and `ROLLUP` keywords, and the `GROUPING SETS` clause, all of which add super-aggregate rows to the output.

Up to SQL Server 2008 R2, the database engine supported a legacy, proprietary syntax (not ANSI compliant) using the `WITH CUBE` and `WITH ROLLUP` clauses. These clauses added super-aggregates to the output.

Also, up to SQL Server 2008 R2, SQL Server supported the `GROUP BY ALL` syntax, which was used to create an empty group for rows that failed the WHERE clause.

SQL Server supports the following aggregate functions: `AVG`, `CHECKSUM_AGG`, `COUNT`, `COUNT_BIG`, `GROUPING`, `GROUPING_ID`, `STDEV`, `STDEVP`, `STRING_AGG`, `SUM`, `MIN`, `MAX`, `VAR`, `VARP`.
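The `GROUPING` function distinguishes NULLs produced by super-aggregate rows from NULLs in the data. A hedged sketch, assuming an Orders table with Customer and OrderDate columns:

```
-- GROUPING returns 1 when the NULL comes from a super-aggregate row
SELECT Customer,
  OrderDate,
  COUNT(*) AS NumOrders,
  GROUPING(Customer) AS IsCustomerTotal
FROM Orders
GROUP BY ROLLUP (Customer, OrderDate);
```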

### Examples


 **Legacy CUBE and ROLLUP Syntax** 

```
CREATE TABLE Orders
(
  OrderID INT IDENTITY(1,1) NOT NULL
  PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  OrderDate DATE NOT NULL
);
```

```
INSERT INTO Orders(Customer, OrderDate)
VALUES ('John', '20180501'), ('John', '20180502'), ('John', '20180503'),
  ('Jim', '20180501'), ('Jim', '20180503'), ('Jim', '20180504')
```

```
SELECT Customer,
  OrderDate,
  COUNT(*) AS NumOrders
FROM Orders AS O
GROUP BY Customer, OrderDate
WITH ROLLUP
```

The preceding example produces the following results.

```
Customer  OrderDate   NumOrders
Jim       2018-05-01  1
Jim       2018-05-03  1
Jim       2018-05-04  1
Jim       NULL        3
John      2018-05-01  1
John      2018-05-02  1
John      2018-05-03  1
John      NULL        3
NULL      NULL        6
```

The rows with NULL were added as a result of the `WITH ROLLUP` clause and contain super-aggregates for the following:
+ All orders for Jim and John regardless of OrderDate.
+ A super-aggregate for all customers and all dates.

Using `CUBE` instead of `ROLLUP` adds super aggregates in all possible combinations, not only in `GROUP BY` expression order.

```
SELECT Customer,
  OrderDate,
  COUNT(*) AS NumOrders
FROM Orders AS O
GROUP BY Customer, OrderDate
WITH CUBE
```

The preceding example produces the following results.

```
Customer  OrderDate   NumOrders
Jim       2018-05-01  1
John      2018-05-01  1
NULL      2018-05-01  2
John      2018-05-02  1
NULL      2018-05-02  1
Jim       2018-05-03  1
John      2018-05-03  1
NULL      2018-05-03  2
Jim       2018-05-04  1
NULL      2018-05-04  1
NULL      NULL        6
Jim       NULL        3
John      NULL        3
```

Four additional rows were added by the `CUBE`. They provide super aggregates for every date for all customers that were not part of the `ROLLUP` results in the preceding example.

 **Legacy GROUP BY ALL** 

Use the Orders table from the previous example.

```
SELECT Customer, OrderDate, COUNT(*) AS NumOrders
FROM Orders AS O
WHERE OrderDate <= '20180503'
GROUP BY ALL Customer, OrderDate
```

The preceding example produces the following results.

```
Customer  OrderDate   NumOrders
Jim       2018-05-01  1
John      2018-05-01  1
John      2018-05-02  1
Jim       2018-05-03  1
John      2018-05-03  1
Jim       2018-05-04  0
Warning: Null value is eliminated by an aggregate or other SET operation.
```

The last row didn't satisfy the `WHERE` clause and was returned as an empty group with `COUNT(*) = 0`, as indicated by the warning.

 **Use GROUPING SETS** 

The following query uses the ANSI-compliant `GROUPING SETS` syntax to provide all possible aggregate combinations for the Orders table, similar to the result of the `CUBE` syntax. This syntax requires specifying each dimension to aggregate.

```
SELECT Customer, OrderDate, COUNT(*) AS NumOrders
FROM Orders AS O
GROUP BY GROUPING SETS (
  (Customer, OrderDate),
  (Customer),
  (OrderDate),
  ()
)
```

The preceding example produces the following results.

```
Customer  OrderDate   NumOrders
Jim       2018-05-01  1
John      2018-05-01  1
NULL      2018-05-01  2
John      2018-05-02  1
NULL      2018-05-02  1
Jim       2018-05-03  1
John      2018-05-03  1
NULL      2018-05-03  2
Jim       2018-05-04  1
NULL      2018-05-04  1
NULL      NULL        6
Jim       NULL        3
John      NULL        3
```

For more information, see [Aggregate Functions (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-ver15) and [SELECT - GROUP BY- Transact-SQL](https://docs.microsoft.com/en-us/sql/t-sql/queries/select-group-by-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) supports the basic ANSI syntax for `GROUP BY` and also supports `GROUPING SETS`, `CUBE`, and `ROLLUP`.

In Aurora PostgreSQL, you can use `ROLLUP` in the `GROUP BY` clause of a query, but the syntax is different from SQL Server: there is no `WITH` keyword in the statement.

```
SELECT Customer, OrderDate, COUNT(*) AS NumOrders
FROM Orders AS O
GROUP BY ROLLUP (Customer, OrderDate)
```

The main difference is that the grouped columns move into a parenthesized list after the `ROLLUP` keyword.

The same change applies to the `CUBE` option.

```
SELECT Customer, OrderDate, COUNT(*) AS NumOrders
FROM Orders AS O
GROUP BY CUBE (Customer, OrderDate);
```

For `GROUPING SETS`, use the following query.

```
SELECT Customer, OrderDate, COUNT(*) AS NumOrders
FROM Orders AS O
GROUP BY GROUPING SETS (
  (Customer, OrderDate),
  (Customer),
  (OrderDate),
  ()
);
```

For more information, see [Table Expressions](https://www.postgresql.org/docs/13/queries-table-expressions.html) in the *PostgreSQL documentation*.

### Syntax


```
SELECT <Select List>
FROM <Table Source>
WHERE <Row Filter>
GROUP BY
  [ROLLUP | CUBE | GROUPING SETS]
<Column Name> | <Expression> | <Position>
```

### Migration Considerations


Aurora PostgreSQL provides the `GROUP BY` functionality except for the `ALL` option.

Convert every query so that the column list appears in parentheses after the modifier, such as `ROLLUP`, `CUBE`, or `GROUPING SETS`.

### Examples


Rewrite SQL Server `WITH CUBE` modifier for migration.

```
CREATE TABLE Orders
(
  OrderID serial NOT NULL
  PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  OrderDate DATE NOT NULL
);
```

```
INSERT INTO Orders(Customer, OrderDate)
VALUES ('John', '20180501'), ('John', '20180502'), ('John', '20180503'),
  ('Jim', '20180501'), ('Jim', '20180503'), ('Jim', '20180504');
```

```
SELECT Customer, OrderDate, COUNT(*) AS NumOrders
FROM Orders AS O
GROUP BY CUBE (Customer, OrderDate);
```

The preceding example produces the following results.

```
Customer  OrderDate   NumOrders
Jim       2018-05-01  1
Jim       2018-05-03  1
Jim       2018-05-04  1
Jim       NULL        3
John      2018-05-01  1
John      2018-05-02  1
John      2018-05-03  1
John      NULL        3
NULL      NULL        6
NULL      2018-05-01  2
NULL      2018-05-02  1
NULL      2018-05-03  2
NULL      2018-05-04  1
```

Rewrite SQL Server `GROUP BY ALL` for migration.

```
SELECT Customer, OrderDate, COUNT(*) AS NumOrders
FROM Orders AS O
WHERE OrderDate <= '20180503'
GROUP BY Customer, OrderDate
UNION ALL -- Add the empty groups
SELECT DISTINCT Customer, OrderDate, 0
FROM Orders AS O
WHERE OrderDate > '20180503';
```

The preceding example produces the following results.

```
Customer  OrderDate   NumOrders
Jim       2018-05-01  1
Jim       2018-05-03  1
John      2018-05-01  1
John      2018-05-02  1
John      2018-05-03  1
Jim       2018-05-04  0
```

## Summary


The following table shows similarities, differences, and key migration considerations.


| SQL Server feature |  Aurora PostgreSQL feature | Comments | 
| --- | --- | --- | 
|   `MAX`, `MIN`, `AVG`, `COUNT`, `COUNT_BIG`   |   `MAX`, `MIN`, `AVG`, `COUNT`   |  In Aurora PostgreSQL, `COUNT` returns a `BIGINT` and is compatible with SQL Server `COUNT` and `COUNT_BIG`.  | 
|   `CHECKSUM_AGG`   |  N/A  |  Use a loop to calculate checksums.  | 
|   `GROUPING`, `GROUPING_ID`   |   `GROUPING`   |  Reconsider the query logic to avoid having NULL groups that are ambiguous with the super aggregates.  | 
|   `STDEV`, `STDEVP`, `VAR`, `VARP`   |   `STDDEV`, `STDDEV_POP`, `VARIANCE`, `VAR_POP`   |  Rewrite keywords only.  | 
|   `STRING_AGG`   |   `STRING_AGG`   |  | 
|   `WITH ROLLUP`   |   `ROLLUP`   |  Remove `WITH` and move the column list after the `ROLLUP` keyword.  | 
|   `WITH CUBE`   |   `CUBE`   |  Remove `WITH` and move the column list after the `CUBE` keyword.  | 
|   `GROUPING SETS`   |   `GROUPING SETS`   |  | 

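As an illustration of the keyword rewrites in the preceding table, the following sketch shows a hypothetical T-SQL statistical aggregate and its Aurora PostgreSQL equivalent, plus a `GROUPING` call that disambiguates super-aggregate rows. The Sales table and its columns are assumed for this example; the Orders table is the one used earlier in this topic.

```
-- SQL Server: standard deviation and population variance per customer
SELECT Customer, STDEV(Amount) AS AmtStdDev, VARP(Amount) AS AmtVarPop
FROM Sales
GROUP BY Customer;

-- Aurora PostgreSQL: the same query with the keywords renamed
SELECT Customer, STDDEV(Amount) AS AmtStdDev, VAR_POP(Amount) AS AmtVarPop
FROM Sales
GROUP BY Customer;

-- Both engines: GROUPING() returns 1 for super-aggregate rows,
-- distinguishing them from rows where Customer is genuinely NULL
SELECT Customer, COUNT(*) AS NumOrders, GROUPING(Customer) AS IsTotalRow
FROM Orders
GROUP BY ROLLUP (Customer);
```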
For more information, see [Aggregate Functions](https://www.postgresql.org/docs/10/functions-aggregate.html) in the *PostgreSQL documentation*.

# Table JOIN for ANSI SQL


This topic provides reference information about join operations in SQL Server and their compatibility with Amazon Aurora PostgreSQL. You can understand how different types of joins, such as INNER JOIN, OUTER JOIN, CROSS JOIN, and APPLY operations, are supported or need to be rewritten when migrating from SQL Server to Aurora PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |   `OUTER JOIN` with commas. `CROSS APPLY` and `OUTER APPLY` aren’t supported.  | 

## SQL Server Usage


### ANSI JOIN


SQL Server supports the standard ANSI join types.
+  `<Set A> CROSS JOIN <Set B>`. Results in a Cartesian product of the two sets. Every `JOIN` starts as a Cartesian product.
+  `<Set A> INNER JOIN <Set B> ON <Join Condition>`. Filters the Cartesian product to only the rows where the join predicate evaluates to `TRUE`.
+  `<Set A> LEFT OUTER JOIN <Set B> ON <Join Condition>`. Adds to the `INNER JOIN` all the rows from the reserved left set with NULL for all the columns that come from the right set.
+  `<Set A> RIGHT OUTER JOIN <Set B> ON <Join Condition>` Adds to the `INNER JOIN` all the rows from the reserved right set with NULL for all the columns that come from the left set.
+  `<Set A> FULL OUTER JOIN <Set B> ON <Join Condition>`. Designates both sets as reserved and adds non-matching rows from both, similar to a `LEFT OUTER JOIN` and a `RIGHT OUTER JOIN`.

### APPLY


SQL Server also supports the `APPLY` operator, which is somewhat similar to a join. However, `APPLY` operators enable a correlation between `<Set A>` and `<Set B>`, such that `<Set B>` may consist of a subquery, a `VALUES` row value constructor, or a table-valued function that is evaluated for each row of `<Set A>`, where the `<Set B>` query can reference columns from the current row in `<Set A>`. This functionality isn't possible with any type of standard `JOIN` operator.

There are two `APPLY` types:
+  `<Set A> CROSS APPLY <Set B>`. Similar to a `CROSS JOIN` in the sense that every row from `<Set A>` is matched with every row from `<Set B>`.
+  `<Set A> OUTER APPLY <Set B>`. Similar to a `LEFT OUTER JOIN` in the sense that rows from `<Set A>` are returned even if the sub query for `<Set B>` produces an empty set. In that case, NULL is assigned to all columns of `<Set B>`.

### ANSI SQL 89 JOIN


Up until version 2008R2, SQL Server also supported the old-style `JOIN` syntax including `LEFT` and `RIGHT OUTER JOIN`.

The ANSI syntax for a `CROSS JOIN` operator was to list the sets in the `FROM` clause using commas as separators.

```
SELECT * FROM Table1,
  Table2,
  Table3...
```

To perform an `INNER JOIN`, you only needed to add the `JOIN` predicate as part of the `WHERE` clause.

```
SELECT * FROM Table1,
  Table2
WHERE Table1.Column1 = Table2.Column1
```

Although the ANSI standard didn't specify outer joins at the time, most RDBMSs supported them in one way or another. T-SQL supported outer joins by adding an asterisk to the left or the right of the equality sign in the join predicate to designate the reserved table.

```
SELECT * FROM Table1,
  Table2
WHERE Table1.Column1 *= Table2.Column1
```

To perform a `FULL OUTER JOIN`, asterisks were placed on both sides of the equality sign of the join predicate.

As of SQL Server 2008R2, outer joins using this syntax have been deprecated. For more information, see [Deprecated Database Engine Features in SQL Server 2008 R2](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms143729(v=sql.105)?redirectedfrom=MSDN) in the *SQL Server documentation*.

**Note**  
Even though `INNER JOIN` using the ANSI SQL 89 syntax is still supported, this style is strongly discouraged because it is notorious for introducing hard-to-catch programming bugs.
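The classic failure mode is easy to illustrate: with the ANSI SQL 89 style, accidentally omitting the join predicate doesn't raise an error — it silently returns a Cartesian product. The table and column names reuse the placeholders from this section.

```
-- Intended inner join written in ANSI SQL 89 style
SELECT *
FROM Table1, Table2
WHERE Table1.Column1 = Table2.Column1;

-- The same query with the WHERE clause accidentally dropped still runs,
-- returning every combination of rows from both tables
SELECT *
FROM Table1, Table2;
```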

### Syntax


 **CROSS JOIN** 

```
FROM <Table Source 1>
  CROSS JOIN
  <Table Source 2>
```

```
-- ANSI 89
FROM <Table Source 1>,
  <Table Source 2>
```

 **INNER / OUTER JOIN** 

```
FROM <Table Source 1>
  [ { INNER | { { LEFT | RIGHT | FULL } [ OUTER ] } }] JOIN
  <Table Source 2>
  ON <JOIN Predicate>
```

```
-- ANSI 89
FROM <Table Source 1>,
  <Table Source 2>
WHERE <Join Predicate>
<Join Predicate>:: <Table Source 1 Expression> { = | *= | =* | *=* } <Table Source 2 Expression>
```

 **APPLY** 

```
FROM <Table Source 1>
  { CROSS | OUTER } APPLY
  <Table Source 2>
<Table Source 2>:: <SELECT sub-query> | <Table Valued UDF> | <VALUES clause>
```

### Examples


Create the Orders and Items tables.

```
CREATE TABLE Items
(
  Item VARCHAR(20) NOT NULL
  PRIMARY KEY,
  Category VARCHAR(20) NOT NULL,
  Material VARCHAR(20) NOT NULL
);
```

```
INSERT INTO Items (Item, Category, Material)
VALUES
('M8 Bolt', 'Metric Bolts', 'Stainless Steel'),
('M8 Nut', 'Metric Nuts', 'Stainless Steel'),
('M8 Washer', 'Metric Washers', 'Stainless Steel'),
('3/8" Bolt', 'Imperial Bolts', 'Brass')
```

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL
  REFERENCES Items(Item),
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200)
```

 **INNER JOIN** 

```
SELECT *
FROM Items AS I
  INNER JOIN
  OrderItems AS OI
  ON I.Item = OI.Item;
-- ANSI SQL 89
SELECT *
FROM Items AS I,
  OrderItems AS OI
WHERE I.Item = OI.Item;
```

 **LEFT OUTER JOIN** 

Find Items that were never ordered.

```
SELECT I.Item
FROM Items AS I
  LEFT OUTER JOIN
  OrderItems AS OI
  ON I.Item = OI.Item
WHERE OI.OrderID IS NULL;

-- ANSI SQL 89
SELECT Item
FROM
(
  SELECT I.Item, OI.OrderID
  FROM Items AS I,
    OrderItems AS OI
  WHERE I.Item *= OI.Item
) AS LeftJoined
WHERE LeftJoined.OrderID IS NULL;
```

 **FULL OUTER JOIN** 

```
CREATE TABLE T1(Col1 INT, Col2 CHAR(2));
CREATE TABLE T2(Col1 INT, Col2 CHAR(2));

INSERT INTO T1 (Col1, Col2)
VALUES (1, 'A'), (2,'B');

INSERT INTO T2 (Col1, Col2)
VALUES (2,'BB'), (3,'CC');

SELECT *
FROM T1
  FULL OUTER JOIN
  T2
  ON T1.Col1 = T2.Col1;
```

The preceding example produces the following results.

```
Col1  Col2  Col1  Col2
1     A     NULL  NULL
2     B     2     BB
NULL  NULL  3     CC
```

For more information, see [FROM clause plus JOIN, APPLY, PIVOT (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/queries/from-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) supports all types of joins in the same way as SQL Server.
+  `<Set A> CROSS JOIN <Set B>`. Results in a Cartesian product of the two sets. Every `JOIN` starts as a Cartesian product.
+  `<Set A> INNER JOIN <Set B> ON <Join Condition>`. Filters the Cartesian product to only the rows where the join predicate evaluates to `TRUE`.
+  `<Set A> LEFT OUTER JOIN <Set B> ON <Join Condition>`. Adds to the `INNER JOIN` all the rows from the reserved left set with NULL for all the columns that come from the right set.
+  `<Set A> RIGHT OUTER JOIN <Set B> ON <Join Condition>` Adds to the `INNER JOIN` all the rows from the reserved right set with NULL for all the columns that come from the left set.
+  `<Set A> FULL OUTER JOIN <Set B> ON <Join Condition>`. Designates both sets as reserved and adds non-matching rows from both, similar to a `LEFT OUTER JOIN` and a `RIGHT OUTER JOIN`.

PostgreSQL doesn’t support `APPLY` options. You can replace them with `INNER JOIN LATERAL` and `LEFT JOIN LATERAL`.

### Syntax


```
FROM
    <Table Source 1> CROSS JOIN <Table Source 2>
  | <Table Source 1> INNER JOIN <Table Source 2>
    ON <Join Predicate>
  | <Table Source 1> {LEFT|RIGHT|FULL} [OUTER] JOIN <Table Source 2>
    ON <Join Predicate>
```

### Migration Considerations


For most `JOIN` statements, the syntax should be equivalent and no rewrites should be needed. The differences are as follows.
+ ANSI SQL 89 isn’t supported.
+  `FULL OUTER JOIN` and `OUTER JOIN` using the pre-ANSI SQL 92 syntax aren’t supported, but you can use workarounds.
+  `CROSS APPLY` and `OUTER APPLY` aren’t supported. You can rewrite these statements using `INNER JOIN LATERAL` and `LEFT JOIN LATERAL`.
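For example, a SQL Server `CROSS APPLY` that retrieves the largest order line for each item could be rewritten with a `LATERAL` join as sketched following. The query reuses the Items and OrderItems tables from this topic; the specific logic is illustrative.

```
-- SQL Server: CROSS APPLY evaluates the subquery once per row of Items
SELECT I.Item, T.OrderID, T.Quantity
FROM Items AS I
  CROSS APPLY (
    SELECT TOP 1 OI.OrderID, OI.Quantity
    FROM OrderItems AS OI
    WHERE OI.Item = I.Item
    ORDER BY OI.Quantity DESC
  ) AS T;

-- Aurora PostgreSQL: INNER JOIN LATERAL with a constant TRUE predicate
-- is the equivalent; use LEFT JOIN LATERAL ... ON TRUE for OUTER APPLY
SELECT I.Item, T.OrderID, T.Quantity
FROM Items AS I
  INNER JOIN LATERAL (
    SELECT OI.OrderID, OI.Quantity
    FROM OrderItems AS OI
    WHERE OI.Item = I.Item
    ORDER BY OI.Quantity DESC
    LIMIT 1
  ) AS T ON TRUE;
```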

### Examples


Create the Orders and Items tables.

```
CREATE TABLE Items
(
  Item VARCHAR(20) NOT NULL
  PRIMARY KEY,
  Category VARCHAR(20) NOT NULL,
  Material VARCHAR(20) NOT NULL
);
```

```
INSERT INTO Items (Item, Category, Material)
VALUES
('M8 Bolt', 'Metric Bolts', 'Stainless Steel'),
('M8 Nut', 'Metric Nuts', 'Stainless Steel'),
('M8 Washer', 'Metric Washers', 'Stainless Steel'),
('3/8" Bolt', 'Imperial Bolts', 'Brass')
```

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL
  REFERENCES Items(Item),
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200)
```

 **INNER JOIN** 

```
SELECT *
FROM Items AS I
  INNER JOIN
  OrderItems AS OI
  ON I.Item = OI.Item;
```

 **LEFT OUTER JOIN** 

Find Items that were never ordered.

```
SELECT Item
FROM Items AS I
  LEFT OUTER JOIN
  OrderItems AS OI
  ON I.Item = OI.Item
WHERE OI.OrderID IS NULL;
```

 **FULL OUTER JOIN** 

```
CREATE TABLE T1(Col1 INT, Col2 CHAR(2));
CREATE TABLE T2(Col1 INT, Col2 CHAR(2));

INSERT INTO T1 (Col1, Col2)
VALUES (1, 'A'), (2,'B');

INSERT INTO T2 (Col1, Col2)
VALUES (2,'BB'), (3,'CC');

SELECT *
FROM T1
FULL OUTER JOIN
T2
ON T1.Col1 = T2.Col1;
```

The preceding example produces the following results.

```
Col1  Col2  Col1  Col2
1     A     NULL  NULL
2     B     2     BB
NULL  NULL  3     CC
```

## Summary


The following table shows similarities, differences, and key migration considerations.


| SQL Server feature |  Aurora PostgreSQL feature | Comments | 
| --- | --- | --- | 
|   `INNER JOIN` with `ON` clause or commas.  |  Supported.  |  | 
|   `OUTER JOIN` with `ON` clause.  |  Supported.  |  | 
|   `OUTER JOIN` with commas.  |  Not supported.  |  Requires a T-SQL rewrite; this syntax was deprecated in SQL Server 2008 R2.  | 
|   `CROSS JOIN` or using commas.  |  Supported.  |  | 
|   `CROSS APPLY` and `OUTER APPLY`.  |  Not supported.  |  Rewrite required.  | 

For more information, see [Controlling the Planner with Explicit JOIN Clauses](https://www.postgresql.org/docs/13/explicit-joins.html) and [Joins Between Tables](https://www.postgresql.org/docs/13/tutorial-join.html) in the *PostgreSQL documentation*.

# Temporal tables for ANSI SQL


This topic provides reference information about temporal database tables in Microsoft SQL Server and their compatibility with Amazon Aurora PostgreSQL. You can understand the functionality of temporal tables in SQL Server, including their use of DATETIME2 columns and querying methods. The topic also explains common scenarios where temporal tables are useful for tracking data change history.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |  N/A  |  N/A  | 

## SQL Server Usage


Temporal database tables were introduced in ANSI SQL 2011. T-SQL began supporting system-versioned temporal tables in SQL Server 2016.

Each temporal table has two explicitly defined `DATETIME2` columns known as period columns. The system uses these columns to record the period of availability for each row when it is modified. An additional history table retains the previous version of the data. The system can automatically create the history table, or a user can specify an existing table.

To query the history table, use `FOR SYSTEM_TIME` after the table name in the `FROM` clause and combine it with the following options:
+  `ALL` — returns all row versions.
+  `CONTAINED IN` — returns rows that were valid only within the specified period.
+  `AS OF` — returns rows that were valid at a specific point in time.
+  `BETWEEN` — returns rows that were valid at any time within the specified range.

Temporal tables are mostly used to track data change history, as described in the following scenarios.

### Anomaly Detection


Use this option when searching for data with unusual values. For example, detecting when a customer returns items too often.

```
CREATE TABLE Products_returned
(
  ProductID int NOT NULL PRIMARY KEY CLUSTERED,
  ProductName varchar(60) NOT NULL,
  return_count INT NOT NULL,
  ValidFrom datetime2(7) GENERATED ALWAYS AS ROW START NOT NULL,
  ValidTo datetime2(7) GENERATED ALWAYS AS ROW END NOT NULL,
  PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH( SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory,
  DATA_CONSISTENCY_CHECK = ON ))
```

Query the Products_returned table and run calculations on the data.

```
SELECT
  ProductId,
  LAG (return_count, 1, 1)
  over (partition by ProductId order by ValidFrom) as PrevValue,
  return_count,
  LEAD (return_count, 1, 1)
  over (partition by ProductId order by ValidFrom) as NextValue ,
  ValidFrom, ValidTo from Products_returned
FOR SYSTEM_TIME ALL
```

### Audit


Track changes to critical data such as salaries or medical data.

```
CREATE TABLE Employee
(
  EmployeeID int NOT NULL PRIMARY KEY CLUSTERED,
  Name nvarchar(60) NOT NULL,
  Salary decimal (6,2) NOT NULL,
  ValidFrom datetime2 (2) GENERATED ALWAYS AS ROW START,
  ValidTo datetime2 (2) GENERATED ALWAYS AS ROW END,
  PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeTrackHistory));
```

Use `FOR SYSTEM_TIME ALL` to retrieve changes from the history table.

```
SELECT * FROM Employee
  FOR SYSTEM_TIME ALL WHERE
    EmployeeID = 1000 ORDER BY ValidFrom;
```

### Other Scenarios


Additional scenarios include the following:
+ Fixing row-level corruption.
+ Slowly changing dimension.
+ Over time changes analysis.

For more information, see [Temporal tables](https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL provides an extension for supporting temporal tables, but Amazon Aurora doesn't support it. A workaround is to create table triggers that record changes to a custom history table. For more information, see [Triggers](chap-sql-server-aurora-pg.tsql.triggers.md).
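A minimal sketch of this trigger-based workaround follows, using an Employee table similar to the preceding SQL Server example. All names are illustrative, and the history table here records only the prior row version and the change time rather than full period columns. `EXECUTE FUNCTION` requires PostgreSQL 11 or later; use `EXECUTE PROCEDURE` on older versions.

```
CREATE TABLE Employee
(
  EmployeeID int NOT NULL PRIMARY KEY,
  Name varchar(60) NOT NULL,
  Salary numeric(8,2) NOT NULL
);

-- Custom history table populated by the trigger
CREATE TABLE EmployeeHistory
(
  EmployeeID int NOT NULL,
  Name varchar(60) NOT NULL,
  Salary numeric(8,2) NOT NULL,
  ChangedAt timestamp NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION employee_history_fn()
RETURNS trigger AS $$
BEGIN
  -- Save the row version that was just replaced or removed
  INSERT INTO EmployeeHistory (EmployeeID, Name, Salary)
  VALUES (OLD.EmployeeID, OLD.Name, OLD.Salary);
  RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_employee_history
AFTER UPDATE OR DELETE ON Employee
FOR EACH ROW EXECUTE FUNCTION employee_history_fn();
```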

# Views for ANSI SQL


This topic provides reference information about migrating views from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL. You can understand the similarities and differences in view functionality between these two database systems, which is crucial for planning and executing a successful migration. The topic covers basic view concepts, usage patterns, and specific features like indexed views, partitioned views, and updateable views.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |  PostgreSQL doesn’t support indexed and partitioned views.  | 

## SQL Server Usage


Views are schema objects that provide stored definitions for virtual tables. Similar to tables, views are data sets with uniquely named columns and rows. With the exception of indexed views, view objects don’t store data. They consist only of a query definition and are reevaluated for each invocation.

Views are used as abstraction layers and security filters for the underlying tables. They can `JOIN` and `UNION` data from multiple source tables and use aggregates, window functions, and other SQL features as long as the result is a relational set with uniquely identifiable columns and no inherent row order. You can use distributed views to query other databases and data sources using linked servers.

As an abstraction layer, a view can decouple application code from the database schema. You can change the underlying tables without the need to modify the application code as long as the expected results of the view don’t change. You can use this approach to provide backward compatible views of data.

As a security mechanism, a view can screen and filter source table data. You can perform permission management at the view level without explicit permissions to the base objects, provided the ownership chain is maintained. For more information, see [Overview of SQL Server Security](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/overview-of-sql-server-security).

View definitions are evaluated when they are created and aren’t affected by subsequent changes to the underlying tables. For example, a view that uses `SELECT *` doesn’t display columns that were added later to the base table. Similarly, if a column was dropped from the base table, invoking the view results in an error. Use the `SCHEMABINDING` option to prevent changes to base objects.

### Modifying Data Through Views


Updatable views can both select and modify data. An updatable view must meet the following conditions:
+ The DML statement targets only one base table.
+ Columns being modified must be directly referenced from the underlying base tables. Computed columns, set operators, functions, aggregates, or any other expressions aren't permitted.
+ If a view is created with the `CHECK OPTION`, rows being updated can't be filtered out of the view definition as the result of the update.

### Special View Types


SQL Server provides three types of specialized views:
+  **Indexed views**. These views are also known as materialized views or persisted views. Indexed views are standard views that have been evaluated and persisted in a unique clustered index, much like a normal clustered primary key table. Each time the source data changes, SQL Server re-evaluates the indexed views automatically and updates them. Indexed views are typically used to optimize performance by pre-processing operators such as aggregations and joins. Queries that need this pre-processing don't have to wait for it to be reevaluated on every query run.
+  **Partitioned views** rejoin horizontally partitioned data sets from multiple underlying tables, each containing only a subset of the data. The view uses a `UNION ALL` query where the underlying tables can reside locally or in other databases (or even other servers). These types of views are called Distributed Partitioned Views (DPV).
+  **System views** access server and object meta data. SQL Server also supports a set of standard `INFORMATION_SCHEMA` views for accessing object meta data.

### Syntax


```
CREATE [OR ALTER] VIEW [<Schema Name>.]<View Name> [(<Column Aliases>)]
[WITH [ENCRYPTION][SCHEMABINDING][VIEW_METADATA]]
AS <SELECT Query>
[WITH CHECK OPTION][;]
```

### Examples


The following example creates a view that aggregates items for each customer.

```
CREATE TABLE Orders
(
  OrderID INT NOT NULL PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  OrderDate DATETIME NOT NULL
  DEFAULT GETDATE()
);
```

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL
  REFERENCES Orders(OrderID),
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
CREATE VIEW SalesView
AS
SELECT O.Customer,
  OI.Item,
  SUM(CAST(OI.Quantity AS BIGINT)) AS TotalItemsBought
FROM Orders AS O
  INNER JOIN
  OrderItems AS OI
  ON O.OrderID = OI.OrderID
GROUP BY O.Customer, OI.Item;
```

The following example creates an indexed view that pre-aggregates items for each customer. Indexed views require the `SCHEMABINDING` option, two-part object names, and, when they use `GROUP BY`, a `COUNT_BIG(*)` column.

```
CREATE VIEW SalesViewIndexed
WITH SCHEMABINDING
AS
SELECT O.Customer,
  OI.Item,
  SUM(CAST(OI.Quantity AS BIGINT)) AS TotalItemsBought,
  COUNT_BIG(*) AS NumRows
FROM dbo.Orders AS O
  INNER JOIN
  dbo.OrderItems AS OI
  ON O.OrderID = OI.OrderID
GROUP BY O.Customer, OI.Item;
```

```
CREATE UNIQUE CLUSTERED INDEX IDX_SalesView
ON SalesViewIndexed (Customer, Item);
```

The following example creates a partitioned view.

```
CREATE VIEW dbo.PartitionedView
AS
SELECT *
FROM Table1
UNION ALL
SELECT *
FROM Table2
UNION ALL
SELECT *
FROM Table3
```

For more information, see [Views](https://docs.microsoft.com/en-us/sql/relational-databases/views/views?view=sql-server-ver15), [Modify Data Through a View](https://docs.microsoft.com/en-us/sql/relational-databases/views/modify-data-through-a-view?view=sql-server-ver15), and [CREATE VIEW (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


The basic form of views is similar between PostgreSQL and SQL Server. A view defines a stored query based on one or more physical database tables that runs every time the view is accessed.

More complex options, such as indexed views and partitioned views, aren't supported and may require a redesign or an application rewrite.

**Note**  
For Amazon Relational Database Service (Amazon RDS), starting with PostgreSQL 13, you can rename view columns using `ALTER VIEW` command. This option helps DBAs avoid dropping and recreating the view to change a column name.  
Use the following syntax to rename a column name in a view: `ALTER VIEW [ IF EXISTS ] name RENAME [ COLUMN ] column_name TO new_column_name`.  
For PostgreSQL versions lower than 13, you can change the column name in a view using the `ALTER TABLE` command.
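For example, renaming a column on a hypothetical `vw_dep` view could look like the following.

```
-- PostgreSQL 13 and later
ALTER VIEW IF EXISTS vw_dep RENAME COLUMN location_id TO site_id;

-- PostgreSQL 12 and earlier: the same rename uses ALTER TABLE syntax
ALTER TABLE vw_dep RENAME COLUMN location_id TO site_id;
```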

### PostgreSQL View Privileges


To create a view, make sure that you grant `SELECT` and `DML` privileges on the base tables or views to your role or user. For more information, see [GRANT](https://www.postgresql.org/docs/13/sql-grant.html) in the *PostgreSQL documentation*.

### PostgreSQL View Parameters


 **CREATE [OR REPLACE] VIEW** 

When you re-create an existing view, make sure that the new view has the same column structure as generated by the original view. The column structure includes column names, column order, and data types. It is sometimes preferable to drop the view and use the `CREATE VIEW` statement instead.

```
hr=# CREATE [OR REPLACE] VIEW VW_NAME AS
SELECT COLUMNS
FROM TABLE(s)
[WHERE CONDITIONS];

hr=# DROP VIEW [IF EXISTS] VW_NAME;
```

In the preceding example, the `IF EXISTS` parameter is optional.

 **WITH [ CASCADED | LOCAL ] CHECK OPTION** 

DML `INSERT` and `UPDATE` operations are checked against the view-defining condition to ensure that new or modified rows remain visible through the view. If a row violates the condition, the DML operation fails.
+  `LOCAL`. Verifies the view without a hierarchical check.
+  `CASCADED`. Verifies all underlying base views using a hierarchical check.

 **Running DML Commands On Views** 

PostgreSQL simple views are automatically updatable, so you can perform DML operations on them without further setup. An updatable view may contain a combination of updatable and non-updatable columns. A column is updatable if it references an updatable column of the underlying base table. Otherwise, the column is read-only, and an error is raised if an `INSERT` or `UPDATE` statement is attempted on it.

### Syntax


```
CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW name [ ( column_name [,...] ) ]
[ WITH ( view_option_name [= view_option_value] [, ... ] ) ]
AS query
[ WITH [ CASCADED | LOCAL ] CHECK OPTION ]
```

### Examples


The following example creates and updates a view without the `CHECK OPTION` parameter.

```
CREATE OR REPLACE VIEW VW_DEP AS
  SELECT DEPARTMENT_ID, DEPARTMENT_NAME, MANAGER_ID, LOCATION_ID
  FROM DEPARTMENTS
  WHERE LOCATION_ID=1700;

view VW_DEP created.

UPDATE VW_DEP SET LOCATION_ID=1600;

21 rows updated.
```

The following example creates and updates a view with the `LOCAL CHECK OPTION` parameter.

```
CREATE OR REPLACE VIEW VW_DEP AS
  SELECT DEPARTMENT_ID, DEPARTMENT_NAME, MANAGER_ID, LOCATION_ID
  FROM DEPARTMENTS
  WHERE LOCATION_ID=1700
  WITH LOCAL CHECK OPTION;

view VW_DEP created.

UPDATE VW_DEP SET LOCATION_ID=1600;

SQL Error: ERROR: new row violates check option for view "vw_dep"
```

## Summary



| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  Indexed views  |  Supported  |  N/A  | 
|  Partitioned views  |  Supported  |  N/A  | 
|  Updateable views  |  Supported  |  Supported  | 
|  Prevent schema conflicts  |   `SCHEMABINDING` option  |  N/A  | 
|  Triggers on views  |   `INSTEAD OF`   |   `INSTEAD OF`   | 
|  Temporary Views  |   `CREATE VIEW #View…`   |   `CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] VIEW`   | 
|  Refresh view definition  |   `sp_refreshview` / `ALTER VIEW`   |   `ALTER VIEW`   | 

For more information, see [Views](https://www.postgresql.org/docs/13/tutorial-views.html) and [CREATE VIEW](https://www.postgresql.org/docs/13/sql-createview.html) in the *PostgreSQL documentation*.

# Window functions for ANSI SQL


This topic provides reference information comparing window functions in Microsoft SQL Server and PostgreSQL, which is valuable for database migration projects. You can gain insights into the similarities and differences between these two database systems' analytical capabilities. The topic highlights the types of window functions available in SQL Server, including ranking, aggregate, and analytic functions, and compares them to PostgreSQL’s window function support.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Five star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-5.png)   |   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-5.png)   |  N/A  |  N/A  | 

## SQL Server Usage


Window functions use an `OVER` clause to define the window and frame for a data set to be processed. They are part of the ANSI standard and are typically compatible among various SQL dialects. However, most RDBMS don’t yet support the full ANSI specification.

Window functions are an advanced and efficient T-SQL programming tool. Developers use them widely to solve query problems that would otherwise require self-joins or correlated subqueries.

SQL Server currently supports the following window functions:
+ Ranking functions: `ROW_NUMBER`, `RANK`, `DENSE_RANK`, and `NTILE`.
+ Aggregate functions: `AVG`, `MIN`, `MAX`, `SUM`, `COUNT`, `COUNT_BIG`, `VAR`, `VARP`, `STDEV`, `STDEVP`, `STRING_AGG`, `GROUPING`, `GROUPING_ID`, and `CHECKSUM_AGG`.
+ Analytic functions: `LAG`, `LEAD`, `FIRST_VALUE`, `LAST_VALUE`, `PERCENT_RANK`, `PERCENTILE_CONT`, `PERCENTILE_DISC`, and `CUME_DIST`.
+ Other functions: `NEXT VALUE FOR`. For more information, see [Sequences and Identity](chap-sql-server-aurora-pg.tsql.sequences.md).

### Syntax


```
<Function()>
OVER
(
[ <PARTITION BY clause> ]
[ <ORDER BY clause> ]
[ <ROW or RANGE clause> ]
)
```

### Examples


The following example creates and populates an OrderItems table.

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200),
(3, 'M6 Locking Nut', 300);
```

The following example uses a window ranking function to rank items based on the ordered quantity.

```
SELECT Item,
  Quantity,
  RANK() OVER(ORDER BY Quantity) AS QtyRank
FROM OrderItems;
```

The preceding example produces the following results.

```
Item            Quantity  QtyRank
M8 Bolt         100       1
M8 Nut          100       1
M8 Washer       200       3
M6 Locking Nut  300       4
```

The following example uses a partitioned window aggregate function to calculate the total quantity for each order. This statement doesn’t use a `GROUP BY` clause.

```
SELECT Item,
  Quantity,
  OrderID,
  SUM(Quantity)
  OVER (PARTITION BY OrderID) AS TotalOrderQty
FROM OrderItems;
```

The preceding example produces the following results.

```
Item            Quantity  OrderID  TotalOrderQty
M8 Bolt         100       1        100
M8 Nut          100       2        100
M6 Locking Nut  300       3        500
M8 Washer       200       3        500
```

The following example uses an analytic `LEAD` function to get the next largest quantity for the order.

```
SELECT Item,
  Quantity,
  OrderID,
  LEAD(Quantity)
  OVER (PARTITION BY OrderID ORDER BY Quantity) AS NextQtyOrder
FROM OrderItems;
```

The preceding example produces the following results.

```
Item            Quantity  OrderID  NextQtyOrder
M8 Bolt         100       1        NULL
M8 Nut          100       2        NULL
M8 Washer       200       3        300
M6 Locking Nut  300       3        NULL
```
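
SQL Server also supports an explicit window frame through the `ROW` or `RANGE` clause shown in the syntax section. The following sketch, which isn't part of the original example set, computes a running quantity total within each order using an explicit frame.

```
SELECT Item,
  Quantity,
  OrderID,
  SUM(Quantity)
  OVER (PARTITION BY OrderID ORDER BY Quantity
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningQty
FROM OrderItems;
```

With the sample data above, each single-item order returns its own quantity, and order 3 accumulates to 200 and then 500.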

For more information, see [SELECT - OVER Clause (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/queries/select-over-clause-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL refers to ANSI SQL analytical functions as window functions. They provide the same core functionality as SQL Server analytical functions. Window functions in PostgreSQL operate on a logical partition or window of the result set and return a value for rows in that window.

From a database migration perspective, you should examine PostgreSQL window functions by type and compare them with the equivalent SQL Server window functions to verify compatibility of syntax and output.

**Note**  
Even if a PostgreSQL window function provides the same functionality of a specific SQL Server window function, the returned data type may be different and require application changes.
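For example, `COUNT` returns `int` in SQL Server but `bigint` in PostgreSQL. Where an application depends on the original 32-bit type, an explicit cast is one possible workaround. The following sketch assumes the `OrderItems` table used in the examples later in this topic.

```
SELECT Item,
  CAST(COUNT(*) OVER (PARTITION BY OrderID) AS integer) AS ItemsInOrder
FROM OrderItems;
```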

PostgreSQL provides support for two main types of window functions: aggregate functions and ranking functions.

### PostgreSQL Window Functions by Type



| Function type | Related functions | 
| --- | --- | 
|  Aggregate  |   `avg`, `count`, `max`, `min`, `sum`, `string_agg`   | 
|  Ranking  |   `row_number`, `rank`, `dense_rank`, `percent_rank`, `cume_dist`, `ntile`, `lag`, `lead`, `first_value`, `last_value`, `nth_value`   | 

### PostgreSQL Window Functions



| PostgreSQL window function | Returned data type | Compatible syntax | 
| --- | --- | --- | 
|  count  |  bigint  |  Yes  | 
|  max  |  numeric, string, date/time, network or enum type  |  Yes  | 
|  min  |  numeric, string, date/time, network or enum type  |  Yes  | 
|  avg  |  numeric, double, otherwise same data type as the argument  |  Yes  | 
|  sum  |  bigint, otherwise same data type as the argument  |  Yes  | 
|  rank()  |  bigint  |  Yes  | 
|  row_number()  |  bigint  |  Yes  | 
|  dense_rank()  |  bigint  |  Yes  | 
|  percent_rank()  |  double  |  Yes  | 
|  cume_dist()  |  double  |  Yes  | 
|  ntile()  |  integer  |  Yes  | 
|  lag()  |  Same type as value  |  Yes  | 
|  lead()  |  Same type as value  |  Yes  | 
|  first_value()  |  Same type as value  |  Yes  | 
|  last_value()  |  Same type as value  |  Yes  | 

### Examples


The following example uses the PostgreSQL `rank()` function.

```
SELECT department_id, last_name, salary, commission_pct,
RANK() OVER (PARTITION BY department_id
ORDER BY salary DESC, commission_pct) "Rank"
FROM employees WHERE department_id = 80;

DEPARTMENT_ID  LAST_NAME  SALARY    COMMISSION_PCT  Rank
80             Russell    14000.00  0.40            1
80             Partners   13500.00  0.30            2
80             Errazuriz  12000.00  0.30            3
```

Note that the formatting of certain returned numeric data types differs from SQL Server.

The following example calculates the total salary for department 80.

```
SELECT SUM(salary)
FROM employees WHERE department_id = 80;

SUM(SALARY)
39500.00
```

The following example creates and populates an OrderItems table.

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity) VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200),
(3, 'M6 Locking Nut', 300);
```

The following example uses a window ranking function to rank items based on the ordered quantity.

```
SELECT Item, Quantity, RANK()
  OVER(ORDER BY Quantity) AS QtyRank
FROM OrderItems;

Item            Quantity  QtyRank
M8 Bolt         100       1
M8 Nut          100       1
M8 Washer       200       3
M6 Locking Nut  300       4
```

The following example uses a partitioned window aggregate function to calculate the total quantity for each order. This statement doesn’t use a `GROUP BY` clause.

```
SELECT Item, Quantity, OrderID, SUM(Quantity)
  OVER (PARTITION BY OrderID) AS TotalOrderQty
FROM OrderItems;

Item            Quantity  OrderID  TotalOrderQty
M8 Bolt         100       1        100
M8 Nut          100       2        100
M6 Locking Nut  300       3        500
M8 Washer       200       3        500
```

The following example uses an analytic `LEAD` function to get the next largest quantity for the order.

```
SELECT Item, Quantity, OrderID, LEAD(Quantity)
  OVER (PARTITION BY OrderID ORDER BY Quantity) AS NextQtyOrder
FROM OrderItems;

Item            Quantity  OrderID  NextQtyOrder
M8 Bolt         100       1        NULL
M8 Nut          100       2        NULL
M8 Washer       200       3        300
M6 Locking Nut  300       3        NULL
```

For more information, see [Window Functions](https://www.postgresql.org/docs/13/tutorial-window.html) in the *PostgreSQL documentation*.