

# Special Oracle features and future PostgreSQL content
<a name="chap-oracle-aurora-pg.special"></a>

This section provides reference pages for special Oracle features and PostgreSQL current equivalents, alternatives, and future features.

**Topics**
+ [Oracle character sets and PostgreSQL encoding](chap-oracle-aurora-pg.special.charset.md)
+ [Oracle database links and PostgreSQL dblink and fdwrapper](chap-oracle-aurora-pg.special.dblinks.md)
+ [Oracle DBMS\_SCHEDULER and PostgreSQL scheduled Lambda](chap-oracle-aurora-pg.special.scheduler.md)
+ [Oracle external tables and PostgreSQL integration with Amazon S3](chap-oracle-aurora-pg.special.external.md)
+ [Inline views](chap-oracle-aurora-pg.special.inlineviews.md)
+ [Oracle JSON document support and PostgreSQL JSON support](chap-oracle-aurora-pg.special.json.md)
+ [Oracle and PostgreSQL materialized views](chap-oracle-aurora-pg.special.matviews.md)
+ [Oracle multitenant and PostgreSQL database architecture](chap-oracle-aurora-pg.special.multitenant.md)
+ [Oracle Resource Manager and PostgreSQL dedicated Amazon Aurora clusters](chap-oracle-aurora-pg.special.dedicated.md)
+ [Oracle SecureFile LOBs and PostgreSQL large objects](chap-oracle-aurora-pg.special.lobs.md)
+ [Oracle and PostgreSQL views](chap-oracle-aurora-pg.special.views.md)
+ [Oracle XML DB and PostgreSQL XML type and functions](chap-oracle-aurora-pg.special.xmldb.md)
+ [Oracle Log Miner and PostgreSQL logging options](chap-oracle-aurora-pg.special.log.md)

# Oracle character sets and PostgreSQL encoding
<a name="chap-oracle-aurora-pg.special.charset"></a>

With AWS DMS, you can migrate databases between different database platforms while handling character set and encoding differences. Oracle databases use character sets to define which characters are allowed, while PostgreSQL uses encodings for the same purpose.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |  N/A  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.charset.ora"></a>

Oracle supports most national and international encoded character set standards including extensive support for Unicode.

Oracle provides two scalar string-specific data types:
+  **VARCHAR2** — Stores variable-length character strings with a length between 1 and 4000 bytes. The Oracle database can be configured to use the VARCHAR2 data type to store either Unicode or Non-Unicode characters.
+  **NVARCHAR2** — Scalar data type used to store Unicode data. Supports AL16UTF16 or UTF8, which is specified during database creation.
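
As an illustration of how the two types are used together, the following sketch creates a table with both a VARCHAR2 and an NVARCHAR2 column and then inspects the configured character sets (the table and column names are hypothetical):

```
-- Hypothetical table mixing database and national character set columns
CREATE TABLE translations (
  id          NUMBER PRIMARY KEY,
  source_txt  VARCHAR2(200),      -- stored in the database character set
  unicode_txt NVARCHAR2(200)      -- stored in the national character set
);

-- Inspect the database and national character sets
SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```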

Character sets in Oracle are defined at the instance level (Oracle 11g) or the pluggable database level (Oracle 12c R2). In pre-12cR2 Oracle databases, the character set for the root container and all pluggable databases was required to be identical.

Oracle 18c updates the AL32UTF8 and AL16UTF16 character sets to Unicode standard version 9.0.

### UTF8 Unicode
<a name="chap-oracle-aurora-pg.special.charset.ora.utf"></a>

Oracle’s implementation uses the AL32UTF8 character set, which encodes ASCII and Latin characters as a single byte, some European and Middle-Eastern languages as two bytes, and certain South and East-Asian characters as three bytes. Therefore, Unicode storage requirements are usually higher compared to non-Unicode character sets.

### Character set migration
<a name="chap-oracle-aurora-pg.special.charset.ora.csm"></a>

Two options exist for modifying existing instance-level or database-level character sets:
+ Export/Import from the source Instance/PDB to a new Instance/PDB with a modified character set.
+ Use the Database Migration Assistant for Unicode (DMU), which simplifies the migration process to the Unicode character set.

As of 2012, use of the `CSALTER` utility for character set migrations is deprecated.

**Note**  
Oracle Database 12c Release 1 (12.1.0.1) complies with version 6.1 of the Unicode standard.  
Oracle Database 12c Release 2 (12.2) extends the compliance to version 6.2 of the Unicode standard.  
UTF-8 is supported through the AL32UTF8 CS and is valid as both the client and database character sets.  
UTF-16BE is supported through AL16UTF16 and is valid as the national (NCHAR) character set.

For more information, see [Choosing a Character Set](https://docs.oracle.com/en/database/oracle/oracle-database/19/nlspg/choosing-character-set.html#GUID-BF26E01D-AB92-48FC-855A-69A5B3AF9A92), [Locale Data](https://docs.oracle.com/en/database/oracle/oracle-database/19/nlspg/appendix-A-locale-data.html#GUID-A9E30C27-FD47-4552-B670-F41A95B11405), and [Supporting Multilingual Databases with Unicode](https://docs.oracle.com/en/database/oracle/oracle-database/19/nlspg/supporting-multilingual-databases-with-unicode.html#GUID-AA09A60E-123E-457C-ACE1-89E4634E492C) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.charset.pg"></a>

PostgreSQL supports a variety of character sets, also known as encodings, including support for both single-byte and multi-byte languages. The default character set is specified when initializing your PostgreSQL database cluster with initdb. Each database created in the PostgreSQL cluster can use its own character set, defined as part of database creation.
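
As a quick sketch of where encoding is set, you can inspect the encodings for the current connection and create a database with its own encoding (the database name below is hypothetical):

```
-- Check the server and client encodings for the current connection
SHOW server_encoding;
SHOW client_encoding;

-- Create a database with its own encoding (requires template0)
CREATE DATABASE utf8_db WITH ENCODING 'UTF8'
  LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' TEMPLATE=template0;
```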

**Note**  
Starting with PostgreSQL 13, the Windows version supports obtaining version information for collations (ordering rules) from the OS. This option is relevant for self-managed PostgreSQL installations running on Windows.

Prior to version 13, when querying the collversion from pg\_collation in PostgreSQL running on Windows, there was no value reflecting the OS collation version; for example, version 11 running on Windows.

```
CREATE COLLATION german (provider = libc, locale = 'de_DE');

CREATE COLLATION

select oid,collname,collversion from pg_collation
where collprovider='c' and collname='german';

oid    collname  collversion
16394  german
(1 row)

select pg_collation_actual_version (16394);

pg_collation_actual_version
(1 row)
```

Starting with PostgreSQL 13 running on Windows, the collation version is populated.

```
CREATE COLLATION german (provider = libc, locale = 'de_DE');

CREATE COLLATION

select oid,collname,collversion from pg_collation
where collprovider='c' and collname='german';

oid    collname  collversion
32769  german    1539.5,1539.5
(1 row)

select pg_collation_actual_version (32769);

pg_collation_actual_version
1539.5,1539.5
(1 row)
```

**Note**  
All supported character sets can be used by clients. However, some client-side-only character sets are not supported for use within the server.  
Unlike Oracle, PostgreSQL doesn’t support an NVARCHAR data type and doesn’t offer support for UTF-16.


| Type | Function | Implementation level | 
| --- | --- | --- | 
|  Encoding  |  Defines the basic rules on how alphanumeric characters are represented in binary format, for example, Unicode Encoding.  |  Database  | 
|  Locale  |  Superset which includes LC\_COLLATE and LC\_CTYPE, among others. LC\_COLLATE defines how strings are sorted and needs to be a subset supported by the database Encoding. LC\_CTYPE is used to classify if a character is a digit, letter, whitespace, punctuation, and so on.  |  Table-Column  | 

 **Examples** 

Create a database named test01 that uses the Korean EUC\_KR encoding and the ko\_KR locale.

```
CREATE DATABASE test01 WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0;
```

View the character sets configured for each database by querying the System Catalog.

```
select datname, datcollate, datctype from pg_database;
```

### Changing character sets or encoding
<a name="chap-oracle-aurora-pg.special.charset.pg.change"></a>

In-place modification of the database encoding is neither recommended nor supported. You must export all data, create a new database with the new encoding, and import the data.

Export the data using the pg\_dump utility.

```
pg_dump mydb1 > mydb1_export.sql
```

Rename or delete your current database.

```
ALTER DATABASE mydb1 RENAME TO mydb1_backup;
```

Create a new database using the modified encoding.

```
CREATE DATABASE mydb1_new_encoding WITH ENCODING 'UNICODE' TEMPLATE=template0;
```

Import the data using the pg\_dump file previously created. Make sure that you set your client encoding to the encoding of your old database.

```
PGCLIENTENCODING=OLD_DB_ENCODING psql -f mydb1_export.sql mydb1_new_encoding
```

**Note**  
Using the client\_encoding parameter overrides the use of PGCLIENTENCODING.

### Client/server character set conversions
<a name="chap-oracle-aurora-pg.special.charset.pg.client"></a>

PostgreSQL supports conversion of character sets between server and client for specific character set combinations as described in the pg\_conversion system catalog.

PostgreSQL includes predefined conversions. For a complete list, see [Built-in Client/Server Character Set Conversions](https://www.postgresql.org/docs/13/static/multibyte.html#MULTIBYTE-TRANSLATION-TABLE).

You can create a new conversion using the SQL command CREATE CONVERSION.

 **Examples** 

Create a conversion from UTF8 to LATIN1 using a custom-made myfunc1 function.

```
CREATE CONVERSION myconv FOR 'UTF8' TO 'LATIN1' FROM myfunc1;
```

Configure the PostgreSQL client character set using the psql `\encoding` meta-command or the `SET` SQL command.

```
\encoding SJIS

SET CLIENT_ENCODING TO 'value';
```

View the client character set and reset it back to the default value.

```
SHOW client_encoding;

RESET client_encoding;
```

### Table level collation
<a name="chap-oracle-aurora-pg.special.charset.pg.collation"></a>

PostgreSQL supports specifying the sort order and character classification behavior on a per-column level.

 **Example** 

Specify collations for individual table columns.

```
CREATE TABLE test1 (col1 text COLLATE "de_DE", col2 text COLLATE "es_ES");
```
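
A collation can also be applied per expression at query time, to override a column's declared collation in a sort or comparison. A minimal sketch, assuming the de\_DE locale from the earlier example is installed:

```
-- Sort col1 with German rules regardless of the column's declared collation
SELECT col1 FROM test1 ORDER BY col1 COLLATE "de_DE";

-- Compare using the built-in byte-order collation instead
SELECT col1 FROM test1 WHERE col1 > 'M' COLLATE "C";
```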

## Summary
<a name="chap-oracle-aurora-pg.special.charset.summary"></a>


| Feature | Oracle |  Aurora PostgreSQL | 
| --- | --- | --- | 
|  View database character set  |  <pre>SELECT * FROM NLS_DATABASE_PARAMETERS;</pre>  |  <pre>select datname,<br />  pg_encoding_to_char(encoding),<br />  datcollate, datctype<br />  from pg_database;</pre>  | 
|  Modify the database character set  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/chap-oracle-aurora-pg.special.charset.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/chap-oracle-aurora-pg.special.charset.html)  | 
|  Character set granularity  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/chap-oracle-aurora-pg.special.charset.html)  |  Database  | 
|  UTF8  |  Supported by VARCHAR2 and NVARCHAR2 data types  |  Supported by VARCHAR data type  | 
|  UTF16  |  Supported by NVARCHAR2 datatype  |  Not Supported  | 
|  NCHAR/NVARCHAR data types  |  Supported  |  Not Supported  | 

For more information, see [Character Set Support](https://www.postgresql.org/docs/10/multibyte.html) in the *PostgreSQL documentation*.

# Oracle database links and PostgreSQL dblink and fdwrapper
<a name="chap-oracle-aurora-pg.special.dblinks"></a>

With AWS DMS, you can integrate heterogeneous database systems by creating database links between different database management systems. Oracle database links and PostgreSQL dblink/fdwrapper facilitate access to data in remote databases from a local database, enabling queries and data manipulation across distributed environments.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |   [Database Links](chap-oracle-aurora-pg.tools.actioncode.md#chap-oracle-aurora-pg.tools.actioncode.databaselinks)   |  Different paradigm and syntax  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.dblinks.ora"></a>

Database links are schema objects used to interact with remote database objects such as tables. Common use cases for database links include selecting data from tables that reside in a remote database.

To use database links, Oracle Net Services must be installed on both the local and remote database servers to facilitate communication.

 **Examples** 

Create a database link named remote\_db. When creating a database link, you have the option to specify the remote database destination using a TNS entry or to specify the full TNS connection string.

```
CREATE DATABASE LINK remote_db CONNECT TO username IDENTIFIED BY password USING 'remote';
CREATE DATABASE LINK remotenoTNS CONNECT TO username IDENTIFIED BY password
  USING '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS = (PROTOCOL = TCP)(HOST =192.168.1.1)
  (PORT =1521)))(CONNECT_DATA =(SERVICE_NAME = orcl)))';
```

After the database link is created, you can use the database link directly as part of a SQL query using the database link name (@remote\$1db) as a suffix to the table name.

```
SELECT * FROM employees@remote_db;
```

Database links also support DML commands.

```
INSERT INTO employees@remote_db
(employee_id, last_name, email, hire_date, job_id) VALUES
(999, 'Claus', 'sclaus@example.com', SYSDATE, 'SH_CLERK');

UPDATE jobs@remote_db SET min_salary = 3000 WHERE job_id = 'SH_CLERK';

DELETE FROM employees@remote_db WHERE employee_id = 999;
```

For more information, see [Managing Database Links](https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/managing-a-distributed-database.html#GUID-7B0C4627-4473-4313-88D5-FD03CA42D9EA) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.dblinks.pg"></a>

Querying data in remote databases in PostgreSQL is available through two primary options:

1.  `dblink` database link function.

1.  `postgres_fdw` (Foreign Data Wrapper, FDW) extension.

The PostgreSQL foreign data wrapper extension offers functionality that is similar to dblink. However, it aligns more closely with the SQL standard and can provide improved performance.

 **Example of using dblink** 

Load the `dblink` extension into PostgreSQL.

```
CREATE EXTENSION dblink;
```

Create a persistent connection to a remote PostgreSQL database using the dblink\_connect function, specifying a connection name (myconn), database name (postgres), port (5432), host (hostname), user (username), and password (password).

```
SELECT dblink_connect
('myconn', 'dbname=postgres port=5432
host=hostname user=username password=password');
```

The connection can be used to run queries against the remote database.

Run a query using the previously created connection (myconn) with the dblink function. The query returns the id and name columns from the employees table on the remote database. You must specify the connection name, the SQL query to run, and the names and data types of the selected columns (id and name in this example).

```
SELECT * from dblink
('myconn', 'SELECT id, name FROM EMPLOYEES')
AS p(id int,fullname text);
```

Close the connection using the dblink\_disconnect function.

```
SELECT dblink_disconnect('myconn');
```

Alternatively, you can use the `dblink` function specifying the full connection string to the remote PostgreSQL database, including: database name, port, hostname, username, and password. This can be done instead of using a previously defined connection. You must also specify the SQL query to run as well as parameters and datatypes for the selected columns (id and name, in this example).

```
SELECT * from dblink
('dbname=postgres port=5432 host=hostname user=username password=password',
'SELECT id, name FROM EMPLOYEES') AS p(id int,fullname text);
```

DML commands are supported on tables referenced through the `dblink` function. For example, you can insert a new row and then delete it from the remote table.

```
SELECT * FROM dblink('myconn',$$INSERT into employees
VALUES (3,'New Employees No.3!')$$) AS t(message text);

SELECT * FROM dblink('myconn',$$DELETE FROM employees
WHERE id=3$$) AS t(message text);
```

Create a new local `new_employees_table` table by querying data from a remote table.

```
SELECT emps.* INTO new_employees_table
FROM dblink('myconn','SELECT * FROM employees')
AS emps(id int, name varchar);
```

Join remote data with local data.

```
SELECT local_emps.id, local_emps.name, s.sale_year, s.sale_amount
FROM local_emps INNER JOIN
dblink('myconn','SELECT * FROM sales')
AS s(id int, sale_year int, sale_amount numeric)
ON local_emps.id = s.id;
```

Run DDL statements in the remote database.

```
SELECT * FROM dblink('myconn',$$CREATE table new_remote_tbl
(a int, b text)$$) AS t(a text);
```

For more information, see [dblink](https://www.postgresql.org/docs/13/dblink.html) in the *PostgreSQL documentation*.

 **Example of using the PostgreSQL Foreign Data Wrapper** 

Load the fdw extension into PostgreSQL.

```
CREATE EXTENSION postgres_fdw;
```

Create a connection to the remote PostgreSQL database specifying the remote server (hostname), database name (postgresql) and the port (5432).

```
CREATE SERVER remote_db
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'hostname', dbname 'postgresql', port '5432');
```

Create the user mapping. The local\_user must be an existing user with permissions in the current database, remote\_db is the server connection created in the previous command, and the user and password specified in the OPTIONS clause must have the required permissions in the remote database.

```
CREATE USER MAPPING FOR local_user
SERVER remote_db
OPTIONS (user 'remote_user', password 'remote_password');
```

After the connection with login credentials for the remote database is created, you can import individual tables or an entire schema containing all, or some, of the tables and views.

Create a `FOREIGN TABLE` named `foreign_emp_tbl` using the remote\_db connection created earlier, specifying both the schema name and the table name in the remote database to be queried; in this example, the `hr.employees` table.

```
CREATE FOREIGN TABLE foreign_emp_tbl (
  id int, name text)
  SERVER remote_db
  OPTIONS (schema_name 'hr', table_name 'employees');
```

Queries running on the local `foreign_emp_tbl` table will actually query data directly from the remote `hr.employees` table.

```
SELECT * FROM foreign_emp_tbl;
```

You can also import an entire remote schema, or limit the import to specific tables, without defining each foreign table individually.

```
IMPORT FOREIGN SCHEMA hr LIMIT TO (employees)
FROM SERVER remote_db INTO local_hr;
```

Both dblink and FDW store the remote database user name and password as plain text in two locations:
+ The pg\_user\_mapping view, accessible only to superusers in the database.
+ When using the dblink function, passwords can be stored in your code or in procedures inside the database.

Any changes to PostgreSQL user passwords require changing the FDW/dblink specifications as well.
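
For FDW connections, the stored credentials can be updated in place with `ALTER USER MAPPING` rather than re-creating the mapping. A minimal sketch, using the user and server names from the earlier example:

```
-- Update the stored remote password after it changes on the remote cluster
ALTER USER MAPPING FOR local_user
SERVER remote_db
OPTIONS (SET password 'new_remote_password');
```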

When using FDW, if columns in the remote tables have been dropped or renamed, queries against them fail and the foreign tables must be re-created.

### PostgreSQL dblink compared to PostgreSQL foreign data wrapper
<a name="chap-oracle-aurora-pg.special.dblinks.pg.compare"></a>


| Description | PostgreSQL dblink | PostgreSQL Foreign Data Wrapper | 
| --- | --- | --- | 
|  Create a permanent reference to a remote table using a database link  |  Not supported  |  Supported after you define the FDW server and create a user mapping: <pre>CREATE FOREIGN TABLE foreign_emp_tbl<br />(id int, name text, address text )<br />SERVER foreign_server<br />OPTIONS (schema_name 'hr',<br />table_name 'employees');</pre>  | 
|  Query remote data  |  <pre>SELECT * FROM dblink('myconn',<br />  'SELECT * FROM employees')<br />  AS p(id int,fullname text,<br />  address text);</pre>  |  <pre>SELECT * FROM foreign_emp_tbl;</pre>  | 
|  DML on remote data  |  <pre>SELECT * FROM dblink('myconn',<br />$$INSERT into employees<br />VALUES (45,'Dan','South side 7432,<br />NY')$$) AS t(id int, name text,<br />address text);</pre>  |  <pre>INSERT into foreign_emp_tb<br />VALUES (45,'Dan','South side 7432,<br />NY'); (Regular DML)</pre>  | 
|  Run DDL on remote objects  |  <pre>SELECT * FROM dblink ('myconn',$$CREATE table<br />my_remote_tbl (a int, b text)$$) AS t(a text);</pre>  |  Not supported  | 

## Summary
<a name="chap-oracle-aurora-pg.special.dblinks.summary"></a>


| Description | Oracle | PostgreSQL dblink | 
| --- | --- | --- | 
|  Create a permanent named database link  |  <pre>CREATE DATABASE LINK remote<br />CONNECT TO username IDENTIFIED<br />BY password USING 'remote';</pre>  |  Not supported. You must manually open the connection to the remote database in your sessions or queries: <pre>SELECT dblink_connect('myconn',<br />'dbname=postgres port=5432<br />host=hostname user=username<br />password=password');</pre>  | 
|  Query using a database link  |  <pre>SELECT * FROM employees@remote;</pre>  |  <pre>SELECT * FROM dblink<br />('myconn','SELECT * FROM employees')<br /> AS p(id int,fullname text, address text);</pre>  | 
|  DML using database link  |  <pre>INSERT INTO employees@remote<br />(employee_id, last_name, email,<br />hire_date, job_id) VALUES (999,<br />'Claus','sclaus@example.com',<br />SYSDATE,'SH_CLERK');</pre>  |  <pre>SELECT * FROM dblink<br />('myconn',$$INSERT into employees<br />VALUES (45,'Dan','South side 7432, NY'<br />)$$) AS t(id int, name text, address text);</pre>  | 
|  Heterogeneous database link connections, such as Oracle to PostgreSQL or vice-versa  |  Supported.  |  Not supported. You can't create the oracle\_fdw extension on Amazon RDS.  | 
|  Run DDL using a database link  |  Not supported directly, but you can run a procedure or create a job on the remote database that runs the desired DDL commands. <pre>dbms_job@remote.submit(<br />  l_job, 'execute immediate<br />  ''create table t ( x int)'''<br />  ); commit;</pre>  |  <pre>SELECT * FROM dblink (<br />  'myconn',$$CREATE table my_remote_tbl<br />  (a int, b text)$$) AS t(a text);</pre>  | 
|  Delete a database link  |  <pre>drop database link remote;</pre>  |  Not supported. Close the DBLink connection instead. <pre>SELECT dblink_disconnect ('myconn');</pre>  | 

For more information, see [postgres\_fdw](https://www.postgresql.org/docs/13/postgres-fdw.html) in the *PostgreSQL documentation*.

# Oracle DBMS\_SCHEDULER and PostgreSQL scheduled Lambda
<a name="chap-oracle-aurora-pg.special.scheduler"></a>

With AWS DMS, you can schedule and automate database tasks using Oracle DBMS\_SCHEDULER and PostgreSQL scheduled Lambda. Oracle DBMS\_SCHEDULER is a job scheduler that allows defining and running recurring or one-time jobs. PostgreSQL scheduled Lambda lets you invoke AWS Lambda functions on a schedule.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[One star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-1.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |  N/A  |  N/A  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.scheduler.ora"></a>

The DBMS\_SCHEDULER package contains a collection of scheduling functions and procedures that can be called from PL/SQL to create jobs and manage their attributes.

When creating a job, there are two supporting objects that are typically created as well: `PROGRAM` and `SCHEDULE`.

A program defines what runs when a job invokes it.

The scheduler can run a database program unit (for example, a stored procedure) or an external executable (for example, a shell script on the file system).

There are three ways to run jobs: time-based scheduling, event-based jobs, and dependency (chained) jobs.

 **Time-based scheduling** 

The following commands create a job with a program and a schedule:

1. Create a program that calls the `UPDATE_HR_SCHEMA_STATS` procedure in the `HR` schema.

1. Create a schedule that sets the run interval for jobs that use it. This schedule runs the job every hour.

1. Create the job itself, referencing the program and schedule.

```
BEGIN
DBMS_SCHEDULER.CREATE_PROGRAM(
program_name => 'CALC_STATS',
program_action => 'HR.UPDATE_HR_SCHEMA_STATS',
program_type => 'STORED_PROCEDURE',
enabled => TRUE);
END;
/

BEGIN
DBMS_SCHEDULER.CREATE_SCHEDULE(
schedule_name => 'stats_schedule',
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=HOURLY;INTERVAL=1',
comments => 'Every hour');
END;
/

BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'my_new_job3',
program_name => 'CALC_STATS',
schedule_name => 'stats_schedule');
END;
/
```

Create a job without a program or schedule:

1.  `job_type` — `EXECUTABLE` specifies that the job runs an external script.

1.  `job_action` — Defines the location of the external script.

1.  `start_date` — Defines when the job becomes enabled.

1.  `repeat_interval` — Defines when the job runs; in this example, every day at hour 23 (11:00 PM).

```
BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name=>'HR.BACKUP',
job_type => 'EXECUTABLE',
job_action => '/home/usr/dba/rman/nightly_bck.sh',
start_date=> SYSDATE,
repeat_interval=>'FREQ=DAILY;BYHOUR=23',
comments => 'Nightly backups');
END;
/
```

After the job is created, its attributes can be updated with the `SET_ATTRIBUTE` procedure.

```
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
name => 'my_emp_job1',
attribute => 'repeat_interval',
value => 'FREQ=DAILY');
END;
/
```

 **Event-based jobs** 

The following example creates a schedule that can be used to start a job whenever the scheduler receives an event indicating that a file arrived on the system before 9:00 AM, and then creates a job that uses this schedule.

```
BEGIN
DBMS_SCHEDULER.CREATE_EVENT_SCHEDULE (
schedule_name => 'scott.file_arrival',
start_date => systimestamp,
event_condition => 'tab.user_data.object_owner = ''SCOTT''
and tab.user_data.event_name = ''FILE_ARRIVAL''
and extract(hour from tab.user_data.event_timestamp) < 9',
queue_spec => 'my_events_q');
END;
/

BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'my_job',
program_name => 'my_program',
start_date => '15-JUL-04 1.00.00AM US/Pacific',
event_condition => 'tab.user_data.event_name = ''LOW_INVENTORY''',
queue_spec => 'my_events_q',
enabled => TRUE,
comments => 'my event-based job');
END;
/
```

 **Dependency jobs (Chained)** 

1. Use `DBMS_SCHEDULER.CREATE_CHAIN` to create a chain.

1. Use `DBMS_SCHEDULER.DEFINE_CHAIN_STEP` to define three steps for this chain. Referenced programs must be enabled.

1. Use `DBMS_SCHEDULER.DEFINE_CHAIN_RULE` to define corresponding rules for the chain.

1. Use `DBMS_SCHEDULER.ENABLE` to enable the chain.

1. Use `DBMS_SCHEDULER.CREATE_JOB` to create a chain job to start the chain daily at 1:00 p.m.

```
BEGIN
DBMS_SCHEDULER.CREATE_CHAIN (
chain_name => 'my_chain1',
rule_set_name => NULL,
evaluation_interval => NULL,
comments => NULL);
END;
/

BEGIN
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain1', 'stepA', 'my_program1');
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain1', 'stepB', 'my_program2');
DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain1', 'stepC', 'my_program3');
END;
/

BEGIN
DBMS_SCHEDULER.DEFINE_CHAIN_RULE ('my_chain1', 'TRUE', 'START stepA');
DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
'my_chain1', 'stepA COMPLETED', 'Start stepB, stepC');
DBMS_SCHEDULER.DEFINE_CHAIN_RULE (
'my_chain1', 'stepB COMPLETED AND stepC COMPLETED', 'END');
END;
/

BEGIN
DBMS_SCHEDULER.ENABLE('my_chain1');
END;
/

BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'chain_job_1',
job_type => 'CHAIN',
job_action => 'my_chain1',
repeat_interval => 'freq=daily;byhour=13;byminute=0;bysecond=0',
enabled => TRUE);
END;
/
```

There are two additional objects to help maintain your jobs:

1.  `JOB CLASS` — When several jobs share the same behavior and attributes, you can group them into a larger logical group called a job class, and prioritize between job classes by allocating a higher percentage of available resources to a class.

1.  `WINDOW` — When you want to prioritize jobs based on schedule, you can create a window of time during which jobs can run; for example, during off-peak hours or at the end of the month.
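
A minimal sketch of both objects, assuming a resource consumer group named `high_prio_group` already exists (all names here are hypothetical):

```
BEGIN
DBMS_SCHEDULER.CREATE_JOB_CLASS (
job_class_name => 'critical_jobs',
resource_consumer_group => 'high_prio_group',
comments => 'High-priority maintenance jobs');

DBMS_SCHEDULER.CREATE_WINDOW (
window_name => 'nightly_window',
resource_plan => NULL,
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=DAILY;BYHOUR=22',
duration => INTERVAL '4' HOUR,
comments => 'Off-peak window for batch jobs');
END;
/
```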

For more information, see [Scheduling Jobs with Oracle Scheduler](https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/scheduling-jobs-with-oracle-scheduler.html#GUID-D41660D0-D88F-4D9F-8CC8-63D040EDC4E6) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.scheduler.pg"></a>

Aurora PostgreSQL can be combined with Amazon CloudWatch and AWS Lambda to achieve similar functionality. For more information, see [Sending an Email from Aurora PostgreSQL using Lambda Integration](chap-oracle-aurora-pg.sql.mail.md#chap-oracle-aurora-pg.sql.mail.pg).
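
As a sketch of the Lambda side of this integration, Aurora PostgreSQL can invoke a function directly through the `aws_lambda` extension. The function ARN and payload below are hypothetical, and the extension must be installed and the cluster authorized to call Lambda:

```
-- One-time setup on the Aurora PostgreSQL cluster
CREATE EXTENSION IF NOT EXISTS aws_lambda CASCADE;

-- Invoke a (hypothetical) Lambda function synchronously with a JSON payload
SELECT * FROM aws_lambda.invoke(
  aws_commons.create_lambda_function_arn(
    'arn:aws:lambda:us-east-1:123456789012:function:my-scheduled-task'),
  '{"action": "nightly-report"}'::json);
```

An Amazon CloudWatch (EventBridge) schedule rule can then trigger the same Lambda function on a recurring basis, replacing a time-based DBMS\_SCHEDULER job.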

# Oracle external tables and PostgreSQL integration with Amazon S3
<a name="chap-oracle-aurora-pg.special.external"></a>

With AWS DMS, you can migrate data from on-premises databases to Amazon S3 by creating Oracle external tables or integrating PostgreSQL with Amazon S3. Oracle external tables provide access to data stored in Amazon S3, treating objects as records in a table. PostgreSQL integration with Amazon S3 lets you query data directly from Amazon S3 using the SQL/PostgreSQL interface.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[No feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-0.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |   [Creating Tables](chap-oracle-aurora-pg.tools.actioncode.md#chap-oracle-aurora-pg.tools.actioncode.tables)   |  PostgreSQL doesn’t support external tables.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.external.ora"></a>

The Oracle external tables feature allows you to create a table in your database that reads data from a source located outside your database (externally).

Beginning with Oracle 12.2, the external table can be partitioned, providing all the benefits of a regular partitioned table.

Oracle 18c adds support for inline external tables, which let you query data from an external source in a SQL statement without having to define and create an external table first.

```
SELECT * FROM EXTERNAL ((i NUMBER, d DATE)
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY '|') LOCATION ('test.csv') REJECT LIMIT UNLIMITED) tst_external;
```

 **Examples** 

 Use `CREATE TABLE` with `ORGANIZATION EXTERNAL` to identify the table as external. Specify the `TYPE` to let the database choose the right access driver for the data source. The options are:
+  `ORACLE_LOADER` — The data must be sourced from text data files (default).
+  `ORACLE_DATAPUMP` — The data must be sourced from binary dump files. You can write dump files only as part of creating an external table with the `CREATE TABLE AS SELECT` statement. Once the dump file is created, it can be read any number of times, but it can’t be modified (that is, no DML operations can be performed).
+  `ORACLE_HDFS` — Extracts data stored in a Hadoop Distributed File System (HDFS).
+  `ORACLE_HIVE` — Extracts data stored in Apache Hive.

The following clauses complete the definition:
+  `DEFAULT DIRECTORY` — The database directory object that points to the location of the data files.
+  `ACCESS PARAMETERS` — Defines the delimiter character and the query fields.
+  `LOCATION` — The file name for the first two data source types, or the URI for the Hadoop data source (not used with the Hive data source).

```
CREATE TABLE emp_load
(id CHAR(5), emp_dob CHAR(20), emp_lname CHAR(30),
  emp_fname CHAR(30),emp_start_date DATE) ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER DEFAULT DIRECTORY data_dir ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE FIELDS (id CHAR(2), emp_dob CHAR(20),
emp_lname CHAR(18), emp_fname CHAR(11), emp_start_date CHAR(10)
date_format DATE mask "mm/dd/yyyy"))
LOCATION ('info.dat'));
```

For more information, see [External Tables Concepts](https://docs.oracle.com/en/database/oracle/oracle-database/18/sutil/oracle-external-tables-concepts.html#GUID-44323E01-7D72-45EC-915A-99E596769D9E) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.external.pg"></a>

 Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use it to store and protect any amount of data.

The following diagram illustrates the solution architecture.

![\[Oracle external tables solution architecture\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-oracle-external-tables.png)


Aurora PostgreSQL integration with Amazon S3 is the closest equivalent to Oracle external tables, but it requires a significant amount of syntax modification. The main difference is that there is no open link to files; the data must be transferred to and from PostgreSQL (if all data is needed).

There are two important operations for Aurora for PostgreSQL and Amazon S3 integration:
+ Saving data to an Amazon S3 file.
+ Loading data from an Amazon S3 file.

Aurora PostgreSQL must have permission to access the Amazon S3 bucket. For more information, see the links at the end of this section.

Oracle 18c introduced the inline external table feature. This can't be achieved directly in Aurora PostgreSQL; depending on the use case, other AWS services can be considered. For ETL workloads, for example, consider using AWS Glue.

### Saving data to Amazon S3
<a name="chap-oracle-aurora-pg.special.external.pg.save"></a>

You can use the `aws_s3.query_export_to_s3` function to query data from an Amazon Aurora PostgreSQL and save it directly into text files stored in an Amazon S3 bucket. Use this functionality to avoid transferring data to the client first, and then copying the data from the client to Amazon S3.

**Note**  
The default file size threshold is six gigabytes (GB). If the data selected by the statement is less than the file size threshold, a single file is created. Otherwise, multiple files are created.

If the run fails, files already uploaded to Amazon S3 remain in the specified Amazon S3 bucket. You can use another statement to upload the remaining data instead of starting over again.

If the amount of data to be selected is more than 25 GB, it is recommended to use multiple runs on different portions of the data to save it to Amazon S3.
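A large export can be split into separate runs over disjoint portions of the data. The following statements are an illustrative sketch only; the `employees` table, its `hire_date` column, the bucket name, and the file prefixes are assumptions for the example:

```
-- Hypothetical sketch: split a large export into two runs by date range.
SELECT * FROM aws_s3.query_export_to_s3(
  'SELECT * FROM employees WHERE hire_date < DATE ''2015-01-01''',
  aws_commons.create_s3_uri('aurora-select-into-s3-pdx', 'employees_part1', 'us-west-2'));

SELECT * FROM aws_s3.query_export_to_s3(
  'SELECT * FROM employees WHERE hire_date >= DATE ''2015-01-01''',
  aws_commons.create_s3_uri('aurora-select-into-s3-pdx', 'employees_part2', 'us-west-2'));
```

Because each run writes its own file prefix, a failed run can be retried without re-exporting the other portion.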

Meta-data, such as table schema or file meta-data, is not uploaded by Aurora PostgreSQL to Amazon S3.

 **Examples** 

Install the `aws_s3` extension.

```
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
```

The following statement selects all data in the employees table and saves the data into an Amazon S3 bucket in a different region from the Aurora PostgreSQL instance. The statement returns an error if files that match the `sample_employee_data` file prefix exist in the specified Amazon S3 bucket.

```
SELECT *
FROM aws_s3.query_export_to_s3(
'SELECT * FROM employees',
aws_commons.create_s3_uri(
'aurora-select-into-s3-pdx',
'sample_employee_data','us-west-2'));
```

The following statement selects all data in the employees table and saves the data into an Amazon S3 bucket in the same region as the Aurora PostgreSQL DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character. The statement returns an error if files that match the `sample_employee_data` file prefix exist in the specified Amazon S3 bucket.

```
SELECT *
FROM aws_s3.query_export_to_s3(
'SELECT * FROM employees',
aws_commons.create_s3_uri(
'aurora-select-into-s3-pdx',
'sample_employee_data','us-west-2'), options :='format csv, delimiter $$,$$');
```

### Query export to Amazon S3 summary
<a name="chap-oracle-aurora-pg.special.external.pg.summary"></a>


| Field | Description | 
| --- | --- | 
|  query  |  A required text string containing an SQL query that the PostgreSQL engine runs. The results of this query are copied to an Amazon S3 bucket identified in the s3_info parameter.  | 
|  bucket  |  A required text string containing the name of the Amazon S3 bucket that contains the file.  | 
|  file_path  |  A required text string containing the Amazon S3 file name including the path of the file.  | 
|  region  |  An optional text string containing the AWS Region that the bucket is in.  | 
|  options  |  An optional text string containing arguments for the PostgreSQL `COPY` command. For more information, see [COPY](https://www.postgresql.org/docs/current/sql-copy.html) in the *PostgreSQL documentation*.  | 

For more information, see [Export and import data from Amazon S3 to Amazon Aurora PostgreSQL](https://aws.amazon.com/blogs/database/export-and-import-data-from-amazon-s3-to-amazon-aurora-postgresql/).

### Loading Data from Amazon S3
<a name="chap-oracle-aurora-pg.special.external.pg.load"></a>

You can use the `table_import_from_s3` function to load data from files stored in an Amazon S3 bucket.

 **Examples** 

The following example runs the `table_import_from_s3` function to import a gzipped CSV file from Amazon S3 into the `test_gzip` table.

```
CREATE TABLE test_gzip(id int, a text, b text, c text, d text);

SELECT aws_s3.table_import_from_s3('test_gzip', '',
'(format csv)', 'myS3Bucket', 'test-data.gz', 'us-east-2');
```

### Table import from Amazon S3 summary
<a name="chap-oracle-aurora-pg.special.external.pg.loadsummary"></a>


| Field | Description | 
| --- | --- | 
|  table_name  |  A required text string containing the name of the PostgreSQL database table to import the data into.  | 
|  column_list  |  A required text string containing an optional list of the PostgreSQL database table columns in which to copy the data. If the string is empty, all columns of the table are used.  | 
|  options  |  A required text string containing arguments for the PostgreSQL `COPY` command. For more information, see [COPY](https://www.postgresql.org/docs/current/sql-copy.html) in the *PostgreSQL documentation*.  | 
|  s3_info  |  An `aws_commons._s3_uri_1` composite type containing the following information about the Amazon S3 object: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/chap-oracle-aurora-pg.special.external.html)  | 
|  credentials  |  The credentials parameter specifies the credentials to access Amazon S3. When you use this parameter, you don’t use an IAM role.  | 

For more information, see [Importing data into PostgreSQL on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html) in the *Amazon RDS user guide*.
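As an alternative to an IAM role, the `credentials` parameter can pass explicit keys built with `aws_commons.create_aws_credentials`. The following is a hedged sketch reusing the earlier `test_gzip` example; the bucket name and key values are placeholders:

```
-- Hypothetical sketch: import using explicit credentials instead of an IAM role.
SELECT aws_s3.table_import_from_s3(
  'test_gzip', '', '(format csv)',
  aws_commons.create_s3_uri('myS3Bucket', 'test-data.gz', 'us-east-2'),
  aws_commons.create_aws_credentials('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', ''));
```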

# Inline views
<a name="chap-oracle-aurora-pg.special.inlineviews"></a>

With AWS DMS, you can create and use inline views to streamline data transformations during migration tasks. An inline view is a subquery that acts as a virtual table, allowing you to combine and manipulate data from multiple sources without creating a persistent database object.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Five star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-5.png)   |   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-5.png)   |  N/A  |  N/A  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.inlineviews.ora"></a>

Inline views are `SELECT` statements located in the `FROM` clause of another `SELECT` statement. Inline views can simplify complex queries by removing compound calculations or eliminating join operations, condensing several separate queries into a single query.

 **Examples** 

The subquery in the `FROM` clause is the inline view. The query returns each employee matched to their salary and department ID. In addition, the query returns the average salary for each department using the inline view column `SAL_AVG`. Note that the inline view is not given an alias.

```
SELECT A.LAST_NAME, A.SALARY, A.DEPARTMENT_ID, SAL_AVG
FROM EMPLOYEES A,
(SELECT DEPARTMENT_ID AS DEPT_ID, ROUND(AVG(SALARY)) AS SAL_AVG
FROM EMPLOYEES GROUP BY DEPARTMENT_ID)
WHERE A.DEPARTMENT_ID = DEPT_ID;
```

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.inlineviews.pg"></a>

PostgreSQL semantics may refer to inline views as subselects or subqueries. In either case, the functionality is the same. Running the Oracle inline view example above, as is, results in an error: **ERROR: subquery in FROM must have an alias**. Oracle supports omitting aliases for the inner statement, while in PostgreSQL the use of aliases is mandatory.

Mandatory aliases are the only major difference when migrating Oracle inline views to PostgreSQL.

 **Examples** 

The following example uses `B` as an alias.

```
SELECT A.LAST_NAME, A.SALARY, A.DEPARTMENT_ID, B.SAL_AVG
FROM EMPLOYEES A,
(SELECT DEPARTMENT_ID, ROUND(AVG(SALARY)) AS SAL_AVG
FROM EMPLOYEES GROUP BY DEPARTMENT_ID) B
WHERE A.DEPARTMENT_ID = B.DEPARTMENT_ID;
```

# Oracle JSON document support and PostgreSQL JSON support
<a name="chap-oracle-aurora-pg.special.json"></a>

With AWS DMS, you can migrate data between different database platforms, including Oracle and PostgreSQL, while preserving the JSON document structure. Oracle JSON document support and PostgreSQL JSON provide a way to store and query JSON data within the database.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |  N/A  |  Different paradigm and syntax will require application or drivers rewrite.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.json.ora"></a>

JSON documents are based on JavaScript syntax and allow the serialization of objects. Oracle support for JSON document storage and retrieval extends the database capabilities beyond purely relational use cases and allows an Oracle database to support semi-structured data. Oracle JSON support also includes full-text search and several other functions dedicated to querying JSON documents.

Oracle 19 adds a new function, `JSON_SERIALIZE`. You can use this function to serialize JSON objects to text.

For more information, see [Introduction to JSON Data and Oracle Database](https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/intro-to-json-data-and-oracle-database.html#GUID-17642E43-7D87-4590-8870-06E9FDE9A6E9) in the *Oracle documentation*.

 **Examples** 

Create a table to store a JSON document in a data column and insert a JSON document into the table.

```
CREATE TABLE json_docs (id RAW(16) NOT NULL, data CLOB,
CONSTRAINT json_docs_pk PRIMARY KEY (id),
CONSTRAINT json_docs_json_chk CHECK (data IS JSON));

INSERT INTO json_docs (id, data) VALUES (SYS_GUID(),
'{
  "FName" : "John",
  "LName" : "Doe",
  "Address" : {
    "Street" : "101 Street",
    "City" : "City Name",
    "Country" : "US",
    "Pcode" : "90210"}
}');
```

Unlike XML data, which is stored using the SQL data type XMLType, JSON data is stored in an Oracle database using the SQL data types `VARCHAR2`, `CLOB`, and `BLOB`. Oracle recommends that you always use an `is json` check constraint to ensure the column values are valid JSON instances, as in the table-level constraint `CONSTRAINT json_docs_json_chk CHECK (data IS JSON)` in the preceding example.

You can query a JSON document directly from a SQL query without the use of special functions. Querying without functions is called Dot Notation.

```
SELECT a.data.FName,a.data.LName,a.data.Address.Pcode AS Postcode
FROM json_docs a;

FNAME  LNAME  POSTCODE
John   Doe    90210

1 row selected.
```

In addition, Oracle provides multiple SQL functions that integrate with the SQL language and enable querying JSON documents (such as `IS JSON`, `JSON_VALUE`, `JSON_EXISTS`, `JSON_QUERY`, and `JSON_TABLE`).

For more information, see [Introduction to JSON Data and Oracle Database](https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/intro-to-json-data-and-oracle-database.html#GUID-17642E43-7D87-4590-8870-06E9FDE9A6E9) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.json.pg"></a>

PostgreSQL provides native JSON Document support using the JSON data types JSON and JSONB.

 **JSON** stores an exact copy of the input text, which processing functions must re-parse on each run. It also preserves semantically-insignificant white space between tokens and the order of keys within JSON objects.

 **JSONB** stores data in a decomposed binary format. Input is slightly slower because of the added conversion-to-binary overhead, but processing is significantly faster because no re-parsing is needed on reads. JSONB also has the following characteristics:
+ Doesn’t preserve white space.
+ Doesn’t preserve the order of object keys.
+ Doesn’t keep duplicate object keys. If duplicate keys are specified in the input, only the last value is retained.

Most applications store JSON data as JSONB unless there are specialized needs.
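The differences between the two types can be observed directly. The following sketch casts the same literal to both types; the JSON cast preserves spacing, key order, and the duplicate key, while the JSONB cast normalizes them:

```
-- JSON keeps the input text verbatim, including the duplicate "a" key.
SELECT '{"b": 2, "a": 1, "a": 3}'::json;

-- JSONB drops insignificant whitespace, reorders keys,
-- and keeps only the last value for the duplicate "a" key.
SELECT '{"b": 2, "a": 1, "a": 3}'::jsonb;
```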

Starting with PostgreSQL 10, both JSON and JSONB are compatible with full-text search.

For more information, see [JSON Types](https://www.postgresql.org/docs/13/datatype-json.html) in the *PostgreSQL documentation*.

To comply with the full JSON specification, the database encoding must be set to UTF8. With a non-UTF8 database encoding, characters that are not valid UTF8 are allowed, and the database encoding is non-compliant with the full JSON specification.

 **Examples** 

Because querying JSON data in PostgreSQL uses different query syntax from Oracle, change application queries. The following examples use PostgreSQL-native JSON query syntax.

Return the JSON document stored in the `emp_data` column associated with `emp_id=1`:

```
SELECT emp_data FROM employees WHERE emp_id = 1;
```

Return all JSON documents stored in the `emp_data` column that have a key named address.

```
SELECT emp_data FROM employees WHERE emp_data ? 'address';
```

Return all JSON items that have an address key or a hobbies key.

```
SELECT * FROM employees WHERE emp_data ?| array['address', 'hobbies'];
```

Return all JSON items that have both an address key and a hobbies key.

```
SELECT * FROM employees WHERE emp_data ?& array['address', 'hobbies'];
```

Return the value of home key in the phone numbers array.

```
SELECT emp_data ->'phone numbers'->>'home' FROM employees;
```

Return all JSON documents where the address key equals a specified value, and all JSON documents where the address key contains a specific string (using `like`).

```
SELECT * FROM employees WHERE emp_data->>'address' = '1234 First Street, Capital City';

SELECT * FROM employees WHERE emp_data->>'address' like '%Capital City%';
```

Using operators with JSON values:

```
select '{"id":132, "name":"John"}'::jsonb @> '{"id":132}'::jsonb;
```

Concatenating two JSON values.

```
select '{"id":132, "fname":"John"}'::jsonb || '{"lname":"Doe"}'::jsonb;
```

Removing keys from JSON.

```
select '{"id":132, "fname":"John", "salary":999999,
  "bank_account":1234}'::jsonb - '{salary,bank_account}'::text[];
```

For more information, see [JSON Functions and Operators](https://www.postgresql.org/docs/13/functions-json.html) in the *PostgreSQL documentation*.

### Indexing and constraints with JSONB columns
<a name="chap-oracle-aurora-pg.special.json.pg.index"></a>

You can use the `CREATE UNIQUE INDEX` statement to enforce constraints on values inside JSON documents stored in PostgreSQL. For example, you can create a unique index that forces values of the address key to be unique.

```
CREATE UNIQUE INDEX employee_address_uq ON employees( (emp_data->>'address') ) ;
```

This index allows the first SQL insert statement to work and causes the second to fail.

```
INSERT INTO employees VALUES (2, 'Second Employee','{ "address": "1234 Second Street, Capital City"}');
INSERT INTO employees VALUES (3, 'Third Employee', '{ "address": "1234 Second Street, Capital City"}');
ERROR: duplicate key value violates unique constraint "employee_address_uq"
SQL state: 23505
Detail: Key ((emp_data ->> 'address'::text))=(1234 Second Street, Capital City) already exists.
```

For JSON data, PostgreSQL supports B-Tree, HASH, and GIN indexes ([Generalized Inverted Index](https://www.postgresql.org/docs/current/static/gin.html)). A GIN index is a special inverted index structure that is useful when an index must map many values to a row (such as indexing JSON documents).

When using GIN indexes, you can efficiently and quickly query data using only the following JSON operators: `@>`, `?`, `?&`, `?|`.

Without indexes, PostgreSQL is forced to perform a full table scan when filtering data. This condition applies to JSON data and will most likely have a negative impact on performance since PostgreSQL has to step into each JSON document.

Create an index on the address key of `emp_data`.

```
CREATE INDEX idx1_employees ON employees ((emp_data->>'address'));
```

Create a GIN index on a specific key or the entire `emp_data` column.

```
CREATE INDEX idx2_employees ON employees USING gin ((emp_data->'tags'));
CREATE INDEX idx3_employees ON employees USING gin (emp_data);
```
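Once a GIN index such as `idx3_employees` exists, queries that filter with the supported operators can use it. A sketch, assuming the `employees` table and address value from the earlier examples:

```
-- The @> (contains) operator can use the GIN index on emp_data.
SELECT * FROM employees
WHERE emp_data @> '{"address": "1234 First Street, Capital City"}';

-- The ? (key exists) operator is also GIN-indexable.
SELECT * FROM employees WHERE emp_data ? 'hobbies';
```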

## Summary
<a name="chap-oracle-aurora-pg.special.json.summary"></a>


| Feature | Oracle |  Aurora PostgreSQL | 
| --- | --- | --- | 
|  Return the full JSON document or all JSON documents  |  The `emp_data` column stores JSON documents: <pre>SELECT emp_data FROM employees;</pre>  |  The `emp_data` column stores JSON documents: <pre>SELECT emp_data FROM employees;</pre>  | 
|  Return a specific element from a JSON document  |  Return only the address property: <pre>SELECT e.emp_data.address FROM employees e;</pre>  |  Return only the address property, for `emp_id=1` from the `emp_data` JSON column in the employees table: <pre>SELECT emp_data->>'address' from employees<br />where emp_id = 1;</pre>  | 
|  Return JSON documents matching a pattern in any field  |  Return the JSON document based on a search across all JSON properties; a document can be returned if any element matches the pattern. <pre>SELECT e.emp_data FROM employees e<br />WHERE e.emp_data like '%pattern%';</pre>  |  Either use `jsonb_pretty` to flatten the JSON and search or, preferably, convert it to text and run the `like` search on the value: <pre>SELECT * from (select jsonb_pretty(emp_data)<br />as raw_data from employees) raw_json where<br />raw_data like '%1234%';</pre> <pre>SELECT key, value FROM employees, lateral jsonb_<br />each_text(emp_data) WHERE value LIKE '%pattern%';</pre>  | 
|  Return JSON documents matching a pattern in specific fields (root level)  |  <pre>SELECT e.emp_data.name FROM employees e<br />WHERE e.emp_data.active = 'true';</pre>  |  Only return results where the "active" property in the JSON document is true: <pre>SELECT * FROM employees WHERE emp_<br />data->>'active' = 'true';</pre>  | 
|  Define a column in a table that supports JSONB documents  |  Create a table with a CLOB column. Define an `IS JSON` constraint on the column. <pre>CREATE TABLE json_docs (id RAW(16) NOT NULL,<br />data CLOB, CONSTRAINT json_docs_pk PRIMARY KEY (id),<br />CONSTRAINT json_docs_json_chk CHECK (data IS JSON));</pre>  |  Create a table with a column defined as JSON: <pre>CREATE TABLE json_docs ( id integer NOT<br />NULL, data jsonb );</pre>  | 

For more information, see [JSON Types](https://www.postgresql.org/docs/13/datatype-json.html) and [JSON Functions and Operators](https://www.postgresql.org/docs/13/functions-json.html) in the *PostgreSQL documentation*.

# Oracle and PostgreSQL materialized views
<a name="chap-oracle-aurora-pg.special.matviews"></a>

With AWS DMS, you can create and manage materialized views in Oracle and PostgreSQL databases to improve query performance and enable efficient data access. A materialized view is a database object that stores a pre-computed result set from a query, providing fast access to summarized or frequently accessed data.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |   [Materialized Views](chap-oracle-aurora-pg.tools.actioncode.md#chap-oracle-aurora-pg.tools.actioncode.materializedviews)   |  PostgreSQL doesn’t support automatic or incremental REFRESH.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.matviews.ora"></a>

Oracle materialized views (also known as MViews) are table segments where the contents are periodically refreshed based on the results of a stored query. Oracle materialized views are defined with specific queries and can be manually or automatically refreshed based on specific configurations. A materialized view runs its associated query and stores the results as a table segment.

Oracle materialized views are especially useful for:
+ Replication of data across multiple databases.
+ Data warehouse use cases.
+ Increasing performance by persistently storing the results of complex queries as database tables.

As with ordinary views, you can create materialized views with a `SELECT` query. The `FROM` clause of an MView query can reference tables, views, and other materialized views. The source objects that an MView uses as data sources are also called master tables (replication terminology) or detail tables (data warehouse terminology).

 **Immediate or deferred refresh** 

When you create materialized views, use the `BUILD IMMEDIATE` option to instruct Oracle to immediately update the contents of the materialized view by running the underlying query. This is different from a deferred update where the materialized view is populated only on the first requested refresh.

 **Fast and complete refresh** 

You can use one of the two following options to refresh data in your materialized view.
+  `REFRESH FAST` — Incremental data refresh. Only updates rows that have changed since the last refresh of the Materialized View instead of performing a complete refresh. This type of refresh fails if materialized view logs have not been created.
+  `COMPLETE` — The table segment used by the materialized view is truncated (data is cleared) and repopulated by running the associated query.

 **Materialized view logs** 

When you create materialized views, use a materialized view log to instruct Oracle to store any changes performed by DML commands on the master tables that are used to refresh the materialized view, which provides faster materialized view refreshes.

Without materialized view logs, Oracle must re-run the query associated with the materialized view each time. This process is also known as a complete refresh. This process is slower compared to using materialized view logs.

 **Materialized view refresh strategy** 

You can use one of the two following strategies to refresh data in your materialized view.
+  `ON COMMIT` — Refreshes the materialized view upon any commit made on the underlying associated tables.
+  `ON DEMAND` — The refresh is initiated by a scheduled task or manually by the user.

 **Examples** 

Create a simple Materialized View named mv1 that runs a simple `SELECT` statement on the employees table.

```
CREATE MATERIALIZED VIEW mv1 AS SELECT * FROM hr.employees;
```

Create a more complex materialized view using a database link (remote) to obtain data from a table located in a remote database. This materialized view also contains a subquery. The `FOR UPDATE` clause allows the materialized view to be updated.

```
CREATE MATERIALIZED VIEW foreign_customers FOR
UPDATE AS SELECT * FROM sh.customers@remote cu WHERE EXISTS
(SELECT * FROM sh.countries@remote co WHERE co.country_id = cu.country_id);
```

Create materialized view logs on the two source tables, `times` and `products`. This enables `FAST` refresh of the materialized view instead of the slower `COMPLETE` refresh. Then create a materialized view named sales_mv that is refreshed incrementally (`REFRESH FAST`) each time changes are committed (`ON COMMIT`) on one or more of the tables associated with the materialized view query.

```
CREATE MATERIALIZED VIEW LOG ON times
WITH ROWID, SEQUENCE (time_id, calendar_year)
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON products
WITH ROWID, SEQUENCE (prod_id)
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sales_mv
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
AS SELECT t.calendar_year, p.prod_id,
SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;
```

For more information, see [Basic Materialized Views](https://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/basic-materialized-views.html#GUID-A7AE8E5D-68A5-4519-81EB-252EAAF0ADFF) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.matviews.pg"></a>

PostgreSQL supports materialized views with associated queries similar to the Oracle implementation. The query associated with the materialized view is used to populate the materialized view at the time the `REFRESH` command is issued. The PostgreSQL implementation of materialized views has three primary limitations when compared to Oracle materialized views:
+ PostgreSQL materialized views may be refreshed either manually or by using a job running the `REFRESH MATERIALIZED VIEW` command. Automatic refresh of materialized views requires the creation of a trigger.
+ PostgreSQL materialized views only support complete (full) refresh.
+ DML on materialized views is not supported.

**Note**  
Starting with PostgreSQL 10, the statistics collector is updated properly after a `REFRESH MATERIALIZED VIEW` run.

 **Examples** 

Create a materialized view named `sales_summary` using the sales table as the source for the materialized view.

```
CREATE MATERIALIZED VIEW sales_summary AS
SELECT seller_no,sale_date,sum(sale_amt)::numeric(10,2) as sales_amt
FROM sales
WHERE sale_date < CURRENT_DATE
GROUP BY seller_no, sale_date
ORDER BY seller_no, sale_date;
```

Execute a manual refresh of the materialized view.

```
REFRESH MATERIALIZED VIEW sales_summary;
```

**Note**  
The materialized view data will not be refreshed automatically if changes occur to its underlying tables. For automatic refresh of materialized view data, a trigger on the underlying tables must be created.

 **Creating a materialized view** 

When you create a materialized view in PostgreSQL, it uses a regular database table underneath. You can create database indexes on the materialized view directly and improve performance of queries that access the materialized view.

 **Example** 

Create an index on the `seller_no` and `sale_date` columns of the `sales_summary` materialized view.

```
CREATE UNIQUE INDEX sales_summary_seller
ON sales_summary (seller_no, sale_date);
```
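A unique index like the one above also enables `REFRESH MATERIALIZED VIEW CONCURRENTLY` (available since PostgreSQL 9.4), which refreshes the data without blocking concurrent reads of the materialized view:

```
-- Requires at least one unique index on the materialized view.
REFRESH MATERIALIZED VIEW CONCURRENTLY sales_summary;
```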

## Summary
<a name="chap-oracle-aurora-pg.special.matviews.summary"></a>


| Option | Oracle | PostgreSQL | 
| --- | --- | --- | 
|  Create materialized view  |  <pre>CREATE MATERIALIZED VIEW<br />mv1 AS SELECT * FROM employees;</pre>  |  <pre>CREATE MATERIALIZED VIEW mv1 AS<br />SELECT * FROM employees;</pre>  | 
|  Manual refresh of a materialized view  |  <pre>DBMS_MVIEW.REFRESH('mv1', 'cf');</pre> The `cf` parameter configures the refresh method: `c` is complete and `f` is fast.  |  <pre>REFRESH MATERIALIZED VIEW mv1;</pre>  | 
|  Online refresh of a materialized view  |  <pre>CREATE MATERIALIZED VIEW<br />mv1 REFRESH FAST ON COMMIT<br />AS SELECT * FROM employees;</pre>  |  Create a trigger that will initiate a refresh after every DML command on the underlying tables: <pre>CREATE OR REPLACE FUNCTION<br />refresh_mv1()<br />returns trigger language plpgsql as<br />$$ begin<br />refresh materialized view mv1;<br />return null;<br />end $$;<br /><br />create trigger refresh_mv1 after insert or update<br />or delete or truncate on employees for each statement<br />execute procedure refresh_mv1();</pre>  | 
|  Automatic incremental refresh of a materialized view  |  <pre>CREATE MATERIALIZED VIEW LOG<br />ON employees…<br />INCLUDING NEW VALUES;<br /><br />CREATE MATERIALIZED VIEW<br />mv1 REFRESH FAST AS SELECT<br />* FROM employees;</pre>  |  Not Supported  | 
|  DML on materialized view data  |  Supported  |  Not Supported  | 

For more information, see [Materialized Views](https://www.postgresql.org/docs/13/rules-materializedviews.html) in the *PostgreSQL documentation*.

# Oracle multitenant and PostgreSQL database architecture
<a name="chap-oracle-aurora-pg.special.multitenant"></a>

With AWS DMS, you can migrate data from Oracle multitenant databases and PostgreSQL databases to AWS. Oracle multitenant architecture refers to the capability of hosting multiple pluggable databases within a single container database. PostgreSQL utilizes a traditional database architecture where each database instance operates independently.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  N/A  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.multitenant.ora"></a>

Oracle 12c introduces a new multitenant architecture that provides the ability to create additional independent pluggable databases under a single Oracle instance. Prior to Oracle 12c, a single Oracle database instance only supported running a single Oracle database as shown in the following diagram.

![\[A single Oracle database instance runs a single Oracle database\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-oracle-multitenant.png)


Oracle 12c introduces a new multitenant container database (CDB) that supports one or more pluggable databases (PDB). The CDB can be thought of as a single superset database with multiple pluggable databases. The relationship between an Oracle instance and databases is now 1:N.

![\[Multitenant container Oracle database\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-multitenant-container-database.png)


Oracle 18c adds the following multitenant-related features:
+ DBCA PDB Clone: a UI that allows cloning multiple pluggable databases (PDB).
+ Refreshable PDB Switchover: the ability to switch roles between a pluggable database clone and its original master.
+ CDB Fleet Management: the ability to group multiple container databases (CDB) into fleets that can be managed as a single logical database.

Oracle 19c introduced support for more than one pluggable database (PDB) in a container database (CDB) in sharded environments.

 **Advantages of the Oracle 12c multitenant architecture** 
+ You can use PDBs to isolate applications from one another.
+ You can use PDBs as a portable collection of schemas.
+ You can clone PDBs and transport them to different CDBs/Oracle instances.
+ Management of many databases (individual PDBs) as a whole.
+ Separate security, users, permissions, and resource management for each PDB provides greater application isolation.
+ Enables a consolidated database model of many individual applications sharing a single Oracle server.
+ Provides an easier way to patch and upgrade individual clients and/or applications using PDBs.
+ Backups are supported at both a multitenant container-level as well as at an individual PDB-level (both for physical and logical backups).

 **The Oracle multitenant architecture** 
+ A multitenant CDB can support one or more PDBs.
+ Each PDB contains its own copy of `SYSTEM` and application tablespaces.
+ The PDBs share the Oracle Instance memory and background processes. The use of PDBs enables consolidation of many databases and applications into individual containers under the same Oracle instance.
+ A single Root Container (CDB\$ROOT) exists in a CDB and contains the Oracle Instance Redo Logs, undo tablespace (unless Oracle 12.2 local undo mode is enabled), and control files.
+ A single Seed PDB exists in a CDB and is used as a template for creating new PDBs.

![\[Container Oracle database\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-oracle-container-database.png)


 **CDB and PDB semantics** 

Container databases (CDB)
+ Created as part of the Oracle 12c software installation.
+ Contains the Oracle control files, its own set of system tablespaces, the instance undo tablespaces (unless Oracle 12.2 local undo mode is enabled), and the instance redo logs.
+ Holds the data dictionary for the root container and for all of the PDBs.

Pluggable databases (PDB)
+ An independent database that exists under a CDB. Also known as a container.
+ Used to store application-specific data.
+ You can create a pluggable database from the PDB\$SEED (template database) or as a clone of an existing PDB.
+ Stores metadata information specific to its own objects (data-dictionary).
+ Has its own set of application data files, system data files, and tablespaces along with temporary files to manage objects.

 **Examples** 

List existing PDBs created in an Oracle CDB instance.

```
SHOW PDBS;

CON_ID  CON_NAME  OPEN MODE   RESTRICTED
2       PDB$SEED  READ ONLY   NO
3       PDB1      READ WRITE  NO
```

Provision a new PDB from the template `PDB$SEED`.

```
CREATE PLUGGABLE DATABASE PDB2 admin USER ora_admin
IDENTIFIED BY ora_admin FILE_NAME_CONVERT=('/pdbseed/','/pdb2/');
```

Alter a specific PDB to the `READ/WRITE` mode and verify the change.

```
ALTER PLUGGABLE DATABASE PDB2 OPEN READ WRITE;

SHOW PDBS;

CON_ID  CON_NAME  OPEN MODE   RESTRICTED
2       PDB$SEED  READ ONLY   NO
3       PDB1      READ WRITE  NO
4       PDB2      READ WRITE  NO
```

Clone a PDB from an existing PDB.

```
CREATE PLUGGABLE DATABASE PDB3
  FROM PDB2 FILE_NAME_CONVERT= ('/pdb2/','/pdb3/');

SHOW PDBS;

CON_ID  CON_NAME  OPEN MODE   RESTRICTED
2       PDB$SEED  READ ONLY   NO
3       PDB1      READ WRITE  NO
4       PDB2      READ WRITE  NO
5       PDB3      MOUNTED
```

For more information, see [Oracle Multitenant](https://docs.oracle.com/en/database/oracle/oracle-database/19/multi/index.html) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.multitenant.pg"></a>

Amazon Aurora PostgreSQL offers a different, simplified architecture for creating and managing a multitenant database environment. You can use Aurora PostgreSQL to provide levels of functionality similar (but not identical) to those offered by Oracle PDBs by creating multiple databases under the same Aurora PostgreSQL cluster, or by using separate Aurora clusters if total isolation of workloads is required.

You can create multiple PostgreSQL databases under a single Amazon Aurora PostgreSQL cluster.

![\[Aurora PostgreSQL cluster\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-aurora-postgresql-cluster.png)


Each Amazon Aurora cluster contains a primary instance that can accept both reads and writes for all cluster databases.

You can create up to 15 read-only nodes providing scale-out functionality for application reads and high availability.

![\[Aurora PostgreSQL cluster storage volume\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-aurora-postgresql-cluster-storage-volume.png)


An Oracle CDB/Instance is a high-level equivalent to an Amazon Aurora cluster, and an Oracle Pluggable Database (PDB) is equivalent to a PostgreSQL database created inside the Amazon Aurora cluster. Not all features are comparable between Oracle 12c PDBs and Amazon Aurora.

In PostgreSQL, you can copy databases using templates. Create or alter a database so that its `IS_TEMPLATE` flag is set to true, and then create a copy with `CREATE DATABASE <newdbname> TEMPLATE <templatedbname>`.
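For example, an existing database can be flagged as a template before cloning it. The database names here are hypothetical:

```
ALTER DATABASE emp IS_TEMPLATE true;
CREATE DATABASE emp_copy TEMPLATE emp;
```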

Starting with Oracle 18c and 19c, the multitenant architecture also provides the following features:
+ PDB Clone
+ Refreshable PDB Switchover
+ CDB Fleet Management
+ More than one pluggable database (PDB) in a container database (CDB) in sharded environments.

In the AWS Cloud, these features can be achieved in many ways and each can be optimized using different services.

To clone a database inside the PostgreSQL instance, use the `TEMPLATE` option with the `CREATE DATABASE` statement. The following command copies the `emp` database to `emp_bck`.

```
CREATE DATABASE emp_bck TEMPLATE emp;
```

The best way to achieve functionality similar to Refreshable PDB Switchover depends on the use case and, mostly, on the required granularity:
+ Databases in the same instance — when database size and permitted downtime allow it, create a copy with the `CREATE DATABASE` statement and use application failover to point to either database.
+ Database links and replication — use database links or AWS DMS to keep two databases in two different instances in sync, and use application failover to point to the other database when needed.
+ PostgreSQL logical replication provides fine-grained control over replicating and synchronizing parts of a database. For example, you can use logical replication to replicate an individual table of a database.
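As a sketch of the logical replication option, a single table can be published on the source database and subscribed to from the target. All object and connection names here are hypothetical:

```
-- On the source database: publish a single table
CREATE PUBLICATION emp_pub FOR TABLE employees;

-- On the target database: subscribe to the publication
CREATE SUBSCRIPTION emp_sub
  CONNECTION 'host=source-host dbname=emp user=repl_user'
  PUBLICATION emp_pub;
```

The subscriber keeps its copy of `employees` in sync and can serve as the failover target for that table.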

Managing a CDB is similar to AWS orchestration: you can manage multiple Amazon RDS instances (the CDB equivalent) and the databases inside them (the PDB equivalent), all monitored centrally and managed through the AWS Management Console or the AWS CLI.

 **Examples** 

Create a new database in PostgreSQL using the `CREATE DATABASE` statement.

```
CREATE DATABASE pg_db1;
CREATE DATABASE pg_db2;
CREATE DATABASE pg_db3;
```

List all databases created under an Amazon Aurora PostgreSQL cluster.

```
\l

Name       Owner         Encoding  Collate      Ctype
admindb    rds_pg_admin  UTF8      en_US.UTF-8  en_US.UTF-8
pg_db1     rds_pg_admin  UTF8      en_US.UTF-8  en_US.UTF-8
pg_db2     rds_pg_admin  UTF8      en_US.UTF-8  en_US.UTF-8
pg_db3     rds_pg_admin  UTF8      en_US.UTF-8  en_US.UTF-8
postgres   rds_pg_admin  UTF8      en_US.UTF-8  en_US.UTF-8
rdsadmin   rdsadmin      UTF8      en_US.UTF-8  en_US.UTF-8
template0  rdsadmin      UTF8      en_US.UTF-8  en_US.UTF-8
template1  rds_pg_admin  UTF8      en_US.UTF-8  en_US.UTF-8
```

# Oracle Resource Manager and PostgreSQL dedicated Amazon Aurora clusters
<a name="chap-oracle-aurora-pg.special.dedicated"></a>

With AWS DMS, you can migrate data from an Oracle database to a PostgreSQL-compatible Amazon Aurora database cluster. Oracle Resource Manager lets you manage and prioritize concurrent workloads by allocating server resources among database sessions. PostgreSQL dedicated Amazon Aurora clusters provide a PostgreSQL-compatible relational database built for the cloud, enabling you to scale database resources up or down based on your needs.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  Distribute load, applications, or users across multiple instances.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.dedicated.ora"></a>

Oracle Resource Manager enables enhanced management of multiple concurrent workloads running under a single Oracle database. Using Oracle Resource Manager, you can partition server resources for different workloads.

Resource Manager helps with sharing server and database resources without causing excessive resource contention and helps to eliminate scenarios involving inappropriate allocation of resources across different database sessions.

Oracle Resource Manager enables you to:
+ Guarantee a minimum amount of CPU cycles for certain sessions regardless of other running operations.
+ Distribute available CPU by allocating percentages of CPU time to different session groups.
+ Limit the degree of parallelism of any operation performed by members of a user group.
+ Manage the order of parallel statements in the parallel statement queue.
+ Limit the number of parallel running servers that a user group can use.
+ Create an active session pool. An active session pool consists of a specified maximum number of user sessions allowed to be concurrently active within a user group.
+ Monitor used database/server resources by dictionary views.
+ Manage runaway sessions or calls and prevent them from overloading the database.
+ Prevent the running of operations that the optimizer estimates will run for a longer time than a specified limit.
+ Limit the amount of time that a session can be connected but idle, thus forcing inactive sessions to disconnect and potentially freeing memory resources.
+ Allow a database to use different resource plans, based on changing workload requirements.
+ Manage CPU allocation when there is more than one instance on a server in an Oracle Real Application Cluster environment (also called instance caging).

Oracle Resource Manager introduces three concepts:
+  **Consumer group** — A collection of sessions grouped together based on resource requirements. The Oracle Resource Manager allocates server resources to resource consumer groups, not to the individual sessions.
+  **Resource plan** — Specifies how the database allocates its resources to different Consumer Groups. You will need to specify how the database allocates resources by activating a specific resource plan.
+  **Resource plan directive** — Associates a resource consumer group with a plan and specifies how resources are to be allocated to that resource consumer group.

**Note**  
Only one Resource Plan can be active at any given time. Resource Directives control the resources allocated to a Consumer Group belonging to a Resource Plan. A Resource Plan can refer to subplans to create even more complex Resource Plans.

 **Examples** 

Create a simple Resource Plan. To use the Oracle Resource Manager, you need to assign a plan name to the `RESOURCE_MANAGER_PLAN` parameter. Using an empty string will disable the Resource Manager.

```
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'mydb_plan';
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';
```

You can also create complex Resource Plans. A complex Resource Plan is one that is not created with the `CREATE_SIMPLE_PLAN` PL/SQL procedure and provides more flexibility and granularity. Plan changes are staged in a pending area and then submitted; the following example assumes that the `OLTP` consumer group already exists.

```
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    PLAN    => 'DAYTIME',
    COMMENT => 'Daytime resource plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN             => 'DAYTIME',
    GROUP_OR_SUBPLAN => 'OLTP',
    COMMENT          => 'OLTP group',
    MGMT_P1          => 75);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

For more information, see [Managing Resources with Oracle Database Resource Manager](https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/managing-resources-with-oracle-database-resource-manager.html#GUID-2BEF5482-CF97-4A85-BD90-9195E41E74EF) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.dedicated.pg"></a>

PostgreSQL doesn’t have built-in resource management capabilities equivalent to the functionality provided by Oracle Resource Manager. However, with the elasticity and flexibility that the cloud provides, workarounds are available, and such capabilities may not be as important as they are for monolithic on-premises databases.

Oracle Resource Manager primarily exists because, traditionally, Oracle databases were installed on very powerful monolithic servers that powered multiple applications simultaneously. The monolithic model made the most sense in an environment where Oracle database licensing was per CPU and Oracle databases were deployed on physical hardware. In these scenarios, it made sense to consolidate as many workloads as possible into a few servers. With cloud databases, the strict requirement to maximize the usage of each individual server is often not as important, and a different approach can be employed:

Individual Amazon Aurora clusters can be deployed, with varying sizes, each dedicated to a specific application or workload. Additional read-only Aurora Replica servers can be used to offload any reporting-style workloads from the master instance.

The following diagram shows the traditional Oracle model where maximizing the usage of each physical Oracle server was essential due to physical hardware constraints and the per-CPU core licensing model.

![\[Traditional Oracle model\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-traditional-oracle-model.png)


With Amazon Aurora, you can deploy separate and dedicated database clusters. Each cluster is dedicated to a specific application or workload creating isolation between multiple connected sessions and applications. The following diagram shows this architecture.

![\[Aurora separate and dedicated database clusters\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-aurora-dedicated-database-clusters.png)


Each Amazon Aurora instance (primary or replica) can be scaled independently in terms of CPU and memory resources using the different instance types. Because multiple Amazon Aurora instances can be instantly deployed and much less overhead is associated with the deployment and management of Aurora instances when compared to physical servers, separating different workloads to different instance classes could be a suitable solution for controlling resource management.

For instance types and resources, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/).

In addition, each Amazon Aurora primary or replica instance can also be directly accessed from your applications using its own endpoint. This capability is especially useful if you have multiple Aurora read-replicas for a given cluster and you wish to utilize different Aurora replicas to segment your workload.

 **Examples** 

Suppose that you were using a single Oracle Database for multiple separate applications and used Oracle Resource Manager to enforce a workload separation, allocating a specific amount of server resources for each application. With Amazon Aurora, you might want to create multiple separate databases for each individual application. Adding additional replica instances to an existing Amazon Aurora cluster is easy.

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Databases** and select the Amazon Aurora cluster that you want to scale-out by adding an additional reader.

1. Choose **Actions** and then choose **Add reader**.

1. Select the instance class depending on the amount of compute resources your application requires.

1. Choose **Create Aurora Replica**.

## Summary
<a name="chap-oracle-aurora-pg.special.dedicated.summary"></a>


| Oracle Resource Manager |  Amazon Aurora instances | 
| --- | --- | 
|  Set the maximum CPU usage for a resource group  |  Create a dedicated Aurora Instance for a specific application  | 
|  Limit the degree of parallelism for specific queries  |  <pre>SET max_parallel_workers_per_gather TO x;</pre> Setting the PostgreSQL `max_parallel_workers_per_gather` parameter should be done as part of your application database connection.  | 
|  Limit parallel runs  |  <pre>SET max_parallel_workers_per_gather TO x;<br />-- by a single Gather or Gather Merge node<br />-- OR<br />SET max_parallel_workers TO x;<br />-- for the whole system (since PostgreSQL 10)</pre>  | 
|  Limit the number of active sessions  |  Manually detect the number of connections that are open from a specific application and restrict connectivity either with database procedures or within the application DAL itself. <pre>select pid from pg_stat_activity where usename in(<br />  select usename from pg_stat_activity where<br />  state = 'active' group by usename having count(*) > 10)<br />  and state = 'active' order by query_Start;</pre>  | 
|  Restrict maximum runtime of queries  |  Manually terminate sessions that exceed the required threshold. You can detect the length of running queries using SQL commands and restrict maximum run duration using either database procedures or within the application DAL itself. <pre>SELECT pg_terminate_backend(pid) FROM pg_stat_activity<br />WHERE now()-pg_stat_activity.query_start > interval '5 minutes';</pre>  | 
|  Limit the maximum idle time for sessions  |  Manually terminate sessions that exceed the required threshold. You can detect the length of your idle sessions using SQL queries and restrict maximum run using either database procedures or within the application DAL itself. <pre>SELECT pg_terminate_backend(pid)<br />  FROM pg_stat_activity<br />  WHERE datname = 'regress'<br />    AND pid <> pg_backend_pid()<br />    AND state = 'idle'<br />    AND state_change < current_timestamp<br />      - INTERVAL '5' MINUTE;</pre>  | 
|  Limit the time that an idle session holding open locks can block other sessions  |  Manually terminate sessions that exceed the required threshold. You can detect the length of blocking idle sessions using SQL queries and restrict maximum run duration using either database procedures or within the application DAL itself. <pre>SELECT pg_terminate_backend(blocking_locks.pid)<br />  FROM pg_catalog.pg_locks AS blocked_locks<br />  JOIN pg_catalog.pg_stat_activity AS blocked_activity<br />    ON blocked_activity.pid = blocked_locks.pid<br />  JOIN pg_catalog.pg_locks AS blocking_locks<br />    ON blocking_locks.locktype = blocked_locks.locktype<br />  AND blocking_locks.DATABASE IS NOT DISTINCT<br />    FROM blocked_locks.DATABASE<br />  AND blocking_locks.relation  IS NOT DISTINCT<br />    FROM blocked_locks.relation<br />  AND blocking_locks.page IS NOT DISTINCT<br />    FROM blocked_locks.page<br />  AND blocking_locks.tuple IS NOT DISTINCT<br />    FROM blocked_locks.tuple<br />  AND blocking_locks.virtualxid IS NOT DISTINCT<br />    FROM blocked_locks.virtualxid<br />  AND blocking_locks.transactionid IS NOT DISTINCT<br />    FROM blocked_locks.transactionid<br />  AND blocking_locks.classid IS NOT DISTINCT<br />    FROM blocked_locks.classid<br />  AND blocking_locks.objid IS NOT DISTINCT<br />    FROM blocked_locks.objid<br />  AND blocking_locks.objsubid IS NOT DISTINCT<br />    FROM blocked_locks.objsubid<br />  AND blocking_locks.pid != blocked_locks.pid<br />  JOIN pg_catalog.pg_stat_activity<br />    AS blocking_activity<br />    ON blocking_activity.pid = blocking_locks.pid<br />    WHERE NOT blocked_locks.granted<br />    and blocked_activity.state_change <<br />      current_timestamp - INTERVAL '5' minute;</pre>  | 
|  Use instance caging in a multi-node Oracle RAC Environment  |  Similar capabilities can be achieved by separating different applications to different Aurora clusters or, for read-only workloads, separate Aurora read replicas within the same Aurora cluster.  | 

For more information, see [Resource Consumption](https://www.postgresql.org/docs/13/runtime-config-resource.html) in the *PostgreSQL documentation*.

# Oracle SecureFile LOBs and PostgreSQL large objects
<a name="chap-oracle-aurora-pg.special.lobs"></a>

With AWS DMS, you can efficiently migrate data from Oracle SecureFile LOBs and PostgreSQL large objects to target databases. Oracle SecureFile LOBs and PostgreSQL large objects are data types used to store large binary data or character data, such as documents, images, and multimedia files.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |  PostgreSQL doesn’t support SecureFiles; automation and compatibility refer only to LOBs.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.lobs.ora"></a>

Large Objects (LOBs) are a mechanism for storing binary data in a database. Oracle 11g introduced SecureFiles LOBs, which provide more efficient storage. They are created using the `SECUREFILE` keyword as part of the `CREATE TABLE` statement.

The primary benefits of `SECUREFILE` LOBs include:
+  **Compression** — Uses Oracle advanced compression to analyze SecureFiles LOB data to save disk space.
+  **De-Duplication** — Automatically detects duplicate LOB data within a LOB column or partition and reduces storage space by removing duplicates of repeating binary data.
+  **Encryption** — Combined with Transparent Data Encryption (TDE).

 **Examples** 

Create a table using a SecureFiles LOB column.

```
CREATE TABLE sf_tab (COL1 NUMBER, COL2_CLOB CLOB) LOB(COL2_CLOB)
  STORE AS SECUREFILE;
```

Provide additional options for LOB compression during table creation.

```
CREATE TABLE sf_tab (COL1 NUMBER,COL2_CLOB CLOB) LOB(COL2_CLOB)
  STORE AS SECUREFILE COMPRESS_LOB(COMPRESS HIGH);
```
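Deduplication and encryption can be specified in the same way; the following sketch assumes the relevant Oracle options (Advanced Compression, Advanced Security) are licensed and TDE is configured:

```
CREATE TABLE sf_tab2 (COL1 NUMBER, COL2_CLOB CLOB) LOB(COL2_CLOB)
  STORE AS SECUREFILE (DEDUPLICATE ENCRYPT);
```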

For more information, see [Introduction to Large Objects and SecureFiles](https://docs.oracle.com/en/database/oracle/oracle-database/19/adlob/introduction-to-large-objects.html#GUID-1A2B0023-9EE8-48AF-AA76-171D1FC5C241) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.lobs.pg"></a>

PostgreSQL doesn’t support the advanced storage, security, and encryption options of Oracle SecureFile LOBs. PostgreSQL does support regular Large Object data types (LOBs), which provide stream-style access.

Although not designed specifically for LOB columns, PostgreSQL compresses oversized values using its internal TOAST mechanism (The Oversized-Attribute Storage Technique).

For more information, see [TOAST](https://www.postgresql.org/docs/13/storage-toast.html) in the *PostgreSQL documentation*.

### Large object data types supported by PostgreSQL
<a name="chap-oracle-aurora-pg.special.lobs.pg.dt"></a>

 **BYTEA** 
+ Stores a LOB within the table, limited to 1 GB.
+ Stores binary data, including non-printable characters.
+ The default input/output format is hex.
+ Can be used to store URL references to Amazon S3 objects used by the database. For example, storing the URLs of pictures kept on Amazon S3 in a database table.
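A brief sketch of BYTEA hex input and output follows; the table name is hypothetical and the default `bytea_output = 'hex'` setting is assumed:

```
CREATE TABLE blobs (id int, data bytea);
INSERT INTO blobs VALUES (1, '\xDEADBEEF');
SELECT data FROM blobs;  -- returns \xdeadbeef
```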

 **TEXT** 
+ Data type for storing strings with unlimited length.
+ When you don’t specify the (n) length modifier for the varchar data type, varchar behaves like the text data type.
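The equivalence can be demonstrated with a varchar column declared without a length modifier (the table name is hypothetical):

```
CREATE TABLE notes (body varchar);
INSERT INTO notes VALUES (repeat('x', 100000));  -- accepted, no length limit
```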

For data encryption purposes (not only for LOB columns), consider using [AWS Key Management Service](https://aws.amazon.com/kms).

For more information, see [Large Objects](https://www.postgresql.org/docs/13/largeobjects.html) in the *PostgreSQL documentation*.

# Oracle and PostgreSQL views
<a name="chap-oracle-aurora-pg.special.views"></a>

With AWS DMS, you can create database views on source and target databases to simplify data access and transformation during migration. Views are virtual tables that derive their data from one or more underlying base tables or views. They provide a logical representation of data without duplicating or moving the base data.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |   [Views](chap-oracle-aurora-pg.tools.actioncode.md#chap-oracle-aurora-pg.tools.actioncode.views)   |  N/A  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.views.ora"></a>

Database Views store a named SQL query in the Oracle Data Dictionary with a predefined structure. A view doesn’t store actual data and may be considered a virtual table or a logical table based on the data from one or more physical database tables.

 **Privileges** 

A user needs the `CREATE VIEW` privilege to create a view in their own schema. A user needs the `CREATE ANY VIEW` privilege to create a view in any schema.

The owner of a view needs all the necessary privileges on the source tables or views on which the view is based (`SELECT` or DML privileges).

 **CREATE (OR REPLACE) VIEW statements** 
+  `CREATE VIEW` creates a new view.
+  `CREATE OR REPLACE` overwrites an existing view and modifies the view definition without having to manually drop and recreate the original view, and without deleting the previously granted privileges.

 **Oracle common view parameters** 


| Oracle view parameter | Description | 
| --- | --- | 
|   `CREATE OR REPLACE`   |  Recreate an existing view (if one exists) or create a new view.  | 
|   `FORCE`   |  Create the view regardless of the existence of the source tables or views and regardless of view privileges.  | 
|   `VISIBLE` or `INVISIBLE`   |  Specify if a column based on the view is visible or invisible.  | 
|   `WITH READ ONLY`   |  Disable DML commands.  | 
|   `WITH CHECK OPTION`   |  Specifies the level of enforcement when performing DML commands on the view.  | 

 **Examples** 

Views are classified as either simple or complex.

A simple view is a view with a single source table and no aggregate functions. DML operations can be performed on simple views and affect the base table. The following example creates and updates a simple view.

```
CREATE OR REPLACE VIEW VW_EMP
AS
SELECT EMPLOYEE_ID, LAST_NAME, EMAIL, SALARY
FROM EMPLOYEES
WHERE DEPARTMENT_ID BETWEEN 100 AND 130;
UPDATE VW_EMP
SET EMAIL=EMAIL||'.org'
WHERE EMPLOYEE_ID=110;

1 row updated.
```

A complex view is a view with several source tables or views containing joins, aggregate (group) functions, or an `ORDER BY` clause. You can’t perform DML operations directly on complex views, but you can use `INSTEAD OF` triggers as a workaround. The following example creates and updates a complex view.

```
CREATE OR REPLACE VIEW VW_DEP
AS
SELECT B.DEPARTMENT_NAME, COUNT(A.EMPLOYEE_ID) AS CNT
FROM EMPLOYEES A JOIN DEPARTMENTS B USING(DEPARTMENT_ID)
GROUP BY B.DEPARTMENT_NAME;
UPDATE VW_DEP
SET CNT=CNT +1
WHERE DEPARTMENT_NAME=90;

ORA-01732: data manipulation operation not legal on this view
```

For more information, see [CREATE VIEW](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-VIEW.html#GUID-61D2D2B4-DACC-4C7C-89EB-7E50D9594D30) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.views.pg"></a>

PostgreSQL views share functionality with Oracle views. Creating a view defines a stored query based on one or more physical database tables which runs every time the view is accessed.

Views with `INSTEAD OF INSERT` triggers can be used as the target of the `COPY` command, with this synopsis.

```
COPY view FROM source;
```
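A minimal sketch of a view made loadable with `COPY` through an `INSTEAD OF INSERT` trigger follows; all object names and the file path are hypothetical:

```
CREATE TABLE emp (id int, name text);
CREATE VIEW vw_emp AS SELECT id, name FROM emp;

CREATE FUNCTION vw_emp_insert() RETURNS trigger AS $$
BEGIN
  INSERT INTO emp VALUES (NEW.id, NEW.name);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER vw_emp_ins INSTEAD OF INSERT ON vw_emp
  FOR EACH ROW EXECUTE FUNCTION vw_emp_insert();

-- Rows loaded through the view are routed to emp by the trigger
COPY vw_emp FROM '/tmp/emp.csv' WITH (FORMAT csv);
```

`COPY FROM` can only target a view that has an `INSTEAD OF INSERT` trigger, so the trigger is required even though this simple view is otherwise automatically updatable.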

Starting with PostgreSQL 13, you can rename view columns using the `ALTER VIEW` command. This helps DBAs avoid dropping and recreating a view in order to change a column name.

The following syntax was added to the `ALTER VIEW` command:

```
ALTER VIEW [ IF EXISTS ] name RENAME [ COLUMN ] column_name TO new_column_name
```

Prior to PostgreSQL 13, the capability existed, but to change a view’s column name the DBA had to use the `ALTER TABLE` command.
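A short sketch of the PostgreSQL 13 syntax, using hypothetical view and column names:

```
CREATE VIEW vw_emp AS SELECT employee_id, last_name FROM employees;
ALTER VIEW vw_emp RENAME COLUMN last_name TO surname;
```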

 **PostgreSQL View Synopsis** 

```
CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW name [ (
column_name [, ...] ) ]
[ WITH ( view_option_name [= view_option_value] [, ... ] ) ]
AS query
[ WITH [ CASCADED | LOCAL ] CHECK OPTION ]
```

 **PostgreSQL view privileges** 

A Role or user must be granted `SELECT` and `DML` privileges on the base tables or views in order to create a view.

For more information, see [GRANT](https://www.postgresql.org/docs/10/sql-grant.html) in the *PostgreSQL documentation*.

 **PostgreSQL view parameters** 
+  `CREATE [OR REPLACE] VIEW` — Similar to the Oracle syntax. When you re-create an existing view, the new view must have the same column structure as generated by the original view (column names, column order and data types). As such, it is sometimes preferable to drop the view and use the `CREATE VIEW` statement instead.

  ```
  CREATE [OR REPLACE] VIEW VW_NAME AS SELECT COLUMNS FROM TABLE(s) [WHERE CONDITIONS];
  DROP VIEW [IF EXISTS] VW_NAME;
  ```

  The `IF EXISTS` parameter is optional.
+  `WITH [ CASCADED | LOCAL ] CHECK OPTION` — DML `INSERT` and `UPDATE` operations are verified against the view-based tables to ensure that new rows satisfy the original structure conditions or the view-defining condition. If a conflict is detected, the DML operation fails.

   `CHECK OPTION` can be `LOCAL` or `CASCADED`. `LOCAL` verifies against the view without a hierarchical check. `CASCADED` verifies all underlying base views using a hierarchical check.

 **Executing DML commands on views** 

PostgreSQL simple views are automatically updatable; unlike Oracle views, they require no special handling for DML operations. An updatable view may contain a combination of updatable and non-updatable columns. A column is updatable if it references an updatable column of the underlying base table. If not, the column is read-only, and an error is raised if an `INSERT` or `UPDATE` statement targets the column.

 **Examples** 

Creating and updating a view without the `CHECK OPTION` parameter.

```
CREATE OR REPLACE VIEW VW_DEP AS
SELECT DEPARTMENT_ID, DEPARTMENT_NAME,
MANAGER_ID, LOCATION_ID FROM DEPARTMENTS
WHERE LOCATION_ID=1700;

view VW_DEP created.

UPDATE VW_DEP SET LOCATION_ID=1600;

21 rows updated.
```

Creating and updating a view with the `LOCAL CHECK OPTION` parameter.

```
CREATE OR REPLACE VIEW VW_DEP AS
SELECT DEPARTMENT_ID, DEPARTMENT_NAME,
MANAGER_ID, LOCATION_ID FROM DEPARTMENTS
WHERE LOCATION_ID=1700 WITH LOCAL CHECK OPTION;

view VW_DEP created.

UPDATE VW_DEP SET LOCATION_ID=1600;

SQL Error: ERROR: new row violates check option for view "vw_dep"
```

For more information, see [Views](https://www.postgresql.org/docs/13/tutorial-views.html) and [CREATE VIEW](https://www.postgresql.org/docs/13/sql-createview.html) in the *PostgreSQL documentation*.

# Oracle XML DB and PostgreSQL XML type and functions
<a name="chap-oracle-aurora-pg.special.xmldb"></a>

With AWS DMS, you can migrate data from Oracle XML DB and PostgreSQL XML type and functions to other database engines supported by AWS. Oracle XML DB provides the ability to store and manage XML data in an Oracle database, while PostgreSQL XML type and functions offer similar capabilities for working with XML data in a PostgreSQL database.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |  N/A  |  Different paradigm and syntax will require application or drivers rewrite.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.xmldb.ora"></a>

Oracle XML DB is a set of Oracle Database technologies providing XML capabilities for database administrators and developers. It provides native XML support and other features including the native XMLType and XMLIndex.

XMLType represents an XML document in the database that is accessible from SQL. It supports standards such as XML Schema, XPath, XQuery, XSLT, and DOM.

XMLIndex supports all forms of XML data from highly structured to completely unstructured.

XML data can be schema-based or non-schema-based. Schema-based XML adheres to an XSD Schema Definition and must be validated. Non-schema-based XML data doesn’t require validation.

According to the Oracle documentation, the aspects you should consider when using XML are:
+ The ways that you intend to store your XML data.
+ The structure of your XML data.
+ The languages used to implement your application.
+ The ways you intend to process your XML data.

The most common features are:
+ Storage Model: Binary XML.
+ Indexing: XML search index, XMLIndex with structured component.
+ Database language: SQL, with SQL/XML functions.
+ XML languages: XQuery and XSLT.

### Storage model — binary XML
<a name="chap-oracle-aurora-pg.special.xmldb.ora.binary"></a>

Binary XML, also called post-parse persistence, is the default storage model for Oracle XML DB. It is a post-parse binary format designed specifically for XML data. Binary XML is XML schema-aware and the storage is very flexible.

You can use it for XML schema-based documents or for documents that are not based on an XML schema. You can use it with an XML schema that allows for high data variability or that evolves considerably or unexpectedly.

This storage model also provides efficient partial updating and streaming query evaluation.

The other storage option, object-relational storage, is more efficient when using XML as structured data with a minimal amount of change and varied queries. For more information, see [Oracle XML DB Developer’s Guide](https://docs.oracle.com/en/database/oracle/oracle-database/19/adxdb/xml-db-developers-guide.pdf).

### Indexing — XML search index, XMLIndex with structured component
<a name="chap-oracle-aurora-pg.special.xmldb.ora.indexing"></a>

XML Search Index provides full-text search over XML data. Oracle recommends storing XMLType data as binary XML and using XQuery Full Text (XQFT).

If you are not using binary storage and your data is structured XML, you can use Oracle Text indexes, regular string functions such as `contains`, or the XPath `ora:contains` function.

If you want to use predicates such as `XMLExists` in your `WHERE` clause, you must create an XML search index.

 **Examples** 

The following example creates a SQL directory object, which is a logical name in the database for a physical directory on the host computer. This directory contains XML files. The example inserts XML content from the `purOrder.xml` file into the orders table.

Create an XMLType table.

```
CREATE TABLE orders OF XMLType;
CREATE DIRECTORY xmldir AS path_to_folder_containing_XML_file;
INSERT INTO orders VALUES (XMLType(BFILENAME('XMLDIR',
  'purOrder.xml'),NLS_CHARSET_ID('AL32UTF8')));
```

Create table with an XMLType column.

```
CREATE TABLE xwarehouses (warehouse_id NUMBER, warehouse_spec XMLTYPE);
```

Create an XMLType view.

```
CREATE VIEW warehouse_view AS
SELECT VALUE(p) AS warehouse_xml FROM xwarehouses p;
```

Insert data into an XMLType column.

```
INSERT INTO xwarehouses
VALUES(100, '<?xml version="1.0"?>
<PO pono="1">
<PNAME>Po_1</PNAME>
<CUSTNAME>John</CUSTNAME>
<SHIPADDR>
<STREET>1033, Main Street</STREET>
<CITY>Sunnyvale</CITY>
<STATE>CA</STATE>
</SHIPADDR></PO>')
```

Create an XML search index and query it with XQuery:

1. Grant the user all required privileges and set the relevant parameters in the Oracle Text schema.

1. Create Oracle text section and preference.

1. Create the XML search index (regular index associated with the objects).

```
BEGIN
CTX_DDL.create_section_group('secgroup', 'PATH_SECTION_GROUP');
CTX_DDL.set_sec_grp_attr('secgroup', 'XML_ENABLE', 'T');
CTX_DDL.create_preference('pref', 'BASIC_STORAGE');
CTX_DDL.set_attribute('pref','D_TABLE_CLAUSE', 'TABLESPACE ts_name LOB(DOC) STORE AS
SECUREFILE(TABLESPACE ts_name COMPRESS MEDIUM CACHE)');
CTX_DDL.set_attribute('pref','I_TABLE_CLAUSE','TABLESPACE ts_name LOB(TOKEN_INFO)
STORE AS SECUREFILE(TABLESPACE ts_name NOCOMPRESS CACHE)');
END;
/
CREATE INDEX po_ctx_idx ON po_binxml(OBJECT_VALUE)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS('storage pref section group secgroup');
```

Query using the preceding index with XQuery. XQuery is the W3C standard language for generating, querying, and updating XML content; it plays a role in the XML world similar to SQL in the relational world and is a more advanced form of XPath. XSLT, as noted at the beginning of this topic, targets structured data, much like regular Oracle Text indexes.

The following query searches the path `/PurchaseOrder/LineItems/LineItem/Description` for values containing **Big** and **Street**, and returns each match wrapped in a **Title** tag (in the select list only).

```
SELECT XMLQuery('for $i in /PurchaseOrder/LineItems/LineItem/Description
  where $i[. contains text "Big" ftand "Street"]
  return <Title>{$i}</Title>'
  PASSING OBJECT_VALUE RETURNING CONTENT)
FROM po_binxml
WHERE XMLExists('/PurchaseOrder/LineItems/LineItem/Description
  [. contains text "Big" ftand "Street"]'
  PASSING OBJECT_VALUE);
```

XMLIndex with structured component is used for queries that project fixed structured islands of XML content, even if the surrounding data is relatively unstructured. A structured XMLIndex component organizes such islands in a relational format.

You must define the parts of XML data that you search in queries (applies to both XML schema-based and non-schema-based data).

Create an XMLIndex with structured component:

1. Create the base XMLIndex on `po_binxml` table. `OBJECT_VALUE` is the XML data stored in the table. All definitions of XML types and Objects are from the XDB schema in the database.

1. Use the `DBMS_XMLINDEX.registerParameter` procedure to add another structure to the index.

1. Create tables (`po_idx_tab` and `po_index_lineitem`) to store index data as structured data. Each table name is followed by the root path in the XML data (`/PurchaseOrder` and `/LineItem`), and each column maps to another path under that root. Note that the last column of the `po_idx_tab` table is XMLType; it takes everything under its path and saves it as XML data.

1. Add the group of structure to the index.

```
CREATE INDEX po_xmlindex_ix ON po_binxml (OBJECT_VALUE)
INDEXTYPE IS XDB.XMLIndex PARAMETERS ('PATH TABLE path_tab');
BEGIN
DBMS_XMLINDEX.registerParameter(
'myparam',
'ADD_GROUP GROUP po_item
XMLTable po_idx_tab ''/PurchaseOrder''
COLUMNS reference VARCHAR2(30) PATH ''Reference'',
requestor VARCHAR2(30) PATH ''Requestor'',
username VARCHAR2(30) PATH ''User'',
lineitem XMLType PATH ''LineItems/LineItem'' VIRTUAL
XMLTable po_index_lineitem ''/LineItem'' PASSING lineitem
COLUMNS itemno BINARY_DOUBLE PATH ''@ItemNumber'',
description VARCHAR2(256) PATH ''Description'',
partno VARCHAR2(14) PATH ''Part/@Id'',
quantity BINARY_DOUBLE PATH ''Part/@Quantity'',
unitprice BINARY_DOUBLE PATH ''Part/@UnitPrice''');
END;
/

ALTER INDEX po_xmlindex_ix PARAMETERS('PARAM myparam');
```

For more information, see [Indexes for XMLType Data](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/XMLQUERY.html#GUID-9E8D3220-2CF5-4C63-BDC2-0526D57B9CDB) in the *Oracle documentation*.

### SQL/XML functions
<a name="chap-oracle-aurora-pg.special.xmldb.ora.functions"></a>

Oracle Database provides two main SQL/XML groups:
+ SQL/XML publishing functions.
+ SQL/XML query and update functions.

#### SQL/XML publishing functions
<a name="chap-oracle-aurora-pg.special.xmldb.ora.functions.publish"></a>

SQL/XML publishing functions are SQL results generated from XML data (also called SQL/XML generation functions).

 **XMLQuery** is used in `SELECT` clauses to return the result as XMLType data (See the previous example for creating an XML search index).

 **XMLTable** is used in `FROM` clauses to get results using XQuery and present them as a virtual table (the results can also be inserted into an existing database table).

 **Example** 

Use XMLTable to generate a virtual table from the XML value (`OBJECT_VALUE`) and generate columns under the root (`/PurchaseOrder`).

One of the columns is of type XMLType; a second XMLTable call drills deeper into the XML data by referencing the first virtual table (`po`) and creates another virtual table. A path that starts with `@` refers to an attribute of the enclosing tag.

```
SELECT po.reference, li.*
FROM po_binaryxml p,
XMLTable('/PurchaseOrder' PASSING p.OBJECT_VALUE
COLUMNS
reference VARCHAR2(30) PATH 'Reference',
lineitem XMLType PATH 'LineItems/LineItem') po,
XMLTable('/LineItem' PASSING po.lineitem
COLUMNS
itemno NUMBER(38) PATH '@ItemNumber',
description VARCHAR2(256) PATH 'Description',
partno VARCHAR2(14) PATH 'Part/@Id',
quantity NUMBER(12, 2) PATH 'Part/@Quantity',
unitprice NUMBER(8, 4) PATH 'Part/@UnitPrice') li;
```

 **XMLExists** is used in `WHERE` clauses to check whether an XQuery expression returns a non-empty sequence. If it does, it returns `TRUE`. Otherwise, it returns `FALSE`. In the following example, the query searches the `purchaseorder` table for purchase orders where the `SpecialInstructions` tag is set to `Expedite`.

```
SELECT OBJECT_VALUE FROM purchaseorder
  WHERE XMLExists('/PurchaseOrder[SpecialInstructions="Expedite"]'
  PASSING OBJECT_VALUE);
```

 **XMLCast** is used in `SELECT` clauses to convert scalar values returned from XQuery to `NUMBER`, `VARCHAR2`, `CHAR`, `CLOB`, `BLOB`, `REF`, or `XMLType`. For example, after finding the objects that have `SpecialInstructions` set to `Expedite`, XMLCast returns the Reference in each item as `VARCHAR2(100)`.

```
SELECT XMLCast(XMLQuery('/PurchaseOrder/Reference'
  PASSING OBJECT_VALUE
  RETURNING CONTENT) AS VARCHAR2(100)) "REFERENCE"
  FROM purchaseorder
  WHERE XMLExists('/PurchaseOrder[SpecialInstructions="Expedite"]'
  PASSING OBJECT_VALUE);
```

For more information, see [XMLELEMENT](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/XMLELEMENT.html#GUID-DEA75423-00EA-4034-A246-4A774ADC988E) in the *Oracle documentation*.

#### SQL/XML query and update functions
<a name="chap-oracle-aurora-pg.special.xmldb.ora.functions.query"></a>

SQL/XML query and update functions are used to query and update XML content as part of regular SQL operations.

For XMLQuery, see the preceding example.

An `UPDATE` statement can use an XMLType instance, or SQL functions or XML constructors that return an XML instance, in its `SET` clause. In the following example, after XMLExists finds the relevant item, the command sets `OBJECT_VALUE` to the content of a new file (`NEW-DAUSTIN-20021009123335811PDT.xml`) located in the `XMLDIR` directory.

```
UPDATE purchaseorder po
SET po.OBJECT_VALUE = XMLType(bfilename('XMLDIR','NEW-DAUSTIN-20021009123335811PDT.xml'),
  nls_charset_id('AL32UTF8'))
WHERE XMLExists('$p/PurchaseOrder[Reference="DAUSTIN-20021009123335811PDT"]'
  PASSING po.OBJECT_VALUE AS "p");
```

For more information, see [XMLQUERY](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/XMLQUERY.html#GUID-9E8D3220-2CF5-4C63-BDC2-0526D57B9CDB) in the *Oracle documentation*.

### SQL and PL/SQL
<a name="chap-oracle-aurora-pg.special.xmldb.ora.sql"></a>

Conversion of SQL and PL/SQL is covered in the [SQL and PL/SQL](chap-oracle-aurora-pg.sql.md) topic.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.xmldb.pg"></a>

You can use the `xml` data type in PostgreSQL when creating tables. The main advantage of keeping XML data in an `xml` column rather than a regular text column is that the `xml` type checks the input and raises an error if you try to insert data in the wrong format. In addition, there are support functions to perform type-safe operations on it. The `xml` type can store well-formed documents, as defined by the XML standard, as well as content fragments, which are defined by the production `XMLDecl`; this means that content fragments can have more than one top-level element or character node.

You can use `IS DOCUMENT` to evaluate whether a particular xml value is a full document or only a content fragment.

The `xmltable()` and `xpath()` functions may not work correctly with non-ASCII data when the server encoding is not UTF-8.
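For example, a short `xpath()` call on a hypothetical table returns an array of matching nodes:

```
CREATE TABLE books (info xml);

INSERT INTO books VALUES
  (XMLPARSE (DOCUMENT '<book><title>PostgreSQL</title></book>'));

-- xpath() returns an array of all nodes matching the expression.
SELECT xpath('/book/title/text()', info) FROM books;

{PostgreSQL}
```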

 **Examples** 

Create XML data and insert it into the table.

The first insert is a document and the second is only content; both kinds of data can be inserted into the same column.

If you insert malformed XML (for example, with a missing tag), the insert fails with a relevant error.

The final query retrieves only the document records.

```
CREATE TABLE test (a xml);

insert into test values (XMLPARSE (DOCUMENT '<?xml version="1.0"?><Series><title>Simpsons</title><chapter>...</chapter></Series>'));

insert into test values (XMLPARSE (CONTENT 'note<tag>value</tag><tag>value</tag>'));

select * from test where a IS DOCUMENT;
```

The `XMLTABLE` function, introduced in PostgreSQL 10, converts XML data to rows. This can be very helpful for reading XML data using a table equivalent.

```
CREATE TABLE xmldata_sample AS SELECT
xml $$
<ROWS>
  <ROW id="1">
    <EMP_ID>532</EMP_ID>
    <EMP_NAME>John</EMP_NAME>
  </ROW>
  <ROW id="5">
    <EMP_ID>234</EMP_ID>
    <EMP_NAME>Carl</EMP_NAME>
    <EMP_DEP>6</EMP_DEP>
    <SALARY unit="dollars">10000</SALARY>
  </ROW>
  <ROW id="6">
    <EMP_ID>123</EMP_ID>
    <EMP_DEP>8</EMP_DEP>
    <SALARY unit="dollars">5000</SALARY>
  </ROW>
</ROWS>
$$ AS data;

SELECT xmltable.*
  FROM xmldata_sample,
    XMLTABLE('//ROWS/ROW'
      PASSING data
      COLUMNS id int PATH '@id',
        ordinality FOR ORDINALITY,
        "EMP_NAME" text,
        "EMP_ID" text PATH 'EMP_ID',
        SALARY_USD float PATH 'SALARY[@unit = "dollars"]',
        MANAGER_NAME text PATH 'MANAGER_NAME' DEFAULT 'not specified');

id  ordinality  EMP_NAME  EMP_ID  salary_usd  manager_name
1   1           John      532                 not specified
5   2           Carl      234     10000       not specified
6   3                     123     5000        not specified
```

## Summary
<a name="chap-oracle-aurora-pg.special.xmldb.summary"></a>


| Description | PostgreSQL | Oracle | 
| --- | --- | --- | 
|  Create table with XML  |  <pre>CREATE TABLE test (a xml);</pre>  |  <pre>CREATE TABLE test OF XMLType;<br />or<br />CREATE TABLE test (doc XMLType);</pre>  | 
|  Insert data into xml column  |  <pre>INSERT INTO test<br />VALUES (XMLPARSE (DOCUMENT<br />'<?xml version="1.0"?><br /><PO pono="1"><br /><PNAME>Po_1</PNAME><br /><CUSTNAME>John</CUSTNAME><br /><SHIPADDR><br />  <STREET>1033, Main Street</STREET><br />  <CITY>Sunnyvale</CITY><br />  <STATE>CA</STATE><br /></SHIPADDR> </PO>'));</pre>  |  <pre>INSERT INTO test<br />VALUES ('<?xml version="1.0"?><br /><PO pono="1"> <PNAME>Po_1</PNAME><br /><CUSTNAME>John</CUSTNAME><br /><SHIPADDR><br />  <STREET>1033, Main Street</STREET><br />  <CITY>Sunnyvale</CITY><br />  <STATE>CA</STATE><br /></SHIPADDR> </PO>')</pre>  | 
|  Create Index  |  We index a specific path so the queries must be the same <pre>CREATE INDEX test_isbn ON test<br />(((((xpath('/path/tag/text()', a))[1])::text)));</pre>  |  <pre>CREATE INDEX test_idx ON test (OBJECT_VALUE)<br />INDEXTYPE IS XDB.XMLIndex<br />PARAMETERS ('PATH TABLE path_tab');<br /><br />BEGIN<br />DBMS_XMLINDEX.registerParameter(<br />'myparam', 'ADD_GROUP GROUP a_item<br />XMLTable test_idx_tab ''/Path'' COLUMNS tag<br />VARCHAR2(30) PATH ''tag''');<br />END;<br />/<br /><br />ALTER INDEX test_idx PARAMETERS<br />('PARAM myparam');</pre>  | 
|  Create Fulltext Index  |  We index a specific path so the queries must be the same <pre>CREATE INDEX my_funcidx ON test<br />USING GIN ( CAST(xpath('/PNAME/text()', a)<br />AS TEXT[]) );</pre>  |  After the preference and section are created in Oracle Text <pre>CREATE INDEX test_idx ON test (OBJECT_VALUE)<br />INDEXTYPE IS CTXSYS.CONTEXT<br />PARAMETERS('storage pref section group secgroup');</pre>  | 
|  Query using XQuery  |  Not Supported  |  <pre>SELECT XMLQuery('for $i in<br />/PurchaseOrder/LineItems/LineItem/Description<br />where $i[. contains text "Big"]<br />return <Title>{$i}</Title>'<br />PASSING OBJECT_VALUE RETURNING CONTENT)<br />FROM xml_tbl;</pre>  | 
|  Query using XPath  |  <pre>SELECT xpath('//student/firstname/text()', a) FROM test</pre>  |  <pre>select sys.XMLType.extract<br />(doc,'/student/firstname/text()') firstname from test;</pre>  | 
|  Function to check if tag exists and function to cast and return another data type (string)  |  <pre>select cast (xpath('//book/title/text()', a) as text[])<br />as BookTitle from test<br />where xmlexists('//book/title' PASSING by ref a);</pre>  |  <pre>SELECT XMLCast(XMLQuery<br />('/PurchaseOrder/Reference'<br />  PASSING OBJECT_VALUE<br />  RETURNING CONTENT) AS VARCHAR2(100))<br />"REFERENCE"<br />  FROM purchaseorder<br />  WHERE XMLExists('/PurchaseOrder[SpecialInstructions="Expedite"]'<br />  PASSING OBJECT_VALUE);</pre>  | 
|  Validate schema using XSD  |  Not out-of-the-box but can create trigger before insert or delete and find tag with XPATH and try to cast the type of the value to know if it’s OK, then if something is wrong stop the insert or delete command  |  Supported  | 

For more information, see [XML Type](https://www.postgresql.org/docs/13/datatype-xml.html) and [XML Functions](https://www.postgresql.org/docs/13/functions-xml.html) in the *PostgreSQL documentation*.

# Oracle Log Miner and PostgreSQL logging options
<a name="chap-oracle-aurora-pg.special.log"></a>

With AWS DMS, you can migrate data from Oracle and PostgreSQL databases while maintaining transaction integrity by utilizing Oracle Log Miner and PostgreSQL logical replication capabilities. Oracle Log Miner provides access to redo log files, allowing you to capture data manipulation language (DML) and data definition language (DDL) changes made to Oracle databases. PostgreSQL logical replication streams write-ahead log (WAL) records, enabling data synchronization between primary and standby servers.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  PostgreSQL doesn’t support LogMiner, workaround is available.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.special.log.ora"></a>

Oracle Log Miner is a tool for querying the database Redo Logs and the Archived Redo Logs using an SQL interface. Using Log Miner, you can analyze the content of database “transaction logs” (online and archived redo logs) and gain historical insights on past database activity such as data modification by individual DML statements.

 **Examples** 

The following examples demonstrate how to use Log Miner to view DML statements that run on the employees table.

Find the current redo log file.

```
SELECT V$LOG.STATUS, MEMBER
FROM V$LOG, V$LOGFILE
WHERE V$LOG.GROUP# = V$LOGFILE.GROUP#
AND V$LOG.STATUS = 'CURRENT';

STATUS    MEMBER
CURRENT   /u01/app/oracle/oradata/orcl/redo02.log
```

Use the `DBMS_LOGMNR.ADD_LOGFILE` procedure. Pass the file path as a parameter to the Log Miner API.

```
BEGIN
DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/oradata/orcl/redo02.log');
END;
/

PL/SQL procedure successfully completed.
```

Start Log Miner using the `DBMS_LOGMNR.START_LOGMNR` procedure.

```
BEGIN
DBMS_LOGMNR.START_LOGMNR(options=>
dbms_logmnr.dict_from_online_catalog);
END;
/

PL/SQL procedure successfully completed.
```

Run a DML statement.

```
UPDATE HR.EMPLOYEES SET SALARY=SALARY+1000 WHERE EMPLOYEE_ID=116;
COMMIT;
```

Query the `V$LOGMNR_CONTENTS` view to see the DML commands captured by Log Miner.

```
SELECT TO_CHAR(TIMESTAMP,'mm/dd/yy hh24:mi:ss') TIMESTAMP,
SEG_NAME, OPERATION, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE TABLE_NAME = 'EMPLOYEES'
AND OPERATION = 'UPDATE';

TIMESTAMP          SEG_NAME   OPERATION
10/09/17 06:43:44  EMPLOYEES  UPDATE

SQL_REDO                                         SQL_UNDO
update "HR"."EMPLOYEES" set                      update "HR"."EMPLOYEES" set
"SALARY" = '3900' where "SALARY" = '2900'        "SALARY" = '2900' where "SALARY" = '3900'
and ROWID = 'AAAViUAAEAAABVvAAQ';                and ROWID = 'AAAViUAAEAAABVvAAQ';
```

For more information, see [Using LogMiner to Analyze Redo Log Files](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-logminer-utility.html#GUID-3417B738-374C-4EE3-B15C-3A66E01AE2B5) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.special.log.pg"></a>

PostgreSQL doesn’t provide a feature that is directly equivalent to Oracle Log Miner. However, several alternatives exist which allow viewing historical database activity in PostgreSQL.

 **Using pg\_stat\_statements** 

The `pg_stat_statements` extension module tracks query run details with statistical information. The `pg_stat_statements` view presents a single row for each database operation that was logged, including information about the user, the query, the number of rows retrieved by the query, and more.

 **Examples** 

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Parameter groups** and then choose the parameter group to edit.

1. On the **Parameter group actions** menu, choose **Edit**.

1. Set the following parameters:
   + shared\_preload\_libraries = 'pg\_stat\_statements'
   + pg\_stat\_statements.max = 10000
   + pg\_stat\_statements.track = all

1. Choose **Save changes**.

A database reboot may be required for the updated values to take effect.

Connect to your database and run the following command.

```
CREATE EXTENSION PG_STAT_STATEMENTS;
```

Test the `PG_STAT_STATEMENTS` view to see captured database activity.

```
UPDATE EMPLOYEES
SET SALARY=SALARY+1000
WHERE EMPLOYEE_ID=116;

SELECT *
FROM PG_STAT_STATEMENTS
WHERE LOWER(QUERY) LIKE '%update%';

[ RECORD 1 ]
userid               16393
dbid                 16394
queryid              2339248071
query                UPDATE EMPLOYEES + SET SALARY = SALARY + ? + WHERE EMPLOYEE_ID=?
calls                1
total_time           11.989
min_time             11.989
max_time             11.989
mean_time            11.989
stddev_time          0
rows                 1
shared_blks_hit      15
shared_blks_read     10
shared_blks_dirtied  0
shared_blks_written  0
local_blks_hit       0
local_blks_read      0
local_blks_dirtied   0
local_blks_written   0
temp_blks_read       0
temp_blks_written    0
blk_read_time        0
blk_write_time       0
```

**Note**  
PostgreSQL `pg_stat_statements` doesn’t provide a feature that is equivalent to the LogMiner `SQL_UNDO` column.
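A common workaround for the missing `SQL_UNDO` information is a row-level audit trigger that preserves the old row image before an `UPDATE` or `DELETE`, from which an undo statement can be reconstructed manually. The following is a minimal sketch; the table and function names are hypothetical.

```
CREATE TABLE employees_audit (
  changed_at timestamptz DEFAULT now(),
  operation  text,
  old_row    text
);

CREATE OR REPLACE FUNCTION employees_audit_fn() RETURNS trigger AS $$
BEGIN
  -- TG_OP is 'UPDATE' or 'DELETE'; OLD holds the pre-change row.
  INSERT INTO employees_audit (operation, old_row)
  VALUES (TG_OP, OLD::text);
  IF TG_OP = 'DELETE' THEN
    RETURN OLD;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employees_audit_trg
  BEFORE UPDATE OR DELETE ON employees
  FOR EACH ROW EXECUTE FUNCTION employees_audit_fn();
```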

 **DML / DDL Database Activity Logging** 

DML and DDL operations can be tracked in the PostgreSQL log file (postgres.log) and viewed using the AWS console.

 **Examples** 

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Parameter groups** and then choose the parameter group to edit.

1. On the **Parameter group actions** menu, choose **Edit**.

1. Set the following parameters:
   + log\_statement = 'ALL'
   + log\_min\_duration\_statement = 1

1. Choose **Save changes**.

A database reboot may be required for the updated values to take effect.

Test DDL/DML logging.

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Databases**, then choose your database, and choose **Logs**.

1. Sort the log by the `Last Written` column to show recent logs.

1. For the log you want to review, choose **View**. For example, the following image shows the PostgreSQL log file with a logged `UPDATE` command.

![\[A screenshot of a PostgreSQL log file\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-pg-log.png)


 **Amazon Aurora Performance Insights** 

The Amazon Aurora Performance Insights dashboard provides information about current and historical SQL statements, runs, and workloads. Note that Enhanced Monitoring must be enabled during Amazon Aurora instance configuration.

 **Examples** 

1. Sign in to the AWS Management Console and choose **RDS**.

1. Choose **Databases**, then choose your database.

1. On the **Actions** menu, choose **Modify**.

1. Make sure that **Enable Enhanced Monitoring** is set to **Yes**.

1. Choose **Apply immediately** and then choose **Continue**.

1. On the AWS console, choose **RDS**, and then choose **Performance insights**.

1. Choose the instance to monitor.

1. Specify the timeframe and the monitoring scope (Waits, SQL, Hosts and Users).

For more information, see [Error Reporting and Logging](https://www.postgresql.org/docs/13/runtime-config-logging.html) and [pg\_stat\_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) in the *PostgreSQL documentation* and [PostgreSQL database log files](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.PostgreSQL.html) in the *Amazon RDS user guide*.