

# Oracle and PostgreSQL high availability and disaster recovery
<a name="chap-oracle-aurora-pg.hadr"></a>

This section includes pages about Oracle and PostgreSQL high availability and disaster recovery capabilities.

**Topics**
+ [Oracle Active Data Guard and PostgreSQL replicas](chap-oracle-aurora-pg.hadr.replicates.md)
+ [Oracle Real Application Clusters and PostgreSQL Aurora architecture](chap-oracle-aurora-pg.hadr.rac.md)
+ [Oracle Traffic Director and Amazon RDS Proxy for Amazon Aurora PostgreSQL](chap-oracle-aurora-pg.hadr.traffic.md)
+ [Oracle Data Pump and PostgreSQL pg_dump and pg_restore](chap-oracle-aurora-pg.hadr.datapump.md)
+ [Oracle Flashback Database and PostgreSQL Amazon Aurora snapshots](chap-oracle-aurora-pg.hadr.flashback.md)
+ [Oracle Flashback Table and Amazon Aurora PostgreSQL snapshots](chap-oracle-aurora-pg.hadr.snapshots.md)
+ [Oracle Recovery Manager (RMAN) and Amazon RDS snapshots](chap-oracle-aurora-pg.hadr.rman.md)
+ [Oracle SQL\*Loader and PostgreSQL pg_dump and pg_restore](chap-oracle-aurora-pg.hadr.dump.md)

# Oracle Active Data Guard and PostgreSQL replicas
<a name="chap-oracle-aurora-pg.hadr.replicates"></a>

With AWS DMS, you can create and manage Oracle Active Data Guard and PostgreSQL logical replication instances to maintain standby databases for disaster recovery and read scaling. Oracle Active Data Guard and PostgreSQL logical replication provide continuous data protection by transmitting database changes from a primary database to one or more standby databases.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  Distribute load, applications, or users across multiple instances  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.replicates.ora"></a>

Oracle Active Data Guard (ADG) is a synced database architecture with primary and standby databases. The difference between Data Guard and ADG is that ADG standby databases allow read access only.

The following diagram illustrates the ADG architecture.

![\[Active Data Guard architecture\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-active-data-guard.png)

+  **Primary DB** — The main database open to read and write operations.
+  **Redo/Archive** — The redo files and archives that store the redo entries for recovery operations.
+  **Data Broker** — The Data Guard broker service is responsible for all failover and syncing operations.
+  **Standby DB** — The secondary database that allows read operations only. This database remains in recovery mode until it is shut down or becomes the primary (failover or switchover).
+  **Log Apply** — Runs all the redo log entries from the redo and archive files on the standby DB.
+  **Redo/Archive** — Contains the redo and archive files that are synced from the primary's log and archive files.
+  **Data Broker** — The Data Guard broker service is responsible for all failover and syncing operations.

All components use the SQL\*Net protocol.

 **Special features** 
+ You can choose asynchronous replication for best performance or synchronous replication for best data protection.
+ You can temporarily convert a standby database to a snapshot standby and allow read/write operations. When you are done running QA, testing, loads, or other operations, you can switch it back to standby.
+ You can specify a replication delay between the primary and standby databases to protect against human errors (for example, a 12-hour delay).

For more information, see [Creating a Physical Standby Database](https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/creating-oracle-data-guard-physical-standby.html#GUID-B511FB6E-E3E7-436D-94B5-071C37550170) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.replicates.pg"></a>

You can use Aurora replicas for scaling read operations and increasing availability, similar to Oracle Active Data Guard, but with less configuration and administration. You can easily manage many replicas from the Amazon RDS console. Alternatively, you can use the AWS CLI for automation.

When you create Aurora PostgreSQL instances, use one of the two following replication options:
+  **Multi-AZ (Availability Zone)** — Create a replica instance in a different Availability Zone within the same AWS Region.
+  **Instance Read Replicas** — Create a replica instance in the same AWS Region.

For instance options, you can use one of the two following options:
+ Create Aurora Replica.
+ Create Cross Region Read Replica.

The main differences between these two options are:
+ Cross Region creates a new reader cluster in a different AWS Region. Use Cross Region for a higher level of availability and to keep the data closer to your end users.
+ Cross Region has higher replication lag between the two instances.
+ Additional charges apply for transferring data between the two regions.

DDL statements that run on the primary instance may interrupt database connections on the associated Aurora Replicas. If an Aurora Replica connection is actively using a database object such as a table, and that object is modified on the primary instance using a DDL statement, the Aurora Replica connection is interrupted.

Rebooting the primary instance of an Amazon Aurora database cluster also automatically reboots the Aurora Replicas for that database cluster.

Before you create a cross region replica, turn on the `binlog_format` parameter.

When using Multi-AZ, the primary database instance automatically fails over to the standby replica if any of the following conditions occur:
+ The primary database instance fails.
+ An Availability Zone experiences an outage.
+ The database instance class is changed.
+ The operating system of the database instance is undergoing software patching.
+ A manual failover of the database instance was initiated using reboot with failover.
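As a sketch of the last condition, a manual failover can be triggered with the AWS CLI; the cluster and instance identifiers below are hypothetical placeholders.

```shell
# Trigger a manual failover of an Aurora cluster (hypothetical cluster name).
# Aurora promotes the replica with the highest failover priority.
aws rds failover-db-cluster \
    --db-cluster-identifier my-aurora-pg-cluster

# Optionally, name a specific replica to promote to primary:
aws rds failover-db-cluster \
    --db-cluster-identifier my-aurora-pg-cluster \
    --target-db-instance-identifier my-aurora-pg-replica-1
```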

 **Examples** 

The following walkthrough demonstrates how to create a replica/reader.

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Instance actions** and choose **Add reader**.

1. Enter all required details and choose **Create**.

After the replica is created, you can run read and write operations on the primary instance and read-only operations on the replica.
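The same steps can be scripted with the AWS CLI; a minimal sketch, assuming a hypothetical cluster name and instance class:

```shell
# Add a reader to an existing Aurora PostgreSQL cluster (names are placeholders).
# Creating a DB instance inside an Aurora cluster makes it a replica of the writer.
aws rds create-db-instance \
    --db-instance-identifier my-aurora-pg-replica-1 \
    --db-cluster-identifier my-aurora-pg-cluster \
    --db-instance-class db.r6g.large \
    --engine aurora-postgresql \
    --promotion-tier 1
```

The `--promotion-tier` value controls which replica Aurora prefers to promote during a failover (lower values have higher priority).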

### Compare Oracle Active Data Guard and Aurora PostgreSQL replicas
<a name="chap-oracle-aurora-pg.hadr.replicates.pg.compare"></a>


| Description | Oracle Active Data Guard |  Aurora PostgreSQL replicas | 
| --- | --- | --- | 
|  How to switch over  |  <pre>ALTER DATABASE SWITCHOVER TO DBREP VERIFY;</pre>  |  Note that you can’t choose which instance to fail over to; the instance with the highest priority becomes the writer (primary).  | 
|  Define automatic failover  |  <pre>EDIT DATABASE db1 SET PROPERTY<br />FASTSTARTFAILOVERTARGET='db1rep';<br /><br />EDIT DATABASE db1rep SET PROPERTY<br />FASTSTARTFAILOVERTARGET='db1';<br /><br />ENABLE FAST_START FAILOVER;</pre>  |  Use Multi-AZ on instance creation or by modifying existing instance.  | 
|  Asynchronous or synchronous replication  |  Change to synchronous <pre>ALTER SYSTEM SET<br />LOG_ARCHIVE_DEST_2='SERVICE=db1rep<br />AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,<br />PRIMARY_ROLE) DB_UNIQUE_NAME=db1rep';<br />ALTER DATABASE SET STANDBY<br />DATABASE TO MAXIMIZE AVAILABILITY;</pre> Change to asynchronous <pre>ALTER SYSTEM SET<br />LOG_ARCHIVE_DEST_2='SERVICE=db1rep<br />NOAFFIRM<br />ASYNC VALID_FOR=(ONLINE_LOGFILES,<br />PRIMARY_ROLE) DB_UNIQUE_NAME=db1rep';<br />ALTER DATABASE SET STANDBY<br />DATABASE TO MAXIMIZE PERFORMANCE;</pre>  |  Not supported. Only asynchronous replication is in use.  | 
|  Open standby to read/write and continue syncing afterwards  |  <pre>CONVERT DATABASE db1rep<br />  TO SNAPSHOT STANDBY;<br />CONVERT DATABASE db1rep<br />  TO PHYSICAL STANDBY;</pre>  |  Not supported but you can: restore your database from snapshot, run your QA, testing or other operations on the restored instance. After you finish, drop the restored instance.  | 
|  Create delayed replication  |  Create a 5-minute delay <pre>ALTER DATABASE<br />  RECOVER MANAGED STANDBY<br />  DATABASE CANCEL;<br />ALTER DATABASE<br />  RECOVER MANAGED STANDBY<br />  DATABASE DELAY 5<br />  DISCONNECT FROM SESSION;</pre> Return to no delay <pre>ALTER DATABASE<br />  RECOVER MANAGED STANDBY<br />  DATABASE CANCEL;<br />ALTER DATABASE<br />  RECOVER MANAGED STANDBY<br />  DATABASE NODELAY<br />  DISCONNECT FROM SESSION;</pre>  |  Not supported  | 

For more information, see [Replication with Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html) in the *Amazon Aurora User Guide* and [Multi-AZ deployments for high availability](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) in the *Amazon RDS User Guide*.

# Oracle Real Application Clusters and PostgreSQL Aurora architecture
<a name="chap-oracle-aurora-pg.hadr.rac"></a>

With AWS DMS, you can migrate your on-premises Oracle Real Application Clusters (RAC) and PostgreSQL databases to Amazon Aurora, a fully managed relational database service. Oracle RAC provides scalability and high availability by allowing multiple instances to access a single database. Similarly, Amazon Aurora PostgreSQL clusters consist of a writer instance and multiple reader instances, enabling read scaling and failover support.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |  N/A  |  N/A  |  Distribute load, applications, or users across multiple instances.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.rac.ora"></a>

Oracle Real Application Clusters (RAC) is one of the most advanced and capable technologies providing highly available and scalable relational databases. It allows multiple Oracle instances to access a single database. Applications can access the database through the multiple instances in Active-Active mode.

The following diagram illustrates the Oracle RAC architecture.

![\[Oracle RAC architecture\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-oracle-rac.png)


Oracle RAC requires network configuration of SCAN IPs, VIP IPs, interconnect, and other items. As a best practice, all servers should run the same versions of Oracle software.

Because of the shared nature of the RAC cluster architecture—specifically, having all nodes write to a single set of database data files on disk—the following two special coordination mechanisms ensure Oracle database objects and data maintain ACID compliance:
+  **GCS (Global Cache Services)** — Tracks the location and status of the database data blocks and helps guarantee data integrity for global access across all cluster nodes.
+  **GES (Global Enqueue Services)** — Performs concurrency control across all cluster nodes including cache locks and transactions.

These services, which run as background processes on each cluster node, are essential for serializing access to shared data structures in an Oracle database.

Shared storage is another essential component in the Oracle RAC architecture. All cluster nodes read and write data to the same physical database files stored on a disk accessible by all nodes. Most customers rely on high-end storage hardware to provide the shared storage capabilities required for RAC.

In addition, Oracle provides its own software-based storage/disk management mechanism called Automatic Storage Management (ASM). ASM is implemented as a set of special background processes that run on all cluster nodes and allow for easy management of the database storage layer.

### Performance and scale-out with Oracle RAC
<a name="chap-oracle-aurora-pg.hadr.rac.ora.performance"></a>

You can add new nodes to an existing RAC cluster without downtime. Adding more nodes increases the level of high availability and enhances performance.

Although you can scale read performance easily by adding more cluster nodes, scaling write performance is more complicated. Technically, Oracle RAC can scale writes and reads together when adding new nodes to the cluster, but attempts from multiple sessions to modify rows that reside in the same physical Oracle block (the lowest level of logical I/O performed by the database) can cause write overhead for the requested block and impact write performance.

To reduce write-concurrency overhead, RAC implements a “smart mastering” mechanism. This mechanism enables the database to determine which service causes which rows to be read into the buffer cache and to master the data blocks only on the nodes where that service is active. Even so, scaling writes in RAC isn’t as straightforward as scaling reads.

With the limitations for pure write scale-out, many Oracle RAC customers choose to split their RAC clusters into multiple services, which are logical groupings of nodes in the same RAC cluster. By using services, you can use Oracle RAC to perform direct writes to specific cluster nodes. This is usually done in one of two ways:
+ Splitting writes from different individual modules in the application (that is, groups of independent tables) to different nodes in the cluster. This approach is also known as application partitioning (not to be confused with database table partitions).
+ In extremely non-optimized workloads with high concurrency, directing all writes to a single RAC node and load-balancing only the reads.

In summary, Oracle Real Application Clusters provides two major benefits:
+ Multiple database nodes within a single RAC cluster provide increased high availability. No single point of failure exists from the database servers themselves. However, the shared storage requires storage-based high availability or disaster recovery solutions.
+ Multiple cluster database nodes enable scaling-out query performance across multiple servers.

For more information, see [Oracle Real Application Clusters](https://docs.oracle.com/en/database/oracle/oracle-database/19/racad/index.html) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.rac.pg"></a>

Aurora extends the vanilla versions of PostgreSQL in two major ways:
+ Adds enhancements to the PostgreSQL database kernel itself to improve performance (concurrency, locking, multi-threading, and so on).
+ Uses the capabilities of the AWS ecosystem for greater high availability, disaster recovery, and backup/recovery functionality.

Comparing the Amazon Aurora architecture to Oracle RAC, there are major differences in how Amazon implements scalability and increased high availability. These differences are due mainly to the existing capabilities of PostgreSQL and the strengths the AWS backend provides in terms of networking and storage.

Instead of having multiple read/write cluster nodes access a shared disk, an Aurora cluster has a single primary node that is open for reads and writes and a set of replica nodes that are open for reads with automatic promotion to primary in case of failures. While Oracle RAC uses a set of background processes to coordinate writes across all cluster nodes, the Amazon Aurora primary writes a constant redo stream to six storage nodes distributed across three Availability Zones within an AWS Region. The only writes that cross the network are redo log records (not pages).

Each Aurora cluster can have one or more instances serving different purposes:
+ At any given time, a single instance functions as the primary that handles both writes and reads from your applications.
+ You can create up to 15 read replicas in addition to the primary, which are used for two purposes:
  +  **Performance and Read Scalability** — Replicas can be used as read-only nodes for queries and report workloads.
  +  **High Availability** — Replicas can be used as failover nodes in the event the master fails. Each read replica can be located in one of the three Availability Zones hosting the Aurora cluster. A single Availability Zone can host more than one read replica.

The following diagram illustrates a high-level Aurora architecture with four cluster nodes: one primary and three read replicas. The primary node is located in Availability Zone A, the first read replica in Availability Zone B, and the second and third read replicas in Availability Zone C.

![\[Aurora architecture with four cluster nodes\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-aurora-architecture-four-cluster-nodes.png)


An Aurora storage volume is made up of 10 GB segments of data with six copies spread across three Availability Zones. Each Amazon Aurora read replica shares the same underlying volume as the master instance. Updates made by the master are visible to all read replicas through a combination of reading from the shared Aurora storage volume and applying log updates in-memory when they are received from the primary instance. After a master failure, promotion of a read replica to master usually occurs in less than 30 seconds with no data loss.

For a write to be considered durable in Aurora, the primary instance (“master”) sends a redo stream to six storage nodes (two in each Availability Zone for the storage volume) and waits until four of the six nodes have responded. No database pages are ever written from the database tier to the storage tier. The Aurora storage volume applies redo records asynchronously to generate database pages in the background or on demand. Aurora hides this underlying complexity from the user.

### High availability and scale-out in Aurora
<a name="chap-oracle-aurora-pg.hadr.rac.pg.ha"></a>

Aurora provides two endpoints for cluster access. These endpoints provide both high availability capabilities and scale-out read processing for connecting applications.
+  **Cluster Endpoint** — Connects to the current primary instance for the Aurora cluster. You can perform both read and write operations using the cluster endpoint. If the current primary instance fails, Aurora automatically fails over to a new primary instance. During a failover, the database cluster continues to serve connection requests to the cluster endpoint from the new primary instance with minimal interruption of service.
+  **Reader Endpoint** — Provides load-balancing capabilities (round-robin) across the replicas allowing applications to scale-out reads across the Aurora cluster. Using the Reader Endpoint provides better use of the resources available in the cluster. The reader endpoint also enhances high availability. If an AWS Availability Zone fails, the application’s use of the reader endpoint continues to send read traffic to the other replicas with minimal disruption.

![\[Aurora architecture with cluster endpoints\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-aurora-cluster-endpoints.png)
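Both endpoints can be exercised directly with `psql`; the endpoint hostnames, database, and user below are hypothetical placeholders.

```shell
# Read/write session through the cluster (writer) endpoint (hypothetical hostname).
psql "host=my-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com port=5432 dbname=mydb user=myuser"

# Read-only session through the reader endpoint; connections are
# load-balanced across the available Aurora Replicas.
psql "host=my-cluster.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com port=5432 dbname=mydb user=myuser"
```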


While Amazon Aurora focuses on the scale-out of reads and Oracle RAC can scale-out both reads and writes, most OLTP applications are usually not limited by write scalability. Many Oracle RAC customers use RAC first for high availability and second to scale-out their reads. You can write to any node in an Oracle RAC cluster, but this capability is often a functional benefit for the application versus a method for achieving unlimited scalability for writes.

## Summary
<a name="chap-oracle-aurora-pg.hadr.rac.summary"></a>
+ Multiple cluster database nodes provide increased high availability. There is no single point of failure from the database servers. In addition, because an Aurora cluster can be distributed across three Availability Zones, there is a large benefit for high availability and durability of the database. These types of “stretch” database clusters are uncommon with other database architectures.
+  AWS managed storage nodes also provide high availability for the storage tier. A zero-data loss architecture is employed in the event a master node fails and a replica node is promoted to the new master. This failover can usually be performed in under 30 seconds.
+ Multiple cluster database nodes enable scaling-out query read performance across multiple servers.
+ Greatly reduced operational overhead using a cloud solution and reduced total cost of ownership by using AWS and open source database engines.
+ Automatic management of storage. No need to pre-provision storage for a database. Storage is automatically added as needed, and you only pay for one copy of your data.
+ With Amazon Aurora, you can easily scale-out your reads (and scale-up your writes) which fits perfectly into the workload characteristics of many, if not most, OLTP applications. Scaling out reads usually provides the most tangible performance benefit.

When comparing Oracle RAC and Amazon Aurora side by side, you can see the architectural differences between the two database technologies. Both provide high availability and scalability, but with different architectures.

![\[Oracle RAC and Aurora architecture comparison\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-oracle-rac-amazon-aurora-comparison.png)


Overall, Amazon Aurora introduces a simplified solution that can function as an Oracle RAC alternative for many typical OLTP applications that need high performance writes, scalable reads, and very high availability with lower operational overhead.


| Feature | Oracle RAC |  Amazon Aurora  | 
| --- | --- | --- | 
|  Storage  |  Usually enterprise-grade storage and Oracle ASM  |  Aurora storage nodes: a distributed, low-latency storage engine spanning multiple AZs  | 
|  Cluster type  |  Active/Active. All nodes open for R/W  |  Active/Active. Primary node open for R/W, replica nodes open for reads  | 
|  Cluster virtual IPs  |  R/W load balancing: SCAN IP  |  R/W: cluster endpoint. Read load balancing: reader endpoint  | 
|  Internode coordination  |  Cache Fusion, GCS, and GES  |  N/A  | 
|  Internode private network  |  Interconnect  |  N/A  | 
|  Transaction (write) TTR from node failure  |  Typically, 0-30 seconds  |  Typically, less than 30 seconds  | 
|  Application (Read) TTR from node failure  |  Immediate  |  Immediate  | 
|  Max number of cluster nodes  |  Theoretical maximum is 100, but smaller clusters (2 to 10 nodes) are far more common  |  15  | 
|  Provides built-in read scaling  |  Yes  |  Yes  | 
|  Provides built-in write scaling  |  Yes, although under certain scenarios write performance can be limited and affect scale-out capabilities. For example, when multiple sessions attempt to modify rows contained in the same database blocks  |  No  | 
|  Data loss in case of node failure  |  No data loss  |  No data loss  | 
|  Replication latency  |  N/A  |  Milliseconds  | 
|  Operational complexity  |  Requires database, IT, network, and storage expertise  |  Provided as a cloud-solution  | 
|  Scale-up nodes  |  Difficult with physical hardware, usually requires replacing servers  |  Easy using the AWS UI/CLI  | 
|  Scale-out cluster  |  Provision, deploy, and configure new servers, unless you pre-allocate a pool of idle servers to scale-out on  |  Easy using the AWS UI/CLI  | 

For more information, see [Amazon Aurora as an Alternative to Oracle RAC](https://aws.amazon.com/blogs/database/amazon-aurora-as-an-alternative-to-oracle-rac).

# Oracle Traffic Director and Amazon RDS Proxy for Amazon Aurora PostgreSQL
<a name="chap-oracle-aurora-pg.hadr.traffic"></a>

With AWS DMS, you can migrate Oracle Traffic Director to Amazon RDS Proxy for PostgreSQL to modernize your applications. Oracle Traffic Director is a web server that distributes client requests across origin servers, while Amazon RDS Proxy for PostgreSQL is a fully managed proxy for PostgreSQL databases.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |  N/A  |  Some features may be replaced by Amazon RDS Proxy  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.traffic.ora"></a>

Starting with Oracle 18c, Oracle Connection Manager can be configured to run in Traffic Director mode. This mode introduces multiple features that help with high availability, scalability, load balancing, zero downtime, and security. Oracle Traffic Director is a fast and reliable load-balancing solution. Enabling it for Oracle Connection Manager provides the following features:
+ Increased scalability through transparent connection load balancing.
+ Zero-downtime high availability, including support for planned database maintenance, pluggable database relocation, and unplanned database outages for read-mostly workloads.
+ High availability of Connection Manager (CMAN) itself, which avoids a single point of failure.
+ Various security features, such as database proxy, firewall, tenant isolation in multi-tenant environments, DDoS protection, and secure tunneling of database traffic.

For more information, see [Configuring Oracle Connection Manager in Traffic Director Mode](https://docs.oracle.com/en/database/oracle/oracle-database/18/netag/configuring-oracle-connection-manager.html#GUID-3917FC5D-4B23-4752-85BA-39A88C4D13F8) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.traffic.pg"></a>

Oracle Traffic Director mode for Connection Manager can be potentially replaced by Amazon RDS Proxy for migration to Aurora PostgreSQL.

RDS Proxy simplifies connection management for Amazon RDS DB instances and clusters. It actively handles the network traffic between the client application and the database: it understands the database protocol and adjusts its behavior based on the SQL operations from your application and the result sets from the database.

RDS Proxy also reduces the memory and CPU overhead for the database connection management. The database needs less memory and CPU resources when applications open many simultaneous connections. Amazon RDS Proxy also doesn’t require applications to close and reopen connections that stay idle for a long time. Similarly, it requires less application logic to reestablish connections in case of a database problem.

The infrastructure for Amazon RDS Proxy is highly available and deployed over multiple Availability Zones (AZs). The computation, memory, and storage for Amazon RDS Proxy are independent of Amazon RDS DB instances and Aurora DB clusters. This separation helps lower overhead on database servers, so that they can devote their resources to serving database workloads. The Amazon RDS Proxy compute resources are serverless, automatically scaling based on your database workload.
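As a sketch of provisioning, an RDS Proxy can be created and attached to an Aurora PostgreSQL cluster with the AWS CLI; all names, ARNs, and subnet IDs below are hypothetical placeholders.

```shell
# Create an RDS Proxy in front of an Aurora PostgreSQL cluster
# (identifiers, ARNs, and subnets are illustrative placeholders).
aws rds create-db-proxy \
    --db-proxy-name my-aurora-pg-proxy \
    --engine-family POSTGRESQL \
    --auth '[{"AuthScheme":"SECRETS","SecretArn":"arn:aws:secretsmanager:us-east-1:123456789012:secret:mydb-creds","IAMAuth":"DISABLED"}]' \
    --role-arn arn:aws:iam::123456789012:role/rds-proxy-role \
    --vpc-subnet-ids subnet-0abc1234 subnet-0def5678

# Register the Aurora cluster as the proxy target.
aws rds register-db-proxy-targets \
    --db-proxy-name my-aurora-pg-proxy \
    --db-cluster-identifiers my-aurora-pg-cluster
```

Applications then connect to the proxy endpoint instead of the cluster endpoint, so connection pooling and failover handling happen in the proxy tier.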

For more information, see [Amazon RDS Proxy](chap-oracle-aurora-pg.tools.rdsproxy.md) and [Using Amazon RDS Proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) in the *Amazon RDS User Guide*.

# Oracle Data Pump and PostgreSQL pg_dump and pg_restore
<a name="chap-oracle-aurora-pg.hadr.datapump"></a>

With AWS DMS, you can migrate data from source databases to target databases using Oracle Data Pump and PostgreSQL pg_dump and pg_restore. Oracle Data Pump is a utility for transferring data between Oracle databases, while PostgreSQL pg_dump creates a backup of a PostgreSQL database and pg_restore restores it.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[No compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-0.png)   |  N/A  |  N/A  |  Non-compatible tool.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.datapump.ora"></a>

Oracle Data Pump is a utility for exporting and importing data from/to an Oracle database. It can be used to copy an entire database, entire schemas, or specific objects in a schema. Oracle Data Pump is commonly used as a part of a backup strategy for restoring individual database objects (specific records, tables, views, stored procedures, and so on) as opposed to snapshots or Oracle RMAN, which provides backup and recovery capabilities at the database level. By default (without using the sqlfile parameter during export), the dump file generated by Oracle Data Pump is binary (it can’t be opened using a text editor).

Oracle Data Pump supports:
+  **Export Data from an Oracle database** — The Data Pump `EXPDP` command creates a binary dump file containing the exported database objects. Objects can be exported with data or with metadata only. Exports can be performed for specific timestamps or Oracle SCNs to ensure cross-object consistency.
+  **Import Data to an Oracle database** — The Data Pump `IMPDP` command imports objects and data from a specific dump file created with the `EXPDP` command. The `IMPDP` command can filter on import (for example, only import certain objects) and remap object and schema names during import.

The term “Logical backup” refers to a dump file created by Oracle Data Pump.

Both `EXPDP` and `IMPDP` can only read/write dump files from file system paths that were pre-configured in the Oracle database as directories. During export/import, users must specify the logical directory name where the dump file should be created, not the actual file system path.
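A directory object is typically created once before running Data Pump; a minimal sketch, assuming an illustrative file system path and the `system` user:

```shell
# Create the database directory object that expdp/impdp reference
# (the path and grantee are illustrative).
sqlplus / as sysdba <<'EOF'
CREATE OR REPLACE DIRECTORY expdp_dir AS '/u01/app/oracle/exports';
GRANT READ, WRITE ON DIRECTORY expdp_dir TO system;
EOF
```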

 **Examples** 

Use `EXPDP` to export the `HR` schema.

```
$ expdp system/**** directory=expdp_dir schemas=hr dumpfile=hr.dmp logfile=hr.log
```

The command contains the credentials to run Data Pump, the logical Oracle directory name for the dump file location (which maps in the database to a physical file system location), the schema name to export, the dump file name, and log file name.

Use `IMPDP` to import the `HR` schema and rename it to `HR_COPY`.

```
$ impdp system/**** directory=expdp_dir schemas=hr dumpfile=hr.dmp logfile=hr.log REMAP_SCHEMA=hr:hr_copy
```

The command contains the database credentials to run Data Pump, the logical Oracle directory where the export dump file is located, the dump file name, the schema to import, the log file name, and the `REMAP_SCHEMA` parameter.

For more information, see [Oracle Data Pump](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump.html#GUID-501A9908-BCC5-434C-8853-9A6096766B5A) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.datapump.pg"></a>

PostgreSQL provides the native utilities `pg_dump` and `pg_restore`, which perform logical database exports and imports with functionality comparable to Oracle Data Pump, such as moving data between two databases and creating logical database backups.
+  `pg_dump` is the equivalent of Oracle `expdp`.
+  `pg_restore` is the equivalent of Oracle `impdp`.

Amazon Aurora PostgreSQL supports data export and import using both `pg_dump` and `pg_restore`, but you need to place the binaries for both utilities on your local workstation or on an Amazon EC2 server as part of the PostgreSQL client binaries.

PostgreSQL dump files created using `pg_dump` can be copied, after export, to an Amazon S3 bucket as cloud backup storage or to maintain the desired backup retention policy. Later, when dump files are needed for a database restore, copy them back to a workstation or server that has a PostgreSQL client (such as your workstation or an Amazon EC2 server) and issue the `pg_restore` command.
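The copy in both directions can be done with the AWS CLI; the bucket name and file names below are placeholders.

```shell
# Copy a dump file to S3 after export (bucket and paths are placeholders).
aws s3 cp hr.dump s3://my-backup-bucket/pg-dumps/hr.dump

# Later, retrieve it on the machine that will run pg_restore:
aws s3 cp s3://my-backup-bucket/pg-dumps/hr.dump ./hr.dump
```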

Starting with PostgreSQL 10, the following capabilities were added:
+ A schema can be excluded in `pg_dump` and `pg_restore` commands.
+ Dumps can be created without blobs.
+ Non-superusers can run `pg_dumpall` using the `--no-role-passwords` option.
+ An additional integrity option ensures that dump data is flushed to disk using the `fsync()` method.

Starting with PostgreSQL 11, `pg_dump` and `pg_restore` can export and import relationships between extensions and database objects established with `ALTER …​ DEPENDS ON EXTENSION`, which allows these objects to be dropped when the extension is dropped with the `CASCADE` option.

**Note**  
 `pg_dump` creates consistent backups even if the database is being used concurrently; it doesn't block other users accessing the database (readers or writers). `pg_dump` exports only a single database. To back up global objects that are common to all databases in a cluster, such as roles and tablespaces, use `pg_dumpall`. PostgreSQL dump files can be either plain-text or custom-format files.

Another option for exporting and importing data from a PostgreSQL database is to use the `COPY TO` and `COPY FROM` commands. Starting with PostgreSQL 12, the `COPY FROM` command, which loads data into the database, supports filtering incoming rows with a `WHERE` condition.

```
CREATE TABLE tst_copy(v TEXT);
COPY tst_copy FROM '/home/postgres/file.csv' WITH (FORMAT CSV) WHERE v LIKE '%apple%';
```

 **Examples** 

Export data using `pg_dump`. Use a workstation or server with the PostgreSQL client installed to connect to the Aurora PostgreSQL instance in AWS, providing the hostname (-h), database user name (-U), and database name (-d) when issuing the `pg_dump` command.

```
$ pg_dump -h hostname.rds.amazonaws.com -U username -d db_name -f dump_file_name.sql
```

The output file, `dump_file_name.sql`, is stored on the server where the `pg_dump` command runs. You can later copy the output file to an Amazon S3 bucket, if needed.

Run `pg_dump` and stream the backup directly to an Amazon S3 bucket using a pipe and the AWS CLI. Note that the `-f` option is omitted so that `pg_dump` writes to standard output.

```
$ pg_dump -h hostname.rds.amazonaws.com -U username -d db_name | aws s3 cp - s3://pg-backup/pg_bck-$(date "+%Y-%m-%d-%H-%M-%S")
```

Restore data with `pg_restore`. Use a workstation or server with the PostgreSQL client installed to connect to the Aurora PostgreSQL instance, providing the hostname (-h), database user name (-U), database name (-d), and the dump file to restore from when issuing the `pg_restore` command. Note that `pg_restore` accepts only archive formats (for example, dumps created with `pg_dump -F c`); plain-text SQL dumps are restored with `psql` instead.

```
$ pg_restore -h hostname.rds.amazonaws.com -U username -d dbname_restore dump_file_name.sql
```

Upload the dump file from the local server to an Amazon S3 bucket using the AWS CLI.

```
$ aws s3 cp /usr/Exports/hr.dmp s3://my-bucket/backup-$(date "+%Y-%m-%d-%H-%M-%S")
```

The `$(date "+%Y-%m-%d-%H-%M-%S")` format is valid on Linux servers only.
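The same timestamp suffix can be produced portably (including on Windows workstations, where the shell `date` command is unavailable) with a short script. This sketch simply mirrors the format string used above:

```python
from datetime import datetime

def backup_suffix(now=None):
    """Build a backup-file suffix matching date "+%Y-%m-%d-%H-%M-%S"."""
    now = now or datetime.now()
    return now.strftime("%Y-%m-%d-%H-%M-%S")

# Example: compose the S3 object name for a manual dump upload.
key = "backup-" + backup_suffix()
```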

Download the output file from the Amazon S3 bucket.

```
$ aws s3 cp s3://my-bucket/backup-2017-09-10-01-10-10 /usr/Exports/hr.dmp
```
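When a dump file round-trips through Amazon S3, it can be worth verifying that the downloaded copy is byte-identical to the original before running a restore. The following sketch (the file paths are placeholders) computes a chunked SHA-256 so that large dump files never need to fit in memory:

```python
import hashlib

def file_digest(path, chunk_size=1024 * 1024):
    """Return the hex SHA-256 of a file, read in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Compare a digest recorded before upload with one taken after download:
# assert file_digest("/usr/Exports/hr.dmp") == digest_recorded_before_upload
```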

You can create a copy of an existing database without using `pg_dump` or `pg_restore`. Instead, use the `TEMPLATE` keyword to specify the source database. The source database must have no other active connections while it is being copied.

```
CREATE DATABASE mydb_copy TEMPLATE mydb;
```

## Summary
<a name="chap-oracle-aurora-pg.hadr.datapump.summary"></a>


| Description | Oracle Data Pump | PostgreSQL Dump | 
| --- | --- | --- | 
|  Export data to a local file  |  <pre>expdp system/***<br />schemas=hr<br />dumpfile=hr.dmp<br />logfile=hr.log</pre>  |  <pre>pg_dump -F c -h<br />hostname.rds.amazonaws.com<br />-U username -d hr<br />-p 5432 > c:\Export\hr.dmp</pre>  | 
|  Export data to a remote file  |  Create an Oracle directory on a remote storage mount or NFS directory called EXP\_DIR. Use the export command: <pre>expdp system/***<br />schemas=hr directory=EXP_DIR<br />dumpfile=hr.dmp logfile=hr.log</pre>  |  Export: <pre>pg_dump -F c<br />-h hostname.rds.amazonaws.com<br />-U username -d hr<br />-p 5432 > c:\Export\hr.dmp</pre> Upload to Amazon S3  <pre>aws s3 cp<br />c:\Export\hr.dmp<br />s3://my-bucket/backup-$(date "+%Y-%m-%d-%H-%M-%S")</pre>  | 
|  Import data to a new database with a new name  |  <pre>impdp system/***<br />schemas=hr dumpfile=hr.dmp<br />logfile=hr.log<br />REMAP_SCHEMA=hr:hr_copy<br />TRANSFORM=OID:N</pre>  |  <pre>pg_restore<br />-h hostname.rds.amazonaws.com<br />-U hr -d hr_restore<br />-p 5432 c:\Export\hr.dmp</pre>  | 
|  Exclude schemas  |  <pre>expdp system/*** FULL=Y<br />directory=EXP_DIR<br />dumpfile=hr.dmp<br />logfile=hr.log<br />exclude=SCHEMA:"HR"</pre>  |  <pre>pg_dump -F c<br />-h hostname.rds.amazonaws.com<br />-U username -d hr -p 5432<br />-N 'log_schema'<br />c:\Export\hr_nolog.dmp</pre>  | 

For more information, see [SQL Dump](https://www.postgresql.org/docs/13/backup-dump.html) and [pg\_restore](https://www.postgresql.org/docs/13/app-pgrestore.html) in the *PostgreSQL documentation*.

# Oracle Flashback Database and PostgreSQL Amazon Aurora snapshots
<a name="chap-oracle-aurora-pg.hadr.flashback"></a>

With AWS DMS, you can migrate databases between different database platforms or versions by capturing consistent data snapshots from the source database and applying them to the target database. Oracle Flashback Database and PostgreSQL Amazon Aurora snapshots provide point-in-time backups of the source database, enabling migration with minimal downtime.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |  N/A  |  N/A  |  Storage level backup managed by Amazon RDS.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.flashback.ora"></a>

Oracle Flashback Database is a special mechanism built into Oracle databases that helps protect against human errors by providing capabilities to revert the entire database back to a previous point in time using SQL commands. Flashback database implements a self-logging mechanism that captures all changes applied to a database and to data. Essentially, it stores previous versions of database modifications in the configured database “Fast Recovery Area”.

When using Oracle flashback database, you can choose to restore an entire database to either a user-created restore point, a timestamp value, or to a specific System Change Number (SCN).

 **Examples** 

Create a database restore point to which you can flashback a database.

```
CREATE RESTORE POINT before_update GUARANTEE FLASHBACK DATABASE;
```

Flashback a database to a previously created restore point.

```
shutdown immediate;
startup mount;
flashback database to restore point before_update;
```

Flashback a database to a specific time.

```
shutdown immediate;
startup mount;
FLASHBACK DATABASE TO TIME "TO_DATE('01/01/2017','MM/DD/YYYY')";
```

For more information, see [FLASHBACK DATABASE](https://docs.oracle.com/en/database/oracle/oracle-database/19/rcmrf/FLASHBACK-DATABASE.html#GUID-584AC79A-40C5-45CA-8C63-DED3BE3A4511) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.flashback.pg"></a>

Snapshots are the primary backup mechanism for Amazon Aurora databases. They are extremely fast and nonintrusive. You can take snapshots using the Amazon RDS Management Console or the AWS CLI. Unlike RMAN, there is no need for incremental backups. You can choose to restore your database to the exact time when a snapshot was taken or to any other point in time.

 Amazon Aurora provides the following types of backups:
+  **Automated Backups** — Always enabled on Amazon Aurora. They do not impact database performance.
+  **Manual Backups** — You can create a snapshot at any time. There is no performance impact when taking snapshots of an Aurora database. Restoring data from snapshots requires creation of a new instance. Up to 100 manual snapshots are supported for each database.

 **Examples** 

Use the following steps to enable Aurora automatic backups and configure the backup retention window as part of the database creation process. This process is equivalent to setting the Oracle RMAN backup retention policy using the `configure retention policy to recovery window of X days` command.

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Databases**, then choose your database or create a new one.

1. Expand **Additional configuration** and specify **Backup retention period** in days.

    ![\[Backup retention period\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-backup-retention-period.png) 

The following table identifies the default automatic backup time for each region.


| Region | Default backup window | 
| --- | --- | 
|  US West (Oregon)  |  06:00–14:00 UTC  | 
|  US West (N. California)  |  06:00–14:00 UTC  | 
|  US East (Ohio)  |  03:00–11:00 UTC  | 
|  US East (N. Virginia)  |  03:00–11:00 UTC  | 
|  Asia Pacific (Mumbai)  |  16:30–00:30 UTC  | 
|  Asia Pacific (Seoul)  |  13:00–21:00 UTC  | 
|  Asia Pacific (Singapore)  |  14:00–22:00 UTC  | 
|  Asia Pacific (Sydney)  |  12:00–20:00 UTC  | 
|  Asia Pacific (Tokyo)  |  13:00–21:00 UTC  | 
|  Canada (Central)  |  06:29–14:29 UTC  | 
|  EU (Frankfurt)  |  20:00–04:00 UTC  | 
|  EU (Ireland)  |  22:00–06:00 UTC  | 
|  EU (London)  |  06:00–14:00 UTC  | 
|  South America (São Paulo)  |  23:00–07:00 UTC  | 
|   AWS GovCloud (US)  |  03:00–11:00 UTC  | 

Use the following steps to perform a manual snapshot backup of an Aurora database. This process is equivalent to creating a full Oracle RMAN backup (`BACKUP DATABASE PLUS ARCHIVELOG`).

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Databases**, then choose your database.

1. Choose **Actions** and then choose **Take snapshot**.

    ![\[Take snapshot\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-take-snapshot.png) 

Use the following steps to restore an Aurora database from a snapshot. This process is similar to the Oracle RMAN commands `RESTORE DATABASE` and `RECOVER DATABASE`. However, instead of running in place, restoring an Aurora database creates a new cluster.

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Snapshots**, then choose the snapshot to restore.

1. Choose **Actions** and then choose **Restore snapshot**. This action creates a new instance.

1. On the **Restore snapshot** page, for **DB instance identifier**, enter the name for your restored DB instance.

1. Choose **Restore DB instance**.

Use the following steps to restore an Aurora PostgreSQL database backup to a specific point in time. This process is similar to running the Oracle RMAN command `SET UNTIL TIME "TO_DATE('XXX')"` before running `RESTORE DATABASE` and `RECOVER DATABASE`.

1. Sign in to your AWS console and choose **RDS**.

1. Choose **Databases**, then choose your database.

1. Choose **Actions** and then choose **Restore to point in time**.

1. This process launches a new instance. Select the date and time to which you want to restore your database. The selected date and time must be within the configured backup retention for this instance.

### AWS CLI backup and restore operations
<a name="chap-oracle-aurora-pg.hadr.flashback.pg.cli"></a>

In addition to using the AWS Management Console to back up and restore an Aurora instance snapshot, you can use the AWS CLI to perform the same actions. The CLI is especially useful for migrating existing automated Oracle RMAN scripts to an AWS environment. The following list highlights some CLI operations:
+ Use `describe-db-cluster-snapshots` to view all current Aurora PostgreSQL snapshots.
+ Use `create-db-cluster-snapshot` to create a snapshot ("Restore Point").
+ Use `restore-db-cluster-from-snapshot` to restore a new cluster from an existing database snapshot.
+ Use `create-db-instance` to add new instances to the restored cluster.

```
aws rds describe-db-cluster-snapshots

aws rds create-db-cluster-snapshot
  --db-cluster-snapshot-identifier Snapshot_name
  --db-cluster-identifier Cluster_Name

aws rds restore-db-cluster-from-snapshot
  --db-cluster-identifier NewCluster
  --snapshot-identifier SnapshotToRestore
  --engine aurora-postgresql

aws rds create-db-instance
  --region us-east-1
  --db-subnet-group default
  --engine aurora-postgresql
  --db-cluster-identifier NewCluster
  --db-instance-identifier newinstance-nodeA
  --db-instance-class db.r4.large
```
+ Use `restore-db-cluster-to-point-in-time` to perform point-in-time recovery of an Aurora cluster.

```
aws rds restore-db-cluster-to-point-in-time
  --db-cluster-identifier clustername-restore
  --source-db-cluster-identifier clustername
  --restore-to-time 2017-09-19T23:45:00.000Z

aws rds create-db-instance
  --region us-east-1
  --db-subnet-group default
  --engine aurora-postgresql
  --db-cluster-identifier clustername-restore
  --db-instance-identifier newinstance-nodeA
  --db-instance-class db.r4.large
```

## Summary
<a name="chap-oracle-aurora-pg.hadr.flashback.summary"></a>


| Description | Oracle |  Amazon Aurora  | 
| --- | --- | --- | 
|  Create a restore point  |  <pre>CREATE RESTORE POINT<br />  before_update GUARANTEE<br />  FLASHBACK DATABASE;</pre>  |  <pre>aws rds create-db-cluster-snapshot<br />  --db-cluster-snapshot-identifier Snapshot_name<br />  --db-cluster-identifier Cluster_Name</pre>  | 
|  Configure flashback retention period  |  <pre>ALTER SYSTEM SET<br />  db_flashback_retention_target=2880;</pre>  |  Configure the **Backup retention window** setting using the AWS management console or AWS CLI.  | 
|  Flashback database to a previous restore point  |  <pre>shutdown immediate;<br />startup mount;<br />flashback database to<br />  restore point before_update;</pre>  |  Create new cluster from a snapshot. <pre>aws rds restore-db-cluster-from-snapshot<br />  --db-cluster-identifier NewCluster<br />  --snapshot-identifier SnapshotToRestore<br />  --engine aurora-postgresql</pre> Add new instance to the cluster. <pre>aws rds create-db-instance<br />  --region us-east-1<br />  --db-subnet-group default<br />  --engine aurora-postgresql<br />  --db-cluster-identifier NewCluster<br />  --db-instance-identifier newinstance-nodeA<br />  --db-instance-class db.r4.large</pre>  | 
|  Flashback database to a previous point in time  |  <pre>shutdown immediate;<br />startup mount;<br />FLASHBACK DATABASE TO TIME<br />  "TO_DATE ('01/01/2017','MM/DD/YYYY')";</pre>  |  Create a new cluster from a snapshot and provide a specific point in time. <pre>aws rds restore-db-cluster-to-point-in-time<br />  --db-cluster-identifier clustername-restore<br />  --source-db-cluster-identifier clustername<br />  --restore-to-time 2017-09-19T23:45:00.000Z</pre> Add a new instance to the cluster: <pre>aws rds create-db-instance<br />  --region us-east-1<br />  --db-subnet-group default<br />  --engine aurora-postgresql<br />  --db-cluster-identifier clustername-restore<br />  --db-instance-identifier newinstance-nodeA<br />  --db-instance-class db.r4.large</pre>  | 

For more information, see [rds](https://docs.aws.amazon.com/cli/latest/reference/rds/index.html#cli-aws-rds) in the *CLI Command Reference* and [Restoring a DB instance to a specified time](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html) and [Restoring from a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html) in the *Amazon RDS user guide*.

# Oracle Flashback Table and Amazon Aurora PostgreSQL snapshots
<a name="chap-oracle-aurora-pg.hadr.snapshots"></a>

With AWS DMS, you can migrate databases between different database platforms or versions by capturing consistent data snapshots from the source database and applying them to the target database. Oracle Flashback Table and Amazon Aurora PostgreSQL snapshots provide point-in-time backups of the source database, enabling migration with minimal downtime.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |  N/A  |  N/A  |  Storage level backup managed by Amazon RDS.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.snapshots.ora"></a>

Oracle Flashback Table is a data protection feature used to undo changes to a table and rewind it to a previous state (not from backup). While Flashback table operations are running, the affected tables are locked, but the rest of the database remains available.

If the structure of the table has been changed since the restore point, the flashback operation will fail. Row movement must be enabled on the table.

The data to restore must still be available in the undo data (the DBA must manage the undo size and retention). A table can be restored to a System Change Number (SCN), a restore point, or a timestamp.

 **Examples** 

Flashback a table using an SCN (query `V$DATABASE` to obtain the current SCN).

```
SELECT CURRENT_SCN FROM V$DATABASE;
FLASHBACK TABLE employees TO SCN 3254648;
```

Flashback a table using a restore point (query `V$RESTORE_POINT` to obtain restore points).

```
SELECT NAME, SCN, TIME FROM V$RESTORE_POINT;
FLASHBACK TABLE employees TO RESTORE POINT employees_year_update;
```

Flashback a table using a timestamp (query `V$PARAMETER` to obtain the `undo_retention` value).

```
SELECT NAME, VALUE/60 MINUTES_RETAINED
FROM V$PARAMETER
WHERE NAME = 'undo_retention';
FLASHBACK TABLE employees TO
TIMESTAMP TO_TIMESTAMP('2017-09-21 09:30:00', 'YYYY-MM-DD HH:MI:SS');
```

For more information, see [Backup and Recovery User Guide](https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/index.html) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.snapshots.pg"></a>

Snapshots are the primary backup mechanism for Amazon Aurora databases. They are extremely fast and nonintrusive. You can take snapshots using the Amazon RDS Management Console or the AWS CLI. Unlike RMAN, there is no need for incremental backups. You can choose to restore your database to the exact time when a snapshot was taken or to any other point in time.

 Amazon Aurora provides the following types of backups:
+  **Automated Backups** — Always enabled on Amazon Aurora. They do not impact database performance.
+  **Manual Backups** — You can create a snapshot at any time. There is no performance impact when taking snapshots of an Aurora database. Restoring data from snapshots requires creation of a new instance. Up to 100 manual snapshots are supported for each database.

 **Examples** 

For examples, see [PostgreSQL Amazon Aurora Snapshots](chap-oracle-aurora-pg.hadr.flashback.md#chap-oracle-aurora-pg.hadr.flashback.pg).

## Summary
<a name="chap-oracle-aurora-pg.hadr.snapshots.summary"></a>


| Description | Oracle |  Amazon Aurora  | 
| --- | --- | --- | 
|  Create a restore point  |  <pre>CREATE RESTORE POINT<br />  before_update GUARANTEE<br />  FLASHBACK DATABASE;</pre>  |  <pre>aws rds create-db-cluster-snapshot<br />  --db-cluster-snapshot-identifier Snapshot_name<br />  --db-cluster-identifier Cluster_Name</pre>  | 
|  Configure flashback retention period  |  <pre>ALTER SYSTEM SET<br />  db_flashback_retention_target=2880;</pre>  |  Configure the **Backup retention window** setting using the AWS management console or AWS CLI.  | 
|  Flashback table to a previous restore point  |  <pre>shutdown immediate;<br />startup mount;<br />flashback database to<br />  restore point before_update;</pre>  |  Create new cluster from a snapshot. <pre>aws rds restore-db-cluster-from-snapshot<br />  --db-cluster-identifier NewCluster<br />  --snapshot-identifier SnapshotToRestore<br />  --engine aurora-postgresql</pre> Add new instance to the cluster. <pre>aws rds create-db-instance<br />  --region us-east-1<br />  --db-subnet-group default<br />  --engine aurora-postgresql<br />  --db-cluster-identifier NewCluster<br />  --db-instance-identifier newinstance-nodeA<br />  --db-instance-class db.r4.large</pre> Use `pg_dump` and `pg_restore` to copy the table from the restored instance to the original instance.  | 
|  Flashback table to a previous point in time  |  <pre>shutdown immediate;<br />startup mount;<br />FLASHBACK DATABASE TO TIME<br />  "TO_DATE ('01/01/2017','MM/DD/YYYY')";</pre>  |  Create a new cluster from a snapshot and provide a specific point in time. <pre>aws rds restore-db-cluster-to-point-in-time<br />  --db-cluster-identifier clustername-restore<br />  --source-db-cluster-identifier clustername<br />  --restore-to-time 2017-09-19T23:45:00.000Z</pre> Add a new instance to the cluster: <pre>aws rds create-db-instance<br />  --region us-east-1<br />  --db-subnet-group default<br />  --engine aurora-postgresql<br />  --db-cluster-identifier clustername-restore<br />  --db-instance-identifier newinstance-nodeA<br />  --db-instance-class db.r4.large</pre> Use `pg_dump` and `pg_restore` to copy the table from the restored instance to the original instance.  | 

For more information, see [rds](https://docs.aws.amazon.com/cli/latest/reference/rds/index.html#cli-aws-rds) in the *CLI Command Reference* and [Restoring a DB instance to a specified time](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html) and [Restoring from a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html) in the *Amazon RDS user guide*.

# Oracle Recovery Manager (RMAN) and Amazon RDS snapshots
<a name="chap-oracle-aurora-pg.hadr.rman"></a>

With AWS DMS, you can migrate data from Oracle databases by using Oracle Recovery Manager (RMAN) backup sets or Amazon RDS snapshots. Oracle Recovery Manager is a utility for backing up, restoring, and recovering Oracle databases. Amazon RDS snapshots capture the entire database instance, including transaction logs, at a specific point in time.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |  N/A  |  N/A  |  Storage level backup managed by Amazon RDS.  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.rman.ora"></a>

Oracle Recovery Manager (RMAN) is the primary backup and recovery tool in Oracle. It provides its own scripting syntax and can be used to take full or incremental backups of an Oracle database. The following list identifies the types of backups.
+  **Full RMAN Backup** — Takes a full backup of an entire database or of individual Oracle data files. For example, a level 0 full backup.
+  **Differential Incremental RMAN Backup** — Backs up all database blocks that have changed since the previous level 0 or level 1 backup.
+  **Cumulative Incremental RMAN Backup** — Backs up all database blocks that have changed since the previous level 0 backup.

RMAN supports online backups of an Oracle database if the database is configured to run in ARCHIVELOG mode.

RMAN backs up the following files:
+ Database data files.
+ Database control file.
+ Database parameter file.
+ Database archived redo logs.

 **Examples** 

Use the RMAN CLI to connect to an Oracle database.

```
export ORACLE_SID=ORCL
rman target=/
```

Perform a full backup of the database and the database archived redo logs.

```
BACKUP DATABASE PLUS ARCHIVELOG;
```

Perform an incremental level 0 or level 1 backup of the database.

```
BACKUP INCREMENTAL LEVEL 0 DATABASE;
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```

Restore a database.

```
RUN {
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN;
}
```

Restore a specific pluggable database (Oracle 12c).

```
RUN {
ALTER PLUGGABLE DATABASE pdbA, pdbB CLOSE;
RESTORE PLUGGABLE DATABASE pdbA, pdbB;
RECOVER PLUGGABLE DATABASE pdbA, pdbB;
ALTER PLUGGABLE DATABASE pdbA, pdbB OPEN;
}
```

Restore a database to a specific point in time.

```
RUN {
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
SET UNTIL TIME "TO_DATE('20-SEP-2017 21:30:00','DD-MON-YYYY HH24:MI:SS')";
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}
```

List all current database backups created with RMAN.

```
LIST BACKUP OF DATABASE;
```

For more information, see [Backup and Recovery User Guide](https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/index.html) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.rman.pg"></a>

Snapshots are the primary backup mechanism for Amazon Aurora databases. They are extremely fast and nonintrusive. You can take snapshots using the Amazon RDS Management Console or the AWS CLI. Unlike RMAN, there is no need for incremental backups. You can choose to restore your database to the exact time when a snapshot was taken or to any other point in time. Amazon Aurora provides the following types of backups:
+  **Automated Backups** — Always enabled on Amazon Aurora. They do not impact database performance.
+  **Manual Backups** — You can create a snapshot at any time. There is no performance impact when taking snapshots of an Aurora database. Restoring data from snapshots requires creation of a new instance. Up to 100 manual snapshots are supported for each database.

 **Examples** 

For examples, see [PostgreSQL Amazon Aurora Snapshots](chap-oracle-aurora-pg.hadr.flashback.md#chap-oracle-aurora-pg.hadr.flashback.pg).

## Summary
<a name="chap-oracle-aurora-pg.hadr.rman.summary"></a>


| Description | Oracle |  Amazon Aurora  | 
| --- | --- | --- | 
|  Scheduled backups  |  Create a DBMS\_SCHEDULER job that runs your RMAN script on a schedule.  |  Automatic  | 
|  Manual full database backups  |  <pre>BACKUP DATABASE PLUS ARCHIVELOG;</pre>  |  Use the Amazon RDS dashboard or the AWS CLI to take a snapshot of the cluster. <pre>aws rds create-db-cluster-snapshot<br />  --db-cluster-snapshot-identifier Snapshot_name<br />  --db-cluster-identifier Cluster_Name</pre>  | 
|  Restore database  |  <pre>RUN<br />{<br />SHUTDOWN IMMEDIATE;<br />STARTUP MOUNT;<br />RESTORE DATABASE;<br />RECOVER DATABASE;<br />ALTER DATABASE OPEN;<br />}</pre>  |  Create new cluster from a cluster snapshot. <pre>aws rds restore-db-cluster-from-snapshot<br />  --db-cluster-identifier NewCluster<br />  --snapshot-identifier SnapshotToRestore<br />  --engine aurora-postgresql</pre> Add a new instance to the new/restored cluster. <pre>aws rds create-db-instance<br />  --region us-east-1<br />  --db-subnet-group default<br />  --engine aurora-postgresql<br />  --db-cluster-identifier NewCluster<br />  --db-instance-identifier newinstance-nodeA<br />  --db-instance-class db.r4.large</pre>  | 
|  Incremental differential  |  <pre>BACKUP INCREMENTAL LEVEL 0<br />DATABASE;<br />BACKUP INCREMENTAL LEVEL 1<br />DATABASE;</pre>  |  N/A  | 
|  Incremental cumulative  |  <pre>BACKUP INCREMENTAL LEVEL 0<br />CUMULATIVE DATABASE;<br />BACKUP INCREMENTAL LEVEL 1<br />CUMULATIVE DATABASE;</pre>  |  N/A  | 
|  Restore database to a specific point in time  |  <pre>RUN {<br />  SHUTDOWN IMMEDIATE;<br />  STARTUP MOUNT;<br />  SET UNTIL TIME "TO_DATE(<br />    '19-SEP-2017 23:45:00',<br />    'DD-MON-YYYY HH24:MI:SS')";<br />  RESTORE DATABASE;<br />  RECOVER DATABASE;<br />  ALTER DATABASE<br />  OPEN RESETLOGS;<br />}</pre>  |  Create a new cluster from a cluster snapshot by providing a custom time to restore. <pre>aws rds restore-db-cluster-to-point-in-time<br />  --db-cluster-identifier clustername-restore<br />  --source-db-cluster-identifier clustername<br />  --restore-to-time 2017-09-19T23:45:00.000Z</pre> Add a new instance to the new or restored cluster. <pre>aws rds create-db-instance<br />  --region us-east-1<br />  --db-subnet-group default<br />  --engine aurora-postgresql<br />  --db-cluster-identifier clustername-restore<br />  --db-instance-identifier newinstance-nodeA<br />  --db-instance-class db.r4.large</pre>  | 
|  Backup database archive logs  |  <pre>BACKUP ARCHIVELOG ALL;</pre>  |  N/A  | 
|  Delete old database archive logs  |  <pre>CROSSCHECK BACKUP;<br />DELETE EXPIRED BACKUP;</pre>  |  N/A  | 
|  Restore a single pluggable database (12c)  |  <pre>RUN {<br />  ALTER PLUGGABLE DATABASE pdb1, pdb2 CLOSE;<br />  RESTORE PLUGGABLE DATABASE pdb1, pdb2;<br />  RECOVER PLUGGABLE DATABASE pdb1, pdb2;<br />  ALTER PLUGGABLE DATABASE pdb1, pdb2<br />  OPEN;<br />}</pre>  |  Create new cluster from a cluster snapshot. <pre>aws rds restore-db-cluster-from-snapshot<br />  --db-cluster-identifier NewCluster<br />  --snapshot-identifier SnapshotToRestore<br />  --engine aurora-postgresql</pre> Add a new instance to the new or restored cluster. <pre>aws rds create-db-instance<br />  --region us-east-1<br />  --db-subnet-group default<br />  --engine aurora-postgresql<br />  --db-cluster-identifier NewCluster<br />  --db-instance-identifier newinstance-nodeA<br />  --db-instance-class db.r4.large</pre> Use `pg_dump` and `pg_restore` to copy the database to the original instance. <pre>pg_dump -F c<br />  -h hostname.rds.amazonaws.com<br />  -U username<br />  -d hr -p 5432 > c:\Export\hr.dmp<br /><br />pg_restore<br />  -h restoredhostname.rds.amazonaws.com<br />  -U hr -d hr_restore<br />  -p 5432 c:\Export\hr.dmp</pre> Optionally, replace the old database using `ALTER DATABASE ... RENAME TO`.  | 

For more information, see [rds](https://docs.aws.amazon.com/cli/latest/reference/rds/index.html#cli-aws-rds) in the *CLI Command Reference* and [Restoring a DB instance to a specified time](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html) and [Restoring from a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html) in the *Amazon RDS user guide*.

# Oracle SQL\*Loader and PostgreSQL pg\_dump and pg\_restore
<a name="chap-oracle-aurora-pg.hadr.dump"></a>

With AWS DMS, you can efficiently migrate data from flat files into AWS databases using Oracle SQL\*Loader and the PostgreSQL `pg_dump` and `pg_restore` utilities. These utilities facilitate bulk data loading from external files into database tables.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |  Not all SQL\*Loader features are supported by PostgreSQL; some objects may need to be created manually  | 

## Oracle usage
<a name="chap-oracle-aurora-pg.hadr.dump.ora"></a>

SQL\*Loader is a powerful utility that imports data from external files into database tables. It has a strong parsing engine with few limitations on data formats.

You can use SQL\*Loader with or without a control file. A control file enables handling of more complicated load environments. For simpler loads, use SQL\*Loader without a control file (also referred to as SQL\*Loader Express).

The outputs of SQL\*Loader include the imported database data, a log file, a bad file (rejected records), and a discard file (if enabled).

 **Examples** 

Oracle SQL\*Loader is well suited for large databases with a limited number of objects. The process of exporting from a source database and loading into a target database is very specific to the schema. The following example creates sample schema objects, exports from a source, and loads into a target database.

Create a source table.

```
CREATE TABLE customer_0 TABLESPACE users
  AS SELECT rownum id, o.* FROM all_objects o, all_objects x
    where rownum <= 1000000;
```

On the target Amazon RDS instance, create a destination table for the loaded data.

```
CREATE TABLE customer_1 TABLESPACE users
  AS select 0 as id, owner, object_name, created
    from all_objects where 1=2;
```

The data is exported from the source database to a flat file with delimiters. This example uses SQL\*Plus. For your data, you will likely need to generate a script that performs the export for all the objects in the database.

```
alter session set nls_date_format = 'YYYY/MM/DD HH24:MI:SS';
set linesize 800
set heading off feedback off array 5000 pagesize 0
set markup html preformat on
set colsep ','
spool customer_0.out
select id, owner, object_name, created from customer_0;
spool off
```

Create a control file describing the data. Depending on the data, you may need to build a script that provides this functionality.

```
cat << EOF > sqlldr_1.ctl
LOAD DATA
INFILE customer_0.out
APPEND
INTO TABLE customer_1
fields terminated by "," optionally enclosed by '"'
(id POSITION(01:10) INTEGER EXTERNAL,
owner POSITION(12:41) CHAR,
object_name POSITION(43:72) CHAR,
created POSITION(74:92) date "YYYY/MM/DD HH24:MI:SS")
EOF

Import the data using SQL\*Loader. Use the appropriate user name and password for the target database.

```
sqlldr cust_dba@targetdb control=sqlldr_1.ctl BINDSIZE=10485760 READSIZE=10485760 ROWS=1000
```

For more information, see [SQL\*Loader](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-sql-loader.html#GUID-8D037494-07FA-4226-B507-E1B2ED10C144) in the *Oracle documentation*.

## PostgreSQL usage
<a name="chap-oracle-aurora-pg.hadr.dump.pg"></a>

You can use the following two options as a replacement for the Oracle SQL\*Loader utility:
+  **PostgreSQL import** using an export file, similar to a control file load.
+  **Load from an Amazon S3 file** using a table-formatted file stored in Amazon S3 and loading it into a PostgreSQL database.
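As a rough equivalent of the SQL\*Loader export-and-load flow above, you can dump a single table with `pg_dump` and reload it with `pg_restore`. The host names, user, and database names (`sourcedb`, `targetdb`) below are hypothetical; adjust them for your environment:

```
# Export table customer_1 in custom format from the source database
pg_dump -h source-host -U cust_dba -Fc -t customer_1 sourcedb > customer_1.dump

# Reload only the data into the pre-created table on the target
pg_restore -h target-host -U cust_dba -d targetdb --data-only customer_1.dump
```

The custom format (`-Fc`) is compressed and lets `pg_restore` selectively restore schema, data, or individual tables.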

 `pg_restore` is a good option when you need to run the load from another server or from a client machine. PostgreSQL has no built-in `LOAD DATA` command or `EVENT` scheduler; to schedule recurring loads, use an external scheduler such as `cron` or the `pg_cron` extension.
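As a sketch of scheduling a recurring load inside the database, the `pg_cron` extension (available on Amazon RDS and Aurora PostgreSQL) can run a `COPY` on a cron schedule. The job name, table, and file path below are hypothetical, and server-side `COPY ... FROM` a file path requires appropriate file access, typically on a self-managed PostgreSQL server:

```
-- Requires: CREATE EXTENSION pg_cron; runs the load nightly at 02:00
SELECT cron.schedule(
  'nightly-load',
  '0 2 * * *',
  $$COPY customer_1 FROM '/data/customer_0.out' WITH (FORMAT csv)$$
);
```

On Amazon RDS and Aurora, where the server file system is not accessible, the scheduled command would instead use a function such as `aws_s3.table_import_from_s3` to pull the file from Amazon S3.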

Another option for exporting and importing data from a PostgreSQL database is to use the `COPY TO` and `COPY FROM` commands. Starting with PostgreSQL 12, the `COPY FROM` command, which you can use to load data into a database, supports filtering incoming rows with a `WHERE` condition.

```
CREATE TABLE tst_copy(v TEXT);

COPY tst_copy FROM '/home/postgres/file.csv' WITH (FORMAT CSV) WHERE v LIKE '%apple%';
```

For more information, see [PostgreSQL pg_dump and pg_restore](chap-oracle-aurora-pg.hadr.datapump.md#chap-oracle-aurora-pg.hadr.datapump.pg).