

# VMware migration

AWS Transform can help you migrate your VMware environment to Amazon EC2 by using generative AI. This document provides an overview of AWS Transform and of the workflow of the migration process.

## Capabilities and key features


AWS Transform offers the following capabilities and key features for migrating your VMware environment to AWS.
+ Three discovery options:
  + Assisted discovery of your VMware environment by using collectors from AWS Application Discovery Service.
  + Discovery by using the open-source [Export for vCenter](https://github.com/awslabs/export-for-vcenter) tool.
  + Import of independently collected discovery data.
+ AI-driven conversion of your source VMware network configuration to an Amazon VPC network architecture.
+ AI-driven generation of migration plans, including application grouping and suggested migration waves.
+ Rehosting your servers to run natively on Amazon EC2.

AWS Transform supports migrating Windows and Linux servers running supported operating systems. For the full list of supported operating systems, see [Supported operating systems](https://docs.aws.amazon.com/mgn/latest/ug/Supported-Operating-Systems.html) in the *AWS Application Migration Service User Guide*.

## AWS Transform VMware migration architecture

This diagram displays an overview of AWS Transform VMware migration architecture.

![AWS Transform VMware architecture](http://docs.aws.amazon.com/transform/latest/userguide/images/atx-vm-architecture_v2.png)


## Limitations


AWS Transform has the following limitations:
+ If you stop a running migration job, and then ask the agent to restart it, the job will start again from the beginning and you will lose any progress you have made in the job. However, artifacts created in the job before restarting it will still be available. 
+ You can specify one target AWS Region per VMware migration job. To migrate applications to different target Regions, create multiple VMware migration jobs.
+ NSX imports are only supported for end-to-end migration jobs.

**Important**  
AWS Transform generates network configurations and migration strategies based on your environment assessment. Review these configurations with stakeholders to ensure that they meet your organization's security, compliance, and business requirements. While AWS Transform provides automated configuration recommendations, you are responsible for validating and adjusting the settings to match your security and compliance needs before proceeding with migration.

# VMware migration jobs


To use AWS Transform for VMware migrations, you first need a workspace, which is a logical container in which you can create one or more transformation jobs. The sections in this topic describe how to get a workspace and how to create and start a VMware migration job in it.

## Getting a workspace


For information about getting a workspace, see [Getting started](https://docs.aws.amazon.com/transform/latest/userguide/getting-started.html). 

The workspace that you use determines the AWS Region where you can create transformation jobs. That is the AWS Region where your jobs will reside. Your discovery data and AWS Transform recommendations will also reside in this AWS Region. To create workspaces and jobs in a different AWS Region, ask your administrator to create a different workspace for you. For information about supported AWS Regions, see [Supported Regions for AWS Transform](regions.md).

Even though the AWS Region where you can create jobs and store discovery data and recommendations is determined by your AWS Transform administrator, you can specify a different AWS Region as your target for the migration. In other words, you can run discovery and receive AWS Transform recommendations in one AWS Region, but then create your target environment in a different AWS Region. If you do that, you will be transferring your data across AWS Regions. For more information, see [AWS account connectors for VMware migrations](transform-app-vmware-acct-connections.md).

## Job types


AWS Transform offers the following types of VMware migration jobs that you can choose from depending on your migration needs.

### End-to-end migration


1. Perform discovery

1. Generate wave plan

1. Generate VPC configuration

1. (Optional) Deploy VPC networks

1. Migrate servers

### Network migration only




1. Generate VPC configuration

1. (Optional) Deploy VPC networks

### Network-and-server migration




1. Generate VPC configuration

1. (Optional) Deploy VPC networks

1. Migrate servers

### Discovery and server migration




1. Perform discovery

1. Generate wave plan

1. Migrate servers

## Creating and starting a job


The first step of a migration project is to create an AWS Transform job. For VMware migration projects, you can choose different job types, depending on your goals. The following procedure describes how to create and start a new VMware migration job of any type. For information about the different job types, see [Job types](#vmware-job-types).

**To create and start a new VMware migration job**

1. On your workspace landing page, choose **Create a job**.

1. Choose the VMware migration option, and then specify the type of VMware migration job that you want to create. For information about the steps included in each of the four VMware migration job types, see [Job types](#vmware-job-types).

1. After you answer all the chat questions, choose **Create job**.

# VMware migration workflow


The type of VMware migration job you choose determines the workflow. For a description of the different types of VMware migration jobs, see [Job types](vmware-jobs.md#vmware-job-types).

# Discover source data


To discover and collect on-premises data, you can use the [AWS Transform discovery tool](discovery-tool.md) or the AWS Migration Evaluator collector, or you can upload exports from RVTools and ModelizeIT, as well as AWS Migration Portfolio Assessment (MPA) format exports generated by tools like Cloudamize. AWS Transform can process AWS Transform discovery tool exports as CSV and JSON files in a ZIP file, RVTools exports as either an Excel file or a ZIP file containing CSV files, and ModelizeIT CSV exports in a ZIP file. For each data source, AWS Transform accepts the primary server file individually. However, supplementary files, such as connections, are processed only when they are included in a ZIP file with the server files or in an Excel file with the server sheet.

 We recommend that you upload the most detailed data available and review your export files before uploading to ensure data completeness and accuracy. Verify that all required files are included in your upload, and confirm that the data reflects your current environment state. This preliminary review helps minimize the need for re-uploads and ensures that AWS Transform can more accurately capture your on-premises environment and better support migration planning, including application grouping and wave planning. You can also incrementally add data as you obtain better data. 

 AWS Transform automatically detects file format and structure, parses and extracts structured entity records, removes duplicates across multiple files, validates data quality and reports issues, and then prepares a summary ready for downstream migration planning.

 After you share and confirm your on-premises data, you can review discovery data by expanding **Discover on-premises** data in the **Job Plan** and choosing **Inventory readiness summary**. You can also ask questions about the data you uploaded to verify that AWS Transform correctly ingested everything or to identify and correct any mistakes in the data processing. For example, you can ask about operating system and versions. If you need to correct mistakes you can re-upload your inventory data from the discovery tool you used, and AWS Transform will automatically process, de-duplicate, and merge the updated records. You can also remove a previously uploaded file if you no longer want to use that inventory data. 
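The re-upload, de-duplicate, and merge behavior described above can be sketched as a simple merge keyed on a server identifier. This is an illustrative simplification with hypothetical field names, not the actual AWS Transform pipeline:

```
def merge_inventory(existing, uploads):
    """Merge newly uploaded server records into an existing inventory.

    Records are de-duplicated by hostname; a re-uploaded record
    replaces the earlier one, mirroring the re-upload-and-merge
    behavior described above.
    """
    inventory = {server["hostname"]: server for server in existing}
    for server in uploads:
        inventory[server["hostname"]] = server  # newer data wins
    return list(inventory.values())

existing = [{"hostname": "web-01", "os": "Windows Server 2019"}]
uploads = [
    {"hostname": "web-01", "os": "Windows Server 2022"},  # corrected record
    {"hostname": "db-01", "os": "RHEL 8"},
]
merged = merge_inventory(existing, uploads)
```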

# Migration planning


The Migration planning job step within AWS Transform for VMware is a collaborative chat-based experience for planning large migrations. AWS Transform agents apply AWS Prescriptive Guidance to guide customers from analysis of on-premises data to finalized migration wave plans.

After the Discover on-premises data job completes successfully, AWS Transform uses the discovery data to group applications into migration waves. AWS Transform guides you through steps to analyze and scope your servers, group them into applications, generate move groups, and build migration waves. When analyzing your on-premises environment, you can ask questions to better understand how AWS Transform analyzed your installed software, for example, server dependencies and network architecture.

AWS Transform supports scope adjustments within application groups and waves. You can re-upload discovery data at any time, and AWS Transform will automatically process, de-duplicate, and merge new records with existing data. When changes are detected, such as newly discovered dependencies or infrastructure additions, AWS Transform flags impacted dependency groups and provides recommendations for wave plan adjustments. Migration planning can also use unstructured text data to enrich the planning process.

There are four migration planning stages:
+ In **Scope and analyze**, you can review your discovery data, ask questions about your software and network environment and determine the resources in scope for migration.
+ In **Group apps**, you can provide a combination of business and technical rules regarding, for example, hostname analysis, network dependencies, and business rules, so that migration planning can group your infrastructure into applications. If you already have an inventory of your applications, migration planning can use that instead.
+ In **Generate move groups**, you can give migration planning your technical and business requirements so that it can determine which applications must be moved together. Technical dependencies include databases, message queues, or other resources shared between multiple apps. Business and operational dependencies include business criticality, RPO and RTO, data center location, and application owners. 
+ Finally, in **Build waves**, you can give migration planning context about your timelines and priorities so it can build a wave plan that you can migrate. You can select move groups for inclusion in a wave based on factors such as priority score, move group size, user counts, and application complexity. 

**Migration Planning Terminology:**
+ *Migration waves* are logical groups that are migrated together. Migration waves consist of one or more move groups.
+ A *move group* is a set of co-dependent applications that must be moved together. They may have technical dependencies, such as a shared database, or business dependencies, such as supporting a shared business function.
+ A *dependency* is a relationship between systems. There are several types of dependencies including:
  + Critical or hard dependencies, where systems cannot operate without the dependency. Common examples of this are applications that depend on databases, other applications, or services.
  + Soft dependencies, which are not critical for the operation of the system. Common examples of this include latency insensitive dependencies that can be migrated independently.
  + Non-technical dependencies, which include business, organizational, operational, and compliance dependencies. These are dependencies related to your organization and its priorities. Examples include a shared business function and organizational ownership.
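The terminology above can be captured in a small data model. The class and field names below are hypothetical and purely illustrative; AWS Transform does not expose this structure as an API:

```
from dataclasses import dataclass, field

@dataclass
class Dependency:
    source: str
    target: str
    kind: str  # "hard", "soft", or "non-technical"

@dataclass
class MoveGroup:
    # A set of co-dependent applications that must be moved together
    name: str
    applications: list
    dependencies: list = field(default_factory=list)

@dataclass
class Wave:
    # A wave consists of one or more move groups migrated together
    number: int
    move_groups: list

hr = MoveGroup(
    "hr-stack",
    ["HR App", "HR DB"],
    [Dependency("HR App", "HR DB", "hard")],  # app cannot run without its DB
)
wave1 = Wave(1, [hr])
```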

## Workflow


Migration planning is an interactive and iterative workflow. You can go back and make changes to previous steps at any time. A typical migration planning workflow is:

1. Migration planning starts by summarizing the discovery data that is available. Review the available data and return to the discovery step at any time to provide additional data.

1. Within the scoping and analysis step you can ask questions about your on-premises environment to validate the data you have collected. Example questions include:

   1. List my servers by operating system

   1. Summarize my on-premises network topology

   1. List the most common technologies running in my environment.

1. While analyzing your environment, if you identify servers that should not be in scope for migration you can tell AWS Transform to exclude those resources. Examples of this include:

   1. Remove all servers which have *legacy* in their hostname

   1. Remove all servers in the 10.0.2.0/24 subnet

   1. Remove all servers running versions of Windows older than 2022

1. Once you have sufficiently explored your environment and determined your migration scope you can tell AWS Transform to move to the next migration planning step.

1. The next step is application grouping. If you already have your servers mapped to applications, you can tell AWS Transform to use that mapping and skip this step. If you do not have your applications pre-defined you can provide the technical and business logic that defines your applications. AWS Transform will guide you through the application grouping process and suggest the data points that you can provide to effectively group your servers together into applications. The more information you can provide about your on-premises applications, the more effectively AWS Transform can group your servers into apps. Once you have provided sufficient information, you can then instruct AWS Transform to perform application grouping.

1. Once application grouping has been performed, review the application groups. You can instruct AWS Transform to make any necessary changes, for example:

   1. Move server example-server to application-5

   1. Rename application-5 "HR App Test Environment"

   1. Remove all Linux servers from IIS Dev Farm

1. Once your apps are grouped, instruct AWS Transform to move to the next step.

1. The next step is move grouping. In the move grouping step you identify applications that must be moved together. Provide context around your technical and non-technical dependencies. AWS Transform will guide you through the process and suggest data points that you can provide to group your apps together. There are several considerations to make at this stage including:

   1. What should be the target size of a move group?

   1. Do you want to combine environments, for example dev, test, and prod, for each app, or split them?

   1. How do you want to consider network dependencies? Are all dependencies critical or can some dependencies be considered soft dependencies and be split across move groups?

1. Once you have provided your rules for move grouping, instruct AWS Transform to execute your move grouping strategy. You can then review and modify your move groups. Once you have reviewed your move groups you can instruct AWS Transform to move to the final migration planning step.

1. Wave planning is the final step within migration planning. In this step, you group your move groups into migration waves and prioritize those waves. Within the wave planning step, AWS Transform will guide you through providing the business prioritization required to group your move groups into waves and then prioritize those waves. Considerations within wave planning include:

   1. The business criticality of each of your move groups

   1. The migration timelines and the timelines for each move group

   1. The risks associated with each move group

   1. The number of servers to migrate per wave

1. Once you have provided sufficient guidance on how to group into waves, instruct AWS Transform to execute the wave planning. You can then review your waves and modify them.

1. Once you have finalized your wave plan, you can complete migration planning and move to execution. You can return to migration planning at any time to refine and iterate on your plan.
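The scope-exclusion examples in the workflow above (hostname, subnet, and OS-version rules) can be expressed as simple predicates. The record shape here is hypothetical; AWS Transform accepts these rules as natural-language chat instructions rather than code:

```
import ipaddress

def in_scope(server):
    """Apply the example exclusion rules from the scoping step:
    drop servers with 'legacy' in the hostname, servers in
    10.0.2.0/24, and Windows Server versions older than 2022."""
    if "legacy" in server["hostname"]:
        return False
    if ipaddress.ip_address(server["ip"]) in ipaddress.ip_network("10.0.2.0/24"):
        return False
    os_name = server["os"]
    if os_name.startswith("Windows Server"):
        year = int(os_name.split()[-1])
        if year < 2022:
            return False
    return True

servers = [
    {"hostname": "app-01", "ip": "10.0.1.5", "os": "Windows Server 2022"},
    {"hostname": "legacy-db", "ip": "10.0.1.9", "os": "RHEL 8"},
    {"hostname": "file-01", "ip": "10.0.2.7", "os": "Windows Server 2019"},
]
scoped = [s for s in servers if in_scope(s)]
```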

# Connect target account


The target account is where your network will be deployed and where your migrated servers and applications will reside in AWS.

**Important**  
AWS Transform will create an Amazon S3 bucket on your behalf in this target AWS account. This bucket won't have `SecureTransport` enabled by default. If you want the bucket policy to include secure transport, you must update the policy yourself. For more information, see [Security best practices for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html).

**To use an existing target account connector**

1. In the **Job Plan** pane, expand **Choose target account**, and then choose **Create or select connectors**.

1. In the **Collaboration** tab, select an existing connector if your workspace already has connectors, and then choose **Use connector**. In the list of available connectors, if a connector is grayed out, that means its version isn't compatible with the job type that you selected earlier.
**Important**  
If you specify a connector with a target AWS Region that is different from the AWS Transform Region, that means AWS Transform will be transferring your data across AWS Regions.

1. Choose **Continue**.

**To create a new connector**

1. In the **Job Plan** pane, expand **Connect target account**, and then choose **Create or select connectors**.

1. Specify the AWS account and AWS Region that you want to use as your target, and then choose **Next**.
**Important**  
If you specify a connector with a target AWS Region that is different from the discovery AWS Region, that means AWS Transform will be transferring your data across AWS Regions.

1. Choose whether you want to use Amazon S3 managed keys for encryption. If you specify your own KMS key, you can use the default key policy. However, if you want a less permissive key policy, the following is an example. For information about how to create a KMS key, see [Create a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.

   AWS Transform uses the `kms:DescribeKey` permission to make sure the key exists. It uses the `kms:GenerateDataKey` and `kms:Decrypt` permissions to encrypt and decrypt the transformation job data in the Amazon S3 bucket.

   AWS Transform uses default Amazon S3 encryption. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html)
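   A less permissive key policy might look like the following. This is an illustrative sketch: the statement `Sid` values and the principal ARNs (account admin and connector role name) are placeholders that you must replace with the actual principals in your target account.

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowKeyAdministration",
         "Effect": "Allow",
         "Principal": { "AWS": "arn:aws:iam::target-account-ID:root" },
         "Action": "kms:*",
         "Resource": "*"
       },
       {
         "Sid": "AllowTransformJobDataAccess",
         "Effect": "Allow",
         "Principal": { "AWS": "arn:aws:iam::target-account-ID:role/connector-role-name" },
         "Action": [
           "kms:DescribeKey",
           "kms:GenerateDataKey",
           "kms:Decrypt"
         ],
         "Resource": "*"
       }
     ]
   }
   ```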

1. Choose **Continue**.

1. Copy the verification link, share it with an administrator of the target AWS account, and ask them to approve the connection request.

1. After the administrator of the AWS account approves the request, select the newly created connector from the list of connectors in the **Collaboration** tab, and then choose **Use connector**.

1. Choose **Send to AWS Transform**.

If you plan to modify the AWS Application Migration Service template to enable post-launch actions, add the following permission to the target connector role. You can find the name of that role in the **Collaboration** tab after the connector is created. For information about how to add permissions to a role, see [Update permissions for a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-permissions.html) in the *IAM User Guide*.

```
{
  "Sid": "MGNPostLaunchActions",
  "Effect": "Allow",
  "Action": [
    "iam:PassRole"
  ],
  "Resource": "arn:aws:iam::target-account-ID:role/service-role/AWSApplicationMigrationLaunchInstanceWithSsmRole"
}
```

# Migrate network


AWS Transform migrates VMware networks to AWS by translating your source environment configuration into AWS-equivalent network resources. AWS Transform analyzes your source network data and creates VPCs, subnets, security groups, NAT gateways, transit gateways, elastic IPs, routes, and route tables as needed. You can review and modify the generated network configuration before deployment. For deployment, you can either have AWS Transform deploy the configuration for you and analyze deployed network connectivity, or choose self-deployment—in which case AWS Transform generates Infrastructure as Code (IaC) in your preferred format: AWS Cloud Development Kit (AWS CDK), Landing Zone Accelerator (LZA), or HashiCorp Terraform.

To migrate your network, follow these steps:

1. Upload your source network file

1. Upload additional configuration files (optional, for RVTools environments)

1. Select a network topology

1. Select security groups mapping strategy

1. Review the generated VPC configurations

1. Generate a network diagram (optional)

1. Configure resource tagging

1. Deploy your network

**Note**  
For multi-account deployments, you must configure cross-account IAM roles and trusted access for AWS Organizations before starting the network migration. For more information, see [Migration to multiple target accounts](migration-multiple-target-accounts.md).

## Step 1: Source network mapping


The network mapping process requires uploading a configuration file from your source environment. The tool you choose depends on your source network type:
+ **Software Defined Networks (SDN):** Import/Export for VMware NSX network virtualization or Cisco ACI config for Cisco Application Centric Infrastructure.
+ **VMware vSphere networks:** [RVTools](https://www.dell.com/en-us/shop/vmware/sl/rvtools). When using RVTools files, AWS Transform generates Amazon VPC configurations only. Security group configurations require additional input from firewall or software-defined network files. See [Additional configuration files](#transform-vmware-firewall-and-sdn-config-files) for more details.
+ **Networks based on firewall configuration data:** Export files from Palo Alto Networks Firewall, Fortinet FortiGate Firewall, or Cisco ACI. For supported versions and extraction instructions, see [Configuration file extraction](#transform-vmware-config-file-extraction).
+ **Hybrid networks running both VMware and non-VMware workloads:** Application mapping tools such as ModelizeIT.
+ **Other file types:** If your configuration file is not one of the supported formats listed above, AWS Transform will attempt to automatically convert it to a format that can be processed by the service. This conversion can take up to two hours depending on the file size and complexity.

**Warning**  
The official RVTools site is [https://www.dell.com/en-us/shop/vmware/sl/rvtools](https://www.dell.com/en-us/shop/vmware/sl/rvtools), which is the site that this guide links to in steps that mention RVTools. Beware of the scam site (rvtools)(dot)(org).

AWS Transform creates VPCs from all source network segments, with each detected segment becoming its own distinct VPC. Network segmentation varies by source type:
+ **vNetwork:** AWS Transform groups VMs by vSwitch and VLAN. VLANs can appear under multiple vSwitches (except VLAN 0).
+ **NSX networks:** AWS Transform segments the network based on Tier-1 routers, grouping the routers and collecting their segments.
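As an illustration of the vNetwork grouping described above, the following sketch groups VMs into candidate segments by (vSwitch, VLAN) pair. The record shape is hypothetical, and the actual AWS Transform segmentation logic (including its VLAN 0 handling) is more involved:

```
from collections import defaultdict

def segment_vnetwork(vms):
    """Group VMs into candidate VPC segments by (vSwitch, VLAN).
    The same VLAN under two different vSwitches yields two segments."""
    segments = defaultdict(list)
    for vm in vms:
        segments[(vm["vswitch"], vm["vlan"])].append(vm["name"])
    return dict(segments)

vms = [
    {"name": "web-01", "vswitch": "vSwitch0", "vlan": 10},
    {"name": "web-02", "vswitch": "vSwitch0", "vlan": 10},
    {"name": "db-01", "vswitch": "vSwitch1", "vlan": 20},
]
segments = segment_vnetwork(vms)
```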

## Step 2: Additional configuration files


For RVTools source environments, you can optionally upload additional configuration files to enable security group generation. Without additional configuration, no security groups will be generated for RVTools-based migrations.

AWS Transform supports the following additional configuration file types. Only one configuration file from one platform can be uploaded.
+ Cisco Application Centric Infrastructure (ACI): Network policy configurations
+ Palo Alto Networks: Firewall security policies
+ Fortinet FortiGate: Firewall security policies

When you upload a firewall or Cisco ACI file, AWS Transform generates network infrastructure and security groups. When you upload an RVTools file alone, AWS Transform generates network infrastructure only.

For supported versions and extraction instructions, see [Configuration file extraction](#transform-vmware-config-file-extraction).

## Step 3: Network topologies


During the network definition step, you select a network topology. You can choose the **Isolated VPCs** topology or the **Hub and Spoke** topology.

**Important**  
For both topologies, AWS Transform does not open the communication to the internet. You must open it manually after taking appropriate security precautions.

### Isolated VPCs


These are independent network environments that operate as separate units within AWS. VPCs maintain complete network isolation, with no built-in communication pathways between them. This separation provides the highest level of network boundary protection. You can connect the VPCs through specific networking configurations if needed.

### Hub and Spoke


In this model, an AWS Transit Gateway created by AWS Transform acts as the hub that connects to multiple workload VPCs (the spokes). During network convergence, AWS Transform creates a spoke VPC for each detected source network segment.

AWS Transform creates three specialized VPCs for traffic management and security:
+ Inspection VPC: Where you establish the firewall that inspects the traffic. You can create firewall rule configurations here to modify VPC connections.
+ Inbound VPC: For all traffic from the public internet (north-south). Includes an internet gateway.
+ Outbound VPC: For all traffic to the public internet. Has an internet gateway, a Network Address Translation (NAT) gateway, and an [elastic IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html).

AWS Transform automatically associates all spoke VPCs with the default association route table and propagates routes from all spoke VPCs to the default propagation route table. This automation creates routing paths without manual configuration, though traffic flow remains subject to security group permissions.

If you want fine-grained control over the communication between the VPCs, choose the **Isolated VPCs** option and modify the generated network to create the specific communication paths you require.

## Step 4: Security group mapping


AWS Transform creates security groups based on your source environment configurations. AWS Transform converts security policies, security policy rules, gateway policies, and gateway policy rules to security groups.

**Important**  
AWS Transform makes a best effort to create security groups that match your source environment. Review and modify the generated security groups to ensure that they meet your company's needs and security policies.

Choose one of the following security group mapping strategies:
+ **MAP:** Translate rules from your source environment. Only static IP assignment is supported with this strategy (see IP migration approaches below).
+ **SKIP:** Do not translate rules from your source environment. No security groups are generated. Both static and dynamic (DHCP) IP assignment are available. For RVTools source environments without additional configuration files, AWS Transform automatically uses SKIP.
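The MAP strategy can be illustrated with a minimal translation sketch. The input rule and output shapes below are hypothetical simplifications, not the formats AWS Transform actually consumes or produces:

```
def to_security_group_rule(policy_rule):
    """Translate a simplified firewall allow rule into a security
    group ingress rule, in the spirit of the MAP strategy."""
    if policy_rule["action"] != "allow":
        return None  # security groups cannot express deny rules
    return {
        "IpProtocol": policy_rule["protocol"],
        "FromPort": policy_rule["port"],
        "ToPort": policy_rule["port"],
        "CidrIp": policy_rule["source"],
    }

rule = to_security_group_rule(
    {"action": "allow", "protocol": "tcp", "port": 443, "source": "10.0.1.0/24"}
)
```

Note the asymmetry the sketch makes explicit: security groups are allow-lists, so deny rules from the source firewall have no direct equivalent and need separate review.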

**IP migration approaches**

The system offers two key network configuration choices for your migration:

**Network range selection:**
+ **Keep Existing Ranges (IP Address Ranges Retention):** Keep original IP address ranges during migration. Ideal for lift-and-shift scenarios with legacy applications that have hard-coded IP dependencies or existing firewall rules.
+ **Update to new IP ranges (CIDR update):** You can modify each VPC CIDR range during migration, and AWS Transform automatically propagates changes to subnets, route tables, and security groups.

**IP addresses assignment:**
+ **Fixed IP addresses (Static):** The system assigns static IPs based on the CIDR. This is best for applications requiring predictable network behavior, DNS management, or IP-based access control. IPs persist across instance restarts using Elastic Network Interfaces (ENIs).
+ **Dynamic IP assignment (AWS DHCP):** Automatically assign IPs from subnet pools at instance launch. Optimal for cloud-native applications and auto-scaling workloads. Reduces operational overhead but requires applications to use DNS or service discovery.

You can combine either range selection with either IP assignment method.

**Note**  
IP address assignment strategy is set at the wave level. You can assign different strategies to specific servers by customizing the wave file. For example, if you chose a static IP address approach for the wave but want to assign a dynamic approach to a specific server, you would use `[RESET_VALUE]` as described in [Editing your configuration](https://docs.aws.amazon.com/mgn/latest/ug/configuration-editing.html) in the *Application Migration Service User Guide*.

## Step 5: Review VPC configurations


After AWS Transform generates Amazon VPC configurations, it displays the generated VPC networks. You can either use the current configuration or modify VPC CIDRs.

For single-account deployments, you can edit VPC CIDRs only. For multi-account deployments, you can also edit the target accounts for each VPC network.

**Note**  
You cannot modify the prefix length (the value after the "/").

To modify VPC CIDRs:

1. In the Generated VPCs list, provide your modified CIDRs.

1. Choose **Submit** to apply the changes and rerun the mapping process.

1. Review the results, then either continue or repeat the modification steps.
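A CIDR modification must keep the original prefix length (the value after the "/"). The following sketch shows one way to check a proposed change before submitting it; the function is illustrative, not part of AWS Transform:

```
import ipaddress

def validate_cidr_change(original, modified):
    """Check that a modified VPC CIDR is valid and keeps the
    original prefix length, as required in the review step."""
    old = ipaddress.ip_network(original)
    new = ipaddress.ip_network(modified)  # raises ValueError if malformed
    if new.prefixlen != old.prefixlen:
        raise ValueError(
            f"prefix length must stay /{old.prefixlen}, got /{new.prefixlen}"
        )
    return str(new)

ok = validate_cidr_change("10.0.0.0/16", "172.16.0.0/16")
```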

## Step 6: Network diagram


After reviewing the generated VPC configurations, you can optionally generate a network diagram to visualize your network topology. AWS Transform supports the following diagram formats:
+ **Mermaid code (.mmd):** A text-based diagram definition file that can be rendered using Mermaid-compatible tools.
+ **Image (.png):** A rendered image of your network topology.
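For reference, a Mermaid (.mmd) definition for a hub-and-spoke topology like the one described earlier might look similar to the following sketch. The node names are illustrative, not the names AWS Transform generates:

```
graph TD
    TGW[Transit Gateway] --- Inbound[Inbound VPC]
    TGW --- Outbound[Outbound VPC]
    TGW --- Inspection[Inspection VPC]
    TGW --- Spoke1[Spoke VPC: app-segment-1]
    TGW --- Spoke2[Spoke VPC: app-segment-2]
```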

## Step 7: Configure resource tagging


AWS Transform tags your network resources for launch and replication, and supports custom tags and AWS Migration Acceleration Program (MAP) tags.

### Automatic tags for launch and replication


AWS Transform automatically tags migrated network resources (VPCs, subnets, security groups, and route tables) with the following tags:
+ **Key:** `CreatedBy` **Value:** `AWSApplicationMigrationService`
+ **Key:** `ATWorkspace` **Value:** `workspace-id`

These tags allow the VPC and subnet to be used for launching test and cutover instances in AWS.

**Note**  
Network migration generates network segments without internet connectivity, so migrated VPCs and subnets are not suitable as staging areas for replication by default.

To also use the VPC and subnet as a staging area (replication), manually add the following tags:
+ **Key:** `CreatedFor` **Value:** `AWSTransform`
+ **Key:** `ATWorkspace` **Value:** `workspace-id`

You can also apply these tags to any existing AWS network resource to make it available for replication.
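The replication tag set above can be built programmatically when tagging many resources. This is a sketch with a hypothetical workspace ID; the resulting list matches the tag shape accepted by, for example, the EC2 `CreateTags` API (boto3: `ec2.create_tags(Resources=[...], Tags=tags)`):

```
def replication_tags(workspace_id):
    """Return the tag set that marks a VPC or subnet as usable as a
    replication staging area, per the list above."""
    return [
        {"Key": "CreatedFor", "Value": "AWSTransform"},
        {"Key": "ATWorkspace", "Value": workspace_id},
    ]

tags = replication_tags("ws-1234567890")  # placeholder workspace ID
```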

Find your workspace ID in the AWS Transform web app URL: https://... /workspace/*workspace-id*/job/*job-id*

### Custom tags


In addition to the tags applied automatically by AWS Transform, you can add custom tags to organize, track costs, and manage compliance for your migrated network resources. You can apply custom tags at two levels:
+ **Job-level tags:** Apply to all resources created by this job, including all VPCs, subnets, security groups, and route tables.
+ **VPC-level tags:** Apply to a specific VPC and automatically cascade to all its associated resources (subnets, security groups, route tables).

**Note**  
Maximum 40 tags per request. Each tag requires a key and value. AWS tagging conventions apply.

AWS Transform applies these tags when generating the Infrastructure as Code templates.

### AWS Migration Acceleration Program


If your migration is part of the **AWS Migration Acceleration Program (MAP 2.0)**, AWS Transform applies a MAP tag to your resources. If you provided your MPE ID earlier in the migration process, the tag is applied automatically. Otherwise, after you finish reviewing the generated VPC configurations, AWS Transform asks whether you have a MAP agreement and prompts you to provide your MPE ID — a 10-character code using uppercase letters and digits (for example, ABCDE12345). The applied tag uses the format:
+ **Key:** `map-migrated` **Value:** `mig` followed by your MPE ID (for example, `migABCDE12345`)
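
For illustration, this sketch validates the MPE ID format described above and builds the corresponding tag value. The MPE ID shown is the documentation example, not a real agreement ID.

```shell
MPE_ID="ABCDE12345"   # example value from this guide - use your own MPE ID

# The MPE ID must be exactly 10 characters: uppercase letters and digits.
if printf '%s' "$MPE_ID" | grep -Eq '^[A-Z0-9]{10}$'; then
  TAG_VALUE="mig${MPE_ID}"
  echo "map-migrated = $TAG_VALUE"   # prints: map-migrated = migABCDE12345
else
  echo "Invalid MPE ID format: $MPE_ID" >&2
  exit 1
fi
```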

## Step 8: Deploy network


After tagging, select your deployment strategy:
+ **AWS Transform-managed deployment:** AWS Transform uses CloudFormation templates to deploy your network and runs Reachability Analyzer to check connectivity between subnets across multiple VPCs and within the same VPC.
**Note**  
Network deployment requests require explicit approval before execution. See the deployment approvals process below.
+ **Self-deployment:** AWS Transform generates Infrastructure as Code (IaC) templates. CloudFormation templates are generated by default. You can also select additional output formats:
  + AWS CDK: TypeScript project for programmatic infrastructure deployment
  + HashiCorp Terraform: HCL templates for managing network resources
  + Landing Zone Accelerator (LZA): A network-config.yaml file for LZA network configuration

**Note**  
When deploying via the Landing Zone Accelerator (LZA) pipeline, ensure that your AWS Transform account and LZA installation are in the same AWS Organization. Deployment will fail if the Organization IDs don't match.

For self-deployment, use the link provided to download a zip file containing the generated templates. The zip folder includes a README.md file that explains how to use the generated templates.

To verify that the downloaded file hasn't been corrupted or tampered with, generate and download a checksum, then compare it to a hash that you generate locally with the command `openssl dgst -sha256 -binary <file.zip> | base64`.
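
A minimal sketch of that verification, using a stand-in file (substitute your downloaded zip and paste the checksum value that you downloaded from AWS Transform):

```shell
# Stand-in file for illustration - use your downloaded templates zip instead.
printf 'demo content' > templates.zip

# Hash the file locally with SHA-256, base64-encoded.
LOCAL_HASH=$(openssl dgst -sha256 -binary templates.zip | base64)

# Paste the checksum that you downloaded from AWS Transform here.
EXPECTED_HASH="$LOCAL_HASH"

if [ "$LOCAL_HASH" = "$EXPECTED_HASH" ]; then
  echo "checksum OK - safe to use"
else
  echo "checksum mismatch - do not use the file" >&2
  exit 1
fi
```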

**Deployment approvals process**

Network deployment requests require explicit approval before execution. When you submit a deployment request, it automatically routes to authorized approvers through the AWS Transform Approvals tab. Approvers validate both CloudFormation templates and network configurations to ensure compliance with security standards and architectural requirements. Each submission triggers a new review cycle, and deployments proceed only after receiving confirmation. If an approver denies your request, contact them directly to discuss necessary modifications. The system tracks all approval decisions for audit purposes and maintains deployment history.

## Network deletion


After deployment, you can delete the deployed network resources. Deletion is available immediately after deployment completes. If you modify the deployed network resources after deployment, AWS Transform cannot delete them.
+ **AWS Transform-managed deployments:** AWS Transform removes all CloudFormation stacks that were created during deployment. This action requires approval through the AWS Transform Approvals tab.
+ **Self-deployments:** You must manually delete the deployed resources through the AWS Management Console or AWS CLI.

## Configuration file extraction


You can use Cisco ACI, Palo Alto Networks, and Fortinet FortiGate configuration files as standalone source files to generate network infrastructure and security groups, or as complementary files alongside an RVTools upload to add security group generation. The extraction process is the same in both cases.

To extract configuration files from your firewall and network environments, follow these procedures. Consult vendor documentation for the latest information.

### Fortinet FortiGate

+ Firmware: v7.0 and up
+ Requirements: `super_admin` or `super_admin_readonly` privileges at the global level
+ Steps:

  1. Connect to the firewall via SSH or built-in CLI client

  1. Run: `show | grep ""` (`| grep ""` disables pagination)

  1. Save all output to a file starting from the `show` command

### Palo Alto Networks

+ Firmware: 10.1 and up
+ Requirements: superadmin role
+ Steps: Connect via SSH, run the commands below, and save the outputs:

  ```
  set cli pager off
  set cli config-output-format set
  configure
  show              # Save as palo-conf.txt
  show predefined   # Save as palo-default.txt
  ```

### Cisco ACI

+ Firmware: 6.0 and up
+ Requirements: Admin role with all privileges; SCP/SFTP/FTP destination configured
+ Steps:

  1. Connect to the APIC controller via a browser

  1. Go to **Admin >> Config Rollbacks**

  1. In **Take a snapshot**, select a remote location and choose **Create a snapshot now**

  1. After the **Transfer successful** message appears, connect to the remote location server and retrieve the latest snapshot file (.gz file)

# Migration to multiple target accounts


AWS Transform supports migrating VMware workloads to multiple AWS accounts simultaneously. This capability enables you to migrate workloads directly to their intended target accounts while maintaining your organization's security boundaries and governance structures.

## Benefits


Multi-account migration provides the following benefits:
+ *Maintain security boundaries* - Migrate workloads directly to accounts that align with your business units and security requirements
+ *Unified management* - Control the entire migration process from a single management interface

## Limitations


Multi-account migration has the following limitations:
+ *Single Region only* - You can migrate to multiple accounts within a single AWS Region. For multi-Region migrations, you must create separate projects for each target Region
+ *One account per wave* - Each migration wave can target only one account. Applications requiring different target accounts must be placed in separate waves
+ *AWS Organizations required* - All target accounts must be part of an AWS Organization

## Prerequisites


Before you begin a multi-account migration, ensure you have the following:
+ An AWS Organization containing all target accounts for your migration
+ A designated Management Account with [Delegated administrator](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_delegate_policies.html) permissions, or an account with *Manager* permissions in the AWS Organization.
+ Appropriate permissions initialized across all target accounts. Set up these roles using the link to the Application Migration Service console that AWS Transform provides when it asks you to set up these permissions.

  1. If you're using Delegated administrator permissions, you can select delegated administrators.

  1. Choose **View Roles** to learn about the permissions.

  1. Select **Initialize Multi Account Network Migration**. Return to AWS Transform and report that you successfully initialized multi-account network migration.



## Implementation overview


Multi-account migration follows this high-level process:

1. After you've started the migration and connected your discovery account, tell AWS Transform that you want to start network migration.

1. AWS Transform asks if you want to deploy to **Single Account** or **Multi Account**.

1. Create or select a target connector. Learn more in [Connect target account](transform-vmware-connect-target-account.md). The target account must be an account in your AWS Organization with Delegated administrator or Manager permissions.

1. Provide your source network file.

1. *Review network translation* - AWS Transform displays the target network configuration, including which resources deploy to which accounts. After your VPC networks are generated you can modify CIDR ranges and select targets across multiple accounts in the table provided in the human-in-the-loop (HITL) pane.

1. *Generate Infrastructure as Code (IaC) files* - The IaC files (CDK, CloudFormation, Terraform, and LZA-compatible YAML) that AWS Transform generates include any changes you make to the target accounts and CIDR ranges.

1. *Deploy network infrastructure* - Choose to deploy the network yourself or have AWS Transform deploy it across your target accounts.

1. *Execute server migration* - AWS Transform migrates servers to their assigned target accounts (using Application Migration Service global view).
**Note**  
The inventory file that you provide to AWS Transform should include the target account configuration. A workload and the subnets it uses should go to the same target account.

# Set up service permissions


In this step, you initialize the AWS Application Migration Service (Application Migration Service) if you haven't already. To learn more about this requirement, see [Initializing Application Migration Service with the console](https://docs.aws.amazon.com/mgn/latest/ug/mgn-initialize-console.html) or [Initializing AWS Application Migration Service with the API](https://docs.aws.amazon.com/mgn/latest/ug/mgn-initialize-api.html). 

After you initialize Application Migration Service, AWS Transform helps you add MAP tags to your resources (if you have a MAP 2.0 agreement in place) so that you can get MAP credit. For information about MAP, see [AWS Migration Acceleration Program](https://aws.amazon.com/migration-acceleration-program/).

# Prepare and migrate waves


At this stage, you will see migration waves in the **Job Plan** pane. For each wave, perform the following steps. In some of these steps you will have the option of importing an updated inventory file. AWS Transform allows one import to a given target AWS account and target AWS Region at a time. This means that if you work on more than one wave simultaneously, or if there is more than one migration job running with the same target account, you must wait for an import to finish before you can perform another import in a different wave or job.

## Prepare waves


Each wave includes a **Set up migration wave** task. On its **Collaboration** tab you can configure the wave's settings.

**Set up migration wave**

1. In the **Job Plan** pane, expand the step **Set up migration waves**, and then choose **Set EC2 recommendation preferences**. Follow the instructions in the right pane, and then choose **Continue**. Learn more about Amazon EC2 recommendations in [Generating Amazon EC2 recommendations in AWS Migration Hub](https://docs.aws.amazon.com/migrationhub/latest/ug/generating-ec2-recommendations.html).

1. In the **Staging area subnet** section, you can choose a staging area subnet from the dropdown menu of the available subnets. 

   Only subnets that are tagged with these tag key-value pairs, in VPCs *that are also tagged* with them, appear in the list. Learn more in [VPC and subnet tags](#transform-tag-vpc-subnets).

1. For each wave, choose your [IP assignment approach](https://docs.aws.amazon.com/AWSQT/flexible_ip/transform-vmware-migrate-network.html#vmware-migration-ip):
   + **Use source IP or the converted IP from the new CIDR**
   + **Use new IP using DHCP**

1. In the **Job Plan** pane, choose **Confirm inventory for *wave-name***. Download the inventory file and review the list of servers and Amazon EC2 configurations. Modify the file if necessary, but do not remove columns or change the titles of the existing columns. You can control the operating system licensing options (BYOL/LI) and tenancy by specifying the configuration in the columns with these headers: `mgn:launch:placement:operating-system-licensing` and `mgn:launch:placement:tenancy`. Learn more in [Import parameters](https://docs.aws.amazon.com/mgn/latest/ug/import-main.html#import-parameters) in the *Application Migration Service User Guide*. After you choose whether to continue with the file you downloaded or to upload a version of the file that you updated, choose **Continue**.

**Note**  
AWS Transform provides Amazon EC2 recommendations based on the utilization specification of your source VMs. You can modify the suggested Amazon EC2 instance types to include recommendations from the [Migration Evaluator](https://aws.amazon.com/migration-evaluator/), [AWS Optimization and Licensing Assessment (OLA)](https://docs.aws.amazon.com/prescriptive-guidance/latest/optimize-costs-microsoft-workloads/aws-ola.html), or a [Migration assessment](transform-app-assessments.md) job.
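
As an illustration of the licensing and tenancy columns mentioned above, here is a hypothetical two-server inventory excerpt, plus a quick check that every row sets a licensing value before you re-upload the file. The `BYOL`/`LI` and tenancy values shown follow this guide's shorthand; confirm the accepted values in the Import parameters reference.

```shell
# Hypothetical inventory excerpt - real files contain many more columns.
cat > inventory-excerpt.csv <<'EOF'
mgn:server:user-provided-id,mgn:launch:placement:operating-system-licensing,mgn:launch:placement:tenancy
srv-web-01,BYOL,dedicated
srv-db-01,LI,default
EOF

# Check that every data row has a non-empty licensing value (column 2).
awk -F, 'NR>1 && $2=="" {print "row " NR " is missing a licensing value"; bad=1} END {exit bad}' \
  inventory-excerpt.csv && echo "licensing column populated"
```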

### VPC and subnet tags


Your VPCs and their subnets are automatically tagged with the following tags so that the subnets appear in AWS Transform's list of available subnets:
+ **Key:** `CreatedFor` **Value:** `AWSTransform`
+ **Key:** `ATWorkspace` **Value:** `workspace-id`

## Migrate waves


When you migrate a wave, AWS Transform keeps you informed of the progress by providing a table in the **Collaboration** pane. You can also ask AWS Transform about the status of the migration in natural language, for example:
+ What is the status of my servers?
+ What's the status of my wave?
+ What's the status of the step that I'm currently in?

**Deploy replication agents**

1. In the **Job Plan** pane, expand **Deploy replication agents**, and then choose **Start replication agent deployment**. You have two options:
   + **Use AWS Transform to automate deployment:** To automate the deployment of the agents on the source servers in this wave, AWS Transform uses an MGN connector already deployed in your account. For information about how to deploy an MGN connector in your account, see [Set up the MGN Connector](https://docs.aws.amazon.com/mgn/latest/ug/mgn-connector-setup-instructions.html) in the *Application Migration Service User Guide*.

     To use this option, perform the following steps:

     1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

     1. In the left navigation pane, under **Node Tools**, choose **Fleet Manager**.

     1. Choose the name of the managed instance of the MGN connector that you want AWS Transform to use for this wave.

     1. Tag the managed instance with the following key-value pairs.
        + Key: `CreatedFor` Value: `AWSTransform`
        + Key: `ATWorkspace` Value: *workspace-id*

        Find your workspace ID in the AWS Transform web app URL: https://.../workspace/*workspace-id*/job/*job-id*

     1. In AWS Transform, choose **Use AWS Transform to automate deployment**.

     1. Specify the MGN connector that you tagged and the AWS Secrets Manager secret that you want AWS Transform to use for this wave. You must create a single set of credentials for the MGN connector to use for deploying replication agents on all servers in a particular wave. For information about setting up the secret, see [Register server credentials](https://docs.aws.amazon.com/mgn/latest/ug/connector-register-server-credentials.html).

     1. If AWS Transform encounters errors during the deployment of the agent, you will see those errors in the **Job Plan** pane. Choose each error in the **Job Plan** pane to view its details in the **Collaboration** tab.

     1. After you resolve all errors, you can track the replication status for the wave by choosing **Review replication status** in the **Job Plan** pane.
   + **Deploy replication agents on your own:** You can deploy the replication agents on the source servers manually. Alternatively, you can use the MGN connector or another automation framework to deploy them on your own. For information about how to set up the MGN connector, see [Set up the MGN Connector](https://docs.aws.amazon.com/mgn/latest/ug/mgn-connector-setup-instructions.html) in the *Application Migration Service User Guide*.

     To deploy the replication agents manually, or use an automation framework other than the MGN Connector to deploy them, perform the following steps.

     1. Go to the AWS Application Migration Service console, and export a list of your servers. For instructions, see [Exporting your data inventory](https://docs.aws.amazon.com/mgn/latest/ug/export-main.html).

     1. Filter the list by wave to obtain a list of the servers in the current wave.

      1. Follow the instructions under [Installing the AWS Replication Agent](https://docs.aws.amazon.com/mgn/latest/ug/agent-installation.html). Specify the `user-provided-id` parameter, and for every server set its value to the server's `mgn:server:user-provided-id` as it appears in the .csv file that you exported from AWS Application Migration Service. AWS Transform connects the replication agent with the imported server using this parameter. If the parameter isn't provided, MGN creates a separate source server entry for each agent that is installed.

   **To see the replication agent installation status**, check the AWS Systems Manager run command history at agent installation time. For information, see [Understanding command statuses](https://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html) in the *AWS Systems Manager User Guide*.

   **To see the replication status in real-time**, go to the AWS Application Migration Service console. Status updates in the AWS Transform web app are delayed.

   **For quotas related to replication**, see [AWS Application Migration Service service quota limits](https://docs.aws.amazon.com/mgn/latest/ug/MGN-service-limits.html) in the *Application Migration Service User Guide*.
**Note**  
AWS Transform does not support MGN agentless replication. For information about agentless replication, see [Agentless replication overview](https://docs.aws.amazon.com/mgn/latest/ug/installing-vcenter-overview-mgn.html) in the *Application Migration Service User Guide*.
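
For manual deployments at scale, you can generate the per-server installer commands from the server list you exported. The sketch below uses a hypothetical export excerpt; the installer file name and Region are placeholders, and only the `user-provided-id` parameter comes from the steps above. The commands are echoed for review, not executed.

```shell
# Hypothetical export excerpt - real exports contain many more columns.
cat > wave-servers.csv <<'EOF'
mgn:server:user-provided-id,mgn:wave:name
srv-web-01,wave-1
srv-db-01,wave-1
EOF

# Print one installer command per server. The installer file name and
# Region are placeholders - see the agent installation instructions.
tail -n +2 wave-servers.csv | cut -d, -f1 | while read -r id; do
  echo "sudo ./aws-replication-installer-init --region us-east-1 --user-provided-id $id"
done
```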

1. When replication is complete, expand **Review the replication status** in the **Job Plan** pane. In the right pane you can see the status of the replication and resolve replication alerts.

**Note**  
To proceed, you must install the MGN replication agent on all servers in a wave. Disconnect and archive servers on which you don't install the replication agent. You can use the [disconnect-from-service](https://docs.aws.amazon.com/cli/latest/reference/mgn/disconnect-from-service.html) command to disconnect servers. To archive disconnected servers, use the [mark-as-archived](https://docs.aws.amazon.com/cli/latest/reference/mgn/mark-as-archived.html) command. The archiving command only works for source servers whose lifecycle state is `DISCONNECTED`.
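
The disconnect-and-archive sequence from the note above can be sketched as follows. The server ID is a placeholder, and the commands are echoed for review; remove the leading `echo` to run them.

```shell
SOURCE_SERVER_ID="s-0123456789abcdef0"   # placeholder source server ID

# Disconnect the server first; mark-as-archived only works on source
# servers whose lifecycle state is DISCONNECTED.
echo aws mgn disconnect-from-service --source-server-id "$SOURCE_SERVER_ID"
echo aws mgn mark-as-archived --source-server-id "$SOURCE_SERVER_ID"
```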

**Launch test instances**

1. In the **Job Plan** pane, under **Launch test instances**, choose **Confirm instance launch**.

1. Download the inventory file, review it, and choose whether to continue with the current file or upload a modified one, then choose **Launch test instances**. You can change the launch settings within the inventory file, but don't modify the list of source servers and applications.

**Mark applications as ready for cutover**

1. In the **Job Plan** pane, expand **Mark applications as ready for cutover**, and choose **Mark applications as ready for cutover**.

1. In the **Collaboration** tab, review the replication status of each application, and resolve replication alerts.

1. Choose **Mark for cutover**.

**Launch cutover instances**

1. In the **Job Plan** pane, under **Launch cutover instances**, choose **Confirm instance launch**.

1. Download and open the inventory file, and review the inventory. At this step, don't modify the list of source servers and applications in the inventory file. You can only change the launch settings.

1. Choose whether to continue with the current inventory or upload a modified one, and then choose **Continue**.

1. Choose **Launch cutover instances**.

**Finalize cutover**

1. (Optional) Review the launched Amazon EC2 cutover instances, validate connectivity, and run acceptance tests. If you need to fix a connectivity issue or a problem found during testing, revert the cutover now; this is your last opportunity to do so.

1. In the **Job Plan** pane, expand **Finalize cutover**, and then choose **Start finalizing cutover**. Finalizing the cutover removes the replication agents. After you finalize the cutover, you can no longer make changes, fix connectivity issues, or revert the cutover.

1. Choose **Finalize cutover**.

### Manage server status
Manage servers

During wave migration, you can ask AWS Transform to update or change the status of a server. For example, if 9 of the 10 servers in your wave passed the test phase but one failed, you can allow AWS Transform to move the 9 servers into the next phase and ask it to re-run the test on the tenth.

Lifecycle states include:
+ **Not ready** – The server is undergoing the initial sync process and is not yet ready for testing. Data replication can only commence once all of the initial sync steps have been completed.
+ **Ready for testing** – The server has been successfully added and data replication has started. Test or cutover instances can now be launched for this server.
+ **Test in progress** – A Test instance is currently being launched for this server.
+ **Ready for cutover** – This server has been tested and is now ready for a cutover instance to be launched.
+ **Cutover in progress** – A cutover instance is currently being launched for this server.
+ **Cutover complete** – This server has been cut over. All of the data on this server has been migrated to the AWS cutover instance.
+ **Disconnected** – This server has been disconnected.

# AWS account connectors for VMware migrations
AWS account connectors

To perform a VMware migration, you need a target AWS account connector.

The target AWS account connector connects your migration job to your new AWS environment where your workloads will reside after the migration. It's important to ensure that the target AWS account that you specify for this connector is properly set up with the necessary permissions, quotas, and configurations to support your migrated infrastructure. 

When you create your target AWS account connector, AWS Transform will ask you to specify a target AWS Region. That is the AWS Region where your target environment with all your servers will reside. You can specify any one of the following AWS Regions for the target account connector:
+ US East (N. Virginia)
+ US East (Ohio)
+ US West (Oregon)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Tokyo)
+ Asia Pacific (Osaka)
+ Asia Pacific (Seoul)
+ Asia Pacific (Sydney)
+ Asia Pacific (Singapore)
+ Canada (Central)
+ Europe (Frankfurt)
+ Europe (London)
+ Europe (Paris)
+ Europe (Ireland)
+ Europe (Stockholm)
+ South America (São Paulo)

**Important**  
If you specify a target AWS Region that is different from the AWS Transform AWS Region, that means AWS Transform will be transferring your data across AWS Regions.

The target connector connects your migration job to the target AWS account and target AWS Region for the following purposes: 
+ **Network-infrastructure setup** – The target account is where you will create new Amazon VPCs and associated network resources to host your migrated applications in the target AWS Region that you specify when you create the target connector.
+ **Amazon EC2-instance setup** – The target AWS account is where you will migrate your VMware virtual machines and run them as Amazon EC2 instances in the target AWS Region.
+ **Testing and validation** – Before final cutover, you will use the target AWS account for testing the migrated servers and ensuring they function correctly in the AWS environment.
+ **Cost management** – The target AWS account will be where the costs for running your migrated infrastructure are incurred and where you can track those costs.
+ **Long-term operations** – Post-migration, this target AWS account becomes your primary account for operating and managing your former source workloads in AWS.

**Note**  
AWS Transform may update connector types when introducing features that require permission changes within your AWS accounts. You can use a connector version that is compatible with your VMware migration job. New connectors are created with the latest version for their connector type. The current version of the discovery account connector type is 1.0. The current version of the target account connector type is 2.0.

# Tracking the progress of a migration job


You can track the progress of the transformation in two ways:
+ **Worklog** – Provides a detailed log of the actions that AWS Transform takes, along with human input requests, and your responses to those requests.
+ **Dashboard** – Provides a high-level summary of the VMware migration.