

# Discovery acceleration and initial planning
<a name="portfolio-discovery-initial-planning"></a>

This first stage of portfolio assessment focuses on the initial steps of obtaining and analyzing data at the portfolio level. The main objective is to identify business drivers and collect general data from applications and infrastructure to obtain an initial view of the portfolio. This data includes high-level technical and business attributes such as application names, environment, product versions, criticality, performance values, and others, as described in the [data requirements](understanding-initial-assessment-data-requirements.md) section. Completing this stage is key to understanding the scope of the project, identifying initial migration candidates, and informing the business case.

## Primary outcomes of this stage
<a name="discovery-outcomes"></a>
+ Documented business drivers, outcomes, goals, and technical guiding principles.
+ An initial inventory of applications and infrastructure, and identified data gaps. This is an initial view of the portfolio that will be iterated and refined in further stages.
+ A directional business case and estimated cost to migrate.
+ A list of initial migration candidates (for example, three to five applications).
+ Defined next steps.

# Understanding initial assessment data requirements
<a name="understanding-initial-assessment-data-requirements"></a>

Data collection can take a significant amount of time and easily become a blocker when there is no clarity about what data is needed and when it is needed. The key is to understand the balance between what is too little and what is too much data for the outcomes of this stage. To focus on the data and the fidelity level required for this early stage of portfolio assessment, adopt an iterative approach to data collection.

## Data sources and data requirements
<a name="data-sources-data-requirements"></a>

The first step is to identify your sources of data. Start by identifying the key stakeholders within your organization that can fulfill the data requirements. These are typically members of the service management, operations, capacity planning, monitoring, and support teams, and the application owners. Establish working sessions with members of these groups. Communicate data requirements and obtain a list of tools and existing documentation that can provide the data.

To guide these conversations, use the following set of questions:
+ How accurate and up to date is the current infrastructure and application inventory? For example, for the company configuration management database (CMDB), do we already know where the gaps are?
+ Do we have active tools and processes that keep the CMDB (or equivalent) updated? If so, how frequently is it updated? What is the latest refresh date?
+ Does the current inventory, such as the CMDB, contain application-to-infrastructure mapping? Is each infrastructure asset associated with an application? Is each application mapped to infrastructure?
+ Does the inventory contain a catalog of licenses and licensing agreements for each product?
+ Does the inventory contain dependency data? Note the existence of communication data such as server to server, application to application, application or server to database.
+ What other tools that can provide application and infrastructure information are available in the environment? Note the existence of performance, monitoring, and management tools that can be used as a source of data.
+ What are the different locations, such as data centers, hosting our applications and infrastructure?

After these questions have been answered, list your identified sources of data. Then assign a level of fidelity, or level of trust, to each of them. Data validated recently (within 30 days) from active programmatic sources, such as tools, has the highest level of fidelity. Static data is considered lower fidelity and less trusted. Examples of static data are documents, workbooks, manually updated CMDBs, and any other dataset that is not programmatically maintained or whose last refresh date is older than 60 days.

The data fidelity levels in the following table are provided as examples. We recommend that you assess the requirements of your organization in terms of maximum tolerance to assumptions and associated risk to determine what is an appropriate level of fidelity. In the table, institutional knowledge refers to any information about applications and infrastructure that is not documented.


| **Data sources** | **Fidelity level** | **Portfolio coverage** | **Comments** | 
| --- |--- |--- |--- |
| Institutional knowledge | Low - Up to 25% of accurate data, 75% assumed values or data is older than 150 days. | Low | Scarce, focused on critical applications | 
| Knowledge base | Medium-low - 35-40% of accurate data, 65-60% assumed values or data is 120-150 days old. | Medium | Manually maintained, inconsistent levels of detail | 
| CMDB | Medium - 50% of accurate data, 50% assumed values or data is 90-120 days old. | Medium | Contains data from mixed sources, several data gaps | 
| VMware vCenter exports | Medium-high - 75-80% of accurate data, 25-20% assumed values or data is 60-90 days old. | High | Covers 90% of the virtualized estate | 
| Application performance monitoring | High - Mostly accurate data, 5% assumed values or data is 0-60 days old. | Low | Limited to critical production systems (covers 15% of the application portfolio) | 
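
To operationalize these fidelity assignments, you can tag each source programmatically. The following is a minimal sketch, assuming the 30-day and 60-day thresholds described above; the function name and example sources are hypothetical:

```python
from datetime import date

def fidelity(is_programmatic: bool, last_refresh: date, today: date) -> str:
    """Classify a data source by collection method and refresh age.
    Thresholds follow this section: programmatic data refreshed within
    30 days is high fidelity; anything older than 60 days is low."""
    age_days = (today - last_refresh).days
    if is_programmatic and age_days <= 30:
        return "high"
    if age_days <= 60:
        return "medium"
    return "low"

today = date(2024, 5, 30)
print(fidelity(True, date(2024, 5, 20), today))   # vCenter export: high
print(fidelity(False, date(2024, 1, 15), today))  # manual workbook: low
```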

The following tables specify the required and optional data attributes for each asset class (applications, infrastructure, networks, and migration), the specific activity (inventory or business case), and the recommended data fidelity for this stage of assessment. The tables use the following abbreviations:
+ R, for required
+ (D), for directional business case, required for total cost of ownership (TCO) comparisons and directional business cases
+ (F), for full directional business case, required for TCO comparison and directional business cases that include migration and modernization costs
+ O, for optional
+ N/A, for not applicable

**Applications**


| **Attribute name** | **Description** | **Inventory and prioritization** | **Business case** | **Recommended fidelity level (minimum)** | 
| --- |--- |--- |--- |--- |
| Unique identifier | For example, application ID. Typically available on existing CMDBs or other internal inventories and control systems. Consider creating unique IDs whenever these are not defined in your organization. | R | R (D) | High | 
| Application name | Name by which this application is known to your organization. Include commercial off-the-shelf (COTS) vendor and product name when applicable. | R | R (D) | Medium-high | 
| Is COTS? | Yes or No. Indicates whether this is a commercial application or an internal development. | R | R (D) | Medium-high | 
| COTS product and version | Commercial software product name and version  | R | R (D) | Medium | 
| Description | Primary application function and context | R | O | Medium | 
| Criticality | For example, strategic or revenue-generating application, or supporting a critical function | R | O | Medium-high | 
| Type | For example, database, customer relationship management (CRM), web application, multimedia, IT shared service | R | O | Medium | 
| Environment | For example, production, pre-production, development, test, sandbox | R | R (D) | Medium-high | 
| Compliance and regulatory | Frameworks applicable to the workload (e.g., HIPAA, SOX, PCI-DSS, ISO, SOC, FedRAMP) and regulatory requirements | R | R (D) | Medium-high | 
| Dependencies | Upstream and downstream dependencies to internal and external applications or services. Non-technical dependencies such as operational elements (e.g., maintenance cycles) | O | O | Medium-low | 
| Infrastructure mapping | Mapping to physical and/or virtual assets that make up the application | O | O | Medium | 
| License | Commodity software license type (e.g., Microsoft SQL Server Enterprise) | O | R | Medium-high | 
| Cost | Costs for software license, software operations, and maintenance | N/A | O | Medium | 
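
To keep these attributes consistent across the portfolio, it can help to capture them in a typed record. The following sketch is one hypothetical schema derived from the table above, not a prescribed format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ApplicationRecord:
    # Required for inventory and the directional business case
    app_id: str                       # unique identifier, e.g., "APP-001"
    name: str
    is_cots: bool
    environment: str                  # production, development, test, ...
    compliance: list[str] = field(default_factory=list)  # e.g., ["PCI-DSS"]
    # Required for inventory, optional for the business case
    description: str = ""
    criticality: str = ""
    app_type: str = ""                # e.g., "CRM", "web application"
    # Optional at this stage
    cots_product_version: Optional[str] = None
    dependencies: list[str] = field(default_factory=list)
    license: Optional[str] = None

crm = ApplicationRecord("APP-001", "CRM", True, "production",
                        compliance=["PCI-DSS"], criticality="high")
print(crm.app_id, crm.environment)
```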

**Infrastructure**


| **Attribute name** | **Description** | **Inventory and prioritization** | **Business case** | **Recommended fidelity level (minimum)** | 
| --- |--- |--- |--- |--- |
| Unique identifier | For example, server ID. Typically available on existing CMDBs or other internal inventories and control systems. Consider creating unique IDs whenever these are not defined in your organization. | R | R | High | 
| Network name | Asset name in the network (e.g., hostname) | R | O | Medium-high | 
| DNS name (fully qualified domain name, or FQDN) | DNS name | O | O | Medium | 
| IP address and netmask | Internal and/or public IP addresses | R | O | Medium-high | 
| Asset type | Physical or virtual server, hypervisor, container, device, database instance, etc. | R | R | Medium-high | 
| Product name | Commercial vendor and product name (for example, VMware ESXi, IBM Power Systems, Exadata) | R | R | Medium | 
| Operating system | For example, RHEL 8, Windows Server 2019, AIX 6.1 | R | R | Medium-high | 
| Configuration | Allocated CPU, number of cores, threads per core, total memory, storage, network cards | R | R | Medium-high | 
| Utilization | CPU, memory, and storage peak and average. Database instance throughput. | R | O | Medium-high | 
| License | Commodity license type (e.g., RHEL Standard) | R | R | Medium | 
| Is shared infrastructure? | Yes or No to denote infrastructure services that provide shared services such as authentication provider, monitoring systems, backup services, and similar services | R | R (D) | Medium | 
| Application mapping | Applications or application components that run in this infrastructure | O | O | Medium | 
| Cost | Fully loaded costs for bare-metal servers, including hardware, maintenance, operations, storage (SAN, NAS, Object), operating system license, share of rackspace, and data center overheads | N/A | O | Medium-high | 

**Networks**


| **Attribute name** | **Description** | **Inventory and prioritization** | **Business case** | **Recommended fidelity level (minimum)** | 
| --- |--- |--- |--- |--- |
| Size of pipe (Mb/s), redundancy (Y/N) | Current WAN link specifications (e.g., 1000 Mb/s redundant) | O | R | Medium | 
| Link utilization | Peak and average utilization, outbound data transfer (GB/month) | O | R | Medium | 
| Latency (ms) | Current latency between connected locations. | O | O | Medium | 
| Cost | Current cost per month | N/A | O | Medium | 

**Migration**


| **Attribute name** | **Description** | **Inventory and prioritization** | **Business case** | **Recommended fidelity level (minimum)** | 
| --- |--- |--- |--- |--- |
| Rehost | Customer and partner effort for each workload (person-days), customer and partner cost rates per day, tool cost, number of workloads | N/A | R (F) | Medium-high | 
| Replatform | Customer and partner effort for each workload (person-days), customer and partner cost rates per day, number of workloads | N/A | R (F) | Medium-high | 
| Refactor | Customer and partner effort for each workload (person-days), customer and partner cost rates per day, number of workloads | N/A | O | Medium-high | 
| Retire | Number of servers, average decommission cost | N/A | O | Medium-high | 
| Landing zone | Re-use existing (Y/N), list of AWS Regions needed, cost | N/A | R (F) | Medium-high | 
| People and change | Number of staff to train in cloud operations and development, cost of training per person, cost of training time per person | N/A | R (F) | Medium-high | 
| Duration | Duration of in-scope workload migration (months) | O | R (F) | Medium-high | 
| Parallel cost | Time frame and rate at which as-is costs can be removed during migration | N/A | O | Medium-high | 
| Ramp-up cost | Time frame and rate at which AWS products and services, and other infrastructure costs, are introduced during migration | N/A | O | Medium-high | 

## Evaluating the need for discovery tooling
<a name="discovery-tooling"></a>

Does your organization need discovery tooling? Portfolio assessment requires high-confidence, up-to-date data about applications and infrastructure. Initial stages of portfolio assessment can use assumptions to fill data gaps. 

However, as progress is made, high-fidelity data enables the creation of successful migration plans and the correct estimation of target infrastructure to reduce cost and maximize benefits. It also reduces risk by enabling implementations that consider dependencies and avoid migration pitfalls. The primary use case for discovery tooling in cloud migration programs is to reduce risk and increase confidence levels in data through the following:
+ Automated or programmatic data collection, resulting in validated, highly trusted data
+ Acceleration of the rate at which data is obtained, improving project speed and reducing costs
+ Increased levels of data completeness, including communication data and dependencies not typically available in CMDBs
+ Obtaining insights such as automated application identification, TCO analysis, projected run rates, and optimization recommendations
+ High-confidence migration wave planning

When there is uncertainty about whether systems exist in a given location, most discovery tools can scan network subnets and discover those systems that respond to ping or Simple Network Management Protocol (SNMP) requests. Note that not all network or systems configurations will allow ping or SNMP traffic. Discuss these options with your network and technical teams.
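
As a rough illustration of what such a scan does, the following sketch pings every host in a subnet and lists the responders. It assumes Linux ping syntax, and it is only a toy; real discovery tools also use SNMP and other protocols, and ICMP is often blocked:

```python
import ipaddress
import subprocess

def ping_sweep(cidr: str) -> list[str]:
    """Return hosts in the subnet that answer one ICMP echo request."""
    responders = []
    for host in ipaddress.ip_network(cidr).hosts():
        # -c 1: send one probe; -W 1: wait one second (Linux ping flags)
        result = subprocess.run(["ping", "-c", "1", "-W", "1", str(host)],
                                capture_output=True)
        if result.returncode == 0:
            responders.append(str(host))
    return responders

print(ping_sweep("10.0.0.0/30"))  # illustrative subnet
```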

Further stages of application portfolio assessment and migration heavily rely on accurate dependency-mapping information. Dependency mapping provides an understanding of the infrastructure and configuration that will be required in AWS (such as security groups, instance types, account placement, and network routing). It also helps with grouping applications that must move at the same time (such as applications that must communicate over low latency networks). In addition, dependency mapping provides information for evolving the business case.

When deciding on a discovery tool, it is important to consider all stages of the assessment process and to anticipate data requirements. Data gaps have the potential to become blockers, so it is key to anticipate those by analyzing future data requirements and data sources. Experience in the field dictates that most stalled migration projects have a limited dataset in which the applications in scope, associated infrastructure, and their dependencies are not clearly identified. This lack of identification can lead to incorrect metrics, decisions, and delays. Obtaining up-to-date data is the first step to successful migration projects.

*How to select a discovery tool?*

Several discovery tools in the market provide different features and capabilities. Consider your requirements, and decide on the most appropriate option for your organization. The most common factors when deciding on a discovery tool for migrations are the following:

*Security*
+ What is the authentication method to access the tool data repository or analytics engines? 
+ Who can access the data, and what are the security controls to access the tool? 
+ How does the tool collect data? Does it need dedicated credentials? 
+ What credentials and access level does the tool need to access my systems and obtain data? 
+ How is data transferred between the tool components? 
+ Does the tool support data encryption at rest and in-transit? 
+ Is data centralized in a single component inside or outside of my environment? 
+ What are the network and firewall requirements? 

Ensure that security teams are involved in early conversations about discovery tooling.

*Data sovereignty*
+ Where is the data stored and processed? 
+ Does the tool use a software as a service (SaaS) model? 
+ Can it retain all data within the boundaries of my environment? 
+ Can data be screened before it leaves the boundaries of my organization? 

Consider your organization's needs in terms of data residency requirements.

*Architecture*
+ What infrastructure is required and what are the different components?
+ Is more than one architecture available? 
+ Does the tool support installing components in air-gapped security zones?

*Performance*
+ What is the impact of data collection on my systems? 

*Compatibility and scope*
+ Does the tool support all or most of my products and versions? Review the tool documentation to verify supported platforms against the current information about your scope. 
+ Are most of my operating systems supported for data collection? If you don't know your operating system versions, try to narrow the list of discovery tools to those with the widest range of supported systems.

*Collection methods*
+ Does the tool require installing an agent on each targeted system?
+ Does it support agent-less deployments? 
+ Do agent-based and agent-less deployments provide the same features? 
+ What is the collection process?

*Features*
+ What are the features available? 
+ Can it calculate total cost of ownership (TCO) and estimated AWS Cloud run rate? 
+ Does it support migration planning? 
+ Does it measure performance? 
+ Can it recommend target AWS infrastructure? 
+ Does it perform dependency mapping? 
+ What level of dependency mapping does it provide? 
+ Does it provide API access? (for example, can it be programmatically accessed to obtain data?)

Consider tools with strong application and infrastructure dependency-mapping functions and those that can infer applications from communication patterns. 

*Cost*
+ What is the licensing model? 
+ How much does the licensing cost? 
+ Is the pricing for each server? Is it tiered pricing? 
+ Are there any options with limited features that can be licensed on-demand? 

Discovery tools are typically used throughout the entire lifecycle of migration projects. If your budget is limited, consider licensing the tool for at least 6 months. However, the absence of discovery tooling typically leads to higher manual effort and internal costs.

*Support model*
+ What levels of support are provided by default? 
+ Is any support plan available? 
+ What are the incident response times?

*Professional services*
+ Does the vendor offer professional services to analyze discovery outputs? 
+ Can they cover the elements of this guide? 
+ Are there any discounts or bundles for tooling and services? 

**Tip**  
To find and evaluate discovery tooling, use the [Discovery, Planning, and Recommendation](https://aws.amazon.com/prescriptive-guidance/migration-tools/migration-discovery-tools/) site.

*Recommended features for the discovery tool*

To avoid provisioning and combining data from multiple tools over time, a discovery tool should cover the following minimum features: 
+ **Software** – The discovery tool should be able to identify running processes and installed software.
+ **Dependency mapping** – It should be able to collect network connection information and build inbound and outbound dependency maps of the servers and running applications. Also, the discovery tool should be able to infer applications from groups of infrastructure based on communication patterns (a sketch of this inference appears after this list).
+ **Profile and configuration discovery** – It should be able to report the infrastructure profile such as CPU family (for example, x86, PowerPC), the number of CPU cores, memory size, number of disks and size, and network interfaces.
+ **Network storage discovery** – It should be able to detect and profile network shares from network-attached storage (NAS).
+ **Performance** – It should be able to report peak and average utilization of CPU, memory, disk, and network.
+ **Gap analysis** – It should be able to provide insights on data quantity and fidelity.
+ **Network scanning** – It should be able to scan network subnets and discover unknown infrastructure assets.
+ **Reporting** – It should be able to provide collection and analysis status.
+ **API access** – It should be able to provide programmatic means to access collected data.
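
As a sketch of how a tool can infer applications from communication patterns, the following groups servers into candidate applications by finding connected components in an observed traffic graph. The flow records and names are hypothetical:

```python
from collections import defaultdict

def infer_applications(connections: list[tuple[str, str]]) -> list[set[str]]:
    """Group servers into candidate applications: the connected
    components of the server-to-server communication graph."""
    graph = defaultdict(set)
    for a, b in connections:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n not in component:
                component.add(n)
                stack.extend(graph[n] - component)
        seen |= component
        groups.append(component)
    return groups

flows = [("web-1", "app-1"), ("app-1", "db-1"), ("batch-1", "db-2")]
print(infer_applications(flows))
# [{'web-1', 'app-1', 'db-1'}, {'batch-1', 'db-2'}]
```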

*Additional features to consider*
+ **TCO analysis** to provide a cost comparison between current on-premises cost and projected AWS cost.
+ **Licensing analysis and optimization recommendations** for Microsoft SQL Server and Oracle systems in rehost and replatform scenarios.
+ **Migration strategy recommendation** (Can the discovery tool make default migration R type recommendations based on current technology?)
+ **Inventory export** (to CSV or a similar format)
+ **Right-sizing recommendation** (for example, can it map a recommended target AWS infrastructure?)
+ **Dependency visualization** (for example, can dependency mapping be visualized in a graphical mode?)
+ **Architectural view** (for example, can architectural diagrams be automatically produced?)
+ **Application prioritization** (Can it assign weight or relevance to application and infrastructure attributes to create prioritization criteria for migration?)
+ **Wave planning** (for example, recommended groups of applications and the ability to create migration wave plans)
+ **Migration cost estimation** (estimation of effort to migrate)

*Deployment considerations*

After you have selected and procured a discovery tool, consider the following questions to drive conversations with the teams responsible for deploying the tool in your organization:
+ Are servers or applications operated by a third party? This could dictate the teams to involve and processes to follow.
+ What is the high-level process for gaining approval to deploy discovery tools?
+ What is the main authentication process to access systems such as servers, containers, storage, and databases? Are server credentials local or centralized? What is the process to obtain credentials? Credentials will be required to collect data from your systems (for example, containers, virtual or physical servers, hypervisors, and databases). Obtaining credentials for the discovery tool to connect to each asset can be challenging, especially when these assets are not centralized.
+ What is the outline of the network security zones? Are network diagrams available?
+ What is the process for requesting firewall rules in the data centers?
+ What are the current support service-level agreements (SLAs) in relation to data center operations (discovery tool installation, firewall requests)?

# Business drivers and technical guiding principles
<a name="business-drivers-technical-guiding-principles"></a>

## Business drivers
<a name="business-drivers"></a>

Whether your organization has already decided to move to the cloud or is close to that decision, defining and documenting business drivers for cloud migration will clarify the reasons for migrating. After the reasons are documented, you can define what will be migrated and how it will be migrated. This activity is important, so we recommend that it take place as early in the process as possible to inform and guide next steps. 

Identify the stakeholders that should be part of the discussion to document the drivers. These are typically CxOs, senior managers, and key technology leaders within the organization, as well as your own customers. Although your customers are not likely to be part of this discussion, we recommend that one or more persons in your organization are designated to represent your customers' views and goals.

Business drivers should be linked to a metric that can be measured throughout the migration journey to validate whether the outcomes have been achieved. The company's strategic goals and annual reports can act as a starting point. 

Focus the conversation on where the company wants to be, based on existing and projected metrics, as a result of moving to the cloud. Consider goals and business outcomes. Also, consider what success looks like as cloud adoption increases. 

Next, establish the importance level for each driver. What are the priorities? What are the expected benefits? How do the benefits support the business goals and outcomes? In the context of application portfolio assessment, the answers will help to prioritize workloads for migration and to establish technical guiding principles. However, business drivers will define and impact the migration program as a whole.

## Technical guiding principles
<a name="technical-guiding-principles"></a>

Technical guiding principles inform migration strategy selection in later stages of portfolio assessment. In the current stage, the focus is to identify them. 

Guiding principles can be established as general technology-related and approach-related decisions derived from business goals and outcomes.

For example, a company has a primary goal to reduce cost, and the desired outcome is to close an on-premises data center by a given date in 6–12 months. A resulting guiding principle is to lift and shift all applications to the cloud by using a rehost or relocate migration strategy whenever possible. In this case, the lift-and-shift approach accelerates near-term migration outcomes. After the applications have moved out of the on-premises data center, the company can focus on the main business drivers to optimize or modernize the migrated workloads.

To establish the technical guiding principles, start by analyzing business drivers. Identify a list of technologies and techniques that will achieve the business goals and outcomes. Next, refine the list and assign an order of relevance based on suitability or preference to achieve a desired outcome.

Document and communicate the guiding principles with the people involved in planning and performing the migration. Highlight concerns and potential conflicts between the principles and the actual implementation.

The following table provides an example of business drivers and technical guiding principles.


| **Business driver** | **Outcome** | **Metrics** | **Technical guiding principle** | 
| --- |--- |--- |--- |
| Accelerate innovation. | Improved competitiveness, increased business agility | Number of deployments per day or month, new features released per quarter, customer satisfaction scores, number of experiments | Refactor differentiating applications by using microservices and the DevOps operating model to increase agility and speed to market of new features. | 
| Reduce operational and infrastructure costs. | Supply and demand matched, elastic cost base (pay for what you use) | Variation of spend over time | 1. Rehost applications with infrastructure right-sizing. 2. Retire applications that have low or no utilization. | 
| Increase operational resiliency. | Improved uptime, reduced mean time to recovery | SLAs, number of incidents | 1. Replatform applications to the latest and best-supported operating system versions. 2. Implement high availability architectures for critical applications. | 
| Exit the data center. | Data-center closure by a date within 6–12 months | Speed of server migrations | Rehost applications by using Cloud Migration Factory Solution. | 
| Stay on premises, but increase agility and resiliency. | Improved competitiveness and uptime while remaining on premises | Number of deployments per day or month, new features released per quarter, SLAs, number of incidents | 1. Modernize systems by extending their functionality into the cloud. 2. Assess for rehosting or replatforming to AWS Outposts. | 

# Initiating data collection
<a name="initiating-data-collection"></a>

Data collection is the process of gathering metadata from applications and infrastructure. The process is iterative throughout all stages of assessment. In each stage, data quantity and fidelity will increase. At this stage, the focus is on gathering general data that can help to establish an initial inventory. The inventory will be used to create a directional business case and to identify initial migration candidates.

After the current data sources have been identified, we recommend gathering information from as many systems as possible. For more information, see the [data requirements](understanding-initial-assessment-data-requirements.md) for this stage.

This approach has the benefit of helping to update the current portfolio view and the organization's knowledge of their applications and services. It also helps with determining what is targeted to move. The recommended approach is to review existing data, such as configuration management database (CMDB) outputs and information technology service management (ITSM) systems. Then construct a list of assets targeted for data collection. If your organization has complete clarity of what is in scope and out of scope for the migration, you might restrict data collection to the systems that are in scope.

When building your portfolio, consider the applications and their environments or software release lifecycles. For example, instead of identifying a customer relationship management (CRM) application and specifying that it has test, dev, and prod environments, list three applications (for example, CRM-Test, CRM-Dev, CRM-Prod). Alternatively, use the CRM name but assign a unique ID to each environment and present them as separate records in your data repository. This will help with planning and tracking the migration of these environments individually. For example, you might want to migrate non-production environments first. By listing the instances of your application according to the environment, you can clearly manage and govern their transition.
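
For example, a minimal sketch of the record-per-environment approach (the IDs and names are illustrative):

```python
# One record per application environment, each with its own unique ID,
# so that waves can target environments independently.
inventory = [
    {"app_id": "APP-017-T", "name": "CRM", "environment": "test"},
    {"app_id": "APP-017-D", "name": "CRM", "environment": "development"},
    {"app_id": "APP-017-P", "name": "CRM", "environment": "production"},
]

# Migrate non-production environments first.
non_prod = [r for r in inventory if r["environment"] != "production"]
print([r["app_id"] for r in non_prod])  # ['APP-017-T', 'APP-017-D']
```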

During data collection, there might be uncertainty about which applications or servers are in a given data center or source location. In these cases, obtaining bare-metal and hypervisor lists from existing management tools is helpful. For example, you can connect to a hypervisor to obtain lists of virtual machines to be targeted for data collection. 

Note that the initial output, when combining existing data sources, could be incomplete. The key is to perform a gap analysis in terms of [data requirements](understanding-initial-assessment-data-requirements.md) for this stage and what can be obtained from existing sources. It's important to contrast percentage of completeness with level of data fidelity. Higher completeness levels from low-fidelity sources will contain several assumptions that could lead to flawed analysis. While this stage of assessment does not require the maximum data fidelity, we recommend that data sources are at least medium to medium-high fidelity. Contrast these numbers against your organization's tolerance to risk, including the use of assumptions to fill data gaps. 

The gap analysis helps you understand the quantity and quality of data you are working with. The analysis also helps you to establish the level of assumptions that must be made to create a directional business case and prioritize applications for migration. Discovery tooling can help to fill the gaps and collect high-fidelity data. To increase the confidence levels in data and accelerate migration outcomes, we recommend deploying discovery tooling as early as possible. Early action is also important because internal procurement, security, and implementation processes for new tools could require several weeks or months to complete.
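
As a simple illustration of this gap analysis, the following sketch computes per-attribute completeness for a server inventory. The attribute names and records are hypothetical:

```python
def completeness(records: list[dict], required: list[str]) -> dict[str, float]:
    """Percentage of records that populate each required attribute."""
    total = len(records)
    return {attr: round(100 * sum(1 for r in records if r.get(attr)) / total, 1)
            for attr in required}

servers = [
    {"server_id": "S1", "os": "RHEL 8", "cpu_peak": 70},
    {"server_id": "S2", "os": "Windows Server 2019"},  # utilization missing
    {"server_id": "S3"},                               # os and utilization missing
]
print(completeness(servers, ["server_id", "os", "cpu_peak"]))
# {'server_id': 100.0, 'os': 66.7, 'cpu_peak': 33.3}
```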

We recommend establishing a communication plan or cadence and a scope-change control mechanism at this stage. This helps you to keep stakeholders informed so that they can plan ahead and mitigate risks. A key element for clear communications is to define a single source of truth for the application portfolio and associated infrastructure. Avoid keeping multiple systems of record and application and infrastructure lists. Keep data in one place (for example, a database, a tool, or a spreadsheet) that supports versioning and online collaboration, and assign an owner to it.

# Prioritization and migration strategy
<a name="prioritization-and-migration-strategy"></a>

A key element of migration planning is to establish prioritization criteria. The point of this exercise is to understand the order in which applications will be migrated. The strategy is to take an iterative and progressive approach to evolve the prioritization model.

## Prioritizing applications
<a name="prioritizing-applications"></a>

This stage of assessment focuses on establishing initial criteria to prioritize low-risk and low-complexity workloads. These workloads are good candidates for pilot applications. Using low-risk, low-complexity workloads in initial migrations reduces the risk and gives teams the opportunity to gain experience. These criteria will be evolved in further assessment stages to align prioritization with business drivers when creating the migration wave plan.

The initial criteria should prioritize applications with a small number of dependencies, running in cloud-supported infrastructure, and from non-production environments. An example would be applications with 0–3 dependencies ready to rehost as-is in a development or test environment. These criteria are valid for defining the pilot applications and potentially the first and second migration waves, depending on the level of cloud adoption maturity and confidence levels. 

*Deciding what initial criteria to use*

Select 2–10 data points to use for prioritizing your first workloads. These data points come from your initial application and infrastructure inventory (refer to the [data collection](initiating-data-collection.md) section). 

Next, define a score, or weight, for each possible value of each data point. For example, if the environment attribute is selected, and the possible values are production, development, and test, each value is assigned a score, with a greater number representing a higher priority. Although it is optional, we recommend assigning a multiplying factor for importance or relevance to each data point. This optional step provides a higher-level differentiator to emphasize what is more important, which helps to keep the criteria aligned as you iterate on assigning scores to the values.

Based on the strategy of prioritizing low-risk, simple applications for the first few migration waves, the following table shows an example attribute selection and value assignments (a scoring sketch follows the table).


| **Attribute (data point)** | **Possible values** | **Score (0-99)** | **Importance or relevance multiplying factor** | 
| --- |--- |--- |--- |
| Environment | Test | 60 | High (1x) | 
|  | Development | 40 |  | 
|  | Production | 20 |  | 
| Business criticality | Low | 60 | High (1x) | 
|  | Medium | 40 |  | 
|  | High | 20 |  | 
| Regulatory or compliance framework | None | 60 | High (1x) | 
|  | FedRAMP | 10 |  | 
| Operating system support | Cloud ready | 60 | Medium-high (0.8x) | 
|  | Unsupported in cloud | 10 |  | 
| Number of compute instances | 1-3 | 60 | Medium-high (0.8x) | 
|  | 4-10 | 40 |  | 
|  | 11 or more | 20 |  | 
| Migration strategy | Rehost | 70 | Medium (0.6x) | 
|  | Replatform | 30 |  | 
|  | Refactor or re-architect | 10 |  | 
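
The following sketch shows how the example table translates into a weighted priority score per application. The scores and multipliers mirror the table above (omitting the instance-count attribute for brevity); the applications and attribute keys are hypothetical:

```python
SCORES = {
    "environment": {"test": 60, "development": 40, "production": 20},
    "criticality": {"low": 60, "medium": 40, "high": 20},
    "compliance": {"none": 60, "fedramp": 10},
    "os_support": {"cloud ready": 60, "unsupported": 10},
    "strategy": {"rehost": 70, "replatform": 30, "refactor": 10},
}
WEIGHTS = {"environment": 1.0, "criticality": 1.0, "compliance": 1.0,
           "os_support": 0.8, "strategy": 0.6}

def priority(app: dict) -> float:
    return sum(SCORES[attr][app[attr]] * WEIGHTS[attr] for attr in SCORES)

apps = [
    {"name": "intranet-wiki", "environment": "test", "criticality": "low",
     "compliance": "none", "os_support": "cloud ready", "strategy": "rehost"},
    {"name": "payments", "environment": "production", "criticality": "high",
     "compliance": "fedramp", "os_support": "cloud ready", "strategy": "refactor"},
]
for app in sorted(apps, key=priority, reverse=True):
    print(app["name"], priority(app))  # intranet-wiki 270.0, payments 104.0
```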

Make sure that you select attributes that can act as key differentiators between applications. Otherwise, the criteria will result in many workloads sharing the same priority. After you apply the model, we recommend looking at the top and bottom of the resulting ranking to see if you agree. If you don't generally agree, you can revisit the criteria that you used to score the workloads. 

After you obtain a ranking, look at the distribution of scores across the entire portfolio. The scores themselves do not matter. It is the difference between scores that matters. For example, you might find that the top total score is 8,000 and the bottom score is 800. Consider plotting the resulting scores as a histogram, so you can verify that you have a good distribution. The ideal distribution looks like a standard bell curve, with a few very high-priority workloads and a few very low-priority workloads. The majority of applications will be somewhere in the middle.

Another key aspect of initial prioritization is to include internal teams or business units that show interest in being early adopters of the cloud. These teams can be a considerable lever in obtaining business support to migrate a given application, especially in the early days. If this is the case in your organization, include the business unit attribute in the preceding table. Assign a high score to those business units that are willing to come forward with their applications. Using the business unit attribute will help bring those applications to the top of the list.

After you agree with the resulting ranking, select the top 5–10 applications. These will be your initial application migration candidates. Refine the list so that you confirm 3–5 applications. This helps you to take a targeted approach when performing a detailed application assessment. For more information, see [Prioritized applications assessment](prioritized-applications-assessment.md).

 

## Determining the R type for migration
<a name="migration-r-type"></a>

Deciding on a migration strategy for each application and associated infrastructure will have implications to migration speed, cost, and level of benefits. It is key to determine strategy based on a balanced combination of factors, including business drivers, technical guiding principles, prioritization criteria, and business strategy.

Sometimes these factors create conflicting views. For example, the primary driver for migration might be innovation and agility. At the same time, you might need to reduce costs quickly. Modernizing all applications in scope will reduce costs in the long run, but it will require a greater investment upfront. In that case, one approach is to migrate applications by using strategies that require less effort, such as rehost or replatform. That can provide quick efficiencies and cost reduction in the short term. Then reinvest the savings into modernizing the application at a later stage, and achieve further cost reduction. 

However, starting with a complete rehost of all applications delays the greater benefits of modernization. The key is to find balance between migration strategies so that business-strategic applications are prioritized for modernization while other applications can be rehosted or replatformed first then modernized.

*How to determine a migration strategy for your applications?*

At this stage of assessment, the focus is to incorporate an initial model for guiding migration strategy selection. To validate the migration strategy for the initial applications, use the model in conjunction with the business drivers and the prioritization criteria. The default logic of the decision tree will help you to determine the initial treatment for the scope. In the tree, the most complex approaches, such as refactor, or re-architect, are reserved for your strategic workloads.

![The 6 R decision process discussed in this guide.](http://docs.aws.amazon.com/prescriptive-guidance/latest/application-portfolio-assessment-guide/images/6Rs-DecisionTree-baseModel.png)


A customizable [draw.io](https://github.com/jgraph/drawio-desktop/releases) version of this diagram is available in the *[Attachments](#attachments)* section.

The first step to an initial model is to update the business drivers at the top of the tree with those defined by your organization. Next, apply the tree to application components rather than applications as a whole. For example, in the case of a three-tier application that has three components (front-end, application layer, and database), each component should transit the tree independently and be assigned a specific strategy and pattern. This is because in some cases you might want to rehost or replatform a given tier and refactor (re-architect) other tiers. 

The independent component assignment will lead you to define a migration strategy for the associated infrastructure. The infrastructure strategy might be the same strategy as the application component that it supports, or it might be different. For example, an application component that will be replatformed into a new virtual machine with a newer operating system will follow the replatform strategy while the current virtual machine that hosts it will be retired. The migration strategy for infrastructure is calculated based on the strategy chosen for the application components.

Before using the decision tree to establish migration strategies, test the logic with a few applications and see if you generally agree with the outcome. The 6 Rs decision tree is a guide that does not replace the analysis required to determine its correctness. The tree logic might not apply to particular cases. Treat those cases as exceptions: override the decision driven by the tree and document the rationale for the override, rather than changing the tree logic. This prevents multiple decision tree versions, which could become difficult to manage. As general guidance, the tree should be valid for at least 70-80 percent of the workloads; the rest will be exceptions. Any adjustments to the tree logic, at this stage of assessment, should be focused on establishing an initial model. Further iterations and refinement will occur in later stages, such as [portfolio analysis and migration planning](portfolio-analysis-migration-planning.md).
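
To make the default logic concrete, here is a minimal sketch of such a decision tree for a single application component. The branch conditions and attribute names are illustrative assumptions, not the published tree:

```python
def default_strategy(component: dict) -> str:
    """Illustrative default R-type logic for one application component."""
    if component.get("scheduled_for_decommission"):
        return "retire"
    if component.get("must_stay_on_premises"):
        return "retain"
    if component.get("saas_alternative_fits"):
        return "repurchase"
    if component.get("is_strategic"):        # differentiating workloads
        return "refactor"
    if component.get("os_unsupported_in_cloud"):
        return "replatform"                  # e.g., move to a supported OS
    return "rehost"                          # default lift-and-shift

# A three-tier application: each tier transits the tree independently.
tiers = {"front-end": {}, "app-layer": {"is_strategic": True},
         "database": {"os_unsupported_in_cloud": True}}
for name, attrs in tiers.items():
    print(name, "->", default_strategy(attrs))
```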

## Attachments
<a name="attachments"></a>

[attachment.zip](samples/attachment.zip)

# Creating a directional business case
<a name="directional-business-case"></a>

Stakeholders from across the business should understand and buy into the business case for transformation each step along the way. 

In the early stages, it's important to quickly show enough potential value from a migration program, so that you can secure the resources needed to plan and establish the program. The directional business case is designed to provide reasonable confidence in achieving compelling business value with the limited data that can be collected early.

After the program is established, the business case is developed further. The detailed case provides greater accuracy, a more complete picture of the program value, and insight into planning priorities. It defines and quantifies the planned business outcomes that the organization buys into, and it sets the baseline against which your program governance office can then steer the program and measure its achievements.

## Fixing the scope of the directional business case
<a name="fix-scope"></a>

A directional business case is typically assembled rapidly, within 2–4 weeks. It needs to generate enough confidence so that you can secure the resources to establish the core team, engage AWS Partners if needed, and as a minimum, complete the [prioritized applications assessment](prioritized-applications-assessment.md) and [portfolio analysis and migration planning](portfolio-analysis-migration-planning.md) stages.

Typically, directional business cases that support portfolio migrations are created as one of the following:
+ A simple *total cost of ownership (TCO)* comparison between the as-is infrastructure landscape and the post-migration AWS service architecture. The comparison shows the difference in expected run rates for given workload volumes.
+ A business case that shows the net present value (NPV), return on investment (ROI), payback period, modified internal rate of return (MIRR), and 3–5 year cash-flow analyses for migrating to AWS, inclusive of migration costs, versus staying as-is (a minimal cash-flow sketch follows this list). 
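
As a minimal illustration of the second option, the following sketch computes NPV and payback period from a yearly net cash-flow series. The discount rate and cash flows are hypothetical placeholders:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the year-0 investment."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def payback_year(cash_flows: list[float]):
    """First year in which cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

# Hypothetical: a 1.2M migration investment, then 600K/year net savings.
flows = [-1_200_000, 600_000, 600_000, 600_000, 600_000]
print(round(npv(0.08, flows)))  # NPV at an 8 percent discount rate
print(payback_year(flows))      # 2
```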

The directional business case scope is typically limited to one of the following:
+ A comparison of the infrastructure technology costs
+ A comparison of the infrastructure technology and operations costs

In general, the larger the portfolio, the less developed the case needs to be. This is because broader assumptions can be made without significantly affecting the result. For a smaller portfolio, any change will have a greater impact, so more detail is required.

Start by building the base infrastructure cost comparison. Then decide if the comparison is sufficiently compelling before you continue. Usually, portfolios of more than 400 servers show a positive business case on infrastructure cost reduction alone within 3 years of operation on AWS, and portfolios of more than 250 servers within 5 years, although this can vary. For smaller portfolios, more detail may be needed.

Conversely, it is rarely useful to examine other business value components at this stage, such as value derived from improved resilience or business agility, unless the total migration scope is fewer than about 5 workloads or 50 servers.

## Focus value drivers
<a name="focus-value-drivers"></a>

The infrastructure technology TCO comparison compares a model of the as-is infrastructure costs with a basic model of the AWS service bill of materials needed to run your workloads with equivalent performance and availability. Many optimizations can be done. At this stage, however, the focus is on the following list because these items are easier to assess and typically yield about 30 percent TCO savings, which is enough to move forward (a rough arithmetic sketch follows the list):
+ **Compute elasticity** – Map servers whose usage is not 100 percent, such as development or UAT servers running 8x5 (24 percent usage), 10x5 (30 percent), or 10x6 (36 percent), and disaster recovery (DR) servers running at 2 percent, to on-demand services that are billed only when used.
+ **Procure with a savings plan** – Plan to procure production servers and other servers with high usage (greater than 36 percent) with a suitable savings plan to reduce costs by up to 75 percent. Options include 1-year and 3-year commitments, with differing levels of upfront payments to secure greater discounts.
+ **Remove zombies** – Identify servers with CPU utilization of less than 2 percent that you can confirm are no longer needed, and remove them from the cost analysis.
+ **Compute right-sizing** – Use CPU and memory utilization time series data to assess for each server the compute power and memory needed. Then select the Amazon Elastic Compute Cloud (Amazon EC2) instance to fit.
+ **Relational database management system (RDBMS) license right-sizing** – Reassess your RDBMS licensing needs after compute right-sizing on your database servers, compare Bring Your Own License (BYOL) with procuring licenses from AWS, and explore the potential of Amazon Relational Database Service (Amazon RDS) to increase savings.
+ **Storage** – Right-size the total storage volume needed, and identify the input/output operations per second (IOPS) needs across the portfolio. Determine how much can be moved to object storage with different SLAs and costs.
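
A rough sketch of the elasticity and commitment arithmetic above (the hourly rate and the 40 percent discount are hypothetical placeholders, not AWS pricing):

```python
HOURS_PER_MONTH = 730

def on_demand_monthly(rate_per_hour: float, hours_per_week: float) -> float:
    """On-demand cost for a partial schedule such as 8x5 (~24% usage)."""
    return rate_per_hour * hours_per_week * 52 / 12

rate = 0.20                                 # hypothetical $/hour
always_on = rate * HOURS_PER_MONTH          # 24x7 baseline
dev_8x5 = on_demand_monthly(rate, 8 * 5)    # stopped outside working hours
with_plan = always_on * (1 - 0.40)          # hypothetical commitment discount

print(f"24x7 on-demand: ${always_on:.2f}/month")
print(f"8x5 schedule:   ${dev_8x5:.2f}/month")  # ~24% of the baseline
print(f"With plan:      ${with_plan:.2f}/month")
```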

## Data needs
<a name="data-needs"></a>

The table in [Understanding initial assessment data requirements](understanding-initial-assessment-data-requirements.md) shows the data required to build each part of a directional business case, and whether it is mandatory or optional. 

To build the case, you need the infrastructure subset of the initial planning data plus cost data. How you identify the infrastructure to include depends on your business goal: 
+ If the program's objective is to migrate and modernize specific applications, build the infrastructure portfolio based on what the applications need, taking into consideration infrastructure that is shared. 
+ If the program's objective is infrastructure-centric, such as migrating out of a data center whose lease is due to expire, application mapping isn't needed for infrastructure TCO comparisons. 

Data that is marked as optional (such as CPU and memory peak utilization for servers) can usually be replaced with standard benchmark values. You can discuss this with an AWS Partner or AWS Professional Services. Or you can extrapolate the values from data points that are available in part of your portfolio (such as data collected by a hypervisor). The larger the portfolio, the more accurate this is.
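
For example, the following sketch fills missing peak-utilization values from the observed part of the portfolio and falls back to a benchmark when nothing was observed. All values are hypothetical:

```python
def fill_utilization(servers: list[dict], benchmark: float) -> list[dict]:
    """Fill missing CPU peak values: extrapolate from the observed part
    of the portfolio if available, else fall back to a benchmark."""
    observed = [s["cpu_peak"] for s in servers if "cpu_peak" in s]
    default = sum(observed) / len(observed) if observed else benchmark
    return [{**s, "cpu_peak": s.get("cpu_peak", default)} for s in servers]

fleet = [{"id": "S1", "cpu_peak": 55}, {"id": "S2", "cpu_peak": 35}, {"id": "S3"}]
print(fill_utilization(fleet, benchmark=45.0))  # S3 gets the observed mean
```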

## Building infrastructure TCO comparisons
<a name="building-infrastructure-tco-comparisons"></a>

Tools are vital to constructing infrastructure TCO comparisons. [AWS Professional Services](https://aws.amazon.com/professional-services/) or an [AWS Partner](https://aws.amazon.com/migration/partner-solutions/) can provide help with all types of directional cases, especially if you plan to engage them to assist in the wider migration process. 

There are tools available to do the following:
+ Collect inventory data.
+ Collect utilization data.
+ Provide accurate as-is infrastructure cost benchmarking data.
+ Identify and remove zombies.
+ Make right-sizing assessments.
+ Recommend purchasing options.
+ Compare software licensing options.
+ Produce simple graphical cash-flow analyses.

[Migration Evaluator](https://aws.amazon.com/migration-evaluator/) from AWS is one option. It provides all of these capabilities as a free managed service. You can request Migration Evaluator through your AWS account manager or AWS Migration Competency Partner, or by submitting [a request online](https://pages.awscloud.com/Migration-Evaluator-request.html). Migration Evaluator has been designed specifically as a point solution to quickly produce infrastructure technology TCO comparisons.

Key advantages: 
+ Free of charge
+ Agent-less discovery or manual configuration of inventory data where tool-based discovery is restricted
+ Dedicated support to assist deployment, configuration, data collection, and building the base case, or directional business case
+ Convenience of SaaS operation, but can run data collection entirely within the customer network to support scrubbing before loading into the analytics engine
+ Strong support for Microsoft license right-sizing
+ Full data-export capabilities 

Key limitations: 
+ Assesses only x86 architecture servers (Windows and Linux) 
+ Limited options to configure or calibrate benchmark as-is cost data
+ No support for modeling operations cost optimization
+ No support for migration cost modeling 
+ No direct support for building business cases beyond TCO comparisons

If you decide to use a commercial discovery tool for portfolio discovery and analysis capabilities such as application stack and interdependency discovery, it will usually provide infrastructure TCO comparison as well. For guidance on the use of tools for portfolio discovery and assessment, see [Evaluating the need for discovery tooling](understanding-initial-assessment-data-requirements.md#discovery-tooling). To review and compare the key capabilities of the market-leading tools, see [Discovery, Planning, and Recommendation migration tools](https://aws.amazon.com/prescriptive-guidance/migration-tools/migration-discovery-tools/).

## Building in operational cost optimization
<a name="building-operational-cost-optimization"></a>

IT operations productivity improvement is often a significant value contributor for migrations. On average, IT operations staff productivity increases by 62 percent after migration to AWS, according to the International Data Corporation (IDC) whitepaper [Fostering Business and Organizational Transformation to Generate Business Value with Amazon Web Services](https://pages.awscloud.com/rs/112-TZM-766/images/AWS-BV%20IDC%202018.pdf?aliId=1614258770). However, there are two challenges with sizing and including these benefits in the directional case.

First, assessing the full range of productivity gains requires extensive data gathering and is more appropriate for the [detailed business case](detailed-business-case.md). This challenge can be resolved by focusing on a few elements that are more easily assessed and sized with simple benchmark data but that still show significant advantage. 

Second, focusing on productivity as a source of cost reduction can generate concern and negativity among key customer stakeholders and program members. Make sure that you provide clarity about how the benefit will be realized and what it means for the people impacted. You can avoid such problems by clarifying that the change will enhance the team's roles:
+ The migration program includes a track to develop and move internal operations staff into new roles, such as joining DevSecOps teams building infrastructure as code automations and test automations that will drive growth for the team.
+ The benefit can be realized by rescoping and resizing operations outsourcing contracts, so that internal staff can increase their focus on higher-value activities.

Approach constructing this business case element based on what operations transformations you want to consider:
+ If you have an existing in-house operations team, upskill the team members, and show the expected productivity improvement.
+ Alternatively, migrate away from your current operations solution to AWS Managed Services (AMS) or to an alternative managed services offering from an AWS Partner.

For the first transformation, to get a conservative financial estimate of the improved productivity that can be included in the case, we recommend the following (a worked example follows these steps):

1. Focus on server management operations productivity specifically. It tends to be a significant proportion of operations effort, can be more easily assessed, and is more readily verified later. 

1. Calculate the staffing needed based on benchmarks for the number of servers that can be managed by each full-time equivalent (FTE) employee. On premises, that number is about 150 servers. On AWS, it's about 400 servers.

1. Apply these metrics to the number of on-premises servers compared with the number of EC2 instances. 

1. Multiply the time saved by a blended cost rate for the whole operations team.
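
A worked example of these steps, using the benchmark ratios above (the fleet sizes and blended cost rate are hypothetical):

```python
# Benchmarks from this section: ~150 servers per FTE on premises,
# ~400 servers per FTE on AWS.
on_prem_servers = 600
ec2_instances = 520            # after right-sizing and retiring zombies
blended_fte_cost = 120_000     # hypothetical fully loaded $/year per FTE

ftes_before = on_prem_servers / 150   # 4.0 FTEs
ftes_after = ec2_instances / 400      # 1.3 FTEs
annual_saving = (ftes_before - ftes_after) * blended_fte_cost
print(f"{ftes_before:.1f} -> {ftes_after:.1f} FTEs, "
      f"~${annual_saving:,.0f}/year in capacity released")
```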

You can then check your results with either approach by verifying that the result does not greatly exceed the average productivity gains by role provided in the following table (data sourced from the IDC whitepaper [Fostering Business and Organizational Transformation to Generate Business Value with Amazon Web Services](https://pages.awscloud.com/rs/112-TZM-766/images/AWS-BV%20IDC%202018.pdf?aliId=1614258770)).

 


| **Role** | **Efficiency gain** | 
| --- |--- |
| IT infrastructure management | 62% | 
| IT support | 59% | 
| Application management | 43% | 
| Database management | 19% | 
| Application development | 25% | 

For the second transformation, you can add the operational cost savings by directly comparing the current total operations and support cost for the in-scope portfolio with the cost of the managed service being considered. 

To obtain the cost of the managed service, provide your AWS account manager or any [AWS Managed Services Partner](https://aws.amazon.com/managed-services/partners/) with your proposed AWS Bill of Materials, your service level choice (Plus or Premium), and your AMS package (AMS Accelerate or AMS Advanced). This will provide you with a total cost of managed services for the AWS service components of the transformed solution. Similarly, you could obtain pricing from an AWS Partner that offers its own managed services package based on its own parameters.

## Expanding to a full directional business case
<a name="full-directional-business-case"></a>

In general, to assemble a full directional business case, build the TCO comparison, with or without the IT productivity element, and estimate all the migration and modernization costs. Then create a cash flow that covers pairs of migrate-and-modernize and don't-migrate-and-modernize scenarios.

The most basic case is preparation of a single pair of scenarios, where the don't-migrate-and-modernize scenario is your current situation and the migrate-and-modernize scenario has the following characteristics:
+ No growth or shrinkage in transactional volume, compute, or networking capacity
+ Steady low-volume growth in storage requirements
+ Quality-of-service capabilities (such as availability, durability, throughput, and performance) matching the existing system's capabilities

For all but very small portfolios, this fits the objectives of building a directional case well. It demonstrates enough value quickly to gain the mandate to move forward. 

For smaller portfolios, it can be valuable to add pairs of migrate-and-modernize and don't-migrate-and-modernize scenarios that demonstrate other aspects of the increased value of cloud migration, such as:
+ A mix of moderate and high-capacity growth requirements across workloads where that growth is expected
+ Inclusion of enhanced resilience, such as high availability, DR, and fault tolerance
+ Improved global performance with edge computing, a content delivery network (CDN), and multi-Region database replication
+ Any other specific improved quality of service that you have made a business priority for the program

For these scenarios, make sure that the costs and cash-flow implications of upgrading the current non-cloud infrastructure architecture to match the new specification are estimated accurately. The most direct way to obtain this estimate is to request a quotation from a systems integrator, especially one that is also an AWS Consulting Partner with the Migration Competency, who can support you with both the migrate-and-modernize and the don't-migrate-and-modernize scenarios.

For each pair of scenarios, assemble a case comprising the following (a minimal cash-flow sketch follows the list):
+ The costs of the don't-migrate-and-modernize scenario. In the most basic case, this includes:
  + The total cost of ownership over the business case term for the current infrastructure configuration
  + Periodic increases in compute, storage, and network traffic consumption 
+ The costs of the migrate-and-modernize scenario, including:
  + Setting up the program, which includes detailed discovery, migration planning, detailed business case development, establishing the core team and upskilling them, establishing a landing zone if one is not already in place, and establishing security management and operations integration for migrated workloads
  + The workload migration and modernization costs 
  + The migration infrastructure costs, including network connections, data migration services such as [AWS Snowball Edge](https://aws.amazon.com/snowball/) and [AWS DataSync](https://aws.amazon.com/datasync/), and the AWS utility costs for the architecture needed during the migration process itself (for example, for testing)
  + The ramp-up of AWS utility costs over the course of the migration as waves go live, and the ramp down of the existing infrastructure costs as it is replaced by AWS based services and decommissioned
+ The decommissioning costs and write-offs for any stranded assets
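
To make the comparison mechanical, it can help to express each scenario as a stream of annual costs. The following sketch assumes a 5-year term, and every figure is an illustrative placeholder for your own estimates:

```python
# Sketch: annual cost streams (years 0-5) for one scenario pair.
# All figures are illustrative placeholders, in thousands of dollars.

# Don't-migrate-and-modernize: current run rate, with a hardware
# refresh spike assumed in year 3.
dont_migrate = [0, 2_000, 2_050, 2_900, 2_100, 2_150]

# Migrate-and-modernize: program setup and migration in years 0-1,
# AWS ramp-up overlapping the existing-infrastructure ramp-down,
# then a lower steady-state run rate.
migrate = [800, 2_400, 1_700, 1_300, 1_300, 1_300]

net_benefit_by_year = [d - m for d, m in zip(dont_migrate, migrate)]
print("Net benefit by year:", net_benefit_by_year)
print("Cumulative net benefit:", sum(net_benefit_by_year))
```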

## Estimating migration and modernization program setup
<a name="estimating-migration-and-modernization-program-setup"></a>

To set up a program for success, you might need to run a series of foundational activities to build baseline capabilities and the detailed plan if this has not been done before. These foundational activities include the following:

1. Performing detailed portfolio discovery, migration planning, and detailed business case development, as described in the [Portfolio analysis and migration planning](portfolio-analysis-migration-planning.md) section, plus documenting the cost of any discovery tools used.

1. Establishing a cloud business and technical core team and developing in-house skills through training and hiring. Identify the members of the IT organization that will need training, and allocate a training budget for each person.

1. Establishing a [landing zone](https://aws.amazon.com/solutions/implementations/aws-landing-zone/) and configuring it to support the cost, operational, and security governance capabilities you will need.

AWS Consulting Partners can help provide estimates for items 1 and 3. 
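
As a simple roll-up, the following sketch sums these three setup elements; all figures are hypothetical placeholders for the estimates and quotes you obtain:

```python
# Sketch: directional program setup cost roll-up. All figures are
# hypothetical placeholders for your own estimates and quotes.
discovery_and_planning = 150_000   # item 1: discovery, planning, case, tooling
people_to_train = 25               # item 2: core team members needing training
training_budget_per_person = 4_000
training = people_to_train * training_budget_per_person
landing_zone = 120_000             # item 3: landing zone build and configuration

program_setup_cost = discovery_and_planning + training + landing_zone
print(f"Program setup estimate: ${program_setup_cost:,.0f}")
```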

*Estimating migration and modernization costs*

To meet the objectives for a directional business case and demonstrate *just enough* commercial potential to proceed to the next phase, keep migration and modernization cost estimation as basic as possible. 

To this end, we recommend that you prepare the directional business case by focusing on the applications falling into the following migration strategies: 
+ Retire
+ Retain
+ Relocate
+ Rehost
+ Replatform
+ Repurchase

Typically, about 70 percent of workloads can be rehosted, relocated, or replatformed, and another 5 percent can be retired. Assessing the applications by migration strategy usually addresses the core of the cost-reduction case. 

Estimating costs for refactoring or re-architecting can be complex. It is not practical to attempt this within the time frame given to preparing a directional business case. As discussed previously in [Determining the R type for the migration](prioritization-and-migration-strategy.md#migration-r-type), consider using rehost, relocate, or replatform strategies for your first phase of migration and modernization. These R strategies will likely accelerate initial payback, reduce implementation risk, and improve the business case in the short term. It is also materially easier for your application teams to modernize applications that are running within the AWS environment than those that are not. Estimates for refactoring (re-architecting) specific applications are best added when the [detailed business case](detailed-business-case.md) is prepared. 

*Estimating effort for migration by strategy*

Each migration is different. Before committing any budgets or plans, seed workload estimates for the migration activities from the team that will be responsible for the project, whether that is your in-house application teams, AWS Professional Services, or an AWS Partner organization. 

To help build the directional case, the following tables provide indicative ranges of effort for the different treatments; a roll-up sketch follows the tables. These ranges assume that a medium-to-large portfolio is being migrated and that the migration team is trained and experienced. For small portfolios, it is best to have the team responsible for the migration prepare the estimate even for a directional case.


| Migration strategy | Estimation process | 
| --- |--- |
| Retain | Do nothing, with no cost, no benefits, and no reduction in technology debt. | 
| Retire | Estimate decommissioning the hardware equipment used, if any. | 
| Relocate | Estimate copying the workload within VMware by using VMware tools. This includes copying the data, smoke testing to verify, and any hardware decommissioning. The effort to relocate VMs is typically less than for low-complexity rehost patterns. | 
| Rehost | Estimate copying the workload and data with an image copy, smoke testing, high availability (HA) and disaster recovery (DR) testing where appropriate for production servers, and any hardware decommissioning. The best practice is to use tools such as [AWS Application Migration Service](https://aws.amazon.com/application-migration-service/). Divide workloads into low, medium, and high complexity, based on factors such as whether a database or other infrastructure software is running, database complexity, whether the workload is clustered, integration complexity, and data volumes. See the rehost effort table that follows. | 
| Replatform | For replatform migrations that include upgrades to the operating system or RDBMS version, take the estimate for a rehost, and add time to run a rebuild and smoke test on the new platform. If the replatform includes changing the technology of the platform, estimate additional time for the use of conversion tools, such as [AWS Schema Conversion Tool](https://aws.amazon.com/dms/schema-conversion-tool/) and [AWS Database Migration Service](https://aws.amazon.com/dms/), and a more complete application test. An example of changing the technology is migrating from a proprietary commercial database to an open-source replacement. See the replatform effort table that follows. | 
| Repurchase | Estimate data extraction, transformation, and uploading into the newly purchased SaaS replacement, and any hardware decommissioning. | 

*Rehost effort per application, per server (person hours)*

| Complexity | Migration | HA/DR test | 
| --- |--- |--- |
| Low | 10–14 | 3–5 | 
| Medium | 16–24 | 4–6 | 
| High | 26–38 | 8–12 | 

*Replatform additional effort per application, per server (person hours)*

| Complexity | Version upgrade | Technology change | 
| --- |--- |--- |
| Low | Add 1–3 | Add 10–15 | 
| Medium | Add 2–5 | Add 20–30 | 
| High | Add 4–8 | Add 40–60 | 
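
These ranges lend themselves to a quick roll-up. The following sketch multiplies server counts by the low and high ends of the rehost ranges; the portfolio mix is an assumption for illustration:

```python
# Sketch: roll up directional rehost effort from the indicative ranges
# in the tables above. Server counts per complexity band are hypothetical;
# ranges are (low, high) person hours per application, per server.
REHOST_MIGRATION_HOURS = {"low": (10, 14), "medium": (16, 24), "high": (26, 38)}
REHOST_HA_DR_TEST_HOURS = {"low": (3, 5), "medium": (4, 6), "high": (8, 12)}

# Assumed portfolio mix; HA/DR testing applies to production servers only.
servers_by_complexity = {"low": 120, "medium": 60, "high": 20}

def rollup(ranges, counts):
    low = sum(counts[band] * ranges[band][0] for band in counts)
    high = sum(counts[band] * ranges[band][1] for band in counts)
    return low, high

mig_low, mig_high = rollup(REHOST_MIGRATION_HOURS, servers_by_complexity)
test_low, test_high = rollup(REHOST_HA_DR_TEST_HOURS, servers_by_complexity)
print(f"Rehost migration effort: {mig_low:,}-{mig_high:,} person hours")
print(f"HA/DR testing effort:    {test_low:,}-{test_high:,} person hours")
```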

*Estimating migration infrastructure costs*

Include estimates for the infrastructure that you will use over the course of the migration. Typically, these estimates comprise:
+ A budget for connectivity and data exchange services for workload and data migration from the current environment to AWS
+ A budget for the AWS services (especially compute and storage) needed for hosting the migrated workloads during the migration, testing, and cutover processes
+ The ramping up of AWS utility costs as each migration wave is completed
+ The decommissioning costs of the existing infrastructure that will no longer run the migrated workloads

For data exchange, examine your total data volumes and assess the feasibility of using networking. If you have provisioned an [AWS Direct Connect](https://aws.amazon.com/directconnect/) link or [Site-to-Site VPN](https://aws.amazon.com/vpn/) from AWS to a point on your WAN ahead of time for operational use after the migration, you can use that resource up to its service quota. 

If your network capacity is insufficient, a short-term increase in internet bandwidth with a virtual private network (VPN) is often a highly cost-effective solution. If not, AWS data transfer devices such as [AWS Snowball Edge](https://aws.amazon.com/snowball/) and [AWS Snowcone](https://aws.amazon.com/snowcone/) offer solutions in most AWS Regions. Also, for very high-volume data migration, consider including budget for [AWS DataSync](https://aws.amazon.com/datasync/), which improves reliability and can accelerate transfers irrespective of the media used.
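
For the feasibility assessment, a back-of-the-envelope calculation is usually enough. This sketch estimates transfer duration from total data volume and sustained bandwidth; the volume, link speed, utilization factor, and 30-day threshold are all assumptions to replace with your own values:

```python
# Sketch: back-of-the-envelope network transfer feasibility check.
# All inputs are assumptions; replace with your own measured values.
data_tb = 250                 # total data to migrate, in terabytes
link_gbps = 1.0               # provisioned Direct Connect / VPN bandwidth, Gbps
sustained_utilization = 0.6   # fraction of the link realistically usable

effective_gbps = link_gbps * sustained_utilization
transfer_gigabits = data_tb * 8_000          # 1 TB = 8,000 gigabits (decimal units)
transfer_days = transfer_gigabits / effective_gbps / 86_400

print(f"Estimated transfer time: {transfer_days:.1f} days")
if transfer_days > 30:  # assumed acceptability threshold
    print("Consider AWS Snowball Edge or a short-term bandwidth increase.")
```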

Modeling the ramp up of AWS services and the ramp down of existing infrastructure is important for the cash-flow analysis element of the business case. At this stage, you are not likely to have a wave plan to determine exactly when costs will be incurred. We recommend the following (a minimal ramp model sketch follows the list):
+ Ramping up the costs for AWS at a constant rate over the migration. 
+ Ramping down the costs for the existing infrastructure you plan to decommission at a constant rate over the same duration.
+ Starting the AWS cost ramp up 1–2 months before the existing infrastructure ramp down. This provides 1 month of AWS utility usage to conduct the migration for each wave. It also includes time for testing, and additional time to complete the decommissioning work needed to stop incurring costs on the replaced infrastructure.
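
Here is a minimal sketch of this ramp model, assuming an 18-month migration, linear ramps, and AWS costs starting 2 months ahead of decommissioning; the run rates are placeholders:

```python
# Sketch: linear ramp-up of AWS costs and ramp-down of existing
# infrastructure costs over an assumed 18-month migration, with AWS
# costs leading by 2 months. Monthly run rates are placeholders.
MONTHS = 24
RAMP_MONTHS = 18
AWS_LEAD_MONTHS = 2
aws_steady_state = 100_000   # monthly AWS run rate after migration
legacy_run_rate = 150_000    # current monthly infrastructure cost

monthly_costs = []
for m in range(MONTHS):
    aws_fraction = min(m / RAMP_MONTHS, 1.0)  # AWS ramps up from month 0
    legacy_fraction = 1.0 - min(max(m - AWS_LEAD_MONTHS, 0) / RAMP_MONTHS, 1.0)
    monthly_costs.append(aws_fraction * aws_steady_state
                         + legacy_fraction * legacy_run_rate)

# The early months show the "double bubble" of overlapping costs.
for m, cost in enumerate(monthly_costs):
    print(f"Month {m:2d}: ${cost:,.0f}")
```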

*Estimating decommissioning costs*

Decommissioning equipment that cannot be redeployed, and disposing of it in a legal and environmentally friendly way, can incur some small costs. However, for a directional business case, typically the only potentially material sum is the cost of writing off any remaining book value of the replaced assets.

For the directional business case, we recommend that you do the following (a depreciation sketch follows the list):
+ Review your asset list.
+ Identify those that would be decommissioned.
+ To reduce the write-off, examine the opportunities for switching devices around so that newer devices on the list can be used to replace older, more fully depreciated assets. 
+ Make an assessment of the future book value of the assets that would be decommissioned at that point.
+ Include this as the migration cost of decommissioning.
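
A minimal sketch of the write-off estimate, assuming straight-line depreciation; the asset list, costs, and lifetimes are hypothetical:

```python
# Sketch: estimate the future book value (write-off) of assets decommissioned
# at a given point, assuming straight-line depreciation. Data is hypothetical.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    purchase_cost: float
    life_months: int
    age_at_decommission_months: int  # age when the asset would be retired

def book_value(asset: Asset) -> float:
    remaining = max(asset.life_months - asset.age_at_decommission_months, 0)
    return asset.purchase_cost * remaining / asset.life_months

assets = [
    Asset("storage array", 400_000, 60, 48),
    Asset("blade chassis", 250_000, 60, 30),
    Asset("core switch",    80_000, 60, 58),
]

write_off = sum(book_value(a) for a in assets)
print(f"Estimated write-off at decommissioning: ${write_off:,.0f}")
```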

*Assembling and adjusting the full directional business case*

After you prepare the full set of costs for each pair of scenarios, construct a discounted cash flow statement for each and graph them. We recommend building directional business cases over the same period as the hardware refresh cycle. This is typically 5 years for servers, storage, and network devices. When you use the same period as the hardware refresh cycle, the costs of exactly one refresh are included in the as-is costs for each scenario.

Then calculate the key financial metrics that you need for getting approval to move to the next phase of the program. We usually include the following (a computational sketch follows the list):
+ The net present value (NPV) to gauge the absolute value of the cost reductions and productivity gains assessed
+ The payback period in months to verify that returns are sufficiently fast
+ The final run-rate comparison to verify that the program takes enough cost out over the term
+ The return on investment (ROI) and modified internal rate of return (MIRR) to assess the relative financial performance of the program over other demands on capital your organization may be prioritizing
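
The following sketch computes these metrics from a net cash-flow series, assuming annual periods for simplicity (the document's payback is usually expressed in months); the cash flows and rates are illustrative placeholders:

```python
# Sketch: key financial metrics from an annual net cash-flow series
# (year 0 first). Flows and rates are illustrative placeholders.
flows = [-1_500_000, 400_000, 700_000, 900_000, 900_000, 900_000]
discount_rate = 0.08   # for NPV
finance_rate = 0.08    # for MIRR: rate applied to negative flows
reinvest_rate = 0.05   # for MIRR: reinvestment rate on positive flows

npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(flows))

# Payback: first year in which the cumulative flow turns non-negative.
cumulative, payback_year = 0.0, None
for t, cf in enumerate(flows):
    cumulative += cf
    if payback_year is None and cumulative >= 0:
        payback_year = t

n = len(flows) - 1
fv_positive = sum(cf * (1 + reinvest_rate) ** (n - t)
                  for t, cf in enumerate(flows) if cf > 0)
pv_negative = -sum(cf / (1 + finance_rate) ** t
                   for t, cf in enumerate(flows) if cf < 0)
mirr = (fv_positive / pv_negative) ** (1 / n) - 1

# Simple (undiscounted) ROI: net gain over the present value of investment.
roi = sum(flows) / pv_negative

print(f"NPV: ${npv:,.0f}  Payback year: {payback_year}  "
      f"MIRR: {mirr:.1%}  Simple ROI: {roi:.1%}")
```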

Use the first iteration of the case to determine whether the expected financial performance means that refinements should be made, as in the following examples:
+ If payback is too slow, consider options to accelerate and reduce the cost of migration, such as the following:
  + Use AWS Partners or AWS Professional Services to expand the available resources and further parallelize migrating workloads with more basic patterns. 
  + For workloads running in VMware, compare the relocate strategy to the rehost or replatform strategy, at least for the initial phase. Using the relocate strategy can reduce migration cost and increase migration speed.
  + Where technically feasible, push workloads that require more complex replatform or refactor (re-architect) strategies into a future phase, outside the scope of the initial business case.
+ If ROI and MIRR are too low, consider the following:
  + Are the scenarios that you are considering too conservative? Do you have a scenario that reflects the most likely capacity growth and elasticity needs? Do you have scenarios that compare the costs inclusive of the increases in quality of service within your objectives?
  + Can you refine the scope of the application portfolio to be migrated in the first phase to focus on workloads that will yield stronger returns, such as those with lower current utilization or expensive disaster recovery (DR) needs?
  + Can you refine the scope of the application portfolio to initially exclude specific workloads that achieve less commercially? For example, can you postpone workloads for which third-party software licenses become more expensive due to different terms for deployment in public cloud infrastructure?
+ If the final run-rate comparison does not meet the expected target, explore the following:
  + First, confirm that the other metrics meet expectations. The directional business case is primarily to show that there is sufficient financial opportunity to justify starting the next phase of migration preparation. 
  + Identify a list of the opportunities to continue to improve cost performance on AWS after the initial phase of migration.

Include an assessment of the list of opportunities when preparing the detailed business case. In addition, include an opportunities assessment in the ongoing maintenance of the case and the month-to-month cost-optimization process after migration is complete.