

# Portfolio analysis and migration planning
<a name="portfolio-analysis-migration-planning"></a>

This stage of assessment completes the portfolio-level discovery and analysis started in the [Portfolio discovery and initial planning](portfolio-discovery-initial-planning.md) section. The goal is to iterate and establish a baseline for the initial portfolio of applications and infrastructure. This baseline includes identifying all dependencies, iterating rationalization models for migration, creating a detailed business case, and outlining a migration wave plan. Because the required data fidelity is higher, this stage requires a significant time investment. To accelerate assessment outcomes, we recommend using as many programmatic data sources, such as discovery tooling, as possible.

Primary outcomes of this stage include the following:
+ A high-fidelity application and infrastructure inventory
+ A high-level migration strategy for each application
+ A high-confidence migration wave plan
+ A detailed business case

# Understanding complete assessment data requirements
<a name="understanding-complete-assessment-data-requirements"></a>

The following tables describe the information required to obtain a complete portfolio view of the applications in the migration and their associated infrastructure.

The tables use the following abbreviations:
+ R, for required
+ O, for optional
+ N/A, for not applicable

**Applications**


| **Attribute name** | **Description** | **Inventory and prioritization** | **Detailed business case** | **Recommended fidelity level (minimum)** | 
| --- | --- | --- | --- | --- | 
| Unique identifier | For example, application ID. Typically available in existing CMDBs or other internal inventories and control systems. Consider creating unique IDs whenever these are not defined in your organization. | R | R | High | 
| Application name | Name by which this application is known to your organization. Include commercial off-the-shelf (COTS) vendor and product name when applicable. | R | R | High | 
| Is COTS? | Yes or No. Indicates whether this is a commercial application or an internal development. | R | R | High | 
| COTS product and version | Commercial software product name and version  | R | R | High | 
| Description | Primary application function and context | R | R | High | 
| Criticality | For example, strategic or revenue-generating application, or supporting a critical function | R | R | High | 
| Type | For example, database, customer relationship management (CRM), web application, multimedia, IT shared service | R | R | High | 
| Environment | For example, production, pre-production, development, test, sandbox | R | R | High | 
| Compliance and regulatory | Frameworks applicable to the workload (for example, HIPAA, SOX, PCI-DSS, ISO, SOC, FedRAMP) and regulatory requirements | R | R | High | 
| Dependencies | Upstream and downstream dependencies on internal and external applications or services. Also include nontechnical dependencies such as operational elements (for example, maintenance cycles). | R | O | High | 
| Infrastructure mapping | Mapping to physical and/or virtual assets that make up the application | R | R | High | 
| License | Commodity software license type (for example, Microsoft SQL Server Enterprise) | R | R | Medium-high | 
| Cost | Costs for software license, software operations, and maintenance | N/A | R | Medium-high | 
| Business unit | For example, marketing, finance, sales | R | R | High | 
| Owner details | Contact information for application owner | R | R | High | 
| DR information | Disaster recovery components | R | R | High | 
| Migration strategy | For example, one of the 6 Rs for migration to AWS | R | R | High | 
| Support tickets | 12–24 months of data to help assess the productivity and financial impact of outages, slowdowns, transaction throttling, and batch window overruns | O | R | Medium | 

**Infrastructure**


| **Attribute name** | **Description** | **Inventory and prioritization** | **Business case** | **Recommended fidelity level (minimum)** | 
| --- | --- | --- | --- | --- | 
| Unique identifier | For example, server ID. Typically available on existing CMDBs or other internal inventories and control systems. Consider creating unique IDs whenever these are not defined in your organization. | R | R | High | 
| Network name | Asset name in the network (e.g., hostname) | R | R | High | 
| DNS name (fully qualified domain name, or FQDN) | DNS name | R | O | High | 
| IP address and netmask | Internal and/or public IP addresses | R | R | High | 
| Asset type | For example, physical or virtual server, hypervisor, container, device, database instance | R | R | High | 
| Product name | Commercial vendor and product name (for example, VMware ESXi, IBM Power Systems, Exadata) | R | R | High | 
| Operating system | For example, RHEL 8, Windows Server 2019, AIX 6.1 | R | R | High | 
| Configuration | Allocated CPU, number of cores, threads per core, total memory, storage, network cards | R | R | High | 
| Utilization | CPU, memory, and storage peak and average. Database instance throughput. | R | R | High | 
| License | Commodity license type (for example, RHEL Standard) | R | R | High | 
| Is shared infrastructure? | Yes or No, to denote infrastructure that provides shared services such as authentication providers, monitoring systems, backup services, and similar services | R | R | High | 
| Application mapping | Applications or application components that run in this infrastructure | R | R | High | 
| Cost | Fully loaded costs for bare-metal servers, including hardware, maintenance, operations, storage (SAN, NAS, Object), operating system license, share of rack space, and data center overheads | N/A | R | Medium-high | 
| Estimated volume of data transfer (in/out) | For example, per infrastructure asset per day over a 30-day period | O | R | Medium | 

**Networks**


| **Attribute name** | **Description** | **Inventory and prioritization** | **Business case** | **Recommended fidelity level (minimum)** | 
| --- | --- | --- | --- | --- | 
| Size of pipe (Mb/s), redundancy (Y/N) | Current WAN link specifications (for example, 1000 Mb/s redundant) | R | R | Medium-high | 
| Link utilization | Peak and average utilization, outbound data transfer (GB/month) | R | R | Medium-high | 
| Latency (ms) | Current latency between connected locations. | R | O | High | 
| Cost | Current cost per month | N/A | R | Medium-high | 

**Migration**

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/application-portfolio-assessment-guide/understanding-complete-assessment-data-requirements.html)
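The required-versus-optional attributes in the tables above can be checked programmatically against your inventory export. The following sketch validates a single application record; the attribute names are illustrative assumptions, so map them to the fields in your own CMDB or discovery-tool export.

```python
# Check an application inventory record against the required attributes
# from the tables above. Attribute names are illustrative; map them to
# your own CMDB or discovery-tool export fields.

REQUIRED_FOR_INVENTORY = {
    "unique_identifier", "application_name", "is_cots", "description",
    "criticality", "type", "environment", "dependencies",
    "infrastructure_mapping", "business_unit", "owner_details",
}

def missing_attributes(record: dict) -> set:
    """Return required attributes that are absent or empty in a record."""
    return {a for a in REQUIRED_FOR_INVENTORY
            if not str(record.get(a, "")).strip()}

# A hypothetical, partially populated record
app = {
    "unique_identifier": "APP-0042",
    "application_name": "Payments",
    "is_cots": "No",
    "criticality": "Revenue-generating",
    "type": "Web application",
    "environment": "Production",
}

gaps = missing_attributes(app)
```

Running such a check across the full export highlights where data fidelity falls below the minimum level before you commit to wave planning.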

# Establishing a baseline for the application portfolio
<a name="baseline-application-portfolio"></a>

To create high-confidence migration wave plans, you must establish a baseline for the portfolio of applications and its associated infrastructure. A portfolio baseline provides a comprehensive view of the migration scope, including technical dependencies and migration strategy. It clarifies which applications are in scope for migration and confirms that the data points outlined in the [Understanding complete assessment data requirements](understanding-complete-assessment-data-requirements.md) section are collected. Likewise, all the associated infrastructure (compute, storage, networks) is understood and mapped to the applications. 

Technical dependencies can be described in four categories:
+ **Application-to-infrastructure** dependencies establish the link between software and physical or virtual hardware. For example, there is a dependency between a CRM application and the virtual machines where it is installed. 
+ **Application-component** dependencies describe how components running in different infrastructure assets interact. An example of an application-component dependency is a web front end running on virtual machines, with an application layer running on a different virtual machine, and a database running on a database cluster.
+ **Application-to-application** dependencies relate to the interaction between applications or application components with other applications or their components. An example of an application-to-application dependency is a payment-processing application and a stock-management application. These applications are independent, but they constantly interact using defined API operations. 
+ **Application-to-infrastructure services** dependencies are technically application-to-application dependencies, given that the infrastructure service is itself an application. However, we recommend categorizing these separately. The main reason is that infrastructure services are typically shared by many applications, so they have a long trail of dependencies. They also typically follow a different migration strategy and pattern. For example, a load balancer can contain balancing pools for several applications. What matters is the dependency on the pool, which is likely to be migrated individually, alongside the dependent application, while the load balancer itself is retained or retired. In addition, individualizing application-to-infrastructure service dependencies helps to avoid false dependency groups. A false dependency group is when several business applications are grouped together, implying that, because they share a common dependency on an infrastructure service, they must be migrated at the same time. For example, authentication services, such as Active Directory, are likely to be associated with large groups of applications. The key is to approach these applications individually and to address the dependency by enabling the service, such as AWS Directory Service for Microsoft Active Directory, in the cloud environment.
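The four dependency categories above can be modeled as labeled edges in a dependency graph. The sketch below shows one way to exclude shared-service edges when forming groups, which avoids the false dependency groups described above. All asset names are hypothetical.

```python
# Model the four dependency categories as labeled edges.
# Excluding application-to-infrastructure-service edges when grouping
# avoids false dependency groups around shared services.

dependencies = [
    ("crm-app", "vm-web-01", "application-to-infrastructure"),
    ("crm-web", "crm-db", "application-component"),
    ("payments", "stock-mgmt", "application-to-application"),
    ("payments", "active-directory", "application-to-infrastructure-service"),
    ("stock-mgmt", "active-directory", "application-to-infrastructure-service"),
]

def grouping_edges(deps):
    """Keep only the edges that should bind assets into one migration
    group. Shared-service dependencies are handled individually."""
    return [(a, b) for a, b, category in deps
            if category != "application-to-infrastructure-service"]

edges = grouping_edges(dependencies)
```

With this filter, `payments` and `stock-mgmt` remain linked through their API dependency, but neither is forced into the same group as every other consumer of Active Directory.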

When you establish a baseline for the portfolio, we recommend that you confirm a migration strategy for each application component. The migration strategy will be one of the 6 Rs for migration (see the [Iterating the 6 Rs migration strategy](iterating-6-rs-migration-strategy-selection.md) section). In the portfolio baseline, one of the 6 Rs should be associated with each application. A 6 R strategy should also be associated with each of the application's infrastructure components.

To establish a baseline version of the portfolio, including dependencies and migration strategies, use automated discovery tooling (see [Evaluating the need for discovery tooling](understanding-initial-assessment-data-requirements.md#discovery-tooling)). Complement the data with information gathered from key stakeholders such as application owners and infrastructure teams. Keep gathering data until you obtain a complete portfolio inventory that matches the attributes and level of fidelity outlined in the [data requirements section](understanding-complete-assessment-data-requirements.md) for this stage. The resulting dataset will be instrumental in driving the migration.

Consider that, depending on the extent of your migration scope and the available tooling, this activity can take several weeks to complete.

# Iterating the prioritization criteria
<a name="iterating-prioritization-criteria"></a>

Before you create migration wave plans, we recommend that you iterate the application prioritization criteria to pivot from pilot application selection to long-term wave planning. 

In earlier sections, we introduced default prioritization criteria that favor simple, cloud-ready applications (see [Prioritizing applications](prioritization-and-migration-strategy.md#prioritizing-applications)). This is because, in early stages, we recommend starting with noncritical applications to refine migration processes and incorporate lessons learned. However, at this stage, to create long-term plans, the order in which applications are migrated should be aligned to business drivers. Applying the new criteria will generate a new ranking of applications that is a key input for wave planning.

Review the available data points from the application portfolio, and select the attributes that will determine application prioritization based on business drivers.

First, validate your business drivers (see [Business drivers and technical guiding principles](business-drivers-technical-guiding-principles.md)). Next, based on your business drivers, select the attributes that will help to prioritize applications for migration. 
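After you select the attributes, the prioritization itself can be sketched as a weighted score per application. The weights and scoring scales below are assumptions for illustration; align them to your own business drivers and iterate until the ranking matches stakeholder expectations.

```python
# A weighted-scoring sketch for application prioritization.
# Weights and 1-10 attribute scores are illustrative assumptions.

weights = {"business_value": 0.4, "cloud_readiness": 0.35, "cost_pressure": 0.25}

applications = {
    "crm":      {"business_value": 9, "cloud_readiness": 4, "cost_pressure": 6},
    "intranet": {"business_value": 3, "cloud_readiness": 8, "cost_pressure": 2},
    "payments": {"business_value": 8, "cloud_readiness": 7, "cost_pressure": 7},
}

def score(attrs: dict) -> float:
    """Weighted sum of attribute scores."""
    return sum(weights[k] * v for k, v in attrs.items())

# Highest score first: this ranking feeds wave planning
ranking = sorted(applications, key=lambda a: score(applications[a]), reverse=True)
```

Changing the weights is how you pivot the ranking between business drivers, for example, shifting weight from `cloud_readiness` to `cost_pressure` for a cost-reduction-driven migration.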

The following table shows example prioritization criteria aligned to business drivers for innovation.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/application-portfolio-assessment-guide/iterating-prioritization-criteria.html)

The following table shows example prioritization criteria aligned to business drivers for quick cost reduction.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/application-portfolio-assessment-guide/iterating-prioritization-criteria.html)

Test the prioritization criteria and iterate until you generally agree with the output. It takes at least three or four iterations to obtain a baseline version.

# Iterating the 6 Rs migration strategy selection
<a name="iterating-6-rs-migration-strategy-selection"></a>

At this stage, we recommend that you iterate and evolve the 6 Rs decision tree. The [Determining the R type for migration](prioritization-and-migration-strategy.md#migration-r-type) section introduced a default decision tree. We recommend revising the tree, considering the learnings throughout migration of the initial pilot applications, and ensuring that it still aligns to business drivers, prioritization criteria, and your unique circumstances. Validate the decision tree with sample applications, and verify that it still produces the expected strategy. Otherwise, update the logic accordingly. The resulting tree will be key in establishing baselines for the portfolio of applications and in allocating migration strategies for each application component.
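As one illustration, a revised decision tree can be encoded as an ordered sequence of questions. The branching questions and their order below are assumptions; encode the logic of your own tree and validate it against sample applications.

```python
# A minimal 6 Rs decision-tree sketch. The questions and their order
# are illustrative; substitute the logic of your own revised tree.

def migration_strategy(app: dict) -> str:
    """Walk the decision tree top-down; the first matching branch wins."""
    if app.get("end_of_life"):
        return "retire"
    if app.get("move_to_saas"):
        return "repurchase"
    if app.get("must_stay_on_premises"):
        return "retain"
    if app.get("needs_redesign"):
        return "refactor"
    if app.get("minor_changes_add_value"):
        return "replatform"
    return "rehost"  # default: lift and shift

strategy = migration_strategy({"minor_changes_add_value": True})
```

Running every application in the portfolio through such a function makes it easy to re-test the whole portfolio each time the tree's logic is revised.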

As described in the previous [6 Rs section](prioritization-and-migration-strategy.md#migration-r-type), the 6 Rs also apply to infrastructure, and it is equally important to assign them accordingly. While a given application component will have a migration strategy, at an infrastructure level, each infrastructure asset will follow a given migration strategy that might be different from the strategy established for the application component it supports. 

Remember that the 6 Rs decision tree applies to application components only. The migration strategy for infrastructure is derived from the strategy chosen for the application. For example, for an application component that will be replatformed, the current infrastructure that hosts it could be retired.

Ensure that migration strategies are allocated to each application component and its associated infrastructure. This information will be a key factor when estimating effort, capacity, and skills needed, and when creating migration wave plans.

For more information about determining your 6 Rs, see the [AWS Migration Hub strategy recommendations](https://aws.amazon.com/blogs/aws/new-strategy-recommendations-service-helps-streamline-aws-cloud-migration-and-modernization/).

# Wave planning
<a name="wave-planning"></a>

In wave planning, a dependency group is a collection of applications and infrastructure that have technical and nontechnical dependencies that can't be resolved. Because of these dependencies, the applications and infrastructure in a dependency group must be migrated at the same time or on a specific date. For example, an application running on one virtual machine and its database running on another, where there are low-latency requirements or high traffic volumes and complex queries, are likely to be migrated together rather than run with one component in the cloud and the other on premises. Likewise, independent applications that interact over an API with similar low-latency requirements will also be migrated at the same time. 

Migration waves typically span 4–8 weeks, and they can contain one or more migration events. Dependency groups are combined into waves so that a wave can contain one or more dependency groups. The wave also contains other activities that are required for the migration. These include AWS infrastructure setup (such as landing zone, security, and operations), migration tooling, and migration activities such as data replication, cut-over planning, testing, and post-migration support. 

To measure success and track progress, waves should be aligned to outcomes and business drivers. This will also influence wave duration and the dependency groups that a wave contains. The completion of a wave should reflect a measurable achievement. The planning of a wave can also combine other factors, such as technical guiding principles. For example, waves can be defined by environment (for example, development, test, production) or by migration strategy (for example, rehost wave, replatform wave).

To create effective and high-confidence migration wave plans, you must obtain a complete view of the application portfolio, associated infrastructure (compute, storage, networks), dependency mapping, and migration strategy.

The section on [establishing a baseline for the application portfolio](baseline-application-portfolio.md) described four categories of technical dependencies. These dependencies contribute to the creation of migration waves and the definition of dependency groups. Dependency groups will be determined by the criticality of the dependency. In addition, nontechnical dependencies must be considered. For example, application release schedules, maintenance windows, and key business dates such as end-of-month or end-of-quarter processing will influence the wave plan.

Determine whether the dependency is *soft* or *hard*. A soft dependency is a relationship between two or more assets, or from an asset to a constraint, that is not dependent on the location of the components. For example, two systems that operate in the same local network (or same infrastructure) can be split apart by moving one of those systems to the cloud while the other remains on premises. Another example is a system that can be migrated during a maintenance window without impacting maintenance activities. 

A hard dependency is a relationship between two or more assets, or from an asset to a constraint, that is dependent on location. For example, two systems that operate in the same local network, and that are heavily dependent on low latency for communication between the application server and the database server, have a hard dependency. Moving only one of these systems to the cloud would cause functionality or performance problems that cannot be resolved. Likewise, nontechnical reasons, such as resource availability (for example, the team performing the migration) or operational constraints, such as maintenance windows where two systems can only be migrated in a given time window, might create a hard dependency for these assets.
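One way to operationalize this distinction is to derive dependency groups as connected components over hard dependencies only, because soft dependencies do not force assets to migrate together. The following sketch makes that assumption explicit; asset names and dependency labels are illustrative.

```python
# Sketch: derive dependency groups as connected components over hard
# dependencies only. Soft dependencies do not force co-migration.
from collections import defaultdict

deps = [
    ("app-server", "db-server", "hard"),   # low-latency requirement
    ("app-server", "monitoring", "soft"),  # can be split across locations
    ("billing", "reports", "hard"),
]

def dependency_groups(deps):
    # Build an undirected graph from hard dependencies only
    graph = defaultdict(set)
    for a, b, kind in deps:
        if kind == "hard":
            graph[a].add(b)
            graph[b].add(a)
    # Collect connected components with an iterative depth-first search
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(graph[n] - group)
        seen |= group
        groups.append(group)
    return groups

groups = dependency_groups(deps)
```

Here the monitoring system drops out of the grouping because its only link is soft, while the application and database servers form one group that must move together.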

To create a migration wave plan, determine your dependency groups by analyzing dependencies, ideally from a highly trusted source of data such as specialized discovery tooling, and combine this information with your application prioritization criteria and operational circumstances. The applications at the top of the prioritization ranking should be targeted for your initial migration waves. Determine wave capacity (the number of applications a wave can contain) based on resource availability, risk tolerance, business and technical constraints, experience, and skills. Consider working with AWS Professional Services or AWS Migration Competency Partners, which can provide specialists to assist you throughout the process.

The prioritization criteria are an initial indication of the order in which you will move your applications to the cloud. However, dependency groups will be the actual determinant for the applications that will be moved at a given time. This is because applications that are ranked as high priority could have hard dependencies to applications that are in the middle or at the bottom of the ranking. 

The migration strategy will also influence the composition of a wave. For example, a high-priority application that requires a refactor strategy, involving several weeks or months of analysis, design, testing, and preparation, will likely be placed in a later wave.

## Creating a wave plan
<a name="create-wave-plan"></a>

Before migrating a wave of applications, you need the application portfolio data and a detailed assessment of the group of applications that will be migrated in the wave. The detailed assessment should include the list of applications in the wave, the associated infrastructure details, a target design, and a migration strategy for each application. 

Establishing wave ownership and governance is key to managing and tracking the wave work, program dependencies, change management, issues, and risks. Ensure that a governance framework is in place to manage the plan.

To outline the wave plan, start with a default wave construct. What happens within a wave? After the initial input is defined, the wave can commence. Typically, the activities will be: 

1. Refine the cutover plan. This activity should outline the runbooks and steps that must be taken at the moment of migration, including the coordination with other internal and external teams.

1. Refine the rollback plan. What must be done to roll back the applications if things go wrong?

1. Prepare the target infrastructure. For example, you can create or extend the AWS landing zone (AWS account, security, networking, infrastructure services, other supporting infrastructure).

1. Test target infrastructure.

1. Operate migration tooling. For example, install replication agents and start data transfer.

1. Conduct cutover plan and runbook dry runs. Group all participating team members, and review all steps in advance.

1. Monitor data replication and infrastructure deployments.

1. Confirm readiness for operation of infrastructure and applications in AWS.

1. Confirm security readiness.

1. Confirm compliance and regulatory requirements (for example, workload validation pre-migration and post-migration) if applicable.

1. Migrate the applications to AWS and perform pre go-live testing.

1. Provide post-migration support for a period of time, such as 3 days, when the operations teams and the migration teams are fully available to resolve issues, and apply optimizations.

1. Conduct a post-migration review. Document lessons learned, and incorporate them into future waves.

1. Perform wave closure by confirming operational handover and obtainment of metrics for reporting.

How long each of these activities takes will be dictated by the complexity of the scope, the wave capacity, the people involved, and your unique circumstances. Where possible, smaller waves are preferable because they reduce the impact of any delays or migration blockers. Determine, with your teams, what the default duration of a wave will be.

Next, proceed to analyze dates to create an initial high-level structure of empty waves (with no application assigned yet). Consider the following questions:
+ What is the total migration program length? 
+ What are the deadlines?
+ Are there fixed data center exit dates? 
+ Are there collocation contract end dates? 
+ What are the application and infrastructure refresh cycles? 
+ What are the application maintenance and release cycles? 
+ Are there any dates when migrations should be avoided (for example, release and maintenance cycles, end of year, holidays, month-end processing)?

With these considerations, plot the waves into a plan. To accelerate the migration process, we recommend overlapping waves where possible. The key to overlapping waves is to define and consider what happens within a wave. Typically, deployment activities, target infrastructure validation, and data synchronization will occur during the first half of a wave. The second half will focus on the actual migration, testing, and operational handover. This means that different teams are involved in each half of the process, and that you can gain some efficiencies. For example, as soon as the team involved in target infrastructure preparation has completed their work, they can start working on the requirements of the next wave. In general, it is preferable that most waves have a similar length and structure to facilitate a factory-like approach to migrations. However, during the wave planning process, the size of a given wave can be extended to meet dependencies or operational requirements. 

Next, based on the dependency groups that have been identified, determine the maximum size of a wave in terms of the number of dependency groups it can contain. Wave size is typically dictated by risk appetite (for example, how much parallel change can be tolerated), and resource availability (for example, how much parallel change can be performed with the available resources, skills, and budget). However, during early planning, don't be limited by resource requirements and availability. Waves that contain more than one dependency group can be decomposed into smaller waves in future iterations.

After the dependency groups for a given wave have been confirmed, review resource requirements for migrating the wave. Consider adjusting the wave size (the number of dependency groups it contains) based on resource requirements. This might lead to smaller or larger waves. Iterate the wave plan as needed until all waves have been defined.
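The sizing logic above can be sketched as a greedy assignment that fills waves with prioritized dependency groups up to a capacity measured in applications per wave. The capacity value and group sizes are illustrative; note that a dependency group larger than the capacity still gets its own wave, because a group cannot be split.

```python
# Greedy sketch: assign dependency groups (in priority order) to waves,
# respecting a capacity expressed as applications per wave.
# Group names, sizes, and the capacity are illustrative.

def plan_waves(groups, capacity):
    """groups: list of (name, app_count) tuples in priority order."""
    waves, current, used = [], [], 0
    for name, size in groups:
        # Start a new wave when the next group would exceed capacity
        if current and used + size > capacity:
            waves.append(current)
            current, used = [], 0
        current.append(name)  # an oversized group still gets its own wave
        used += size
    if current:
        waves.append(current)
    return waves

groups = [("G1", 3), ("G2", 4), ("G3", 2), ("G4", 5)]
waves = plan_waves(groups, capacity=6)
```

A first pass like this gives the initial high-level wave structure; later iterations can split or merge waves as resource requirements are reviewed.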

## Managing change
<a name="manage-change"></a>

The portfolio of applications and associated infrastructure will change during the lifecycle of migration programs. Long-running migration programs coexist with normal business evolution and change. Applications keep evolving as they wait to be migrated: servers are added or removed, and new infrastructure is deployed on premises. It is expected that the scope of a wave or dependency group will require changes, especially when, closer to the migration date, a previously unknown dependency is identified or a new server is included in the inventory. Sometimes this can happen during the migration itself.

Scope changes affect dependency groups and waves. To handle change and minimize impact, it's important to establish a scope-control mechanism. A scope change control mechanism requires the definition of a single source of truth for the scope. This could be a tool for managing the scope, or a .csv file, spreadsheet, or database, as defined by the migration program governance. You must identify changes, analyze impact, and communicate change to the relevant stakeholders so that they can take action. The wave plan will be iterated as a result.
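A basic scope-control mechanism can be sketched as a comparison between two snapshots of the single source of truth. The asset IDs and attributes below are hypothetical; in practice the snapshots would come from your scope-management tool, spreadsheet, or database export.

```python
# Sketch: diff two inventory snapshots from the single source of truth
# to surface scope changes for stakeholder review. IDs are illustrative.

def scope_changes(previous: dict, current: dict) -> dict:
    """Each snapshot maps an asset ID to its attributes."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    changed = sorted(i for i in set(previous) & set(current)
                     if previous[i] != current[i])
    return {"added": added, "removed": removed, "changed": changed}

previous = {"SRV-1": {"wave": 2}, "SRV-2": {"wave": 2}}
current  = {"SRV-1": {"wave": 3}, "SRV-3": {"wave": 2}}

changes = scope_changes(previous, current)
```

Publishing such a diff on a regular cadence gives stakeholders a concrete list of additions, removals, and reassignments to analyze before the wave plan is iterated.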

# Detailed business case
<a name="detailed-business-case"></a>

In this stage, we recommend validating and expanding the scope of the business case to provide a greater level of detail to support the transformation program. The quickly assembled initial directional business case is designed to provide enough confidence to invest in the foundational steps and next level of detailed planning. 

Developing a detailed business case supports this planning process in the following ways:
+ Providing financial analyses that inform decisions on what should be migrated and modernized, which options to select, and how to phase and prioritize the work
+ Validating, refining, and developing the original directional financial case by re-examining in detail:
  + The infrastructure cost-reduction potential
  + The internal IT productivity and any outsourced operations efficiencies
  + The estimates for the investments needed for program setup, migration, and modernization
+ Identifying, estimating the scale of, and setting up the process for tracking the further value drivers that migration brings

In the detailed business case, you establish the following:
+ The objective basis on which to secure the mandate and investment to implement at least the first phase of migration
+ The baseline minimum financial performance expectation for the program
+ Clarity over the financial basis on which various migration design and prioritization decisions are made, so that when circumstances and people change over the course of the program, the new leadership can make informed choices.
+ Insight into incremental areas of cost optimization to be explored after initial usage data becomes available as workloads are migrated and start operation
+ Estimates for the value that cloud transformation brings to the business from increased resilience and agility 
+ The associated KPIs, metrics, and assumptions used to estimate the financial return from improved resilience and agility, which then form the baseline for driving the primary benefits realization out of the program

## Determine the scenarios needed for the case
<a name="determine-scenarios-needed"></a>

When building the detailed business case, it is usually necessary to develop multiple scenarios to support the various purposes that the business case is used for. 

**Minimum change scenario** – To assess the minimum financial performance expectation, prepare a scenario that assumes the minimum expected change to the status quo. As a worst-case scenario, it is useful support when you seek the mandate to invest in the migration. This scenario models the minimum expected degree of capacity growth and minimum changes for other quality-of-service needs, such as availability and resilience. Minimal change produces the lowest cost and the fewest resource inefficiencies for the current operating model.

**Most likely scenario** – To inform program strategy and prioritization decisions, prepare the scenario that reflects what the business expects to happen. This scenario should include the likely peak utilization growth or reduction and the upgrade costs to meet demand for high levels of service quality (especially availability and resilience) from the business.

**Other specific scenarios** – Where it is still necessary to make an assumption that could have a large impact on the business case, develop scenarios both for where the assumption holds true and for where it does not. However, we recommend keeping the number of these alternative scenarios to the absolute minimum. Creating more than three to four scenarios in total slows progress and becomes expensive, confusing, and difficult to maintain. Wherever possible, conduct experiments and work to remove larger assumptions.

## Validate and refine the infrastructure and migration cost model
<a name="validate-refine-infrastructure-migration-cost-model"></a>

After you have completed the portfolio analysis and prepared the design and sizing of the target AWS services, refine the running cost estimates for the current operating model (COM) and future operating model (FOM) on AWS for each scenario. It is usually necessary to refine the estimates for the following:
+ **COM infrastructure costs** of hypervisor host server, bare-metal server, storage, network device, security appliance hardware refreshes, installation, and maintenance. Calculate these with actual pricing and discount levels for the capacity needed for the scenario.
+ **COM data center and collocated facilities costs**, including space, cooling, power, racks, uninterruptible power supply (UPS), cabling, physical security systems, sized for the growth and specified to meet the capacity, and high availability and disaster recovery (DR) levels for the scenario.
+ **COM network services costs**, including costs for WAN links, content delivery networks, and virtual private networks (VPNs), calculated using contracted pricing for the connectivity, bandwidth, throughput, and latency needs for the scenario.
+ **COM application and infrastructure software costs** based on existing contracts to provide the growth or reduction of usage for the scenario.
+ **FOM AWS utility costs**, including tech support and managed services as needed, based on the refined service architecture, instance sizes, preferred pricing model, expected usage, and usage volatility.
+ **FOM application licensing** based on final application design, the configuration of the infrastructure running the applications, growth over time, and license transferability rules.
+ **FOM migration and modernization cost estimates**, refined to reflect the baseline migration wave plan for the scenario, and detailed to provide costs for each workload, especially for those to be replatformed, repurchased, or refactored. 
+ **FOM decommissioning costs**, including estimates of asset write-off and contract early termination costs, revised to reflect decommissioning timing in the baseline migration wave plan, verification of which assets can be repurposed or redeployed to minimize write-offs, and the cost of disposal of the physical assets and media.
+ **Migration parallel run costs** refined to reflect the timing of each migration cutover and each existing service decommissioning.
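The COM and FOM components above can be rolled up into run-rate and one-time totals for each scenario. The following sketch shows one way to structure that roll-up in Python. All category names and figures are hypothetical placeholders, not AWS pricing; substitute your own refined estimates.

```python
# Illustrative roll-up of COM and FOM cost components for one scenario.
# All figures are hypothetical placeholders, not actual pricing.

def annual_run_cost(components: dict) -> float:
    """Sum the annual cost of each operating-model component."""
    return sum(components.values())

com = {
    "infrastructure": 420_000,          # servers, storage, network hardware
    "data_center_facilities": 180_000,  # space, power, cooling, UPS
    "network_services": 95_000,         # WAN links, CDN, VPNs
    "software_licenses": 310_000,       # application and infrastructure software
}

fom = {
    "aws_utility": 510_000,             # compute, storage, support, managed services
    "application_licensing": 240_000,   # licenses carried to or purchased on AWS
}

one_time = {
    "migration_and_modernization": 350_000,
    "decommissioning": 60_000,
    "parallel_run": 45_000,
}

annual_saving = annual_run_cost(com) - annual_run_cost(fom)
print(f"Annual run-rate saving:   ${annual_saving:,.0f}")
print(f"One-time migration cost:  ${annual_run_cost(one_time):,.0f}")
```

Keeping each component as a separate line item makes it easy to vary a single assumption (for example, the parallel-run duration) per scenario without rebuilding the model.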

## Refine the IT productivity and IT operations and support efficiency value model
<a name="refine-it-productivity-it-operations-support-efficiency-value-model"></a>

As with the directional business case, there are two primary approaches to refining and developing the value model around IT operations and support. The approach that you choose depends on whether the COM is managed in-house, with contractors, or through outsourced services:

*Internal team productivity improvement*

Where IT operations and support are managed in house, the focus of the business case is on the following:
+ Identifying and quantifying the productivity gains from migration and any operational automation that is included in scope
+ Validating that the time freed up for the in-house team can be readily and productively applied to other, typically higher-value, activities, which gives the team opportunities for progression and greater reward and delivers more value to the organization

Assess how much time each team member in each role spends on their various regular activities, and apply guidance on the expected reduction in workload for each type of activity.

The following table provides initial guidance for the typical levels of workload reduction by activity for those tasks that consume the bulk of IT operations and support effort across the different roles in the team. The table includes a description of how the productivity is achieved.

**Note**  
The activities listed are typically performed by team members in several different roles, so the productivity saving for each task should be assessed across the full set of roles in the team. For example, in IT operations teams organized by infrastructure tower (such as compute, storage, and networking), capital expenditure planning and budgeting might be common to tower leads for each tower.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/application-portfolio-assessment-guide/detailed-business-case.html)

The following table shows the expected savings for each level of workload reduction.


| **Level** | **Expected** | 
| --- | --- | 
| Very high | 85% - 100% | 
| High | 60% - 90% | 
| Medium | 30% - 70% | 
| Low | 10% - 35% | 
| Minimal | 0% - 10% | 

These metrics provide a starting point for assessing productivity gains and including them in the detailed business case. Actual productivity gains vary based on the specific situation. It can be useful to calculate the productivity savings at both the midpoint and lower end of the ranges to estimate typical and conservative scenarios. 
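The midpoint and lower-end calculations described above can be sketched as follows. The band percentages come from the preceding table; the roles, activities, and hours are hypothetical placeholders for illustration.

```python
# Hypothetical sketch: converting workload-reduction bands into hours saved.
# Band ranges are from the table above; role hours are illustrative only.

BANDS = {
    "very_high": (0.85, 1.00),
    "high":      (0.60, 0.90),
    "medium":    (0.30, 0.70),
    "low":       (0.10, 0.35),
    "minimal":   (0.00, 0.10),
}

# (activity, role, hours per year, expected reduction band)
activities = [
    ("patching",          "sysadmin",   400, "high"),
    ("capacity_planning", "tower_lead", 250, "medium"),
    ("backup_operations", "sysadmin",   300, "very_high"),
]

def savings(point: str) -> float:
    """Total hours saved per year at the band midpoint or lower end."""
    total = 0.0
    for _activity, _role, hours, band in activities:
        low, high = BANDS[band]
        factor = low if point == "lower" else (low + high) / 2
        total += hours * factor
    return total

print(f"Typical (midpoint) saving:    {savings('midpoint'):.0f} hours/year")
print(f"Conservative (lower) saving:  {savings('lower'):.0f} hours/year")
```

Running both points gives the typical and conservative figures recommended above; multiply the hours by fully loaded labor rates per role to express the saving in currency terms.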

As the program progresses, it is valuable to capture actual data for time spent on each activity by role. That data builds an improved base for estimating operations and support costs for new projects and expansions of services.

*Outsourced IT operations and support cost reduction*

Where IT operations and support are primarily outsourced or managed with contractors, the cost allocation for the future operating model (FOM) can be prepared by requesting quotations from AWS Partners that offer managed service solutions, including AWS Partner-led [AWS Managed Services (AMS)](https://aws.amazon.com/managed-services/partners/). You can also contact your AWS account manager and request a price for AMS directly, as described in the subsection on [Building in operational cost optimization](directional-business-case.md#building-operational-cost-optimization) within the [Creating a directional business case](directional-business-case.md) section.

For the detailed business case, replace any benchmark figure with a quotation based on the revised AWS services bill of materials and expected service consumption, the AMS package and any options needed, and the service level needed. The cost will have a one-time implementation component and a consumption-based run rate. 

Include any remaining IT operations and support that must be retained for services that will not be migrated to AWS, plus any one-time contract penalties (for example, for early termination).

## Develop the resilience value model
<a name="develop-resilience-value-model"></a>

On AWS, you can construct a wide range of high availability, disaster recovery, and fault-tolerant architectures. Consumption-based pricing means that services are charged for only when used. Together, these two factors provide exceptional cost performance for resilience. 

Furthermore, AWS customers have been using these capabilities to improve their workloads' resilience. The [IDC 2018 survey](https://pages.awscloud.com/rs/112-TZM-766/images/AWS-BV%20IDC%202018.pdf?aliId=1614258770) gives examples of participating customers achieving 73 percent fewer outages per year, a 58 percent reduction in mean time to recover (MTTR), and a 94 percent reduction in lost productivity. The same survey showed that the financial benefits derived through increased resilience were 50 percent greater than the IT infrastructure cost reduction benefit.

Further resilience is achieved through modernizing the software development lifecycle for applications. Where CI/CD pipelines with test automation are introduced to support greater business agility, software defects are caught earlier in the development cycle, greatly reducing software maintenance costs.

To assess and include this value in the business case, first work with application business owners to build a picture of the *total benefit opportunity* for each workload to be migrated. This might include the following items:
+ The number, average duration, and nature of interruptions in service:
  + Examples of service interruptions include outages, performance slow-downs, planned batch and maintenance windows overrunning, bugs in key functions, and access throttling during peak periods. 
+ Impact on revenue by interruptions of revenue-generating services, such as ecommerce systems:
  + The likely number of transactions unable to be completed through service interruptions, based on the interruption time and transaction rates
  + Average value for each transaction impacted
+ The additional cost of support engineers' time to resolve defects in production systems compared to the cost of discovering them earlier in the development process
+ Impact on internal users' productivity and the cost of lost time

Then make an assessment of an expected and a more conservative reduction in time lost to service interruptions that the increased resilience should yield. For example, consider including the following items:
+ Reduced number of outages and MTTR using high availability architectures and improved recovery time objective (RTO) and recovery point objective (RPO)
+ Reduction in slow-downs, elimination of capacity throttling, and avoidance of batch processing overruns, using capabilities such as automatic scaling
+ Reduced number of application bugs that are discovered only in production, through the implementation of CI/CD pipelines and automated regression testing on infrastructure spun up and spun down to minimize cost

Put these together for the portfolio of applications to be migrated and modernized, and calculate the expected and more conservative business value figures for each year of the case. The benefits should ramp up in line with the migration schedule and then scale in volume in line with the usage growth expectations of the contributing applications. 
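The roll-up described above can be sketched as a simple calculation per workload. All inputs here are hypothetical placeholders; gather real interruption, transaction, and support figures from application business owners.

```python
# Hedged sketch of the resilience value calculation for a single workload.
# All inputs are hypothetical placeholders, not measured figures.

def annual_interruption_cost(hours_lost, tx_per_hour, value_per_tx,
                             support_hours, support_rate):
    """Lost-transaction revenue plus support engineers' time, per year."""
    return hours_lost * tx_per_hour * value_per_tx + support_hours * support_rate

# Example workload: 40 hours of interruptions/year on an ecommerce service.
baseline = annual_interruption_cost(hours_lost=40, tx_per_hour=500,
                                    value_per_tx=25, support_hours=600,
                                    support_rate=80)

# Expected vs. more conservative reduction in time lost after migration.
expected_benefit = baseline * 0.73      # e.g., 73% fewer outages per year
conservative_benefit = baseline * 0.50

print(f"Baseline interruption cost:  ${baseline:,.0f}/year")
print(f"Expected annual benefit:     ${expected_benefit:,.0f}")
print(f"Conservative annual benefit: ${conservative_benefit:,.0f}")
```

Summing this calculation across the portfolio, and ramping it with the migration schedule, produces the yearly expected and conservative value lines for the business case.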


## Develop the business agility value model
<a name="develop-business-agility-value-model"></a>

Business agility is the prime reason that AWS customers migrate to AWS. The [IDC 2018 Survey](https://pages.awscloud.com/rs/112-TZM-766/images/AWS-BV%20IDC%202018.pdf?aliId=1614258770) of AWS customers indicated that for them, business agility benefits accounted for 47 percent of the total benefits measured and over five times the benefits accruing from infrastructure cost reduction.

Accurately predicting all the business agility benefits that will accrue from any transformation is challenging. However, by focusing on applications that support large numbers of users or are sources of business differentiation, you can model and include a material portion of this benefit into the baseline detailed business case.

As the migration proceeds, incrementally refine and expand the business agility value model as more benefits become quantifiable. This keeps the business case relevant, so that it can be used as the primary decision support tool with which to steer the program.

To build the business agility value model, use the following guidance:
+ Select the workloads that have the opportunity to drive the greatest business performance improvement, such as:
  + Revenue-generating workloads 
  + Business operations workloads with scope for driving efficiency gains and removing costs from the business
  + Business productivity tools supporting large user bases
+ For revenue and efficiency generating workloads, do the following:
  + Make a realistic and a more conservative assessment of the revenue growth or operational efficiencies that major and minor application upgrades could be expected to drive.
  + Estimate the increased number of major and minor releases per year that faster application development and reduced infrastructure deployment time on AWS enable. Some baseline metrics for this are provided in the IDC report.
  + Calculate the realistic and more conservative benefit expectations. Map them over the period of the business case, making allowances for ramping up to full efficiency some time after the respective workloads are migrated.
+ For business productivity tools, do the following:
  + Make a realistic and a more conservative assessment of the time savings that major and minor application upgrades could be expected to drive.
  + Estimate the average cost of people's time and effort across the impacted user base.
  + Use the figures for increased major and minor release frequency, and calculate the benefits over the term of the business case.

Because the increased developer productivity and reduced time to launch require no additional resources, add the net benefit lines for each workload into the business case cash flow model for inclusion in the discounted cashflow, NPV, ROI, MIRR, and payback calculations.
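The discounted cash flow and payback calculations mentioned above can be sketched as follows. The year-by-year cash flows and the discount rate are hypothetical placeholders; build the real flows from the COM, FOM, and value models for each scenario.

```python
# Hypothetical sketch: rolling net benefit lines into NPV and simple payback.
# Cash flows and the discount rate are illustrative placeholders.

def npv(rate, cashflows):
    """Discounted cash flow; cashflows[0] is year 0 (typically negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_year(cashflows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # the case does not pay back within the modeled period

# Year 0 migration outlay, then net benefits ramping with the wave plan.
flows = [-455_000, 120_000, 260_000, 310_000, 310_000]

print(f"NPV at 8% discount rate: ${npv(0.08, flows):,.0f}")
print(f"Payback in year:         {payback_year(flows)}")
```

Running the same model against the expected and conservative benefit lines gives the spread of outcomes that decision makers typically ask for.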