

 This whitepaper is for historical reference only. Some content might be outdated and some links might not be available.

# Implementation Techniques

The following techniques are examples of how you can implement blue/green deployments on AWS. While AWS highlights specific services in each technique, you may use other services or tools to implement the same pattern. Choose the appropriate technique based on your existing architecture, the nature of the application, and your organization's goals for software deployment. Experiment as much as possible to gain experience with your environment and to understand how the different deployment risk factors affect your specific workload.

**Topics**
+ [Update DNS Routing with Amazon Route 53](update-dns-routing-with-amazon-route-53.md)
+ [Swap the Auto Scaling Group Behind the Elastic Load Balancer](swap-the-auto-scaling-group-behind-elastic-load-balancer.md)
+ [Update Auto Scaling Group launch configurations](update-auto-scaling-group-launch-configurations.md)
+ [Swap the Environment of an Elastic Beanstalk Application](swap-the-environment-of-an-elastic-beanstalk-application.md)
+ [Clone a Stack in AWS OpsWorks and Update DNS](clone-a-stack-in-aws-opsworks-and-update-dns.md)

# Update DNS Routing with Amazon Route 53

DNS routing through record updates is a common approach to blue/green deployments. DNS serves as the mechanism for switching traffic from the blue environment to the green one, and back again if a rollback is necessary. This approach works with a wide variety of environment configurations, as long as the environment's endpoint can be expressed as a DNS name or IP address.

 Within AWS, this technique applies to environments that are: 
+  Single instances, with a public or Elastic IP address 
+  Groups of instances behind an Elastic Load Balancing load balancer, or third-party load balancer 
+  Instances in an Auto Scaling group with an Elastic Load Balancing load balancer as the front end 
+  Services running on an Amazon Elastic Container Service (Amazon ECS) cluster fronted by an Elastic Load Balancing load balancer 
+  Elastic Beanstalk environment web tiers 
+  Other configurations that expose an IP or DNS endpoint 

The following figure shows how Amazon Route 53 manages the DNS hosted zone. By updating the [alias record](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html), you can route traffic from the blue environment to the green environment.

![\[AWS architecture showing traffic routing between blue and green environments using Route 53 and load balancing.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/classic-dns.png)


*Classic DNS pattern*

You can shift traffic all at once or through a weighted distribution. For weighted distribution with Amazon Route 53, you define the percentage of traffic that goes to the green environment and then gradually update the weights until the green environment carries the full production traffic. This enables canary analysis, where a small percentage of production traffic is introduced to the new environment: you can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. A gradual shift also gives the green environment time to scale out to support the full production load if you're using Elastic Load Balancing (ELB), for example. [ELB automatically scales its request-handling capacity](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) to meet inbound application traffic, but scaling isn't instant, so we recommend that you test, observe, and understand your traffic patterns. Load balancers can also be pre-warmed (configured for optimum capacity) through a support request.

![\[AWS architecture diagram showing traffic distribution between blue and green environments using Route 53.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/classic-dns-weighted.png)


*Classic DNS-weighted distribution*
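
The weighted shift described above can be sketched as a Route 53 change batch. This is a minimal sketch, not a complete deployment script; the record name, hosted zone IDs, and load balancer DNS names are placeholders.

```python
# Sketch: build a Route 53 UPSERT change batch that splits traffic between
# the blue and green environments using weighted alias records.

def weighted_change_batch(record_name, blue_target, green_target, green_weight):
    """Return a ChangeResourceRecordSets change batch that sends
    `green_weight` out of every 100 requests to the green environment."""
    if not 0 <= green_weight <= 100:
        raise ValueError("green_weight must be between 0 and 100")

    def upsert(identifier, alias_target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "SetIdentifier": identifier,   # distinguishes the two weighted records
                "Weight": weight,
                "AliasTarget": alias_target,   # alias to each environment's load balancer
            },
        }

    return {
        "Comment": f"Shift {green_weight}% of traffic to green",
        "Changes": [
            upsert("blue", blue_target, 100 - green_weight),
            upsert("green", green_target, green_weight),
        ],
    }

# Example: canary shift, 10% of traffic to green, 90% stays on blue.
blue_lb = {"HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder ELB hosted zone ID
           "DNSName": "blue-elb.us-east-1.elb.amazonaws.com",
           "EvaluateTargetHealth": True}
green_lb = {"HostedZoneId": "Z35SXDOTRQ7X7K",
            "DNSName": "green-elb.us-east-1.elb.amazonaws.com",
            "EvaluateTargetHealth": True}
batch = weighted_change_batch("www.example.com.", blue_lb, green_lb, 10)
```

You would pass the resulting batch to the Route 53 `ChangeResourceRecordSets` operation (for example, boto3's `route53.change_resource_record_sets`) and repeat with increasing weights until the green weight reaches 100.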

If issues arise during the deployment, you can roll back by updating the DNS record to shift traffic back to the blue environment. Although DNS routing is simple to implement for blue/green deployments, consider how quickly you can complete a rollback. The DNS time to live (TTL) determines how long clients cache query results, and older clients, or clients that aggressively cache DNS records, may keep certain sessions tied to the previous environment.

Although rollback can be challenging, this technique has the benefit of enabling a granular transition at your own pace, allowing for more substantial testing and for scaling activities. To help manage costs, consider using Auto Scaling to scale out resources based on actual demand; this works well with a gradual shift using the Amazon Route 53 weighted distribution. For a full cutover, be sure to tune your Auto Scaling policy to scale as expected, and remember that the new Elastic Load Balancing endpoint may also need time to scale up.

# Swap the Auto Scaling Group Behind the Elastic Load Balancer

If DNS complexities are prohibitive, consider using load balancing to manage traffic to your blue and green environments. This technique uses Auto Scaling to manage the EC2 resources for your blue and green environments, scaling up or down based on actual demand. You can also control the size of an Auto Scaling group by updating its maximum and desired instance counts.

Auto Scaling also integrates with Elastic Load Balancing (ELB), so any new instances are automatically added to the load balancing pool if they pass the health checks governed by the load balancer. ELB tests the health of your registered EC2 instances with a simple ping or a more sophisticated connection attempt or request. Health checks occur at configurable intervals and have defined thresholds that determine whether an instance is identified as healthy or unhealthy. For example, a health check policy might ping port 80 every 20 seconds and, after a threshold of 10 successful pings, report the instance as `InService`. If enough ping requests time out, the instance is reported as `OutOfService`. With Auto Scaling, an `OutOfService` instance can be replaced if the Auto Scaling policy dictates. Conversely, during scale-in activities, the load balancer removes the EC2 instance from the pool and drains current connections before the instance terminates.
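
The threshold behavior can be illustrated with a small simulation. This is an illustration of the logic only, not an AWS API, and the default thresholds below mirror the hypothetical policy described above.

```python
# Sketch: the consecutive-result threshold logic behind ELB health checks.
# An instance becomes InService after `healthy_threshold` consecutive
# successful checks and OutOfService after `unhealthy_threshold`
# consecutive failures.

def health_state(results, healthy_threshold=10, unhealthy_threshold=2,
                 initial="OutOfService"):
    """Replay a sequence of check results (True = success) and return
    the final health state."""
    state = initial
    streak_ok = streak_bad = 0
    for ok in results:
        if ok:
            streak_ok += 1
            streak_bad = 0                 # a success resets the failure streak
            if streak_ok >= healthy_threshold:
                state = "InService"
        else:
            streak_bad += 1
            streak_ok = 0                  # a failure resets the success streak
            if streak_bad >= unhealthy_threshold:
                state = "OutOfService"
    return state
```

For example, ten consecutive successful pings bring an instance `InService`, while two subsequent timeouts mark it `OutOfService` again.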

The following figure shows the environment boundary reduced to the Auto Scaling group. A blue group carries the production load while a green group is staged and deployed with the new code. When it's time to deploy, you simply attach the green group to the existing load balancer to introduce traffic to the new environment. For HTTP/HTTPS listeners, the load balancer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm. For more information, see [How Elastic Load Balancing Works](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/how-elb-works.html). You can also control how much traffic is introduced by adjusting the size of the green group up or down.

![\[AWS architecture showing blue-green deployment with Auto Scaling groups and load balancing.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/swap-auto-scaling-group.png)


*Swap Auto Scaling group pattern*

As you scale up the green Auto Scaling group, you can take blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state. For more information, see [Temporarily removing instances from your Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html). Standby is a good option because, if you need to roll back to the blue environment, you only have to put the [blue server instances back in service](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html) and they're ready to go. As soon as the green group is scaled up without issues, you can decommission the blue group by adjusting its group size to zero. If you need to roll back, detach the load balancer from the green group or reduce the green group's size to zero.
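
This cutover can be sketched as a sequence of Auto Scaling API calls. The method names below match boto3's Auto Scaling client, but the group names, load balancer name, and instance IDs are placeholders, and the plan is a sketch rather than a production runbook.

```python
# Sketch: the attach/standby/rollback steps as (api_name, params) pairs,
# shaped like boto3 Auto Scaling client calls.

def swap_plan(load_balancer, blue_group, green_group, blue_instances):
    """Cut over to the green group while keeping blue instances on
    Standby for fast rollback."""
    return [
        # 1. Introduce green instances to the existing load balancer.
        ("attach_load_balancers",
         {"AutoScalingGroupName": green_group,
          "LoadBalancerNames": [load_balancer]}),
        # 2. Park blue instances in Standby instead of terminating them.
        ("enter_standby",
         {"AutoScalingGroupName": blue_group,
          "InstanceIds": blue_instances,
          "ShouldDecrementDesiredCapacity": True}),
    ]

def rollback_plan(blue_group, green_group, blue_instances):
    """Reverse the cutover: bring blue back, drop green to zero."""
    return [
        ("exit_standby",
         {"AutoScalingGroupName": blue_group,
          "InstanceIds": blue_instances}),
        ("update_auto_scaling_group",
         {"AutoScalingGroupName": green_group,
          "MinSize": 0, "DesiredCapacity": 0}),
    ]

plan = swap_plan("prod-elb", "blue-asg", "green-asg", ["i-0abc", "i-0def"])
```

For Application Load Balancers, the equivalent attach call is `attach_load_balancer_target_groups`, which takes target group ARNs instead of load balancer names.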

![\[AWS architecture showing load balancing and auto scaling transition between two regions.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/blue-auto-scaling-group-nodes.png)


*Blue Auto Scaling group nodes in standby and decommission*

This pattern's traffic management capabilities aren't as granular as the classic DNS pattern's, but you can still exercise control through the configuration of the Auto Scaling groups. For example, you could run a larger fleet of smaller instances with finer scaling policies, which also helps control the costs of scaling. Because the complexities of DNS are removed, the traffic shift itself is more expedient. In addition, with an already warm load balancer, you can be confident that you'll have the capacity to support the production load.

# Update Auto Scaling Group launch configurations

A launch configuration contains information such as the Amazon Machine Image (AMI) ID, instance type, key pair, one or more security groups, and a block device mapping. Each Auto Scaling group has its own launch configuration. You can associate only one launch configuration with an Auto Scaling group at a time, and a launch configuration can't be modified after you create it. To change the launch configuration associated with an Auto Scaling group, replace the existing launch configuration with a new one. After the new launch configuration is in place, any new instances that are launched use the new launch configuration parameters, but existing instances are not affected. When Auto Scaling removes instances (referred to as *scaling in*) from the group, the default termination policy is to remove instances with the oldest launch configuration first. However, if the Availability Zones were unbalanced to begin with, Auto Scaling could remove an instance with the new launch configuration to rebalance the zones. In such situations, you should have processes in place to compensate for this effect.

 To implement this technique, start with an Auto Scaling group and an Elastic Load Balancing load balancer. The current launch configuration has the blue environment as shown in the following figure. 

![\[AWS architecture diagram showing Auto Scaling group with blue and green launch configs connecting to various AWS services.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/launch-configuration-update.png)


*Launch configuration update pattern*

 To deploy the new version of the application in the green environment, update the Auto Scaling group with the new launch configuration, and then scale the Auto Scaling group to twice its original size. 

![\[AWS architecture diagram showing Auto Scaling group with blue and green launch configurations.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/scale-up-green-launch.png)


*Scale up green launch configuration*

The next step is to shrink the Auto Scaling group back to its original size. By default, instances with the old launch configuration are removed first. You can also use the group's Standby state to [temporarily remove instances](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html) from the Auto Scaling group. Keeping instances in Standby state enables a quick rollback, if required. As soon as you're confident in the newly deployed version of the application, you can permanently remove the instances in Standby state.

![\[AWS architecture diagram showing user traffic flow through Route 53, load balancing, and auto scaling group to various AWS services.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/scale-down-blue-launch.png)


*Scale down blue launch configuration*

To perform a rollback, update the Auto Scaling group with the old launch configuration and perform the preceding steps in reverse. Or, if the instances are in Standby state, bring them back online.
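
The double-then-shrink sequence above can be sketched as a plan of Auto Scaling API calls. This is a minimal sketch; the method names follow boto3's Auto Scaling client, and the group and launch configuration names are placeholders.

```python
# Sketch: replace a launch configuration by doubling the group, then
# shrinking it, expressed as (api_name, params) pairs.

def launch_config_update_plan(group, new_launch_config, current_capacity):
    """Roll the group onto `new_launch_config` by scaling to twice its
    original size and back, relying on the default termination policy."""
    return [
        # 1. Point the group at the green launch configuration.
        ("update_auto_scaling_group",
         {"AutoScalingGroupName": group,
          "LaunchConfigurationName": new_launch_config}),
        # 2. Scale to twice the original size: every new instance
        #    launches with the green configuration.
        ("set_desired_capacity",
         {"AutoScalingGroupName": group,
          "DesiredCapacity": current_capacity * 2}),
        # 3. Shrink back: the default termination policy removes
        #    instances with the oldest launch configuration first.
        ("set_desired_capacity",
         {"AutoScalingGroupName": group,
          "DesiredCapacity": current_capacity}),
    ]

plan = launch_config_update_plan("web-asg", "green-lc", 4)
```

To favor Standby over termination in step 3, you would instead call `enter_standby` on the blue instances, as described above.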

# Swap the Environment of an Elastic Beanstalk Application

Elastic Beanstalk enables quick and easy deployment and management of applications without your having to worry about the infrastructure that runs them. To deploy an application using Elastic Beanstalk, upload an application version in the form of an application bundle (for example, a Java `.war` file or a `.zip` file), and then provide some information about your application. Based on that information, Elastic Beanstalk deploys the application in the blue environment and provides a URL to access the environment (typically for web server environments).

 Elastic Beanstalk provides several deployment policies that you can configure for use, ranging from policies that perform an in-place update on existing instances, to immutable deployment using a set of new instances. Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. 

However, you can avoid this downtime by deploying the new version to a separate environment. The existing environment's configuration is copied and used to launch the green environment with the new version of the application. The new green environment will have its own URL. When it's time to promote the green environment to serve production traffic, you can use [Elastic Beanstalk's Swap Environment URLs feature](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html).

 To implement this technique, use Elastic Beanstalk to spin up the blue environment. 

![\[Elastic Beanstalk architecture with Route 53, load balancing, and auto scaling group.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/elastic-beanstalk-environment.png)


*Elastic Beanstalk environment*

 Elastic Beanstalk provides an environment URL when the application is up and running. The green environment is spun up with its own environment URL. At this time, two environments are up and running, but only the blue environment is serving production traffic. 

![\[AWS architecture diagram showing Elastic Beanstalk containers with load balancing and auto scaling groups.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/prepare-green-elastic.png)


*Prepare green Elastic Beanstalk environment*

 Use the following procedure to promote the green environment to serve production traffic. 

1. Navigate to the environment's dashboard in the [Elastic Beanstalk console](https://console.aws.amazon.com/elasticbeanstalk/).

1. In the **Actions** menu, choose **Swap Environment URL**.

   Elastic Beanstalk performs a DNS switch, which typically takes a few minutes. See the [Update DNS Routing with Amazon Route 53](update-dns-routing-with-amazon-route-53.md) section for the factors to consider when performing a DNS switch.

1. Once the DNS changes have propagated, you can terminate the blue environment. 

   To perform a rollback, select **Swap Environment URL** again.

![\[AWS architecture diagram showing Elastic Beanstalk containers with load balancing and auto scaling.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/decommission-blue-elastic.png)


*Decommission blue Elastic Beanstalk environment*
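
The swap and rollback above map to a single API operation, `SwapEnvironmentCNAMEs` (boto3's `swap_environment_cnames`). The following is a minimal sketch of the request parameters; the environment names are placeholders.

```python
# Sketch: request parameters for the Elastic Beanstalk
# SwapEnvironmentCNAMEs operation, which exchanges the CNAMEs (URLs)
# of two environments.

def swap_call(source_env, destination_env):
    """Parameters for elasticbeanstalk.swap_environment_cnames."""
    return {"SourceEnvironmentName": source_env,
            "DestinationEnvironmentName": destination_env}

promote = swap_call("blue-env", "green-env")   # green takes production traffic
rollback = swap_call("green-env", "blue-env")  # swap again to roll back
```

Because the operation is symmetric, rollback is simply the same call with the environments exchanged.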

# Clone a Stack in AWS OpsWorks and Update DNS

[AWS OpsWorks](https://aws.amazon.com/opsworks) uses the concept of *stacks*, which are logical groupings of AWS resources (EC2 instances, Amazon RDS instances, Elastic Load Balancing, and so on) that have a common purpose and should be logically managed together. Stacks are made up of one or more layers. A layer represents a set of EC2 instances that serve a particular purpose, such as serving applications or hosting a database server. When a data store is part of the stack, be aware of certain data management challenges, such as those discussed in the next section.

 To implement this technique in AWS OpsWorks, bring up the blue environment/stack with the current version of the application. 

![\[AWS architecture diagram showing user traffic flow through Route 53, load balancing, and application servers.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/aws-opsworks-stack.png)


*AWS OpsWorks stack*

Next, create the green environment/stack with the newer version of the application. At this point, the green environment is not receiving any traffic. If Elastic Load Balancing needs to be initialized, you can do that at this time.
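
Creating the green stack from the blue one maps to the OpsWorks `CloneStack` API (boto3's `clone_stack`). The following is a minimal sketch of the request parameters; the stack ID, service role ARN, and stack name are placeholders.

```python
# Sketch: request parameters for the OpsWorks CloneStack operation,
# which copies an existing stack's configuration into a new stack.

def clone_stack_params(source_stack_id, service_role_arn, green_name):
    """Parameters for opsworks.clone_stack."""
    return {
        "SourceStackId": source_stack_id,
        "ServiceRoleArn": service_role_arn,
        "Name": green_name,
        "ClonePermissions": True,  # carry over the source stack's user permissions
    }

params = clone_stack_params(
    "2f18b4cb-example-stack-id",                              # placeholder
    "arn:aws:iam::111122223333:role/aws-opsworks-service-role",  # placeholder
    "green-stack")
```

After cloning, you would deploy the new application version to the green stack's layers before shifting any traffic.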

![\[AWS architecture diagram showing blue and green stacks with load balancing and application server layers.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/clone-stack-to-create-green.png)


*Clone stack to create green environment*

When it's time to promote the green environment/stack into production, update the DNS records to point to the green environment/stack's load balancer. You can also perform this DNS flip gradually by using the Amazon Route 53 weighted routing policy. Because this process involves updating DNS, be aware of the DNS issues discussed in the [Update DNS Routing with Amazon Route 53](update-dns-routing-with-amazon-route-53.md) section.

![\[AWS architecture diagram showing blue and green stacks with Route 53 DNS endpoint for traffic routing.\]](http://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/images/decommission-blue-stack.png)


*Decommission blue stack*