

# Understanding CI/CD
<a name="understanding-cicd"></a>

Continuous integration and continuous delivery (CI/CD) is the process of automating the software release lifecycle. In some cases, the *D* in CI/CD can also mean *deployment*. The difference between *continuous delivery* and *continuous deployment* occurs when you release a change to the production environment. With continuous delivery, a manual approval is required before promoting changes to production. Continuous deployment features an uninterrupted flow through the entirety of the pipeline, and no explicit approvals are required. Because this strategy discusses general CI/CD concepts, the recommendations and information provided are applicable to both the continuous delivery and continuous deployment approaches.
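The distinction can be sketched as a single gate before the production stage. The following Python sketch is illustrative only; the mode names and the function are hypothetical, not part of any CI/CD tool:

```python
# Illustrative sketch: the structural difference between continuous delivery
# and continuous deployment is a manual approval gate before the production
# stage. The mode names and this function are hypothetical.

def promote_to_production(mode, approved=False):
    """Decide whether a change that passed all pipeline stages may ship."""
    if mode == "continuous_deployment":
        return True            # uninterrupted flow; no explicit approval
    if mode == "continuous_delivery":
        return approved        # a manual approval gates the production stage
    raise ValueError(f"unknown mode: {mode!r}")
```

Everything before this gate is identical in both approaches; only the explicit approval step differs.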

CI/CD automates much or all of the manual processes traditionally required to get new code from a commit into production. A CI/CD pipeline encompasses the source, build, test, staging, and production stages. In each stage, the CI/CD pipeline provisions any infrastructure that is needed to deploy or test the code. By using a CI/CD pipeline, development teams can make changes to code that are then automatically tested and pushed to deployment.

Let's review the basic CI/CD process before discussing some of the ways that you can, knowingly or unknowingly, deviate from being fully CI/CD. The following diagram shows the CI/CD stages and activities in each stage.



![The five stages of a CI/CD process and the activities and environments of each.](http://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-cicd-litmus/images/cicd-stages.png)


## About continuous integration
<a name="about-continuous-integration"></a>

Continuous integration occurs in a code repository, such as a Git repository in GitHub. You treat a single, main branch as the source of truth for the code base, and you create short-lived branches for feature development. You integrate a feature branch into the main branch when you're ready to deploy the feature to upper environments. Feature branches are never deployed directly to upper environments. For more information, see [Trunk-based approach](fully-cicd-process-differences.md#trunk-based-approach) in this guide.

*Continuous integration process*

1. The developer creates a new branch from the main branch.

1. The developer makes changes and builds and tests locally.

1. When the changes are ready, the developer creates a [pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests) (GitHub documentation) with the main branch as the destination.

1. The code is reviewed.

1. When the code is approved, it is merged into the main branch.
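The steps above amount to a merge gate on the main branch. The following Python sketch is a hypothetical illustration of that rule (not the GitHub API); the field names are made up:

```python
from dataclasses import dataclass

# Hypothetical illustration of the merge gate implied by steps 1-5 above;
# these field names are made up, and this is not the GitHub API.

@dataclass
class PullRequest:
    target_branch: str   # step 3: the destination of the pull request
    tested: bool         # step 2: built and tested by the developer
    approved: bool       # steps 4-5: reviewed and approved

def can_merge(pr: PullRequest) -> bool:
    """Trunk-based rule: only tested, approved changes merge into main."""
    return pr.target_branch == "main" and pr.tested and pr.approved
```

A pull request that targets any branch other than main, or that lacks tests or approval, never reaches the upper environments.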

## About continuous delivery
<a name="about-continuous-delivery"></a>

Continuous delivery occurs in isolated environments, such as development environments and production environments. The actions that occur in each environment can vary. Often, one of the first stages is used to make updates to the pipeline itself before proceeding. The end result of the deployment is that each environment is updated with the latest changes. The number of development environments for building and testing also varies, but we recommend that you use at least two. In the pipeline, each environment is updated in order of its significance, ending with the most important: the production environment.

*Continuous delivery process*

The continuous delivery portion of the pipeline starts by pulling the code from the main branch of the source repository and passing it to the build stage. The infrastructure as code (IaC) document for the repository outlines the tasks that are performed in each stage. Although using an IaC document is not mandatory, an IaC service or tool, such as [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) or [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html), is strongly recommended. The most common steps include:

1. Unit tests

1. Code build

1. Resource provisioning

1. Integration tests

If any errors occur or any tests fail at any stage in the pipeline, the current stage rolls back to its previous state, and the pipeline is terminated. Subsequent changes must start in the code repository and go through the full CI/CD process.
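The stage flow and failure behavior described above can be sketched as follows. This is an illustrative Python sketch; the stage names and the rollback mechanism are hypothetical, not a CloudFormation or CDK API:

```python
# Minimal sketch of the stage flow described above: run stages in order;
# on any failure, roll the current stage back and terminate the pipeline.
# Stage names and the rollback mechanism are illustrative only.

def run_pipeline(stages):
    """stages: list of (name, action) pairs, where action() raises on failure.

    Returns the list of stages that completed successfully.
    """
    completed = []
    for name, action in stages:
        snapshot = f"pre-{name}"          # state to restore if this stage fails
        try:
            action()
        except Exception:
            print(f"{name} failed; rolling back to {snapshot}; pipeline terminated")
            return completed              # subsequent stages never run
        completed.append(name)
    return completed
```

A failed integration test, for example, stops the pipeline before the next environment is touched; the fix must be committed to the repository and flow through every stage again.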

# Tests for CI/CD pipelines
<a name="tests-for-cicd-pipelines"></a>

The two types of automated tests that are commonly referred to in deployment pipelines are *unit tests* and *integration tests*. However, there are many types of tests that you can run on a code base and the development environment. The [AWS Deployment Pipeline Reference Architecture](https://pipelines.devops.aws.dev/application-pipeline/) defines the following types of tests:
+ **Unit test** – These tests build and run application code to verify that it is performing according to expectations. They simulate all external dependencies that are used in the code base. Examples of unit test tools include [JUnit](https://junit.org/), [Jest](https://jestjs.io/), and [pytest](https://docs.pytest.org/en/stable/).
+ **Integration test** – These tests verify that the application satisfies technical requirements by testing against a provisioned test environment. Examples of integration test tools include [Cucumber](https://cucumber.io/), [vRest NG](https://vrest.io/), and [integ-tests](https://docs.aws.amazon.com/cdk/api/v2/docs/integ-tests-alpha-readme.html) (for AWS CDK).
+ **Acceptance test** – These tests verify that the application satisfies user requirements by testing against a provisioned test environment. Examples of acceptance test tools include [Cypress](https://cypress.io/) and [Selenium](https://selenium.dev/).
+ **Synthetic test** – These tests run continuously in the background to generate traffic and verify that the system is healthy. Examples of synthetic test tools include [Amazon CloudWatch Synthetics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html) and [Dynatrace Synthetic Monitoring](https://www.dynatrace.com/monitoring/platform/synthetic-monitoring/).
+ **Performance test** – These tests simulate production capacity. They determine if the application meets performance requirements and compare metrics to past performance. Examples of performance test tools include [Apache JMeter](https://jmeter.apache.org/), [Locust](https://locust.io/), and [Gatling](https://gatling.io/).
+ **Resilience test** – Also known as *chaos testing*, these tests inject failures into environments in order to identify risk areas. Periods when the failures are injected are then compared to periods without the failures. Examples of resilience test tools include [AWS Fault Injection Service](https://aws.amazon.com/fis/) and [Gremlin](https://www.gremlin.com/).
+ **Static application security test (SAST)** – These tests analyze code for security violations, such as [SQL injection](https://owasp.org/www-community/attacks/SQL_Injection) or [cross-site scripting (XSS)](https://owasp.org/www-community/attacks/xss/). Examples of SAST tools include [Amazon CodeGuru](https://aws.amazon.com/codeguru/), [SonarQube](https://www.sonarqube.org/), and [Checkmarx](https://checkmarx.com/).
+ **Dynamic application security test (DAST)** – These tests are also known as *penetration testing* or *pen testing*. They identify vulnerabilities, such as SQL injection or XSS, in a provisioned test environment. Examples of DAST tools include [Zed Attack Proxy (ZAP)](https://www.zaproxy.org/) and [HCL AppScan](https://www.hcltechsw.com/appscan). For more information, see [Penetration Testing](https://aws.amazon.com/security/penetration-testing/).

Not all fully CI/CD pipelines run all of these tests. However, at a minimum, a pipeline should run unit tests and SAST tests on the code base as well as integration and acceptance tests on a test environment.
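As a concrete example of the unit-test category, the following pytest-style test simulates an external dependency with Python's `unittest.mock`, rather than calling the real service. The `get_price` function and its client are hypothetical, not from any real code base:

```python
from unittest import mock

# Hypothetical application code: fetches a price from an external service.
def get_price(client, item_id):
    """Return the price for item_id, or 0.0 if the service has no record."""
    record = client.fetch(item_id)   # external dependency, simulated in the test
    return record["price"] if record else 0.0

def test_get_price_simulates_dependency():
    # Unit tests simulate external dependencies rather than calling them.
    fake_client = mock.Mock()
    fake_client.fetch.return_value = {"price": 9.99}
    assert get_price(fake_client, "sku-1") == 9.99

    fake_client.fetch.return_value = None
    assert get_price(fake_client, "missing") == 0.0
```

Because the dependency is simulated, this test can run in the build stage with no provisioned environment; the integration and acceptance tests then exercise the real dependency in a test environment.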

# Metrics for CI/CD pipelines
<a name="metrics-for-cicd-pipelines"></a>

According to the [AWS Deployment Pipeline Reference Architecture](https://pipelines.devops.aws.dev/application-pipeline/), you should, at a minimum, track the following four metrics for CI/CD pipelines:
+ **Lead time** – The average amount of time it takes for a single commit to get all the way into production. We recommend targeting a lead time between 1 hour and 1 day, as appropriate for your use case.
+ **Deployment frequency** – The number of production deployments within a given period of time. We recommend targeting a deployment frequency that ranges from multiple times each day to twice each week, as appropriate for your use case.
+ **Mean time between failure (MTBF)** – The average amount of time between the start of a successful pipeline and the start of a failed pipeline. We recommend targeting an MTBF that is as high as possible. For more information, see [Increasing MTBF](https://docs.aws.amazon.com/whitepapers/latest/availability-and-beyond-improving-resilience/increasing-mtbf.html).
+ **Mean time to recovery (MTTR)** – The average amount of time between the start of a failed pipeline and the start of the next successful pipeline. We recommend targeting an MTTR that is as low as possible. For more information, see [Reducing MTTR](https://docs.aws.amazon.com/whitepapers/latest/availability-and-beyond-improving-resilience/reducing-mttr.html).

These metrics help teams track their progress toward becoming fully CI/CD. Teams should have open discussions with the organization's stakeholders regarding what the optimal goals should be. Situations and needs vary greatly from organization to organization, and even from team to team.
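As an illustration of how these four metrics might be computed, the following Python sketch derives them from a hypothetical log of pipeline runs. The record formats are made up for this example and are not part of the reference architecture:

```python
from datetime import datetime, timedelta

# Illustrative only: hypothetical record formats for computing the metrics.

def lead_time(commit_times, deploy_times):
    """Average time from each commit to its production deployment (paired lists)."""
    deltas = [d - c for c, d in zip(commit_times, deploy_times)]
    return sum(deltas, timedelta()) / len(deltas)

def deployment_frequency(deploy_times, period):
    """Production deployments per `period`, averaged over the log's span."""
    span = max(deploy_times) - min(deploy_times)
    return len(deploy_times) * (period / span)

def mtbf_and_mttr(runs):
    """runs: chronological list of (start_time, succeeded) pipeline runs.

    MTBF: start of a success to the start of the next (consecutive) failure.
    MTTR: start of a failure to the start of the next (consecutive) success.
    """
    between_failures, to_recovery = [], []
    for (prev_start, prev_ok), (start, ok) in zip(runs, runs[1:]):
        if prev_ok and not ok:
            between_failures.append(start - prev_start)
        elif not prev_ok and ok:
            to_recovery.append(start - prev_start)
    avg = lambda ds: sum(ds, timedelta()) / len(ds) if ds else None
    return avg(between_failures), avg(to_recovery)
```

In practice, these values come from the CI/CD system's own run history; the point of the sketch is only to make the four definitions above concrete.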

It's very important to remember that rapid, drastic change usually increases the risk of problems arising. Set goals to aim for small, incremental improvements. A common optimal lead time for fully CI/CD pipelines is less than 3 hours. A team that starts with a lead time of 5.2 days should target a reduction of one day every few weeks. After this team reaches a lead time of one day or less, they can stay there for several months and move to a more aggressive lead time only if the team and organization stakeholders deem it necessary.