Testing serverless applications on AWS
Amazon Web Services (contributors)
March 2026 (document history)
This guide discusses methodologies for testing serverless applications, describes the challenges that you might encounter during testing, and introduces best practices. These testing techniques are intended to help you iterate more quickly and release your code more confidently.
This guide is for developers who are looking to establish testing strategies for their serverless applications. You can use the guide as a starting point to learn about testing strategies, and then visit the Serverless Test Samples repository for hands-on examples.
Overview
Automated tests are critical investments that help ensure application quality and development speed. Testing also accelerates developer feedback. As a developer, you want to be able to iterate rapidly on your application and get feedback on the quality of your code. Many developers are used to writing applications that they deploy to an environment on their desktop, either directly to their operating system or within a container-based environment. When you work in desktop or container-based environments, you typically write tests against code that is hosted entirely on your desktop. However, in serverless applications, architecture components might not be deployable to a desktop environment but might instead exist only in the cloud. A cloud-based architecture might include persistence layers, messaging systems, security constructs, APIs, and other components. When you write application code that relies on these components, it might be difficult to determine the best way to design and run tests.
This guide helps you align to a testing strategy that reduces friction and confusion, and increases code quality.
Prerequisites
This guide assumes that you are familiar with the basics of automated tests, including how automated software tests are used to ensure software quality. The guide provides a high-level introduction to a serverless application testing strategy, and doesn't require any hands-on experience writing tests.
Definitions
This guide uses the following terms:
- Unit tests are tests that are run against code for a single architectural component in isolation.
- Integration tests are run against two or more architectural components, typically in a cloud environment.
- End-to-end tests verify behaviors across entire applications or workflows.
- Emulators are applications (often provided by a third party) that are designed to mimic a cloud service without provisioning or invoking any cloud resources.
- Mocks (also called fakes) are implementations in a testing application that replace a dependency with a simulation of that dependency.
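To illustrate mocking, the following sketch uses Python's standard `unittest.mock` module to replace a dependency with a simulation. The `get_username` function and the `get_item` method on the storage client are illustrative assumptions, not part of any AWS API; in a real application, the client might wrap an AWS SDK call.

```python
from unittest.mock import Mock

# Hypothetical business code: reads a record through an injected
# storage client rather than creating the client itself.
def get_username(storage_client, user_id):
    record = storage_client.get_item(key=user_id)
    return record.get("name", "unknown")

# In a test, replace the real dependency with a mock that simulates it.
mock_client = Mock()
mock_client.get_item.return_value = {"name": "alice"}

assert get_username(mock_client, "user-1") == "alice"
mock_client.get_item.assert_called_once_with(key="user-1")
```

Because the mock lives entirely inside the test process, no cloud resources are provisioned or invoked, which is what distinguishes a mock from an emulator.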
Objectives
The best practices in this guide are intended to help you achieve two main objectives:
- Increase the quality of serverless applications by:
  - Testing at architecture boundaries
  - Testing at code boundaries
- Decrease time to implement or change features
Increase software quality
An application's quality depends to a large extent on the ability of developers to test a variety of scenarios to verify functionality. When you don't implement automated tests, or, more typically, if your tests don't cover the required scenarios adequately, the quality of your application can't be determined or guaranteed.
In a server-based architecture, teams are able to easily define a scope for testing: Any code that runs on the application server has to be tested. Other components that call in to the server, or dependencies that the server calls, are often considered external and out of scope for testing by the team responsible for the application on the server.
Serverless applications often consist of smaller units of work, such as AWS Lambda functions, that run in their own environment. Teams will likely be responsible for multiples of these smaller units within a single application. Some application functionality can be delegated entirely to managed services such as Amazon Simple Storage Service (Amazon S3) or Amazon Simple Queue Service (Amazon SQS) without using any internally developed code. Traditional server-based models for software testing might exclude managed services by considering them external to the application. This can lead to inadequate coverage, where critical scenarios might be limited to manual exploratory testing or to a few integration test cases where the outcome varies by environment. Therefore, adopting testing strategies that encompass managed service behaviors and cloud configurations can improve software quality.
Testing at architecture boundaries
As serverless applications grow, they naturally spread across multiple architectural components. Although this takes advantage of the distributed capabilities of AWS, it can make end-to-end behavior difficult to understand.
Identifying natural boundaries
When designing your architecture following serverless best practices (one function = one job, decoupling), you'll notice natural boundaries around subsystems. These boundaries represent logical separation points in your application.
Boundaries as testing contracts
These architectural boundaries are excellent candidates for testing edges. Treat each boundary as a contract and validate that it behaves according to its defined specification. Think of these boundaries as seams in your application where you can insert test validation.
Key benefits
The following are key benefits of testing at architecture boundaries:
- Focused testing scope – Test subsystems independently without needing to understand the entire application.
- Contract validation – Ensure that each boundary maintains its expected behavior as the system evolves.
- Dual-purpose instrumentation – The same boundaries make excellent observability hooks in production.
- Test harness – Allows you to test asynchronous serverless systems. It helps you test event-driven architectures by capturing and validating events as they flow through your subsystem.
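A common building block for such a test harness is a poll-and-match helper that repeatedly fetches captured events until one satisfies a predicate or a timeout expires. The sketch below is generic and deliberately uses an in-memory event store for illustration; in a real harness, the `fetch` callable might drain an Amazon SQS queue or query a table that your subsystem writes events to (those integrations are assumptions and are not shown here).

```python
import time

def wait_for_event(fetch, predicate, timeout=10.0, interval=0.5):
    """Poll fetch() until it yields an event matching predicate.

    fetch: callable returning the list of events captured so far.
    predicate: callable deciding whether an event is the expected one.
    Returns the matching event, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for event in fetch():
            if predicate(event):
                return event
        time.sleep(interval)
    raise TimeoutError("expected event was not observed in time")

# Illustration with an in-memory list standing in for a captured-event queue.
captured = [{"type": "OrderCreated", "orderId": "123"}]
event = wait_for_event(lambda: captured,
                       lambda e: e.get("type") == "OrderCreated",
                       timeout=1.0, interval=0.1)
assert event["orderId"] == "123"
```

Polling with a deadline, rather than asserting immediately, is what makes the harness tolerant of the asynchronous delivery that event-driven architectures exhibit.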
Testing at code boundaries
Define clear code boundaries by separating infrastructure code, such as Lambda code, from your core business logic. This separation creates distinct testing scopes that simplify your test strategy.
The boundary pattern
Establish two clear code boundaries in your Lambda functions:
- Outer boundary (Lambda handler) – A slim adapter layer that handles concerns specific to AWS Lambda.
- Inner boundary (business logic) – Pure business logic methods that are independent of the Lambda runtime.
Handler as adapter (outer scope)
Your Lambda function handler should be a thin layer that:
- Extracts data from the incoming event and context objects
- Validates the extracted data
- Passes only relevant details to business logic methods
- Returns results in the expected format for Lambda
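The adapter pattern above can be sketched as follows. The event shape, field names, and discount logic are illustrative assumptions rather than a specific AWS event format; the point is the division of labor between the thin handler and the pure function it delegates to.

```python
import json

# Inner boundary: pure business logic, independent of Lambda specifics.
def calculate_discount(total, customer_tier):
    rate = {"gold": 0.10, "silver": 0.05}.get(customer_tier, 0.0)
    return round(total * (1 - rate), 2)

# Outer boundary: a thin handler that extracts, validates, delegates,
# and formats the response in the shape Lambda expects.
def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    if "total" not in body:
        return {"statusCode": 400,
                "body": json.dumps({"error": "total is required"})}
    result = calculate_discount(float(body["total"]), body.get("tier", ""))
    return {"statusCode": 200,
            "body": json.dumps({"discountedTotal": result})}

# The handler can be exercised locally with a hand-built event.
response = handler({"body": json.dumps({"total": 100, "tier": "gold"})}, None)
```

Because the handler contains no business rules, most test effort can target `calculate_discount` directly, with only a few tests confirming the extraction and response formatting.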
Business logic (inner scope)
Your core business logic should:
- Operate independently of details specific to Lambda
- Accept simple, validated inputs
- Return predictable outputs
- Require minimal dependencies for initialization
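One way to keep initialization dependencies minimal is to inject them, so tests can supply simple substitutes. The following sketch is illustrative; the `OrderService` class, its `repository` dependency, and the `save` method are hypothetical names, not part of any framework.

```python
class OrderService:
    """Business logic with its single dependency injected at construction."""

    def __init__(self, repository):
        # The repository only needs a save(order) method; a test can pass
        # any object that satisfies that, with no cloud setup required.
        self._repository = repository

    def place_order(self, items):
        if not items:
            raise ValueError("an order needs at least one item")
        order = {"items": list(items),
                 "total": sum(i["price"] for i in items)}
        self._repository.save(order)
        return order

# A trivial in-memory repository is enough to drive the logic in tests.
class InMemoryRepository:
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)

repo = InMemoryRepository()
order = OrderService(repo).place_order([{"sku": "a", "price": 3.0}])
```

In production, the injected dependency would wrap a managed service; in tests, the in-memory substitute keeps the inner boundary fast and deterministic.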
Testing benefits by scope
- Inner boundary tests – Comprehensive unit tests around business logic, without Lambda complexity or environment setup.
- Outer boundary tests – Focused integration tests that validate the adapter layer's event handling and data extraction.
- Minimal test overhead – No complex environments or extensive dependencies are needed for the majority of your tests.
This boundary-based approach allows you to test most of your code as pure functions while keeping Lambda tests minimal and targeted.
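To make the split concrete, the sketch below pairs fast pure-function tests (inner boundary) with a single handler-level test (outer boundary), using Python's standard `unittest` module. All function and event names are illustrative assumptions.

```python
import json
import unittest

def apply_tax(amount, rate):          # inner boundary: pure logic
    return round(amount * (1 + rate), 2)

def handler(event, context):          # outer boundary: thin adapter
    body = json.loads(event["body"])
    total = apply_tax(body["amount"], body.get("rate", 0.0))
    return {"statusCode": 200, "body": json.dumps({"total": total})}

class InnerBoundaryTests(unittest.TestCase):
    # Exhaustive, fast tests on the logic alone: no event plumbing needed.
    def test_tax_applied(self):
        self.assertEqual(apply_tax(100.0, 0.2), 120.0)

    def test_zero_rate(self):
        self.assertEqual(apply_tax(50.0, 0.0), 50.0)

class OuterBoundaryTests(unittest.TestCase):
    # A few targeted tests verifying extraction and response shaping.
    def test_handler_wraps_logic(self):
        event = {"body": json.dumps({"amount": 100.0, "rate": 0.2})}
        response = handler(event, None)
        self.assertEqual(json.loads(response["body"])["total"], 120.0)

if __name__ == "__main__":
    unittest.main()
```

Note the asymmetry: the inner boundary accumulates many cheap tests, while the outer boundary needs only enough to prove the adapter wires inputs and outputs correctly.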
Decrease time to implement or change features
You can minimize the effect of software bugs and configuration problems on costs and schedules by catching these issues during an iterative development cycle. When these issues go undetected during development, identifying and fixing them later requires additional effort from more people.
A serverless architecture might include managed services that provide critical application functionality through API calls. For this reason, your development cycle should include tests that validate both the happy path (where interactions with these services behave as expected) and the sad path (where calls fail, return unexpected responses, or behave differently across environments). Without these tests in place, you may encounter issues that stem from differences between your local environment and the deployed environment. When that happens, you must spend additional time attempting to reproduce and verify a fix, because each iteration now requires validating changes against an environment that differs from your preferred setup.
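The sad path can be rehearsed without a real service by forcing the dependency to fail. In the sketch below, a stubbed client raises an error that stands in for a failed service call; the `fetch_profile` wrapper and the `ServiceError` type are illustrative assumptions, and in real code you would catch the AWS SDK's specific exception types instead.

```python
from unittest.mock import Mock

class ServiceError(Exception):
    """Stand-in for a failed call to a managed service."""

def fetch_profile(client, user_id):
    # Happy path: return the service response.
    # Sad path: degrade gracefully instead of crashing.
    try:
        return client.get_profile(user_id)
    except ServiceError:
        return {"user_id": user_id, "profile": None, "degraded": True}

# Happy path: the stub behaves as the service is expected to.
ok_client = Mock()
ok_client.get_profile.return_value = {"user_id": "u1",
                                      "profile": {"name": "Ana"}}
assert fetch_profile(ok_client, "u1")["profile"] == {"name": "Ana"}

# Sad path: the stub simulates a failing call, which is hard to trigger
# on demand against a real deployed service.
bad_client = Mock()
bad_client.get_profile.side_effect = ServiceError("service unavailable")
assert fetch_profile(bad_client, "u1")["degraded"] is True
```

Exercising both paths in the same fast test run is what keeps environment-specific failures from surfacing only after deployment.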
A proper serverless testing strategy improves your iteration time by providing accurate results for tests that include calls to other services.