Architecture details
This section describes the components and AWS services that make up this solution and how these components work together.
AWS services in this solution
| AWS service | Description |
|---|---|
| Amazon CloudFront | Core. Serve client requests to the Prebid Server application. |
| AWS DataSync | Core. Automate transfer of Prebid Server application logs and metrics from Amazon EFS to Amazon S3. |
| Amazon ECS | Core. Host and manage the containerized Prebid Server application. |
| Amazon EFS | Core. Centralize storage of Prebid Server application logs and metrics across containers. |
| Amazon ElastiCache | Core. Provide a serverless Redis-compatible cache (Valkey) for storing cached bid responses with a configurable time-to-live (TTL). |
| Application Load Balancer | Core. Provide high availability and automate scaling of Prebid Server application containers hosted on Amazon ECS. Route cache requests to the cache Lambda function. |
| Amazon SQS | Core. Send and receive messages between solution resources handling Prebid Server application metrics and logs. |
| AWS Glue | Core. Transform, catalog, and partition metrics data into Amazon S3 and the AWS Glue Data Catalog. |
| AWS IAM | Core. Restrict solution resource permissions to least-privilege access for security. |
| AWS KMS | Core. Encrypt and decrypt the data in Amazon S3. |
| AWS Lambda | Core. Facilitate deployment and deletion of the solution through Lambda-backed custom resources; clean archived log and metrics files from Amazon EFS after they are moved to Amazon S3 for long-term storage; trigger AWS Glue; and handle cache storage and retrieval operations. |
| Amazon S3 | Core. Provide long-term storage of Prebid Server application logs and metrics from Amazon EFS. |
| AWS Systems Manager | Core. Provide application-level resource monitoring and visualization of resource operations and cost data. |
| Amazon VPC | Core. Control network permissions between solution resources. |
| AWS WAF | Core. Provide a layer of security around Amazon CloudFront. |
| AWS CloudTrail | Supporting. Track activity across solution S3 buckets and Lambda functions. |
| Amazon CloudWatch | Supporting. View logs and subscribe to alarms for AWS Lambda and AWS Glue. |
| Amazon Athena | Optional. Access the AWS Glue Data Catalog and query the Prebid Server application metrics in Amazon S3. |
| AWS RTB Fabric | Optional. Provide low-latency, cost-optimized private network connectivity between Prebid Server and bidders without traversing the public internet. |
CloudFront distribution
The solution uses Amazon CloudFront as the unified network entry point. It receives incoming auction requests and handles outgoing responses. CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve it, and provides a TLS endpoint so that requests and responses remain private in transit over the public internet. The ALB is the configured origin for CloudFront, and direct access to the ALB is restricted by using a custom header, enhancing security.
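The origin restriction described above can be sketched as a simple header check. The header name and value below are hypothetical placeholders; in the solution, the secret is generated at deployment time and the check is enforced by an ALB listener rule rather than application code.

```python
# Sketch: conceptual model of the custom-header origin restriction.
# CloudFront attaches a secret header to every origin request; the ALB
# forwards only requests that carry the expected value.

CUSTOM_HEADER_NAME = "x-origin-verify"        # hypothetical header name
CUSTOM_HEADER_VALUE = "example-secret-token"  # hypothetical secret value

def is_from_cloudfront(headers: dict) -> bool:
    """Return True when the request carries the expected custom header."""
    return headers.get(CUSTOM_HEADER_NAME) == CUSTOM_HEADER_VALUE
```

Requests arriving through CloudFront carry the header and pass; requests sent directly to the ALB do not, and are rejected.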
AWS WAF
AWS WAF and AWS Shield Standard protect the Prebid Server cluster from Distributed Denial of Service (DDoS) attacks. By default, the solution activates one or more AWS WAF managed rule groups, selected after extended testing, including rules in the Baseline rule group and the IP Reputation rule group. You can activate, purchase, or use existing rule subscriptions, or add regular expression or CIDR matching rules as needed.
Note
If you want to opt out of using CloudFront and AWS WAF and directly send requests to the ALB, see How to opt out.
Application Load Balancer (ALB)
ALB distributes incoming request traffic for Prebid Server through the cluster of containers. It provides a single entry point into the cluster and is the primary origin for the CloudFront distribution. ALB also routes cache-related requests (paths starting with /cache) to the cache Lambda function, which handles storage and retrieval of cached bid responses in ElastiCache.
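The ALB routing behavior above can be sketched as a path check. The target group names here are illustrative, not the solution's actual resource names.

```python
def route_target(path: str) -> str:
    """Mirror the ALB listener rules: paths starting with /cache go to the
    cache Lambda target group; all other requests go to the Prebid Server
    container target group."""
    if path.startswith("/cache"):
        return "cache-lambda"            # illustrative target group name
    return "prebid-server-containers"    # illustrative target group name
```

For example, an auction request to `/openrtb2/auction` is forwarded to the container cluster, while `/cache?uuid=...` is handled by the cache Lambda function.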
Amazon VPC
The Amazon Virtual Private Cloud (Amazon VPC) is configured with redundant subnets, routes, and NAT gateways. Security groups permit traffic to and from the subnets. The Amazon VPC contains the network interfaces for the Prebid Server container cluster nodes. It is configured for private IP addresses only, and container networks within the Amazon VPC use the NAT gateway as the default route to the internet.
When the bidder simulator is deployed, the solution supports two connectivity modes:
VPC Peering (default)
When deploying with the bidder simulator, the solution automatically creates a VPC peering connection between the Prebid Server VPC and the Bidder Simulator VPC. This provides direct, private connectivity between the two environments without traversing the public internet. The peering connection is automatically configured with appropriate routes and security group rules.
AWS RTB Fabric (optional)
When deploying with both the bidder simulator and RTB Fabric, the solution creates a private network connection through AWS RTB Fabric instead of VPC peering. This provides purpose-built, low-latency connectivity optimized for real-time bidding traffic. See the AWS RTB Fabric integration section for details.
Amazon ECS
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you deploy, manage, and scale the containerized Prebid Server application. These resources define the configuration, count, and thresholds used to scale out and scale in the total container count in the ECS cluster. The ECS task and service resources define the operating environment for the cluster and the thresholds for scaling and health. Scaling changes are based on CPU, process load, and network traffic (requests per target). For cost optimization, ECS uses a weighted combination of Fargate and Fargate Spot instances. Using more Fargate Spot instances lowers cost, but increases the risk of unavailability. After running the solution for a while, you might find that a different ratio works better for you.
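The weighted Fargate/Fargate Spot split can be modeled as follows. This is a simplified sketch of how an ECS capacity provider strategy distributes tasks; the base and weight values in the example are illustrative, not the solution's defaults, and ECS's actual placement also considers availability and task lifecycle.

```python
def distribute_tasks(total_tasks: int, strategy: list) -> dict:
    """Approximate how ECS splits tasks across capacity providers.

    strategy: list of (provider_name, base, weight). Base tasks are placed
    on their provider first; the remainder is split proportionally by weight.
    """
    counts = {name: 0 for name, _, _ in strategy}
    remaining = total_tasks
    # Satisfy each provider's base count first.
    for name, base, _ in strategy:
        placed = min(base, remaining)
        counts[name] += placed
        remaining -= placed
    # Split what is left by weight (integer division).
    total_weight = sum(w for _, _, w in strategy) or 1
    allocated = 0
    for name, _, weight in strategy:
        share = remaining * weight // total_weight
        counts[name] += share
        allocated += share
    # Hand any rounding leftover to the first provider.
    counts[strategy[0][0]] += remaining - allocated
    return counts
```

With a hypothetical strategy of base 2 on Fargate and a 1:3 weight ratio favoring Fargate Spot, 10 tasks would land as 4 on Fargate and 6 on Fargate Spot.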
AWS RTB Fabric integration (optional)
AWS RTB Fabric
Requester Gateway
Deployed in the Prebid Server VPC, the requester gateway sends bid requests over HTTPS (port 443) through the RTB Fabric private network. Prebid Server is configured to route bid requests to the RTB Fabric link URL instead of directly to bidder endpoints.
Responder Gateway
Deployed in the Bidder Simulator VPC, the responder gateway receives bid requests over HTTP (port 80) and forwards them to the bidder simulator Application Load Balancer. The connection uses asymmetric security—HTTPS from requester to responder, with HTTP responses on the internal AWS network.
Fabric Link
The Fabric Link connects the requester and responder gateways through AWS RTB Fabric’s private network. A Lambda function automatically accepts the Fabric Link during deployment. The link provides dedicated bandwidth and low-latency routing optimized for real-time bidding traffic patterns.
Note
RTB Fabric requires the bidder simulator to be deployed as it needs both requester and responder gateways. RTB Fabric must also be available in your deployment region.
Bidder Simulator (optional)
The bidder simulator is an optional component that provides a quick-start testing environment to validate your Prebid Server deployment without needing to configure external bidders. The simulator includes the components described below.
Architecture
The bidder simulator uses a CloudFront + Application Load Balancer + Lambda architecture to simulate bidder responses. It supports both banner and video ad formats, including VAST instream video.
Bidder Simulator Adapter Integration
When deployed, the solution automatically configures the custom prebid bidder adapter in Prebid Server allowing connectivity to the Bidder Simulator. The adapter files are conditionally included in the Docker build, and environment variables are automatically set for proper integration.
Demo Website
A demo website with Prebid.js integration is included to test the end-to-end flow from prebid.js through Prebid Server to the bidder simulator. The demo supports both banner and video ad units. For usage instructions, see the demo website readme at source/loadtest/demo/README.md.
Connectivity Options
The bidder simulator supports two connectivity modes to Prebid Server:
- VPC Peering (default): Direct private connectivity through a VPC peering connection
- RTB Fabric: Private network connectivity through AWS RTB Fabric
Note
The bidder simulator is intended for testing and validation purposes. For production deployments, configure Prebid Server to connect to your actual bidder endpoints.
Cache architecture
The solution includes a cache service that stores and retrieves bid responses for Prebid Server. The cache architecture uses the following components:
ElastiCache Serverless (Valkey)
The solution uses Amazon ElastiCache Serverless with Valkey (Redis-compatible) engine to provide a fully managed, serverless cache. The cache is deployed in the private subnets of the VPC and uses IAM authentication for secure access. Cache data is stored with configurable time-to-live (TTL) settings.
Cache Lambda Function
An AWS Lambda function handles cache storage and retrieval operations. The function is invoked through ALB target group rules that route requests with paths starting with /cache to the Lambda function. The Lambda function connects to the ElastiCache Serverless cache using IAM authentication and the Redis protocol.
Unified Cache Endpoint
Both Prebid Server containers and client-side code (prebid.js) use the same publicly accessible cache endpoint:
- CloudFront deployment: Cache endpoint is the CloudFront distribution domain
- ALB-only deployment: Cache endpoint is the external ALB DNS name
This unified approach ensures consistent cache access patterns and simplifies the architecture by eliminating the need for separate internal and external cache endpoints.
Cache Request Flow
When Prebid Server needs to cache a bid response:
1. The container sends a cache request to the configured CACHE_HOST (CloudFront domain or external ALB DNS).
2. The request flows through CloudFront (if deployed) to the external ALB.
3. The ALB routes the request to the cache Lambda function based on path rules.
4. The Lambda function stores the bid response in ElastiCache Serverless and returns a cache key.
5. Client-side code later retrieves the cached response using the same endpoint.
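The store-and-retrieve behavior of the cache service can be sketched as follows, using an in-memory dict in place of ElastiCache Serverless (Valkey). The TTL default and key format are illustrative assumptions, not the solution's actual values.

```python
import time
import uuid

# Stand-in for ElastiCache: key -> (expiry timestamp, payload)
_store: dict = {}

def put_cache(payload: str, ttl_seconds: int = 300, now=time.time) -> str:
    """Store a bid response and return the cache key the client uses later."""
    key = uuid.uuid4().hex
    _store[key] = (now() + ttl_seconds, payload)
    return key

def get_cache(key: str, now=time.time):
    """Retrieve a cached bid response, honoring the TTL."""
    entry = _store.get(key)
    if entry is None:
        return None
    expires_at, payload = entry
    if now() > expires_at:
        del _store[key]  # entry has expired; evict it
        return None
    return payload
```

In the deployed solution, the Lambda function performs these operations against ElastiCache over the Redis protocol with IAM authentication, and the TTL is configurable.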
Note
The cache service is designed to handle both server-side caching (from Prebid Server containers) and client-side retrieval (from browsers). Using a single public endpoint for both access patterns ensures optimal performance and simplifies configuration.
Prebid Server container
This is a Docker container that runs the open source Prebid Server and is hosted in Amazon Elastic Container Registry (Amazon ECR).
Amazon EFS
The EFS file system is mounted and shared among all container instances in the ECS cluster. This file system is used for log capture (operational and metrics), and has the potential to be expanded to include shared configuration and storage related to more advertisement types (for example, video and mobile).
DataSync (EFS to S3)
DataSync is configured to periodically move rotated log files from each Prebid Server container's EFS location to an equivalent location in the DataSyncLogsBucket S3 bucket. After each file is copied to S3 and verified, it is removed from the EFS file system by a clean-up Lambda function. Essentially, only actively written log files are retained on the EFS file system until the Prebid Server process closes the current file, rotates it, and starts a new one; rotated log files are migrated with DataSync. Runtime logs are rotated every 24 hours or when reaching 100 MB. Metrics logs are rotated every hour or when reaching 100 MB.
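The rotation thresholds above can be expressed as a simple predicate. This is a sketch for illustration, not the solution's actual rotation code.

```python
def should_rotate(age_hours: float, size_mb: float, kind: str) -> bool:
    """Apply the documented rotation policy: runtime logs rotate every
    24 hours or at 100 MB; metrics logs rotate every hour or at 100 MB."""
    max_age_hours = 24 if kind == "runtime" else 1
    return age_hours >= max_age_hours or size_mb >= 100
```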
Glue ETL (Metrics processing)
AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use it for analytics, machine learning, and application development. It also includes additional productivity and data ops tooling for authoring, running jobs, and implementing business workflows. This resource is responsible for periodically processing new metrics log files in the DataSyncLogsBucket S3 bucket. The CSV-formatted metrics are transformed into several tables and partitioned. After ETL processing completes, the new data is available to clients through AWS Glue Data Catalog.
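Partitioning typically means writing Hive-style key=value prefixes into S3 so that Athena can prune scans by date. The partition columns below are an assumption for illustration; consult the deployed AWS Glue Data Catalog for the actual table schemas and partition keys.

```python
from datetime import datetime, timezone

def partition_path(table: str, ts: datetime) -> str:
    """Build an illustrative Hive-style S3 prefix for a metrics record.
    The year/month/day partition scheme is a hypothetical example."""
    return f"{table}/year={ts.year}/month={ts.month:02d}/day={ts.day:02d}/"
```

A record timestamped 2024-03-05 would land under `metrics/year=2024/month=03/day=05/`, letting date-filtered queries read only the matching prefixes.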
AWS Glue Data Catalog
The AWS Glue Data Catalog provides clients access to the Prebid Server metrics data through Athena or other compatible clients, such as Amazon SageMaker AI, Amazon QuickSight, and JDBC clients. Clients can query and view the Prebid Server metrics data and generate graphs, summaries, or inferences using AI/ML.
Amazon CloudWatch
CloudWatch alarms monitor specific metrics in real-time and proactively notify AWS Management Console users when predefined conditions are met. This solution has several CloudWatch alarms to help monitor its health and performance. These alarms are enabled automatically when the AWS CDK stack is deployed. For details, see the CloudWatch Alarms section.
Note
All resources are created in a single Region specified by the user, except for CloudFront and AWS WAF. CloudFront is considered a global resource, and AWS WAF is always created in the us-east-1 (N. Virginia) Region for configuration with CloudFront.