Lambda Managed Instances

Lambda Managed Instances enables you to run Lambda functions on Amazon EC2 instances without managing infrastructure. It supports multi-concurrent invocations, EC2 pricing models, and specialized compute options like Graviton4.

Key differences from Lambda On Demand

| Aspect | Lambda On Demand | Lambda Managed Instances |
| --- | --- | --- |
| Concurrency | Single invocation per execution environment | Multiple concurrent invocations per environment |
| Python model | One process, one request | Multiple processes, one request each |
| Pricing | Per-request duration | EC2-based with Savings Plans support |
| Scaling | Scale on demand with cold starts | Asynchronous scaling based on CPU usage |
| Isolation | Firecracker microVMs | Containers on EC2 Nitro |

How Lambda Python runtime handles concurrency

The Lambda Python runtime uses multiple processes for concurrent requests. Each request runs in a separate process, which provides natural isolation between requests.

This means:

  • Each process has its own memory - global variables are isolated per process
  • /tmp directory is shared across all processes - use caution with file operations
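Because /tmp is shared by every worker process on an instance, fixed filenames can collide across concurrent requests. A minimal sketch of one safe pattern (the helper name is hypothetical; it simply lets tempfile pick a unique, per-process path):

```python
import os
import tempfile

def write_scratch_file(data: bytes) -> str:
    """Write request-scoped scratch data to the temp directory (/tmp on Lambda)
    without clashing with other worker processes that share it.

    A per-process prefix plus tempfile's random suffix guarantees a unique
    path, so two concurrent requests never overwrite each other's files.
    """
    prefix = f"scratch-{os.getpid()}-"
    # delete=False keeps the file on disk after close so the caller can use the path
    with tempfile.NamedTemporaryFile(dir=tempfile.gettempdir(), prefix=prefix, delete=False) as f:
        f.write(data)
        return f.name
```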

For more details on the isolation model, see Lambda Managed Instances documentation.

Powertools integration

Powertools for AWS Lambda (Python) works seamlessly with Lambda Managed Instances. All utilities are compatible with the multi-process concurrency model used by Python.

Logger, Tracer, and Metrics

Core utilities work without any changes. Each process has its own instances, so correlation IDs and traces are naturally isolated per request.

VPC connectivity required

Lambda Managed Instances run in your VPC. Ensure you have network connectivity to send logs to CloudWatch, traces to X-Ray, and metrics to CloudWatch.

Using Logger, Tracer, and Metrics with Managed Instances
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

tracer = Tracer()
logger = Logger()
metrics = Metrics()


@tracer.capture_lambda_handler
@metrics.log_metrics
@logger.inject_lambda_context
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    order_id = event.get("order_id", "unknown")
    logger.append_keys(order_id=order_id)

    result = process_order(order_id)

    # Metrics are flushed per request
    metrics.add_metric(name="OrderProcessed", unit=MetricUnit.Count, value=1)

    return {"statusCode": 200, "body": result}


@tracer.capture_method
def process_order(order_id: str) -> str:
    logger.info("Processing order")
    return f"Processed order {order_id}"

Parameters

The Parameters utility works as expected, but be aware that caching is per-process.

Using Parameters with Managed Instances
from aws_lambda_powertools.utilities import parameters
from aws_lambda_powertools.utilities.typing import LambdaContext


def lambda_handler(event: dict, context: LambdaContext) -> dict:
    # Cache is per-process, not shared across concurrent requests
    # Each process maintains its own cache
    # This is generally fine - cache will warm up per process
    api_key = parameters.get_secret("my-api-key", max_age=300)  # noqa: F841

    return {"statusCode": 200}

Cache behavior

Since each process has its own cache, you might see more calls to SSM/Secrets Manager during initial warm-up. Once each process has cached the value, subsequent requests within that process use the cache. You can customize the caching behavior with max_age to control the TTL.
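To make the per-process cache behavior concrete, here is a minimal sketch of how a max_age-style TTL cache works. This is illustrative only, not the actual Powertools implementation: a value is served from the in-process store until max_age seconds elapse, after which the backend is called again.

```python
import time

class TTLCache:
    """Minimal per-process TTL cache illustrating max_age-style semantics."""

    def __init__(self):
        self._store = {}  # key -> (value, fetched_at)

    def get(self, key, fetch, max_age: float):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < max_age:
            return entry[0]  # still fresh: serve the cached value
        value = fetch()      # missing or stale: call the backend again
        self._store[key] = (value, now)
        return value
```

Each worker process would hold its own TTLCache instance, which is why warm-up calls multiply with the number of processes.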

Idempotency

Idempotency works without any changes. It uses DynamoDB for state management, which is external to the process.

Using Idempotency with Managed Instances
from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer,
    idempotent,
)
from aws_lambda_powertools.utilities.typing import LambdaContext

persistence_layer = DynamoDBPersistenceLayer(table_name="IdempotencyTable")


@idempotent(persistence_store=persistence_layer)
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    # Idempotency is guaranteed across all concurrent requests
    # DynamoDB handles the distributed locking
    return {"statusCode": 200, "body": "Order processed"}

VPC connectivity

Lambda Managed Instances require VPC configuration for:

  • Sending logs to CloudWatch Logs
  • Sending traces to X-Ray
  • Accessing AWS services (SSM, Secrets Manager, DynamoDB, etc.)

Configure connectivity using one of these options:

  1. VPC Endpoints - Private connectivity without internet access
  2. NAT Gateway - Internet access from private subnets
  3. Public subnet with Internet Gateway - Direct internet access
  4. Egress-only Internet Gateway - IPv6 outbound connectivity without inbound access
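If you script endpoint creation, interface VPC endpoint service names follow the com.amazonaws.&lt;region&gt;.&lt;service&gt; pattern. A small sketch building the names for the services mentioned above (the helper and service list are illustrative assumptions; DynamoDB is typically reached through a gateway endpoint instead):

```python
def interface_endpoint_services(region: str) -> list[str]:
    """Build interface VPC endpoint service names a Powertools-enabled
    function typically needs: CloudWatch Logs, X-Ray, SSM, Secrets Manager.
    """
    services = ["logs", "xray", "ssm", "secretsmanager"]
    return [f"com.amazonaws.{region}.{service}" for service in services]
```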

See Networking for Lambda Managed Instances for detailed setup instructions.

FAQ

Does Powertools for AWS Lambda (Python) work with Lambda Managed Instances?

Yes, all Powertools for AWS Lambda (Python) utilities work seamlessly with Lambda Managed Instances. The multi-process model in Python provides natural isolation between concurrent requests.

Is my code thread-safe?

Lambda Managed Instances use multiple processes instead of threads. Each request runs in its own process with isolated memory. If you implement multi-threading within your handler, you are responsible for thread safety.
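If you do spawn threads inside a handler, guard any state they share. A minimal sketch using a standard library lock (the counter and helper names are illustrative):

```python
import threading

# Shared mutable state within ONE worker process; only threads you create
# yourself can race on it, since each request otherwise gets its own process.
_counter = 0
_counter_lock = threading.Lock()

def increment_safely() -> int:
    """Protect shared state with a lock when you add threads to a handler."""
    global _counter
    with _counter_lock:
        _counter += 1
        return _counter

def run_worker_threads(n_threads: int = 8, per_thread: int = 1000) -> int:
    """Spawn threads that all bump the counter; the lock keeps the total exact."""
    def work():
        for _ in range(per_thread):
            increment_safely()

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return _counter
```

Without the lock, the read-modify-write on the counter could interleave across threads and lose updates.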

Why is my cache not shared between requests?

Each process maintains its own cache (for Parameters, Feature Flags, etc.). This is expected behavior. The cache will warm up independently per process, which may result in slightly more calls to backend services during initial warm-up.

Can I use global variables?

Yes, but remember they are per-process and not shared across concurrent requests. This isolation is safer than shared mutable state, since there is nothing for concurrent requests to race on.

Do I need to change my existing Powertools for AWS Lambda (Python) code?

No changes are required if you are running Powertools for AWS Lambda (Python) version 3.4.0 or later. Your existing code will work as-is with Lambda Managed Instances.