Durable Functions

Lambda Durable Functions enable you to build resilient multi-step workflows that can execute for up to one year. They use checkpoints to track progress and automatically recover from failures through replay.

Key concepts

| Concept | Description |
| --- | --- |
| Durable execution | Complete lifecycle of a durable function, from start to completion |
| Checkpoint | Saved state that tracks progress through the workflow |
| Replay | Re-execution from the beginning, skipping completed checkpoints |
| Step | Business logic with built-in retries and progress tracking |
| Wait | Suspend execution without incurring compute charges |

How it works

Durable functions use a checkpoint/replay mechanism, sketched in the example after this list:

  1. Your code always runs from the beginning
  2. Completed operations are skipped using stored results
  3. Execution continues from where it left off, running only new steps
  4. State is automatically managed by the SDK
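
A minimal sketch of the idea (a simplified stand-in, not the SDK's actual implementation): completed steps are looked up in a checkpoint store, so a replay re-runs the handler from the top but returns stored results instead of re-executing the work.

from typing import Any, Callable

# Stand-in for the SDK's durable state; the real SDK persists checkpoints
# outside the execution environment.
checkpoints: dict[str, Any] = {}


def step(name: str, fn: Callable[[], Any]) -> Any:
    if name in checkpoints:  # replay: this step already completed, skip it
        return checkpoints[name]
    result = fn()  # first execution: run the business logic
    checkpoints[name] = result  # checkpoint the result
    return result


def workflow() -> str:
    data = step("fetch", lambda: "data")
    return step("transform", lambda: data.upper())


workflow()  # first run executes both steps
workflow()  # a "replay" skips both steps and reuses the stored results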

Powertools integration

Powertools for AWS Lambda (Python) works seamlessly with Durable Functions. The Durable Execution SDK has native integration with Logger via context.set_logger().

Found an issue?

If you encounter any issues using Powertools for AWS Lambda (Python) with Durable Functions, please open an issue.

Logger

The Durable Execution SDK provides a context.logger instance that automatically handles log deduplication during replays. You can integrate Logger to get structured JSON logging while keeping the deduplication benefits.

For the best experience, set the Logger on the durable context. This gives you structured JSON logging with automatic log deduplication during replays:

Integrating Logger with Durable Functions
from aws_durable_execution_sdk_python import DurableContext, durable_execution  # type: ignore[import-not-found]

from aws_lambda_powertools import Logger

logger = Logger(service="order-processing")


@logger.inject_lambda_context
@durable_execution
def handler(event: dict, context: DurableContext) -> str:
    # Set Logger on the context for automatic deduplication
    context.set_logger(logger)

    # Logs via context.logger appear only once, even during replays
    context.logger.info("Starting workflow", extra={"order_id": event.get("order_id")})

    result: str = context.step(
        lambda _: "processed",
        name="process_order",
    )

    # This log won't repeat when the function replays after completing the step above
    context.logger.info("Workflow completed", extra={"result": result})

    return result

This gives you:

  • JSON structured logging from Powertools for AWS Lambda (Python)
  • Log deduplication during replays (logs from completed operations don't repeat)
  • Automatic SDK enrichment (execution_arn, parent_id, name, attempt)
  • Lambda context injection (request_id, function_name, etc.)
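
For illustration, a log line emitted via context.logger might look roughly like the shape below. This is an assumption for illustration only; the exact keys depend on your Logger configuration and SDK version:

{
    "level": "INFO",
    "message": "Starting workflow",
    "service": "order-processing",
    "order_id": "12345",             # from the `extra` kwarg
    "execution_arn": "arn:aws:...",  # SDK enrichment
    "parent_id": "...",              # SDK enrichment
    "name": "...",                   # SDK enrichment
    "attempt": 1,                    # SDK enrichment
    "function_name": "...",          # Lambda context injection
    "function_request_id": "...",    # Lambda context injection
}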

Direct logger usage

If you use the Logger directly (not through context.logger), logs will be emitted on every replay:

# Logs will duplicate during replays
logger.info("This appears on every replay")

# Use context.logger instead for deduplication
context.logger.info("This appears only once")

Tracer

Tracer works with Durable Functions, but note the caveat below about trace continuity across replays.

Trace continuity

Due to the replay mechanism, traces may be interleaved. Each execution (including replays) creates separate trace segments. Use the execution_arn to correlate traces.

Using Tracer with Durable Functions
from aws_durable_execution_sdk_python import DurableContext, durable_execution  # type: ignore[import-not-found]

from aws_lambda_powertools import Logger, Tracer

tracer = Tracer()
logger = Logger()


@logger.inject_lambda_context
@tracer.capture_lambda_handler
@durable_execution
def handler(event: dict, context: DurableContext) -> str:
    context.set_logger(logger)

    result: str = context.step(
        lambda _: process_data(),
        name="process_data",
    )

    return result


@tracer.capture_method
def process_data() -> str:
    # This is traced on first execution
    # On replay, the cached result is used
    return "processed"

Metrics

Metrics work with Durable Functions, but be aware that metrics may be emitted multiple times during replay if not handled carefully. Emit metrics at workflow completion rather than during intermediate steps to avoid counting replays as new executions.

Using Metrics with Durable Functions
from aws_durable_execution_sdk_python import DurableContext, durable_execution  # type: ignore[import-not-found]

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics()


@metrics.log_metrics
@durable_execution
def handler(event: dict, context: DurableContext) -> str:
    result: str = context.step(
        lambda _: "processed",
        name="process",
    )

    # Emit metrics in a dedicated step to ensure they are only counted once
    context.step(
        lambda _: metrics.add_metric(name="WorkflowCompleted", unit=MetricUnit.Count, value=1),
        name="emit_completion_metric",
    )

    return result
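
Note that Metrics() with no arguments resolves its namespace and service from the POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME environment variables. You can also pass them explicitly (the names below are illustrative):

# Explicit configuration instead of environment variables
metrics = Metrics(namespace="OrderProcessing", service="order-processing")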

Idempotency

The @idempotent decorator integrates with Durable Functions and is replay-aware. It's useful for protecting the Lambda handler entry point, especially for Event Source Mapping (ESM) invocations like SQS, Kinesis, or DynamoDB Streams.

Using Idempotency with Durable Functions
from aws_durable_execution_sdk_python import DurableContext, durable_execution  # type: ignore[import-not-found]

from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer,
    idempotent,
)

persistence_layer = DynamoDBPersistenceLayer(table_name="IdempotencyTable")


def process_order(event: dict) -> str:
    return f"processed-{event.get('order_id')}"


@idempotent(persistence_store=persistence_layer)
@durable_execution
def handler(event: dict, context: DurableContext) -> str:
    # Idempotency protects against duplicate ESM invocations
    # Steps within the workflow are already idempotent via checkpoints

    result: str = context.step(
        lambda _: process_order(event),
        name="process_order",
    )

    return result

Decorator ordering matters

The @idempotent decorator must be placed above @durable_execution. This ensures the idempotency check runs first, preventing duplicate executions before the durable workflow begins. Reversing the order would cause the durable execution to start before the idempotency check, defeating its purpose.

When to use Powertools Idempotency:

  • Protecting the Lambda handler entry point from duplicate invocations
  • Methods you don't want to convert into steps but need idempotency guarantees
  • Event Source Mapping triggers (SQS, Kinesis, DynamoDB Streams)

When you don't need it:

  • Steps within a durable function are already idempotent via the checkpoint mechanism

Parameters

Parameters work normally with Durable Functions.

Using Parameters with Durable Functions
from aws_durable_execution_sdk_python import DurableContext, durable_execution  # type: ignore[import-not-found]

from aws_lambda_powertools.utilities import parameters


def call_api(api_key: str) -> str:
    return f"called-with-{api_key[:4]}..."


@durable_execution
def handler(event: dict, context: DurableContext) -> str:
    # Parameters may come from cache if replay hits the same execution environment within the TTL
    api_key = parameters.get_secret("api-key")

    result: str = context.step(
        lambda _: call_api(api_key),
        name="call_api",
    )

    return result

Parameter freshness

If the replay or execution happens within the cache TTL on the same execution environment, the parameter value may come from cache. For long-running workflows (hours/days), parameters fetched at the start may become stale. Consider fetching parameters within steps that need the latest values, and customize the caching behavior with max_age to control freshness.
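
As a sketch of that advice, the example below fetches the secret inside the step and caps the local cache at 60 seconds. max_age is standard Powertools parameters API; the step name is illustrative. Keep in mind that a step's result is checkpointed, so on replay the checkpointed value is reused rather than re-fetched:

from aws_durable_execution_sdk_python import DurableContext, durable_execution  # type: ignore[import-not-found]

from aws_lambda_powertools.utilities import parameters


def call_api(api_key: str) -> str:
    return f"called-with-{api_key[:4]}..."


@durable_execution
def handler(event: dict, context: DurableContext) -> str:
    result: str = context.step(
        # The secret is read when this step first runs, not when the workflow
        # started hours or days earlier; max_age=60 limits how long the value
        # is served from the in-memory cache.
        lambda _: call_api(parameters.get_secret("api-key", max_age=60)),
        name="call_api_with_fresh_secret",
    )

    return result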

Best practices

Use Idempotency for ESM triggers

When your durable function is triggered by Event Source Mappings (SQS, Kinesis, DynamoDB Streams), use the @idempotent decorator to protect against duplicate invocations.

Idempotency for ESM
from aws_durable_execution_sdk_python import DurableContext, durable_execution  # type: ignore[import-not-found]

from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer,
    idempotent,
)

persistence_layer = DynamoDBPersistenceLayer(table_name="IdempotencyTable")


@idempotent(persistence_store=persistence_layer)
@durable_execution
def handler(event: dict, context: DurableContext) -> str:
    # Protected against duplicate SQS/Kinesis/DynamoDB triggers

    result: str = context.step(
        lambda _: "processed",
        name="process",
    )

    return result

FAQ

Do I need Idempotency utility with Durable Functions?

It depends on your use case. Steps within a durable function are already idempotent via checkpoints. However, the @idempotent decorator is useful for protecting the Lambda handler entry point, especially for Event Source Mapping invocations (SQS, Kinesis, DynamoDB Streams) where the same event might trigger multiple invocations.

Why do I see duplicate logs?

If you're using the logger directly instead of context.logger, logs will be emitted on every replay. Use context.set_logger(logger) and then context.logger.info() to get automatic log deduplication.

How do I correlate logs across replays?

Use the execution_arn field that's automatically added to every log entry when using context.logger:

fields @timestamp, @message, execution_arn
| filter execution_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function:execution-id"
| sort @timestamp asc

Can I use Tracer with Durable Functions?

Yes, but be aware that each execution (including replays) creates separate trace segments. Use the execution_arn as a correlation identifier for end-to-end visibility.

How should I emit metrics without duplicates?

Emit metrics at workflow completion rather than during intermediate steps. This ensures you count completed workflows, not replay attempts.