

# Building Lambda functions with Python
<a name="lambda-python"></a>

You can run Python code in AWS Lambda. Lambda provides [runtimes](lambda-runtimes.md) for Python that run your code to process events. Your code runs in an environment that includes the SDK for Python (Boto3), with credentials from an AWS Identity and Access Management (IAM) role that you manage. To learn more about the SDK versions included with the Python runtimes, see [Runtime-included SDK versions](#python-sdk-included).

Lambda supports the following Python runtimes.


| Name | Identifier | Operating system | Deprecation date | Block function create | Block function update | 
| --- | --- | --- | --- | --- | --- | 
|  Python 3.14  |  `python3.14`  |  Amazon Linux 2023  |   Jun 30, 2029   |   Jul 31, 2029   |   Aug 31, 2029   | 
|  Python 3.13  |  `python3.13`  |  Amazon Linux 2023  |   Jun 30, 2029   |   Jul 31, 2029   |   Aug 31, 2029   | 
|  Python 3.12  |  `python3.12`  |  Amazon Linux 2023  |   Oct 31, 2028   |   Nov 30, 2028   |   Jan 10, 2029   | 
|  Python 3.11  |  `python3.11`  |  Amazon Linux 2  |   Jun 30, 2027   |   Jul 31, 2027   |   Aug 31, 2027   | 
|  Python 3.10  |  `python3.10`  |  Amazon Linux 2  |   Oct 31, 2026   |   Nov 30, 2026   |   Jan 15, 2027   | 

**To create a Python function**

1. Open the [Lambda console](https://console.aws.amazon.com/lambda).

1. Choose **Create function**.

1. Configure the following settings:
   + **Function name**: Enter a name for the function.
   + **Runtime**: Choose **Python 3.14**.

1. Choose **Create function**.

The console creates a Lambda function with a single source file named `lambda_function.py`. You can edit this file and add more files in the built-in code editor. In the **DEPLOY** section, choose **Deploy** to update your function's code. Then, to run your code, choose **Create test event** in the **TEST EVENTS** section.

Your Lambda function comes with a CloudWatch Logs log group. The function runtime sends details about each invocation to CloudWatch Logs. It relays any [logs that your function outputs](python-logging.md) during invocation. If your function returns an error, Lambda formats the error and returns it to the invoker.

**Topics**
+ [Runtime-included SDK versions](#python-sdk-included)
+ [Disabled Python features](#python-disabled-features)
+ [Response format](#python-response-format)
+ [Graceful shutdown for extensions](#python-graceful-shutdown)
+ [Define Lambda function handler in Python](python-handler.md)
+ [Working with .zip file archives for Python Lambda functions](python-package.md)
+ [Deploy Python Lambda functions with container images](python-image.md)
+ [Working with layers for Python Lambda functions](python-layers.md)
+ [Using the Lambda context object to retrieve Python function information](python-context.md)
+ [Log and monitor Python Lambda functions](python-logging.md)
+ [AWS Lambda function testing in Python](python-testing.md)
+ [Instrumenting Python code in AWS Lambda](python-tracing.md)

## Runtime-included SDK versions
<a name="python-sdk-included"></a>

The version of the AWS SDK included in the Python runtime depends on the runtime version and your AWS Region. To find the version of the SDK included in the runtime you're using, create a Lambda function with the following code.

```
import boto3
import botocore

def lambda_handler(event, context):
    print(f'boto3 version: {boto3.__version__}')
    print(f'botocore version: {botocore.__version__}')
```

## Disabled Python features
<a name="python-disabled-features"></a>

The following table lists Python features which are disabled in the Lambda managed runtimes and container base images for Python. These features must be enabled when the Python runtime executable is compiled and cannot be enabled by using an execution-time flag. To use these features in Lambda, you can deploy your own Python runtime build with these features enabled, using a [container image](python-image.md#python-image-clients) or [custom runtime](runtimes-custom.md).


| Python feature | Affected Python versions | Status | 
| --- | --- | --- | 
| Just-in-Time (JIT) compiler | Python 3.13 and later | The JIT compiler is experimental and is not recommended for production workloads. It is therefore disabled in the Lambda Python runtimes. | 
| Free-threading | Python 3.13 and later | Free threading (option to disable the global interpreter lock) is disabled in Lambda Python builds due to the performance impact on single-threaded code. | 

## Response format
<a name="python-response-format"></a>

In Python 3.12 and later Python runtimes, functions return Unicode characters as part of their JSON response. Earlier Python runtimes return escaped sequences for Unicode characters in responses. For example, in Python 3.11, if you return a Unicode string such as "こんにちは", it escapes the Unicode characters and returns "\u3053\u3093\u306b\u3061\u306f". The Python 3.12 runtime returns the original "こんにちは".

Using Unicode responses reduces the size of Lambda responses, making it easier to fit larger responses into the 6 MB maximum payload size for synchronous functions. In the previous example, the escaped version is 32 bytes—compared to 17 bytes with the Unicode string.

When you upgrade to Python 3.12 or later Python runtimes, you might need to adjust your code to account for the new response format. If the caller expects escaped Unicode, you must either add code to the returning function to escape the Unicode manually, or adjust the caller to handle the Unicode return.
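You can reproduce both response formats locally with `json.dumps`, which escapes non-ASCII characters by default. The following sketch also shows one way to escape Unicode manually if a caller still expects the old format:

```python
import json

greeting = "こんにちは"

# Default behavior matches Python 3.11 and earlier runtimes: escape sequences
escaped = json.dumps(greeting)

# ensure_ascii=False matches Python 3.12 and later runtimes: raw Unicode
unescaped = json.dumps(greeting, ensure_ascii=False)

print(escaped)    # "\u3053\u3093\u306b\u3061\u306f"
print(unescaped)  # "こんにちは"
```

Encoded as UTF-8, the escaped string is 32 bytes and the Unicode string is 17 bytes, matching the sizes described above.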

## Graceful shutdown for extensions
<a name="python-graceful-shutdown"></a>

Python 3.12 and later Python runtimes offer improved graceful shutdown capabilities for functions with [external extensions](lambda-extensions.md). When Lambda shuts down an execution environment, it sends a `SIGTERM` signal to the runtime and then a `SHUTDOWN` event to each registered external extension. You can catch the `SIGTERM` signal in your Lambda function and clean up resources such as database connections that were created by the function.
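For example, you might register a `SIGTERM` handler during function initialization. This is a minimal sketch; the cleanup logic is a placeholder for your own resource teardown:

```python
import signal
import sys

def shutdown_handler(signum, frame):
    # Placeholder: close database connections and flush buffers here
    print("SIGTERM received, cleaning up before shutdown")
    sys.exit(0)

# Register the handler during initialization, outside the function handler
signal.signal(signal.SIGTERM, shutdown_handler)

def lambda_handler(event, context):
    return {"status": "ok"}
```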

To learn more about the execution environment lifecycle, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md). For examples of how to use graceful shutdown with extensions, see the [AWS Samples GitHub repository](https://github.com/aws-samples/graceful-shutdown-with-aws-lambda).

# Define Lambda function handler in Python
<a name="python-handler"></a>

The Lambda function *handler* is the method in your function code that processes events. When your function is invoked, Lambda runs the handler method. Your function runs until the handler returns a response, exits, or times out.

This page describes how to work with Lambda function handlers in Python, including naming conventions, valid handler signatures, and code best practices. This page also includes an example of a Python Lambda function that takes in information about an order, produces a text file receipt, and puts this file in an Amazon Simple Storage Service (Amazon S3) bucket.

**Topics**
+ [Example Python Lambda function code](#python-handler-example)
+ [Handler naming conventions](#python-handler-naming)
+ [Using the Lambda event object](#python-handler-event)
+ [Accessing and using the Lambda context object](#python-handler-context)
+ [Valid handler signatures for Python handlers](#python-handler-signature)
+ [Returning a value](#python-handler-return)
+ [Using the AWS SDK for Python (Boto3) in your handler](#python-handler-sdk)
+ [Accessing environment variables](#python-handler-env-vars)
+ [Code best practices for Python Lambda functions](#python-handler-best-practices)

## Example Python Lambda function code
<a name="python-handler-example"></a>

The following example Python Lambda function code takes in information about an order, produces a text file receipt, and puts this file in an Amazon S3 bucket:

**Example Python Lambda function**  

```
import json
import os
import logging
import boto3

# Initialize the S3 client outside of the handler
s3_client = boto3.client('s3')

# Initialize the logger
logger = logging.getLogger()
logger.setLevel("INFO")

def upload_receipt_to_s3(bucket_name, key, receipt_content):
    """Helper function to upload receipt to S3"""
    
    try:
        s3_client.put_object(
            Bucket=bucket_name,
            Key=key,
            Body=receipt_content
        )
    except Exception as e:
        logger.error(f"Failed to upload receipt to S3: {str(e)}")
        raise

def lambda_handler(event, context):
    """
    Main Lambda handler function
    Parameters:
        event: Dict containing the Lambda function event data
        context: Lambda runtime context
    Returns:
        Dict containing status message
    """
    try:
        # Parse the input event
        order_id = event['Order_id']
        amount = event['Amount']
        item = event['Item']
        
        # Access environment variables
        bucket_name = os.environ.get('RECEIPT_BUCKET')
        if not bucket_name:
            raise ValueError("Missing required environment variable RECEIPT_BUCKET")

        # Create the receipt content and key destination
        receipt_content = (
            f"OrderID: {order_id}\n"
            f"Amount: ${amount}\n"
            f"Item: {item}"
        )
        key = f"receipts/{order_id}.txt"

        # Upload the receipt to S3
        upload_receipt_to_s3(bucket_name, key, receipt_content)

        logger.info(f"Successfully processed order {order_id} and stored receipt in S3 bucket {bucket_name}")
        
        return {
            "statusCode": 200,
            "message": "Receipt processed successfully"
        }

    except Exception as e:
        logger.error(f"Error processing order: {str(e)}")
        raise
```

This file contains the following sections of code:
+ `import` block: Use this block to include libraries that your Lambda function requires.
+ Global initialization of SDK client and logger: Including initialization code outside of the handler takes advantage of [execution environment](lambda-runtime-environment.md) re-use to improve the performance of your function. See [Code best practices for Python Lambda functions](#python-handler-best-practices) to learn more.
+ `def upload_receipt_to_s3(bucket_name, key, receipt_content):` This is a helper function that's called by the main `lambda_handler` function.
+ `def lambda_handler(event, context):` This is the **main handler function** for your code, which contains your main application logic. When Lambda invokes your function handler, the [Lambda runtime](concepts-basics.md#gettingstarted-concepts-runtime) passes two arguments to the function, the [event object](#python-handler-event) that contains data for your function to process and the [context object](#python-handler-context) that contains information about the function invocation.

## Handler naming conventions
<a name="python-handler-naming"></a>

The function handler name defined at the time that you create a Lambda function is derived from:
+ The name of the file in which the Lambda handler function is located.
+ The name of the Python handler function.

In the example above, if the file is named `lambda_function.py`, the handler would be specified as `lambda_function.lambda_handler`. This is the default handler name given to functions you create using the Lambda console.

If you create a function in the console using a different file name or function handler name, you must edit the default handler name.

**To change the function handler name (console)**

1. Open the [Functions](https://console.aws.amazon.com/lambda/home#/functions) page of the Lambda console and choose your function.

1. Choose the **Code** tab.

1. Scroll down to the **Runtime settings** pane and choose **Edit**.

1. In **Handler**, enter the new name for your function handler.

1. Choose **Save**.

## Using the Lambda event object
<a name="python-handler-event"></a>

When Lambda invokes your function, it passes an [event object](concepts-basics.md#gettingstarted-concepts-event) argument to the function handler. JSON objects are the most common event format for Lambda functions. In the code example in the previous section, the function expects an input in the following format:

```
{
    "Order_id": "12345",
    "Amount": 199.99,
    "Item": "Wireless Headphones"
}
```

If your function is invoked by another AWS service, the input event is also a JSON object. The exact format of the event object depends on the service that's invoking your function. To see the event format for a particular service, refer to the appropriate page in the [Invoking Lambda with events from other AWS services](lambda-services.md) chapter.

If the input event is in the form of a JSON object, the Lambda runtime converts the object to a Python dictionary. To assign values in the input JSON to variables in your code, use the standard Python dictionary methods as illustrated in the example code.

You can also pass data into your function as a JSON array, or as any of the other valid JSON data types. The following table defines how the Python runtime converts these JSON types.


| JSON data type | Python data type | 
| --- | --- | 
| object | dictionary (dict) | 
| array | list (list) | 
| number | integer (int) or floating point number (float) | 
| string | string (str) | 
| Boolean | Boolean (bool) | 
| null | NoneType (NoneType) | 
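A quick way to see these conversions is to inspect the types inside a handler. The following handler is hypothetical and used only for illustration:

```python
def lambda_handler(event, context):
    # Report the Python type that the runtime produced for each JSON value
    return {key: type(value).__name__ for key, value in event.items()}
```

Invoking this function with an event such as `{"Amount": 199.99, "Items": ["a", "b"], "Paid": true, "Note": null}` returns the corresponding Python type names (`float`, `list`, `bool`, `NoneType`).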

## Accessing and using the Lambda context object
<a name="python-handler-context"></a>

The Lambda context object contains information about the function invocation and execution environment. Lambda passes the context object to your function automatically when it's invoked. You can use the context object to output information about your function's invocation for monitoring purposes.

The context object is a Python class that's defined in the [Lambda runtime interface client](https://github.com/aws/aws-lambda-python-runtime-interface-client/blob/main/awslambdaric/lambda_context.py). To return the value of any of the context object properties, use the corresponding method on the context object. For example, the following code snippet assigns the value of the `aws_request_id` property (the identifier for the invocation request) to a variable named `request`. 

```
request = context.aws_request_id
```

To learn more about using the Lambda context object, and to see a complete list of the available methods and properties, see [Using the Lambda context object to retrieve Python function information](python-context.md).
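For example, a handler might log a few context properties at the start of each invocation. The property and method names below come from the runtime interface client linked above:

```python
import logging

logger = logging.getLogger()
logger.setLevel("INFO")

def lambda_handler(event, context):
    # Log identifying details about this invocation
    logger.info("Request ID: %s", context.aws_request_id)
    logger.info("Function: %s (version %s)", context.function_name, context.function_version)
    logger.info("Memory limit (MB): %s", context.memory_limit_in_mb)
    logger.info("Time remaining (ms): %s", context.get_remaining_time_in_millis())
    return {"statusCode": 200}
```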

## Valid handler signatures for Python handlers
<a name="python-handler-signature"></a>

When defining your handler function in Python, the function must take two arguments. The first of these arguments is the Lambda [event object](#python-handler-event) and the second one is the Lambda [context object](#python-handler-context). By convention, these input arguments are usually named `event` and `context`, but you can give them any names you wish. If you declare your handler function with a single input argument, Lambda will raise an error when it attempts to run your function. The most common way to declare a handler function in Python is as follows:

```
def lambda_handler(event, context):
```

You can also use Python type hints in your function declaration, as shown in the following example:

```
from typing import Dict, Any
      
def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
```

To use specific AWS typing for events generated by other AWS services and for the context object, add the `aws-lambda-typing` package to your function's deployment package. You can install this library in your development environment by running `pip install aws-lambda-typing`. The following code snippet shows how to use AWS-specific type hints. In this example, the expected event is an Amazon S3 event.

```
from aws_lambda_typing.events import S3Event
from aws_lambda_typing.context import Context
from typing import Dict, Any

def lambda_handler(event: S3Event, context: Context) -> Dict[str, Any]:
```

You can't use the Python `async` function type for your handler function.
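If your code uses `asyncio`, you can still call coroutines from a synchronous handler by running the event loop yourself. The `process` coroutine below is a hypothetical stand-in for your application logic:

```python
import asyncio

async def process(event):
    # Hypothetical async application logic
    await asyncio.sleep(0)
    return {"statusCode": 200}

def lambda_handler(event, context):
    # The handler itself must be synchronous; run coroutines from inside it
    return asyncio.run(process(event))
```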

## Returning a value
<a name="python-handler-return"></a>

Optionally, a handler can return a value, which must be JSON serializable. Common return types include `dict`, `list`, `str`, `int`, `float`, and `bool`.

What happens to the returned value depends on the [invocation type](lambda-invocation.md) and the [service](lambda-services.md) that invoked the function. For example:
+ If you use the `RequestResponse` invocation type to [invoke a Lambda function synchronously](invocation-sync.md), Lambda returns the result of the Python function call to the client invoking the Lambda function (in the HTTP response to the invocation request, serialized into JSON). For example, the Lambda console uses the `RequestResponse` invocation type, so when you invoke the function in the console, the console displays the returned value.
+ If the handler returns objects that can't be serialized by `json.dumps`, the runtime returns an error.
+ If the handler returns `None`, as Python functions without a `return` statement implicitly do, the runtime returns `null`.
+ If you use the `Event` invocation type (an [asynchronous invocation](invocation-async.md)), the value is discarded.

In the example code, the handler returns the following Python dictionary:

```
{
  "statusCode": 200,
  "message": "Receipt processed successfully"
}
```

The Lambda runtime serializes this dictionary and returns it to the client that invoked the function as a JSON string.
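Because non-serializable return values produce a runtime error, it can be useful to check serializability in your unit tests before deploying. The helper below is hypothetical, not part of the Lambda runtime:

```python
import json

def is_json_serializable(value):
    """Return True if the Lambda runtime could serialize this return value."""
    try:
        json.dumps(value)
        return True
    except (TypeError, ValueError):
        return False
```

For example, `is_json_serializable({"statusCode": 200})` returns `True`, while `is_json_serializable({1, 2, 3})` returns `False` because a Python `set` isn't JSON serializable.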

**Note**  
In Python 3.9 and later releases, Lambda includes the requestId of the invocation in the error response.

## Using the AWS SDK for Python (Boto3) in your handler
<a name="python-handler-sdk"></a>

Often, you'll use Lambda functions to interact with other AWS services and resources. The simplest way to interface with these resources is to use the AWS SDK for Python (Boto3). All [supported Lambda Python runtimes](lambda-runtimes.md#runtimes-supported) include a version of the SDK for Python. However, we strongly recommend that you include the SDK in your function's deployment package if your code needs to use it. Including the SDK in your deployment package gives you full control over your dependencies and reduces the risk of version misalignment issues with other libraries. See [Runtime dependencies in Python](python-package.md#python-package-dependencies) and [Backward compatibility](runtimes-update.md#runtime-update-compatibility) to learn more.

To use the SDK for Python in your Lambda function, add the following statement to the import block at the beginning of your function code:

```
import boto3
```

Use the `pip install` command to add the `boto3` library to your function's deployment package. For detailed instructions on how to add dependencies to a .zip deployment package, see [Creating a .zip deployment package with dependencies](python-package.md#python-package-create-dependencies). To learn more about adding dependencies to Lambda functions deployed as container images, see [Creating an image from a base image](python-image.md#python-image-create) or [Creating an image from an alternative base image](python-image.md#python-alt-create).

When using `boto3` in your code, you don't need to provide any credentials to initialize a client. For example, in the example code, we use the following line of code to initialize an Amazon S3 client:

```
# Initialize the S3 client outside of the handler
s3_client = boto3.client('s3')
```

With Python, Lambda automatically creates environment variables with credentials. The `boto3` SDK checks your function's environment variables for these credentials during initialization.

## Accessing environment variables
<a name="python-handler-env-vars"></a>

In your handler code, you can reference [environment variables](configuration-envvars.md) by using the `os.environ.get` method. In the example code, we reference the defined `RECEIPT_BUCKET` environment variable using the following line of code:

```
# Access environment variables
bucket_name = os.environ.get('RECEIPT_BUCKET')
```

Don't forget to include an `import os` statement in the import block at the beginning of your code.
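`os.environ.get` also accepts a default value, which is useful for optional settings. The helper function and the `RECEIPT_PREFIX` variable below are hypothetical examples of both patterns:

```python
import os

def get_required_env(name):
    """Hypothetical helper: read a required environment variable or fail fast."""
    value = os.environ.get(name)
    if not value:
        raise ValueError(f"Missing required environment variable {name}")
    return value

# Optional settings can fall back to a default value
prefix = os.environ.get('RECEIPT_PREFIX', 'receipts')
```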

## Code best practices for Python Lambda functions
<a name="python-handler-best-practices"></a>

Adhere to the guidelines in the following list to use best coding practices when building your Lambda functions:
+ **Separate the Lambda handler from your core logic.** This allows you to make a more unit-testable function. For example, in Python, this may look like: 

  ```
   def lambda_handler(event, context):
       foo = event['foo']
       bar = event['bar']
       return my_lambda_function(foo, bar)

   def my_lambda_function(foo, bar):
       # MyLambdaFunction logic here
       pass
  ```
+ **Control the dependencies in your function's deployment package.** The AWS Lambda execution environment contains a number of libraries. For the Node.js and Python runtimes, these include the AWS SDKs. To enable the latest set of features and security updates, Lambda will periodically update these libraries. These updates may introduce subtle changes to the behavior of your Lambda function. To have full control of the dependencies your function uses, package all of your dependencies with your deployment package.
+ **Minimize the complexity of your dependencies.** Prefer simpler frameworks that load quickly on [execution environment](lambda-runtime-environment.md) startup.
+ **Minimize your deployment package size to its runtime necessities.** This reduces the amount of time that it takes for your deployment package to be downloaded and unpacked ahead of invocation.

+ **Take advantage of execution environment reuse to improve the performance of your function.** Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the `/tmp` directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves cost by reducing function run time.

  To avoid potential data leaks across invocations, don’t use the execution environment to store user data, events, or other information with security implications. If your function relies on mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
+ **Use a keep-alive directive to maintain persistent connections.** Lambda purges idle connections over time. Attempting to reuse an idle connection when invoking a function results in a connection error. To maintain your persistent connection, use the keep-alive directive associated with your runtime. For an example, see [Reusing Connections with Keep-Alive in Node.js](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-reusing-connections.html).
+ **Use [environment variables](configuration-envvars.md) to pass operational parameters to your function.** For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
+ **Avoid using recursive invocations** in your Lambda function, where the function invokes itself or initiates a process that might invoke the function again. This can lead to an unintended volume of function invocations and escalated costs. If you see an unintended volume of invocations, set the function's reserved concurrency to `0` immediately to throttle all invocations to the function while you update the code.
+ **Do not use non-documented, non-public APIs** in your Lambda function code. For AWS Lambda managed runtimes, Lambda periodically applies security and functional updates to Lambda's internal APIs. These internal API updates can be backwards-incompatible, leading to unintended consequences such as invocation failures if your function has a dependency on these non-public APIs. See [the API reference](https://docs.aws.amazon.com/lambda/latest/api/welcome.html) for a list of publicly available APIs.
+ **Write idempotent code.** Writing idempotent code for your functions ensures that duplicate events are handled the same way. Your code should properly validate events and gracefully handle duplicate events. For more information, see [How do I make my Lambda function idempotent?](https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-idempotent/).

# Working with .zip file archives for Python Lambda functions
<a name="python-package"></a>

 Your AWS Lambda function’s code comprises a .py file containing your function’s handler code, together with any additional packages and modules your code depends on. To deploy this function code to Lambda, you use a *deployment package*. This package may either be a .zip file archive or a container image. For more information about using container images with Python, see [Deploy Python Lambda functions with container images](https://docs.aws.amazon.com/lambda/latest/dg/python-image.html). 

 To create your deployment package as a .zip file archive, you can use your command-line tool’s built-in .zip file archive utility, or any other .zip file utility such as [7zip](https://www.7-zip.org/download.html). The examples shown in the following sections assume you’re using a command-line `zip` tool in a Linux or macOS environment. To use the same commands in Windows, you can [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) to get a Windows-integrated version of Ubuntu and Bash. 

 Note that Lambda uses POSIX file permissions, so you may need to [set permissions for the deployment package folder](http://aws.amazon.com/premiumsupport/knowledge-center/lambda-deployment-package-errors/) before you create the .zip file archive. 

**Topics**
+ [Runtime dependencies in Python](#python-package-dependencies)
+ [Creating a .zip deployment package with no dependencies](#python-package-create-no-dependencies)
+ [Creating a .zip deployment package with dependencies](#python-package-create-dependencies)
+ [Dependency search path and runtime-included libraries](#python-package-searchpath)
+ [Using \_\_pycache\_\_ folders](#python-package-pycache)
+ [Creating .zip deployment packages with native libraries](#python-package-native-libraries)
+ [Creating and updating Python Lambda functions using .zip files](#python-package-create-update)

## Runtime dependencies in Python
<a name="python-package-dependencies"></a>

For Lambda functions that use the Python runtime, a dependency can be any Python package or module. When you deploy your function using a .zip archive, you can either add these dependencies to your .zip file with your function code or use a [Lambda layer](chapter-layers.md). A layer is a separate .zip file that can contain additional code and other content. To learn more about using Lambda layers in Python, see [Working with layers for Python Lambda functions](python-layers.md).

The Lambda Python runtimes include the AWS SDK for Python (Boto3) and its dependencies. Lambda provides the SDK in the runtime for deployment scenarios where you are unable to add your own dependencies. These scenarios include creating functions in the console using the built-in code editor or using inline functions in AWS Serverless Application Model (AWS SAM) or CloudFormation templates.

Lambda periodically updates the libraries in the Python runtime to include the latest updates and security patches. If your function uses the version of the Boto3 SDK included in the runtime but your deployment package includes SDK dependencies, this can cause version misalignment issues. For example, your deployment package could include the SDK dependency urllib3. When Lambda updates the SDK in the runtime, compatibility issues between the new version of the runtime and the version of urllib3 in your deployment package can cause your function to fail.

**Important**  
To maintain full control over your dependencies and to avoid possible version misalignment issues, we recommend you add all of your function’s dependencies to your deployment package, even if versions of them are included in the Lambda runtime. This includes the Boto3 SDK.

To find out which version of the SDK for Python (Boto3) is included in the runtime you're using, see [Runtime-included SDK versions](lambda-python.md#python-sdk-included).

 Under the [AWS shared responsibility model](https://docs.aws.amazon.com/whitepapers/latest/aws-risk-and-compliance/shared-responsibility-model.html), you are responsible for the management of any dependencies in your functions' deployment packages. This includes applying updates and security patches. To update dependencies in your function's deployment package, first create a new .zip file and then upload it to Lambda. See [Creating a .zip deployment package with dependencies](#python-package-create-dependencies) and [Creating and updating Python Lambda functions using .zip files](#python-package-create-update) for more information.

## Creating a .zip deployment package with no dependencies
<a name="python-package-create-no-dependencies"></a>

 If your function code has no dependencies, your .zip file contains only the .py file with your function’s handler code. Use your preferred zip utility to create a .zip file with your .py file at the root. If the .py file is not at the root of your .zip file, Lambda won’t be able to run your code. 

 To learn how to deploy your .zip file to create a new Lambda function or update an existing one, see [Creating and updating Python Lambda functions using .zip files](#python-package-create-update). 

## Creating a .zip deployment package with dependencies
<a name="python-package-create-dependencies"></a>

 If your function code depends on additional packages or modules, you can either add these dependencies to your .zip file with your function code or [use a Lambda layer](python-layers.md). The instructions in this section show you how to include your dependencies in your .zip deployment package. For Lambda to run your code, the .py file containing your handler code and all of your function's dependencies must be installed at the root of the .zip file.

 Suppose your function code is saved in a file named `lambda_function.py`. The following example CLI commands create a .zip file named `my_deployment_package.zip` containing your function code and its dependencies. You can either install your dependencies directly to a folder in your project directory or use a Python virtual environment. 

**To create the deployment package (project directory)**

1. Navigate to the project directory containing your `lambda_function.py` source code file. In this example, the directory is named `my_function`.

   ```
   cd my_function
   ```

1. Create a new directory named `package` into which you will install your dependencies.

   ```
   mkdir package
   ```

   Note that for a .zip deployment package, Lambda expects your source code and its dependencies all to be at the root of the .zip file. However, installing dependencies directly in your project directory can introduce a large number of new files and folders and make navigating around your IDE difficult. You create a separate `package` directory here to keep your dependencies separate from your source code.

1. Install your dependencies in the `package` directory. The example below installs the Boto3 SDK from the Python Package Index using pip. If your function code uses Python packages you have created yourself, save them in the `package` directory.

   ```
   pip install --target ./package boto3
   ```

1. Create a .zip file with the installed libraries at the root.

   ```
   cd package
   zip -r ../my_deployment_package.zip .
   ```

   This generates a `my_deployment_package.zip` file in your project directory.

1. Add the `lambda_function.py` file to the root of the .zip file.

   ```
   cd ..
   zip my_deployment_package.zip lambda_function.py
   ```

   Your .zip file should have a flat directory structure, with your function's handler code and all your dependency folders installed at the root as follows.

   ```
   my_deployment_package.zip
   |- bin
   |  |-jp.py
   |- boto3
   |  |-compat.py
   |  |-data
   |  |-docs
   ...
   |- lambda_function.py
   ```

   If the .py file containing your function’s handler code is not at the root of your .zip file, Lambda will not be able to run your code.
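If you want to sanity-check the layout programmatically, the following sketch uses Python's standard `zipfile` module; the function name and default file name are illustrative.

```python
import zipfile

def handler_at_root(zip_path, handler_file="lambda_function.py"):
    """Return True if the handler file sits at the root of the .zip package."""
    with zipfile.ZipFile(zip_path) as package:
        # Entries at the root have no directory prefix in the archive listing
        return handler_file in package.namelist()
```

For example, `handler_at_root("my_deployment_package.zip")` returns `False` if the handler was accidentally zipped inside a subfolder such as `my_function/lambda_function.py`.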

**To create the deployment package (virtual environment)**

1. Create and activate a virtual environment in your project directory. In this example the project directory is named `my_function`.

   ```
   ~$ cd my_function
   ~/my_function$ python3.14 -m venv my_virtual_env
   ~/my_function$ source ./my_virtual_env/bin/activate
   ```

1. Install your required libraries using pip. The following example installs the Boto3 SDK.

   ```
   (my_virtual_env) ~/my_function$ pip install boto3
   ```

1. Use `pip show` to find the location in your virtual environment where pip has installed your dependencies.

   ```
   (my_virtual_env) ~/my_function$ pip show <package_name>
   ```

   The folder in which pip installs your libraries may be named `site-packages` or `dist-packages`. This folder may be located in either the `lib/python3.x` or `lib64/python3.x` directory (where python3.x represents the version of Python you are using).

1. Deactivate the virtual environment.

   ```
   (my_virtual_env) ~/my_function$ deactivate
   ```

1. Navigate into the directory containing the dependencies you installed with pip and create a .zip file in your project directory with the installed dependencies at the root. In this example, pip has installed your dependencies in the `my_virtual_env/lib/python3.14/site-packages` directory.

   ```
   ~/my_function$ cd my_virtual_env/lib/python3.14/site-packages
   ~/my_function/my_virtual_env/lib/python3.14/site-packages$ zip -r ../../../../my_deployment_package.zip .
   ```

1. Navigate to the root of your project directory where the .py file containing your handler code is located and add that file to the root of your .zip package. In this example, your function code file is named `lambda_function.py`.

   ```
   ~/my_function/my_virtual_env/lib/python3.14/site-packages$ cd ../../../../
   ~/my_function$ zip my_deployment_package.zip lambda_function.py
   ```
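As an alternative to `pip show` in step 3, you can print the active interpreter's site-packages path directly. This is a sketch using the standard `sysconfig` module; run it with the virtual environment activated.

```python
import sysconfig

# "purelib" is the directory where pip installs pure-Python packages
# for the currently active interpreter
print(sysconfig.get_paths()["purelib"])
```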

## Dependency search path and runtime-included libraries
<a name="python-package-searchpath"></a>

 When you use an `import` statement in your code, the Python runtime searches the directories in its search path until it finds the module or package. By default, the first location the runtime searches is the directory into which your .zip deployment package is decompressed and mounted (`/var/task`). If you include a version of a runtime-included library in your deployment package, your version will take precedence over the version that's included in the runtime. Dependencies in your deployment package also have precedence over dependencies in layers. 

 When you add a dependency to a layer, Lambda extracts this to `/opt/python/lib/python3.x/site-packages` (where `python3.x` represents the version of the runtime you're using) or `/opt/python`. In the search path, these directories have precedence over the directories containing the runtime-included libraries and pip-installed libraries (`/var/runtime` and `/var/lang/lib/python3.x/site-packages`). Libraries in function layers therefore have precedence over versions included in the runtime. 

**Note**  
In the Python 3.11 managed runtime and base image, the AWS SDK and its dependencies are installed in the `/var/lang/lib/python3.11/site-packages` directory.

 You can see the full search path for your Lambda function by adding the following code snippet. 

```
import sys
      
search_path = sys.path
print(search_path)
```

**Note**  
Because dependencies in your deployment package or layers take precedence over runtime-included libraries, version misalignment problems can occur if you include an SDK dependency such as urllib3 in your package without also including the SDK itself. If you deploy your own version of a Boto3 dependency, you must also deploy Boto3 as a dependency in your deployment package. We recommend that you package all of your function’s dependencies, even if versions of them are included in the runtime.

 You can also add dependencies in a separate folder inside your .zip package. For example, you might add a version of the Boto3 SDK to a folder in your .zip package called `common`. When your .zip package is decompressed and mounted, this folder is placed inside the `/var/task` directory. To use a dependency from a folder in your .zip deployment package in your code, use an `import from` statement. For example, to use a version of Boto3 from a folder named `common` in your .zip package, use the following statement. 

```
from common import boto3
```

## Using `__pycache__` folders
<a name="python-package-pycache"></a>

 We recommend that you don't include `__pycache__` folders in your function's deployment package. Python bytecode that's compiled on a build machine with a different architecture or operating system might not be compatible with the Lambda execution environment. 

## Creating .zip deployment packages with native libraries
<a name="python-package-native-libraries"></a>

 If your function uses only pure Python packages and modules, you can use the `pip install` command to install your dependencies on any local build machine and create your .zip file. Many popular Python libraries, including NumPy and Pandas, are not pure Python and contain code written in C or C++. When you add libraries containing C/C++ code to your deployment package, you must build your package correctly to ensure that it’s compatible with the Lambda execution environment. 

 Most packages available on the Python Package Index ([PyPI](https://pypi.org/)) are available as “wheels” (.whl files). A .whl file is a type of ZIP file which contains a built distribution with pre-compiled binaries for a particular operating system and instruction set architecture. To make your deployment package compatible with Lambda, you install the wheel for Linux operating systems and your function’s instruction set architecture. 

 Some packages may only be available as source distributions. For these packages, you need to compile and build the C/C++ components yourself. 

 To see what distributions are available for your required package, do the following: 

1. Search for the name of the package on the [Python Package Index main page](https://pypi.org/).

1. Choose the version of the package you want to use.

1. Choose **Download files**.

### Working with built distributions (wheels)
<a name="python-package-wheels"></a>

 To download a wheel that’s compatible with Lambda, you use the pip `--platform` option. 

 If your Lambda function uses the **x86_64** instruction set architecture, run the following `pip install` command to install a compatible wheel in your `package` directory. Replace `3.x` in the `--python-version` option with the version of the Python runtime you are using. 

```
pip install \
--platform manylinux2014_x86_64 \
--target=package \
--implementation cp \
--python-version 3.x \
--only-binary=:all: --upgrade \
<package_name>
```

 If your function uses the **arm64** instruction set architecture, run the following command. Replace `3.x` in the `--python-version` option with the version of the Python runtime you are using. 

```
pip install \
--platform manylinux2014_aarch64 \
--target=package \
--implementation cp \
--python-version 3.x \
--only-binary=:all: --upgrade \
<package_name>
```

### Working with source distributions
<a name="python-package-source-dist"></a>

 If your package is only available as a source distribution, you need to build the C/C++ libraries yourself. To make your package compatible with the Lambda execution environment, you need to build it in an environment that uses the same Amazon Linux operating system. You can do this by building your package in an Amazon Elastic Compute Cloud (Amazon EC2) Linux instance. 

 To learn how to launch and connect to an Amazon EC2 Linux instance, see [Get started with Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) in the *Amazon EC2 User Guide*. 

## Creating and updating Python Lambda functions using .zip files
<a name="python-package-create-update"></a>

 After you have created your .zip deployment package, you can use it to create a new Lambda function or update an existing one. You can deploy your .zip package using the Lambda console, the AWS Command Line Interface, and the Lambda API. You can also create and update Lambda functions using AWS Serverless Application Model (AWS SAM) and CloudFormation. 

The maximum size for a .zip deployment package for Lambda is 250 MB (unzipped). Note that this limit applies to the combined size of all the files you upload, including any Lambda layers.
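To check a package against the unzipped limit before deploying, here is a short sketch using Python's standard `zipfile` module; the function name is illustrative.

```python
import zipfile

def unzipped_size_mb(zip_path):
    """Total uncompressed size of a .zip package, in MB."""
    with zipfile.ZipFile(zip_path) as package:
        # file_size is the uncompressed size of each archive member
        return sum(info.file_size for info in package.infolist()) / (1024 * 1024)
```

Remember that the 250 MB limit applies to this size plus the unzipped size of any layers the function uses.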

The Lambda runtime needs permission to read the files in your deployment package. In Linux permissions octal notation, Lambda needs 644 permissions for non-executable files (rw-r--r--) and 755 permissions (rwxr-xr-x) for directories and executable files.

On Linux and macOS, use the `chmod` command to change file permissions on files and directories in your deployment package. For example, to give a non-executable file the correct permissions, run the following command.

```
chmod 644 <filepath>
```
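To normalize permissions across an entire package before zipping, one common approach is to run `find` from the package root, as in the following sketch. Note that the second command clears the execute bit on every regular file, so restore 755 afterward on any scripts your function needs to execute.

```shell
# Give directories 755 and regular files 644, recursively
find . -type d -exec chmod 755 {} +
find . -type f -exec chmod 644 {} +
```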

To change file permissions in Windows, see [Set, View, Change, or Remove Permissions on an Object](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731667(v=ws.10)) in the Microsoft Windows documentation.

**Note**  
If you don't grant Lambda the permissions it needs to access directories in your deployment package, Lambda sets the permissions for those directories to 755 (rwxr-xr-x).

### Creating and updating functions with .zip files using the console
<a name="python-package-create-console"></a>

 To create a new function, you must first create the function in the console, then upload your .zip archive. To update an existing function, open the page for your function, then follow the same procedure to add your updated .zip file. 

 If your .zip file is less than 50MB, you can create or update a function by uploading the file directly from your local machine. For .zip files greater than 50MB, you must upload your package to an Amazon S3 bucket first. For instructions on how to upload a file to an Amazon S3 bucket using the AWS Management Console, see [Getting started with Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html). To upload files using the AWS CLI, see [Move objects](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-objects-move) in the *AWS CLI User Guide*. 

**Note**  
You cannot change the [deployment package type](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html#lambda-CreateFunction-request-PackageType) (.zip or container image) for an existing function. For example, you cannot convert a container image function to use a .zip file archive. You must create a new function.

**To create a new function (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console and choose **Create Function**.

1. Choose **Author from scratch**.

1. Under **Basic information**, do the following:

   1. For **Function name**, enter the name for your function.

   1. For **Runtime**, select the runtime you want to use.

   1. (Optional) For **Architecture**, choose the instruction set architecture for your function. The default architecture is x86_64. Ensure that the .zip deployment package for your function is compatible with the instruction set architecture you select.

1. (Optional) Under **Permissions**, expand **Change default execution role**. You can create a new **Execution role** or use an existing one.

1. Choose **Create function**. Lambda creates a basic 'Hello world' function using your chosen runtime.

**To upload a .zip archive from your local machine (console)**

1. In the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console, choose the function you want to upload the .zip file for.

1. Select the **Code** tab.

1. In the **Code source** pane, choose **Upload from**.

1. Choose **.zip file**.

1. To upload the .zip file, do the following:

   1. Select **Upload**, then select your .zip file in the file chooser.

   1. Choose **Open**.

   1. Choose **Save**.

**To upload a .zip archive from an Amazon S3 bucket (console)**

1. In the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console, choose the function you want to upload a new .zip file for.

1. Select the **Code** tab.

1. In the **Code source** pane, choose **Upload from**.

1. Choose **Amazon S3 location**.

1. Paste the Amazon S3 link URL of your .zip file and choose **Save**.

### Updating .zip file functions using the console code editor
<a name="python-package-console-edit"></a>

 For some functions with .zip deployment packages, you can use the Lambda console’s built-in code editor to update your function code directly. To use this feature, your function must meet the following criteria: 
+ Your function must use one of the interpreted language runtimes (Python, Node.js, or Ruby).
+ Your function’s deployment package must be smaller than 50 MB (unzipped).

Function code for functions with container image deployment packages cannot be edited directly in the console.

**To update function code using the console code editor**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console and select your function.

1. Select the **Code** tab.

1. In the **Code source** pane, select your source code file and edit it in the integrated code editor.

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

### Creating and updating functions with .zip files using the AWS CLI
<a name="python-package-create-cli"></a>

 You can use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) to create a new function or to update an existing one using a .zip file. Use the [create-function](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) and [update-function-code](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html) commands to deploy your .zip package. If your .zip file is smaller than 50MB, you can upload the .zip package from a file location on your local build machine. For larger files, you must upload your .zip package from an Amazon S3 bucket. For instructions on how to upload a file to an Amazon S3 bucket using the AWS CLI, see [Move objects](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-objects-move) in the *AWS CLI User Guide*. 

**Note**  
If you upload your .zip file from an Amazon S3 bucket using the AWS CLI, the bucket must be located in the same AWS Region as your function.

 To create a new function using a .zip file with the AWS CLI, you must specify the following: 
+ The name of your function (`--function-name`)
+ Your function’s runtime (`--runtime`)
+ The Amazon Resource Name (ARN) of your function’s [execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) (`--role`)
+ The name of the handler method in your function code (`--handler`)

 You must also specify the location of your .zip file. If your .zip file is located in a folder on your local build machine, use the `--zip-file` option to specify the file path, as shown in the following example command. 

```
aws lambda create-function --function-name myFunction \
--runtime python3.14 --handler lambda_function.lambda_handler \
--role arn:aws:iam::111122223333:role/service-role/my-lambda-role \
--zip-file fileb://myFunction.zip
```

 To specify the location of a .zip file in an Amazon S3 bucket, use the `--code` option as shown in the following example command. You only need to use the `S3ObjectVersion` parameter for versioned objects. 

```
aws lambda create-function --function-name myFunction \
--runtime python3.14 --handler lambda_function.lambda_handler \
--role arn:aws:iam::111122223333:role/service-role/my-lambda-role \
--code S3Bucket=amzn-s3-demo-bucket,S3Key=myFileName.zip,S3ObjectVersion=myObjectVersion
```

 To update an existing function using the CLI, you specify the name of your function using the `--function-name` parameter. You must also specify the location of the .zip file you want to use to update your function code. If your .zip file is located in a folder on your local build machine, use the `--zip-file` option to specify the file path, as shown in the following example command. 

```
aws lambda update-function-code --function-name myFunction \
--zip-file fileb://myFunction.zip
```

 To specify the location of a .zip file in an Amazon S3 bucket, use the `--s3-bucket` and `--s3-key` options as shown in the following example command. You only need to use the `--s3-object-version` parameter for versioned objects. 

```
aws lambda update-function-code --function-name myFunction \
--s3-bucket amzn-s3-demo-bucket --s3-key myFileName.zip --s3-object-version myObjectVersion
```

### Creating and updating functions with .zip files using the Lambda API
<a name="python-package-create-api"></a>

 To create and update functions using a .zip file archive, use the following API operations: 
+ [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html)
+ [UpdateFunctionCode](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionCode.html)

### Creating and updating functions with .zip files using AWS SAM
<a name="python-package-create-sam"></a>

 The AWS Serverless Application Model (AWS SAM) is a toolkit that helps streamline the process of building and running serverless applications on AWS. You define the resources for your application in a YAML or JSON template and use the AWS SAM command line interface (AWS SAM CLI) to build, package, and deploy your applications. When you build a Lambda function from an AWS SAM template, AWS SAM automatically creates a .zip deployment package or container image with your function code and any dependencies you specify. To learn more about using AWS SAM to build and deploy Lambda functions, see [Getting started with AWS SAM](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started.html) in the *AWS Serverless Application Model Developer Guide*.

You can also use AWS SAM to create a Lambda function using an existing .zip file archive. To create a Lambda function using AWS SAM, you can save your .zip file in an Amazon S3 bucket or in a local folder on your build machine. For instructions on how to upload a file to an Amazon S3 bucket using the AWS CLI, see [Move objects](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-objects-move) in the *AWS CLI User Guide*. 

 In your AWS SAM template, the `AWS::Serverless::Function` resource specifies your Lambda function. In this resource, set the following properties to create a function using a .zip file archive: 
+ `PackageType` - set to `Zip`
+ `CodeUri` - set to the function code's Amazon S3 URI, path to local folder, or [FunctionCode](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-functioncode.html) object
+ `Runtime` - Set to your chosen runtime
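Putting these properties together, a minimal `AWS::Serverless::Function` resource might look like the following sketch; the logical ID, handler name, and `CodeUri` path are illustrative.

```
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Zip
    CodeUri: ./my_function/
    Handler: lambda_function.lambda_handler
    Runtime: python3.14
```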

 With AWS SAM, if your .zip file is larger than 50MB, you don’t need to upload it to an Amazon S3 bucket first. AWS SAM can upload .zip packages up to the maximum allowed size of 250MB (unzipped) from a location on your local build machine. 

 To learn more about deploying functions using .zip file in AWS SAM, see [AWS::Serverless::Function](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html) in the *AWS SAM Developer Guide*. 

### Creating and updating functions with .zip files using CloudFormation
<a name="python-package-create-cfn"></a>

 You can use CloudFormation to create a Lambda function using a .zip file archive. To create a Lambda function from a .zip file, you must first upload your file to an Amazon S3 bucket. For instructions on how to upload a file to an Amazon S3 bucket using the AWS CLI, see [Move objects](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-objects-move) in the *AWS CLI User Guide*. 

For Node.js and Python runtimes, you can also provide inline source code in your CloudFormation template. CloudFormation then creates a .zip file containing your code when you build your function. 

**Using an existing .zip file**

In your CloudFormation template, the `AWS::Lambda::Function` resource specifies your Lambda function. In this resource, set the following properties to create a function using a .zip file archive:
+ `PackageType` - Set to `Zip`
+ `Code` - Enter the Amazon S3 bucket name and the .zip file name in the `S3Bucket` and `S3Key` fields
+ `Runtime` - Set to your chosen runtime

**Creating a .zip file from inline code**

You can declare simple functions written in Python or Node.js inline in a CloudFormation template. Because the code is embedded in YAML or JSON, you can't add any external dependencies to your deployment package. This means your function has to use the version of the AWS SDK that's included in the runtime. The requirements of the template, such as having to escape certain characters, also make it harder to use your IDE's syntax checking and code completion features. This means that your template might require additional testing. Because of these limitations, declaring functions inline is best suited for very simple code that does not change frequently. 

To create a .zip file from inline code for Node.js and Python runtimes, set the following properties in your template’s `AWS::Lambda::Function` resource:
+ `PackageType` - Set to `Zip`
+ `Code` - Enter your function code in the `ZipFile` field
+ `Runtime` - Set to your chosen runtime
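As a sketch, a template with inline code might define the function as follows; the logical ID, role ARN, and handler code are illustrative. CloudFormation writes Python inline code to a file named `index.py`, so the handler is `index.lambda_handler`.

```
MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    PackageType: Zip
    Runtime: python3.14
    Handler: index.lambda_handler
    Role: arn:aws:iam::111122223333:role/service-role/my-lambda-role
    Code:
      ZipFile: |
        def lambda_handler(event, context):
            return "Hello from inline code!"
```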

 The .zip file that CloudFormation generates cannot exceed 4MB. To learn more about deploying functions using .zip file in CloudFormation, see [AWS::Lambda::Function](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html) in the *CloudFormation User Guide*. 

# Deploy Python Lambda functions with container images
<a name="python-image"></a>

There are three ways to build a container image for a Python Lambda function:
+ [Using an AWS base image for Python](#python-image-instructions)

  The [AWS base images](images-create.md#runtimes-images-lp) are preloaded with a language runtime, a runtime interface client to manage the interaction between Lambda and your function code, and a runtime interface emulator for local testing.
+ [Using an AWS OS-only base image](images-create.md#runtimes-images-provided)

  [AWS OS-only base images](https://gallery.ecr.aws/lambda/provided) contain an Amazon Linux distribution and the [runtime interface emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator/). These images are commonly used to create container images for compiled languages, such as [Go](go-image.md#go-image-provided) and [Rust](lambda-rust.md), and for a language or language version that Lambda doesn't provide a base image for, such as Node.js 19. You can also use OS-only base images to implement a [custom runtime](runtimes-custom.md). To make the image compatible with Lambda, you must include the [runtime interface client for Python](#python-image-clients) in the image.
+ [Using a non-AWS base image](#python-image-clients)

  You can use an alternative base image from another container registry, such as Alpine Linux or Debian. You can also use a custom image created by your organization. To make the image compatible with Lambda, you must include the [runtime interface client for Python](#python-image-clients) in the image.

**Tip**  
To reduce the time it takes for Lambda container functions to become active, see [Use multi-stage builds](https://docs.docker.com/build/building/multi-stage/) in the Docker documentation. To build efficient container images, follow the [Best practices for writing Dockerfiles](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).

This page explains how to build, test, and deploy container images for Lambda.

**Topics**
+ [AWS base images for Python](#python-image-base)
+ [Using an AWS base image for Python](#python-image-instructions)
+ [Using an alternative base image with the runtime interface client](#python-image-clients)

## AWS base images for Python
<a name="python-image-base"></a>

AWS provides the following base images for Python:


| Tags | Runtime | Operating system | Dockerfile | Deprecation | 
| --- | --- | --- | --- | --- | 
| 3.14 | Python 3.14 | Amazon Linux 2023 | [Dockerfile for Python 3.14 on GitHub](https://github.com/aws/aws-lambda-base-images/blob/python3.14/Dockerfile.python3.14) |   Jun 30, 2029   | 
| 3.13 | Python 3.13 | Amazon Linux 2023 | [Dockerfile for Python 3.13 on GitHub](https://github.com/aws/aws-lambda-base-images/blob/python3.13/Dockerfile.python3.13) |   Jun 30, 2029   | 
| 3.12 | Python 3.12 | Amazon Linux 2023 | [Dockerfile for Python 3.12 on GitHub](https://github.com/aws/aws-lambda-base-images/blob/python3.12/Dockerfile.python3.12) |   Oct 31, 2028   | 
| 3.11 | Python 3.11 | Amazon Linux 2 | [Dockerfile for Python 3.11 on GitHub](https://github.com/aws/aws-lambda-base-images/blob/python3.11/Dockerfile.python3.11) |   Jun 30, 2027   | 
| 3.10 | Python 3.10 | Amazon Linux 2 | [Dockerfile for Python 3.10 on GitHub](https://github.com/aws/aws-lambda-base-images/blob/python3.10/Dockerfile.python3.10) |   Oct 31, 2026   | 

Amazon ECR repository: [gallery.ecr.aws/lambda/python](https://gallery.ecr.aws/lambda/python)

Python 3.12 and later base images are based on the [Amazon Linux 2023 minimal container image](https://docs.aws.amazon.com/linux/al2023/ug/minimal-container.html). The Python 3.8-3.11 base images are based on the Amazon Linux 2 image. AL2023-based images provide several advantages over Amazon Linux 2, including a smaller deployment footprint and updated versions of libraries such as `glibc`.

AL2023-based images use `microdnf` (symlinked as `dnf`) as the package manager instead of `yum`, which is the default package manager in Amazon Linux 2. `microdnf` is a standalone implementation of `dnf`. For a list of packages that are included in AL2023-based images, refer to the **Minimal Container** columns in [Comparing packages installed on Amazon Linux 2023 Container Images](https://docs.aws.amazon.com/linux/al2023/ug/al2023-container-image-types.html). For more information about the differences between AL2023 and Amazon Linux 2, see [Introducing the Amazon Linux 2023 runtime for AWS Lambda](https://aws.amazon.com/blogs/compute/introducing-the-amazon-linux-2023-runtime-for-aws-lambda/) on the AWS Compute Blog.
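For example, to add an OS package when building on an AL2023-based image, a Dockerfile might install it with `dnf` as sketched below; the package name is illustrative.

```
FROM public.ecr.aws/lambda/python:3.12

# microdnf is symlinked as dnf in AL2023-based images
RUN dnf install -y gcc
```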

**Note**  
To run AL2023-based images locally, including with AWS Serverless Application Model (AWS SAM), you must use Docker version 20.10.10 or later.

### Dependency search path in the base images
<a name="python-image-searchpath"></a>

When you use an `import` statement in your code, the Python runtime searches the directories in its search path until it finds the module or package. By default, the runtime searches the `{LAMBDA_TASK_ROOT}` directory first. If you include a version of a runtime-included library in your image, your version will take precedence over the version that's included in the runtime.

Other steps in the search path depend on which version of the Lambda base image for Python you're using:
+ **Python 3.11 and later**: Runtime-included libraries and pip-installed libraries are installed in the `/var/lang/lib/python3.11/site-packages` directory. This directory has precedence over `/var/runtime` in the search path. You can override the SDK by using pip to install a newer version. You can use pip to verify that the runtime-included SDK and its dependencies are compatible with any packages that you install.
+ **Python 3.8-3.10**: Runtime-included libraries are installed in the `/var/runtime` directory. Pip-installed libraries are installed in the `/var/lang/lib/python3.x/site-packages` directory. The `/var/runtime` directory has precedence over `/var/lang/lib/python3.x/site-packages` in the search path.

You can see the full search path for your Lambda function by adding the following code snippet.

```
import sys
      
search_path = sys.path
print(search_path)
```

## Using an AWS base image for Python
<a name="python-image-instructions"></a>

### Prerequisites
<a name="python-image-prerequisites"></a>

To complete the steps in this section, you must have the following:
+ [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [Docker](https://docs.docker.com/get-docker) (minimum version 25.0.0)
+ The Docker [buildx plugin](https://github.com/docker/buildx/blob/master/README.md).
+ Python

### Creating an image from a base image
<a name="python-image-create"></a>

**To create a container image from an AWS base image for Python**

1. Create a directory for the project, and then switch to that directory.

   ```
   mkdir example
   cd example
   ```

1. Create a new file called `lambda_function.py`. You can add the following sample function code to the file for testing, or use your own.  
**Example Python function**  

   ```
   import sys
   def handler(event, context):
       return 'Hello from AWS Lambda using Python' + sys.version + '!'
   ```

1. Create a new file called `requirements.txt`. If you're using the sample function code from the previous step, you can leave the file empty because there are no dependencies. Otherwise, list each required library. For example, here's what your `requirements.txt` should look like if your function uses the AWS SDK for Python (Boto3):  
**Example requirements.txt**  

   ```
   boto3
   ```

1. Create a new Dockerfile with the following configuration:
   + Set the `FROM` property to the [URI of the base image](https://gallery.ecr.aws/lambda/python/).
   + Use the COPY command to copy the function code and runtime dependencies to `{LAMBDA_TASK_ROOT}`, a [Lambda-defined environment variable](configuration-envvars.md#configuration-envvars-runtime).
   + Set the `CMD` argument to the Lambda function handler.

   Note that the example Dockerfile does not include a [USER instruction](https://docs.docker.com/reference/dockerfile/#user). When you deploy a container image to Lambda, Lambda automatically defines a default Linux user with least-privileged permissions. This is different from standard Docker behavior which defaults to the `root` user when no `USER` instruction is provided.  
**Example Dockerfile**  

   ```
   FROM public.ecr.aws/lambda/python:3.12
   
   # Copy requirements.txt
   COPY requirements.txt ${LAMBDA_TASK_ROOT}
   
   # Install the specified packages
   RUN pip install -r requirements.txt
   
   # Copy function code
   COPY lambda_function.py ${LAMBDA_TASK_ROOT}
   
   # Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
   CMD [ "lambda_function.handler" ]
   ```

1. Build the Docker image with the [docker buildx build](https://docs.docker.com/engine/reference/commandline/build/) command. The following example names the image `docker-image` and gives it the `test` [tag](https://docs.docker.com/engine/reference/commandline/build/#tag). To make your image compatible with Lambda, you must use the `--provenance=false` option.

   ```
   docker buildx build --platform linux/amd64 --provenance=false -t docker-image:test .
   ```
**Note**  
The command specifies the `--platform linux/amd64` option to ensure that your container is compatible with the Lambda execution environment regardless of the architecture of your build machine. If you intend to create a Lambda function using the ARM64 instruction set architecture, be sure to change the command to use the `--platform linux/arm64` option instead.
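
Because the sample handler is plain Python, you can also smoke-test it directly, independently of Docker, before or after building the image. The following is a minimal sketch using a handler equivalent to the sample above (with a space added before the version string for readability); Lambda normally supplies the event and context arguments.

```python
import sys

# Handler equivalent to the sample function above.
def handler(event, context):
    return 'Hello from AWS Lambda using Python ' + sys.version + '!'

# Lambda passes an event dict and a context object. The sample handler
# ignores both, so an empty dict and None suffice for a local check.
result = handler({}, None)
print(result)
```

This only verifies the function logic; use the runtime interface emulator steps that follow to test the full container invocation path.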

### (Optional) Test the image locally
<a name="python-image-test"></a>

1. Start the Docker image with the **docker run** command. In this example, `docker-image` is the image name and `test` is the tag.

   ```
   docker run --platform linux/amd64 -p 9000:8080 docker-image:test
   ```

   This command runs the image as a container and creates a local endpoint at `localhost:9000/2015-03-31/functions/function/invocations`.
**Note**  
If you built the Docker image for the ARM64 instruction set architecture, be sure to use the `--platform linux/arm64` option instead of `--platform linux/amd64`.

1. From a new terminal window, post an event to the local endpoint.

------
#### [ Linux/macOS ]

   In Linux and macOS, run the following `curl` command:

   ```
   curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
   ```

   This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

   ```
   curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'
   ```

------
#### [ PowerShell ]

   In PowerShell, run the following `Invoke-WebRequest` command:

   ```
   Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{}' -ContentType "application/json"
   ```

   This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

   ```
   Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{"payload":"hello world!"}' -ContentType "application/json"
   ```

------

1. Get the container ID.

   ```
   docker ps
   ```

1. Use the [docker kill](https://docs.docker.com/engine/reference/commandline/kill/) command to stop the container. In this command, replace `3766c4ab331c` with the container ID from the previous step.

   ```
   docker kill 3766c4ab331c
   ```

### Deploying the image
<a name="python-image-deploy"></a>

**To upload the image to Amazon ECR and create the Lambda function**

1. Run the [get-login-password](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/get-login-password.html) command to authenticate the Docker CLI to your Amazon ECR registry.
   + Set the `--region` value to the AWS Region where you want to create the Amazon ECR repository.
   + Replace `111122223333` with your AWS account ID.

   ```
   aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
   ```

1. Create a repository in Amazon ECR using the [create-repository](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/create-repository.html) command.

   ```
   aws ecr create-repository --repository-name hello-world --region us-east-1 --image-scanning-configuration scanOnPush=true --image-tag-mutability MUTABLE
   ```
**Note**  
The Amazon ECR repository must be in the same AWS Region as the Lambda function.

   If successful, you see a response like this:

   ```
   {
       "repository": {
           "repositoryArn": "arn:aws:ecr:us-east-1:111122223333:repository/hello-world",
           "registryId": "111122223333",
           "repositoryName": "hello-world",
           "repositoryUri": "111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world",
           "createdAt": "2023-03-09T10:39:01+00:00",
           "imageTagMutability": "MUTABLE",
           "imageScanningConfiguration": {
               "scanOnPush": true
           },
           "encryptionConfiguration": {
               "encryptionType": "AES256"
           }
       }
   }
   ```

1. Copy the `repositoryUri` from the output in the previous step.

1. Run the [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) command to tag your local image into your Amazon ECR repository as the latest version. In this command:
   + `docker-image:test` is the name and [tag](https://docs.docker.com/engine/reference/commandline/build/#tag) of your Docker image. This is the image name and tag that you specified in the `docker build` command.
   + Replace `<ECRrepositoryUri>` with the `repositoryUri` that you copied. Make sure to include `:latest` at the end of the URI.

   ```
   docker tag docker-image:test <ECRrepositoryUri>:latest
   ```

   Example:

   ```
   docker tag docker-image:test 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
   ```

1. Run the [docker push](https://docs.docker.com/engine/reference/commandline/push/) command to deploy your local image to the Amazon ECR repository. Make sure to include `:latest` at the end of the repository URI.

   ```
   docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
   ```

1. [Create an execution role](lambda-intro-execution-role.md#permissions-executionrole-api) for the function, if you don't already have one. You need the Amazon Resource Name (ARN) of the role in the next step.

1. Create the Lambda function. For `ImageUri`, specify the repository URI from earlier. Make sure to include `:latest` at the end of the URI.

   ```
   aws lambda create-function \
     --function-name hello-world \
     --package-type Image \
     --code ImageUri=111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest \
     --role arn:aws:iam::111122223333:role/lambda-ex
   ```
**Note**  
You can create a function using an image in a different AWS account, as long as the image is in the same Region as the Lambda function. For more information, see [Amazon ECR cross-account permissions](images-create.md#configuration-images-xaccount-permissions).

1. Invoke the function.

   ```
   aws lambda invoke --function-name hello-world response.json
   ```

   You should see a response like this:

   ```
   {
     "ExecutedVersion": "$LATEST", 
     "StatusCode": 200
   }
   ```

1. To see the output of the function, check the `response.json` file.

To update the function code, you must build the image again, upload the new image to the Amazon ECR repository, and then use the [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) command to deploy the image to the Lambda function.

Lambda resolves the image tag to a specific image digest. This means that if you point the image tag that was used to deploy the function to a new image in Amazon ECR, Lambda doesn't automatically update the function to use the new image.

To deploy the new image to the same Lambda function, you must use the [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) command, even if the image tag in Amazon ECR remains the same. In the following example, the `--publish` option creates a new version of the function using the updated container image.

```
aws lambda update-function-code \
  --function-name hello-world \
  --image-uri 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest \
  --publish
```

## Using an alternative base image with the runtime interface client
<a name="python-image-clients"></a>

If you use an [OS-only base image](images-create.md#runtimes-images-provided) or an alternative base image, you must include the runtime interface client in your image. The runtime interface client extends the [Runtime API](runtimes-api.md), which manages the interaction between Lambda and your function code.

Install the [runtime interface client for Python](https://pypi.org/project/awslambdaric) using the pip package manager:

```
pip install awslambdaric
```

You can also download the [Python runtime interface client](https://github.com/aws/aws-lambda-python-runtime-interface-client/) from GitHub.

The following example demonstrates how to build a container image for Python using a non-AWS base image. The example Dockerfile uses an official Python base image. The Dockerfile includes the runtime interface client for Python.

### Prerequisites
<a name="python-alt-prerequisites"></a>

To complete the steps in this section, you must have the following:
+ [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [Docker](https://docs.docker.com/get-docker) (minimum version 25.0.0)
+ The Docker [buildx plugin](https://github.com/docker/buildx/blob/master/README.md).
+ Python

### Creating an image from an alternative base image
<a name="python-alt-create"></a>

**To create a container image from a non-AWS base image**

1. Create a directory for the project, and then switch to that directory.

   ```
   mkdir example
   cd example
   ```

1. Create a new file called `lambda_function.py`. You can add the following sample function code to the file for testing, or use your own.  
**Example Python function**  

   ```
   import sys
   def handler(event, context):
       return 'Hello from AWS Lambda using Python ' + sys.version + '!'
   ```

1. Create a new file called `requirements.txt`. If you're using the sample function code from the previous step, you can leave the file empty because there are no dependencies. Otherwise, list each required library. For example, here's what your `requirements.txt` should look like if your function uses the AWS SDK for Python (Boto3):  
**Example requirements.txt**  

   ```
   boto3
   ```

1. Create a new Dockerfile. The following Dockerfile uses an official Python base image instead of an [AWS base image](images-create.md#runtimes-images-lp). The Dockerfile includes the [runtime interface client](https://pypi.org/project/awslambdaric), which makes the image compatible with Lambda. The following example Dockerfile uses a [multi-stage build](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#use-multi-stage-builds).
   + Set the `FROM` property to the base image.
   + Set the `ENTRYPOINT` to the module that you want the Docker container to run when it starts. In this case, the module is the runtime interface client.
   + Set the `CMD` to the Lambda function handler.

   Note that the example Dockerfile does not include a [USER instruction](https://docs.docker.com/reference/dockerfile/#user). When you deploy a container image to Lambda, Lambda automatically defines a default Linux user with least-privileged permissions. This differs from standard Docker behavior, which defaults to the `root` user when no `USER` instruction is provided.  
**Example Dockerfile**  

   ```
   # Define custom function directory
   ARG FUNCTION_DIR="/function"
   
   FROM python:3.12 AS build-image
   
   # Include global arg in this stage of the build
   ARG FUNCTION_DIR
   
   # Copy function code
   RUN mkdir -p ${FUNCTION_DIR}
   COPY . ${FUNCTION_DIR}
   
   # Install the function's dependencies
   RUN pip install \
       --target ${FUNCTION_DIR} \
           awslambdaric
   
   # Use a slim version of the base Python image to reduce the final image size
   FROM python:3.12-slim
   
   # Include global arg in this stage of the build
   ARG FUNCTION_DIR
   # Set working directory to function root directory
   WORKDIR ${FUNCTION_DIR}
   
   # Copy in the built dependencies
   COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
   
   # Set runtime interface client as default command for the container runtime
   ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
   # Pass the name of the function handler as an argument to the runtime
   CMD [ "lambda_function.handler" ]
   ```

1. Build the Docker image with the [docker buildx build](https://docs.docker.com/engine/reference/commandline/build/) command. The following example names the image `docker-image` and gives it the `test` [tag](https://docs.docker.com/engine/reference/commandline/build/#tag). To make your image compatible with Lambda, you must use the `--provenance=false` option.

   ```
   docker buildx build --platform linux/amd64 --provenance=false -t docker-image:test .
   ```
**Note**  
The command specifies the `--platform linux/amd64` option to ensure that your container is compatible with the Lambda execution environment regardless of the architecture of your build machine. If you intend to create a Lambda function using the ARM64 instruction set architecture, be sure to change the command to use the `--platform linux/arm64` option instead.

### (Optional) Test the image locally
<a name="python-alt-test"></a>

Use the [runtime interface emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator/) to locally test the image. You can [build the emulator into your image](https://github.com/aws/aws-lambda-runtime-interface-emulator/?tab=readme-ov-file#build-rie-into-your-base-image) or use the following procedure to install it on your local machine.

**To install and run the runtime interface emulator on your local machine**

1. From your project directory, run the following command to download the runtime interface emulator (x86-64 architecture) from GitHub and install it on your local machine.

------
#### [ Linux/macOS ]

   ```
   mkdir -p ~/.aws-lambda-rie && \
       curl -Lo ~/.aws-lambda-rie/aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && \
       chmod +x ~/.aws-lambda-rie/aws-lambda-rie
   ```

   To install the arm64 emulator, replace the GitHub download URL in the previous command with the following:

   ```
   https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie-arm64
   ```

------
#### [ PowerShell ]

   ```
   $dirPath = "$HOME\.aws-lambda-rie"
   if (-not (Test-Path $dirPath)) {
       New-Item -Path $dirPath -ItemType Directory
   }
         
   $downloadLink = "https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie"
   $destinationPath = "$HOME\.aws-lambda-rie\aws-lambda-rie"
   Invoke-WebRequest -Uri $downloadLink -OutFile $destinationPath
   ```

   To install the arm64 emulator, replace the `$downloadLink` value with the following:

   ```
   https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie-arm64
   ```

------

1. Start the Docker image with the **docker run** command. Note the following:
   + `docker-image` is the image name and `test` is the tag.
   + `/usr/local/bin/python -m awslambdaric lambda_function.handler` is the `ENTRYPOINT` followed by the `CMD` from your Dockerfile.

------
#### [ Linux/macOS ]

   ```
   docker run --platform linux/amd64 -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
       --entrypoint /aws-lambda/aws-lambda-rie \
       docker-image:test \
           /usr/local/bin/python -m awslambdaric lambda_function.handler
   ```

------
#### [ PowerShell ]

   ```
   docker run --platform linux/amd64 -d -v "$HOME\.aws-lambda-rie:/aws-lambda" -p 9000:8080 `
   --entrypoint /aws-lambda/aws-lambda-rie `
   docker-image:test `
       /usr/local/bin/python -m awslambdaric lambda_function.handler
   ```

------

   This command runs the image as a container and creates a local endpoint at `localhost:9000/2015-03-31/functions/function/invocations`.
**Note**  
If you built the Docker image for the ARM64 instruction set architecture, be sure to use the `--platform linux/arm64` option instead of `--platform linux/amd64`.

1. Post an event to the local endpoint.

------
#### [ Linux/macOS ]

   In Linux and macOS, run the following `curl` command:

   ```
   curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
   ```

   This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

   ```
   curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'
   ```

------
#### [ PowerShell ]

   In PowerShell, run the following `Invoke-WebRequest` command:

   ```
   Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{}' -ContentType "application/json"
   ```

   This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

   ```
   Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{"payload":"hello world!"}' -ContentType "application/json"
   ```

------

1. Get the container ID.

   ```
   docker ps
   ```

1. Use the [docker kill](https://docs.docker.com/engine/reference/commandline/kill/) command to stop the container. In this command, replace `3766c4ab331c` with the container ID from the previous step.

   ```
   docker kill 3766c4ab331c
   ```

### Deploying the image
<a name="python-alt-deploy"></a>

**To upload the image to Amazon ECR and create the Lambda function**

1. Run the [get-login-password](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/get-login-password.html) command to authenticate the Docker CLI to your Amazon ECR registry.
   + Set the `--region` value to the AWS Region where you want to create the Amazon ECR repository.
   + Replace `111122223333` with your AWS account ID.

   ```
   aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
   ```

1. Create a repository in Amazon ECR using the [create-repository](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/create-repository.html) command.

   ```
   aws ecr create-repository --repository-name hello-world --region us-east-1 --image-scanning-configuration scanOnPush=true --image-tag-mutability MUTABLE
   ```
**Note**  
The Amazon ECR repository must be in the same AWS Region as the Lambda function.

   If successful, you see a response like this:

   ```
   {
       "repository": {
           "repositoryArn": "arn:aws:ecr:us-east-1:111122223333:repository/hello-world",
           "registryId": "111122223333",
           "repositoryName": "hello-world",
           "repositoryUri": "111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world",
           "createdAt": "2023-03-09T10:39:01+00:00",
           "imageTagMutability": "MUTABLE",
           "imageScanningConfiguration": {
               "scanOnPush": true
           },
           "encryptionConfiguration": {
               "encryptionType": "AES256"
           }
       }
   }
   ```

1. Copy the `repositoryUri` from the output in the previous step.

1. Run the [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) command to tag your local image into your Amazon ECR repository as the latest version. In this command:
   + `docker-image:test` is the name and [tag](https://docs.docker.com/engine/reference/commandline/build/#tag) of your Docker image. This is the image name and tag that you specified in the `docker build` command.
   + Replace `<ECRrepositoryUri>` with the `repositoryUri` that you copied. Make sure to include `:latest` at the end of the URI.

   ```
   docker tag docker-image:test <ECRrepositoryUri>:latest
   ```

   Example:

   ```
   docker tag docker-image:test 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
   ```

1. Run the [docker push](https://docs.docker.com/engine/reference/commandline/push/) command to deploy your local image to the Amazon ECR repository. Make sure to include `:latest` at the end of the repository URI.

   ```
   docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
   ```

1. [Create an execution role](lambda-intro-execution-role.md#permissions-executionrole-api) for the function, if you don't already have one. You need the Amazon Resource Name (ARN) of the role in the next step.

1. Create the Lambda function. For `ImageUri`, specify the repository URI from earlier. Make sure to include `:latest` at the end of the URI.

   ```
   aws lambda create-function \
     --function-name hello-world \
     --package-type Image \
     --code ImageUri=111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest \
     --role arn:aws:iam::111122223333:role/lambda-ex
   ```
**Note**  
You can create a function using an image in a different AWS account, as long as the image is in the same Region as the Lambda function. For more information, see [Amazon ECR cross-account permissions](images-create.md#configuration-images-xaccount-permissions).

1. Invoke the function.

   ```
   aws lambda invoke --function-name hello-world response.json
   ```

   You should see a response like this:

   ```
   {
     "ExecutedVersion": "$LATEST", 
     "StatusCode": 200
   }
   ```

1. To see the output of the function, check the `response.json` file.

To update the function code, you must build the image again, upload the new image to the Amazon ECR repository, and then use the [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) command to deploy the image to the Lambda function.

Lambda resolves the image tag to a specific image digest. This means that if you point the image tag that was used to deploy the function to a new image in Amazon ECR, Lambda doesn't automatically update the function to use the new image.

To deploy the new image to the same Lambda function, you must use the [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) command, even if the image tag in Amazon ECR remains the same. In the following example, the `--publish` option creates a new version of the function using the updated container image.

```
aws lambda update-function-code \
  --function-name hello-world \
  --image-uri 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest \
  --publish
```

For an example of how to create a Python image from an Alpine base image, see [Container image support for Lambda](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/) on the AWS Blog.

# Working with layers for Python Lambda functions
<a name="python-layers"></a>

Use [Lambda layers](chapter-layers.md) to package code and dependencies that you want to reuse across multiple functions. Layers usually contain library dependencies, a [custom runtime](runtimes-custom.md), or configuration files. Creating a layer involves three general steps:

1. Package your layer content. This means creating a .zip file archive that contains the dependencies you want to use in your functions.

1. Create the layer in Lambda.

1. Add the layer to your functions.

**Topics**
+ [Package your layer content](#python-layers-package)
+ [Create the layer in Lambda](#publishing-layer)
+ [Add the layer to your function](#python-layer-adding)
+ [Sample app](#python-layer-sample-app)

## Package your layer content
<a name="python-layers-package"></a>

To create a layer, bundle your packages into a .zip file archive that meets the following requirements:
+ Build the layer using the same Python version that you plan to use for the Lambda function. For example, if you build your layer using Python 3.14, use the Python 3.14 runtime for your function.
+ Your .zip file must include a `python` directory at the root level.
+ The packages in your layer must be compatible with Linux. Lambda functions run on Amazon Linux.

You can create layers that contain either third-party Python libraries installed with `pip` (such as `requests` or `pandas`) or your own Python modules and packages.

### Third-party dependencies
<a name="python-layers-third-party-dependencies"></a>

**To create a layer using pip packages**

1. Choose one of the following methods to install `pip` packages into the required top-level directory (`python/`):

------
#### [ pip install ]

   For pure Python packages (like requests or boto3):

   ```
   pip install requests -t python/
   ```

   Some Python packages, such as NumPy and Pandas, include compiled C components. If you're building a layer with these packages on macOS or Windows, you might need to use this command to install a Linux-compatible wheel:

   ```
   pip install numpy --platform manylinux2014_x86_64 --only-binary=:all: -t python/
   ```

   For more information about working with Python packages that contain compiled components, see [Creating .zip deployment packages with native libraries](python-package.md#python-package-native-libraries).

------
#### [ requirements.txt ]

   Using a `requirements.txt` file helps you manage package versions and ensure consistent installations.

**Example requirements.txt**  

   ```
   requests==2.31.0
   boto3==1.37.34
   numpy==1.26.4
   ```

   If your `requirements.txt` file includes only pure Python packages (like requests or boto3):

   ```
   pip install -r requirements.txt -t python/
   ```

   Some Python packages, such as NumPy and Pandas, include compiled C components. If you're building a layer with these packages on macOS or Windows, you might need to use this command to install a Linux-compatible wheel:

   ```
   pip install -r requirements.txt --platform manylinux2014_x86_64 --only-binary=:all: -t python/
   ```

   For more information about working with Python packages that contain compiled components, see [Creating .zip deployment packages with native libraries](python-package.md#python-package-native-libraries).

------

1. Zip the contents of the `python` directory.

------
#### [ Linux/macOS ]

   ```
   zip -r layer.zip python/
   ```

------
#### [ PowerShell ]

   ```
   Compress-Archive -Path .\python -DestinationPath .\layer.zip
   ```

------

   The directory structure of your .zip file should look like this:

   ```
   python/              # Required top-level directory
   ├── requests/
   ├── boto3/
   ├── numpy/
   └── (dependencies of the other packages)
   ```
**Note**  
If you use a Python virtual environment (venv) to install packages, your directory structure will be different (for example, `python/lib/python3.x/site-packages`). As long as your .zip file includes the `python` directory at the root level, Lambda can locate and import your packages.
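
The reason the top-level `python` directory matters is that Lambda extracts layer content to `/opt` and puts `/opt/python` on `sys.path`, so anything inside that directory becomes importable. The following sketch simulates that mechanism locally with a throwaway directory and a hypothetical `greeting` module:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a throwaway layer-like layout: <tmp>/python/greeting.py
layer_root = Path(tempfile.mkdtemp())
pkg_dir = layer_root / "python"
pkg_dir.mkdir()
(pkg_dir / "greeting.py").write_text(
    "def hello():\n    return 'hi from the layer'\n"
)

# Lambda adds /opt/python to sys.path; inserting the simulated python/
# directory has the same effect, so the module becomes importable.
sys.path.insert(0, str(pkg_dir))
greeting = importlib.import_module("greeting")
print(greeting.hello())  # hi from the layer
```

This is why a venv-style layout still works: as long as a directory containing your packages ends up on `sys.path`, the import succeeds.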

### Custom Python modules
<a name="custom-python-modules"></a>

**To create a layer using your own code**

1. Create the required top-level directory for your layer:

   ```
   mkdir python
   ```

1. Create your Python modules in the `python` directory. The following example module validates orders by confirming that they contain the required information.  
**Example custom module: validator.py**  

   ```
   import json
   
   def validate_order(order_data):
       """Validates an order and returns formatted data."""
       required_fields = ['product_id', 'quantity']
       
       # Check required fields
       missing_fields = [field for field in required_fields if field not in order_data]
       if missing_fields:
           raise ValueError(f"Missing required fields: {', '.join(missing_fields)}")
       
       # Validate quantity
       quantity = order_data['quantity']
       if not isinstance(quantity, int) or quantity < 1:
           raise ValueError("Quantity must be a positive integer")
       
       # Format and return the validated data
       return {
           'product_id': str(order_data['product_id']),
           'quantity': quantity,
           'shipping_priority': order_data.get('priority', 'standard')
       }
   
   def format_response(status_code, body):
       """Formats the API response."""
       return {
           'statusCode': status_code,
           'body': json.dumps(body)
       }
   ```

1. Zip the contents of the `python` directory.

------
#### [ Linux/macOS ]

   ```
   zip -r layer.zip python/
   ```

------
#### [ PowerShell ]

   ```
   Compress-Archive -Path .\python -DestinationPath .\layer.zip
   ```

------

   The directory structure of your .zip file should look like this:

   ```
   python/     # Required top-level directory
   └── validator.py
   ```

1. In your function, import and use the modules as you would with any Python package. Example:

   ```
   from validator import validate_order, format_response
   import json
   
   def lambda_handler(event, context):
       try:
           # Parse the order data from the event body
           order_data = json.loads(event.get('body', '{}'))
           
           # Validate and format the order
           validated_order = validate_order(order_data)
           
           return format_response(200, {
               'message': 'Order validated successfully',
               'order': validated_order
           })
       except ValueError as e:
           return format_response(400, {
               'error': str(e)
           })
       except Exception as e:
           return format_response(500, {
               'error': 'Internal server error'
           })
   ```

   You can use the following [test event](testing-functions.md#invoke-with-event) to invoke the function:

   ```
   {
       "body": "{\"product_id\": \"ABC123\", \"quantity\": 2, \"priority\": \"express\"}"
   }
   ```

   Expected response:

   ```
   {
     "statusCode": 200,
     "body": "{\"message\": \"Order validated successfully\", \"order\": {\"product_id\": \"ABC123\", \"quantity\": 2, \"shipping_priority\": \"express\"}}"
   }
   ```
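
You can run the same check locally before packaging and deploying. This sketch inlines minimal stand-ins for the layer's `validator` functions so that it runs without the layer installed; it is a local approximation, not the deployed code path, where the function imports these from the layer.

```python
import json

# Minimal inline stand-ins for the layer's validator module, so this
# sketch is self-contained.
def validate_order(order_data):
    missing = [f for f in ('product_id', 'quantity') if f not in order_data]
    if missing:
        raise ValueError(f"Missing required fields: {', '.join(missing)}")
    quantity = order_data['quantity']
    if not isinstance(quantity, int) or quantity < 1:
        raise ValueError("Quantity must be a positive integer")
    return {
        'product_id': str(order_data['product_id']),
        'quantity': quantity,
        'shipping_priority': order_data.get('priority', 'standard'),
    }

def format_response(status_code, body):
    return {'statusCode': status_code, 'body': json.dumps(body)}

def lambda_handler(event, context):
    try:
        order = validate_order(json.loads(event.get('body', '{}')))
        return format_response(200, {'message': 'Order validated successfully',
                                     'order': order})
    except ValueError as e:
        return format_response(400, {'error': str(e)})

# Same shape as the test event above; the context argument is unused,
# so None stands in for it.
event = {'body': '{"product_id": "ABC123", "quantity": 2, "priority": "express"}'}
response = lambda_handler(event, None)
print(response['statusCode'])  # 200
```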

## Create the layer in Lambda
<a name="publishing-layer"></a>

You can publish your layer using either the AWS CLI or the Lambda console.

------
#### [ AWS CLI ]

Run the [publish-layer-version](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/publish-layer-version.html) AWS CLI command to create the Lambda layer:

```
aws lambda publish-layer-version --layer-name my-layer --zip-file fileb://layer.zip --compatible-runtimes python3.14
```

The [compatible runtimes](https://docs.aws.amazon.com/lambda/latest/api/API_PublishLayerVersion.html#lambda-PublishLayerVersion-request-CompatibleRuntimes) parameter is optional. When specified, Lambda uses this parameter to filter layers in the Lambda console.

------
#### [ Console ]

**To create a layer (console)**

1. Open the [Layers page](https://console.aws.amazon.com/lambda/home#/layers) of the Lambda console.

1. Choose **Create layer**.

1. Choose **Upload a .zip file**, and then upload the .zip archive that you created earlier.

1. (Optional) For **Compatible runtimes**, choose the Python runtime that corresponds to the Python version you used to build your layer.

1. Choose **Create**.

------

## Add the layer to your function
<a name="python-layer-adding"></a>

------
#### [ AWS CLI ]

To attach the layer to your function, run the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) AWS CLI command. For the `--layers` parameter, use the layer ARN. The ARN must specify the version (for example, `arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1`). For more information, see [Layers and layer versions](chapter-layers.md#lambda-layer-versions).

```
aws lambda update-function-configuration --function-name my-function --cli-binary-format raw-in-base64-out --layers "arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"
```

The **cli-binary-format** option is required if you're using AWS CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [AWS CLI supported global command line options](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *AWS Command Line Interface User Guide for Version 2*.

------
#### [ Console ]

**To add a layer to a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the function.

1. Scroll down to the **Layers** section, and then choose **Add a layer**.

1. Under **Choose a layer**, select **Custom layers**, and then choose your layer.
**Note**  
If you didn't add a [compatible runtime](https://docs.aws.amazon.com/lambda/latest/api/API_PublishLayerVersion.html#lambda-PublishLayerVersion-request-CompatibleRuntimes) when you created the layer, your layer won't be listed here. You can specify the layer ARN instead.

1. Choose **Add**.

------

## Sample app
<a name="python-layer-sample-app"></a>

For more examples of how to use Lambda layers, see the [layer-python](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/layer-python) sample application in the AWS Lambda Developer Guide GitHub repository. This application includes two layers that contain Python libraries. After creating the layers, you can deploy and invoke the corresponding functions to confirm that the layers work as expected.

# Using the Lambda context object to retrieve Python function information
<a name="python-context"></a>

When Lambda runs your function, it passes a context object to the [handler](python-handler.md). This object provides methods and properties that provide information about the invocation, function, and execution environment. For more information on how the context object is passed to the function handler, see [Define Lambda function handler in Python](python-handler.md).

**Context methods**
+ `get_remaining_time_in_millis` – Returns the number of milliseconds left before the execution times out.

**Context properties**
+ `function_name` – The name of the Lambda function.
+ `function_version` – The [version](configuration-versions.md) of the function.
+ `invoked_function_arn` – The Amazon Resource Name (ARN) that's used to invoke the function. Indicates if the invoker specified a version number or alias.
+ `memory_limit_in_mb` – The amount of memory that's allocated for the function.
+ `aws_request_id` – The identifier of the invocation request.
+ `log_group_name` – The log group for the function.
+ `log_stream_name` – The log stream for the function instance.
+ `identity` – (mobile apps) Information about the Amazon Cognito identity that authorized the request.
  + `cognito_identity_id` – The authenticated Amazon Cognito identity.
  + `cognito_identity_pool_id` – The Amazon Cognito identity pool that authorized the invocation.
+ `client_context` – (mobile apps) Client context that's provided to Lambda by the client application.
  + `client.installation_id`
  + `client.app_title`
  + `client.app_version_name`
  + `client.app_version_code`
  + `client.app_package_name`
  + `custom` – A `dict` of custom values set by the mobile client application.
  + `env` – A `dict` of environment information provided by the AWS SDK.

Powertools for Lambda (Python) provides an interface definition for the Lambda context object. You can use the interface definition for type hints, or to further inspect the structure of the Lambda context object. For the interface definition, see [lambda_context.py](https://github.com/aws-powertools/powertools-lambda-python/blob/develop/aws_lambda_powertools/utilities/typing/lambda_context.py) in the *powertools-lambda-python* repository on GitHub.

The following example shows a handler function that logs context information.

**Example handler.py**  

```
import time

def lambda_handler(event, context):   
    print("Lambda function ARN:", context.invoked_function_arn)
    print("CloudWatch log stream name:", context.log_stream_name)
    print("CloudWatch log group name:",  context.log_group_name)
    print("Lambda Request ID:", context.aws_request_id)
    print("Lambda function memory limits in MB:", context.memory_limit_in_mb)
    # We have added a 1 second delay so you can see the time remaining in get_remaining_time_in_millis.
    time.sleep(1) 
    print("Lambda time remaining in MS:", context.get_remaining_time_in_millis())
```
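When unit testing a handler locally, you can pass in a stand-in object that mimics the context properties and method listed above. The following test double is a hypothetical sketch, not part of any AWS library, and the field values are placeholders:

```python
import time

# Hypothetical stand-in for the Lambda context object, for local unit tests.
# Lambda supplies a real context at invocation time; these values are fakes.
class FakeContext:
    function_name = 'my-function'
    function_version = '$LATEST'
    invoked_function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:my-function'
    memory_limit_in_mb = 128
    aws_request_id = '8f507cfc-xmpl-4697-b07a-ac58fc914c95'
    log_group_name = '/aws/lambda/my-function'
    log_stream_name = '2025/08/31/[$LATEST]3893xmpl'

    def __init__(self, timeout_seconds=3):
        # Track a deadline so get_remaining_time_in_millis counts down
        self._deadline = time.monotonic() + timeout_seconds

    def get_remaining_time_in_millis(self):
        return max(0, int((self._deadline - time.monotonic()) * 1000))
```

You can then invoke your handler locally with `lambda_handler(event, FakeContext())`.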

In addition to the options listed above, you can use the AWS X-Ray SDK to [instrument Python code in AWS Lambda](python-tracing.md), identify critical code paths, trace their performance, and capture the data for analysis.

# Log and monitor Python Lambda functions
<a name="python-logging"></a>

AWS Lambda automatically monitors Lambda functions and sends log entries to Amazon CloudWatch. Your Lambda function comes with a CloudWatch Logs log group and a log stream for each instance of your function. The Lambda runtime environment sends details about each invocation and other output from your function's code to the log stream. For more information about CloudWatch Logs, see [Sending Lambda function logs to CloudWatch Logs](monitoring-cloudwatchlogs.md).

To output logs from your function code, you can use the built-in [logging](https://docs.python.org/3/library/logging.html) module. For more detailed entries, you can use any logging library that writes to `stdout` or `stderr`.

## Printing to the log
<a name="python-logging-output"></a>

To send basic output to the logs, you can use a `print` method in your function. The following example logs the values of the CloudWatch Logs log group and stream, and the event object.

Note that if your function outputs logs using Python `print` statements, Lambda can only send log outputs to CloudWatch Logs in plain text format. To capture logs in structured JSON, you need to use a supported logging library. See [Using Lambda advanced logging controls with Python](#python-logging-advanced) for more information.

**Example lambda_function.py**  

```
import os
def lambda_handler(event, context):
    print('## ENVIRONMENT VARIABLES')
    print(os.environ['AWS_LAMBDA_LOG_GROUP_NAME'])
    print(os.environ['AWS_LAMBDA_LOG_STREAM_NAME'])
    print('## EVENT')
    print(event)
```

**Example log output**  

```
START RequestId: 8f507cfc-xmpl-4697-b07a-ac58fc914c95 Version: $LATEST
## ENVIRONMENT VARIABLES
/aws/lambda/my-function
2025/08/31/[$LATEST]3893xmpl7fac4485b47bb75b671a283c
## EVENT
{'key': 'value'}
END RequestId: 8f507cfc-xmpl-4697-b07a-ac58fc914c95
REPORT RequestId: 8f507cfc-xmpl-4697-b07a-ac58fc914c95  Duration: 15.74 ms  Billed Duration: 147 ms Memory Size: 128 MB Max Memory Used: 56 MB  Init Duration: 130.49 ms
XRAY TraceId: 1-5e34a614-10bdxmplf1fb44f07bc535a1   SegmentId: 07f5xmpl2d1f6f85 Sampled: true
```

The Python runtime logs the `START`, `END`, and `REPORT` lines for each invocation. The `REPORT` line includes the following data:

**REPORT line data fields**
+ **RequestId** – The unique request ID for the invocation.
+ **Duration** – The amount of time that your function's handler method spent processing the event.
+ **Billed Duration** – The amount of time billed for the invocation.
+ **Memory Size** – The amount of memory allocated to the function.
+ **Max Memory Used** – The amount of memory used by the function. When invocations share an execution environment, Lambda reports the maximum memory used across all invocations. This behavior might result in a higher than expected reported value.
+ **Init Duration** – For the first request served, the amount of time it took the runtime to load the function and run code outside of the handler method.
+ **XRAY TraceId** – For traced requests, the [AWS X-Ray trace ID](services-xray.md).
+ **SegmentId** – For traced requests, the X-Ray segment ID.
+ **Sampled** – For traced requests, the sampling result.
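If you process `REPORT` lines programmatically, for example in a log analysis script, you can pull out the key-value pairs with a regular expression. This is a minimal sketch that assumes the tab- and space-separated format shown in the example output above:

```python
import re

def parse_report_line(line):
    # Extract "Key: value [unit]" pairs from a Lambda REPORT log line.
    # Assumes the format shown in the example log output above; the first
    # captured key comes back as "REPORT RequestId".
    pairs = re.findall(r'([A-Za-z ]+): ([\w$./-]+)(?: (ms|MB))?', line)
    return {key.strip(): value for key, value, _unit in pairs}
```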

## Using a logging library
<a name="python-logging-lib"></a>

For more detailed logs, use the [logging](https://docs.python.org/3/library/logging.html) module in the standard library, or any third party logging library that writes to `stdout` or `stderr`.

For supported Python runtimes, you can choose whether logs created using the standard `logging` module are captured in plain text or JSON. To learn more, see [Using Lambda advanced logging controls with Python](#python-logging-advanced).

Currently, the default log format for all Python runtimes is plain text. The following example shows how log outputs created using the standard `logging` module are captured in plain text in CloudWatch Logs.

```
import os
import logging
logger = logging.getLogger()
logger.setLevel("INFO")
  
def lambda_handler(event, context):
    logger.info('## ENVIRONMENT VARIABLES')
    logger.info(os.environ['AWS_LAMBDA_LOG_GROUP_NAME'])
    logger.info(os.environ['AWS_LAMBDA_LOG_STREAM_NAME'])
    logger.info('## EVENT')
    logger.info(event)
```

The output from `logger` includes the log level, timestamp, and request ID.

```
START RequestId: 1c8df7d3-xmpl-46da-9778-518e6eca8125 Version: $LATEST
[INFO]  2025-08-31T22:12:58.534Z    1c8df7d3-xmpl-46da-9778-518e6eca8125    ## ENVIRONMENT VARIABLES
[INFO]  2025-08-31T22:12:58.534Z    1c8df7d3-xmpl-46da-9778-518e6eca8125    /aws/lambda/my-function
[INFO]  2025-08-31T22:12:58.534Z    1c8df7d3-xmpl-46da-9778-518e6eca8125    2025/01/31/[$LATEST]1bbe51xmplb34a2788dbaa7433b0aa4d
[INFO]  2025-08-31T22:12:58.535Z    1c8df7d3-xmpl-46da-9778-518e6eca8125    ## EVENT
[INFO]  2025-08-31T22:12:58.535Z    1c8df7d3-xmpl-46da-9778-518e6eca8125    {'key': 'value'}
END RequestId: 1c8df7d3-xmpl-46da-9778-518e6eca8125
REPORT RequestId: 1c8df7d3-xmpl-46da-9778-518e6eca8125  Duration: 2.75 ms   Billed Duration: 117 ms Memory Size: 128 MB Max Memory Used: 56 MB  Init Duration: 113.51 ms
XRAY TraceId: 1-5e34a66a-474xmpl7c2534a87870b4370   SegmentId: 073cxmpl3e442861 Sampled: true
```

**Note**  
When your function's log format is set to plain text, the default log-level setting for Python runtimes is WARN. This means that Lambda only sends log outputs of level WARN and above to CloudWatch Logs. To change the default log level, use the Python `logging` `setLevel()` method as shown in this example code. If you set your function's log format to JSON, we recommend that you configure your function's log level using Lambda advanced logging controls rather than by setting the log level in code. To learn more, see [Using log-level filtering with Python](#python-logging-levels).

## Using Lambda advanced logging controls with Python
<a name="python-logging-advanced"></a>

To give you more control over how your functions’ logs are captured, processed, and consumed, you can configure the following logging options for supported Lambda Python runtimes:
+ **Log format** - select between plain text and structured JSON format for your function’s logs
+ **Log level** - for logs in JSON format, choose the detail level of the logs Lambda sends to Amazon CloudWatch, such as ERROR, DEBUG, or INFO
+ **Log group** - choose the CloudWatch log group your function sends logs to

For more information about these logging options, and instructions on how to configure your function to use them, see [Configuring advanced logging controls for Lambda functions](monitoring-logs.md#monitoring-cloudwatchlogs-advanced).

To learn more about using the log format and log level options with your Python Lambda functions, see the guidance in the following sections.

### Using structured JSON logs with Python
<a name="python-logging-JSON"></a>

If you select JSON for your function’s log format, Lambda will send logs output by the Python standard logging library to CloudWatch as structured JSON. Each JSON log object contains at least four key value pairs with the following keys:
+ `"timestamp"` - the time the log message was generated
+ `"level"` - the log level assigned to the message
+ `"message"` - the contents of the log message
+ `"requestId"` - the unique request ID for the function invocation

The Python `logging` library can also add extra key value pairs such as `"logger"` to this JSON object.

The examples in the following sections show how log outputs generated using the Python `logging` library are captured in CloudWatch Logs when you configure your function's log format as JSON.

Note that if you use the `print` method to produce basic log outputs as described in [Printing to the log](#python-logging-output), Lambda will capture these outputs as plain text, even if you configure your function’s logging format as JSON.

#### Standard JSON log outputs using Python logging library
<a name="python-logging-standard"></a>

The following example code snippet and log output show how standard log outputs generated using the Python `logging` library are captured in CloudWatch Logs when your function's log format is set to JSON.

**Example Python logging code**  

```
import logging  
logger = logging.getLogger()

def lambda_handler(event, context):
    logger.info("Inside the handler function")
```

**Example JSON log record**  

```
{
    "timestamp":"2025-10-27T19:17:45.586Z",
    "level":"INFO",
    "message":"Inside the handler function",
    "logger": "root",
    "requestId":"79b4f56e-95b1-4643-9700-2807f4e68189"
}
```
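To approximate this structure when running code outside Lambda, for example in local tests, you can attach a custom formatter to the standard `logging` module. This is only an illustrative sketch; Lambda's JSON capture is built into the runtime and also adds keys such as `"requestId"`:

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    # Sketch of a formatter that mimics the JSON log record shape shown
    # above. Not Lambda's actual implementation.
    converter = time.gmtime  # emit UTC timestamps

    def format(self, record):
        return json.dumps({
            'timestamp': self.formatTime(record, '%Y-%m-%dT%H:%M:%S') + 'Z',
            'level': record.levelname,
            'message': record.getMessage(),
            'logger': record.name,
        })

# Wire the formatter to a logger and emit one structured line
logger = logging.getLogger('example')
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('Inside the handler function')
```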

#### Logging extra parameters in JSON
<a name="python-logging-extra"></a>

When your function's log format is set to JSON, you can also log additional parameters with the standard Python `logging` library by using the `extra` keyword to pass a Python dictionary to the log output.

**Example Python logging code**  

```
import logging

def lambda_handler(event, context):
    logging.info(
        "extra parameters example", 
        extra={"a":"b", "b": [3]},
    )
```

**Example JSON log record**  

```
{
  "timestamp": "2025-11-02T15:26:28Z",
  "level": "INFO",
  "message": "extra parameters example",
  "logger": "root",
  "requestId": "3dbd5759-65f6-45f8-8d7d-5bdc79a3bd01",
  "a": "b",
  "b": [
    3
  ]
}
```

#### Logging exceptions in JSON
<a name="python-logging-exception"></a>

The following code snippet shows how Python exceptions are captured in your function's log output when you configure the log format as JSON. Note that log outputs generated using `logging.exception` are assigned the log level ERROR.

**Example Python logging code**  

```
import logging

def lambda_handler(event, context):
    try:
        raise Exception("exception")
    except:
        logging.exception("msg")
```

**Example JSON log record**  

```
{
  "timestamp": "2025-11-02T16:18:57Z",
  "level": "ERROR",
  "message": "msg",
  "logger": "root",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 15, in lambda_handler\n    raise Exception(\"exception\")\n"
  ],
  "errorType": "Exception",
  "errorMessage": "exception",
  "requestId": "3f9d155c-0f09-46b7-bdf1-e91dab220855",
  "location": "/var/task/lambda_function.py:lambda_handler:17"
}
```

#### JSON structured logs with other logging tools
<a name="python-logging-thirdparty"></a>

If your code already uses another logging library, such as Powertools for AWS Lambda, to produce JSON structured logs, you don’t need to make any changes. AWS Lambda doesn’t double-encode any logs that are already JSON encoded. Even if you configure your function to use the JSON log format, your logging outputs appear in CloudWatch in the JSON structure you define.

The following example shows how log outputs generated using the Powertools for AWS Lambda package are captured in CloudWatch Logs. The format of this log output is the same whether your function’s logging configuration is set to JSON or TEXT. For more information about using Powertools for AWS Lambda, see [Using Powertools for AWS Lambda (Python) and AWS SAM for structured logging](#python-logging-sam) and [Using Powertools for AWS Lambda (Python) and AWS CDK for structured logging](#python-logging-powertools-cdk).

**Example Python logging code snippet (using Powertools for AWS Lambda)**  

```
from aws_lambda_powertools import Logger

logger = Logger()

def lambda_handler(event, context):
    logger.info("Inside the handler function")
```

**Example JSON log record (using Powertools for AWS Lambda)**  

```
{ 
    "level": "INFO", 
    "location": "lambda_handler:7", 
    "message": "Inside the handler function", 
    "timestamp": "2025-10-31 22:38:21,010+0000", 
    "service": "service_undefined", 
    "xray_trace_id": "1-654181dc-65c15d6b0fecbdd1531ecb30" 
}
```

### Using log-level filtering with Python
<a name="python-logging-levels"></a>

By configuring log-level filtering, you can choose to send only logs at or above a certain logging level to CloudWatch Logs. To learn how to configure log-level filtering for your function, see [Log-level filtering](monitoring-cloudwatchlogs-log-level.md).

For AWS Lambda to filter your application logs according to their log level, your function must use JSON formatted logs. You can achieve this in two ways:
+ Create log outputs using the standard Python `logging` library and configure your function to use JSON log formatting. AWS Lambda then filters your log outputs using the “level” key value pair in the JSON object described in [Using structured JSON logs with Python](#python-logging-JSON). To learn how to configure your function’s log format, see [Configuring advanced logging controls for Lambda functions](monitoring-logs.md#monitoring-cloudwatchlogs-advanced).
+ Use another logging library or method to create JSON structured logs in your code that include a “level” key value pair defining the level of the log output. For example, you can use Powertools for AWS Lambda to generate JSON structured log outputs from your code.

  You can also use a print statement to output a JSON object containing a log level identifier. The following print statement produces a JSON formatted output where the log level is set to INFO. AWS Lambda will send the JSON object to CloudWatch Logs if your function’s logging level is set to INFO, DEBUG, or TRACE.

  ```
  print('{"msg":"My log message", "level":"info"}')
  ```

For Lambda to filter your function's logs, you must also include a `"timestamp"` key value pair in your JSON log output. The time must be specified in valid [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) timestamp format. If you don't supply a valid timestamp, Lambda will assign the log the level INFO and add a timestamp for you.
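A small helper can attach both required keys to `print`-based JSON logs. The helper name and the extra fields below are illustrative, not part of any AWS API:

```python
import json
from datetime import datetime, timezone

def log_json(level, message, **extra):
    # Emit one structured log line containing the "level" and RFC 3339
    # "timestamp" keys that Lambda log-level filtering relies on.
    record = {
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'level': level,
        'message': message,
        **extra,
    }
    print(json.dumps(record))

log_json('INFO', 'My log message', order_id='ABC123')
```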

## Viewing logs in Lambda console
<a name="python-logging-console"></a>

You can use the Lambda console to view log output after you invoke a Lambda function.

When you test your code from the embedded **Code** editor, you'll find logs in the **execution results**. When you use the console test feature to invoke a function, you'll find **Log output** in the **Details** section.

## Viewing logs in CloudWatch console
<a name="python-logging-cwconsole"></a>

You can use the Amazon CloudWatch console to view logs for all Lambda function invocations.

**To view logs on the CloudWatch console**

1. Open the [Log groups page](https://console.aws.amazon.com/cloudwatch/home?#logs:) on the CloudWatch console.

1. Choose the log group for your function (**/aws/lambda/*your-function-name***).

1. Choose a log stream.

Each log stream corresponds to an [instance of your function](lambda-runtime-environment.md). A log stream appears when you update your Lambda function, and when additional instances are created to handle concurrent invocations. To find logs for a specific invocation, we recommend instrumenting your function with AWS X-Ray. X-Ray records details about the request and the log stream in the trace.

## Viewing logs with AWS CLI
<a name="python-logging-cli"></a>

The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in your command line shell. To complete the steps in this section, you must have the [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

You can use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) to retrieve logs for an invocation using the `--log-type` command option. The response contains a `LogResult` field that contains up to 4 KB of base64-encoded logs from the invocation.

**Example retrieve the logs**  
The following example shows how to retrieve the base64-encoded logs from the `LogResult` field for a function named `my-function`.  

```
aws lambda invoke --function-name my-function out --log-type Tail
```
You should see the following output:  

```
{
    "StatusCode": 200,
    "LogResult": "U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOC1mMTU0LTExZTgtOGNkYS0yOTc0YzVlNGZiMjEgVmVyc2lvb...",
    "ExecutedVersion": "$LATEST"
}
```

**Example decode the logs**  
In the same command prompt, use the `base64` utility to decode the logs. The following example shows how to retrieve base64-encoded logs for `my-function`.  

```
aws lambda invoke --function-name my-function out --log-type Tail \
--query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 --decode
```
The **cli-binary-format** option is required if you're using AWS CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [AWS CLI supported global command line options](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *AWS Command Line Interface User Guide for Version 2*.  
You should see the following output:  

```
START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST
"AWS_SESSION_TOKEN": "AgoJb3JpZ2luX2VjELj...", "_X_AMZN_TRACE_ID": "Root=1-5d02e5ca-f5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0"",ask/lib:/opt/lib",
END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8
REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8  Duration: 79.67 ms      Billed Duration: 80 ms         Memory Size: 128 MB     Max Memory Used: 73 MB
```
The `base64` utility is available on Linux, macOS, and [Ubuntu on Windows](https://docs.microsoft.com/en-us/windows/wsl/install-win10). macOS users may need to use `base64 -D`.
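If the `base64` utility isn't available, you can decode the `LogResult` value with Python's standard library instead. The encoded string below is a short illustrative value, not real Lambda output:

```python
import base64

# Decode a base64-encoded LogResult value. The string below is a short
# illustrative example, not actual Lambda output.
log_result = 'U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOA=='
print(base64.b64decode(log_result).decode('utf-8'))
# Prints: START RequestId: 87d044b8
```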

**Example get-logs.sh script**  
In the same command prompt, use the following script to download the last five log events. The script uses `sed` to remove quotes from the output file, and sleeps for 15 seconds to allow time for the logs to become available. The output includes the response from Lambda and the output from the `get-log-events` command.   
Copy the contents of the following code sample and save it in your Lambda project directory as `get-logs.sh`.  
The **cli-binary-format** option is required if you're using AWS CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [AWS CLI supported global command line options](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *AWS Command Line Interface User Guide for Version 2*.  

```
#!/bin/bash
aws lambda invoke --function-name my-function --cli-binary-format raw-in-base64-out --payload '{"key": "value"}' out
sed -i'' -e 's/"//g' out
sleep 15
aws logs get-log-events --log-group-name /aws/lambda/my-function --log-stream-name stream1 --limit 5
```

**Example macOS and Linux (only)**  
In the same command prompt, macOS and Linux users may need to run the following command to ensure the script is executable.  

```
chmod 755 get-logs.sh
```

**Example retrieve the last five log events**  
In the same command prompt, run the following script to get the last five log events.  

```
./get-logs.sh
```
You should see the following output:  

```
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{
    "events": [
        {
            "timestamp": 1559763003171,
            "message": "START RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf Version: $LATEST\n",
            "ingestionTime": 1559763003309
        },
        {
            "timestamp": 1559763003173,
            "message": "2019-06-05T19:30:03.173Z\t4ce9340a-b765-490f-ad8a-02ab3415e2bf\tINFO\tENVIRONMENT VARIABLES\r{\r  \"AWS_LAMBDA_FUNCTION_VERSION\": \"$LATEST\",\r ...",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003173,
            "message": "2019-06-05T19:30:03.173Z\t4ce9340a-b765-490f-ad8a-02ab3415e2bf\tINFO\tEVENT\r{\r  \"key\": \"value\"\r}\n",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003218,
            "message": "END RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf\n",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003218,
            "message": "REPORT RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf\tDuration: 26.73 ms\tBilled Duration: 27 ms \tMemory Size: 128 MB\tMax Memory Used: 75 MB\t\n",
            "ingestionTime": 1559763018353
        }
    ],
    "nextForwardToken": "f/34783877304859518393868359594929986069206639495374241795",
    "nextBackwardToken": "b/34783877303811383369537420289090800615709599058929582080"
}
```

## Deleting logs
<a name="python-logging-delete"></a>

Log groups aren't deleted automatically when you delete a function. To avoid storing logs indefinitely, delete the log group, or [configure a retention period](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention) after which logs are deleted automatically.

## Using other logging tools and libraries
<a name="python-tools-libraries"></a>

[Powertools for AWS Lambda (Python)](https://docs.aws.amazon.com/powertools/python/) is a developer toolkit to implement Serverless best practices and increase developer velocity. The [Logger utility](https://docs.aws.amazon.com/powertools/python/latest/core/logger/) provides a Lambda optimized logger which includes additional information about function context across all your functions with output structured as JSON. Use this utility to do the following:
+ Capture key fields from the Lambda context and cold start, and structure logging output as JSON
+ Log Lambda invocation events when instructed (disabled by default)
+ Print all the logs for only a percentage of invocations via log sampling (disabled by default)
+ Append additional keys to structured logs at any point in time
+ Use a custom log formatter (Bring Your Own Formatter) to output logs in a structure compatible with your organization’s Logging RFC

## Using Powertools for AWS Lambda (Python) and AWS SAM for structured logging
<a name="python-logging-sam"></a>

Follow the steps below to download, build, and deploy a sample Hello World Python application with integrated [Powertools for Python](https://docs.aws.amazon.com/powertools/python/latest/) modules using the AWS SAM. This application implements a basic API backend and uses Powertools for emitting logs, metrics, and traces. It consists of an Amazon API Gateway endpoint and a Lambda function. When you send a GET request to the API Gateway endpoint, the Lambda function invokes, sends logs and metrics using Embedded Metric Format to CloudWatch, and sends traces to AWS X-Ray. The function returns a `hello world` message.

**Prerequisites**

To complete the steps in this section, you must have the following:
+ Python 3.9
+ [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [AWS SAM CLI version 1.75 or later](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html). If you have an older version of the AWS SAM CLI, see [Upgrading the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/manage-sam-cli-versions.html#manage-sam-cli-versions-upgrade).

**Deploy a sample AWS SAM application**

1. Initialize the application using the Hello World Python template.

   ```
   sam init --app-template hello-world-powertools-python --name sam-app --package-type Zip --runtime python3.9 --no-tracing
   ```

1. Build the app.

   ```
   cd sam-app && sam build
   ```

1. Deploy the app.

   ```
   sam deploy --guided
   ```

1. Follow the on-screen prompts. To accept the default options provided in the interactive experience, press `Enter`.
**Note**  
For **HelloWorldFunction may not have authorization defined, Is this okay?**, make sure to enter `y`.

1. Get the URL of the deployed application:

   ```
   aws cloudformation describe-stacks --stack-name sam-app --query 'Stacks[0].Outputs[?OutputKey==`HelloWorldApi`].OutputValue' --output text
   ```

1. Invoke the API endpoint:

   ```
   curl <URL_FROM_PREVIOUS_STEP>
   ```

   If successful, you'll see this response:

   ```
   {"message":"hello world"}
   ```

1. To get the logs for the function, run [sam logs](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-logs.html). For more information, see [Working with logs](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-logging.html) in the *AWS Serverless Application Model Developer Guide*.

   ```
   sam logs --stack-name sam-app
   ```

   The log output looks like this:

   ```
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:50.371000 INIT_START Runtime Version: python:3.9.v16    Runtime Version ARN: arn:aws:lambda:us-east-1::runtime:07a48df201798d627f2b950f03bb227aab4a655a1d019c3296406f95937e2525
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.112000 START RequestId: d455cfc4-7704-46df-901b-2a5cce9405be Version: $LATEST
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.114000 {
     "level": "INFO",
     "location": "hello:23",
     "message": "Hello world API - HTTP 200",
     "timestamp": "2025-02-03 14:59:51,113+0000",
     "service": "PowertoolsHelloWorld",
     "cold_start": true,
     "function_name": "sam-app-HelloWorldFunction-YBg8yfYtOc9j",
     "function_memory_size": "128",
     "function_arn": "arn:aws:lambda:us-east-1:111122223333:function:sam-app-HelloWorldFunction-YBg8yfYtOc9j",
     "function_request_id": "d455cfc4-7704-46df-901b-2a5cce9405be",
     "correlation_id": "e73f8aef-5e07-436e-a30b-63e4b23f0047",
     "xray_trace_id": "1-63dd2166-434a12c22e1307ff2114f299"
   }
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.126000 {
     "_aws": {
       "Timestamp": 1675436391126,
       "CloudWatchMetrics": [
         {
           "Namespace": "Powertools",
           "Dimensions": [
             [
               "function_name",
               "service"
             ]
           ],
           "Metrics": [
             {
               "Name": "ColdStart",
               "Unit": "Count"
             }
           ]
         }
       ]
     },
     "function_name": "sam-app-HelloWorldFunction-YBg8yfYtOc9j",
     "service": "PowertoolsHelloWorld",
     "ColdStart": [
       1.0
     ]
   }
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.126000 {
     "_aws": {
       "Timestamp": 1675436391126,
       "CloudWatchMetrics": [
         {
           "Namespace": "Powertools",
           "Dimensions": [
             [
               "service"
             ]
           ],
           "Metrics": [
             {
               "Name": "HelloWorldInvocations",
               "Unit": "Count"
             }
           ]
         }
       ]
     },
     "service": "PowertoolsHelloWorld",
     "HelloWorldInvocations": [
       1.0
     ]
   }
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.128000 END RequestId: d455cfc4-7704-46df-901b-2a5cce9405be
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.128000 REPORT RequestId: d455cfc4-7704-46df-901b-2a5cce9405be    Duration: 16.33 ms    Billed Duration: 756 ms    Memory Size: 128 MB    Max Memory Used: 64 MB    Init Duration: 739.46 ms    
   XRAY TraceId: 1-63dd2166-434a12c22e1307ff2114f299    SegmentId: 3c5d18d735a1ced0    Sampled: true
   ```

1. This is a public API endpoint that is accessible over the internet. We recommend that you delete the endpoint after testing.

   ```
   sam delete
   ```
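
The JSON objects in the log output above are in CloudWatch Embedded Metric Format (EMF): a `_aws` metadata block declares the namespace, dimensions, and metric names, and the metric values appear as top-level keys. Powertools emits these records for you; purely as an illustration of the format (not of the Powertools implementation), a minimal hand-rolled emitter might look like this:

```python
import json
import time

def emf_record(namespace, service, metric_name, value, unit="Count"):
    # The "_aws" block tells CloudWatch which top-level keys are metrics
    # and which keys form the dimension sets.
    return json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [["service"]],
                "Metrics": [{"Name": metric_name, "Unit": unit}],
            }],
        },
        "service": service,
        metric_name: value,
    })

# Printing a record to stdout is all a Lambda function needs to do for
# CloudWatch to extract the metric from the log stream.
print(emf_record("Powertools", "PowertoolsHelloWorld", "HelloWorldInvocations", 1.0))
```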

### Managing log retention
<a name="python-log-retention"></a>

Log groups aren't deleted automatically when you delete a function. To avoid storing logs indefinitely, delete the log group, or configure a retention period after which CloudWatch automatically deletes the logs. To set up log retention, add the following to your AWS SAM template:

```
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Omitting other properties

  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${HelloWorldFunction}"
      RetentionInDays: 7
```

## Using Powertools for AWS Lambda (Python) and AWS CDK for structured logging
<a name="python-logging-powertools-cdk"></a>

Follow the steps below to download, build, and deploy a sample Hello World Python application with integrated [Powertools for AWS Lambda (Python)](https://docs.aws.amazon.com/powertools/python/latest/) modules using the AWS CDK. This application implements a basic API backend and uses Powertools for emitting logs, metrics, and traces. It consists of an Amazon API Gateway endpoint and a Lambda function. When you send a GET request to the API Gateway endpoint, the Lambda function runs, sends logs and metrics in Embedded Metric Format to CloudWatch, and sends traces to AWS X-Ray. The function returns a hello world message.

**Prerequisites**

To complete the steps in this section, you must have the following:
+ Python 3.9
+ [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [AWS CDK version 2](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites)
+ [AWS SAM CLI version 1.75 or later](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html). If you have an older version of the AWS SAM CLI, see [Upgrading the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/manage-sam-cli-versions.html#manage-sam-cli-versions-upgrade).

**Deploy a sample AWS CDK application**

1. Create a project directory for your new application.

   ```
   mkdir hello-world
   cd hello-world
   ```

1. Initialize the app.

   ```
   cdk init app --language python
   ```

1. Install the Python dependencies.

   ```
   pip install -r requirements.txt
   ```

1. Create a directory **lambda_function** under the root folder.

   ```
   mkdir lambda_function
   cd lambda_function
   ```

1. Create a file **app.py** and add the following code to the file. This is the code for the Lambda function.

   ```
   from aws_lambda_powertools.event_handler import APIGatewayRestResolver
   from aws_lambda_powertools.utilities.typing import LambdaContext
   from aws_lambda_powertools.logging import correlation_paths
   from aws_lambda_powertools import Logger
   from aws_lambda_powertools import Tracer
   from aws_lambda_powertools import Metrics
   from aws_lambda_powertools.metrics import MetricUnit
   
   app = APIGatewayRestResolver()
   tracer = Tracer()
   logger = Logger()
   metrics = Metrics(namespace="PowertoolsSample")
   
   @app.get("/hello")
   @tracer.capture_method
   def hello():
       # adding custom metrics
        # See: https://docs.aws.amazon.com/powertools/python/latest/core/metrics/
       metrics.add_metric(name="HelloWorldInvocations", unit=MetricUnit.Count, value=1)
   
       # structured log
        # See: https://docs.aws.amazon.com/powertools/python/latest/core/logger/
       logger.info("Hello world API - HTTP 200")
       return {"message": "hello world"}
   
   # Enrich logging with contextual information from Lambda
   @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
   # Adding tracer
    # See: https://docs.aws.amazon.com/powertools/python/latest/core/tracer/
   @tracer.capture_lambda_handler
   # ensures metrics are flushed upon request completion/failure and capturing ColdStart metric
   @metrics.log_metrics(capture_cold_start_metric=True)
   def lambda_handler(event: dict, context: LambdaContext) -> dict:
       return app.resolve(event, context)
   ```

1. Open the **hello_world** directory. You should see a file called **hello_world_stack.py**.

   ```
   cd ..
   cd hello_world
   ```

1. Open **hello_world_stack.py** and add the following code to the file. This contains the [Lambda construct](https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_lambda.html), which creates the Lambda function and configures environment variables for Powertools, and the [API Gateway construct](https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_apigateway.html), which creates the REST API.

   ```
   from aws_cdk import (
       Stack,
       aws_apigateway as apigwv1,
       aws_lambda as lambda_,
       CfnOutput,
       Duration
   )
   from constructs import Construct
   
   class HelloWorldStack(Stack):
   
       def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
           super().__init__(scope, construct_id, **kwargs)
   
           # Powertools Lambda Layer
            powertools_layer = lambda_.LayerVersion.from_layer_version_arn(
                self,
                id="lambda-powertools",
                # Using Powertools for AWS Lambda (Python) via a public Lambda layer
                # For available layer versions, see: https://docs.aws.amazon.com/powertools/python/latest/#lambda-layer
                layer_version_arn=f"arn:aws:lambda:{self.region}:017000801446:layer:AWSLambdaPowertoolsPythonV2:21"
            )
   
           function = lambda_.Function(self,
               'sample-app-lambda',
               runtime=lambda_.Runtime.PYTHON_3_9,
               layers=[powertools_layer],
               code = lambda_.Code.from_asset("./lambda_function/"),
               handler="app.lambda_handler",
               memory_size=128,
               timeout=Duration.seconds(3),
               architecture=lambda_.Architecture.X86_64,
               environment={
                   "POWERTOOLS_SERVICE_NAME": "PowertoolsHelloWorld",
                   "POWERTOOLS_METRICS_NAMESPACE": "PowertoolsSample",
                   "LOG_LEVEL": "INFO"
               }
           )
   
           apigw = apigwv1.RestApi(self, "PowertoolsAPI", deploy_options=apigwv1.StageOptions(stage_name="dev"))
   
           hello_api = apigw.root.add_resource("hello")
           hello_api.add_method("GET", apigwv1.LambdaIntegration(function, proxy=True))
   
           CfnOutput(self, "apiUrl", value=f"{apigw.url}hello")
   ```

1. Deploy your application.

   ```
   cd ..
   cdk deploy
   ```

1. Get the URL of the deployed application:

   ```
   aws cloudformation describe-stacks --stack-name HelloWorldStack --query 'Stacks[0].Outputs[?OutputKey==`apiUrl`].OutputValue' --output text
   ```

1. Invoke the API endpoint:

   ```
   curl -X GET <URL_FROM_PREVIOUS_STEP>
   ```

   If successful, you'll see this response:

   ```
   {"message":"hello world"}
   ```

1. To get the logs for the function, run [sam logs](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-logs.html). For more information, see [Working with logs](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-logging.html) in the *AWS Serverless Application Model Developer Guide*.

   ```
   sam logs --stack-name HelloWorldStack
   ```

   The log output looks like this:

   ```
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:50.371000 INIT_START Runtime Version: python:3.9.v16    Runtime Version ARN: arn:aws:lambda:us-east-1::runtime:07a48df201798d627f2b950f03bb227aab4a655a1d019c3296406f95937e2525
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.112000 START RequestId: d455cfc4-7704-46df-901b-2a5cce9405be Version: $LATEST
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.114000 {
     "level": "INFO",
     "location": "hello:23",
       "message": "Hello world API - HTTP 200",
     "timestamp": "2025-02-03 14:59:51,113+0000",
     "service": "PowertoolsHelloWorld",
     "cold_start": true,
     "function_name": "sam-app-HelloWorldFunction-YBg8yfYtOc9j",
     "function_memory_size": "128",
     "function_arn": "arn:aws:lambda:us-east-1:111122223333:function:sam-app-HelloWorldFunction-YBg8yfYtOc9j",
     "function_request_id": "d455cfc4-7704-46df-901b-2a5cce9405be",
     "correlation_id": "e73f8aef-5e07-436e-a30b-63e4b23f0047",
     "xray_trace_id": "1-63dd2166-434a12c22e1307ff2114f299"
   }
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.126000 {
     "_aws": {
       "Timestamp": 1675436391126,
       "CloudWatchMetrics": [
         {
           "Namespace": "Powertools",
           "Dimensions": [
             [
               "function_name",
               "service"
             ]
           ],
           "Metrics": [
             {
               "Name": "ColdStart",
               "Unit": "Count"
             }
           ]
         }
       ]
     },
     "function_name": "sam-app-HelloWorldFunction-YBg8yfYtOc9j",
     "service": "PowertoolsHelloWorld",
     "ColdStart": [
       1.0
     ]
   }
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.126000 {
     "_aws": {
       "Timestamp": 1675436391126,
       "CloudWatchMetrics": [
         {
           "Namespace": "Powertools",
           "Dimensions": [
             [
               "service"
             ]
           ],
           "Metrics": [
             {
               "Name": "HelloWorldInvocations",
               "Unit": "Count"
             }
           ]
         }
       ]
     },
     "service": "PowertoolsHelloWorld",
     "HelloWorldInvocations": [
       1.0
     ]
   }
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.128000 END RequestId: d455cfc4-7704-46df-901b-2a5cce9405be
   2025/02/03/[$LATEST]ea9a64ec87294bf6bbc9026c05a01e04 2025-02-03T14:59:51.128000 REPORT RequestId: d455cfc4-7704-46df-901b-2a5cce9405be    Duration: 16.33 ms    Billed Duration: 756 ms    Memory Size: 128 MB    Max Memory Used: 64 MB    Init Duration: 739.46 ms    
   XRAY TraceId: 1-63dd2166-434a12c22e1307ff2114f299    SegmentId: 3c5d18d735a1ced0    Sampled: true
   ```

1. This is a public API endpoint that is accessible over the internet. We recommend that you delete the endpoint after testing.

   ```
   cdk destroy
   ```

# AWS Lambda function testing in Python
<a name="python-testing"></a>

**Note**  
See the [Testing functions](testing-guide.md) chapter for a complete introduction to techniques and best practices for testing serverless solutions. 

 Testing serverless functions uses traditional test types and techniques, but you must also consider testing serverless applications as a whole. Cloud-based tests will provide the **most accurate** measure of quality of both your functions and serverless applications. 

 A serverless application architecture includes managed services that provide critical application functionality through API calls. For this reason, your development cycle should include automated tests that verify functionality when your function and services interact. 

 If you do not create cloud-based tests, you could encounter issues due to differences between your local environment and the deployed environment. Your continuous integration process should run tests against a suite of resources provisioned in the cloud before promoting your code to the next deployment environment, such as QA, Staging, or Production. 

 Continue reading this short guide to learn about testing strategies for serverless applications, or visit the [Serverless Test Samples repository](https://github.com/aws-samples/serverless-test-samples) to dive in with practical examples, specific to your chosen language and runtime. 

 ![\[illustration showing the relationship between types of tests\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-type-illustration2.png) 

 For serverless testing, you will still write *unit*, *integration*, and *end-to-end* tests. 
+ **Unit tests** - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge given a particular item and destination.
+ **Integration tests** - Tests involving two or more components or services that interact, typically in a cloud environment. For example, verifying a function processes events from a queue.
+ **End-to-end tests** - Tests that verify behavior across an entire application. For example, ensuring infrastructure is set up correctly and that events flow between services as expected to record a customer's order.
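
As an illustration of the first category, the delivery-charge example above can be covered by a plain unit test with no AWS dependencies at all. The function and its rates here are hypothetical:

```python
# Isolated business logic: no Lambda runtime or AWS services involved.
def delivery_charge(weight_kg: float, destination: str) -> float:
    # Hypothetical pricing: flat base fee plus a per-kilogram rate
    base = 5.00 if destination == "domestic" else 15.00
    return round(base + 0.50 * weight_kg, 2)

# A unit test exercises the logic directly; in a real project this would
# live in a test file and run under a test runner such as pytest.
def test_delivery_charge():
    assert delivery_charge(2.0, "domestic") == 6.00
    assert delivery_charge(2.0, "international") == 16.00

test_delivery_charge()
```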

## Testing your serverless applications
<a name="python-testing-techniques-for-serverless-applications"></a>

 You will generally use a mix of approaches to test your serverless application code, including testing in the cloud, testing with mocks, and occasionally testing with emulators. 

### Testing in the cloud
<a name="python-testing-in-the-cloud"></a>

 Testing in the cloud is valuable for all phases of testing, including unit tests, integration tests, and end-to-end tests. You run tests against code deployed in the cloud and interacting with cloud-based services. This approach provides the **most accurate** measure of quality of your code. 

 A convenient way to debug your Lambda function in the cloud is through the console with a test event. A *test event* is a JSON input to your function. If your function does not require input, the event can be an empty JSON document (`{}`). The console provides sample events for a variety of service integrations. After creating an event in the console, you can share it with your team to make testing easier and more consistent. 
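
For example, a pared-down test event for an API Gateway REST integration like the `/hello` route used in this chapter might look like the following. The console's sample events include many more fields:

```
{
  "httpMethod": "GET",
  "path": "/hello",
  "headers": {},
  "queryStringParameters": null,
  "requestContext": {
    "resourcePath": "/hello",
    "httpMethod": "GET"
  },
  "body": null,
  "isBase64Encoded": false
}
```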

**Note**  
[Testing a function in the console](testing-functions.md) is a quick way to get started, but automating your test cycles ensures application quality and development speed. 

### Testing tools
<a name="python-testing-tools"></a>

 Tools and techniques exist to accelerate development feedback loops. For example, [AWS SAM Accelerate](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/accelerate.html) and [AWS CDK watch mode](https://docs.aws.amazon.com/cdk/v2/guide/cli.html#cli-deploy-watch) both decrease the time required to update cloud environments. 

[Moto](https://pypi.org/project/moto/) is a Python library for mocking AWS services and resources. Its decorators intercept AWS API calls and simulate responses, so you can test your functions with little or no modification. 
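
Whether you use Moto or build a test double by hand with the standard library's `unittest.mock`, the pattern is the same: replace the AWS client with a mock, simulate its response, and assert on the calls. A minimal stdlib-only sketch, in which the handler, table name, and key shape are all hypothetical:

```python
from unittest import mock

def lambda_handler(event, context, client=None):
    # The client is injectable for tests; in production the default would
    # be a real boto3 DynamoDB client.
    item = client.get_item(TableName="orders", Key={"id": {"S": event["id"]}})
    return {"status": item["Item"]["status"]["S"]}

# Stand in for the DynamoDB client and simulate its response
fake = mock.Mock()
fake.get_item.return_value = {"Item": {"status": {"S": "SHIPPED"}}}

# Exercise the handler and verify both the result and the AWS call it made
assert lambda_handler({"id": "123"}, None, client=fake) == {"status": "SHIPPED"}
fake.get_item.assert_called_once_with(TableName="orders", Key={"id": {"S": "123"}})
```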

 The validation feature of [Powertools for AWS Lambda (Python)](https://docs.powertools.aws.dev/lambda-python/latest/utilities/validation/) provides decorators so that you can validate input events and output responses from your Python functions. 

 For more information, read the blog post [Unit Testing Lambda with Python and Mock AWS Services](https://aws.amazon.com/blogs/devops/unit-testing-aws-lambda-with-python-and-mock-aws-services/). 

 To reduce the latency involved with cloud deployment iterations, see [AWS Serverless Application Model (AWS SAM) Accelerate](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/using-sam-cli-sync.html) and [AWS Cloud Development Kit (AWS CDK) watch mode](https://docs.aws.amazon.com/cdk/v2/guide/cli.html#cli-deploy-watch). These tools monitor your infrastructure and code for changes, and react by automatically creating and deploying incremental updates into your cloud environment. 

 Examples that use these tools are available in the [Python Test Samples](https://github.com/aws-samples/serverless-test-samples/tree/main/python-test-samples) code repository. 

# Instrumenting Python code in AWS Lambda
<a name="python-tracing"></a>

Lambda integrates with AWS X-Ray to help you trace, debug, and optimize Lambda applications. You can use X-Ray to trace a request as it traverses resources in your application, which may include Lambda functions and other AWS services.

To send tracing data to X-Ray, you can use one of three SDK libraries:
+ [AWS Distro for OpenTelemetry (ADOT)](https://aws.amazon.com/otel) – A secure, production-ready, AWS-supported distribution of the OpenTelemetry (OTel) SDK.
+ [AWS X-Ray SDK for Python](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python.html) – An SDK for generating and sending trace data to X-Ray.
+ [Powertools for AWS Lambda (Python)](https://docs.aws.amazon.com/powertools/python/latest/) – A developer toolkit to implement Serverless best practices and increase developer velocity.

Each of these SDKs offers ways to send your telemetry data to the X-Ray service. You can then use X-Ray to view, filter, and gain insights into your application's performance metrics to identify issues and opportunities for optimization.

**Important**  
The X-Ray and Powertools for AWS Lambda SDKs are part of a tightly integrated instrumentation solution offered by AWS. The ADOT Lambda Layers are part of an industry-wide standard for tracing instrumentation that collect more data in general, but may not be suited for all use cases. You can implement end-to-end tracing in X-Ray using either solution. To learn more about choosing between them, see [Choosing between the AWS Distro for Open Telemetry and X-Ray SDKs](https://docs.aws.amazon.com/xray/latest/devguide/xray-instrumenting-your-app.html#xray-instrumenting-choosing).

**Topics**
+ [Using Powertools for AWS Lambda (Python) and AWS SAM for tracing](#python-tracing-sam)
+ [Using Powertools for AWS Lambda (Python) and the AWS CDK for tracing](#python-logging-cdk)
+ [Using ADOT to instrument your Python functions](#python-adot)
+ [Using the X-Ray SDK to instrument your Python functions](#python-xray-sdk)
+ [Activating tracing with the Lambda console](#python-tracing-console)
+ [Activating tracing with the Lambda API](#python-tracing-api)
+ [Activating tracing with CloudFormation](#python-tracing-cloudformation)
+ [Interpreting an X-Ray trace](#python-tracing-interpretation)
+ [Storing runtime dependencies in a layer (X-Ray SDK)](#python-tracing-layers)

## Using Powertools for AWS Lambda (Python) and AWS SAM for tracing
<a name="python-tracing-sam"></a>

Follow the steps below to download, build, and deploy a sample Hello World Python application with integrated [Powertools for AWS Lambda (Python)](https://docs.powertools.aws.dev/lambda-python) modules using AWS SAM. This application implements a basic API backend and uses Powertools for emitting logs, metrics, and traces. It consists of an Amazon API Gateway endpoint and a Lambda function. When you send a GET request to the API Gateway endpoint, the Lambda function runs, sends logs and metrics in Embedded Metric Format to CloudWatch, and sends traces to AWS X-Ray. The function returns a hello world message.

**Prerequisites**

To complete the steps in this section, you must have the following:
+ Python 3.11
+ [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [AWS SAM CLI version 1.75 or later](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html). If you have an older version of the AWS SAM CLI, see [Upgrading the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/manage-sam-cli-versions.html#manage-sam-cli-versions-upgrade).

**Deploy a sample AWS SAM application**

1. Initialize the application using the Hello World Python template.

   ```
   sam init --app-template hello-world-powertools-python --name sam-app --package-type Zip --runtime python3.11 --no-tracing
   ```

1. Build the app.

   ```
   cd sam-app && sam build
   ```

1. Deploy the app.

   ```
   sam deploy --guided
   ```

1. Follow the on-screen prompts. To accept the default options provided in the interactive experience, press `Enter`.
**Note**  
For **HelloWorldFunction may not have authorization defined, Is this okay?**, make sure to enter `y`.

1. Get the URL of the deployed application:

   ```
   aws cloudformation describe-stacks --stack-name sam-app --query 'Stacks[0].Outputs[?OutputKey==`HelloWorldApi`].OutputValue' --output text
   ```

1. Invoke the API endpoint:

   ```
   curl -X GET <URL_FROM_PREVIOUS_STEP>
   ```

   If successful, you'll see this response:

   ```
   {"message":"hello world"}
   ```

1. To get the traces for the function, run [sam traces](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-traces.html).

   ```
   sam traces
   ```

   The trace output looks like this:

   ```
   New XRay Service Graph
     Start time: 2023-02-03 14:59:50+00:00
     End time: 2023-02-03 14:59:50+00:00
     Reference Id: 0 - (Root) AWS::Lambda - sam-app-HelloWorldFunction-YBg8yfYtOc9j - Edges: [1]
      Summary_statistics:
        - total requests: 1
        - ok count(2XX): 1
        - error count(4XX): 0
        - fault count(5XX): 0
        - total response time: 0.924
     Reference Id: 1 - AWS::Lambda::Function - sam-app-HelloWorldFunction-YBg8yfYtOc9j - Edges: []
      Summary_statistics:
        - total requests: 1
        - ok count(2XX): 1
        - error count(4XX): 0
        - fault count(5XX): 0
        - total response time: 0.016
     Reference Id: 2 - client - sam-app-HelloWorldFunction-YBg8yfYtOc9j - Edges: [0]
      Summary_statistics:
        - total requests: 0
        - ok count(2XX): 0
        - error count(4XX): 0
        - fault count(5XX): 0
        - total response time: 0
   
   XRay Event [revision 1] at (2023-02-03T14:59:50.204000) with id (1-63dd2166-434a12c22e1307ff2114f299) and duration (0.924s)
    - 0.924s - sam-app-HelloWorldFunction-YBg8yfYtOc9j [HTTP: 200]
    - 0.016s - sam-app-HelloWorldFunction-YBg8yfYtOc9j
      - 0.739s - Initialization
      - 0.016s - Invocation
        - 0.013s - ## lambda_handler
          - 0.000s - ## app.hello
      - 0.000s - Overhead
   ```

1. This is a public API endpoint that is accessible over the internet. We recommend that you delete the endpoint after testing.

   ```
   sam delete
   ```

X-Ray doesn't trace all requests to your application. X-Ray applies a sampling algorithm to ensure that tracing is efficient, while still providing a representative sample of all requests. The sampling rate is 1 request per second and 5 percent of additional requests. You can't configure the X-Ray sampling rate for your functions.
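
The default rule can be pictured as a reservoir of one request per second plus a 5 percent coin flip on the remainder. The following sketch illustrates that rule only; it is not how the X-Ray service itself is implemented:

```python
import random

def make_default_sampler(rate=0.05, seed=None):
    """Illustrates X-Ray's default sampling rule: trace the first request
    in each one-second window, then `rate` (5 percent) of the rest."""
    rng = random.Random(seed)
    last_window = None

    def should_sample(timestamp):
        nonlocal last_window
        window = int(timestamp)
        if window != last_window:
            last_window = window
            return True  # reservoir: first request in this second
        return rng.random() < rate  # fixed-rate portion of additional requests

    return should_sample

sampler = make_default_sampler(seed=0)
assert sampler(10.0) is True   # first request in second 10 is always sampled
assert sampler(11.0) is True   # a new second refills the reservoir
```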

## Using Powertools for AWS Lambda (Python) and the AWS CDK for tracing
<a name="python-logging-cdk"></a>

Follow the steps below to download, build, and deploy a sample Hello World Python application with integrated [Powertools for AWS Lambda (Python)](https://docs.powertools.aws.dev/lambda-python) modules using the AWS CDK. This application implements a basic API backend and uses Powertools for emitting logs, metrics, and traces. It consists of an Amazon API Gateway endpoint and a Lambda function. When you send a GET request to the API Gateway endpoint, the Lambda function runs, sends logs and metrics in Embedded Metric Format to CloudWatch, and sends traces to AWS X-Ray. The function returns a hello world message.

**Prerequisites**

To complete the steps in this section, you must have the following:
+ Python 3.11
+ [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [AWS CDK version 2](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites)
+ [AWS SAM CLI version 1.75 or later](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html). If you have an older version of the AWS SAM CLI, see [Upgrading the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/manage-sam-cli-versions.html#manage-sam-cli-versions-upgrade).

**Deploy a sample AWS CDK application**

1. Create a project directory for your new application.

   ```
   mkdir hello-world
   cd hello-world
   ```

1. Initialize the app.

   ```
   cdk init app --language python
   ```

1. Install the Python dependencies.

   ```
   pip install -r requirements.txt
   ```

1. Create a directory **lambda_function** under the root folder.

   ```
   mkdir lambda_function
   cd lambda_function
   ```

1. Create a file **app.py** and add the following code to the file. This is the code for the Lambda function.

   ```
   from aws_lambda_powertools.event_handler import APIGatewayRestResolver
   from aws_lambda_powertools.utilities.typing import LambdaContext
   from aws_lambda_powertools.logging import correlation_paths
   from aws_lambda_powertools import Logger
   from aws_lambda_powertools import Tracer
   from aws_lambda_powertools import Metrics
   from aws_lambda_powertools.metrics import MetricUnit
   
   app = APIGatewayRestResolver()
   tracer = Tracer()
   logger = Logger()
   metrics = Metrics(namespace="PowertoolsSample")
   
   @app.get("/hello")
   @tracer.capture_method
   def hello():
       # adding custom metrics
       # See: https://docs.powertools.aws.dev/lambda-python/latest/core/metrics/
       metrics.add_metric(name="HelloWorldInvocations", unit=MetricUnit.Count, value=1)
   
       # structured log
       # See: https://docs.powertools.aws.dev/lambda-python/latest/core/logger/
       logger.info("Hello world API - HTTP 200")
       return {"message": "hello world"}
   
   # Enrich logging with contextual information from Lambda
   @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
   # Adding tracer
   # See: https://docs.powertools.aws.dev/lambda-python/latest/core/tracer/
   @tracer.capture_lambda_handler
   # ensures metrics are flushed upon request completion/failure and capturing ColdStart metric
   @metrics.log_metrics(capture_cold_start_metric=True)
   def lambda_handler(event: dict, context: LambdaContext) -> dict:
       return app.resolve(event, context)
   ```

1. Open the **hello_world** directory. You should see a file called **hello_world_stack.py**.

   ```
   cd ..
   cd hello_world
   ```

1. Open **hello_world_stack.py** and add the following code to the file. This contains the [Lambda construct](https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_lambda.html), which creates the Lambda function and configures environment variables for Powertools, and the [API Gateway construct](https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_apigateway.html), which creates the REST API.

   ```
   from aws_cdk import (
       Stack,
       aws_apigateway as apigwv1,
       aws_lambda as lambda_,
       CfnOutput,
       Duration
   )
   from constructs import Construct
   
   class HelloWorldStack(Stack):
   
       def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
           super().__init__(scope, construct_id, **kwargs)
   
           # Powertools Lambda Layer
           powertools_layer = lambda_.LayerVersion.from_layer_version_arn(
               self,
               id="lambda-powertools",
                # At the time we wrote this example, the aws_lambda_python_alpha CDK construct was in alpha, so we use a layer to keep the example simpler
               # See https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_lambda_python_alpha/README.html
               # Check all Powertools layers versions here: https://docs.powertools.aws.dev/lambda-python/latest/#lambda-layer
               layer_version_arn=f"arn:aws:lambda:{self.region}:017000801446:layer:AWSLambdaPowertoolsPythonV2:21"
           )
   
           function = lambda_.Function(self,
               'sample-app-lambda',
               runtime=lambda_.Runtime.PYTHON_3_11,
               layers=[powertools_layer],
               code = lambda_.Code.from_asset("./lambda_function/"),
               handler="app.lambda_handler",
               memory_size=128,
               timeout=Duration.seconds(3),
               architecture=lambda_.Architecture.X86_64,
               environment={
                   "POWERTOOLS_SERVICE_NAME": "PowertoolsHelloWorld",
                   "POWERTOOLS_METRICS_NAMESPACE": "PowertoolsSample",
                   "LOG_LEVEL": "INFO"
               }
           )
   
           apigw = apigwv1.RestApi(self, "PowertoolsAPI", deploy_options=apigwv1.StageOptions(stage_name="dev"))
   
           hello_api = apigw.root.add_resource("hello")
           hello_api.add_method("GET", apigwv1.LambdaIntegration(function, proxy=True))
   
           CfnOutput(self, "apiUrl", value=f"{apigw.url}hello")
   ```

1. Deploy your application.

   ```
   cd ..
   cdk deploy
   ```

1. Get the URL of the deployed application:

   ```
   aws cloudformation describe-stacks --stack-name HelloWorldStack --query 'Stacks[0].Outputs[?OutputKey==`apiUrl`].OutputValue' --output text
   ```

1. Invoke the API endpoint:

   ```
   curl -X GET <URL_FROM_PREVIOUS_STEP>
   ```

   If successful, you'll see this response:

   ```
   {"message":"hello world"}
   ```

1. To get the traces for the function, run [sam traces](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-traces.html).

   ```
   sam traces
   ```

   The traces output looks like this:

   ```
   New XRay Service Graph
     Start time: 2023-02-03 14:59:50+00:00
     End time: 2023-02-03 14:59:50+00:00
     Reference Id: 0 - (Root) AWS::Lambda - sam-app-HelloWorldFunction-YBg8yfYtOc9j - Edges: [1]
      Summary_statistics:
        - total requests: 1
        - ok count(2XX): 1
        - error count(4XX): 0
        - fault count(5XX): 0
        - total response time: 0.924
     Reference Id: 1 - AWS::Lambda::Function - sam-app-HelloWorldFunction-YBg8yfYtOc9j - Edges: []
      Summary_statistics:
        - total requests: 1
        - ok count(2XX): 1
        - error count(4XX): 0
        - fault count(5XX): 0
        - total response time: 0.016
     Reference Id: 2 - client - sam-app-HelloWorldFunction-YBg8yfYtOc9j - Edges: [0]
      Summary_statistics:
        - total requests: 0
        - ok count(2XX): 0
        - error count(4XX): 0
        - fault count(5XX): 0
        - total response time: 0
   
   XRay Event [revision 1] at (2023-02-03T14:59:50.204000) with id (1-63dd2166-434a12c22e1307ff2114f299) and duration (0.924s)
    - 0.924s - sam-app-HelloWorldFunction-YBg8yfYtOc9j [HTTP: 200]
    - 0.016s - sam-app-HelloWorldFunction-YBg8yfYtOc9j
      - 0.739s - Initialization
      - 0.016s - Invocation
        - 0.013s - ## lambda_handler
          - 0.000s - ## app.hello
      - 0.000s - Overhead
   ```

1. This is a public API endpoint that is accessible over the internet. We recommend that you delete the endpoint after testing.

   ```
   cdk destroy
   ```

## Using ADOT to instrument your Python functions
<a name="python-adot"></a>

ADOT provides fully managed Lambda [layers](chapter-layers.md) that package everything you need to collect telemetry data using the OTel SDK. By consuming this layer, you can instrument your Lambda functions without having to modify any function code. You can also configure your layer to do custom initialization of OTel. For more information, see [Custom configuration for the ADOT Collector on Lambda](https://aws-otel.github.io/docs/getting-started/lambda#custom-configuration-for-the-adot-collector-on-lambda) in the ADOT documentation.

For Python runtimes, you can add the **AWS managed Lambda layer for ADOT Python** to automatically instrument your functions. This layer works for both arm64 and x86_64 architectures. For detailed instructions on how to add this layer, see [AWS Distro for OpenTelemetry Lambda Support for Python](https://aws-otel.github.io/docs/getting-started/lambda/lambda-python) in the ADOT documentation.

## Using the X-Ray SDK to instrument your Python functions
<a name="python-xray-sdk"></a>

To record details about calls that your Lambda function makes to other resources in your application, you can also use the AWS X-Ray SDK for Python. To get the SDK, add the `aws-xray-sdk` package to your application's dependencies.

**Example [requirements.txt](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-python/function/requirements.txt)**  

```
jsonpickle==1.3
aws-xray-sdk==2.4.3
```

In your function code, you can instrument AWS SDK clients by patching the `boto3` library with the `aws_xray_sdk.core` module.

**Example [function – Tracing an AWS SDK client](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-python/function/lambda_function.py)**  

```
import logging
import os

import boto3
import jsonpickle
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

logger = logging.getLogger()
logger.setLevel(logging.INFO)
patch_all()

client = boto3.client('lambda')
client.get_account_settings()

def lambda_handler(event, context):
    logger.info('## ENVIRONMENT VARIABLES\r' + jsonpickle.encode(dict(**os.environ)))
  ...
```

After you add the correct dependencies and make the necessary code changes, activate tracing in your function's configuration via the Lambda console or the API.

## Activating tracing with the Lambda console
<a name="python-tracing-console"></a>

To toggle active tracing on your Lambda function with the console, follow these steps:

**To turn on active tracing**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose **Configuration** and then choose **Monitoring and operations tools**.

1. Under **Additional monitoring tools**, choose **Edit**.

1. Under **CloudWatch Application Signals and AWS X-Ray**, choose **Enable** for **Lambda service traces**.

1. Choose **Save**.

## Activating tracing with the Lambda API
<a name="python-tracing-api"></a>

To configure tracing on your Lambda function with the AWS CLI or AWS SDK, use the following API operations:
+ [UpdateFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionConfiguration.html)
+ [GetFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_GetFunctionConfiguration.html)
+ [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html)

The following example AWS CLI command enables active tracing on a function named **my-function**.

```
aws lambda update-function-configuration --function-name my-function \
--tracing-config Mode=Active
```
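
You can make the same change from Python with Boto3. The `tracing_config` helper below is our own illustrative wrapper, not part of any AWS library; the two valid tracing modes are `Active` and `PassThrough`.

```python
VALID_MODES = {"Active", "PassThrough"}

def tracing_config(mode: str) -> dict:
    # Build the TracingConfig parameter for UpdateFunctionConfiguration.
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return {"Mode": mode}

def enable_active_tracing(function_name: str):
    # Requires AWS credentials; equivalent to the CLI command above.
    import boto3  # imported lazily so tracing_config stays testable offline
    client = boto3.client("lambda")
    return client.update_function_configuration(
        FunctionName=function_name,
        TracingConfig=tracing_config("Active"),
    )
```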

Tracing mode is part of the version-specific configuration when you publish a version of your function. You can't change the tracing mode on a published version.

## Activating tracing with CloudFormation
<a name="python-tracing-cloudformation"></a>

To activate tracing on an `AWS::Lambda::Function` resource in a CloudFormation template, use the `TracingConfig` property.

**Example [function-inline.yml](https://github.com/awsdocs/aws-lambda-developer-guide/blob/master/templates/function-inline.yml) – Tracing configuration**  

```
Resources:
  function:
    Type: [AWS::Lambda::Function](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html)
    Properties:
      TracingConfig:
        Mode: Active
      ...
```

For an AWS Serverless Application Model (AWS SAM) `AWS::Serverless::Function` resource, use the `Tracing` property.

**Example [template.yml](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-nodejs/template.yml) – Tracing configuration**  

```
Resources:
  function:
    Type: [AWS::Serverless::Function](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html)
    Properties:
      Tracing: Active
      ...
```

## Interpreting an X-Ray trace
<a name="python-tracing-interpretation"></a>

Your function needs permission to upload trace data to X-Ray. When you activate tracing in the Lambda console, Lambda adds the required permissions to your function's [execution role](lambda-intro-execution-role.md). Otherwise, add the [AWSXRayDaemonWriteAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess) policy to the execution role.

After you've configured active tracing, you can observe specific requests through your application. The [ X-Ray service graph](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html#xray-concepts-servicegraph) shows information about your application and all its components. The following example shows an application with two functions. The primary function processes events and sometimes returns errors. The second function at the top processes errors that appear in the first function's log group and uses the AWS SDK to call X-Ray, Amazon Simple Storage Service (Amazon S3), and Amazon CloudWatch Logs.

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/sample-errorprocessor-servicemap.png)


X-Ray doesn't trace all requests to your application. X-Ray applies a sampling algorithm to ensure that tracing is efficient, while still providing a representative sample of all requests. The sampling rate is 1 request per second and 5 percent of additional requests. You can't configure the X-Ray sampling rate for your functions.
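
As a mental model only (this is not the X-Ray SDK's actual implementation), you can think of the default rule as a one-per-second reservoir plus a fixed-rate coin flip for any additional requests:

```python
import random
import time

class FixedRateSampler:
    """Illustrative model of X-Ray's default sampling rule: the first
    request in each one-second window is always sampled (the reservoir),
    and each additional request is sampled at a fixed rate (5 percent)."""

    def __init__(self, reservoir_per_second=1, rate=0.05, coin=None):
        self.reservoir = reservoir_per_second
        self.rate = rate
        # The coin flip is injectable so the behavior can be tested deterministically.
        self.coin = coin or (lambda: random.random() < self.rate)
        self._window = None
        self._taken = 0

    def should_sample(self, now=None):
        now = int(now if now is not None else time.time())
        if now != self._window:
            # New one-second window: refill the reservoir.
            self._window, self._taken = now, 0
        if self._taken < self.reservoir:
            self._taken += 1
            return True
        return self.coin()
```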

In X-Ray, a *trace* records information about a request that is processed by one or more *services*. Lambda records two segments per trace, which creates two nodes on the service graph. The following image highlights these two nodes:

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/xray-servicemap-function.png)


The first node on the left represents the Lambda service, which receives the invocation request. The second node represents your specific Lambda function. The following example shows a trace with these two segments. Both are named **my-function**, but one has an origin of `AWS::Lambda` and the other has an origin of `AWS::Lambda::Function`. If the `AWS::Lambda` segment shows an error, the Lambda service had an issue. If the `AWS::Lambda::Function` segment shows an error, your function had an issue.

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/V2_sandbox_images/my-function-2-v1.png)


This example expands the `AWS::Lambda::Function` segment to show its three subsegments.

**Note**  
AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account.  
The example trace shown here illustrates the old-style function segment. The differences between the old- and new-style segments are described in the following paragraphs.  
These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the new-format log messages and trace segments.

The old-style function segment contains the following subsegments:
+ **Initialization** – Represents time spent loading your function and running [initialization code](foundation-progmodel.md). This subsegment only appears for the first event that each instance of your function processes.
+ **Invocation** – Represents the time spent running your handler code.
+ **Overhead** – Represents the time the Lambda runtime spends preparing to handle the next event.

The new-style function segment doesn't contain an `Invocation` subsegment. Instead, customer subsegments are attached directly to the function segment. For more information about the structure of the old- and new-style function segments, see [Understanding X-Ray traces](services-xray.md#services-xray-traces).

You can also instrument HTTP clients, record SQL queries, and create custom subsegments with annotations and metadata. For more information, see the [AWS X-Ray SDK for Python](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python.html) in the *AWS X-Ray Developer Guide*.

**Pricing**  
You can use X-Ray tracing for free each month up to a certain limit as part of the AWS Free Tier. Beyond that threshold, X-Ray charges for trace storage and retrieval. For more information, see [AWS X-Ray pricing](https://aws.amazon.com/xray/pricing/).

## Storing runtime dependencies in a layer (X-Ray SDK)
<a name="python-tracing-layers"></a>

If you use the X-Ray SDK to instrument AWS SDK clients in your function code, your deployment package can become quite large. To avoid uploading runtime dependencies every time you update your function code, package the X-Ray SDK in a [Lambda layer](chapter-layers.md).

The following example shows an `AWS::Serverless::LayerVersion` resource that stores the AWS X-Ray SDK for Python.

**Example [template.yml](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-python/template.yml) – Dependencies layer**  

```
Resources:
  function:
    Type: [AWS::Serverless::Function](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html)
    Properties:
      CodeUri: function/.
      Tracing: Active
      Layers:
        - !Ref libs
      ...
  libs:
    Type: [AWS::Serverless::LayerVersion](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-layerversion.html)
    Properties:
      LayerName: blank-python-lib
      Description: Dependencies for the blank-python sample app.
      ContentUri: package/.
      CompatibleRuntimes:
        - python3.11
```

With this configuration, you update the library layer only if you change your runtime dependencies. Since the function deployment package contains only your code, this can help reduce upload times.

Creating a layer for dependencies requires build changes to generate the layer archive prior to deployment. For a working example, see the [blank-python](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-python) sample application.
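
The build step itself can be as simple as installing the dependencies into a staging directory (for example, `pip install -r requirements.txt -t package/`) and then zipping that directory under a `python/` prefix, which is where the Python runtimes look for layer packages. The following helper is an illustrative sketch of that packaging step, not part of any AWS tooling:

```python
import os
import zipfile

def build_layer_zip(package_dir: str, zip_path: str) -> None:
    # Lambda extracts layers under /opt, and Python runtimes add /opt/python
    # to the import path, so dependencies go under a python/ prefix.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(package_dir):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, package_dir)
                zf.write(full, os.path.join("python", rel))
```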