

# Create a Hybrid Job


This section shows you how to create a hybrid job using a Python script. Alternatively, to create a hybrid job from local Python code, such as from your preferred integrated development environment (IDE) or a Braket notebook, see [Run your local code as a hybrid job](braket-hybrid-job-decorator.md).

**Topics**
+ [Create and run](#braket-jobs-first-create)
+ [Monitor your results](#braket-jobs-first-monitor-results)
+ [Save your results](#braket-jobs-save-results)
+ [Using checkpoints](#braket-jobs-checkpoints)
+ [Run your local code as a hybrid job](braket-hybrid-job-decorator.md)
+ [Using the API with Hybrid Jobs](braket-jobs-api.md)
+ [Create and debug a hybrid job with local mode](braket-jobs-local-mode.md)

## Create and run


Once you have a role with permissions to run a hybrid job, you are ready to proceed. The key piece of your first Braket hybrid job is the *algorithm script*. It defines the algorithm you want to run and contains the classical logic and quantum tasks that are part of your algorithm. In addition to your algorithm script, you can provide other dependency files. The algorithm script together with its dependencies is called the *source module*. The *entry point* defines the first file or function to run in your source module when the hybrid job starts.

![Diagram showing the workflow of creating a quantum job using a console or notebook, running the algorithm script on a quantum device, and analyzing results.](http://docs.aws.amazon.com/braket/latest/developerguide/images/braket-jobs-first-workflow.jpg)


First, consider the following basic example of an algorithm script that runs a Bell circuit five times and prints the corresponding measurement results.

```
import os

from braket.aws import AwsDevice
from braket.circuits import Circuit


def start_here():

    print("Test job started!")

    # Use the device declared in the job script
    device = AwsDevice(os.environ["AMZN_BRAKET_DEVICE_ARN"])

    bell = Circuit().h(0).cnot(0, 1)
    for count in range(5):
        task = device.run(bell, shots=100)
        print(task.result().measurement_counts)

    print("Test job completed!")
```

Save this file with the name *algorithm_script.py* in your current working directory on your Braket notebook or local environment. The algorithm_script.py file has `start_here()` as the planned entry point.

Next, create a Python file or Python notebook in the same directory as the algorithm_script.py file. This script kicks off the hybrid job and handles any asynchronous processing, such as printing the status or key outcomes that we are interested in. At a minimum, this script needs to specify your hybrid job script and your primary device.

**Note**  
For more information about how to create a Braket notebook or upload a file, such as the *algorithm_script.py* file, in the same directory as the notebooks, see [Run your first circuit using the Amazon Braket Python SDK](braket-get-started-run-circuit.md).

For this basic first case, you target a simulator. Whichever type of quantum device you target, a simulator or an actual quantum processing unit (QPU), the device you specify with `device` in the following script is used to schedule the hybrid job and is available to the algorithm scripts as the environment variable `AMZN_BRAKET_DEVICE_ARN`.
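Inside the algorithm script, the device ARN arrives through that environment variable. The following sketch shows one way to read it; the fallback ARN is only an illustration for running the snippet outside a hybrid job, not something the service provides.

```python
import os

# The hybrid job container sets AMZN_BRAKET_DEVICE_ARN for the algorithm script.
# The fallback value below is only for illustration when running locally.
device_arn = os.environ.get(
    "AMZN_BRAKET_DEVICE_ARN",
    "arn:aws:braket:::device/quantum-simulator/amazon/sv1",
)
print(device_arn)
```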

**Note**  
You can only use devices that are available in the AWS Region of your hybrid job. The Amazon Braket SDK automatically selects this AWS Region. For example, a hybrid job in us-east-1 can use IonQ, SV1, DM1, and TN1 devices, but not Rigetti devices.

If you choose a quantum computer instead of a simulator, Braket schedules your hybrid jobs to run all of their quantum tasks with priority access.

```
from braket.aws import AwsQuantumJob
from braket.devices import Devices

job = AwsQuantumJob.create(
    Devices.Amazon.SV1,
    source_module="algorithm_script.py",
    entry_point="algorithm_script:start_here",
    wait_until_complete=True
)
```

The parameter `wait_until_complete=True` sets a verbose mode so that your script prints output from the job as it runs. You should see an output similar to the following example.

```
Initializing Braket Job: arn:aws:braket:us-west-2:111122223333:job/braket-job-default-123456789012
Job queue position: 1
Job queue position: 1
Job queue position: 1
..............
.
.
.
Beginning Setup
Checking for Additional Requirements
Additional Requirements Check Finished
Running Code As Process
Test job started!
Counter({'00': 58, '11': 42})
Counter({'00': 55, '11': 45})
Counter({'11': 51, '00': 49})
Counter({'00': 56, '11': 44})
Counter({'11': 56, '00': 44})
Test job completed!
Code Run Finished
2025-09-24 23:13:40,962 sagemaker-training-toolkit INFO     Reporting training SUCCESS
```
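Output like the measurement counts above can be post-processed with standard Python tooling. The following sketch, with the five `Counter` objects hard-coded to mirror the sample output, aggregates them into overall state probabilities.

```python
from collections import Counter

# Sample counts mirroring the example output above (100 shots each).
runs = [
    Counter({"00": 58, "11": 42}),
    Counter({"00": 55, "11": 45}),
    Counter({"11": 51, "00": 49}),
    Counter({"00": 56, "11": 44}),
    Counter({"11": 56, "00": 44}),
]

total = sum(runs, Counter())  # Merge counts across all runs
shots = sum(total.values())   # 500 shots in total
probs = {state: count / shots for state, count in total.items()}
print(probs)  # → {'00': 0.524, '11': 0.476}
```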

**Note**  
You can also use your custom-made module with the [AwsQuantumJob.create](https://amazon-braket-sdk-python.readthedocs.io/en/latest/_apidoc/braket.aws.aws_quantum_job.html#braket.aws.aws_quantum_job.AwsQuantumJob.create) method by passing its location (either the path to a local directory or file, or an S3 URI of a tar.gz file). For a working example, see the [Parallelize_training_for_QML.ipynb](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/5_Parallelize_training_for_QML/Parallelize_training_for_QML.ipynb) file in the hybrid jobs folder in the [Amazon Braket examples GitHub repo](https://github.com/amazon-braket/amazon-braket-examples/tree/main).

## Monitor your results


In addition to the output printed by verbose mode, you can access the log output from Amazon CloudWatch. To do this, go to the **Log groups** tab on the left menu of the job detail page, select the log group `aws/braket/jobs`, and then choose the log stream that contains the job name. In the example above, this is `braket-job-default-1631915042705/algo-1-1631915190`.

![CloudWatch log group showing list of log events with file paths and timestamps for Amazon Braket SDK Python tests.](http://docs.aws.amazon.com/braket/latest/developerguide/images/braket-jobs-first-cw-log.png)


You can also view the status of the hybrid job in the console by selecting the **Hybrid Jobs** page and then choosing **Settings**.

![Amazon Braket hybrid job details showing summary, event times, source code and instance configuration, and stopping conditions.](http://docs.aws.amazon.com/braket/latest/developerguide/images/braket-jobs-first-console-status.png)


Your hybrid job produces some artifacts in Amazon S3 while it runs. The default S3 bucket name is `amazon-braket-<region>-<accountid>` and the content is in the `jobs/<jobname>/<timestamp>` directory. You can configure the S3 locations where these artifacts are stored by specifying a different `code_location` when the hybrid job is created with the Braket Python SDK.

**Note**  
This S3 bucket must be located in the same AWS Region as your job script.

The `jobs/<jobname>/<timestamp>` directory contains a subfolder with the output from the entry point script in a `model.tar.gz` file. There is also a directory called `script` that contains your algorithm script artifacts in a `source.tar.gz` file. The results from your actual quantum tasks are in the directory named `jobs/<jobname>/tasks`.
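After downloading `model.tar.gz` from S3, you can unpack the output with Python's standard `tarfile` module. The sketch below builds a stand-in archive locally so that it is self-contained; in practice, you would extract the file downloaded from your bucket.

```python
import json
import tarfile
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)

    # Stand-in for the results file produced by the entry point script.
    results_file = tmp / "results.json"
    results_file.write_text(json.dumps({"measurement_counts": [{"00": 55, "11": 45}]}))

    # Stand-in for the model.tar.gz artifact downloaded from S3.
    archive = tmp / "model.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(results_file, arcname="results.json")

    # Extract the archive and read the results back.
    out_dir = tmp / "extracted"
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(out_dir)
    results = json.loads((out_dir / "results.json").read_text())

print(results["measurement_counts"][0])  # → {'00': 55, '11': 45}
```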

## Save your results


You can save the results generated by the algorithm script so that they are available from the hybrid job object in the hybrid job script, as well as from the output folder in Amazon S3 (in a tar-zipped file named `model.tar.gz`).

The output must be saved in a file using a JavaScript Object Notation (JSON) format. If the data cannot be readily serialized to text, as in the case of a numpy array, you can pass in an option to serialize using a pickled data format. See the [braket.jobs.data_persistence module](https://amazon-braket-sdk-python.readthedocs.io/en/latest/_apidoc/braket.jobs.data_persistence.html#braket.jobs.data_persistence.save_job_result) for more details.
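To see why this matters, the following sketch shows that a raw numpy array fails plain JSON serialization unless it is first converted to a list (pickling, as noted above, is the alternative the SDK offers for such data).

```python
import json

import numpy as np

arr = np.arange(3)

# A raw numpy array is not JSON serializable.
try:
    json.dumps({"data": arr})
except TypeError as err:
    print(f"Cannot serialize: {err}")

# Converting to a plain list first makes it serializable as text.
serialized = json.dumps({"data": arr.tolist()})
print(serialized)  # → {"data": [0, 1, 2]}
```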

To save the results of the hybrid job, add the following lines, commented with `# ADD`, to the algorithm_script.py file.

```
import os

from braket.aws import AwsDevice
from braket.circuits import Circuit
from braket.jobs import save_job_result  # ADD


def start_here():

    print("Test job started!")

    device = AwsDevice(os.environ['AMZN_BRAKET_DEVICE_ARN'])

    results = []  # ADD

    bell = Circuit().h(0).cnot(0, 1)
    for count in range(5):
        task = device.run(bell, shots=100)
        print(task.result().measurement_counts)
        results.append(task.result().measurement_counts)  # ADD

        save_job_result({"measurement_counts": results})  # ADD

    print("Test job completed!")
```

You can then display the results of the hybrid job from your job script by appending the line `print(job.result())`, commented with `# ADD`.

```
import time
from braket.aws import AwsQuantumJob

job = AwsQuantumJob.create(
    source_module="algorithm_script.py",
    entry_point="algorithm_script:start_here",
    device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
)

print(job.arn)
while job.state() not in AwsQuantumJob.TERMINAL_STATES:
    print(job.state())
    time.sleep(10)

print(job.state())
print(job.result())   # ADD
```

In this example, we have removed `wait_until_complete=True` to suppress verbose output. You can add it back in for debugging. When you run this hybrid job, it prints the hybrid job ARN, followed by the state of the hybrid job every 10 seconds until the hybrid job is `COMPLETED`, after which it shows you the results of the Bell circuit. See the following example.

```
arn:aws:braket:us-west-2:111122223333:job/braket-job-default-123456789012
INITIALIZED
RUNNING
RUNNING
RUNNING
RUNNING
RUNNING
RUNNING
RUNNING
RUNNING
RUNNING
RUNNING
...
RUNNING
RUNNING
COMPLETED
{'measurement_counts': [{'11': 53, '00': 47},..., {'00': 51, '11': 49}]}
```

## Using checkpoints


You can save intermediate iterations of your hybrid jobs using checkpoints. In the algorithm script example from the previous section, you would add the following lines, commented with `# ADD`, to create checkpoint files.

```
from braket.aws import AwsDevice
from braket.circuits import Circuit
from braket.jobs import save_job_checkpoint  # ADD
import os


def start_here():

    print("Test job starts!")

    device = AwsDevice(os.environ["AMZN_BRAKET_DEVICE_ARN"])

    # ADD the following code
    job_name = os.environ["AMZN_BRAKET_JOB_NAME"]
    save_job_checkpoint(checkpoint_data={"data": f"data for checkpoint from {job_name}"}, checkpoint_file_suffix="checkpoint-1")  # End of ADD

    bell = Circuit().h(0).cnot(0, 1)
    for count in range(5):
        task = device.run(bell, shots=100)
        print(task.result().measurement_counts)

    print("Test hybrid job completed!")
```

When you run the hybrid job, it creates the file *<jobname>-checkpoint-1.json* in your hybrid job artifacts in the checkpoints directory, which defaults to the `/opt/jobs/checkpoints` path. The hybrid job script remains unchanged unless you want to change this default path.

If you want to load a hybrid job from a checkpoint generated by a previous hybrid job, the algorithm script uses `from braket.jobs import load_job_checkpoint`. The logic to load in your algorithm script is as follows.

```
from braket.jobs import load_job_checkpoint

checkpoint_1 = load_job_checkpoint(
    "previous_job_name",
    checkpoint_file_suffix="checkpoint-1",
)
```

After loading this checkpoint, you can continue your logic based on the content loaded to `checkpoint-1`.
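For example, with the dictionary-style checkpoints shown in this section, the continuation logic might look like the following sketch. The `iteration` and `theta` keys are hypothetical examples chosen for illustration, not part of the SDK.

```python
# checkpoint_1 stands in for the dictionary returned by load_job_checkpoint().
checkpoint_1 = {"iteration": 3, "theta": 0.42}  # Hypothetical saved state

# Resume from the saved iteration instead of starting over.
start = checkpoint_1.get("iteration", 0)
theta = checkpoint_1.get("theta", 0.0)

for i in range(start, 5):
    pass  # ... continue the optimization from iteration `start`

print(f"Resumed at iteration {start} with theta={theta}")
```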

**Note**  
The `checkpoint_file_suffix` must match the suffix previously specified when creating the checkpoint.

Your orchestration script needs to specify the `job-arn` of the previous hybrid job with the line commented with `#ADD`.

```
from braket.aws import AwsQuantumJob

job = AwsQuantumJob.create(
    source_module="source_dir",
    entry_point="source_dir.algorithm_script:start_here",
    device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    copy_checkpoints_from_job="<previous-job-ARN>", #ADD
    )
```

# Run your local code as a hybrid job


Amazon Braket Hybrid Jobs provides a fully managed orchestration of hybrid quantum-classical algorithms, combining Amazon EC2 compute resources with Amazon Braket Quantum Processing Unit (QPU) access. Quantum tasks created in a hybrid job have priority queueing over individual quantum tasks so that your algorithms won't be interrupted by fluctuations in the quantum task queue. Each QPU maintains a separate hybrid jobs queue, ensuring that only one hybrid job can run at any given time.

**Topics**
+ [Create a hybrid job from local Python code](#create-hybrid-job-from-local-python-code)
+ [Install additional Python packages and source code](#install-python-packages-and-code)
+ [Save and load data into a hybrid job instance](#save-load-data-into-instance)
+ [Best practices for hybrid job decorators](#best-practices)

## Create a hybrid job from local Python code


You can run your local Python code as an Amazon Braket hybrid job by annotating your code with the `@hybrid_job` decorator, as shown in the following code example. For custom environments, you can opt to [use a custom container](braket-jobs-byoc.md) from Amazon Elastic Container Registry (ECR).

**Note**  
Only Python 3.12 is supported by default.

You can use the `@hybrid_job` decorator to annotate a function. Braket transforms the code inside the decorated function into a Braket hybrid job [algorithm script](braket-jobs-first.md). The hybrid job then invokes the decorated function on an Amazon EC2 instance. You can monitor the progress of the job with `job.state()` or with the Braket console. The following code example shows how to run a parametric circuit on the State Vector Simulator (SV1) device.

```
from braket.aws import AwsDevice
from braket.circuits import Circuit, FreeParameter, Observable
from braket.devices import Devices
from braket.jobs.hybrid_job import hybrid_job
from braket.jobs.metrics import log_metric

device_arn = Devices.Amazon.SV1


@hybrid_job(device=device_arn)  # Choose priority device
def run_hybrid_job(num_tasks=1):
    device = AwsDevice(device_arn)  # Declare AwsDevice within the hybrid job

    # Create a parametric circuit
    circ = Circuit()
    circ.rx(0, FreeParameter("theta"))
    circ.cnot(0, 1)
    circ.expectation(observable=Observable.X(), target=0)

    theta = 0.0  # Initial parameter

    for i in range(num_tasks):
        task = device.run(circ, shots=100, inputs={"theta": theta})  # Input parameters
        exp_val = task.result().values[0]

        theta += exp_val  # Modify the parameter (possibly gradient descent)

        log_metric(metric_name="exp_val", value=exp_val, iteration_number=i)

    return {"final_theta": theta, "final_exp_val": exp_val}
```

You create the hybrid job by invoking the decorated function as you would a normal Python function. However, the call returns the hybrid job handle rather than the result of the function. To retrieve the results after the hybrid job has completed, use `job.result()`.

```
job = run_hybrid_job(num_tasks=1)
result = job.result()
```

The device argument in the `@hybrid_job` decorator specifies the device that the hybrid job has priority access to; in this case, the SV1 simulator. To get QPU priority, you must ensure that the device ARN used within the function matches that specified in the decorator. For convenience, you can use the helper function `get_job_device_arn()` to capture the device ARN declared in `@hybrid_job`.

**Note**  
Each hybrid job has at least a one-minute startup time because it creates a containerized environment on Amazon EC2. So for very short workloads, such as a single circuit or a batch of circuits, it may suffice for you to use quantum tasks.

**Hyperparameters** 

The `run_hybrid_job()` function takes the argument `num_tasks` to control the number of quantum tasks created. The hybrid job automatically captures this as a [hyperparameter](braket-jobs-hyperparameters.md).

**Note**  
Hyperparameters are displayed in the Braket console as strings that are limited to 2,500 characters.
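A rough illustration of this capture: keyword arguments are recorded as strings, so anything you pass should have a string form under the 2,500-character limit. The conversion below is a sketch of the idea, not the SDK's actual implementation.

```python
# Sketch of how decorator keyword arguments become string hyperparameters.
kwargs = {"num_tasks": 5, "learning_rate": 0.01}

hyperparameters = {key: str(value) for key, value in kwargs.items()}
print(hyperparameters)  # → {'num_tasks': '5', 'learning_rate': '0.01'}

# Values longer than 2,500 characters would be truncated in the console display.
assert all(len(value) <= 2500 for value in hyperparameters.values())
```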

**Metrics and logging** 

Within the `run_hybrid_job()` function, metrics from iterative algorithms are recorded with `log_metric`. Metrics are automatically plotted in the Braket console page under the hybrid job tab. You can use metrics to track the quantum task costs in near-real time during the hybrid job run with the [Braket cost tracker](braket-pricing.md). The example above uses the metric name `exp_val` to record the expectation value at each iteration.

**Retrieving results** 

After the hybrid job has completed, you use `job.result()` to retrieve the hybrid job results. Any objects in the return statement are automatically captured by Braket. Note that the objects returned by the function must be serializable. The following code shows a working and a failing example.

```
import numpy as np


# Working example
@hybrid_job(device=Devices.Amazon.SV1)
def passing():
    np_array = np.random.rand(5)
    return np_array  # Serializable

# # Failing example
# @hybrid_job(device=Devices.Amazon.SV1)
# def failing():
#     return MyObject() # Not serializable
```

**Job name** 

By default, the name for this hybrid job is inferred from the function name. You may also specify a custom name up to 50 characters long. For example, in the following code the job name is "my-job-name".

```
@hybrid_job(device=Devices.Amazon.SV1, job_name="my-job-name")
def function():
    pass
```

**Local mode** 

[Local jobs](braket-jobs-local-mode.md) are created by adding the argument `local=True` to the decorator. This runs the hybrid job in a containerized environment on your local compute environment, such as your laptop. Local jobs **do not** have priority queueing for quantum tasks. For advanced cases such as multi-node or MPI, local jobs may not have access to the required Braket environment variables. The following code creates a local hybrid job with the device set to the SV1 simulator.

```
@hybrid_job(device=Devices.Amazon.SV1, local=True)
def run_hybrid_job(num_tasks=1):
    return ...
```

All other hybrid job options are supported. For a list of options, see the [braket.jobs.quantum_job_creation module](https://amazon-braket-sdk-python.readthedocs.io/en/stable/_apidoc/braket.jobs.quantum_job_creation.html).

## Install additional Python packages and source code


You can customize your runtime environment to use your preferred Python packages. You can use a `requirements.txt` file, a list of package names, or [bring your own container (BYOC)](braket-jobs-byoc.md). For example, the `requirements.txt` file may include other packages to install.

```
qiskit 
pennylane >= 0.31
mitiq == 0.29
```

To customize a runtime environment using a `requirements.txt` file, refer to the following code example.

```
@hybrid_job(device=Devices.Amazon.SV1, dependencies="requirements.txt")
def run_hybrid_job(num_tasks=1):
    return ...
```

Alternatively, you may supply the package names as a Python list as follows.

```
@hybrid_job(device=Devices.Amazon.SV1, dependencies=["qiskit", "pennylane>=0.31", "mitiq==0.29"])
def run_hybrid_job(num_tasks=1):
    return ...
```

Additional source code can be specified either as a list of modules or as a single module, as in the following code example.

```
@hybrid_job(device=Devices.Amazon.SV1, include_modules=["my_module1", "my_module2"])
def run_hybrid_job(num_tasks=1):
    return ...
```

## Save and load data into a hybrid job instance


**Specifying input training data**

When you create a hybrid job, you can provide an input training dataset by specifying an Amazon Simple Storage Service (Amazon S3) bucket. You can also specify a local path, in which case Braket automatically uploads the data to Amazon S3 at `s3://<default_bucket_name>/jobs/<job_name>/<timestamp>/data/<channel_name>`. If you specify a local path, the channel name defaults to "input". The following code shows how to load a numpy file from the local path `data/file.npy`.

```
import numpy as np


@hybrid_job(device=Devices.Amazon.SV1, input_data="data/file.npy")
def run_hybrid_job(num_tasks=1):
    data = np.load("data/file.npy")
    return ...
```

For S3, you must use the `get_input_data_dir()` helper function.

```
import numpy as np
from braket.jobs import get_input_data_dir

s3_path = "s3://amazon-braket-us-east-1-123456789012/job-data/file.npy"


@hybrid_job(device=None, input_data=s3_path)
def job_s3_input():
    np.load(get_input_data_dir() + "/file.npy")


@hybrid_job(device=None, input_data={"channel": s3_path})
def job_s3_input_channel():
    np.load(get_input_data_dir("channel") + "/file.npy")
```

You can specify multiple input data sources by providing a dictionary of channel values and S3 URIs or local paths. 

```
import numpy as np
from braket.jobs import get_input_data_dir

input_data = {
    "input": "data/file.npy",
    "input_2": "s3://amzn-s3-demo-bucket/data.json"
}


@hybrid_job(device=None, input_data=input_data)
def multiple_input_job():
    np.load(get_input_data_dir("input") + "/file.npy")
    np.load(get_input_data_dir("input_2") + "/data.json")
```
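The helper resolves each channel name to a directory inside the job container. The sketch below mimics that layout with a hypothetical stand-in; the real `get_input_data_dir()` comes from `braket.jobs`, and the environment variable name and base path used here are assumptions made for illustration.

```python
import os

# Hypothetical stand-in for the input directory inside the job container.
os.environ["AMZN_BRAKET_INPUT_DIR"] = "/opt/braket/input/data"  # assumed layout


def get_input_data_dir_sketch(channel="input"):
    """Sketch of how a channel name maps to a directory (not the SDK's code)."""
    return f"{os.environ['AMZN_BRAKET_INPUT_DIR']}/{channel}"


print(get_input_data_dir_sketch("input"))    # → /opt/braket/input/data/input
print(get_input_data_dir_sketch("input_2"))  # → /opt/braket/input/data/input_2
```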

**Note**  
When the input data is large (>1 GB), there is a long wait time before the hybrid job is created. This is because the local input data is first uploaded to an S3 bucket, after which the S3 path is added to the job request, and finally the job request is submitted to the Braket service.

**Saving results to S3**

To save results not included in the return statement of the decorated function, you must append the correct directory to all file writing operations. The following example shows how to save a numpy array and a matplotlib figure.

```
import matplotlib.pyplot as plt
import numpy as np


@hybrid_job(device=Devices.Amazon.SV1)
def run_hybrid_job(num_tasks=1):
    result = np.random.rand(5)

    # Save a numpy array
    np.save("result.npy", result)

    # Save a matplotlib figure
    plt.plot(result)
    plt.savefig("fig.png")
    return ...
```

All results are compressed into a file named `model.tar.gz`. You can download the results with the Python function `job.result()`, or by navigating to the results folder from the hybrid job page in the Braket management console.

**Saving and resuming from checkpoints**

For long-running hybrid jobs, it's recommended to periodically save the intermediate state of the algorithm. You can use the built-in `save_job_checkpoint()` helper function, or save files to the `AMZN_BRAKET_JOB_RESULTS_DIR` path. The latter is available through the helper function `get_job_results_dir()`.

The following is a minimal working example for saving and loading checkpoints with a hybrid job decorator:

```
from braket.jobs import save_job_checkpoint, load_job_checkpoint, hybrid_job


@hybrid_job(device=None, wait_until_complete=True)
def function():
    save_job_checkpoint({"a": 1})


job = function()
job_name = job.name
job_arn = job.arn


@hybrid_job(device=None, wait_until_complete=True, copy_checkpoints_from_job=job_arn)
def continued_function():
    load_job_checkpoint(job_name)


continued_job = continued_function()
```

In the first hybrid job, `save_job_checkpoint()` is called with a dictionary containing the data we want to save. By default, every value must be serializable as text. For checkpointing more complex Python objects, such as numpy arrays, you can set `data_format = PersistedJobDataFormat.PICKLED_V4`. This code creates and overwrites a checkpoint file with default name `<jobname>.json` in your hybrid job artifacts under a subfolder called "checkpoints".
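A sketch of the idea behind the pickled format: complex objects are serialized to bytes and stored as text. The base64 step here illustrates text-safe encoding in general and is not necessarily the SDK's exact wire format.

```python
import base64
import pickle

import numpy as np

# A complex object that plain JSON cannot represent directly.
state = {"theta": np.array([0.1, 0.2, 0.3])}

# Pickle to bytes, then base64-encode so it can live in a text (JSON) file.
encoded = base64.b64encode(pickle.dumps(state)).decode("ascii")

# Round-trip: decode and unpickle to recover the original object.
restored = pickle.loads(base64.b64decode(encoded))
print(restored["theta"])  # → [0.1 0.2 0.3]
```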

To create a new hybrid job to continue from the checkpoint, we need to pass `copy_checkpoints_from_job=job_arn` where `job_arn` is the hybrid job ARN of the previous job. Then we use `load_job_checkpoint(job_name)` to load from the checkpoint.

## Best practices for hybrid job decorators


**Embrace asynchronicity**

Hybrid jobs created with the decorator annotation are asynchronous; they run once the classical and quantum resources are available. You monitor the progress of the algorithm using the Braket Management Console or Amazon CloudWatch. When you submit your algorithm to run, Braket runs your algorithm in a scalable containerized environment and results are retrieved when the algorithm is complete.

**Run iterative variational algorithms**

Hybrid Jobs gives you the tools to run iterative quantum-classical algorithms. For purely quantum problems, use [quantum tasks](braket-submit-tasks.md) or a [batch of quantum tasks](braket-batching-tasks.md). The priority access to certain QPUs is most beneficial for long-running variational algorithms requiring multiple iterative calls to the QPUs with classical processing in between.

**Debug using local mode**

Before you run a hybrid job on a QPU, it's recommended to first run it on the SV1 simulator to confirm that it runs as expected. For small-scale tests, you can run with local mode for rapid iteration and debugging.

**Improve reproducibility with [Bring your own container (BYOC)](braket-jobs-byoc.md)**

Create a reproducible experiment by encapsulating your software and its dependencies within a containerized environment. By packaging all your code, dependencies, and settings in a container, you prevent potential conflicts and versioning issues. 

**Multi-instance distributed simulators**

To run a large number of circuits, consider using built-in MPI support to run local simulators on multiple instances within a single hybrid job. For more information, see [embedded simulators](pennylane-embedded-simulators.md).

**Use parametric circuits**

Parametric circuits that you submit from a hybrid job are automatically compiled on certain QPUs using [parametric compilation](braket-jobs-parametric-compilation.md) to improve the runtimes of your algorithms. 

**Checkpoint periodically**

For long-running hybrid jobs, it's recommended to periodically save the intermediate state of the algorithm.

**For further examples, use cases, and best-practices, see [Amazon Braket examples GitHub](https://github.com/amazon-braket/amazon-braket-examples).**

# Using the API with Hybrid Jobs


You can access and interact with Amazon Braket Hybrid Jobs directly using the API. However, defaults and convenience methods are not available when using the API directly.

**Note**  
We strongly recommend that you interact with Amazon Braket Hybrid Jobs using the [Amazon Braket Python SDK](https://github.com/aws/amazon-braket-sdk-python). It offers convenient defaults and protections that help your hybrid jobs run successfully.

This topic covers the basics of using the API. If you choose to use the API, keep in mind that this approach can be more complex, and be prepared for several iterations to get your hybrid job to run.

To use the API, your account should have a role with the `AmazonBraketFullAccess` managed policy.

**Note**  
For more information on how to obtain a role with the `AmazonBraketFullAccess` managed policy, see the [Enable Amazon Braket ](braket-enable-overview.md) page.

Additionally, you need an **execution role**. This role is passed to the service. You can create the role using the **Amazon Braket console**. Use the **Execution roles** tab on the **Permissions and settings** page to create a default role for hybrid jobs.

The `CreateJob` API requires that you specify all the required parameters for the hybrid job. To use Python, compress your algorithm script files into a tar bundle, such as an input.tar.gz file, and run the following script. Update the parts of the code within angled brackets (`<>`) to match your account information and the entry point that specifies the path, file, and method where your hybrid job starts.

```
from braket.aws import AwsDevice, AwsSession
import boto3
from datetime import datetime

s3_client = boto3.client("s3")
client = boto3.client("braket")

project_name = "job-test"
job_name = project_name + "-" + datetime.strftime(datetime.now(), "%Y%m%d%H%M%S")
bucket = "amazon-braket-<your_bucket>"
s3_prefix = job_name

job_script = "input.tar.gz"
job_object = f"{s3_prefix}/script/{job_script}"
s3_client.upload_file(job_script, bucket, job_object)

input_data = "inputdata.csv"
input_object = f"{s3_prefix}/input/{input_data}"
s3_client.upload_file(input_data, bucket, input_object)

job = client.create_job(
    jobName=job_name,
    roleArn="arn:aws:iam::<your_account>:role/service-role/AmazonBraketJobsExecutionRole",  # https://docs.aws.amazon.com/braket/latest/developerguide/braket-manage-access.html#about-amazonbraketjobsexecution
    algorithmSpecification={
        "scriptModeConfig": {
            "entryPoint": "<your_execution_module>:<your_execution_method>",
            "containerImage": {"uri": "292282985366.dkr.ecr.us-west-1.amazonaws.com/amazon-braket-base-jobs:1.0-cpu-py37-ubuntu18.04"},   # Change to the specific region you are using
            "s3Uri": f"s3://{bucket}/{job_object}",
            "compressionType": "GZIP"
        }
    },
    inputDataConfig=[
        {
            "channelName": "hellothere",
            "compressionType": "NONE",
            "dataSource": {
                "s3DataSource": {
                    "s3Uri": f"s3://{bucket}/{s3_prefix}/input",
                    "s3DataType": "S3_PREFIX"
                }
            }
        }
    ],
    outputDataConfig={
        "s3Path": f"s3://{bucket}/{s3_prefix}/output"
    },
    instanceConfig={
        "instanceType": "ml.m5.large",
        "instanceCount": 1,
        "volumeSizeInGb": 1
    },
    checkpointConfig={
        "s3Uri":  f"s3://{bucket}/{s3_prefix}/checkpoints",
        "localPath": "/opt/omega/checkpoints"
    },
    deviceConfig={
        "priorityAccess": {
            "devices": [
                "arn:aws:braket:us-west-1::device/qpu/rigetti/Ankaa-3"
            ]
        }
    },
    hyperParameters={
        "hyperparameter key you wish to pass": "<hyperparameter value you wish to pass>",
    },
    stoppingCondition={
        "maxRuntimeInSeconds": 1200,
        "maximumTaskLimit": 10
    },
)
```

Once you create your hybrid job, you can access the hybrid job details through the `GetJob` API or the console. To get the hybrid job details from the Python session in which you ran the `create_job` code from the previous example, use the following Python command.

```
getJob = client.get_job(jobArn=job["jobArn"])
```

To cancel a hybrid job, call the `CancelJob` API with the Amazon Resource Name of the hybrid job (`jobArn`).

```
cancelJob = client.cancel_job(jobArn=job["jobArn"])
```

You can specify checkpoints as part of the `CreateJob` API using the `checkpointConfig` parameter.

```
    checkpointConfig = {
        "localPath" : "/opt/omega/checkpoints",
        "s3Uri": f"s3://{bucket}/{s3_prefix}/checkpoints"
    },
```

**Note**  
The `localPath` of `checkpointConfig` cannot start with any of the following reserved paths: `/opt/ml`, `/opt/braket`, `/tmp`, or `/usr/local/nvidia`.
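A small sketch of validating a candidate `localPath` against those reserved prefixes before calling `create_job`; the helper function name is ours, chosen for illustration.

```python
RESERVED_PREFIXES = ("/opt/ml", "/opt/braket", "/tmp", "/usr/local/nvidia")


def is_valid_checkpoint_path(local_path):
    """Return True if local_path avoids the reserved path prefixes."""
    return not any(local_path.startswith(prefix) for prefix in RESERVED_PREFIXES)


print(is_valid_checkpoint_path("/opt/omega/checkpoints"))  # → True
print(is_valid_checkpoint_path("/opt/ml/checkpoints"))     # → False
```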

# Create and debug a hybrid job with local mode


When you are building a new hybrid algorithm, local mode helps you to debug and test your algorithm script. Local mode is a feature that allows you to run code you plan to use in Amazon Braket Hybrid Jobs, but without needing Braket to manage the infrastructure for running the hybrid job. Instead, you run hybrid jobs locally on your Amazon Braket notebook instance or on a preferred client, such as a laptop or desktop computer.

In local mode, you can still send quantum tasks to actual devices, but you do not get the performance benefits of running against an actual quantum processing unit (QPU) while in local mode.

To use local mode, modify `AwsQuantumJob` to `LocalQuantumJob` wherever it occurs inside of your program. For instance, to run the example from [Create your first hybrid job](braket-jobs-first.md), edit the hybrid job script in the code as follows.

```
from braket.jobs.local import LocalQuantumJob

job = LocalQuantumJob.create(
    device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    source_module="algorithm_script.py",
    entry_point="algorithm_script:start_here",
)
```

**Note**  
Docker is already pre-installed in Amazon Braket notebooks. To use this feature in your own local environment, you need to install Docker. Instructions for installing Docker can be found on the [Get Docker](https://docs.docker.com/get-started/get-docker/) page. In addition, not all parameters are supported in local mode.