

# Customizing your Hybrid Job
<a name="braket-jobs-customize"></a>

Amazon Braket provides several ways to customize how your hybrid jobs run, allowing you to tailor the environment to your specific needs. This section explores options for customizing hybrid jobs, from defining the algorithm script environment to bringing your own container. You'll learn how to optimize your workflow using hyperparameters, configure job instances, and leverage parametric compilation for improved performance. These customization techniques help you maximize the potential of your hybrid quantum computations on Amazon Braket.

**Topics**
+ [Define the environment for your algorithm script](braket-jobs-script-environment.md)
+ [Using hyperparameters](braket-jobs-hyperparameters.md)
+ [Configure your hybrid job instance](braket-jobs-configure-job-instance-for-script.md)
+ [Using parametric compilation to speed up Hybrid Jobs](braket-jobs-parametric-compilation.md)

# Define the environment for your algorithm script
<a name="braket-jobs-script-environment"></a>

Amazon Braket supports environments defined by containers for your algorithm script:
+ A base container (the default, if no `image_uri` is specified)
+ A container with CUDA-Q
+ A container with TensorFlow and PennyLane
+ A container with PyTorch, PennyLane, and CUDA-Q

The following table provides details about the containers and the libraries they include.


**Amazon Braket containers**  

| Type | Base | CUDA-Q | TensorFlow | PyTorch | 
| --- | --- | --- | --- | --- | 
|   **Image URI**   |  292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-base-jobs:latest  |  292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-cudaq-jobs:latest  |  292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-tensorflow-jobs:latest  |  292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-pytorch-jobs:latest  | 
|   **Inherited Libraries**   |  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-script-environment.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-script-environment.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-script-environment.html)  | 
|   **Additional Libraries**   |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-script-environment.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-script-environment.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-script-environment.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-script-environment.html)  | 

You can view and access the open source container definitions at [aws/amazon-braket-containers](https://github.com/aws/amazon-braket-containers). Choose the container that best matches your use case. The container must be in the same Region as your hybrid job; you can use any of the AWS Regions available in Braket (us-east-1, us-west-1, us-west-2, eu-north-1, eu-west-2). Specify the container image when you create a hybrid job by adding one of the following arguments to the `create()` call in your hybrid job script. Because the Amazon Braket containers have internet connectivity, you can also install additional dependencies into your chosen container at runtime (at the cost of increased startup or run time). The following examples are for the us-west-2 Region.
+  **Base image:** image_uri="292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-base-jobs:latest"
+  **CUDA-Q image:** image_uri="292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-cudaq-jobs:latest"
+  **TensorFlow image:** image_uri="292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-tensorflow-jobs:latest"
+  **PyTorch image:** image_uri="292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-pytorch-jobs:latest"

The image URIs can also be retrieved using the `retrieve_image()` function in the Amazon Braket SDK. The following example shows how to retrieve them from the us-west-2 AWS Region.

```
from braket.jobs.image_uris import retrieve_image, Framework

image_uri_base = retrieve_image(Framework.BASE, "us-west-2")
image_uri_cudaq = retrieve_image(Framework.CUDAQ, "us-west-2")
image_uri_tf = retrieve_image(Framework.PL_TENSORFLOW, "us-west-2")
image_uri_pytorch = retrieve_image(Framework.PL_PYTORCH, "us-west-2")
```
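The URIs returned by `retrieve_image()` follow a fixed naming pattern. As a quick illustration (a sketch of the documented pattern, not the SDK's own implementation — the account ID `292282985366` and repository names come from the table earlier in this section), the pattern can be reproduced with a small helper:

```python
# Sketch of the documented Braket image URI pattern; not the SDK's own logic.
BRAKET_ECR_ACCOUNT = "292282985366"
REPOSITORIES = {
    "base": "amazon-braket-base-jobs",
    "cudaq": "amazon-braket-cudaq-jobs",
    "tensorflow": "amazon-braket-tensorflow-jobs",
    "pytorch": "amazon-braket-pytorch-jobs",
}

def braket_image_uri(framework: str, region: str, tag: str = "latest") -> str:
    """Build a Braket container image URI for the given framework and Region."""
    repo = REPOSITORIES[framework]
    return f"{BRAKET_ECR_ACCOUNT}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

print(braket_image_uri("base", "us-west-2"))
# → 292282985366.dkr.ecr.us-west-2.amazonaws.com/amazon-braket-base-jobs:latest
```

In practice, prefer `retrieve_image()` so that your code keeps working if the naming pattern ever changes.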

# Bring your own container (BYOC)
<a name="braket-jobs-byoc"></a>

Amazon Braket Hybrid Jobs provides three pre-built containers for running code in different environments. If one of these containers supports your use case, you only have to provide your algorithm script when you create a hybrid job. Minor missing dependencies can be added from your algorithm script or from a `requirements.txt` file using `pip`.

If none of these containers supports your use case, or if you want to extend them, Braket Hybrid Jobs supports running hybrid jobs with your own custom Docker container image, known as bring your own container (BYOC). Before you commit to BYOC, make sure it is the right feature for your use case.

**Topics**
+ [When is bringing my own container the right decision?](#bring-own-container-decision)
+ [Recipe for bringing your own container](bring-own-container-recipe.md)
+ [Running Braket hybrid jobs in your own container](running-hybrid-jobs-in-own-container.md)

## When is bringing my own container the right decision?
<a name="bring-own-container-decision"></a>

Bringing your own container (BYOC) to Braket Hybrid Jobs offers the flexibility to use your own software by installing it in a packaged environment. Depending on your specific needs, there may be ways to achieve the same flexibility without having to go through the full BYOC Docker build - Amazon ECR upload - custom image URI cycle.

**Note**  
BYOC may not be the right choice if you only want to add a small number of publicly available Python packages (generally fewer than 10), for example, packages from PyPI.

In this case, you can use one of the pre-built Braket images and include a `requirements.txt` file in your source directory at job submission. The file is read automatically, and `pip` installs the packages at the specified versions as usual. If you install a large number of packages, the runtime of your hybrid jobs may increase substantially. Check the Python version and, if applicable, the CUDA version of the pre-built container you want to use to verify that your software will work.
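For example, a `requirements.txt` placed in your source directory might look like the following (the package names and version pins here are illustrative, not a recommendation):

```
pennylane==0.35.1
networkx>=3.0
```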

BYOC is necessary when you want to use a non-Python language (like C++ or Rust) for your job script, or if you want to use a Python version not available through the Braket pre-built containers. It is also a good choice if:
+ You're using software with a license key, and you need to authenticate that key against a licensing server to run the software. With BYOC, you can embed the license key in your Docker image and include code to authenticate it.
+ You are using software that is not publicly available. For example, the software is hosted on a private GitLab or GitHub repository that you need a particular SSH key to access.
+ You need to install a large suite of software that is not packaged in the Braket-provided containers. BYOC eliminates long hybrid job container startup times caused by software installation.

BYOC also enables you to make your custom SDK or algorithm available to customers by building a Docker container with your software and making it available to your users. You can do this by setting appropriate permissions in Amazon ECR.

**Note**  
You must comply with all applicable software licenses.

# Recipe for bringing your own container
<a name="bring-own-container-recipe"></a>

In this section, we provide a step-by-step guide to bringing your own container (BYOC) to Braket Hybrid Jobs — the scripts, files, and steps to combine them so you can get up and running with your custom Docker images. We provide recipes for two common cases:

1. Install additional software in a Docker image and use only Python algorithm scripts in your jobs.

1. Use algorithm scripts written in a non-Python language with Hybrid Jobs, or a CPU architecture besides x86.

Defining the *container entry script* is more complex for case 2.

When Braket runs your Hybrid Job, it launches the requested number and type of Amazon EC2 instances, then runs on them the Docker image specified by the image URI you passed at job creation. When using the BYOC feature, you specify an image URI hosted in a [private Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html) that you have read access to. Braket Hybrid Jobs uses that custom image to run the job.

The following topics describe the specific components you need to build a Docker image that can be used with Hybrid Jobs. If you are unfamiliar with writing and building Dockerfiles, refer to the [Dockerfile documentation](https://docs.docker.com/reference/dockerfile/) and the [Amazon ECR CLI documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html).

**Topics**
+ [A base image for your Dockerfile](#base-image-dockerfile)
+ [(Optional) A modified container entry point script](#modified-container-entry-point)
+ [Install needed software and container script with `Dockerfile`](#install-docketfile)

## A base image for your Dockerfile
<a name="base-image-dockerfile"></a>

If you are using Python and want to install software on top of what is provided in the Braket-provided containers, one option for a base image is one of the Braket container images, hosted in our [GitHub repo](https://github.com/amazon-braket/amazon-braket-containers) and on Amazon ECR. You need to [authenticate to Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html#cli-authenticate-registry) to pull the image and build on top of it. For example, the first line of your BYOC Dockerfile could be: `FROM [IMAGE_URI_HERE]`

Next, fill out the rest of the Dockerfile to install and set up the software that you want to add to the container. The pre-built Braket images will already contain the appropriate container entry point script, so you do not need to worry about including that.

If you want to use a non-Python language, such as C++, Rust, or Julia, or if you want to build an image for a non-x86 CPU architecture, such as ARM, you may need to build on top of a barebones public image. You can find many such images in the [Amazon Elastic Container Registry Public Gallery](https://gallery.ecr.aws/). Make sure you choose one that is appropriate for the CPU architecture and, if necessary, the GPU you want to use.

## (Optional) A modified container entry point script
<a name="modified-container-entry-point"></a>

**Note**  
If you're only adding additional software to a pre-built Braket image, you can skip this section.

To run non-Python code as part of your hybrid job, modify the Python script that defines the container entry point — for example, the [`braket_container.py` script on the Amazon Braket GitHub](https://github.com/amazon-braket/amazon-braket-containers/blob/main/src/braket_container.py). This is the script the pre-built Braket images use to launch your algorithm script and set the appropriate environment variables. The container entry point script itself **must** be in Python, but it can launch non-Python scripts. In the pre-built example, Python algorithm scripts are launched either as a [Python subprocess](https://github.com/amazon-braket/amazon-braket-containers/blob/main/src/braket_container.py#L274) or as a [fully new process](https://github.com/amazon-braket/amazon-braket-containers/blob/main/src/braket_container.py#L257). By modifying this logic, you can enable the entry point script to launch non-Python algorithm scripts. For example, you could modify the [function that launches the customer script](https://github.com/amazon-braket/amazon-braket-containers/blob/main/src/braket_container.py#L139) to launch Rust processes depending on the file extension.
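As a sketch of that idea (the function and dispatch table here are hypothetical; the real entry point script is considerably more involved), a dispatcher might choose the launch command based on the algorithm script's file extension:

```python
import os
import subprocess

# Hypothetical dispatch table: file extension -> command prefix used to launch it.
LAUNCHERS = {
    ".py": ["python3"],
    ".jl": ["julia"],
    ".sh": ["bash"],
}

def build_launch_command(script_path: str) -> list:
    """Return the command a modified entry point could run for this algorithm script."""
    _, ext = os.path.splitext(script_path)
    try:
        return LAUNCHERS[ext] + [script_path]
    except KeyError:
        raise ValueError(f"No launcher configured for '{ext}' scripts")

def launch(script_path: str) -> int:
    """Launch the algorithm script as a new process and return its exit code."""
    return subprocess.run(build_launch_command(script_path)).returncode

print(build_launch_command("algorithm_script.jl"))
# → ['julia', 'algorithm_script.jl']
```

A Rust binary could be handled the same way by mapping its extension (or an extensionless executable) to a direct invocation.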

You can also choose to write a completely new `braket_container.py`. It should copy input data, source archives, and other necessary files from Amazon S3 into the container, and define the appropriate environment variables.

## Install needed software and container script with `Dockerfile`
<a name="install-docketfile"></a>

**Note**  
If you use a pre-built Braket image as your Docker base image, the container script is already present.

If you created a modified container script in the previous step, you need to copy it into the container **and** set the environment variable `SAGEMAKER_PROGRAM` to `braket_container.py`, or to whatever you have named your new container entry point script.

The following is an example of a `Dockerfile` that allows you to use Julia on GPU-accelerated Jobs instances:

```
FROM nvidia/cuda:12.2.0-devel-ubuntu22.04

ARG DEBIAN_FRONTEND=noninteractive
ARG JULIA_RELEASE=1.8
ARG JULIA_VERSION=1.8.3

ARG PYTHON=python3.11
ARG PYTHON_PIP=python3-pip
ARG PIP=pip

ARG JULIA_URL=https://julialang-s3.julialang.org/bin/linux/x64/${JULIA_RELEASE}
ARG TAR_NAME=julia-${JULIA_VERSION}-linux-x86_64.tar.gz

# List your Python packages and versions here
ARG PYTHON_PKGS=

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    build-essential \
    tzdata \
    openssh-client \
    openssh-server \
    ca-certificates \
    curl \
    git \
    libtemplate-perl \
    libssl3 \
    openssl \
    unzip \
    wget \
    zlib1g-dev \
    ${PYTHON_PIP} \
    ${PYTHON}-dev \
    && rm -rf /var/lib/apt/lists/*

RUN curl -s -L ${JULIA_URL}/${TAR_NAME} | tar -C /usr/local -x -z --strip-components=1 -f -

RUN ${PIP} install --no-cache --upgrade ${PYTHON_PKGS}

RUN ${PIP} install --no-cache --upgrade sagemaker-training==4.1.3

# Add EFA and SMDDP to the LD library path
ENV LD_LIBRARY_PATH="/opt/conda/lib/python${PYTHON_SHORT_VERSION}/site-packages/smdistributed/dataparallel/lib:$LD_LIBRARY_PATH"
ENV LD_LIBRARY_PATH=/opt/amazon/efa/lib/:$LD_LIBRARY_PATH

# Julia-specific installation instructions
COPY Project.toml /usr/local/share/julia/environments/v${JULIA_RELEASE}/
RUN JULIA_DEPOT_PATH=/usr/local/share/julia \
    julia -e 'using Pkg; Pkg.instantiate(); Pkg.API.precompile()'
# Generate the device runtime library for all known and supported devices
RUN JULIA_DEPOT_PATH=/usr/local/share/julia \
    julia -e 'using CUDA; CUDA.precompile_runtime()'

# Open source compliance scripts
RUN HOME_DIR=/root \
    && curl -o ${HOME_DIR}/oss_compliance.zip https://aws-dlinfra-utilities.s3.amazonaws.com/oss_compliance.zip \
    && unzip ${HOME_DIR}/oss_compliance.zip -d ${HOME_DIR}/ \
    && cp ${HOME_DIR}/oss_compliance/test/testOSSCompliance /usr/local/bin/testOSSCompliance \
    && chmod +x /usr/local/bin/testOSSCompliance \
    && chmod +x ${HOME_DIR}/oss_compliance/generate_oss_compliance.sh \
    && ${HOME_DIR}/oss_compliance/generate_oss_compliance.sh ${HOME_DIR} ${PYTHON} \
    && rm -rf ${HOME_DIR}/oss_compliance*

# Copy the container entry point script
COPY braket_container.py /opt/ml/code/braket_container.py
ENV SAGEMAKER_PROGRAM braket_container.py
```

This example downloads and runs scripts provided by AWS to help ensure compliance with all relevant open source licenses — for example, by properly attributing any installed code governed by an MIT license.

If you need to include non-public code, for instance code that is hosted in a private GitHub or GitLab repository, **do not** embed SSH keys in the Docker image to access it. Instead, use Docker Compose when you build to allow Docker to access SSH on the host machine it is built on. For more information, see the [Securely using SSH keys in Docker to access private Github repositories](https://www.fastruby.io/blog/docker/docker-ssh-keys.html) guide.

**Building and uploading your Docker image**

With a properly defined `Dockerfile`, you are ready to [create a private Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html), if one does not already exist, and then build, tag, and push your container image to it. See the [Docker build documentation](https://docs.docker.com/reference/cli/docker/buildx/build/) for a full explanation of the options to `docker build` and some examples.

For the sample file defined above, you could run:

```
aws ecr get-login-password --region ${your_region} | docker login --username AWS --password-stdin ${aws_account_id}.dkr.ecr.${your_region}.amazonaws.com
docker build -t braket-julia .
docker tag braket-julia:latest ${aws_account_id}.dkr.ecr.${your_region}.amazonaws.com/braket-julia:latest
docker push ${aws_account_id}.dkr.ecr.${your_region}.amazonaws.com/braket-julia:latest
```

**Assigning appropriate Amazon ECR permissions**

Braket Hybrid Jobs Docker images must be hosted in private Amazon ECR repositories. By default, a private Amazon ECR repository does **not** provide read access to the Braket Hybrid Jobs IAM role or to any other users who want to use your image, such as a collaborator or student. You must [set a repository policy](https://docs.aws.amazon.com/AmazonECR/latest/userguide/set-repository-policy.html) to grant the appropriate permissions. In general, grant permission only to the specific users and IAM roles that you want to access your images, rather than allowing anyone with the image URI to pull them.
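For example, a repository policy granting pull access to a single collaborator's IAM role might look like the following (the account ID and role name are placeholders you would replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCollaboratorPull",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/CollaboratorJobsRole"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}
```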

# Running Braket hybrid jobs in your own container
<a name="running-hybrid-jobs-in-own-container"></a>

To create a hybrid job with your own container, call `AwsQuantumJob.create()` with the argument `image_uri` specified. You can use a QPU, an on-demand simulator, or run your code locally on the classical processor available with Braket Hybrid Jobs. We recommend testing your code out on a simulator like SV1, DM1, or TN1 before running on a real QPU.

To run your code on the classical processor, specify the `instanceType` and the `instanceCount` you use by updating the `InstanceConfig`. Note that if you specify an `instanceCount` greater than 1, you must make sure that your code can run across multiple hosts. The upper limit for the number of instances is 5. For example:

```
job = AwsQuantumJob.create(
    source_module="source_dir",
    entry_point="source_dir.algorithm_script:start_here",
    image_uri="111122223333.dkr.ecr.us-west-2.amazonaws.com/my-byoc-container:latest",
    instance_config=InstanceConfig(instanceType="ml.g4dn.xlarge", instanceCount=3),
    device="local:braket/braket.local.qubit",
    # ...)
```

**Note**  
Use the device ARN to track the simulator you used as hybrid job metadata. Acceptable values must follow the format `device = "local:<provider>/<simulator_name>"`. Remember that `<provider>` and `<simulator_name>` must consist only of letters, numbers, `_`, `-`, and `.`. The string is limited to 256 characters.  
If you plan to use BYOC and you're not using the Braket SDK to create quantum tasks, you should pass the value of the environmental variable `AMZN_BRAKET_JOB_TOKEN` to the `jobToken` parameter in the `CreateQuantumTask` request. If you don't, the quantum tasks don't get priority and are billed as regular standalone quantum tasks.
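As a sketch of that step (the helper below is illustrative; `jobToken` and the `AMZN_BRAKET_JOB_TOKEN` environment variable are the names mentioned above), you could read the token inside the container and forward it when building your `CreateQuantumTask` request parameters:

```python
import os

def quantum_task_kwargs(device_arn: str, shots: int) -> dict:
    """Build CreateQuantumTask arguments, forwarding the hybrid job token so the
    quantum task is prioritized and billed as part of the hybrid job."""
    kwargs = {"deviceArn": device_arn, "shots": shots}
    # Inside a hybrid job container, Braket sets AMZN_BRAKET_JOB_TOKEN.
    job_token = os.environ.get("AMZN_BRAKET_JOB_TOKEN")
    if job_token:
        kwargs["jobToken"] = job_token
    return kwargs

# Example: simulate running inside a hybrid job container.
os.environ["AMZN_BRAKET_JOB_TOKEN"] = "example-token"
print(quantum_task_kwargs("arn:aws:braket:::device/quantum-simulator/amazon/sv1", 100))
```

You would then pass these keyword arguments to your `CreateQuantumTask` call along with the other required request parameters.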

# Using hyperparameters
<a name="braket-jobs-hyperparameters"></a>

You can define hyperparameters needed by your algorithm, such as the learning rate or step size, when you create a hybrid job. Hyperparameter values typically control various aspects of the algorithm and can be tuned to optimize its performance. To use hyperparameters in a Braket hybrid job, specify their names and values explicitly as a dictionary, as shown in the following code.

```
from braket.devices import Devices

device_arn = Devices.Amazon.SV1

hyperparameters = {"shots": 1_000}
```
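Note that hyperparameter values are serialized at job creation, so inside the job they arrive as strings and must be cast back to the types your algorithm expects. The sketch below simulates that round trip locally; it assumes the container exposes the hyperparameters file through the `AMZN_BRAKET_HP_FILE` environment variable (compare with the `load_jobs_hyperparams()` function in the `notebook_runner.py` example referenced later in this section):

```python
import json
import os
import tempfile

def load_hyperparameters() -> dict:
    """Load the hyperparameters JSON file mounted into the job container.
    Assumes AMZN_BRAKET_HP_FILE points at it."""
    with open(os.environ["AMZN_BRAKET_HP_FILE"]) as f:
        return json.load(f)

# Outside a real job container, simulate the file Braket would provide:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"shots": "1000"}, f)  # values arrive as strings
os.environ["AMZN_BRAKET_HP_FILE"] = f.name

hyperparams = load_hyperparameters()
shots = int(hyperparams["shots"])  # cast back to the type your algorithm expects
print(shots)
# → 1000
```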

Then pass the hyperparameters defined in the previous code snippet to the algorithm of your choice. To run the following code example, create a directory named "src" in the same path as your hyperparameter file. Inside the "src" directory, add the [0_Getting_started_papermill.ipynb](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/7_Running_notebooks_as_hybrid_jobs/src/0_Getting_started_papermill.ipynb), [notebook_runner.py](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/7_Running_notebooks_as_hybrid_jobs/src/notebook_runner.py), and [requirements.txt](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/7_Running_notebooks_as_hybrid_jobs/src/requirements.txt) code files.

```
import time
from braket.aws import AwsQuantumJob

job = AwsQuantumJob.create(
    device=device_arn,
    source_module="src",
    entry_point="src.notebook_runner:run_notebook",
    input_data="src/0_Getting_started_papermill.ipynb",
    hyperparameters=hyperparameters,
    job_name=f"papermill-job-demo-{int(time.time())}",
)

# Print job to record the ARN
print(job)
```

To access your hyperparameters from *within* your hybrid job script, see the `load_jobs_hyperparams()` function in the [notebook_runner.py](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/7_Running_notebooks_as_hybrid_jobs/src/notebook_runner.py) Python file. To access your hyperparameters *outside* of your hybrid job script, run the following code.

```
from braket.aws import AwsQuantumJob

# Get the job using the ARN
job_arn = "arn:aws:braket:us-east-1:111122223333:job/5eabb790-d3ff-47cc-98ed-b4025e9e296f"  # Replace with your job ARN
job = AwsQuantumJob(arn=job_arn)

# Access the hyperparameters
job_metadata = job.metadata()
hyperparameters = job_metadata.get("hyperParameters", {})
print(hyperparameters)
```

For more information about using hyperparameters, see the [QAOA with Amazon Braket Hybrid Jobs and PennyLane](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/2_Using_PennyLane_with_Braket_Hybrid_Jobs/Using_PennyLane_with_Braket_Hybrid_Jobs.ipynb) and [Quantum machine learning in Amazon Braket Hybrid Jobs](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/1_Quantum_machine_learning_in_Amazon_Braket_Hybrid_Jobs/Quantum_machine_learning_in_Amazon_Braket_Hybrid_Jobs.ipynb) tutorials.

# Configure your hybrid job instance
<a name="braket-jobs-configure-job-instance-for-script"></a>

Depending on your algorithm, you may have different requirements. By default, Amazon Braket runs your algorithm script on an `ml.m5.large` instance. However, you can customize this instance type when you create a hybrid job using the following import and configuration argument.

```
from braket.jobs.config import InstanceConfig

job = AwsQuantumJob.create(
    ...
    instance_config=InstanceConfig(instanceType="ml.g4dn.xlarge"), # Use an NVIDIA T4 GPU instance.
    ...
)
```

If you are running an embedded simulation and have specified a local device in the device configuration, you can additionally request more than one instance in the `InstanceConfig` by specifying the `instanceCount` and setting it to be greater than one. The upper limit is 5. For instance, you can choose 3 instances as follows.

```
from braket.jobs.config import InstanceConfig
job = AwsQuantumJob.create(
    ...
    instance_config=InstanceConfig(instanceType="ml.g4dn.xlarge", instanceCount=3), # Use 3 NVIDIA T4 instances
    ...
)
```

When you use multiple instances, consider distributing your hybrid job using the data parallel feature. For more details, see the [Parallelize training for QML](https://github.com/amazon-braket/amazon-braket-examples/blob/main/examples/hybrid_jobs/5_Parallelize_training_for_QML/Parallelize_training_for_QML.ipynb) example notebook.

The following three tables list the available instance types and specs for standard, high performance, and GPU accelerated instances.

**Note**  
To view the default classical compute instance quotas for Hybrid Jobs, see the [Amazon Braket Quotas](braket-quotas.md) page.


| Standard Instances | vCPU | Memory (GiB) | 
| --- | --- | --- | 
|  ml.m5.large (default)  |  4  |  16  | 
|  ml.m5.xlarge  |  4  |  16  | 
|  ml.m5.2xlarge  |  8  |  32  | 
|  ml.m5.4xlarge  |  16  |  64  | 
|  ml.m5.12xlarge  |  48  |  192  | 
|  ml.m5.24xlarge  |  96  |  384  | 


| High performance Instances | vCPU | Memory (GiB) | 
| --- | --- | --- | 
|  ml.c5.xlarge  |  4  |  8  | 
|  ml.c5.2xlarge  |  8  |  16  | 
|  ml.c5.4xlarge  |  16  |  32  | 
|  ml.c5.9xlarge  |  36  |  72  | 
|  ml.c5.18xlarge  |  72  |  144  | 
|  ml.c5n.xlarge  |  4  |  10.5  | 
|  ml.c5n.2xlarge  |  8  |  21  | 
|  ml.c5n.4xlarge  |  16  |  32  | 
|  ml.c5n.9xlarge  |  36  |  72  | 
|  ml.c5n.18xlarge  |  72  |  192  | 


| GPU accelerated Instances | GPUs | vCPU | Memory (GiB) | GPU Memory (GiB) | 
| --- | --- | --- | --- | --- | 
|  ml.p4d.24xlarge  |  8  |  96  |  1152  |  320  | 
|  ml.g4dn.xlarge  |  1  |  4  |  16  |  16  | 
|  ml.g4dn.2xlarge  |  1  |  8  |  32  |  16  | 
|  ml.g4dn.4xlarge  |  1  |  16  |  64  |  16  | 
|  ml.g4dn.8xlarge  |  1  |  32  |  128  |  16  | 
|  ml.g4dn.12xlarge  |  4  |  48  |  192  |  64  | 
|  ml.g4dn.16xlarge  |  1  |  64  |  256  |  16  | 

Each instance uses a default data storage (SSD) configuration of 30 GB. You can adjust the storage in the same way that you configure the `instanceType`. The following example shows how to increase the total storage to 50 GB.

```
from braket.jobs.config import InstanceConfig

job = AwsQuantumJob.create(
    ...
    instance_config=InstanceConfig(
        instanceType="ml.g4dn.xlarge",
        volumeSizeInGb=50,
    ),
    ...
)
```

## Configure the default bucket in `AwsSession`
<a name="braket-jobs-configure-default-bucket"></a>

Using your own `AwsSession` instance gives you additional flexibility, such as the ability to specify a custom location for your default Amazon S3 bucket. By default, an `AwsSession` uses an Amazon S3 bucket named `"amazon-braket-{id}-{region}"`. However, you can override the default bucket location when creating an `AwsSession`, and then pass the session into the `AwsQuantumJob.create()` method through the `aws_session` parameter, as demonstrated in the following code example.

```
from braket.aws import AwsQuantumJob, AwsSession

aws_session = AwsSession(default_bucket="amazon-braket-s3-demo-bucket")

# Then use that AwsSession when creating a hybrid job
job = AwsQuantumJob.create(
    ...
    aws_session=aws_session
)
```

# Using parametric compilation to speed up Hybrid Jobs
<a name="braket-jobs-parametric-compilation"></a>

Amazon Braket supports parametric compilation on certain QPUs. This lets you reduce the overhead of the computationally expensive compilation step by compiling a circuit only once, rather than for every iteration of your hybrid algorithm. This can dramatically improve runtimes for Hybrid Jobs, because you avoid recompiling the circuit at each step. Simply submit parametrized circuits to one of the supported QPUs as a Braket Hybrid Job. For long-running hybrid jobs, Braket automatically uses updated calibration data from the hardware provider when compiling your circuit to ensure the highest quality results.

To create a parametric circuit, you first need to provide parameters as inputs in your algorithm script. In this example, we use a small parametric circuit and ignore any classical processing between each iteration. For typical workloads, you would submit many circuits in batch and perform classical processing such as updating the parameters in each iteration.

```
import os

from braket.aws import AwsDevice
from braket.circuits import Circuit, FreeParameter

def start_here():
    print("Test job started.")

    # Use the device declared in the job script
    device = AwsDevice(os.environ["AMZN_BRAKET_DEVICE_ARN"])

    circuit = Circuit().rx(0, FreeParameter("theta"))
    parameter_list = [0.1, 0.2, 0.3]

    for parameter in parameter_list:
        result = device.run(circuit, shots=1000, inputs={"theta": parameter})

    print("Test job completed.")
```

You can submit the algorithm script to run as a Hybrid Job with the following job script. When you run the Hybrid Job on a QPU that supports parametric compilation, the circuit is compiled only on the first run. On subsequent runs, the compiled circuit is reused, improving the runtime performance of the Hybrid Job without any additional lines of code.

```
from braket.aws import AwsQuantumJob

job = AwsQuantumJob.create(
    device=device_arn,
    source_module="algorithm_script.py",
)
```

**Note**  
Parametric compilation is supported on all superconducting, gate-based QPUs from Rigetti Computing, with the exception of pulse-level programs.