

# Build jobs to submit to Deadline Cloud
<a name="building-jobs"></a>

You submit jobs to Deadline Cloud using job bundles. A job bundle is a collection of files, including an [Open Job Description (OpenJD)](https://github.com/OpenJobDescription/openjd-specifications) job template and any asset files needed to render the job.

 The job template describes how workers process and access the assets, and provides the script that the worker runs. Job bundles enable artists, technical directors, and pipeline developers to easily submit complex jobs to Deadline Cloud from their local workstations or on-premises render farm. Job bundles are particularly useful for teams working on large-scale visual effects, animation, or other media rendering projects that require scalable, on-demand computing resources.

You can create a job bundle using your local file system to store files and a text editor to create the job template. After creating the bundle, submit the job to Deadline Cloud using either the Deadline Cloud CLI or a tool such as a Deadline Cloud submitter.

You can store your assets in a file system shared between your workers, or you can use Deadline Cloud job attachments to automate moving assets to S3 buckets where your workers can access them. Job attachments also help move the output from your jobs back to your workstations.

 The following sections provide detailed instructions on creating and submitting job bundles to Deadline Cloud. 

**Topics**
+ [Open Job Description (OpenJD) templates for Deadline Cloud](build-job-bundle.md)
+ [Using files in your jobs](using-files-in-your-jobs.md)
+ [Use job attachments to share files](build-job-attachments.md)
+ [Create resource limits for jobs](build-job-limits.md)
+ [How to submit a job to Deadline Cloud](submit-jobs-how.md)
+ [Schedule jobs in Deadline Cloud](build-jobs-scheduling.md)
+ [Modify a job in Deadline Cloud](build-jobs-modifying.md)

# Open Job Description (OpenJD) templates for Deadline Cloud
<a name="build-job-bundle"></a>

A *job bundle* is one of the tools that you use to define jobs for AWS Deadline Cloud. Job bundles group an [Open Job Description (OpenJD)](https://github.com/OpenJobDescription/openjd-specifications) template with additional information, such as the files and directories that your jobs use with job attachments. You use the Deadline Cloud command-line interface (CLI) to submit a job bundle to a queue to run.

A job bundle is a directory structure that contains an OpenJD job template, other files that define the job, and job-specific files required as input for your job. You can specify the files that define your job as either YAML or JSON files.

The only required file is either `template.yaml` or `template.json`. You can also include the following files:

```
/template.yaml (or template.json)
/asset_references.yaml (or asset_references.json)
/parameter_values.yaml (or parameter_values.json)
/other job-specific files and directories
```
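
As a minimal sketch of this layout rule, a tool could locate a bundle's template as follows. The helper function is hypothetical, used only to illustrate that `template.yaml` or `template.json` is the one required file:

```python
from pathlib import Path

# Hypothetical helper: a job bundle must contain template.yaml or
# template.json; the other files are optional.
def find_job_template(bundle_dir):
    for name in ("template.yaml", "template.json"):
        candidate = Path(bundle_dir) / name
        if candidate.is_file():
            return candidate
    raise FileNotFoundError(
        "A job bundle must contain template.yaml or template.json"
    )
```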

Use a job bundle for custom job submissions with the Deadline Cloud CLI and job attachments, or use a graphical submission interface. For example, the following is the Blender sample from GitHub. To run the sample, use the following command in [the Blender sample directory](https://github.com/aws-deadline/deadline-cloud-samples/tree/mainline/job_bundles):

```
deadline bundle gui-submit blender_render
```

![\[An example of a custom job submission interface for Blender.\]](http://docs.aws.amazon.com/deadline-cloud/latest/developerguide/images/blender_submit_shared_settings.png)


The job-specific settings panel is generated from the `userInterface` properties of the job parameters defined in the job template.

To submit a job using the command line, you can use a command similar to the following:

```
deadline bundle submit \
    --yes \
    --name Demo \
    -p BlenderSceneFile=<location of scene file> \
    -p OutputDir=<file path for job output> \
    blender_render/
```

Or you can use the `deadline.client.api.create_job_from_job_bundle` function in the `deadline` Python package.

All of the job submitter plugins provided with Deadline Cloud, such as the Autodesk Maya plugin, generate a job bundle for your submission and then use the Deadline Cloud Python package to submit your job to Deadline Cloud. Whether you submit with the CLI or with a submitter, you can see the submitted job bundles in the job history directory on your workstation. You can find your job history directory with the following command:

```
deadline config get settings.job_history_dir
```

When your job is running on a Deadline Cloud worker, it has access to environment variables that provide it with information about the job. The environment variables are:


| Variable name | Available | 
| --- | --- | 
| DEADLINE\_FARM\_ID | All actions | 
| DEADLINE\_FLEET\_ID | All actions | 
| DEADLINE\_WORKER\_ID | All actions | 
| DEADLINE\_QUEUE\_ID | All actions | 
| DEADLINE\_JOB\_ID | All actions | 
| DEADLINE\_STEP\_ID | Task actions | 
| DEADLINE\_SESSION\_ID | All actions | 
| DEADLINE\_TASK\_ID | Task actions | 
| DEADLINE\_SESSIONACTION\_ID | All actions | 
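
For example, a script running as part of a job can read this context from its environment. The following is an illustrative sketch; the worker agent sets the actual values at run time:

```python
import os

# Illustrative helper: read the Deadline Cloud identifiers that the
# worker agent exposes to running actions (names from the table above).
def deadline_context():
    keys = [
        "DEADLINE_FARM_ID",
        "DEADLINE_QUEUE_ID",
        "DEADLINE_JOB_ID",
        "DEADLINE_STEP_ID",
        "DEADLINE_TASK_ID",
    ]
    return {key: os.environ.get(key, "<not set>") for key in keys}
```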

**Topics**
+ [Job template elements for job bundles](build-job-bundle-template.md)
+ [Task chunking for job templates](build-job-bundle-chunking.md)
+ [Parameter values elements for job bundles](build-job-bundle-parameters.md)
+ [Asset references elements for job bundles](build-job-bundle-assets.md)

# Job template elements for job bundles
<a name="build-job-bundle-template"></a>

The job template defines the runtime environment and the processes that run as part of a Deadline Cloud job. You can create parameters in a template so that it can be used to create jobs that differ only in input values, much like a function in a programming language.

When you submit a job to Deadline Cloud, it runs in any queue environments applied to the queue. Queue environments are built using the Open Job Description (OpenJD) external environments specification. For details, see the [Environment template](https://github.com/OpenJobDescription/openjd-specifications/wiki/2023-09-Template-Schemas#12-environment-template) in the OpenJD GitHub repository.

For an introduction to creating a job with an OpenJD job template, see [Introduction to creating a job](https://github.com/OpenJobDescription/openjd-specifications/wiki/Introduction-to-Creating-a-Job) in the OpenJD GitHub repository. Additional information can be found in [How jobs are run](https://github.com/OpenJobDescription/openjd-specifications/wiki/How-Jobs-Are-Run). There are job template samples in the OpenJD GitHub repository's `samples` directory.

You can define the job template in either YAML format (`template.yaml`) or JSON format (`template.json`). The examples in this section are shown in YAML format.

For example, the job template for the `blender_render` sample defines an input parameter `BlenderSceneFile` as a file path:

```
- name: BlenderSceneFile
  type: PATH
  objectType: FILE
  dataFlow: IN
  userInterface:
    control: CHOOSE_INPUT_FILE
    label: Blender Scene File
    groupLabel: Render Parameters
    fileFilters:
    - label: Blender Scene Files
      patterns: ["*.blend"]
    - label: All Files
      patterns: ["*"]
  description: >
    Choose the Blender scene file to render. Use the 'Job Attachments' tab
    to add textures and other files that the job needs.
```

The `userInterface` property defines the behavior of automatically generated user interfaces for both the command line using the `deadline bundle gui-submit` command and within the job submission plugins for applications like Autodesk Maya.

In this example, the UI widget for inputting a value for the `BlenderSceneFile` parameter is a file-selection dialog that shows only `.blend` files.

![\[A user-interface widget for entering the scene file parameter for an OpenJD job template.\]](http://docs.aws.amazon.com/deadline-cloud/latest/developerguide/images/blender_submit_scene_file_widget.png)


For more examples of using the `userInterface` element, see the [gui\_control\_showcase](https://github.com/aws-deadline/deadline-cloud-samples/tree/mainline/job_bundles/gui_control_showcase) sample in the [deadline-cloud-samples](https://github.com/aws-deadline/deadline-cloud-samples/tree/mainline) repository on GitHub.

The `objectType` and `dataFlow` properties control the behavior of job attachments when you submit a job from a job bundle. In this case, `objectType: FILE` and `dataFlow: IN` mean that the value of `BlenderSceneFile` is an input file for job attachments.

In contrast, the definition of the `OutputDir` parameter has `objectType: DIRECTORY` and `dataFlow: OUT`:

```
- name: OutputDir
  type: PATH
  objectType: DIRECTORY
  dataFlow: OUT
  userInterface:
    control: CHOOSE_DIRECTORY
    label: Output Directory
    groupLabel: Render Parameters
  default: "./output"
  description: Choose the render output directory.
```

The value of the `OutputDir` parameter is used by job attachments as the directory where the job writes output files.

For more information about the `objectType` and `dataFlow` properties, see [JobPathParameterDefinition](https://github.com/OpenJobDescription/openjd-specifications/wiki/2023-09-Template-Schemas#22-jobpathparameterdefinition) in the [Open Job Description specification](https://github.com/OpenJobDescription/openjd-specifications).

The rest of the `blender_render` job template sample defines the job's workflow as a single step, with each frame in the animation rendered as a separate task:

```
steps:
- name: RenderBlender
  parameterSpace:
    taskParameterDefinitions:
    - name: Frame
      type: INT
      range: "{{Param.Frames}}"
  script:
    actions:
      onRun:
        command: bash
        # Note: {{Task.File.Run}} is a variable that expands to the filename on the worker host's
        # disk where the contents of the 'Run' embedded file, below, is written.
        args: ['{{Task.File.Run}}']
    embeddedFiles:
      - name: Run
        type: TEXT
        data: |
          # Configure the task to fail if any individual command fails.
          set -xeuo pipefail

          mkdir -p '{{Param.OutputDir}}'

          blender --background '{{Param.BlenderSceneFile}}' \
                  --render-output '{{Param.OutputDir}}/{{Param.OutputPattern}}' \
                  --render-format {{Param.Format}} \
                  --use-extension 1 \
                  --render-frame {{Task.Param.Frame}}
```

For example, if the value of the `Frames` parameter is `1-10`, the step defines 10 tasks. Each task has a different value for the `Frame` parameter. To run a task:

1. All of the variable references in the `data` property of the embedded file are expanded, for example `--render-frame 1`.

1. The contents of the `data` property are written to a file in the session working directory on disk.

1. The task's `onRun` command resolves to `bash <location of embedded file>` and then runs.
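
The expansion in step 1 can be sketched as a simple substitution over `{{...}}` references. This is an illustration of the behavior, not the actual OpenJD implementation:

```python
import re

# Illustrative sketch of OpenJD-style reference expansion: replace each
# {{Name}} reference with the corresponding value from a mapping.
def expand_references(text, values):
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda match: str(values[match.group(1)]),
        text,
    )
```

For the first task of the example above, expanding `--render-frame {{Task.Param.Frame}}` with `Task.Param.Frame` set to `1` produces `--render-frame 1`.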

For more information about embedded files, sessions, and path-mapped locations, see [How jobs are run](https://github.com/OpenJobDescription/openjd-specifications/wiki/How-Jobs-Are-Run) in the [Open Job Description specification](https://github.com/OpenJobDescription/openjd-specifications).

There are more examples of job templates in the [job\_bundles](https://github.com/aws-deadline/deadline-cloud-samples/tree/mainline/job_bundles) directory of the deadline-cloud-samples repository, as well as in the [template samples](https://github.com/OpenJobDescription/openjd-specifications/tree/mainline/samples) provided with the Open Job Description specification.

# Task chunking for job templates
<a name="build-job-bundle-chunking"></a>

Task chunking lets you group multiple tasks into a single unit of work called a chunk. In a render job, for example, this means Deadline Cloud can dispatch multiple frames together instead of one frame per command invocation. This reduces the overhead of starting applications for each task and shortens total job runtime. For details, see [Running multiple frames at a time](https://github.com/OpenJobDescription/openjd-specifications/wiki/Job-Intro-03-Creating-a-Job-Template#42-running-multiple-frames-at-a-time) in the OpenJD wiki.

OpenJD supports extensions that add optional features to job templates. Task chunking is enabled by adding the `TASK_CHUNKING` extension. To use chunking, add the extension to your job template and use the `CHUNK[INT]` task parameter type. Submit chunked jobs using the same `deadline bundle submit` command. For example, the following job template renders frames in chunks of 10:

```
specificationVersion: 'jobtemplate-2023-09'
extensions:
  - TASK_CHUNKING
name: Blender Render with Contiguous Chunking
parameterDefinitions:
  - name: BlenderSceneFile
    type: PATH
    objectType: FILE
    dataFlow: IN
  - name: Frames
    type: STRING
    default: "1-100"
  - name: OutputDir
    type: PATH
    objectType: DIRECTORY
    dataFlow: OUT
    default: "./output"
steps:
  - name: RenderBlender
    parameterSpace:
      taskParameterDefinitions:
        - name: Frame
          type: CHUNK[INT]
          range: "{{Param.Frames}}"
          chunks:
            defaultTaskCount: 10
            rangeConstraint: CONTIGUOUS
    script:
      actions:
        onRun:
          command: bash
          args: ["{{Task.File.Run}}"]
      embeddedFiles:
        - name: Run
          type: TEXT
          data: |
            set -xeuo pipefail
            
            mkdir -p '{{Param.OutputDir}}'
            
            # Parse the chunk range (e.g., "1-10") into start and end frames
            START_FRAME="$(echo '{{Task.Param.Frame}}' | cut -d- -f1)"
            END_FRAME="$(echo '{{Task.Param.Frame}}' | cut -d- -f2)"
            
            blender --background '{{Param.BlenderSceneFile}}' \
                    --render-output '{{Param.OutputDir}}/output_####' \
                    --render-format PNG \
                    --use-extension 1 \
                    -s "$START_FRAME" \
                    -e "$END_FRAME" \
                    --render-anim
```

In this example, Deadline Cloud divides the 100 frames into chunks like `1-10`, `11-20`, and so on. The `{{Task.Param.Frame}}` variable expands to a range expression like `1-10`. Because `rangeConstraint` is set to `CONTIGUOUS`, the range is always in `start-end` format. The script parses this range and passes the start and end frames to Blender using the `-s` and `-e` options with `--render-anim`.
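
The range parsing that the embedded script performs with `cut` can be sketched in Python as follows. This assumes a `CONTIGUOUS` chunk, so the value is always a single `start-end` range (or a single frame):

```python
# Illustrative parser for a contiguous chunk range such as "1-10".
# A single-frame chunk like "5" yields (5, 5).
def parse_chunk_range(chunk):
    parts = chunk.split("-")
    return int(parts[0]), int(parts[-1])
```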

The `chunks` property supports the following fields:
+ `defaultTaskCount` – (Required) How many tasks to combine into a single chunk. The maximum value is 150.
+ `rangeConstraint` – (Required) If `CONTIGUOUS`, a chunk is always a contiguous range like `1-10`. If `NONCONTIGUOUS`, a chunk can be an arbitrary set like `1,3,7-10`.
+ `targetRuntimeSeconds` – (Optional) The target runtime in seconds for each chunk. Deadline Cloud can dynamically adjust the chunk size to approach this target once some chunks have completed.
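
For example, a `chunks` definition that combines the required fields with a runtime target might look like the following. The values shown are illustrative:

```
chunks:
  defaultTaskCount: 10
  rangeConstraint: CONTIGUOUS
  targetRuntimeSeconds: 600
```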

For more task chunking examples, including basic and Blender examples with both contiguous and non-contiguous chunks, see the [task chunking samples](https://github.com/aws-deadline/deadline-cloud-samples/tree/mainline/job_bundles/task_chunking) in the Deadline Cloud samples repository on GitHub.

**Customer-managed fleet requirements**  
Task chunking requires a compatible worker agent version. If you use customer-managed fleets, ensure your worker agents are updated before submitting jobs with chunking. Service-managed fleets always use a compatible worker agent version.

**Downloading output for chunked jobs**  
When you download output for a single task in a chunked job, Deadline Cloud downloads the output for the entire chunk. For example, if frames 1-10 were processed together, downloading the output for frame 3 includes all frames 1-10. This feature requires `deadline-cloud` version 0.53.3 or later.

# Parameter values elements for job bundles
<a name="build-job-bundle-parameters"></a>

You can use the parameters file to set the values of some of the job parameters in the job template or [CreateJob](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateJob.html) operation request arguments in the job bundle so that you don't need to set values when submitting a job. The UI for job submission enables you to modify these values.

You can define the parameter values file in either YAML format (`parameter_values.yaml`) or JSON format (`parameter_values.json`). The examples in this section are shown in YAML format.

In YAML, the format of the file is:

```
parameterValues:
- name: <string>
  value: <integer>, <float>, or <string>
- name: <string>
  value: <integer>, <float>, or <string>
... repeating as necessary
```

Each element of the `parameterValues` list must be one of the following:
+ A job parameter defined in the job template.
+ A job parameter defined in a queue environment for the queue that you submit the job to.
+ A special parameter passed to the `CreateJob` operation when creating a job.
  + `deadline:priority` – The value must be an integer. It is passed to the `CreateJob` operation as the [priority](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateJob.html#deadlinecloud-CreateJob-request-priority) parameter.
  + `deadline:targetTaskRunStatus` – The value must be a string. It is passed to the `CreateJob` operation as the [targetTaskRunStatus](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateJob.html#deadlinecloud-CreateJob-request-targetTaskRunStatus) parameter.
  + `deadline:maxFailedTasksCount` – The value must be an integer. It is passed to the `CreateJob` operation as the [maxFailedTasksCount](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateJob.html#deadlinecloud-CreateJob-request-maxFailedTasksCount) parameter.
  + `deadline:maxRetriesPerTask` – The value must be an integer. It is passed to the `CreateJob` operation as the [maxRetriesPerTask](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateJob.html#deadlinecloud-CreateJob-request-maxRetriesPerTask) parameter.
  + `deadline:maxWorkerCount` – The value must be an integer. It is passed to the `CreateJob` operation as the [maxWorkerCount](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateJob.html#deadlinecloud-CreateJob-request-maxWorkerCount) parameter.
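
The type rules above can be sketched as a small validation helper. The mapping and function are hypothetical illustrations, not part of the Deadline Cloud package:

```python
# Hypothetical mapping of the special deadline: parameters to the value
# types they require, as listed above.
SPECIAL_PARAMETER_TYPES = {
    "deadline:priority": int,
    "deadline:targetTaskRunStatus": str,
    "deadline:maxFailedTasksCount": int,
    "deadline:maxRetriesPerTask": int,
    "deadline:maxWorkerCount": int,
}

def is_valid_special_value(name, value):
    expected_type = SPECIAL_PARAMETER_TYPES.get(name)
    if expected_type is None:
        return True  # Not a special parameter; validate against the template.
    return isinstance(value, expected_type)
```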

A job template is always a template rather than a specific job to run. A parameter values file enables a job bundle to either act as a template if some parameters don't have values defined in this file, or as a specific job submission if all parameters have values.
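
That distinction can be sketched as a simple check (illustrative, not part of the Deadline Cloud package): a bundle fully specifies a job only when every template parameter either has a default or has a value in the parameter values file.

```python
# Illustrative check: does a bundle fully specify a job?
def is_fully_specified(template_parameters, parameter_values):
    provided = {entry["name"] for entry in parameter_values}
    return all(
        "default" in param or param["name"] in provided
        for param in template_parameters
    )
```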

For example, the [blender\_render sample](https://github.com/aws-deadline/deadline-cloud-samples/tree/mainline/job_bundles/blender_render) doesn't have a parameters file, and its job template defines parameters with no default values, so it must be used as a template to create jobs. After you create a job using this job bundle, Deadline Cloud writes a new job bundle to the job history directory.

For example, when you submit a job with the following command:

```
deadline bundle gui-submit blender_render/
```

The new job bundle contains a `parameter_values.yaml` file with the specified parameter values:

```
% cat ~/.deadline/job_history/\(default\)/2024-06/2024-06-20-01-JobBundle-Demo/parameter_values.yaml
parameterValues:
- name: deadline:targetTaskRunStatus
  value: READY
- name: deadline:maxFailedTasksCount
  value: 10
- name: deadline:maxRetriesPerTask
  value: 5
- name: deadline:priority
  value: 75
- name: BlenderSceneFile
  value: /private/tmp/bundle_demo/bmw27_cpu.blend
- name: Frames
  value: 1-10
- name: OutputDir
  value: /private/tmp/bundle_demo/output
- name: OutputPattern
  value: output_####
- name: Format
  value: PNG
- name: CondaPackages
  value: blender
- name: RezPackages
  value: blender
```

You can create the same job with the following command:

```
deadline bundle submit ~/.deadline/job_history/\(default\)/2024-06/2024-06-20-01-JobBundle-Demo/
```

**Note**  
The job bundle that you submit is saved to your job history directory. You can find the location of that directory with the following command:  

```
deadline config get settings.job_history_dir
```

# Asset references elements for job bundles
<a name="build-job-bundle-assets"></a>

You can use Deadline Cloud [job attachments](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/storage-job-attachments.html) to transfer files back and forth between your workstation and Deadline Cloud. The asset reference file lists input files and directories, as well as output directories for your attachments. If you don't list all of the files and directories in this file, you can select them when you submit a job with the `deadline bundle gui-submit` command.

This file has no effect if you are not using job attachments.

You can define the asset references file in either YAML format (`asset_references.yaml`) or JSON format (`asset_references.json`). The examples in this section are shown in YAML format.

In YAML, the format of the file is:

```
assetReferences:
    inputs:
        # Filenames on the submitting workstation whose file contents are needed as 
        # inputs to run the job.
        filenames:
        - list of file paths
        # Directories on the submitting workstation whose contents are needed as inputs
        # to run the job.
        directories:
        - list of directory paths

    outputs:
        # Directories on the submitting workstation where the job writes output files
        # if running locally.
        directories:
        - list of directory paths

    # Paths referenced by the job, but not necessarily input or output.
    # Use this if your job uses the name of a path in some way, but does not explicitly need
    # the contents of that path.
    referencedPaths:
    - list of directory paths
```

When selecting the input and output files to upload to Amazon S3, Deadline Cloud compares each file path against the paths listed in your storage profiles. Each `SHARED`-type file system location in a storage profile represents a network file share that is mounted on your workstations and worker hosts. Deadline Cloud uploads only files that are not on one of these file shares.
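
The selection rule can be sketched as follows. This is an illustration of the behavior described above, not the actual job attachments implementation, and it assumes POSIX-style paths:

```python
from pathlib import PurePosixPath

# Illustrative sketch: a file is uploaded through job attachments only
# when it is not under any SHARED file system location from the
# applicable storage profile.
def needs_upload(file_path, shared_locations):
    path = PurePosixPath(file_path)
    return not any(
        path.is_relative_to(PurePosixPath(location))
        for location in shared_locations
    )
```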

For more information about creating and using storage profiles, see [Shared storage in Deadline Cloud](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/storage-shared.html) in the *AWS Deadline Cloud User Guide*.

**Example - The asset reference file created by the Deadline Cloud GUI**  
Use the following command to submit a job using the [blender\_render sample](https://github.com/aws-deadline/deadline-cloud-samples/tree/mainline/job_bundles/blender_render).  

```
deadline bundle gui-submit blender_render/
```
Add some additional files to the job on the **Job attachments** tab:  

![\[The job attachments pane of the Deadline Cloud job submission GUI. Add the input file /private/tmp/bundle_demo/a_texture.png and the input directory /private/tmp/bundle_demo/assets.\]](http://docs.aws.amazon.com/deadline-cloud/latest/developerguide/images/blender_submit_add_job_attachments.png)

After you submit the job, you can look at the `asset_references.yaml` file in the job bundle in the job history directory to see the assets in the YAML file:  

```
% cat ~/.deadline/job_history/\(default\)/2024-06/2024-06-20-01-JobBundle-Demo/asset_references.yaml 
assetReferences:
  inputs:
    filenames:
    - /private/tmp/bundle_demo/a_texture.png
    directories:
    - /private/tmp/bundle_demo/assets
  outputs:
    directories: []
  referencedPaths: []
```

# Using files in your jobs
<a name="using-files-in-your-jobs"></a>

 Many of the jobs that you submit to AWS Deadline Cloud have input and output files. Your input files and output directories may be located on a combination of shared file systems and local drives, and jobs need to locate the content in those locations. Deadline Cloud provides two features, [job attachments](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/storage-job-attachments.html) and [storage profiles](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/storage-shared.html), that work together to help your jobs locate the files that they need. 

Job attachments offer several benefits:
+ Move files between hosts using Amazon S3.
+ Transfer files from your workstation to worker hosts and vice versa.
+ Available for jobs in queues where you enable the feature.
+ Primarily used with service-managed fleets, but also compatible with customer-managed fleets.

 Use storage profiles to map the layout of shared file system locations on your workstation and worker hosts. This mapping helps your jobs locate shared files and directories when their locations differ between your workstation and worker hosts, such as in cross-platform setups with Windows-based workstations and Linux-based worker hosts. A storage profile's map of your file system configuration is also used by job attachments to identify the files that it needs to shuttle between hosts through Amazon S3. 

 If you are not using job attachments, and you don't need to remap file and directory locations between workstations and worker hosts, then you don't need to model your file shares with storage profiles. 

**Topics**
+ [Sample project infrastructure](sample-project-infrastructure.md)
+ [Storage profiles and path mapping](storage-profiles-and-path-mapping.md)

# Sample project infrastructure
<a name="sample-project-infrastructure"></a>

To demonstrate using job attachments and storage profiles, set up a test environment with two separate projects. You can use the Deadline Cloud console to create the test resources.

1. If you haven't already, create a test farm. To create a farm, follow the procedure in [Create a farm](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/farms.html). 

1. Create two queues for jobs in each of the two projects. To create queues, follow the procedure in [Create a queue](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/create-queue.html).

   1. Create the first queue, called **Q1**. Use the following configuration, and use the defaults for all other items.
      + For job attachments, choose **Create a new Amazon S3 bucket**.
      + Select **Enable association with customer-managed fleets**.
      + For the run as user, enter **jobuser** for both the POSIX user and group.
      + For the queue service role, create a new role named **AssetDemoFarm-Q1-Role**.
      + Clear the default conda queue environment checkbox.

   1. Create the second queue, called **Q2**. Use the following configuration, and use the defaults for all other items.
      + For job attachments, choose **Create a new Amazon S3 bucket**.
      + Select **Enable association with customer-managed fleets**.
      + For the run as user, enter **jobuser** for both the POSIX user and group.
      + For the queue service role, create a new role named **AssetDemoFarm-Q2-Role**.
      + Clear the default conda queue environment checkbox.

1. Create a single customer-managed fleet that runs the jobs from both queues. To create the fleet, follow the procedure in [Create a customer-managed fleet](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/create-a-cmf.html). Use the following configuration:
   + For **Name**, use **DemoFleet**.
   + For **Fleet type**, choose **Customer managed**.
   + For **Fleet service role**, create a new role named **AssetDemoFarm-Fleet-Role**.
   + Don't associate the fleet with any queues.

The test environment assumes that there are three file systems shared between hosts using network file shares. In this example, the locations have the following names:
+ `FSCommon` - contains input job assets that are common to both projects.
+ `FS1` - contains input and output job assets for project 1.
+ `FS2` - contains input and output job assets for project 2.

The test environment also assumes that there are three workstations, as follows:
+ `WSAll` - A Linux-based workstation used by developers for all projects. The shared file system locations are:
  + `FSCommon`: `/shared/common`
  + `FS1`: `/shared/projects/project1`
  + `FS2`: `/shared/projects/project2`
+ `WS1` - A Windows-based workstation used for project 1. The shared file system locations are:
  + `FSCommon`: `S:\`
  + `FS1`: `Z:\`
  + `FS2`: Not available
+ `WS2` - A macOS-based workstation used for project 2. The shared file system locations are:
  + `FSCommon`: `/Volumes/common`
  + `FS1`: Not available
  + `FS2`: `/Volumes/projects/project2`

Finally, define the shared file system locations for the workers in your fleet. The examples that follow refer to this configuration as `WorkerConfig`. The shared locations are: 
+ `FSCommon`: `/mnt/common`
+ `FS1`: `/mnt/projects/project1`
+ `FS2`: `/mnt/projects/project2`

 You don't need to set up any shared file systems, workstations, or workers that match this configuration. The shared locations don't need to exist for the demonstration. 

# Storage profiles and path mapping
<a name="storage-profiles-and-path-mapping"></a>

Use storage profiles to model the file systems on your workstation and worker hosts. Each storage profile describes the operating system and file system layout of one of your system configurations. This topic describes how to use storage profiles to model the file system configurations of your hosts so Deadline Cloud can generate path mapping rules for your jobs, and how those path mapping rules are generated from your storage profiles.

When you submit a job to Deadline Cloud, you can provide an optional storage profile ID for the job. This storage profile describes the submitting workstation's file system, which is the original file system configuration that the file paths in the job template use.

You can also associate a storage profile with a fleet. The storage profile describes the file system configuration of all worker hosts in the fleet. If you have workers with different file system configurations, assign those workers to a different fleet in your farm.

 Path mapping rules describe how paths should be remapped from how they are specified in the job to the path's actual location on a worker host. Deadline Cloud compares the file system configuration described in a job's storage profile with the storage profile of the fleet that is running the job to derive these path mapping rules. 
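
A sketch of that derivation follows. It is illustrative only (the real rules are generated by the service): for each location name that appears in both the job's storage profile and the fleet's storage profile, the job path maps to the worker path.

```python
# Illustrative derivation of path mapping rules from two storage
# profiles, each given as {location name: path}.
def derive_path_mapping_rules(job_profile, fleet_profile):
    return [
        {"source_path": job_profile[name], "destination_path": fleet_profile[name]}
        for name in job_profile
        if name in fleet_profile
    ]
```

For the sample infrastructure, a job submitted from `WSAll` and run on a fleet with the `WorkerConfig` layout would map `/shared/projects/project1` to `/mnt/projects/project1`.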

**Topics**
+ [Model shared file system locations with storage profiles](modeling-your-shared-filesystem-locations-with-storage-profiles.md)
+ [Configure storage profiles for fleets](configuring-storage-profiles-for-fleets.md)
+ [Configure storage profiles for queues](storage-profiles-for-queues.md)
+ [Derive path mapping rules from storage profiles](deriving-path-mapping-rules-from-storage-profiles.md)

# Model shared file system locations with storage profiles
<a name="modeling-your-shared-filesystem-locations-with-storage-profiles"></a>

 A storage profile models the file system configuration of one of your host configurations. There are four different host configurations in the [sample project infrastructure](sample-project-infrastructure.md). In this example, you create a separate storage profile for each. You can create a storage profile using any of the following:
+ [CreateStorageProfile API](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateStorageProfile.html)
+ [AWS::Deadline::StorageProfile](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-deadline-storageprofile.html) CloudFormation resource
+ [AWS console](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/storage-shared.html#storage-profile)

 A storage profile is a list of file system locations. Each entry tells Deadline Cloud the path and type of a location that is relevant for jobs submitted from, or run on, a host. A storage profile should model only the locations that are relevant for jobs. For example, the shared `FSCommon` location is mounted on workstation `WS1` at `S:\`, so the corresponding file system location is: 

```
{
    "name": "FSCommon",
    "path": "S:\\",
    "type": "SHARED"
}
```

 Use the following commands to create storage profiles for the workstation configurations `WSAll`, `WS1`, and `WS2` and the worker configuration `WorkerCfg` using the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) in [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html): 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff

aws deadline create-storage-profile --farm-id $FARM_ID \
  --display-name WSAll \
  --os-family LINUX \
  --file-system-locations \
  '[
      {"name": "FSCommon", "type":"SHARED", "path":"/shared/common"},
      {"name": "FS1", "type":"SHARED", "path":"/shared/projects/project1"},
      {"name": "FS2", "type":"SHARED", "path":"/shared/projects/project2"}
  ]'

aws deadline create-storage-profile --farm-id $FARM_ID \
  --display-name WS1 \
  --os-family WINDOWS \
  --file-system-locations \
  '[
      {"name": "FSCommon", "type":"SHARED", "path":"S:\\"},
      {"name": "FS1", "type":"SHARED", "path":"Z:\\"}
   ]'

aws deadline create-storage-profile --farm-id $FARM_ID \
  --display-name WS2 \
  --os-family MACOS \
  --file-system-locations \
  '[
      {"name": "FSCommon", "type":"SHARED", "path":"/Volumes/common"},
      {"name": "FS2", "type":"SHARED", "path":"/Volumes/projects/project2"}
  ]'

aws deadline create-storage-profile --farm-id $FARM_ID \
  --display-name WorkerCfg \
  --os-family LINUX \
  --file-system-locations \
  '[
      {"name": "FSCommon", "type":"SHARED", "path":"/mnt/common"},
      {"name": "FS1", "type":"SHARED", "path":"/mnt/projects/project1"},
      {"name": "FS2", "type":"SHARED", "path":"/mnt/projects/project2"}
  ]'
```

**Note**  
Use the same value for the `name` property to refer to the same file system location across all storage profiles in your farm. When generating path mapping rules, Deadline Cloud compares these names to determine whether file system locations from different storage profiles refer to the same location. 

# Configure storage profiles for fleets
<a name="configuring-storage-profiles-for-fleets"></a>

You can configure a fleet to include a storage profile that models the file system locations on all workers in the fleet. The host file system configuration of all workers in a fleet must match their fleet's storage profile. Workers with different file system configurations must be in separate fleets. 

To set your fleet's configuration to use the `WorkerCfg` storage profile, use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) in [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html): 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of FLEET_ID to your fleet's identifier
FLEET_ID=fleet-00112233445566778899aabbccddeeff
# Change the value of WORKER_CFG_ID to the identifier of your storage profile named WorkerCfg
WORKER_CFG_ID=sp-00112233445566778899aabbccddeeff

FLEET_WORKER_MODE=$( \
  aws deadline get-fleet --farm-id $FARM_ID --fleet-id $FLEET_ID \
   --query 'configuration.customerManaged.mode' \
)
FLEET_WORKER_CAPABILITIES=$( \
  aws deadline get-fleet --farm-id $FARM_ID --fleet-id $FLEET_ID \
   --query 'configuration.customerManaged.workerCapabilities' \
)

aws deadline update-fleet --farm-id $FARM_ID --fleet-id $FLEET_ID \
  --configuration \
  "{
    \"customerManaged\": {
      \"storageProfileId\": \"$WORKER_CFG_ID\",
      \"mode\": $FLEET_WORKER_MODE,
      \"workerCapabilities\": $FLEET_WORKER_CAPABILITIES
    }
  }"
```

# Configure storage profiles for queues
<a name="storage-profiles-for-queues"></a>

 A queue's configuration includes a list of case-sensitive names of the shared file system locations that jobs submitted to the queue require. For example, jobs submitted to queue `Q1` require the file system locations `FSCommon` and `FS1`. Jobs submitted to queue `Q2` require the file system locations `FSCommon` and `FS2`. 

To set the queue's configurations to require these file system locations, use the following script: 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of QUEUE1_ID to queue Q1's identifier
QUEUE1_ID=queue-00112233445566778899aabbccddeeff
# Change the value of QUEUE2_ID to queue Q2's identifier
QUEUE2_ID=queue-00112233445566778899aabbccddeeff

aws deadline update-queue --farm-id $FARM_ID --queue-id $QUEUE1_ID \
  --required-file-system-location-names-to-add FSCommon FS1

aws deadline update-queue --farm-id $FARM_ID --queue-id $QUEUE2_ID \
  --required-file-system-location-names-to-add FSCommon FS2
```

 A queue's configuration also includes a list of allowed storage profiles that applies to jobs submitted to and fleets associated with that queue. Only storage profiles that define file system locations for all of the required file system locations for the queue are allowed in the queue's list of allowed storage profiles. 

A job fails if you submit it with a storage profile that isn't in the queue's list of allowed storage profiles. You can always submit a job with no storage profile to a queue. The workstation configurations `WSAll` and `WS1` both define the file system locations (`FSCommon` and `FS1`) required by queue `Q1`, so both must be allowed to submit jobs to that queue. Similarly, the workstation configurations `WSAll` and `WS2` meet the requirements for queue `Q2` and must be allowed to submit jobs to it. Update both queue configurations to allow jobs submitted with these storage profiles using the following script: 

```
# Change the value of WSALL_ID to the identifier of the WSAll storage profile
WSALL_ID=sp-00112233445566778899aabbccddeeff
# Change the value of WS1 to the identifier of the WS1 storage profile
WS1_ID=sp-00112233445566778899aabbccddeeff
# Change the value of WS2 to the identifier of the WS2 storage profile
WS2_ID=sp-00112233445566778899aabbccddeeff

aws deadline update-queue --farm-id $FARM_ID --queue-id $QUEUE1_ID \
  --allowed-storage-profile-ids-to-add $WSALL_ID $WS1_ID

aws deadline update-queue --farm-id $FARM_ID --queue-id $QUEUE2_ID \
  --allowed-storage-profile-ids-to-add $WSALL_ID $WS2_ID
```

 If you try to add the `WS2` storage profile to the list of allowed storage profiles for queue `Q1`, the operation fails: 

```
$ aws deadline update-queue --farm-id $FARM_ID --queue-id $QUEUE1_ID \
  --allowed-storage-profile-ids-to-add $WS2_ID

An error occurred (ValidationException) when calling the UpdateQueue operation: Storage profile id: sp-00112233445566778899aabbccddeeff does not have required file system location: FS1
```

 This is because the `WS2` storage profile doesn't contain a definition for the file system location named `FS1` that queue `Q1` requires. 
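The validation that causes this error can be sketched as a simple set-inclusion check. This is an illustration only, not the service's implementation: a storage profile can be added to a queue's allowed list only if it defines every file system location the queue requires.

```python
# Illustrative sketch of the allowed-storage-profile validation --
# NOT the actual Deadline Cloud implementation.

def profile_allowed(profile_location_names, queue_required_names):
    """A profile is allowed only if it defines all required locations."""
    return set(queue_required_names).issubset(profile_location_names)

# Queue Q1 requires FSCommon and FS1.
q1_required = ["FSCommon", "FS1"]

# WS1-style profile: defines both required locations, so it is allowed.
print(profile_allowed({"FSCommon", "FS1"}, q1_required))

# WS2-style profile: defines FS2 but not FS1, so adding it fails.
print(profile_allowed({"FSCommon", "FS2"}, q1_required))
```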

 Associating a fleet whose configured storage profile is not in the queue's list of allowed storage profiles also fails. For example: 

```
$ aws deadline create-queue-fleet-association --farm-id $FARM_ID \
   --fleet-id $FLEET_ID \
   --queue-id $QUEUE1_ID

An error occurred (ValidationException) when calling the CreateQueueFleetAssociation operation: Mismatch between storage profile ids.
```

To fix the error, add the `WorkerCfg` storage profile to the list of allowed storage profiles for both queue `Q1` and queue `Q2`. Then associate the fleet with these queues so that workers in the fleet can run jobs from both. 

```
# Change the value of FLEET_ID to your fleet's identifier
FLEET_ID=fleet-00112233445566778899aabbccddeeff
# Change the value of WORKER_CFG_ID to your storage profile named WorkerCfg
WORKER_CFG_ID=sp-00112233445566778899aabbccddeeff

aws deadline update-queue --farm-id $FARM_ID --queue-id $QUEUE1_ID \
  --allowed-storage-profile-ids-to-add $WORKER_CFG_ID

aws deadline update-queue --farm-id $FARM_ID --queue-id $QUEUE2_ID \
  --allowed-storage-profile-ids-to-add $WORKER_CFG_ID

aws deadline create-queue-fleet-association --farm-id $FARM_ID \
  --fleet-id $FLEET_ID \
  --queue-id $QUEUE1_ID

aws deadline create-queue-fleet-association --farm-id $FARM_ID \
  --fleet-id $FLEET_ID \
  --queue-id $QUEUE2_ID
```

# Derive path mapping rules from storage profiles
<a name="deriving-path-mapping-rules-from-storage-profiles"></a>

 Path mapping rules describe how paths specified in the job are remapped to their actual locations on a worker host. When a task runs on a worker, Deadline Cloud compares the job's storage profile with the storage profile of the worker's fleet to derive the path mapping rules for the task. 

 Deadline Cloud creates a mapping rule for each of the required file system locations in the queue's configuration. For example, a job submitted with the `WSAll` storage profile to queue `Q1` has the path mapping rules: 
+  `FSCommon`: `/shared/common -> /mnt/common` 
+  `FS1`: `/shared/projects/project1 -> /mnt/projects/project1` 

 Deadline Cloud creates rules for the `FSCommon` and `FS1` file system locations but not for `FS2`, even though both the `WSAll` and `WorkerCfg` storage profiles define `FS2`. This is because queue `Q1`'s list of required file system locations is `["FSCommon", "FS1"]`. 
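That filtering step can be sketched as follows. This is an illustration only, not the Deadline Cloud implementation: a rule is produced only for location names that the queue requires, even when both profiles define additional locations.

```python
# Illustrative sketch: rules are generated only for the queue's
# required file system locations -- NOT the actual implementation.

def rules_for_queue(job_locs, fleet_locs, required_names):
    job = {loc["name"]: loc["path"] for loc in job_locs}
    fleet = {loc["name"]: loc["path"] for loc in fleet_locs}
    return {
        name: (job[name], fleet[name])
        for name in required_names
        if name in job and name in fleet
    }

ws_all = [
    {"name": "FSCommon", "path": "/shared/common"},
    {"name": "FS1", "path": "/shared/projects/project1"},
    {"name": "FS2", "path": "/shared/projects/project2"},
]
worker_cfg = [
    {"name": "FSCommon", "path": "/mnt/common"},
    {"name": "FS1", "path": "/mnt/projects/project1"},
    {"name": "FS2", "path": "/mnt/projects/project2"},
]

# Queue Q1 requires only FSCommon and FS1, so FS2 produces no rule
# even though both profiles define it.
rules = rules_for_queue(ws_all, worker_cfg, ["FSCommon", "FS1"])
print(rules)
```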

 You can confirm the path mapping rules available to jobs submitted with a particular storage profile by submitting a job that prints out [Open Job Description's path mapping rules file](https://github.com/OpenJobDescription/openjd-specifications/wiki/How-Jobs-Are-Run#path-mapping), and then reading the session log after the job has completed: 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of QUEUE1_ID to queue Q1's identifier
QUEUE1_ID=queue-00112233445566778899aabbccddeeff
# Change the value of WSALL_ID to the identifier of the WSAll storage profile
WSALL_ID=sp-00112233445566778899aabbccddeeff

aws deadline create-job --farm-id $FARM_ID --queue-id $QUEUE1_ID \
  --priority 50 \
  --storage-profile-id $WSALL_ID \
  --template-type JSON --template \
  '{
    "specificationVersion": "jobtemplate-2023-09",
    "name": "DemoPathMapping",
    "steps": [
      {
        "name": "ShowPathMappingRules",
        "script": {
          "actions": {
            "onRun": {
              "command": "/bin/cat",
              "args": [ "{{Session.PathMappingRulesFile}}" ]
            }
          }
        }
      }
    ]
  }'
```

 If you use the [Deadline Cloud CLI](https://pypi.org/project/deadline/) to submit jobs, its `settings.storage_profile_id` configuration setting determines the storage profile attached to jobs submitted with the CLI. To submit jobs with the `WSAll` storage profile, run: 

```
deadline config set settings.storage_profile_id $WSALL_ID
```

 To run a customer-managed worker as though it is running in the sample infrastructure, follow the procedure in [Run the worker agent](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/run-worker.html) in the *Deadline Cloud User Guide* to run a worker with AWS CloudShell. If you followed those instructions before, first delete the `~/demoenv-logs` and `~/demoenv-persist` directories. Before you start, set the `DEV_FARM_ID` and `DEV_CMF_ID` environment variables that the directions reference: 

```
DEV_FARM_ID=$FARM_ID
DEV_CMF_ID=$FLEET_ID
```

 After the job runs, you can see the path mapping rules in the job's log file: 

```
cat demoenv-logs/${QUEUE1_ID}/*.log
...
JSON log results (see below)
...
```

The log contains mapping for both the `FS1` and `FSComm` file systems. Reformatted for readability, the log entry looks like this:

```
{
    "version": "pathmapping-1.0",
    "path_mapping_rules": [
        {
            "source_path_format": "POSIX",
            "source_path": "/shared/projects/project1",
            "destination_path": "/mnt/projects/project1"
        },
        {
            "source_path_format": "POSIX",
            "source_path": "/shared/common",
            "destination_path": "/mnt/common"
        }
    ]
}
```

 You can submit jobs with different storage profiles to see how the path mapping rules change. 

# Use job attachments to share files
<a name="build-job-attachments"></a>

Use *job attachments* to make files that are not in shared directories available to your jobs, and to capture output files that are not written to shared directories. Job attachments uses Amazon S3 to shuttle files between hosts: files are stored in S3 buckets, and a file is not uploaded again if its content hasn't changed.

You must use job attachments when running jobs on [service-managed fleets](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/smf-manage.html) because hosts don't share file system locations. Job attachments are also useful with [customer-managed fleets](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/manage-cmf.html) when a job’s input or output files are not stored on a shared network file system, such as when your [job bundle](https://docs.aws.amazon.com/deadline-cloud/latest/userguide/submit-job-bundle.html) contains shell or Python scripts. 

 When you submit a job bundle with either the [Deadline Cloud CLI](https://pypi.org/project/deadline/) or a Deadline Cloud submitter, job attachments use the job’s storage profile and the queue’s required file system locations to identify the input files that are not on a worker host and should be uploaded to Amazon S3 as part of job submission. These storage profiles also help Deadline Cloud identify the output files in worker host locations that must be uploaded to Amazon S3 so that they are available to your workstation. 

 The job attachments examples use the farm, fleet, queues, and storage profiles configurations from [Sample project infrastructure](sample-project-infrastructure.md) and [Storage profiles and path mapping](storage-profiles-and-path-mapping.md). You should go through those sections before this one. 

In the following examples, you use a sample job bundle as a starting point, then modify it to explore job attachment’s functionality. Job bundles are the best way for your jobs to use job attachments. They combine an [Open Job Description](https://github.com/OpenJobDescription/openjd-specifications/wiki) job template in a directory with additional files that list the files and directories required by jobs using the job bundle. For more information about job bundles, see [Open Job Description (OpenJD) templates for Deadline Cloud](build-job-bundle.md).

# Submitting files with a job
<a name="submitting-files-with-a-job"></a>

With Deadline Cloud, you can enable job workflows to access input files that are unavailable in shared file system locations on worker hosts. Job attachments allow rendering jobs to access files residing only on a local workstation drive or a service-managed fleet environment. When submitting a job bundle, you can include lists of input files and directories required by the job. Deadline Cloud identifies these non-shared files, uploads them from the local machine to Amazon S3, and downloads them to the worker host. It streamlines the process of transferring input assets to render nodes, ensuring all required files are accessible for distributed job execution.

You can specify the files for a job directly in the job bundle, through parameters in the job template that you provide using environment variables or a script, or in the job bundle's `asset_references` file. You can use one of these methods or a combination of all three. You can also specify a storage profile for the job so that only files that have changed on the local workstation are uploaded.

This section uses an example job bundle from GitHub to demonstrate how Deadline Cloud identifies the files in your job to upload, how those files are organized in Amazon S3, and how they are made available to the worker hosts processing your jobs. 

**Topics**
+ [How Deadline Cloud uploads files to Amazon S3](what-job-attachments-uploads-to-amazon-s3.md)
+ [How Deadline Cloud chooses the files to upload](how-job-attachments-decides-what-to-upload-to-amazon-s3.md)
+ [How jobs find job attachment input files](how-jobs-find-job-attachments-input-files.md)

# How Deadline Cloud uploads files to Amazon S3
<a name="what-job-attachments-uploads-to-amazon-s3"></a>

This example shows how Deadline Cloud uploads files from your workstation or worker host to Amazon S3 so that they can be shared. It uses a sample job bundle from GitHub and the Deadline Cloud CLI to submit jobs.

 Start by cloning the [Deadline Cloud samples GitHub repository](https://github.com/aws-deadline/deadline-cloud-samples) into your [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) environment, then copy the `job_attachments_devguide` job bundle into your home directory: 

```
git clone https://github.com/aws-deadline/deadline-cloud-samples.git
cp -r deadline-cloud-samples/job_bundles/job_attachments_devguide ~/
```

 Install the [Deadline Cloud CLI](https://pypi.org/project/deadline/) to submit job bundles: 

```
pip install deadline --upgrade
```

 The `job_attachments_devguide` job bundle has a single step with a task that runs a bash shell script whose file system location is passed as a job parameter. The job parameter’s definition is: 

```
...
- name: ScriptFile
  type: PATH
  default: script.sh
  dataFlow: IN
  objectType: FILE
...
```

 The `dataFlow` property’s `IN` value tells job attachments that the value of the `ScriptFile` parameter is an input to the job. The value of the `default` property is a relative location to the job bundle’s directory, but it can also be an absolute path. This parameter definition declares the `script.sh` file in the job bundle’s directory as an input file required for the job to run. 

 Next, make sure that the Deadline Cloud CLI does not have a storage profile configured, and then submit the job to queue `Q1`: 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of QUEUE1_ID to queue Q1's identifier
QUEUE1_ID=queue-00112233445566778899aabbccddeeff

deadline config set settings.storage_profile_id ''

deadline bundle submit --farm-id $FARM_ID --queue-id $QUEUE1_ID job_attachments_devguide/
```

 The output from the Deadline Cloud CLI after this command is run looks like: 

```
Submitting to Queue: Q1
...
Hashing Attachments  [####################################]  100%
Hashing Summary:
    Processed 1 file totaling 39.0 B.
    Skipped re-processing 0 files totaling 0.0 B.
    Total processing time of 0.0327 seconds at 1.19 KB/s.

Uploading Attachments  [####################################]  100%
Upload Summary:
    Processed 1 file totaling 39.0 B.
    Skipped re-processing 0 files totaling 0.0 B.
    Total processing time of 0.25639 seconds at 152.0 B/s.

Waiting for Job to be created...
Submitted job bundle:
   job_attachments_devguide/
Job creation completed successfully
job-74148c13342e4514b63c7a7518657005
```

When you submit the job, Deadline Cloud first hashes the `script.sh` file and then uploads it to Amazon S3. 

Deadline Cloud treats the S3 bucket as content-addressable storage. Files are uploaded as S3 objects whose names are derived from a hash of the file’s contents. If two files have identical contents, they have the same hash value regardless of where the files are located or what they are named. This enables Deadline Cloud to avoid uploading a file that is already available.
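The idea can be sketched in a few lines. Deadline Cloud uses a 128-bit xxhash for its object keys; the sketch below substitutes `hashlib.blake2b`, a standard library hash, purely as a stand-in, and the key format is illustrative only.

```python
import hashlib

# Sketch of content-addressable naming: the object key is derived from
# a hash of the file's bytes, so identical contents map to one object.
# Deadline Cloud uses 128-bit xxhash; blake2b is a stand-in here only
# because xxhash is not in the Python standard library.

def object_key(data: bytes) -> str:
    digest = hashlib.blake2b(data, digest_size=16).hexdigest()
    return f"DeadlineCloud/Data/{digest}.hash"   # illustrative key format

a = object_key(b"#!/bin/bash\necho hello\n")
b = object_key(b"#!/bin/bash\necho hello\n")   # same bytes, "different" file

# Identical contents -> identical key -> no second upload needed.
assert a == b
```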

 You can use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) to see the objects that were uploaded to Amazon S3: 

```
# The name of queue `Q1`'s job attachments S3 bucket
Q1_S3_BUCKET=$(
  aws deadline get-queue --farm-id $FARM_ID --queue-id $QUEUE1_ID \
    --query 'jobAttachmentSettings.s3BucketName' | tr -d '"'
)

aws s3 ls s3://$Q1_S3_BUCKET --recursive
```

 Two objects were uploaded to S3: 
+  `DeadlineCloud/Data/87cb19095dd5d78fcaf56384ef0e6241.xxh128` – The contents of `script.sh`. The value `87cb19095dd5d78fcaf56384ef0e6241` in the object key is the hash of the file’s contents, and the extension `xxh128` indicates that the hash value was calculated as a 128-bit [xxhash](https://xxhash.com/). 
+  `DeadlineCloud/Manifests/<farm-id>/<queue-id>/Inputs/<guid>/a1d221c7fd97b08175b3872a37428e8c_input` – The manifest object for the job submission. The values `<farm-id>`, `<queue-id>`, and `<guid>` are your farm identifier, queue identifier, and a random hexadecimal value. The value `a1d221c7fd97b08175b3872a37428e8c` in this example is a hash value calculated from the string `/home/cloudshell-user/job_attachments_devguide`, the directory where `script.sh` is located. 

 The manifest object contains information about the input files under a specific root path that were uploaded to S3 as part of the job’s submission. Download this manifest file (`aws s3 cp s3://$Q1_S3_BUCKET/<objectname> .`). Its contents are similar to: 

```
{
    "hashAlg": "xxh128",
    "manifestVersion": "2023-03-03",
    "paths": [
        {
            "hash": "87cb19095dd5d78fcaf56384ef0e6241",
            "mtime": 1721147454416085,
            "path": "script.sh",
            "size": 39
        }
    ],
    "totalSize": 39
}
```

This indicates that the file `script.sh` was uploaded, and that the hash of the file’s contents is `87cb19095dd5d78fcaf56384ef0e6241`. This hash value matches the value in the object name `DeadlineCloud/Data/87cb19095dd5d78fcaf56384ef0e6241.xxh128`, which Deadline Cloud uses to know which object to download for this file’s contents.

 The full schema for this file is [available in GitHub](https://github.com/aws-deadline/deadline-cloud/blob/mainline/src/deadline/job_attachments/asset_manifests/v2023_03_03/validate.py). 

When you use the [CreateJob operation](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateJob.html), you can set the location of the manifest objects. Use the [GetJob operation](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_GetJob.html) to see the location: 

```
{
    "attachments": {
        "fileSystem": "COPIED",
        "manifests": [
            {
                "inputManifestHash": "5b0db3d311805ea8de7787b64cbbe8b3",
                "inputManifestPath": "<farm-id>/<queue-id>/Inputs/<guid>/a1d221c7fd97b08175b3872a37428e8c_input",
                "rootPath": "/home/cloudshell-user/job_attachments_devguide",
                "rootPathFormat": "posix"
            }
        ]
    },
    ...
}
```

# How Deadline Cloud chooses the files to upload
<a name="how-job-attachments-decides-what-to-upload-to-amazon-s3"></a>

 The files and directories that job attachments considers for upload to Amazon S3 as inputs to your job are: 
+  The values of all `PATH`-type job parameters defined in the job bundle’s job template with a `dataFlow` value of `IN` or `INOUT`.
+  The files and directories listed as inputs in the job bundle’s asset references file. 

 If you submit a job with no storage profile, all of the files considered for uploading are uploaded. If you submit a job with a storage profile, files are not uploaded to Amazon S3 if they are located in the storage profile’s `SHARED`-type file system locations that are also required file system locations for the queue. These locations are expected to be available on the worker hosts that run the job, so there is no need to upload them to S3. 
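The upload decision can be sketched as follows. This is an illustration only, not the Deadline Cloud implementation: a candidate input file is skipped when it lies under a `SHARED` location of the job's storage profile that the queue also requires.

```python
# Illustrative sketch of the upload decision -- NOT the actual
# Deadline Cloud implementation.

def should_upload(path, shared_locations, queue_required_names):
    """Skip files under SHARED locations that the queue requires."""
    for name, root in shared_locations.items():
        if name in queue_required_names and path.startswith(root.rstrip("/") + "/"):
            # Available to workers through the shared mount; no upload.
            return False
    return True

# SHARED locations from the WSAll storage profile.
shared = {
    "FSCommon": "/shared/common",
    "FS1": "/shared/projects/project1",
    "FS2": "/shared/projects/project2",
}
# Queue Q1's required file system locations.
q1_required = {"FSCommon", "FS1"}

print(should_upload("/shared/common/file.txt", shared, q1_required))           # skipped
print(should_upload("/shared/projects/project2/file.txt", shared, q1_required))  # uploaded
```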

 In this example, you create the `SHARED` file system locations from `WSAll` in your AWS CloudShell environment and then add a file to each. Use the following commands: 

```
# Change the value of WSALL_ID to the identifier of the WSAll storage profile
WSALL_ID=sp-00112233445566778899aabbccddeeff

sudo mkdir -p /shared/common /shared/projects/project1 /shared/projects/project2
sudo chown -R cloudshell-user:cloudshell-user /shared

for d in /shared/common /shared/projects/project1 /shared/projects/project2; do
  echo "File contents for $d" > ${d}/file.txt
done
```

 Next, add an asset references file to the job bundle that includes all the files that you created as inputs for the job. Use the following command: 

```
cat > ${HOME}/job_attachments_devguide/asset_references.yaml << EOF
assetReferences:
  inputs:
    filenames:
    - /shared/common/file.txt
    directories:
    - /shared/projects/project1
    - /shared/projects/project2
EOF
```

 Next, configure the Deadline Cloud CLI to submit jobs with the `WSAll` storage profile, and then submit the job bundle: 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of QUEUE1_ID to queue Q1's identifier
QUEUE1_ID=queue-00112233445566778899aabbccddeeff
# Change the value of WSALL_ID to the identifier of the WSAll storage profile
WSALL_ID=sp-00112233445566778899aabbccddeeff

deadline config set settings.storage_profile_id $WSALL_ID

deadline bundle submit --farm-id $FARM_ID --queue-id $QUEUE1_ID job_attachments_devguide/
```

Deadline Cloud uploads two files to Amazon S3 when you submit the job. To see the uploaded files, download the job's manifest objects from S3: 

```
# Change the value of JOB_ID to the identifier printed when you submitted the job
JOB_ID=job-00112233445566778899aabbccddeeff

for manifest in $( \
  aws deadline get-job --farm-id $FARM_ID --queue-id $QUEUE1_ID --job-id $JOB_ID \
    --query 'attachments.manifests[].inputManifestPath' \
    | jq -r '.[]'
); do
  echo "Manifest object: $manifest"
  aws s3 cp --quiet s3://$Q1_S3_BUCKET/DeadlineCloud/Manifests/$manifest /dev/stdout | jq .
done
```

 In this example, there is a single manifest file with the following contents: 

```
{
    "hashAlg": "xxh128",
    "manifestVersion": "2023-03-03",
    "paths": [
        {
            "hash": "87cb19095dd5d78fcaf56384ef0e6241",
            "mtime": 1721147454416085,
            "path": "home/cloudshell-user/job_attachments_devguide/script.sh",
            "size": 39
        },
        {
            "hash": "af5a605a3a4e86ce7be7ac5237b51b79",
            "mtime": 1721163773582362,
            "path": "shared/projects/project2/file.txt",
            "size": 44
        }
    ],
    "totalSize": 83
}
```

 Use the [GetJob operation](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_GetJob.html) to see the manifest and confirm that its `rootPath` is `/`: 

```
aws deadline get-job --farm-id $FARM_ID --queue-id $QUEUE1_ID --job-id $JOB_ID --query 'attachments.manifests[*]'
```

 The root path for a set of input files is always the longest common subpath of those files. If you submit the job from Windows with input files on different drives, and therefore no common subpath, you see a separate root path for each drive. The paths in a manifest are always relative to the manifest's root path, so the input files that were uploaded are: 
+  `/home/cloudshell-user/job_attachments_devguide/script.sh` – The script file in the job bundle. 
+  `/shared/projects/project2/file.txt` – The file in a `SHARED` file system location in the `WSAll` storage profile that is **not** in the list of required file system locations for queue `Q1`. 

The files in file system locations `FSCommon` (`/shared/common/file.txt`) and `FS1` (`/shared/projects/project1/file.txt`) are not in the list. This is because those file system locations are `SHARED` in the `WSAll` storage profile and both are in the list of required file system locations for queue `Q1`. 
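The longest-common-subpath rule can be checked with the standard library. The sketch below reproduces this example's two uploaded files and shows how the manifest root and relative paths come out.

```python
import posixpath

# Sketch: the manifest root path is the longest common subpath of the
# uploaded input files, and manifest entries are relative to that root.
files = [
    "/home/cloudshell-user/job_attachments_devguide/script.sh",
    "/shared/projects/project2/file.txt",
]

root = posixpath.commonpath(files)
relative = [posixpath.relpath(f, root) for f in files]

print(root)       # "/" -- the only common subpath of these two files
print(relative)   # paths as they appear in the manifest
```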

You can see which file system locations are considered `SHARED` for a job submitted with a particular storage profile by using the [GetStorageProfileForQueue operation](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_GetStorageProfileForQueue.html). To query the `WSAll` storage profile for queue `Q1`, use the following commands: 

```
aws deadline get-storage-profile --farm-id $FARM_ID --storage-profile-id $WSALL_ID

aws deadline get-storage-profile-for-queue --farm-id $FARM_ID --queue-id $QUEUE1_ID --storage-profile-id $WSALL_ID
```

# How jobs find job attachment input files
<a name="how-jobs-find-job-attachments-input-files"></a>

 For a job to use the files that Deadline Cloud uploads to Amazon S3 using job attachments, your job needs those files available through the file system on the worker hosts. When a [session](https://github.com/OpenJobDescription/openjd-specifications/wiki/How-Jobs-Are-Run#sessions) for your job runs on a worker host, Deadline Cloud downloads the input files for the job into a temporary directory on the worker host’s local drive and adds path mapping rules for each of the job’s root paths to its file system location on the local drive. 
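Applying the session's path mapping rules can be sketched as follows. This is an illustration only, not the worker agent's implementation, and the session directory name below is hypothetical: a job path that starts with a rule's `source_path` is rewritten under the corresponding `destination_path`.

```python
# Illustrative sketch of applying path mapping rules -- NOT the
# worker agent's implementation. The session directory is made up.

rules = [
    {"source_path": "/",
     "destination_path": "/sessions/session-example/assetroot-example"},
]

def map_path(path, mapping_rules):
    """Rewrite a job path using the most specific matching rule."""
    best = max(
        (r for r in mapping_rules if path.startswith(r["source_path"])),
        key=lambda r: len(r["source_path"]),
        default=None,
    )
    if best is None:
        return path   # no rule applies; path is used as-is
    suffix = path[len(best["source_path"]):].lstrip("/")
    return best["destination_path"].rstrip("/") + "/" + suffix

print(map_path("/shared/projects/project2/file.txt", rules))
```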

 For this example, start the Deadline Cloud worker agent in an AWS CloudShell tab. Let any previously submitted jobs finish running, and then delete the job logs from the logs directory: 

```
rm -rf ~/demoenv-logs/queue-*
```

 The following script modifies the job bundle to show all files in the session’s temporary working directory and the contents of the path mapping rules file, and then submits a job with the modified bundle: 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of QUEUE1_ID to queue Q1's identifier
QUEUE1_ID=queue-00112233445566778899aabbccddeeff
# Change the value of WSALL_ID to the identifier of the WSAll storage profile
WSALL_ID=sp-00112233445566778899aabbccddeeff

deadline config set settings.storage_profile_id $WSALL_ID

cat > ~/job_attachments_devguide/script.sh << EOF
#!/bin/bash

echo "Session working directory is: \$(pwd)"
echo
echo "Contents:"
find . -type f
echo
echo "Path mapping rules file: \$1"
jq . \$1
EOF

cat > ~/job_attachments_devguide/template.yaml << EOF
specificationVersion: jobtemplate-2023-09
name: "Job Attachments Explorer"
parameterDefinitions:
- name: ScriptFile
  type: PATH
  default: script.sh
  dataFlow: IN
  objectType: FILE
steps:
- name: Step
  script:
    actions:
      onRun:
        command: /bin/bash
        args:
        - "{{Param.ScriptFile}}"
        - "{{Session.PathMappingRulesFile}}"
EOF

deadline bundle submit --farm-id $FARM_ID --queue-id $QUEUE1_ID job_attachments_devguide/
```

 After the worker in your AWS CloudShell environment runs the job, you can look at its session log: 

```
cat demoenv-logs/queue-*/session*.log
```

The log shows that the first thing that occurs in the session is that the two input files for the job are downloaded to the worker: 

```
2024-07-17 01:26:37,824 INFO ==============================================
2024-07-17 01:26:37,825 INFO --------- Job Attachments Download for Job
2024-07-17 01:26:37,825 INFO ==============================================
2024-07-17 01:26:37,825 INFO Syncing inputs using Job Attachments
2024-07-17 01:26:38,116 INFO Downloaded 142.0 B / 186.0 B of 2 files (Transfer rate: 0.0 B/s)
2024-07-17 01:26:38,174 INFO Downloaded 186.0 B / 186.0 B of 2 files (Transfer rate: 733.0 B/s)
2024-07-17 01:26:38,176 INFO Summary Statistics for file downloads:
Processed 2 files totaling 186.0 B.
Skipped re-processing 0 files totaling 0.0 B.
Total processing time of 0.09752 seconds at 1.91 KB/s.
```

 Next is the output from `script.sh` as run by the job. It shows that: 
+  The input files uploaded when the job was submitted are located under a directory whose name begins with "assetroot" in the session’s temporary directory. 
+  The input files’ paths have been relocated relative to the "assetroot" directory instead of relative to the root path for the job’s input manifest (`"/"`).
+  The path mapping rules file contains an additional rule that remaps `"/"` to the absolute path of the "assetroot" directory. 

 For example: 

```
2024-07-17 01:26:38,264 INFO Output:
2024-07-17 01:26:38,267 INFO Session working directory is: /sessions/session-5b33f
2024-07-17 01:26:38,267 INFO 
2024-07-17 01:26:38,267 INFO Contents:
2024-07-17 01:26:38,269 INFO ./tmp_xdhbsdo.sh
2024-07-17 01:26:38,269 INFO ./tmpdi00052b.json
2024-07-17 01:26:38,269 INFO ./assetroot-assetroot-3751a/shared/projects/project2/file.txt
2024-07-17 01:26:38,269 INFO ./assetroot-assetroot-3751a/home/cloudshell-user/job_attachments_devguide/script.sh
2024-07-17 01:26:38,269 INFO 
2024-07-17 01:26:38,270 INFO Path mapping rules file: /sessions/session-5b33f/tmpdi00052b.json
2024-07-17 01:26:38,282 INFO {
2024-07-17 01:26:38,282 INFO   "version": "pathmapping-1.0",
2024-07-17 01:26:38,282 INFO   "path_mapping_rules": [
2024-07-17 01:26:38,282 INFO     {
2024-07-17 01:26:38,282 INFO       "source_path_format": "POSIX",
2024-07-17 01:26:38,282 INFO       "source_path": "/shared/projects/project1",
2024-07-17 01:26:38,283 INFO       "destination_path": "/mnt/projects/project1"
2024-07-17 01:26:38,283 INFO     },
2024-07-17 01:26:38,283 INFO     {
2024-07-17 01:26:38,283 INFO       "source_path_format": "POSIX",
2024-07-17 01:26:38,283 INFO       "source_path": "/shared/common",
2024-07-17 01:26:38,283 INFO       "destination_path": "/mnt/common"
2024-07-17 01:26:38,283 INFO     },
2024-07-17 01:26:38,283 INFO     {
2024-07-17 01:26:38,283 INFO       "source_path_format": "POSIX",
2024-07-17 01:26:38,283 INFO       "source_path": "/",
2024-07-17 01:26:38,283 INFO       "destination_path": "/sessions/session-5b33f/assetroot-assetroot-3751a"
2024-07-17 01:26:38,283 INFO     }
2024-07-17 01:26:38,283 INFO   ]
2024-07-17 01:26:38,283 INFO }
```

**Note**  
 If the job you submit has multiple manifests with different root paths, there is a different "assetroot"-named directory for each of the root paths. 

 If you need to reference the relocated file system location of one of your input files or directories, you can either process the path mapping rules file in your job and perform the remapping yourself, or add a `PATH` type job parameter to the job template in your job bundle and pass the location that you need to remap as the value of that parameter. For example, the following commands modify the job bundle to add such a job parameter and then submit a job with the file system location `/shared/projects/project2` as its value: 

```
cat > ~/job_attachments_devguide/template.yaml << EOF
specificationVersion: jobtemplate-2023-09
name: "Job Attachments Explorer"
parameterDefinitions:
- name: LocationToRemap
  type: PATH
steps:
- name: Step
  script:
    actions:
      onRun:
        command: /bin/echo
        args:
        - "The location of {{RawParam.LocationToRemap}} in the session is {{Param.LocationToRemap}}"
EOF

deadline bundle submit --farm-id $FARM_ID --queue-id $QUEUE1_ID job_attachments_devguide/ \
  -p LocationToRemap=/shared/projects/project2
```

 The log file for this job’s run contains its output: 

```
2024-07-17 01:40:35,283 INFO Output:
2024-07-17 01:40:35,284 INFO The location of /shared/projects/project2 in the session is /sessions/session-5b33f/assetroot-assetroot-3751a
```
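If you process the path mapping rules file yourself, the core logic is a longest-prefix match over the rules. The following Python sketch is illustrative only: the `remap` helper is hypothetical, and the rules literal is an abbreviated copy of the rules shown in the log above.

```
import json

# Abbreviated copy of the session's path mapping rules file (see the log above).
RULES_JSON = """
{
  "version": "pathmapping-1.0",
  "path_mapping_rules": [
    {"source_path_format": "POSIX",
     "source_path": "/shared/projects/project1",
     "destination_path": "/mnt/projects/project1"},
    {"source_path_format": "POSIX",
     "source_path": "/",
     "destination_path": "/sessions/session-5b33f/assetroot-assetroot-3751a"}
  ]
}
"""

def remap(path: str, rules: list) -> str:
    """Apply the rule with the longest matching source_path prefix to path."""
    best = None
    for rule in rules:
        src = rule["source_path"]
        if path == src or path.startswith(src.rstrip("/") + "/") or src == "/":
            if best is None or len(src) > len(best["source_path"]):
                best = rule
    if best is None:
        return path
    suffix = path[len(best["source_path"]):].lstrip("/")
    dest = best["destination_path"].rstrip("/")
    return dest + "/" + suffix if suffix else dest

rules = json.loads(RULES_JSON)["path_mapping_rules"]
# A path covered by a specific rule:
print(remap("/shared/projects/project1/scene.ma", rules))
# A path caught by the "/" rule that job attachments added:
print(remap("/shared/projects/project2/file.txt", rules))
```

The second call resolves to a path under the "assetroot" directory, matching the file listing in the session log above.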

# Getting output files from a job
<a name="getting-output-files-from-a-job"></a>

This example shows how Deadline Cloud identifies the output files that your jobs generate and uploads them to Amazon S3, and how you can download those files to your workstation. 

 Use the `job_attachments_devguide_output` job bundle instead of the `job_attachments_devguide` job bundle for this example. Start by making a copy of the bundle in your AWS CloudShell environment from your clone of the Deadline Cloud samples GitHub repository: 

```
cp -r deadline-cloud-samples/job_bundles/job_attachments_devguide_output ~/
```

 The important difference between this job bundle and the `job_attachments_devguide` job bundle is the addition of a new job parameter in the job template: 

```
...
parameterDefinitions:
...
- name: OutputDir
  type: PATH
  objectType: DIRECTORY
  dataFlow: OUT
  default: ./output_dir
  description: This directory contains the output for all steps.
...
```

 The `dataFlow` property of the parameter has the value `OUT`. Deadline Cloud treats the values of job parameters whose `dataFlow` is `OUT` or `INOUT` as outputs of your job. If the file system location passed as the value of such a parameter is remapped to a local location on the worker that runs the job, then Deadline Cloud looks for new files at that location and uploads them to Amazon S3 as job outputs. 
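Conceptually, output detection is a before-and-after comparison of the remapped output location. The following Python sketch models that idea; it is a simplified illustration only, as the service itself uses job attachments manifests with file hashes rather than modification times.

```
import os
import tempfile

def snapshot(root: str) -> dict:
    """Record each file's modification time under root, keyed by relative path."""
    result = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            result[os.path.relpath(path, root)] = os.path.getmtime(path)
    return result

def new_or_modified(before: dict, after: dict) -> list:
    """Files that appeared or changed between the two snapshots."""
    return sorted(p for p, m in after.items() if before.get(p) != m)

# Simulate a task writing a file into the remapped output directory.
output_dir = tempfile.mkdtemp()
before = snapshot(output_dir)
with open(os.path.join(output_dir, "output.txt"), "w") as f:
    f.write("rendered result\n")
after = snapshot(output_dir)
print(new_or_modified(before, after))
```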

 To see how this works, first start the Deadline Cloud worker agent in an AWS CloudShell tab. Let any previously submitted jobs finish running. Then delete the job logs from the logs directory: 

```
rm -rf ~/demoenv-logs/queue-*
```

 Next, submit a job with this job bundle. After the worker running in your CloudShell environment processes the job, look at the logs: 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of QUEUE1_ID to queue Q1's identifier
QUEUE1_ID=queue-00112233445566778899aabbccddeeff
# Change the value of WSALL_ID to the identifier of the WSAll storage profile
WSALL_ID=sp-00112233445566778899aabbccddeeff

deadline config set settings.storage_profile_id $WSALL_ID

deadline bundle submit --farm-id $FARM_ID --queue-id $QUEUE1_ID ./job_attachments_devguide_output
```

 The log shows that a file was detected as output and uploaded to Amazon S3: 

```
2024-07-17 02:13:10,873 INFO ----------------------------------------------
2024-07-17 02:13:10,873 INFO Uploading output files to Job Attachments
2024-07-17 02:13:10,873 INFO ----------------------------------------------
2024-07-17 02:13:10,873 INFO Started syncing outputs using Job Attachments
2024-07-17 02:13:10,955 INFO Found 1 file totaling 117.0 B in output directory: /sessions/session-7efa/assetroot-assetroot-3751a/output_dir
2024-07-17 02:13:10,956 INFO Uploading output manifest to DeadlineCloud/Manifests/farm-0011/queue-2233/job-4455/step-6677/task-6677-0/2024-07-17T02:13:10.835545Z_sessionaction-8899-1/c6808439dfc59f86763aff5b07b9a76c_output
2024-07-17 02:13:10,988 INFO Uploading 1 output file to S3: s3BucketName/DeadlineCloud/Data
2024-07-17 02:13:11,011 INFO Uploaded 117.0 B / 117.0 B of 1 file (Transfer rate: 0.0 B/s)
2024-07-17 02:13:11,011 INFO Summary Statistics for file uploads:
Processed 1 file totaling 117.0 B.
Skipped re-processing 0 files totaling 0.0 B.
Total processing time of 0.02281 seconds at 5.13 KB/s.
```

 The log also shows that Deadline Cloud created a new manifest object in the Amazon S3 bucket configured for use by job attachments on queue `Q1`. The name of the manifest object is derived from the farm, queue, job, step, task, timestamp, and `sessionaction` identifiers of the task that generated the output. Download this manifest file to see where Deadline Cloud placed the output files for this task: 

```
# The name of queue `Q1`'s job attachments S3 bucket
Q1_S3_BUCKET=$(
  aws deadline get-queue --farm-id $FARM_ID --queue-id $QUEUE1_ID \
    --query 'jobAttachmentSettings.s3BucketName' | tr -d '"'
)

# Fill this in with the object name from your log
OBJECT_KEY="DeadlineCloud/Manifests/..."

aws s3 cp --quiet s3://$Q1_S3_BUCKET/$OBJECT_KEY /dev/stdout | jq .
```

 The manifest looks like: 

```
{
  "hashAlg": "xxh128",
  "manifestVersion": "2023-03-03",
  "paths": [
    {
      "hash": "34178940e1ef9956db8ea7f7c97ed842",
      "mtime": 1721182390859777,
      "path": "output_dir/output.txt",
      "size": 117
    }
  ],
  "totalSize": 117
}
```

 This shows that the content of the output file is saved to Amazon S3 the same way that job input files are saved. Similar to input files, the output file is stored in S3 with an object name containing the hash of the file and the prefix `DeadlineCloud/Data`. 

```
$ aws s3 ls --recursive s3://$Q1_S3_BUCKET | grep 34178940e1ef9956db8ea7f7c97ed842
2024-07-17 02:13:11        117 DeadlineCloud/Data/34178940e1ef9956db8ea7f7c97ed842.xxh128
```
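You can combine the output manifest with this key layout to locate each output object. The following Python sketch derives the data object keys from a manifest; the `data_object_keys` helper is illustrative, and the key layout is taken from the `aws s3 ls` listing above.

```
import json

# The output manifest downloaded earlier, copied here as an illustrative literal.
MANIFEST = """
{
  "hashAlg": "xxh128",
  "manifestVersion": "2023-03-03",
  "paths": [
    {
      "hash": "34178940e1ef9956db8ea7f7c97ed842",
      "mtime": 1721182390859777,
      "path": "output_dir/output.txt",
      "size": 117
    }
  ],
  "totalSize": 117
}
"""

def data_object_keys(manifest: dict, prefix: str = "DeadlineCloud/Data") -> list:
    """Derive each file's S3 object key: <prefix>/<file hash>.<hash algorithm>."""
    return [
        f"{prefix}/{entry['hash']}.{manifest['hashAlg']}"
        for entry in manifest["paths"]
    ]

manifest = json.loads(MANIFEST)
print(data_object_keys(manifest))
```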

 You can download the output of a job to your workstation using the Deadline Cloud monitor or the Deadline Cloud CLI: 

```
deadline job download-output --farm-id $FARM_ID --queue-id $QUEUE1_ID --job-id $JOB_ID
```

 The value of the `OutputDir` job parameter in the submitted job is `./output_dir`, so the output files are downloaded to a directory called `output_dir` within the job bundle directory. If you specified an absolute path or a different relative location as the value for `OutputDir`, then the output files would be downloaded to that location instead. 

```
$ deadline job download-output --farm-id $FARM_ID --queue-id $QUEUE1_ID --job-id $JOB_ID
Downloading output from Job 'Job Attachments Explorer: Output'

Summary of files to download:
    /home/cloudshell-user/job_attachments_devguide_output/output_dir/output.txt (1 file)

You are about to download files which may come from multiple root directories. Here are a list of the current root directories:
[0] /home/cloudshell-user/job_attachments_devguide_output
> Please enter the index of root directory to edit, y to proceed without changes, or n to cancel the download (0, y, n) [y]: 

Downloading Outputs  [####################################]  100%
Download Summary:
    Downloaded 1 files totaling 117.0 B.
    Total download time of 0.14189 seconds at 824.0 B/s.
    Download locations (total file counts):
        /home/cloudshell-user/job_attachments_devguide_output (1 file)
```

# Using files from a step in a dependent step
<a name="using-files-output-from-a-step-in-a-dependent-step"></a>

This example shows how one step in a job can access the outputs from a step that it depends on in the same job. 

 To make the outputs of one step available to another, Deadline Cloud adds additional actions to a session to download those outputs before running tasks in the session. You tell it which steps to download the outputs from by declaring those steps as dependencies of the step that needs to use the outputs. 

Use the `job_attachments_devguide_output` job bundle for this example. Start by making a copy in your AWS CloudShell environment from your clone of the Deadline Cloud samples GitHub repository. Modify it to add a dependent step that only runs after the existing step and uses that step’s output: 

```
cp -r deadline-cloud-samples/job_bundles/job_attachments_devguide_output ~/

cat >> job_attachments_devguide_output/template.yaml << EOF
- name: DependentStep
  dependencies:
  - dependsOn: Step
  script:
    actions:
      onRun:
        command: /bin/cat
        args:
        - "{{Param.OutputDir}}/output.txt"
EOF
```

 The job created with this modified job bundle runs as two separate sessions, one for the task in the step "Step" and then a second for the task in the step "DependentStep". 

First start the Deadline Cloud worker agent in a CloudShell tab. Let any previously submitted jobs finish running, and then delete the job logs from the logs directory: 

```
rm -rf ~/demoenv-logs/queue-*
```

 Next, submit a job using the modified `job_attachments_devguide_output` job bundle. Wait for it to finish running on the worker in your CloudShell environment. Look at the logs for the two sessions: 

```
# Change the value of FARM_ID to your farm's identifier
FARM_ID=farm-00112233445566778899aabbccddeeff
# Change the value of QUEUE1_ID to queue Q1's identifier
QUEUE1_ID=queue-00112233445566778899aabbccddeeff
# Change the value of WSALL_ID to the identifier of the WSAll storage profile
WSALL_ID=sp-00112233445566778899aabbccddeeff

deadline config set settings.storage_profile_id $WSALL_ID

deadline bundle submit --farm-id $FARM_ID --queue-id $QUEUE1_ID ./job_attachments_devguide_output

# Wait for the job to finish running, and then:

cat demoenv-logs/queue-*/session-*
```

 In the session log for the task in the step named `DependentStep`, there are two separate download actions run: 

```
2024-07-17 02:52:05,666 INFO ==============================================
2024-07-17 02:52:05,666 INFO --------- Job Attachments Download for Job
2024-07-17 02:52:05,667 INFO ==============================================
2024-07-17 02:52:05,667 INFO Syncing inputs using Job Attachments
2024-07-17 02:52:05,928 INFO Downloaded 207.0 B / 207.0 B of 1 file (Transfer rate: 0.0 B/s)
2024-07-17 02:52:05,929 INFO Summary Statistics for file downloads:
Processed 1 file totaling 207.0 B.
Skipped re-processing 0 files totaling 0.0 B.
Total processing time of 0.03954 seconds at 5.23 KB/s.

2024-07-17 02:52:05,979 INFO 
2024-07-17 02:52:05,979 INFO ==============================================
2024-07-17 02:52:05,979 INFO --------- Job Attachments Download for Step
2024-07-17 02:52:05,979 INFO ==============================================
2024-07-17 02:52:05,980 INFO Syncing inputs using Job Attachments
2024-07-17 02:52:06,133 INFO Downloaded 117.0 B / 117.0 B of 1 file (Transfer rate: 0.0 B/s)
2024-07-17 02:52:06,134 INFO Summary Statistics for file downloads:
Processed 1 file totaling 117.0 B.
Skipped re-processing 0 files totaling 0.0 B.
Total processing time of 0.03227 seconds at 3.62 KB/s.
```

 The first action downloads the `script.sh` file used by the step named "Step." The second action downloads the outputs from that step. Deadline Cloud determines which files to download by using the output manifest generated by that step as an input manifest. 

 Later in the same log, you can see the output from the step named "DependentStep": 

```
2024-07-17 02:52:06,213 INFO Output:
2024-07-17 02:52:06,216 INFO Script location: /sessions/session-5b33f/assetroot-assetroot-3751a/script.sh
```

# Create resource limits for jobs
<a name="build-job-limits"></a>

Jobs submitted to Deadline Cloud may depend on resources that are shared between multiple jobs. For example, a farm may have more workers than floating licenses for a specific application. Or a shared file server may only be able to serve data to a limited number of workers at the same time. In some cases, one or more jobs can claim all of these resources, causing errors when new workers start and find the resources unavailable. 

To help solve this, you can use *limits* for these constrained resources. Deadline Cloud tracks the availability of constrained resources and uses that information to make sure resources are available as new workers start, lowering the likelihood that jobs fail because a resource is unavailable.

Limits are created for the entire farm. Jobs submitted to a queue can only acquire limits associated with the queue. If you specify a limit for a job that is not associated with the queue, the job isn't compatible and won't run.

To use a limit, you:
+ [Create a limit](job-limit-create.md)
+ [Associate a limit and a queue](job-limit-associate.md)
+ [Submit a job requiring limits](job-limit-job.md)

**Note**  
If you run a job that has constrained resources in a queue that is not associated with a limit, that job can consume all of the resources. If you have a constrained resource, make sure that all of the steps in jobs in queues that use the resource are associated with a limit.

For limits defined in a farm, associated with a queue, and specified in a job, one of four things can happen:
+ If you create a limit, associate it with a queue, and specify the limit in a job's template, the job runs and uses only the resources defined in the limit.
+ If you create a limit, specify it in a job template, but don't associate the limit with a queue, the job is marked incompatible and won't run.
+ If you create a limit, don't associate it with a queue, and don't specify the limit in a job's template, the job runs but does not use the limit.
+ If you don't use a limit at all, the job runs.

If you associate a limit to multiple queues, the queues share the resources constrained by the limit. For example, if you create a limit of 100, and one queue is using 60 resources, other queues can only use 40 resources. When a resource is released, it can be taken by a task from any queue.
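The shared-pool arithmetic above can be sketched as a small model. This is an illustrative model only, not the service's implementation:

```
class Limit:
    """Illustrative model of a farm-wide limit shared by every associated queue."""

    def __init__(self, max_count: int):
        self.max_count = max_count
        self.current = 0

    def acquire(self, amount: int = 1) -> bool:
        """Take capacity if available; tasks from any queue draw from the same pool."""
        if self.current + amount > self.max_count:
            return False
        self.current += amount
        return True

    def release(self, amount: int = 1) -> None:
        """Return capacity so a task from any queue can take it."""
        self.current = max(0, self.current - amount)

limit = Limit(max_count=100)
limit.acquire(60)                      # tasks in one queue hold 60 resources
remaining = limit.max_count - limit.current
print(remaining)                       # other queues can use only 40
```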

Deadline Cloud provides two Amazon CloudWatch metrics to help you monitor the resources provided by a limit. You can monitor the current number of resources in use and the maximum number of resources available in the limit. For more information, see [Resource limit metrics](https://docs.aws.amazon.com/deadline-cloud/latest/developerguide/cloudwatch-metrics.html#cloudwatch-metrics-limits) in the *Deadline Cloud Developer Guide*.

You apply a limit to a job step in a job template. When you specify the amount requirement name of a limit in the `amounts` section of the `hostRequirements` of a step and a limit with the same `amountRequirementName` is associated with the job's queue, tasks scheduled for this step are constrained by the limit for the resource.

If a step requires a resource that is constrained by a limit that is reached, tasks in that step won't be picked up by additional workers.

You can apply more than one limit to a job step. For example, if the step uses two different software licenses, you can apply a separate limit for each license. If a step requires two limits and the limit for one of the resources is reached, tasks in that step won't be picked up by additional workers until the resources become available.
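The all-or-nothing check described above can be sketched as follows. This is an illustrative model, and the amount requirement names are hypothetical:

```
def can_schedule(required: dict, available: dict) -> bool:
    """A task is picked up only if every limit it requires has capacity left."""
    return all(available.get(name, 0) >= amount for name, amount in required.items())

# Hypothetical remaining capacity for two license limits.
available = {"amount.license_a": 3, "amount.license_b": 0}
# A step that needs one seat of each license.
step_needs = {"amount.license_a": 1, "amount.license_b": 1}
print(can_schedule(step_needs, available))  # license_b is exhausted
```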

## Stopping and deleting limits
<a name="job-limit-stop-delete"></a>

When you stop or delete the association between a queue and a limit, jobs using the limit stop scheduling tasks from steps that require the limit, and no new sessions are created for those steps.

Tasks that are in the READY state remain ready, and tasks automatically resume when the association between the queue and the limit becomes active again. You don't need to requeue any jobs.

When you stop or delete the association between a queue and a limit, you have two choices on how to stop running tasks:
+ Stop and cancel tasks – Workers with sessions that acquired the limit cancel all tasks.
+ Stop and finish running tasks – Workers with sessions that acquired the limit complete their tasks.

When you delete a limit using the console, workers stop running tasks, either immediately or when the tasks complete, depending on the choice above. After the association is deleted, the following happens: 
+ Steps requiring the limit are marked not compatible.
+ The entire job containing those steps is canceled, including steps that don't require the limit.
+ The job is marked not compatible.

If the queue associated with the limit has an associated fleet with a fleet capability that matches the amount requirement name of the limit, that fleet will continue to process jobs with the specified limit.

# Create a limit
<a name="job-limit-create"></a>

You create a limit using the Deadline Cloud console or the [CreateLimit operation in the Deadline Cloud API](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateLimit.html). Limits are defined for a farm, but associated with queues. After you create a limit, you can associate it with one or more queues.

**To create a limit**

1. From the Deadline Cloud console ([Deadline Cloud console](https://console.aws.amazon.com/deadlinecloud/home)) dashboard, select the farm that you want to create a queue for.

1. Choose the farm to add the limit to, choose the **Limits ** tab, and then choose **Create limit**.

1. Provide the details for the limit. The **Amount requirement name** is the name used in the job template to identify the limit. It must begin with the prefix **amount.** followed by the amount name. The amount requirement name must be unique in queues associated with the limit.

1. If you choose **Set a maximum amount**, that is the total number of resources allowed by this limit. If you choose **No maximum amount**, resource usage isn't limited. Even when resource usage isn't limited, the `CurrentCount` Amazon CloudWatch metric is emitted so that you can track usage. For more information, see [CloudWatch metrics](https://docs.aws.amazon.com/deadline-cloud/latest/developerguide/cloudwatch-metrics.html) in the *Deadline Cloud Developer Guide*.

1. If you already know the queues that should use the limit, you can choose them now. You don't need to associate a queue to create a limit.

1. Choose **Create limit**.

# Associate a limit and a queue
<a name="job-limit-associate"></a>

After you create a limit, you can associate one or more queues with the limit. Only queues that are associated with a limit use the values specified in the limit.

You create an association with a queue using the Deadline Cloud console or the [CreateQueueLimitAssociation operation in the Deadline Cloud API](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateQueueLimitAssociation.html).

**To associate a queue with a limit**

1. From the Deadline Cloud console ([Deadline Cloud console](https://console.aws.amazon.com/deadlinecloud/home)) dashboard, select the farm where you want to associate a limit with a queue.

1. Choose the **Limits ** tab, choose the limit to associate a queue with, and then choose **Edit limit**.

1. In the **Associate queues** section, choose the queues to associate with the limit.

1. Choose **Save changes**.

# Submit a job requiring limits
<a name="job-limit-job"></a>

You apply a limit by specifying it as a host requirement for the job or job step. If you don't specify a limit in a step and that step uses an associated resource, the step's usage isn't counted against the limit when jobs are scheduled.

Some Deadline Cloud submitters enable you to set a host requirement. You can specify the limit's amount requirement name in the submitter to apply the limit.

If your submitter doesn't support adding host requirements, you can also apply a limit by editing the job template for the job.

**To apply a limit to a job step in the job bundle**

1. Open the job template for the job using a text editor. The job template is located in the job bundle directory for the job. For more information, see [Job bundles](https://docs.aws.amazon.com/deadline-cloud/latest/developerguide/build-job-bundle.html) in the *Deadline Cloud Developer Guide*.

1. Find the step definition for the step to apply the limit to.

1. Add the following to the step definition. Replace *amount.name* with the amount requirement name of your limit. For typical use, you should set the `min` value to 1.

------
#### [ YAML ]

   ```
     hostRequirements:
       amounts:
       - name: amount.name
         min: 1
   ```

------
#### [ JSON ]

   ```
   "hostRequirements": {
       "amounts": [
           {
               "name": "amount.name",
               "min": 1
           }
       ]
   }
   ```

------

   You can add multiple limits to a job step as follows. Replace *amount.name_1* and *amount.name_2* with the amount requirement names of your limits.

------
#### [ YAML ]

   ```
     hostRequirements:
       amounts:
       - name: amount.name_1
         min: 1
       - name: amount.name_2
         min: 1
   ```

------
#### [ JSON ]

   ```
   "hostRequirements": {
       "amounts": [
           {
               "name": "amount.name_1",
               "min": 1
           },
           {
               "name": "amount.name_2",
               "min": 1
           }
       ]
   }
   ```

------

1. Save the changes to the job template.

# How to submit a job to Deadline Cloud
<a name="submit-jobs-how"></a>

There are many different ways to submit jobs to AWS Deadline Cloud. This section describes some of the ways that you can submit jobs using the tools provided by Deadline Cloud or by creating your own custom tools for your workloads. 
+ From a terminal – for when you’re first developing a job bundle, or when users submitting a job are comfortable using the command line
+ From a script – for customizing and automating workloads
+ From an application – for when the user’s work is in an application, or when an application’s context is important

 The following examples use the `deadline` Python library and the `deadline` command line tool. Both are available from [PyPI](https://pypi.org/project/deadline/) and [hosted on GitHub](https://github.com/aws-deadline/deadline-cloud). 

**Topics**
+ [Submit a job to Deadline Cloud from a terminal](from-a-terminal.md)
+ [Submit a job to Deadline Cloud using a script](from-a-script.md)
+ [Submit a job within an application](from-within-applications.md)

# Submit a job to Deadline Cloud from a terminal
<a name="from-a-terminal"></a>

Using only a job bundle and the Deadline Cloud CLI, you or your more technical users can rapidly iterate on writing job bundles to test submitting a job. Use the following command to submit a job bundle: 

```
deadline bundle submit <path-to-job-bundle>
```

 If you submit a job bundle with parameters that do not have defaults in the bundle, you can specify them with the `-p` / `--parameter` option. 

```
deadline bundle submit <path-to-job-bundle> -p <parameter-name>=<parameter-value> -p ...
```

 For a complete list of the available options, run the help command: 

```
deadline bundle submit --help
```

## Submit a job to Deadline Cloud using a GUI
<a name="with-a-submission-window"></a>

 The Deadline Cloud CLI also comes with a graphical user interface that enables users to see the parameters they must provide before submitting a job. If your users prefer not to interact with the command line, you can write a desktop shortcut that opens a dialog to submit a specific job bundle: 

```
deadline bundle gui-submit <path-to-job-bundle>
```

 Use the `--browse` option so that the user can select a job bundle: 

```
deadline bundle gui-submit --browse
```

 For a complete list of available options, run the help command: 

```
deadline bundle gui-submit --help
```

# Submit a job to Deadline Cloud using a script
<a name="from-a-script"></a>

 To automate submitting jobs to Deadline Cloud, you can script submissions using tools such as bash, PowerShell, and batch files. 

You can add functionality like populating job parameters from environment variables or other applications. You can also submit multiple jobs in a row, or script the creation of a job bundle to submit. 

## Submit a job using Python
<a name="with-python"></a>

Deadline Cloud also has an open-source Python library to interact with the service. The [source code is available on GitHub](https://github.com/aws-deadline/deadline-cloud). 

The library is available on PyPI via pip (`pip install deadline`). It's the same library used by the Deadline Cloud CLI tool: 

```
from deadline.client import api

job_bundle_path = "/path/to/job/bundle"
job_parameters = [
    {
        "name": "parameter_name",
        "value": "parameter_value"
    },
]

job_id = api.create_job_from_job_bundle(
    job_bundle_path,
    job_parameters
)
print(job_id)
```
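Building on the call above, a script can submit several jobs in a row, for example one job per shot. The following sketch is illustrative; the shot names and job parameter names are hypothetical, and the `deadline` library is only imported when a submission actually happens:

```
def build_parameters(shot: str, output_root: str) -> list:
    """Build the job_parameters list for one shot (hypothetical parameter names)."""
    return [
        {"name": "ShotName", "value": shot},
        {"name": "OutputDir", "value": f"{output_root}/{shot}"},
    ]

def submit_all(bundle_path: str, shots: list, output_root: str) -> list:
    """Submit one job per shot and return the created job IDs."""
    # Imported here so build_parameters stays usable without the library installed.
    from deadline.client import api

    return [
        api.create_job_from_job_bundle(bundle_path, build_parameters(shot, output_root))
        for shot in shots
    ]

print(build_parameters("shot010", "/mnt/renders"))
```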

 To create a dialog like the one from the `deadline bundle gui-submit` command, you can use the `show_job_bundle_submitter` function from [`deadline.client.ui.job_bundle_submitter`](https://github.com/aws-deadline/deadline-cloud/blob/mainline/src/deadline/client/ui/job_bundle_submitter.py). 

 The following example starts a Qt application and shows the job bundle submitter: 

```
# The GUI components must be installed with pip install "deadline[gui]"
import sys
from qtpy.QtWidgets import QApplication
from deadline.client.ui.job_bundle_submitter import show_job_bundle_submitter

app = QApplication(sys.argv)
submitter = show_job_bundle_submitter(browse=True)
submitter.show()
app.exec()
print(submitter.create_job_response)
```

To make your own dialog, you can use the `SubmitJobToDeadlineDialog` class in [submit_job_to_deadline_dialog.py](https://github.com/aws-deadline/deadline-cloud/blob/mainline/src/deadline/client/ui/dialogs/submit_job_to_deadline_dialog.py). You can pass in values, embed your own job-specific tab, and determine how the job bundle gets created (or passed in). 

# Submit a job within an application
<a name="from-within-applications"></a>

 To make it easy for users to submit jobs, you can use the scripting runtimes or plugin systems provided by an application. Users get a familiar interface, and you can create powerful tools that assist them when submitting a workload. 

## Embed job bundles in an application
<a name="simple-embedding"></a>

This example demonstrates submitting job bundles that you make available in the application.

 To give a user access to these job bundles, create a script embedded in a menu item that launches the Deadline Cloud CLI. 

 The following command lets the user browse for and select a job bundle to submit: 

```
deadline bundle gui-submit --browse --install-gui
```

 To use a specific job bundle in a menu item instead, use the following: 

```
deadline bundle gui-submit </path/to/job/bundle> --install-gui
```

 This opens a dialog where the user can modify the job parameters, inputs, and outputs, and then submit the job. You can have different menu items for different job bundles for a user to submit in an application. 

If the job that you submit with a job bundle contains similar parameters and asset references across submissions, you can fill in the default values in the underlying job bundle. 
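One way to fill in those defaults programmatically is to rewrite the `parameterDefinitions` in the template before submission. The following Python sketch uses a JSON job template fragment; JSON templates (`template.json`) are supported alongside YAML, and the parameter names here are hypothetical:

```
import json

# An illustrative JSON job template fragment with hypothetical parameter names.
template = json.loads("""
{
  "specificationVersion": "jobtemplate-2023-09",
  "name": "Studio Render",
  "parameterDefinitions": [
    {"name": "SceneFile", "type": "PATH", "objectType": "FILE", "dataFlow": "IN"},
    {"name": "OutputDir", "type": "PATH", "objectType": "DIRECTORY", "dataFlow": "OUT"}
  ]
}
""")

def set_defaults(template: dict, defaults: dict) -> dict:
    """Fill in a default value for each named parameter so submissions are pre-populated."""
    for param in template["parameterDefinitions"]:
        if param["name"] in defaults:
            param["default"] = defaults[param["name"]]
    return template

set_defaults(template, {"OutputDir": "./output_dir"})
print(template["parameterDefinitions"][1]["default"])
```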

## Get information from an application
<a name="deep-integration"></a>

To pull information from an application so that users don't have to manually add it to the submission, you can integrate Deadline Cloud with the application so that your users can submit jobs using a familiar interface without needing to exit the application or use command line tools.

If your application has a scripting runtime that supports Python and pyside/pyqt, you can use the GUI components from the [Deadline Cloud client library](https://github.com/aws-deadline/deadline-cloud) to create a UI. For an example, see [Deadline Cloud for Maya integration](https://github.com/aws-deadline/deadline-cloud-for-maya) on GitHub. 

The Deadline Cloud client library provides operations that do the following to help you provide a strong integrated user experience:
+ Pull queue environment parameters, job parameters, and asset references from environment variables and by calling the application SDK.
+ Set the parameters in the job bundle. To avoid modifying the original bundle, you should make a copy of the bundle and submit the copy.

If you use the `deadline bundle gui-submit` command to submit the job bundle, you must programmatically generate the `parameter_values.yaml` and `asset_references.yaml` files to pass information from the application. For more information about these files, see [Open Job Description (OpenJD) templates for Deadline Cloud](build-job-bundle.md).

If you need more complex controls than the ones offered by OpenJD, need to abstract the job from the user, or want to make the integration match the application's visual style, you can write your own dialog that calls the Deadline Cloud client library to submit the job.

# Schedule jobs in Deadline Cloud
<a name="build-jobs-scheduling"></a>

After you create a job, AWS Deadline Cloud schedules it to be processed on one or more of the fleets associated with a queue. The fleet that processes a particular task is chosen based on the scheduling configuration, the capabilities configured for the fleet, and the host requirements of a specific step.

The following sections provide details of the process of scheduling a job.

## Scheduling configurations
<a name="jobs-scheduling-configuration"></a>

You can configure how Deadline Cloud schedules jobs in a queue by setting a scheduling configuration on the queue. The scheduling configuration controls how workers are distributed across jobs.

You can set the scheduling configuration using the Deadline Cloud console or by calling the [CreateQueue](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_CreateQueue.html) or [UpdateQueue](https://docs.aws.amazon.com/deadline-cloud/latest/APIReference/API_UpdateQueue.html) APIs.

There are three available scheduling configurations:
+ **Priority, first-in-first-out** (`priorityFifo`) – Schedules the highest priority, earliest submitted job first (default).
+ **Priority, balanced** (`priorityBalanced`) – Distributes workers evenly across jobs at the highest priority.
+ **Weighted, balanced** (`weightedBalanced`) – Uses a weighted formula to determine how workers are distributed across jobs.

In all scheduling configurations, in-progress tasks run to completion before a new scheduling decision is made. If you change the scheduling configuration while tasks are running, the change applies only when workers are assigned next. Running tasks are not interrupted or reassigned.

### Priority, first-in-first-out
<a name="jobs-scheduling-priority-fifo"></a>

Priority, first-in-first-out (`priorityFifo`) is the default scheduling configuration for new queues. Deadline Cloud assigns workers to the highest-priority job first. When multiple jobs share the same priority, the oldest (earliest submitted) job receives all available workers first.

Use priority FIFO when you want strict ordering of jobs. This configuration is appropriate when jobs should complete one at a time in the order they were submitted, such as sequential pipeline stages or batch processing where each job must finish before the next one starts.

This configuration has no additional parameters.

### Priority, balanced
<a name="jobs-scheduling-priority-balanced"></a>

Priority, balanced (`priorityBalanced`) distributes workers evenly across all jobs at the highest priority level. When only one job exists at the highest priority, Deadline Cloud assigns all workers to that job. When multiple jobs share the highest priority, workers are split evenly among them. If the workers cannot be evenly divided, the extra workers are distributed among the highest priority jobs.

Use priority balanced when multiple artists or users submit jobs at the same priority and each user needs immediate feedback. This configuration ensures that no single job monopolizes all available workers, so that all users are allocated workers shortly after submission.

If a job has fewer remaining tasks than its share of workers, the surplus workers are redistributed to other jobs at the same priority level. If all jobs at the highest priority are fully allocated, surplus workers cascade to jobs at the next highest priority level.
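The even split with remainder described above works like integer division. A minimal sketch of the distribution among equal-priority jobs:

```python
def distribute_workers(workers: int, jobs: int) -> list[int]:
    """Split workers evenly across jobs at the same priority.

    Extra workers that can't be divided evenly are spread across
    some of the jobs, one each.
    """
    base, extra = divmod(workers, jobs)
    return [base + 1] * extra + [base] * (jobs - extra)

# 10 workers across 3 equal-priority jobs: one job gets an extra worker.
print(distribute_workers(10, 3))  # → [4, 3, 3]
```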

This configuration has the following parameter:

`renderingTaskBuffer`  
Controls worker stickiness. A worker switches from its current job to another job at the same priority only if the difference in rendering tasks exceeds the `renderingTaskBuffer` value. A higher value keeps workers on their current jobs longer, reducing context switching. The default value is `1`.

### Weighted, balanced
<a name="jobs-scheduling-weighted-balanced"></a>

Weighted, balanced (`weightedBalanced`) uses a formula to calculate a weight for each job. Deadline Cloud assigns workers to the highest-weight job first. If multiple jobs have the same weight, workers are distributed among them.

Use weighted balanced when you need fine-grained control over how workers are distributed across jobs with varying priorities, error rates, and submission times. This configuration is appropriate for complex render farm environments where you want to tune the balance between job priority, job age, error handling, and worker stickiness.

The weight for each job is calculated as follows:

```
weight = (job.Priority * priorityWeight) +
         (job.Errors * errorWeight) +
         ((currentTimeInSeconds - job.SubmissionTime) * submissionTimeWeight) +
         ((job.RenderingTasks - renderingTaskBuffer) * renderingTaskWeight)
```

The `renderingTaskBuffer` component is applied only if the worker is currently working on the job. The `renderingTaskWeight` is usually set to a negative value so that jobs with assigned workers receive a lower weight, bringing other jobs to the front of the queue. The `errorWeight` is also usually negative so that jobs with errors are deprioritized. You can use scheduling overrides for minimum and maximum priority jobs.

This configuration has the following parameters:

`priorityWeight`  
The weight applied to a job's priority. A positive value means higher-priority jobs are scheduled first. The default value is `100.0`. Range: `0` to `10000`.

`errorWeight`  
The weight applied to a job's error count. A negative value means jobs without errors are scheduled first. The default value is `-10.0`. Range: `-10000` to `10000`.

`submissionTimeWeight`  
The weight applied to a job's submission time (in seconds). A positive value means earlier submitted jobs are scheduled first. The default value is `3.0`. Range: `0` to `10000`.

`renderingTaskWeight`  
The weight applied to the number of tasks currently rendering for a job. A negative value means jobs with fewer workers are scheduled next. The default value is `-100.0`. Range: `-10000` to `10000`.

`renderingTaskBuffer`  
The number of rendering tasks before the rendering task weight takes effect. A positive value keeps workers on their current jobs. The default value is `1`. Range: `0` to `1000`.

`maxPriorityOverride`  
Optional. When set to `alwaysScheduleFirst`, jobs at the maximum priority (100) are always scheduled before other jobs, regardless of the weighted formula. When multiple jobs have the maximum priority, ties are broken using the standard weighted formula. When the override is absent, maximum priority jobs use the standard weighted formula with no special treatment.

`minPriorityOverride`  
Optional. When set to `alwaysScheduleLast`, jobs at the minimum priority (0) are always scheduled after other jobs, regardless of the weighted formula. When multiple jobs have the minimum priority, ties are broken using the standard weighted formula. When the override is absent, minimum priority jobs use the standard weighted formula with no special treatment.
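To see how the parameters interact, the following sketch evaluates the weight formula with the default parameter values. The job values (priority, errors, age, task count) are made up for illustration:

```python
def job_weight(priority, errors, age_seconds, rendering_tasks,
               worker_on_job=True,
               priority_weight=100.0, error_weight=-10.0,
               submission_time_weight=3.0,
               rendering_task_weight=-100.0, rendering_task_buffer=1):
    """Compute a job's scheduling weight using the weightedBalanced formula.

    The renderingTaskBuffer is applied only if the worker is currently
    working on the job.
    """
    buffer = rendering_task_buffer if worker_on_job else 0
    return (priority * priority_weight
            + errors * error_weight
            + age_seconds * submission_time_weight
            + (rendering_tasks - buffer) * rendering_task_weight)

# Priority 50 job, 2 errors, submitted 60 seconds ago, 4 tasks rendering:
# 50*100 + 2*(-10) + 60*3 + (4-1)*(-100) = 5000 - 20 + 180 - 300
print(job_weight(50, 2, 60, 4))  # → 4860.0
```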

## Determine fleet compatibility
<a name="jobs-scheduling-compatibility"></a>

After you create a job, Deadline Cloud checks the host requirements for each step in the job against the capabilities of the fleets associated with the queue the job was submitted to. If a fleet meets the host requirements, the job is put into the `READY` state.

If any step in the job has requirements that can't be met by a fleet associated with the queue, the step's status is set to `NOT_COMPATIBLE`. In addition, the rest of the steps in the job are canceled.

Capabilities for a fleet are set at the fleet level. Even if a worker in a fleet meets the job's requirements, it won't be assigned tasks from the job if its fleet doesn't meet the job's requirements.

The following job template has a step that specifies host requirements for the step:

```
name: Sample Job With Host Requirements
specificationVersion: jobtemplate-2023-09
steps:
- name: Step 1
  script:
    actions:
      onRun:
        args:
        - '1'
        command: /usr/bin/sleep
  hostRequirements:
    amounts:
    # Capabilities starting with "amount." are amount capabilities. If they start with "amount.worker.",
    # they are defined by the OpenJD specification. Other names are free for custom usage.
    - name: amount.worker.vcpu
      min: 4
      max: 8
    attributes:
    - name: attr.worker.os.family
      anyOf:
      - linux
```

This job can be scheduled to a fleet with the following capabilities:

```
{
    "vCpuCount": {"min": 4, "max": 8},
    "memoryMiB": {"min": 1024},
    "osFamily": "linux",
    "cpuArchitectureType": "x86_64"
}
```

This job can't be scheduled to a fleet with any of the following capabilities.

The `vCpuCount` has no maximum, so it exceeds the maximum vCPU host requirement:

```
{
    "vCpuCount": {"min": 4},
    "memoryMiB": {"min": 1024},
    "osFamily": "linux",
    "cpuArchitectureType": "x86_64"
}
```

The `vCpuCount` has no minimum, so it doesn't satisfy the minimum vCPU host requirement:

```
{
    "vCpuCount": {"max": 8},
    "memoryMiB": {"min": 1024},
    "osFamily": "linux",
    "cpuArchitectureType": "x86_64"
}
```

The `osFamily` doesn't match:

```
{
    "vCpuCount": {"min": 4, "max": 8},
    "memoryMiB": {"min": 1024},
    "osFamily": "windows",
    "cpuArchitectureType": "x86_64"
}
```
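The rule in these examples can be expressed as interval containment: a fleet's amount capability range must fall entirely within the step's requested range. The following sketch mirrors only the documented vCPU examples; it is not how Deadline Cloud evaluates capabilities internally:

```python
import math

def amount_compatible(fleet_min=0, fleet_max=math.inf,
                      req_min=0, req_max=math.inf):
    """Check an amount capability (such as amount.worker.vcpu) against a
    step's host requirement. A missing fleet bound is treated as unbounded,
    so it can't satisfy a bounded requirement.
    """
    return req_min <= fleet_min and fleet_max <= req_max

# The fleet examples from above, against the step's min: 4, max: 8:
print(amount_compatible(4, 8, 4, 8))          # → True
print(amount_compatible(4, math.inf, 4, 8))   # → False (no fleet max)
print(amount_compatible(0, 8, 4, 8))          # → False (no fleet min)
```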

## Fleet scaling
<a name="jobs-scheduling-scaling"></a>

When a job is assigned to a compatible service-managed fleet, the fleet is auto scaled. The number of workers in the fleet changes based on the number of tasks available for the fleet to run.

When a job is assigned to a customer-managed fleet, workers might already exist or can be created using event-based auto scaling. For more information, see [Use EventBridge to handle auto scaling events ](https://docs.aws.amazon.com/autoscaling/ec2/userguide/automating-ec2-auto-scaling-with-eventbridge.html) in the *Amazon EC2 Auto Scaling User Guide*.

## Sessions
<a name="jobs-scheduling-sessions"></a>

The tasks in a job are divided into one or more sessions. Workers run the sessions to set up the environment, run the tasks, and then tear down the environment. Each session is composed of one or more actions that a worker must take.

As a worker completes session actions, additional session actions can be sent to the worker. The worker reuses existing environments and job attachments in the session to complete tasks more efficiently.

On service-managed fleet workers, session directories are deleted after the session ends, but other directories are retained between sessions. This behavior allows you to implement caching strategies for data that can be reused across multiple sessions. To cache data between sessions, store it under the home directory of the user running the job. For example, conda packages are cached under the job user's home directory at `C:\Users\job-user\.conda-pkgs` on Windows workers and `/home/job-user/.conda-pkgs` on Linux workers. This data remains available until the worker shuts down.

Job attachments are created by the submitter that you use, or as part of the job bundle that you submit with the Deadline Cloud CLI. You can also create job attachments using the `--attachments` option for the `create-job` AWS CLI command. Environments are defined in two places: queue environments attached to a specific queue, and job and step environments defined in the job template.

There are four session action types:
+ `syncInputJobAttachments` – Downloads the input job attachments to the worker.
+ `envEnter` – Performs the `onEnter` actions for an environment.
+ `taskRun` – Performs the `onRun` actions for a task.
+ `envExit` – Performs the `onExit` actions for an environment.

The following job template has a step environment. It has an `onEnter` definition to set up the step environment, an `onRun` definition that defines the task to run, and an `onExit` definition to tear down the step environment. The sessions created for this job will include an `envEnter` action, one or more `taskRun` actions, and then an `envExit` action.

```
name: Sample Job with Maya Environment
specificationVersion: jobtemplate-2023-09
steps:
- name: Maya Step
  stepEnvironments:
  - name: Maya
    description: Runs Maya in the background.
    script:
      embeddedFiles:
      - name: initData
        filename: init-data.yaml
        type: TEXT
        data: |
          scene_file: MyAwesomeSceneFile
          renderer: arnold
          camera: persp
      actions:
        onEnter:
          command: MayaAdaptor
          args:
          - daemon
          - start
          - --init-data
          - file://{{Env.File.initData}}
        onExit:
          command: MayaAdaptor
          args:
          - daemon
          - stop
  parameterSpace:
    taskParameterDefinitions:
    - name: Frame
      range: 1-5
      type: INT
  script:
    embeddedFiles:
    - name: runData
      filename: run-data.yaml
      type: TEXT
      data: |
        frame: {{Task.Param.Frame}}
    actions:
      onRun:
        command: MayaAdaptor
        args:
        - daemon
        - run
        - --run-data
        - file://{{ Task.File.runData }}
```

### Session actions pipelining
<a name="jobs-session-pipelining"></a>

Session actions pipelining lets a scheduler pre-assign multiple session actions to a worker. The worker can then run these actions sequentially, reducing or eliminating idle time between tasks.

To create an initial assignment, the scheduler creates a session with one task, the worker completes the task, and then the scheduler analyzes the task duration to determine future assignments.

The scheduler sizes future assignments based on task duration. For tasks under one minute, the scheduler uses a power-of-2 growth pattern. For example, for a 1-second task, the scheduler assigns 2 new tasks, then 4, then 8. For tasks of one minute or longer, the scheduler assigns only one new task at a time and pipelining remains disabled.

To calculate pipeline size, the scheduler does the following:
+ Uses average task duration from completed tasks
+ Aims to keep the worker busy for one minute
+ Considers only tasks within the same session
+ Does not share duration data across workers
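Under those rules, the growth of each assignment can be sketched as follows. This is an illustration of the documented behavior, not the service's actual implementation:

```python
import math

def next_assignment(avg_task_seconds: float, previous_batch: int) -> int:
    """Number of new tasks to pre-assign to a worker.

    Tasks of a minute or longer disable pipelining (one task at a time).
    Shorter tasks grow by powers of 2, capped so the batch keeps the
    worker busy for roughly one minute.
    """
    if avg_task_seconds >= 60:
        return 1
    target = max(1, math.floor(60 / avg_task_seconds))
    return min(previous_batch * 2, target)

# 1-second tasks: batches grow 2, 4, 8, ...
print(next_assignment(1, 1), next_assignment(1, 2), next_assignment(1, 4))  # → 2 4 8
# 2-minute tasks: pipelining stays disabled.
print(next_assignment(120, 4))  # → 1
```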

With session actions pipelining, workers start new tasks immediately and there's no waiting time between scheduler requests. It also improves worker efficiency and task distribution for long-running processes.

Additionally, if a new higher priority job becomes available, the worker finishes the work it was already assigned before its current session ends and a session from the higher priority job is assigned.

## Step dependencies
<a name="jobs-scheduling-dependencies"></a>

Deadline Cloud supports defining dependencies between steps so that one step waits until another step is complete before starting. You can define more than one dependency for a step. A step with a dependency isn't scheduled until all of its dependencies are complete.

If the job template defines a circular dependency, the job is rejected and the job status is set to `CREATE_FAILED`.

The following job template creates a job with two steps. `StepB` depends on `StepA`. `StepB` only runs after `StepA` completes successfully. 

After the job is created, `StepA` is in the `READY` state and `StepB` is in the `PENDING` state. After `StepA` finishes, `StepB` moves to the `READY` state. If `StepA` fails, or if `StepA` is canceled, `StepB` moves to the `CANCELED` state.

You can set a dependency on multiple steps. For example, if `StepC` depends on both `StepA` and `StepB`, `StepC` won't start until the other two steps finish.

Step dependencies have the following restrictions:
+ **Dependencies per step** – A step can depend on a maximum of 128 other steps.
+ **Consumers per step** – A maximum of 32 other steps can depend on a single step.

```
name: Step-Step Dependency Test
specificationVersion: 'jobtemplate-2023-09'
steps:
- name: A
  script:
    actions:
      onRun:
        command: bash
        args: ['{{ Task.File.run }}']
    embeddedFiles:
      - name: run
        type: TEXT
        data: |
          #!/usr/bin/env bash

          set -euo pipefail

          sleep 1
          echo Task A Done!
- name: B
  dependencies:
  - dependsOn: A # This means Step B depends on Step A
  script:
    actions:
      onRun:
        command: bash
        args: ['{{ Task.File.run }}']
    embeddedFiles:
      - name: run
        type: TEXT
        data: |
          #!/usr/bin/env bash

          set -euo pipefail

          sleep 1
          echo Task B Done!
```
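To make a step wait on more than one step, list each dependency. A hypothetical step `C` that extends the template above might look like this:

```
- name: C
  dependencies:
  - dependsOn: A
  - dependsOn: B # Step C waits for both Step A and Step B to finish
  script:
    actions:
      onRun:
        command: bash
        args: ['{{ Task.File.run }}']
    embeddedFiles:
      - name: run
        type: TEXT
        data: |
          #!/usr/bin/env bash
          echo Task C Done!
```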

# Modify a job in Deadline Cloud
<a name="build-jobs-modifying"></a>

You can use the following AWS Command Line Interface (AWS CLI) `update` commands to modify the configuration of a job, or to set the target status of a job, step, or task:
+ `aws deadline update-job`
+ `aws deadline update-step`
+ `aws deadline update-task`

In the following examples of the `update` commands, replace each *`user input placeholder`* with your own information.

**Example – Requeue a job**  
All tasks in the job switch to the `READY` status, unless there are step dependencies. Tasks in steps with dependencies switch to either `READY` or `PENDING` as they are restored.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--target-task-run-status PENDING
```

**Example – Cancel a job**  
All tasks in the job that don't have the status `SUCCEEDED` or `FAILED` are marked `CANCELED`.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--target-task-run-status CANCELED
```

**Example – Mark a job failed**  
All tasks in the job that have the status `SUCCEEDED` are left unchanged. All other tasks are marked `FAILED`.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--target-task-run-status FAILED
```

**Example – Mark a job successful**  
All tasks in the job move to the `SUCCEEDED` state.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--target-task-run-status SUCCEEDED
```

**Example – Suspend a job**  
Tasks in the job in the `SUCCEEDED`, `CANCELED`, or `FAILED` state don't change. All other tasks are marked `SUSPENDED`.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--target-task-run-status SUSPENDED
```

**Example – Change the priority of a job**  
Updates the priority of a job in a queue to change the order that it is scheduled. Higher priority jobs are generally scheduled first.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--priority 100
```

**Example – Change the number of failed tasks allowed**  
Updates the maximum number of failed tasks that the job can have before the remaining tasks are canceled.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--max-failed-tasks-count 200
```

**Example – Change the number of task retries allowed**  
Updates the maximum number of retries for a task before the task fails. A task that has reached the maximum number of retries can't be requeued until this value is increased.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--max-retries-per-task 10
```

**Example – Archive a job**  
Updates the job's lifecycle status to `ARCHIVED`. Archived jobs can't be scheduled or modified. You can only archive a job that is in the `FAILED`, `CANCELED`, `SUCCEEDED`, or `SUSPENDED` state.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--lifecycle-status ARCHIVED
```

**Example – Change the name of a job**  
Updates the display name of a job. The job name can be up to 128 characters long.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--name "New Job Name"
```

**Example – Change the description of a job**  
Updates the description of a job. The description can be up to 2048 characters long. To remove the existing description, pass an empty string.  

```
aws deadline update-job \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--description "New Job Description"
```

**Example – Requeue a step**  
All tasks in the step switch to the `READY` state, unless there are step dependencies. Tasks in steps with dependencies switch to either `READY` or `PENDING` as they are restored.  

```
aws deadline update-step \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--step-id stepID \
--target-task-run-status PENDING
```

**Example – Cancel a step**  
All tasks in the step that don't have the status `SUCCEEDED` or `FAILED` are marked `CANCELED`.  

```
aws deadline update-step \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--step-id stepID \
--target-task-run-status CANCELED
```

**Example – Mark a step failed**  
All tasks in the step that have the status `SUCCEEDED` are left unchanged. All other tasks are marked `FAILED`.  

```
aws deadline update-step \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--step-id stepID \
--target-task-run-status FAILED
```

**Example – Mark a step successful**  
All tasks in the step are marked `SUCCEEDED`.  

```
aws deadline update-step \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--step-id stepID \
--target-task-run-status SUCCEEDED
```

**Example – Suspend a step**  
Tasks in the step in the `SUCCEEDED`, `CANCELED`, or `FAILED` state don't change. All other tasks are marked `SUSPENDED`.  

```
aws deadline update-step \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--step-id stepID \
--target-task-run-status SUSPENDED
```

**Example – Change the status of a task**  
When you use the `update-task` Deadline Cloud CLI command, the task switches to the specified status.  

```
aws deadline update-task \
--farm-id farmID \
--queue-id queueID \
--job-id jobID \
--step-id stepID \
--task-id taskID \
--target-task-run-status SUCCEEDED | SUSPENDED | CANCELED | FAILED | PENDING
```