

# Configuration
<a name="configuration"></a>

By default, AWS ParallelCluster uses the `~/.parallelcluster/config` file for all configuration parameters. You can specify a custom configuration file by using the `-c` or `--config` command line option or the `AWS_PCLUSTER_CONFIG_FILE` environment variable.
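For example, either of the following selects a custom configuration file (the path and cluster name are illustrative):

```
pcluster create mycluster --config /path/to/custom-config
export AWS_PCLUSTER_CONFIG_FILE=/path/to/custom-config
```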

An example configuration file is installed with AWS ParallelCluster in the Python directory at `site-packages/aws-parallelcluster/examples/config`. The example configuration file is also available on GitHub, at [https://github.com/aws/aws-parallelcluster/blob/v2.11.9/cli/src/pcluster/examples/config](https://github.com/aws/aws-parallelcluster/blob/v2.11.9/cli/src/pcluster/examples/config).

Current AWS ParallelCluster 2 version: 2.11.9.

**Topics**
+ [Layout](#layout)
+ [`[global]` section](global.md)
+ [`[aws]` section](aws.md)
+ [`[aliases]` section](aliases.md)
+ [`[cluster]` section](cluster-definition.md)
+ [`[compute_resource]` section](compute-resource-section.md)
+ [`[cw_log]` section](cw-log-section.md)
+ [`[dashboard]` section](dashboard-section.md)
+ [`[dcv]` section](dcv-section.md)
+ [`[ebs]` section](ebs-section.md)
+ [`[efs]` section](efs-section.md)
+ [`[fsx]` section](fsx-section.md)
+ [`[queue]` section](queue-section.md)
+ [`[raid]` section](raid-section.md)
+ [`[scaling]` section](scaling-section.md)
+ [`[vpc]` section](vpc-section.md)
+ [Examples](examples.md)

## Layout
<a name="layout"></a>

An AWS ParallelCluster configuration is defined in multiple sections.

The following sections are required: [`[global]` section](global.md) and [`[aws]` section](aws.md).

You also must include at least one [`[cluster]` section](cluster-definition.md) and one [`[vpc]` section](vpc-section.md).

A section starts with the section name in brackets, followed by parameters and configuration.

```
[global]
cluster_template = default
update_check = true
sanity_check = true
```
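Putting these rules together, a minimal configuration skeleton might look like the following. The section names and parameter values are illustrative placeholders, not a complete working example:

```
[global]
cluster_template = default

[aws]
aws_region_name = us-east-1

[cluster default]
key_name = mykey
vpc_settings = public

[vpc public]
vpc_id = vpc-12345678
master_subnet_id = subnet-12345678
```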

# `[global]` section
<a name="global"></a>

Specifies global configuration options related to `pcluster`.

```
[global]
```

**Topics**
+ [`cluster_template`](#cluster-template)
+ [`update_check`](#update-check)
+ [`sanity_check`](#sanity-check)

## `cluster_template`
<a name="cluster-template"></a>

Defines the name of the `cluster` section that's used for the cluster by default. For additional information about `cluster` sections, see [`[cluster]` section](cluster-definition.md). The cluster name must start with a letter, contain no more than 60 characters, and only contain letters, numbers, and hyphens (-).

For example, the following setting specifies that the section that starts `[cluster default]` is used by default.

```
cluster_template = default
```

[Update policy: This setting is not analyzed during an update.](using-pcluster-update.md#update-policy-setting-ignored)

## `update_check`
<a name="update-check"></a>

**(Optional)** Checks for updates to `pcluster`.

The default value is `true`.

```
update_check = true
```

[Update policy: This setting is not analyzed during an update.](using-pcluster-update.md#update-policy-setting-ignored)

## `sanity_check`
<a name="sanity-check"></a>

**(Optional)** Attempts to validate the configuration of the resources that are defined in the cluster parameters.

The default value is `true`.

**Warning**  
If `sanity_check` is set to `false`, important checks are skipped. This might cause your configuration to not function as intended.

```
sanity_check = true
```

**Note**  
Before AWS ParallelCluster version 2.5.0, [`sanity_check`](#sanity-check) defaulted to `false`.

[Update policy: This setting is not analyzed during an update.](using-pcluster-update.md#update-policy-setting-ignored)

# `[aws]` section
<a name="aws"></a>

**(Optional)** Used to select the AWS Region.

Cluster creation uses this priority order to select the AWS Region for a new cluster:

1. `-r` or `--region` parameter to [`pcluster create`](pcluster.create.md).

1. `AWS_DEFAULT_REGION` environment variable.

1. `aws_region_name` setting in the `[aws]` section of the AWS ParallelCluster config file (default location `~/.parallelcluster/config`). This is the location updated by the [`pcluster configure`](pcluster.configure.md) command.

1. `region` setting in the `[default]` section of the AWS CLI config file (`~/.aws/config`).
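As an illustrative sketch of this precedence (the Region value and cluster name are examples), each of the following sets the AWS Region, with earlier items overriding later ones:

```
# 1. Command line option (highest priority)
pcluster create -r us-east-2 mycluster

# 2. Environment variable
export AWS_DEFAULT_REGION=us-east-2

# 3. [aws] section of ~/.parallelcluster/config
[aws]
aws_region_name = us-east-2

# 4. [default] section of ~/.aws/config (lowest priority)
[default]
region = us-east-2
```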

**Note**  
Before AWS ParallelCluster version 2.10.0, these settings were required and applied to all clusters.

To store credentials, you can use the environment, IAM roles for Amazon EC2, or the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html), rather than saving credentials into the AWS ParallelCluster config file.

```
[aws]
aws_region_name = Region
```

[Update policy: This setting is not analyzed during an update.](using-pcluster-update.md#update-policy-setting-ignored)

# `[aliases]` section
<a name="aliases"></a>

Specifies aliases, and enables you to customize the `ssh` command.

Note the following default settings:
+ `CFN_USER` is set to the default user name for the OS
+ `MASTER_IP` is set to the IP address of the head node
+ `ARGS` is set to whatever arguments the user provides after *`pcluster ssh cluster_name`*

```
[aliases]
# This is the aliases section, you can configure
# ssh alias here
ssh = ssh {CFN_USER}@{MASTER_IP} {ARGS}
```
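With the default alias above, a command such as the following (the cluster name and extra argument are illustrative) connects to the head node. On `alinux2`, for example, it expands to `ssh ec2-user@<head-node-ip> -i ~/.ssh/mykey.pem`:

```
pcluster ssh mycluster -i ~/.ssh/mykey.pem
```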

[Update policy: This setting is not analyzed during an update.](using-pcluster-update.md#update-policy-setting-ignored)

# `[cluster]` section
<a name="cluster-definition"></a>

Defines a cluster template that can be used to create a cluster. A config file can contain multiple `[cluster]` sections.

The same cluster template can be used to create multiple clusters.

The format is `[cluster cluster-template-name]`. The [`[cluster]` section](#cluster-definition) named by the [`cluster_template`](global.md#cluster-template) setting in the [`[global]` section](global.md) is used by default, but can be overridden on the [`pcluster`](pcluster.md) command line.

*cluster-template-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[cluster default]
```
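For example, a configuration file can define several cluster templates and switch between them. The template names and settings below are illustrative:

```
[global]
cluster_template = default

[cluster default]
compute_instance_type = t2.micro

[cluster highmem]
compute_instance_type = r5.xlarge
```

A template other than the default can then be selected on the command line, for example with `pcluster create -t highmem mycluster`.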

**Topics**
+ [`additional_cfn_template`](#additional-cfn-template)
+ [`additional_iam_policies`](#additional-iam-policies)
+ [`base_os`](#base-os)
+ [`cluster_resource_bucket`](#cluster-resource-bucket-section)
+ [`cluster_type`](#cluster-type)
+ [`compute_instance_type`](#compute-instance-type)
+ [`compute_root_volume_size`](#compute-root-volume-size)
+ [`custom_ami`](#custom-ami-section)
+ [`cw_log_settings`](#cw-log-settings)
+ [`dashboard_settings`](#dashboard-settings)
+ [`dcv_settings`](#dcv-settings)
+ [`desired_vcpus`](#desired-vcpus)
+ [`disable_cluster_dns`](#disable-cluster-dns-settings)
+ [`disable_hyperthreading`](#disable-hyperthreading)
+ [`ebs_settings`](#ebs-settings)
+ [`ec2_iam_role`](#ec2-iam-role)
+ [`efs_settings`](#efs-settings)
+ [`enable_efa`](#enable-efa)
+ [`enable_efa_gdr`](#enable-efa-gdr)
+ [`enable_intel_hpc_platform`](#enable-intel-hpc-platform)
+ [`encrypted_ephemeral`](#encrypted-ephemeral)
+ [`ephemeral_dir`](#ephemeral-dir)
+ [`extra_json`](#extra-json)
+ [`fsx_settings`](#fsx-settings)
+ [`iam_lambda_role`](#iam-lambda-role)
+ [`initial_queue_size`](#configuration-initial-queue-size)
+ [`key_name`](#key-name)
+ [`maintain_initial_size`](#maintain-initial-size)
+ [`master_instance_type`](#master-instance-type)
+ [`master_root_volume_size`](#master-root-volume-size)
+ [`max_queue_size`](#configuration-max-queue-size)
+ [`max_vcpus`](#max-vcpus)
+ [`min_vcpus`](#min-vcpus)
+ [`placement`](#placement)
+ [`placement_group`](#placement-group)
+ [`post_install`](#post-install)
+ [`post_install_args`](#post-install-args)
+ [`pre_install`](#pre-install)
+ [`pre_install_args`](#pre-install-args)
+ [`proxy_server`](#proxy-server)
+ [`queue_settings`](#queue-settings)
+ [`raid_settings`](#raid-settings)
+ [`s3_read_resource`](#s3-read-resource)
+ [`s3_read_write_resource`](#s3-read-write-resource)
+ [`scaling_settings`](#scaling-settings)
+ [`scheduler`](#scheduler)
+ [`shared_dir`](#cluster-shared-dir)
+ [`spot_bid_percentage`](#spot-bid-percentage)
+ [`spot_price`](#spot-price)
+ [`tags`](#tags)
+ [`template_url`](#template-url)
+ [`vpc_settings`](#vpc-settings)

## `additional_cfn_template`
<a name="additional-cfn-template"></a>

**(Optional)** Defines an additional AWS CloudFormation template to launch along with the cluster. This additional template is used for creating resources that are outside of the cluster but are part of the cluster's lifecycle.

The value must be an HTTP URL to a public template, with all parameters provided.

There is no default value.

```
additional_cfn_template = https://<bucket-name>.s3.amazonaws.com/my-cfn-template.yaml
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `additional_iam_policies`
<a name="additional-iam-policies"></a>

**(Optional)** Specifies a comma-separated list of Amazon Resource Names (ARNs) of IAM policies for Amazon EC2. These policies are attached to the root role used in the cluster, in addition to the permissions that AWS ParallelCluster requires. An IAM policy name and its ARN are different. Names can't be used as an argument to `additional_iam_policies`.

If your intent is to add extra policies to the default settings for cluster nodes, we recommend that you pass the additional custom IAM policies with the `additional_iam_policies` setting instead of using the [`ec2_iam_role`](#ec2-iam-role) settings to add your specific EC2 policies. This is because `additional_iam_policies` are added to the default permissions that AWS ParallelCluster requires. An existing [`ec2_iam_role`](#ec2-iam-role) must include all permissions required. However, because the permissions required often change from release to release as features are added, an existing [`ec2_iam_role`](#ec2-iam-role) can become obsolete.

There is no default value.

```
additional_iam_policies = arn:aws:iam::123456789012:policy/CustomEC2Policy
```
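Multiple policy ARNs are separated by commas. For example (the second ARN is an AWS managed policy, shown for illustration):

```
additional_iam_policies = arn:aws:iam::123456789012:policy/CustomEC2Policy, arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```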

**Note**  
Support for [`additional_iam_policies`](#additional-iam-policies) was added in AWS ParallelCluster version 2.5.0.

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `base_os`
<a name="base-os"></a>

**(Required)** Specifies which OS type is used in the cluster.

Available options are:
+ `alinux2`
+ `centos7`
+ `ubuntu1804`
+ `ubuntu2004`

**Note**  
For AWS Graviton-based instances, only `alinux2`, `ubuntu1804`, or `ubuntu2004` are supported.

**Note**  
Support for `centos8` was removed in AWS ParallelCluster version 2.11.4. Support for `ubuntu2004` was added and support for `alinux` and `ubuntu1604` was removed in AWS ParallelCluster version 2.11.0. Support for `centos8` was added and support for `centos6` was removed in AWS ParallelCluster version 2.10.0. Support for `alinux2` was added in AWS ParallelCluster version 2.6.0. Support for `ubuntu1804` was added, and support for `ubuntu1404` was removed in AWS ParallelCluster version 2.5.0.

Except for the specific AWS Regions listed in the following table, which don't support `centos7`, all AWS commercial Regions support all of the following operating systems.


| Partition (AWS Regions) | `alinux2` | `centos7` | `ubuntu1804` and `ubuntu2004` | 
| --- | --- | --- | --- | 
| Commercial (All AWS Regions not specifically mentioned) | True | True | True | 
| AWS GovCloud (US-East) (us-gov-east-1) | True | False | True | 
| AWS GovCloud (US-West) (us-gov-west-1) | True | False | True | 
| China (Beijing) (cn-north-1) | True | False | True | 
| China (Ningxia) (cn-northwest-1) | True | False | True | 

**Note**  
The [`base_os`](#base-os) parameter also determines the user name that's used to log into the cluster.
+ `centos7`: `centos` 
+ `ubuntu1804` and `ubuntu2004`: `ubuntu` 
+ `alinux2`: `ec2-user` 

**Note**  
Before AWS ParallelCluster version 2.7.0, the [`base_os`](#base-os) parameter was optional, and the default was `alinux`. Starting with AWS ParallelCluster version 2.7.0, the [`base_os`](#base-os) parameter is required.

**Note**  
If the [`scheduler`](#scheduler) parameter is `awsbatch`, only `alinux2` is supported.

```
base_os = alinux2
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `cluster_resource_bucket`
<a name="cluster-resource-bucket-section"></a>

**(Optional)** Specifies the name of the Amazon S3 bucket that's used to host resources that are generated when the cluster is created. The bucket must have versioning enabled. For more information, see [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) in the *Amazon Simple Storage Service User Guide*. This bucket can be used for multiple clusters. The bucket must be in the same Region as the cluster.

If this parameter isn't specified, a new bucket is created when the cluster is created. The new bucket has the name `parallelcluster-random_string`. In this name, *random_string* is a random string of alphanumeric characters. All cluster resources are stored in this bucket in a path with the form `bucket_name/resource_directory`. `resource_directory` has the form `stack_name-random_string`, where *stack_name* is the name of one of the CloudFormation stacks used by AWS ParallelCluster. The value of *bucket_name* can be found in the `ResourcesS3Bucket` value in the output of the `parallelcluster-clustername` stack. The value of *resource_directory* can be found in the value of the `ArtifactS3RootDirectory` output from the same stack.

The default value is `parallelcluster-random_string`.

```
cluster_resource_bucket = amzn-s3-demo-bucket
```
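The stack outputs mentioned above can be read with the AWS CLI. The following sketch (the cluster name is illustrative) retrieves the `ResourcesS3Bucket` value:

```
aws cloudformation describe-stacks --stack-name parallelcluster-mycluster \
    --query "Stacks[0].Outputs[?OutputKey=='ResourcesS3Bucket'].OutputValue" \
    --output text
```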

**Note**  
Support for [`cluster_resource_bucket`](#cluster-resource-bucket-section) was added in AWS ParallelCluster version 2.10.0.

[Update policy: If this setting is changed, the update is not allowed. Updating this setting cannot be forced.](using-pcluster-update.md#update-policy-read-only-resource-bucket)

## `cluster_type`
<a name="cluster-type"></a>

**(Optional)** Defines the type of cluster to launch. If the [`queue_settings`](#queue-settings) setting is defined, then this setting must be replaced by the [`compute_type`](queue-section.md#queue-compute-type) settings in the [`[queue]` sections](queue-section.md).

Valid options are: `ondemand`, and `spot`.

The default value is `ondemand`.

For more information about Spot Instances, see [Working with Spot Instances](spot.md).

**Note**  
Using Spot Instances requires that the `AWSServiceRoleForEC2Spot` service-linked role exist in your account. To create this role in your account using the AWS CLI, run the following command:  

```
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
```
For more information, see [Service-linked role for Spot Instance requests](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#service-linked-roles-spot-instance-requests) in the *Amazon EC2 User Guide*.

```
cluster_type = ondemand
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `compute_instance_type`
<a name="compute-instance-type"></a>

**(Optional)** Defines the Amazon EC2 instance type that's used for the cluster compute nodes. The architecture of the instance type must be the same as the architecture used for the [`master_instance_type`](#master-instance-type) setting. If the [`queue_settings`](#queue-settings) setting is defined, then this setting must be replaced by the [`instance_type`](compute-resource-section.md#compute-resource-instance-type) settings in the [`[compute_resource]` sections](compute-resource-section.md).

If you're using the `awsbatch` scheduler, see the compute environment creation panel in the AWS Batch console for a list of supported instance types.

The default value is `t2.micro` (`optimal` when the scheduler is `awsbatch`).

```
compute_instance_type = t2.micro
```

**Note**  
Support for AWS Graviton-based instances (including `A1` and `C6g` instances) was added in AWS ParallelCluster version 2.8.0.

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `compute_root_volume_size`
<a name="compute-root-volume-size"></a>

**(Optional)** Specifies the ComputeFleet root volume size in gibibytes (GiB). The AMI must support `growroot`.

The default value is `35`.

**Note**  
For AWS ParallelCluster versions between 2.5.0 and 2.10.4, the default was 25. Before AWS ParallelCluster version 2.5.0, the default was 20.

```
compute_root_volume_size = 35
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `custom_ami`
<a name="custom-ami-section"></a>

**(Optional)** Specifies the ID of a custom AMI to use for the head and compute nodes instead of the default [published AMIs](https://github.com/aws/aws-parallelcluster/blob/v2.11.9/amis.txt). For more information, see [Modify an AMI](tutorials_02_ami_customization.md#modify-an-aws-parallelcluster-ami) or [Build a Custom AWS ParallelCluster AMI](tutorials_02_ami_customization.md#build-a-custom-aws-parallelcluster-ami).

There is no default value.

```
custom_ami = ami-00d4efc81188687a0
```

If the custom AMI requires additional permissions for its launch, these permissions must be added to both the user and head node policies.

For example, if a custom AMI has an encrypted snapshot associated with it, the following additional policies are required in both the user and head node policies:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111122223333:key/<AWS_KMS_KEY_ID>"
            ]
        }
    ]
}
```

------

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `cw_log_settings`
<a name="cw-log-settings"></a>

**(Optional)** Identifies the `[cw_log]` section with the CloudWatch Logs configuration. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

For more information, see the [`[cw_log]` section](cw-log-section.md), [Amazon CloudWatch dashboard](cloudwatch-dashboard.md), and [Integration with Amazon CloudWatch Logs](cloudwatch-logs.md).

For example, the following setting specifies that the section that starts `[cw_log custom-cw]` is used for the CloudWatch Logs configuration.

```
cw_log_settings = custom-cw
```

**Note**  
Support for [`cw_log_settings`](#cw-log-settings) was added in AWS ParallelCluster version 2.6.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `dashboard_settings`
<a name="dashboard-settings"></a>

**(Optional)** Identifies the `[dashboard]` section with the CloudWatch dashboard configuration. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

For more information, see the [`[dashboard]` section](dashboard-section.md).

For example, the following setting specifies that the section that starts `[dashboard custom-dashboard]` is used for the CloudWatch dashboard configuration.

```
dashboard_settings = custom-dashboard
```

**Note**  
Support for [`dashboard_settings`](#dashboard-settings) was added in AWS ParallelCluster version 2.10.0.

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `dcv_settings`
<a name="dcv-settings"></a>

**(Optional)** Identifies the `[dcv]` section with the Amazon DCV configuration. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

For more information, see the [`[dcv]` section](dcv-section.md).

For example, the following setting specifies that the section that starts `[dcv custom-dcv]` is used for the Amazon DCV configuration.

```
dcv_settings = custom-dcv
```

**Note**  
On AWS Graviton-based instances, Amazon DCV is only supported on `alinux2`.

**Note**  
Support for [`dcv_settings`](#dcv-settings) was added in AWS ParallelCluster version 2.5.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `desired_vcpus`
<a name="desired-vcpus"></a>

**(Optional)** Specifies the desired number of vCPUs in the compute environment. Used only if the scheduler is `awsbatch`.

The default value is `4`.

```
desired_vcpus = 4
```
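With the `awsbatch` scheduler, `desired_vcpus` is typically set alongside the related sizing settings [`min_vcpus`](#min-vcpus) and [`max_vcpus`](#max-vcpus); the values below are illustrative:

```
scheduler = awsbatch
min_vcpus = 0
desired_vcpus = 4
max_vcpus = 20
```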

[Update policy: This setting is not analyzed during an update.](using-pcluster-update.md#update-policy-setting-ignored)

## `disable_cluster_dns`
<a name="disable-cluster-dns-settings"></a>

**(Optional)** Specifies whether to skip creating the DNS entries for the cluster. By default, AWS ParallelCluster creates a Route 53 hosted zone. If `disable_cluster_dns` is set to `true`, the hosted zone isn't created.

The default value is `false`.

```
disable_cluster_dns = true
```

**Warning**  
A name resolution system is required for the cluster to operate properly. If `disable_cluster_dns` is set to `true`, an additional name resolution system must also be provided.

**Important**  
[`disable_cluster_dns`](#disable-cluster-dns-settings) = `true` is only supported if the [`queue_settings`](#queue-settings) setting is specified.

**Note**  
Support for [`disable_cluster_dns`](#disable-cluster-dns-settings) was added in AWS ParallelCluster version 2.9.1.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `disable_hyperthreading`
<a name="disable-hyperthreading"></a>

**(Optional)** Disables hyperthreading on the head and compute nodes. Not all instance types can disable hyperthreading. For a list of instance types that support disabling hyperthreading, see [CPU cores and threads for each CPU core for each instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-supported-instances-values) in the *Amazon EC2 User Guide*. If the [`queue_settings`](#queue-settings) setting is defined, either this setting can be defined, or the [`disable_hyperthreading`](queue-section.md#queue-disable-hyperthreading) settings in the [`[queue]` sections](queue-section.md) can be defined.

The default value is `false`.

```
disable_hyperthreading = true
```

**Note**  
[`disable_hyperthreading`](#disable-hyperthreading) only affects the head node when `scheduler = awsbatch`.

**Note**  
Support for [`disable_hyperthreading`](#disable-hyperthreading) was added in AWS ParallelCluster version 2.5.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ebs_settings`
<a name="ebs-settings"></a>

**(Optional)** Identifies the `[ebs]` sections with the Amazon EBS volumes that are mounted on the head node. When using multiple Amazon EBS volumes, enter these parameters in a comma-separated list. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

Up to five (5) additional Amazon EBS volumes are supported.

For more information, see the [`[ebs]` section](ebs-section.md).

For example, the following setting specifies that the sections that start `[ebs custom1]` and `[ebs custom2]` are used for the Amazon EBS volumes.

```
ebs_settings = custom1, custom2
```
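Each name in the list refers to an `[ebs]` section defined elsewhere in the file. A sketch (the parameter values are illustrative; see the [`[ebs]` section](ebs-section.md) for the available parameters):

```
ebs_settings = custom1, custom2

[ebs custom1]
shared_dir = vol1
volume_size = 50

[ebs custom2]
shared_dir = vol2
volume_size = 100
```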

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ec2_iam_role`
<a name="ec2-iam-role"></a>

**(Optional)** Defines the name of an existing IAM role for Amazon EC2 that's attached to all instances in the cluster. An IAM role name and its Amazon Resource Name (ARN) are different. ARNs can't be used as an argument to `ec2_iam_role`.

If this option is specified, the [`additional_iam_policies`](#additional-iam-policies) setting is ignored. If your intent is to add extra policies to the default settings for cluster nodes, we recommend that you pass the additional custom IAM policies with the [`additional_iam_policies`](#additional-iam-policies) setting instead of using the `ec2_iam_role` settings.

If this option isn't specified, the default AWS ParallelCluster IAM role for Amazon EC2 is used. For more information, see [AWS Identity and Access Management roles in AWS ParallelCluster](iam.md).

There is no default value.

```
ec2_iam_role = ParallelClusterInstanceRole
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `efs_settings`
<a name="efs-settings"></a>

**(Optional)** Specifies settings related to the Amazon EFS file system. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

For more information, see the [`[efs]` section](efs-section.md).

For example, the following setting specifies that the section that starts `[efs customfs]` is used for the Amazon EFS file system configuration.

```
efs_settings = customfs
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `enable_efa`
<a name="enable-efa"></a>

**(Optional)** If present, specifies that Elastic Fabric Adapter (EFA) is enabled for the compute nodes. To view the list of EC2 instances that support EFA, see [Supported instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html#efa-instance-types) in the *Amazon EC2 User Guide for Linux Instances*. For more information, see [Elastic Fabric Adapter](efa.md). If the [`queue_settings`](#queue-settings) setting is defined, either this setting can be defined, or the [`enable_efa`](queue-section.md#queue-enable-efa) settings in the [`[queue]` section](queue-section.md) can be defined. A cluster placement group should be used to minimize latencies between instances. For more information, see [`placement`](#placement) and [`placement_group`](#placement-group).

```
enable_efa = compute
```

**Note**  
Support for EFA on Arm-based Graviton2 instances was added in AWS ParallelCluster version 2.10.1.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `enable_efa_gdr`
<a name="enable-efa-gdr"></a>

**(Optional)** Starting with AWS ParallelCluster version 2.11.3, this setting has no effect. Elastic Fabric Adapter (EFA) support for GPUDirect RDMA (remote direct memory access) is always enabled if it's supported by both the instance type and the operating system.

**Note**  
AWS ParallelCluster version 2.10.0 through 2.11.2: If `compute`, specifies that Elastic Fabric Adapter (EFA) support for GPUDirect RDMA (remote direct memory access) is enabled for the compute nodes. Setting this setting to `compute` requires that the [`enable_efa`](#enable-efa) setting is set to `compute`. EFA support for GPUDirect RDMA is supported by specific instance types (`p4d.24xlarge`) on specific operating systems ([`base_os`](#base-os) is `alinux2`, `centos7`, `ubuntu1804`, or `ubuntu2004`). If the [`queue_settings`](#queue-settings) setting is defined, either this setting can be defined, or the [`enable_efa_gdr`](queue-section.md#queue-enable-efa-gdr) settings in the [`[queue]` sections](queue-section.md) can be defined. A cluster placement group should be used to minimize latencies between instances. For more information, see [`placement`](#placement) and [`placement_group`](#placement-group).

```
enable_efa_gdr = compute
```

**Note**  
Support for `enable_efa_gdr` was added in AWS ParallelCluster version 2.10.0.

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `enable_intel_hpc_platform`
<a name="enable-intel-hpc-platform"></a>

**(Optional)** If present, indicates that the [End user license agreement](https://software.intel.com/en-us/articles/end-user-license-agreement) for Intel Parallel Studio is accepted. This causes Intel Parallel Studio to be installed on the head node and shared with the compute nodes. This adds several minutes to the time it takes the head node to bootstrap. The [`enable_intel_hpc_platform`](#enable-intel-hpc-platform) setting is only supported on CentOS 7 ([`base_os`](#base-os) = `centos7`).

The default value is `false`.

```
enable_intel_hpc_platform = true
```

**Note**  
The [`enable_intel_hpc_platform`](#enable-intel-hpc-platform) parameter isn't compatible with AWS Graviton-based instances.

**Note**  
Support for [`enable_intel_hpc_platform`](#enable-intel-hpc-platform) was added in AWS ParallelCluster version 2.5.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `encrypted_ephemeral`
<a name="encrypted-ephemeral"></a>

**(Optional)** Encrypts the ephemeral instance store volumes with non-recoverable in-memory keys, using LUKS (Linux Unified Key Setup).

For more information, see [https://gitlab.com/cryptsetup/cryptsetup/blob/master/README.md](https://gitlab.com/cryptsetup/cryptsetup/blob/master/README.md).

The default value is `false`.

```
encrypted_ephemeral = true
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ephemeral_dir`
<a name="ephemeral-dir"></a>

**(Optional)** Defines the path where instance store volumes are mounted if they are used.

The default value is `/scratch`.

```
ephemeral_dir = /scratch
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `extra_json`
<a name="extra-json"></a>

**(Optional)** Defines the extra JSON that's merged into the Chef `dna.json`. For more information, see [Building a Custom AWS ParallelCluster AMI](tutorials_02_ami_customization.md).

The default value is `{}`.

```
extra_json = {}
```

**Note**  
Starting with AWS ParallelCluster version 2.6.1, most of the install recipes are skipped by default when launching nodes to improve start up times. To run all of the install recipes for better backwards compatibility at the expense of startup times, add `"skip_install_recipes" : "no"` to the `cluster` key in the [`extra_json`](#extra-json) setting. For example:  

```
extra_json = { "cluster" : { "skip_install_recipes" : "no" } }
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `fsx_settings`
<a name="fsx-settings"></a>

**(Optional)** Specifies the section that defines the FSx for Lustre configuration. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

For more information, see the [`[fsx]` section](fsx-section.md).

For example, the following setting specifies that the section that starts `[fsx fs]` is used for the FSx for Lustre configuration.

```
fsx_settings = fs
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `iam_lambda_role`
<a name="iam-lambda-role"></a>

**(Optional)** Defines the name of an existing AWS Lambda execution role. This role is attached to all Lambda functions in the cluster. For more information, see [AWS Lambda execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) in the *AWS Lambda Developer Guide*.

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

An IAM role name and its Amazon Resource Name (ARN) are different. ARNs can't be used as an argument to `iam_lambda_role`. If both [`ec2_iam_role`](#ec2-iam-role) and `iam_lambda_role` are defined, and the [`scheduler`](#scheduler) is `sge`, `slurm`, or `torque`, then no roles are created. If the [`scheduler`](#scheduler) is `awsbatch`, then roles are created during [`pcluster start`](pcluster.start.md). For example policies, see [`ParallelClusterLambdaPolicy` using SGE, Slurm, or Torque](iam.md#parallelcluster-lambda-policy) and [`ParallelClusterLambdaPolicy` using `awsbatch`](iam.md#parallelcluster-lambda-policy-batch).

There is no default value.

```
iam_lambda_role = ParallelClusterLambdaRole
```

**Note**  
Support for `iam_lambda_role` was added in AWS ParallelCluster version 2.10.1.

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `initial_queue_size`
<a name="configuration-initial-queue-size"></a>

**(Optional)** Sets the initial number of Amazon EC2 instances to launch as compute nodes in the cluster. If the [`queue_settings`](#queue-settings) setting is defined, then this setting must be removed and replaced by the [`initial_count`](compute-resource-section.md#compute-resource-initial-count) settings in the [`[compute_resource]` sections](compute-resource-section.md).

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

This setting is applicable only for traditional schedulers (SGE, Slurm, and Torque). If the [`maintain_initial_size`](#maintain-initial-size) setting is `true`, then the [`initial_queue_size`](#configuration-initial-queue-size) setting must be at least one (1).

If the scheduler is `awsbatch`, use [`min_vcpus`](#min-vcpus) instead.

Defaults to `2`.

```
initial_queue_size = 2
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `key_name`
<a name="key-name"></a>

**(Optional)** Names an existing Amazon EC2 key pair with which to enable SSH access to the instances.

```
key_name = mykey
```

**Note**  
Before AWS ParallelCluster version 2.11.0, `key_name` was a required setting.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `maintain_initial_size`
<a name="maintain-initial-size"></a>

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

**(Optional)** Maintains the initial size of the Auto Scaling group for traditional schedulers (SGE, Slurm, and Torque).

If the scheduler is `awsbatch`, use [`desired_vcpus`](#desired-vcpus) instead.

This setting is a Boolean flag. If set to `true`, the Auto Scaling group never has fewer members than the value of [`initial_queue_size`](#configuration-initial-queue-size), and the value of [`initial_queue_size`](#configuration-initial-queue-size) must be one (1) or greater. The cluster can still scale up to the value of [`max_queue_size`](#configuration-max-queue-size). If `cluster_type = spot`, then the Auto Scaling group can have instances interrupted and its size can drop below [`initial_queue_size`](#configuration-initial-queue-size).

If set to `false`, the Auto Scaling group can scale down to zero (0) members to prevent resources from sitting idle when they aren't needed.

If the [`queue_settings`](#queue-settings) setting is defined then this setting must be removed and replaced by the [`initial_count`](compute-resource-section.md#compute-resource-initial-count) and [`min_count`](compute-resource-section.md#compute-resource-min-count) settings in the [`[compute_resource]` sections](compute-resource-section.md).

Defaults to `false`.

```
maintain_initial_size = false
```
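
Putting the related settings together, a `[cluster]` sketch that keeps a floor of two Slurm compute nodes while allowing growth to ten might look like the following. The section name `default` is illustrative.

```
[cluster default]
scheduler = slurm
initial_queue_size = 2
maintain_initial_size = true
max_queue_size = 10
```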

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `master_instance_type`
<a name="master-instance-type"></a>

**(Optional)** Defines the Amazon EC2 instance type that's used for the head node. The architecture of the instance type must be the same as the architecture used for the [`compute_instance_type`](#compute-instance-type) setting.

In AWS Regions that have a Free Tier, defaults to the Free Tier instance type (`t2.micro` or `t3.micro`). In AWS Regions that do not have a Free Tier, defaults to `t3.micro`. For more information about the AWS Free Tier, see [AWS Free Tier FAQs](https://aws.amazon.com/free/free-tier-faqs/).

```
master_instance_type = t2.micro
```

**Note**  
Before AWS ParallelCluster version 2.10.1, defaulted to `t2.micro` in all AWS Regions. In AWS ParallelCluster version 2.10.0, the `p4d.24xlarge` wasn't supported for the head node. Support for AWS Graviton-based instances (such as `A1` and `C6g`) was added in AWS ParallelCluster version 2.8.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `master_root_volume_size`
<a name="master-root-volume-size"></a>

**(Optional)** Specifies the head node root volume size in gibibytes (GiB). The AMI must support `growroot`.

The default value is `35`.

**Note**  
For AWS ParallelCluster versions between 2.5.0 and 2.10.4, the default was 25. Before AWS ParallelCluster version 2.5.0, the default was 20.

```
master_root_volume_size = 35
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `max_queue_size`
<a name="configuration-max-queue-size"></a>

**(Optional)** Sets the maximum number of Amazon EC2 instances that can be launched in the cluster. If the [`queue_settings`](#queue-settings) setting is defined, then this setting must be removed and replaced by the [`max_count`](compute-resource-section.md#compute-resource-max-count) settings in the [`[compute_resource]` sections](compute-resource-section.md).

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

This setting is applicable only for traditional schedulers (SGE, Slurm, and Torque).

If the scheduler is `awsbatch`, use [`max_vcpus`](#max-vcpus) instead.

Defaults to `10`.

```
max_queue_size = 10
```

Update policy: This setting can be changed during an update. However, if the value is reduced, the compute fleet should be stopped first; otherwise, existing nodes might be terminated.

## `max_vcpus`
<a name="max-vcpus"></a>

**(Optional)** Specifies the maximum number of vCPUs in the compute environment. Used only if the scheduler is `awsbatch`.

The default value is `20`.

```
max_vcpus = 20
```

[Update policy: This setting can't be decreased during an update.](using-pcluster-update.md#update-policy-no-decrease)

## `min_vcpus`
<a name="min-vcpus"></a>

**(Optional)** Maintains the initial size of the Auto Scaling group for the `awsbatch` scheduler.

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

If the scheduler is SGE, Slurm, or Torque, use [`maintain_initial_size`](#maintain-initial-size) instead.

The compute environment never has fewer members than the value of [`min_vcpus`](#min-vcpus).

Defaults to `0`.

```
min_vcpus = 0
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `placement`
<a name="placement"></a>

**(Optional)** Defines the cluster placement group logic, enabling either the whole cluster or only the compute instances to use the cluster placement group.

If the [`queue_settings`](#queue-settings) setting is defined, then this setting should be removed and replaced with [`placement_group`](queue-section.md#queue-placement-group) settings for each of the [`[queue]` sections](queue-section.md). If the same placement group is used for different instance types, the request is more likely to fail due to an insufficient capacity error. For more information, see [Insufficient instance capacity](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html#troubleshooting-launch-capacity) in the *Amazon EC2 User Guide*. Multiple queues can only share a placement group if it's created in advance and configured in the [`placement_group`](queue-section.md#queue-placement-group) setting for each queue. If each [`[queue]` section](queue-section.md) defines a [`placement_group`](queue-section.md#queue-placement-group) setting, then the head node can't be in the placement group for a queue.

Valid options are `cluster` or `compute`.

This parameter isn't used when the scheduler is `awsbatch`.

The default value is `compute`.

```
placement = compute
```
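
If `queue_settings` is used instead, the equivalent per-queue configuration references a pre-created placement group by name, for example as in the following sketch. The queue and compute resource names and the `my-existing-pg` group are illustrative, and the group must exist before cluster creation.

```
[queue q1]
compute_resource_settings = cr1
placement_group = my-existing-pg

[queue q2]
compute_resource_settings = cr2
placement_group = my-existing-pg
```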

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `placement_group`
<a name="placement-group"></a>

**(Optional)** Defines the cluster placement group. If the [`queue_settings`](#queue-settings) setting is defined, then this setting should be removed and replaced by the [`placement_group`](queue-section.md#queue-placement-group) settings in the [`[queue]` sections](queue-section.md).

Valid options are the following values:
+ `DYNAMIC`
+ An existing Amazon EC2 cluster placement group name

When set to `DYNAMIC`, a unique placement group is created and deleted as part of the cluster stack.

This parameter isn't used when the scheduler is `awsbatch`.

For more information about placement groups, see [Placement groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) in the *Amazon EC2 User Guide*. If the same placement group is used for different instance types, the request is more likely to fail due to an insufficient capacity error. For more information, see [Insufficient instance capacity](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html#troubleshooting-launch-capacity) in the *Amazon EC2 User Guide*.

There is no default value.

Not all instance types support cluster placement groups. For example, the default instance type of `t3.micro` doesn't support cluster placement groups. For information about the list of instance types that support cluster placement groups, see [Cluster placement group rules and limitations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-cluster) in the *Amazon EC2 User Guide*. See [Placement groups and instance launch issues](troubleshooting.md#placement-groups-and-instance-launch-issues) for tips when working with placement groups.

```
placement_group = DYNAMIC
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `post_install`
<a name="post-install"></a>

**(Optional)** Specifies the URL of a post-install script that's run after all of the node bootstrap actions are complete. For more information, see [Custom Bootstrap Actions](pre_post_install.md).

When using `awsbatch` as the scheduler, the post-install script is run only on the head node.

The parameter format can be either `http://hostname/path/to/script.sh` or `s3://bucket-name/path/to/script.sh`.

There is no default value.

```
post_install = s3://<bucket-name>/my-post-install-script.sh
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `post_install_args`
<a name="post-install-args"></a>

**(Optional)** Specifies a quoted list of arguments to pass to the post-install script.

There is no default value.

```
post_install_args = "argument-1 argument-2"
```
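
These arguments are paired with the [`post_install`](#post-install) setting; a combined sketch, with placeholder bucket and script names, might look like this:

```
post_install = s3://bucket-name/my-post-install-script.sh
post_install_args = "argument-1 argument-2"
```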

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `pre_install`
<a name="pre-install"></a>

**(Optional)** Specifies the URL of a pre-install script that's run before any node deployment bootstrap action is started. For more information, see [Custom Bootstrap Actions](pre_post_install.md).

When using `awsbatch` as the scheduler, the pre-install script is run only on the head node.

The parameter format can be either `http://hostname/path/to/script.sh` or `s3://bucket-name/path/to/script.sh`.

There is no default value.

```
pre_install = s3://bucket-name/my-pre-install-script.sh
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `pre_install_args`
<a name="pre-install-args"></a>

**(Optional)** Specifies a quoted list of arguments to pass to the pre-install script.

There is no default value.

```
pre_install_args = "argument-3 argument-4"
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `proxy_server`
<a name="proxy-server"></a>

**(Optional)** Defines an HTTP or HTTPS proxy server, typically `http://x.x.x.x:8080`.

There is no default value.

```
proxy_server = http://10.11.12.13:8080
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `queue_settings`
<a name="queue-settings"></a>

**(Optional)** Specifies that the cluster uses queues instead of a homogeneous compute fleet, and which [`[queue]` sections](queue-section.md) are used. The first [`[queue]` section](queue-section.md) listed is the default scheduler queue. The `queue` section names must start with a lowercase letter, contain no more than 30 characters, and only contain lowercase letters, numbers, and hyphens (-).

**Important**  
[`queue_settings`](#queue-settings) is only supported when [`scheduler`](#scheduler) is set to `slurm`. The [`cluster_type`](#cluster-type), [`compute_instance_type`](#compute-instance-type), [`initial_queue_size`](#configuration-initial-queue-size), [`maintain_initial_size`](#maintain-initial-size), [`max_queue_size`](#configuration-max-queue-size), [`placement`](#placement), [`placement_group`](#placement-group), and [`spot_price`](#spot-price) settings must not be specified. The [`disable_hyperthreading`](#disable-hyperthreading) and [`enable_efa`](#enable-efa) settings can either be specified in the [`[cluster]` section](#cluster-definition) or the [`[queue]` sections](queue-section.md), but not both.

Up to five (5) [`[queue]` sections](queue-section.md) are supported.

For more information, see the [`[queue]` section](queue-section.md).

For example, the following setting specifies that the sections that start `[queue q1]` and `[queue q2]` are used.

```
queue_settings = q1, q2
```
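
A fuller sketch tying the pieces together might look like the following, where each named queue references its own `[queue]` section. The section and compute resource names are illustrative.

```
[cluster default]
scheduler = slurm
queue_settings = q1, q2

[queue q1]
compute_resource_settings = cr1

[queue q2]
compute_resource_settings = cr2
```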

**Note**  
Support for [`queue_settings`](#queue-settings) was added in AWS ParallelCluster version 2.9.0.

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `raid_settings`
<a name="raid-settings"></a>

**(Optional)** Identifies the `[raid]` section with the Amazon EBS volume RAID configuration. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (\_).

For more information, see the [`[raid]` section](raid-section.md).

For example, the following setting specifies that the section that starts `[raid rs]` is used for the RAID configuration.

```
raid_settings = rs
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `s3_read_resource`
<a name="s3-read-resource"></a>

**(Optional)** Specifies an Amazon S3 resource to which AWS ParallelCluster nodes are granted read-only access.

For example, `arn:aws:s3:::my_corporate_bucket*` provides read-only access to the `my_corporate_bucket` bucket and to the objects in the bucket.

See [working with Amazon S3](s3_resources.md) for details on format.

There is no default value.

```
s3_read_resource = arn:aws:s3:::my_corporate_bucket*
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `s3_read_write_resource`
<a name="s3-read-write-resource"></a>

**(Optional)** Specifies an Amazon S3 resource to which AWS ParallelCluster nodes are granted read/write access.

For example, `arn:aws:s3:::my_corporate_bucket/Development/*` provides read/write access to all objects in the `Development` folder of the `my_corporate_bucket` bucket.

See [working with Amazon S3](s3_resources.md) for details on format.

There is no default value.

```
s3_read_write_resource = arn:aws:s3:::my_corporate_bucket/*
```
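
The read-only and read/write settings can be combined; for example, the following sketch grants read access to an entire bucket and write access only to its `Development` prefix. The bucket name is a placeholder.

```
s3_read_resource = arn:aws:s3:::my_corporate_bucket*
s3_read_write_resource = arn:aws:s3:::my_corporate_bucket/Development/*
```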

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `scaling_settings`
<a name="scaling-settings"></a>

**(Optional)** Identifies the `[scaling]` section with the Auto Scaling configuration. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (\_).

For more information, see the [`[scaling]` section](scaling-section.md).

For example, the following setting specifies that the section that starts `[scaling custom]` is used for the Auto Scaling configuration.

```
scaling_settings = custom
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `scheduler`
<a name="scheduler"></a>

**(Required)** Defines the cluster scheduler.

Valid options are the following values:

`awsbatch`  
AWS Batch  
For more information about the `awsbatch` scheduler, see [networking setup](networking.md#awsbatch-networking) and [AWS Batch (`awsbatch`)](awsbatchcli.md).

`sge`  
Son of Grid Engine (SGE). Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

`slurm`  
Slurm Workload Manager (Slurm)

`torque`  
Torque Resource Manager (Torque). Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

**Note**  
Before AWS ParallelCluster version 2.7.0, the `scheduler` parameter was optional, and the default was `sge`. Starting with AWS ParallelCluster version 2.7.0, the `scheduler` parameter is required.

```
scheduler = slurm
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `shared_dir`
<a name="cluster-shared-dir"></a>

**(Optional)** Defines the path where the shared Amazon EBS volume is mounted.

Don't use this option with multiple Amazon EBS volumes. Instead, provide [`shared_dir`](#cluster-shared-dir) values under each [`[ebs]` section](ebs-section.md).

See the [`[ebs]` section](ebs-section.md) for details on working with multiple Amazon EBS volumes.

The default value is `/shared`.

The following example shows a shared Amazon EBS volume mounted at `/myshared`.

```
shared_dir = myshared
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `spot_bid_percentage`
<a name="spot-bid-percentage"></a>

**(Optional)** Sets the on-demand percentage used to calculate the maximum Spot price for the ComputeFleet, when `awsbatch` is the scheduler.

If unspecified, the current spot market price is selected, capped at the On-Demand price.

```
spot_bid_percentage = 85
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `spot_price`
<a name="spot-price"></a>

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

**(Optional)** Sets the maximum Spot price for the ComputeFleet on traditional schedulers (SGE, Slurm, and Torque). Used only when the [`cluster_type`](#cluster-type) setting is set to `spot`. If you don't specify a value, you are charged the Spot price, capped at the On-Demand price. If the [`queue_settings`](#queue-settings) setting is defined, then this setting must be removed and replaced by the [`spot_price`](compute-resource-section.md#compute-resource-spot-price) settings in the [`[compute_resource]` sections](compute-resource-section.md).

If the scheduler is `awsbatch`, use [`spot_bid_percentage`](#spot-bid-percentage) instead.

For assistance finding a Spot Instance that meets your needs, see the [Spot Instance advisor](https://aws.amazon.com/ec2/spot/instance-advisor/).

```
spot_price = 1.50
```

**Note**  
In AWS ParallelCluster version 2.5.0, if `cluster_type = spot` but [`spot_price`](#spot-price) isn't specified, the instance launches of the ComputeFleet fail. This was fixed in AWS ParallelCluster version 2.5.1.

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `tags`
<a name="tags"></a>

**(Optional)** Defines tags to be used by CloudFormation.

If command line tags are specified via `--tags`, they are merged with config tags.

Command line tags overwrite config tags that have the same key.

Tags are JSON formatted. Don't use quotes outside of the curly braces.

For more information, see [CloudFormation resource tags type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) in the *AWS CloudFormation User Guide*.

```
tags = {"key" : "value", "key2" : "value2"}
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

**Note**  
The update policy did not support changing the `tags` setting for AWS ParallelCluster version 2.8.0 through version 2.9.1.  
For versions 2.10.0 through version 2.11.7, the listed update policy that supported changing the `tags` setting isn't accurate. A cluster update when modifying this setting isn't supported.

## `template_url`
<a name="template-url"></a>

**(Optional)** Defines the path to the AWS CloudFormation template that's used to create the cluster.

Updates use the template that was originally used to create the stack.

Defaults to `https://aws_region_name-aws-parallelcluster.s3.amazonaws.com/templates/aws-parallelcluster-version.cfn.json`.

**Warning**  
This is an advanced parameter. Any change to this setting is done at your own risk.

```
template_url = https://us-east-1-aws-parallelcluster.s3.amazonaws.com/templates/aws-parallelcluster-2.11.9.cfn.json
```

[Update policy: This setting is not analyzed during an update.](using-pcluster-update.md#update-policy-setting-ignored)

## `vpc_settings`
<a name="vpc-settings"></a>

**(Required)** Identifies the `[vpc]` section with the Amazon VPC configuration where the cluster is deployed. The section name must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (\_).

For more information, see the [`[vpc]` section](vpc-section.md).

For example, the following setting specifies that the section that starts `[vpc public]` is used for the Amazon VPC configuration.

```
vpc_settings = public
```
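
The referenced section then supplies the network details; a minimal sketch follows, where the VPC and subnet IDs are placeholders.

```
[cluster default]
vpc_settings = public

[vpc public]
vpc_id = vpc-xxxxxxxx
master_subnet_id = subnet-xxxxxxxx
```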

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

# `[compute_resource]` section
<a name="compute-resource-section"></a>

Defines configuration settings for a compute resource. [`[compute_resource]` sections](#compute-resource-section) are referenced by the [`compute_resource_settings`](queue-section.md#queue-compute-resource-settings) setting in the [`[queue]` section](queue-section.md). [`[compute_resource]` sections](#compute-resource-section) are only supported when [`scheduler`](cluster-definition.md#scheduler) is set to `slurm`.

The format is `[compute_resource <compute-resource-name>]`. *compute-resource-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (\_).

```
[compute_resource cr1]
instance_type = c5.xlarge
min_count = 0
initial_count = 2
max_count = 10
spot_price = 0.5
```

**Note**  
Support for the [`[compute_resource]` section](#compute-resource-section) was added in AWS ParallelCluster version 2.9.0.

**Topics**
+ [`initial_count`](#compute-resource-initial-count)
+ [`instance_type`](#compute-resource-instance-type)
+ [`max_count`](#compute-resource-max-count)
+ [`min_count`](#compute-resource-min-count)
+ [`spot_price`](#compute-resource-spot-price)

## `initial_count`
<a name="compute-resource-initial-count"></a>

**(Optional)** Sets the initial number of Amazon EC2 instances to launch for this compute resource. Cluster creation doesn't complete until at least this many nodes have been launched into the compute resource. If the [`compute_type`](queue-section.md#queue-compute-type) setting for the queue is `spot` and there aren't enough Spot Instances available, the cluster creation might time out and fail. Any count larger than the [`min_count`](#compute-resource-min-count) setting is dynamic capacity subject to the [`scaledown_idletime`](scaling-section.md#scaledown-idletime) setting. This setting replaces the [`initial_queue_size`](cluster-definition.md#configuration-initial-queue-size) setting.

Defaults to `0`.

```
initial_count = 2
```
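
To illustrate the static/dynamic split: in the following sketch, one node is static capacity (`min_count`), one additional dynamic node launches at cluster creation (`initial_count`), and further dynamic nodes up to `max_count` launch on demand, subject to [`scaledown_idletime`](scaling-section.md#scaledown-idletime). The section name and instance type are illustrative.

```
[compute_resource cr1]
instance_type = c5.xlarge
min_count = 1
initial_count = 2
max_count = 10
```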

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `instance_type`
<a name="compute-resource-instance-type"></a>

**(Required)** Defines the Amazon EC2 instance type that's used for this compute resource. The architecture of the instance type must be the same as the architecture used for the [`master_instance_type`](cluster-definition.md#master-instance-type) setting. The `instance_type` setting must be unique for each [`[compute_resource]` section](#compute-resource-section) referenced by a [`[queue]` section](queue-section.md). This setting replaces the [`compute_instance_type`](cluster-definition.md#compute-instance-type) setting.

```
instance_type = t2.micro
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `max_count`
<a name="compute-resource-max-count"></a>

**(Optional)** Sets the maximum number of Amazon EC2 instances that can be launched in this compute resource. Any count larger than the [`initial_count`](#compute-resource-initial-count) setting is started in a power down mode. This setting replaces the [`max_queue_size`](cluster-definition.md#configuration-max-queue-size) setting.

Defaults to `10`.

```
max_count = 10
```

[Update policy: Reducing the size of a queue below the current number of nodes requires that the compute fleet be stopped first.](using-pcluster-update.md#update-policy-max-count)

**Note**  
The update policy did not support changing the `max_count` setting until the compute fleet was stopped for AWS ParallelCluster version 2.0.0 through version 2.9.1.

## `min_count`
<a name="compute-resource-min-count"></a>

**(Optional)** Sets the minimum number of Amazon EC2 instances that can be launched in this compute resource. These nodes are all static capacity. Cluster creation doesn't complete until at least this number of nodes has been launched into the compute resource.

Defaults to `0`.

```
min_count = 1
```

[Update policy: Reducing the number of static nodes in a queue requires that the compute fleet be stopped first.](using-pcluster-update.md#update-policy-min-count)

**Note**  
The update policy did not support changing the `min_count` setting until the compute fleet was stopped for AWS ParallelCluster version 2.0.0 through version 2.9.1.

## `spot_price`
<a name="compute-resource-spot-price"></a>

**(Optional)** Sets the maximum Spot price for this compute resource. Used only when the [`compute_type`](queue-section.md#queue-compute-type) setting for the queue containing this compute resource is set to `spot`. This setting replaces the [`spot_price`](cluster-definition.md#spot-price) setting.

If you don't specify a value, you're charged the Spot price, capped at the On-Demand price.

For assistance finding a Spot Instance that meets your needs, see the [Spot Instance advisor](https://aws.amazon.com/ec2/spot/instance-advisor/).

```
spot_price = 1.50
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

# `[cw_log]` section
<a name="cw-log-section"></a>

Defines configuration settings for CloudWatch Logs.

The format is `[cw_log cw-log-name]`. *cw-log-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (\_).

```
[cw_log custom-cw-log]
enable = true
retention_days = 14
```

For more information, see [Integration with Amazon CloudWatch Logs](cloudwatch-logs.md) and [Amazon CloudWatch dashboard](cloudwatch-dashboard.md).

**Note**  
Support for `cw_log` was added in AWS ParallelCluster version 2.6.0.

## `enable`
<a name="cw-log-section-enable"></a>

 **(Optional)** Indicates whether CloudWatch Logs is enabled.

The default value is `true`. Use `false` to disable CloudWatch Logs.

The following example enables CloudWatch Logs.

```
enable = true
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `retention_days`
<a name="cw-log-section-retention-days"></a>

 **(Optional)** Indicates how many days CloudWatch Logs retains individual log events.

The default value is `14`. The supported values are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.

The following example configures CloudWatch Logs to retain log events for 30 days.

```
retention_days = 30
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

# `[dashboard]` section
<a name="dashboard-section"></a>

Defines configuration settings for the CloudWatch dashboard.

The format is `[dashboard dashboard-name]`. *dashboard-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (\_).

```
[dashboard custom-dashboard]
enable = true
```

**Note**  
Support for `dashboard` was added in AWS ParallelCluster version 2.10.0.

## `enable`
<a name="dashboard-section-enable"></a>

 **(Optional)** Indicates whether the CloudWatch dashboard is enabled.

The default value is `true`. Use `false` to disable the CloudWatch dashboard.

The following example enables the CloudWatch dashboard.

```
enable = true
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

# `[dcv]` section
<a name="dcv-section"></a>

Defines configuration settings for the Amazon DCV server running on the head node.

To create and configure an Amazon DCV server, specify the cluster [`dcv_settings`](cluster-definition.md#dcv-settings) setting with the name you define in the `[dcv]` section, set [`enable`](#dcv-section-enable) to `master`, and set [`base_os`](cluster-definition.md#base-os) to `alinux2`, `centos7`, `ubuntu1804`, or `ubuntu2004`. If the head node is an ARM instance, set [`base_os`](cluster-definition.md#base-os) to `alinux2`, `centos7`, or `ubuntu1804`.

The format is `[dcv dcv-name]`. *dcv-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[dcv custom-dcv]
enable = master
port = 8443
access_from = 0.0.0.0/0
```

For more information, see [Connect to the head node through Amazon DCV](dcv.md).

**Important**  
By default, the Amazon DCV port set up by AWS ParallelCluster is open to all IPv4 addresses. However, you can connect to an Amazon DCV port only if you have the URL for the Amazon DCV session and you connect to the session within 30 seconds of when the URL is returned by `pcluster dcv connect`. Use the [`access_from`](#dcv-section-access-from) setting to restrict access to the Amazon DCV port with a CIDR-formatted IP range, and use the [`port`](#dcv-section-port) setting to set a nonstandard port.

**Note**  
Support for the [`[dcv]` section](#dcv-section) on `centos8` was removed in AWS ParallelCluster version 2.10.4. Support for the [`[dcv]` section](#dcv-section) on `centos8` was added in AWS ParallelCluster version 2.10.0. Support for the [`[dcv]` section](#dcv-section) on AWS Graviton-based instances was added in AWS ParallelCluster version 2.9.0. Support for the [`[dcv]` section](#dcv-section) on `alinux2` and `ubuntu1804` was added in AWS ParallelCluster version 2.6.0. Support for the [`[dcv]` section](#dcv-section) on `centos7` was added in AWS ParallelCluster version 2.5.0.

## `access_from`
<a name="dcv-section-access-from"></a>

 **(Optional, Recommended)** Specifies the CIDR-formatted IP range for connections to Amazon DCV. This setting is used only when AWS ParallelCluster creates the security group.

The default value is `0.0.0.0/0`, which allows access from any internet address.

```
access_from = 0.0.0.0/0
```
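Because a permissive `access_from` value opens the port widely, it can be worth verifying that a value is a well-formed IPv4 CIDR range before using it. This hypothetical check (not part of AWS ParallelCluster) uses the Python standard library.

```python
import ipaddress

# Hypothetical check that an access_from value is a valid IPv4 CIDR range.
# IPv4Network is strict by default, so host bits set (e.g. 10.0.0.1/24) are rejected.
def is_valid_cidr(value: str) -> bool:
    try:
        ipaddress.IPv4Network(value)
        return True
    except ValueError:
        return False

print(is_valid_cidr("0.0.0.0/0"))  # True
```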

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `enable`
<a name="dcv-section-enable"></a>

 **(Required)** Indicates whether Amazon DCV is enabled on the head node. To enable Amazon DCV on the head node and configure the required security group rule, set the `enable` setting to `master`.

The following example enables Amazon DCV on the head node.

```
enable = master
```

**Note**  
Amazon DCV automatically generates a self-signed certificate that's used to secure traffic between the Amazon DCV client and Amazon DCV server running on the head node. To configure your own certificate, see [Amazon DCV HTTPS certificate](dcv.md#dcv-certificate).

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `port`
<a name="dcv-section-port"></a>

 **(Optional)** Specifies the port for Amazon DCV.

The default value is `8443`.

```
port = 8443
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

# `[ebs]` section
<a name="ebs-section"></a>

Defines Amazon EBS volume configuration settings for volumes that are mounted on the head node and shared to the compute nodes through NFS.

To learn how to include Amazon EBS volumes in your cluster definition, see the `ebs_settings` setting in the [`[cluster]` section](cluster-definition.md).

To use an existing Amazon EBS volume for long-term permanent storage that's independent of the cluster life cycle, specify [`ebs_volume_id`](#ebs-volume-id).

If you don't specify [`ebs_volume_id`](#ebs-volume-id), AWS ParallelCluster creates the EBS volume from the `[ebs]` settings when it creates the cluster and deletes the volume and data when the cluster is deleted.

For more information, see [Best practices: moving a cluster to a new AWS ParallelCluster minor or patch version](best-practices.md#best-practices-cluster-upgrades).

The format is `[ebs ebs-name]`. *ebs-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[ebs custom1]
shared_dir = vol1
ebs_snapshot_id = snap-xxxxx
volume_type = io1
volume_iops = 200
...

[ebs custom2]
shared_dir = vol2
...

...
```

**Topics**
+ [`shared_dir`](#ebs-shared-dir)
+ [`ebs_kms_key_id`](#ebs-kms-key-id)
+ [`ebs_snapshot_id`](#ebs-snapshot-id)
+ [`ebs_volume_id`](#ebs-volume-id)
+ [`encrypted`](#encrypted)
+ [`volume_iops`](#volume-iops)
+ [`volume_size`](#volume-size)
+ [`volume_throughput`](#volume-throughput)
+ [`volume_type`](#volume-type)

## `shared_dir`
<a name="ebs-shared-dir"></a>

**(Required)** Specifies the path where the shared Amazon EBS volume is mounted.

This parameter is required when using multiple Amazon EBS volumes.

When you use one Amazon EBS volume, this option overwrites the [`shared_dir`](cluster-definition.md#cluster-shared-dir) that's specified under the [`[cluster]` section](cluster-definition.md). In the following example, the volume mounts to `/vol1`.

```
shared_dir = vol1
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ebs_kms_key_id`
<a name="ebs-kms-key-id"></a>

**(Optional)** Specifies a custom AWS KMS key to use for encryption.

This parameter must be used together with `encrypted = true`. It also must have a custom [`ec2_iam_role`](cluster-definition.md#ec2-iam-role).

For more information, see [Disk encryption with a custom KMS Key](tutorials_04_encrypted_kms_fs.md).

```
ebs_kms_key_id = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ebs_snapshot_id`
<a name="ebs-snapshot-id"></a>

**(Optional)** Defines the Amazon EBS snapshot ID if you're using a snapshot as the source for the volume.

There is no default value.

```
ebs_snapshot_id = snap-xxxxx
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ebs_volume_id`
<a name="ebs-volume-id"></a>

**(Optional)** Defines the volume ID of an existing Amazon EBS volume to attach to the head node.

There is no default value.

```
ebs_volume_id = vol-xxxxxx
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `encrypted`
<a name="encrypted"></a>

**(Optional)** Specifies whether the Amazon EBS volume is encrypted. Note: Do *not* use with snapshots.

The default value is `false`.

```
encrypted = false
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `volume_iops`
<a name="volume-iops"></a>

**(Optional)** Defines the number of IOPS for `io1`, `io2`, and `gp3` type volumes.

The default value, supported values, and ratio of `volume_iops` to `volume_size` vary by [`volume_type`](#volume-type) and [`volume_size`](#volume-size).

`volume_type` = `io1`  
Default `volume_iops` = 100  
Supported values `volume_iops` = 100–64000 †  
Maximum `volume_iops` to `volume_size` ratio = 50 IOPS for each GiB. 5000 IOPS requires a `volume_size` of at least 100 GiB.

`volume_type` = `io2`  
Default `volume_iops` = 100  
Supported values `volume_iops` = 100–64000 (256000 for `io2` Block Express volumes) †  
Maximum `volume_iops` to `volume_size` ratio = 500 IOPS for each GiB. 5000 IOPS requires a `volume_size` of at least 10 GiB.

`volume_type` = `gp3`  
Default `volume_iops` = 3000  
Supported values `volume_iops` = 3000–16000  
Maximum `volume_iops` to `volume_size` ratio = 500 IOPS for each GiB. 5000 IOPS requires a `volume_size` of at least 10 GiB.

```
volume_iops = 200
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

† Maximum IOPS is guaranteed only on [Instances built on the Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) provisioned with more than 32,000 IOPS. Other instances guarantee up to 32,000 IOPS. Unless you [modify the volume](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html), earlier `io1` volumes might not reach full performance. `io2` Block Express volumes support `volume_iops` values up to 256,000. For more information, see [`io2` Block Express volumes (In preview)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#io2-block-express) in the *Amazon EC2 User Guide*.
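The IOPS-to-size ratios above determine the smallest `volume_size` that can support a requested `volume_iops`. The following is an illustrative sketch (not part of AWS ParallelCluster) based on those ratios.

```python
import math

# Hypothetical helper based on the maximum IOPS-per-GiB ratios documented above.
MAX_IOPS_PER_GIB = {"io1": 50, "io2": 500, "gp3": 500}

def min_volume_size_gib(volume_type: str, volume_iops: int) -> int:
    """Smallest volume_size (GiB) that supports the requested volume_iops."""
    return math.ceil(volume_iops / MAX_IOPS_PER_GIB[volume_type])

print(min_volume_size_gib("io1", 5000))  # 100, matching the io1 example above
```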

## `volume_size`
<a name="volume-size"></a>

**(Optional)** Specifies the size of the volume to be created, in GiB (if you're not using a snapshot).

The default value and supported values vary by [`volume_type`](#volume-type).

`volume_type` = `standard`  
Default `volume_size` = 20 GiB  
Supported values `volume_size` = 1–1024 GiB

`volume_type` = `gp2`, `io1`, `io2`, and `gp3`  
Default `volume_size` = 20 GiB  
Supported values `volume_size` = 1–16384 GiB

`volume_type` = `sc1` and `st1`  
Default `volume_size` = 500 GiB  
Supported values `volume_size` = 500–16384 GiB

```
volume_size = 20
```

**Note**  
Before AWS ParallelCluster version 2.10.1, the default value for all volume types was 20 GiB.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)
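The per-type size ranges above can be captured in a small lookup. This is a hypothetical validation sketch, not part of AWS ParallelCluster.

```python
# Hypothetical lookup mirroring the volume_size ranges documented above (GiB).
VOLUME_SIZE_RANGE_GIB = {
    "standard": (1, 1024),
    "gp2": (1, 16384), "io1": (1, 16384), "io2": (1, 16384), "gp3": (1, 16384),
    "sc1": (500, 16384), "st1": (500, 16384),
}

def is_valid_volume_size(volume_type: str, size_gib: int) -> bool:
    lo, hi = VOLUME_SIZE_RANGE_GIB[volume_type]
    return lo <= size_gib <= hi

print(is_valid_volume_size("gp2", 20))  # True
```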

## `volume_throughput`
<a name="volume-throughput"></a>

**(Optional)** Defines the throughput for `gp3` volume types, in MiB/s.

The default value is `125`.

Supported values `volume_throughput` = 125–1000 MiB/s

The ratio of `volume_throughput` to `volume_iops` can be no more than 0.25. The maximum throughput of 1000 MiB/s requires that the `volume_iops` setting is at least 4000.

```
volume_throughput = 1000
```
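The 0.25 throughput-to-IOPS ceiling above implies a minimum `volume_iops` for any given `volume_throughput`. The following hypothetical helper (not part of AWS ParallelCluster) makes that arithmetic explicit.

```python
import math

# Hypothetical check based on the 0.25 MiB/s-per-IOPS ceiling described above:
# volume_throughput / volume_iops <= 0.25, so volume_iops >= volume_throughput / 0.25.
def min_iops_for_throughput(throughput_mib_s: int) -> int:
    """Smallest gp3 volume_iops allowed for a given volume_throughput."""
    return math.ceil(throughput_mib_s / 0.25)

print(min_iops_for_throughput(1000))  # 4000, matching the text above
```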

**Note**  
Support for `volume_throughput` was added in AWS ParallelCluster version 2.10.1.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `volume_type`
<a name="volume-type"></a>

**(Optional)** Specifies the [Amazon EBS volume type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) of the volume that you want to launch.

Valid options are the following volume types:

`gp2`, `gp3`  
General purpose SSD

`io1`, `io2`  
Provisioned IOPS SSD

`st1`  
Throughput optimized HDD

`sc1`  
Cold HDD

`standard`  
Previous generation magnetic

For more information, see [Amazon EBS volume types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) in the *Amazon EC2 User Guide*.

The default value is `gp2`.

```
volume_type = io2
```

**Note**  
Support for `gp3` and `io2` was added in AWS ParallelCluster version 2.10.1.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

# `[efs]` section
<a name="efs-section"></a>

Defines the configuration settings for the Amazon EFS file system that's mounted on the head and compute nodes. For more information, see [CreateFileSystem](https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html) in the *Amazon EFS API Reference*.

To learn how to include Amazon EFS file systems in your cluster definition, see the `efs_settings` setting in the [`[cluster]` section](cluster-definition.md).

To use an existing Amazon EFS file system for long-term permanent storage that's independent of the cluster life cycle, specify [`efs_fs_id`](#efs-efs-fs-id).

If you don't specify [`efs_fs_id`](#efs-efs-fs-id), AWS ParallelCluster creates the Amazon EFS file system from the `[efs]` settings when it creates the cluster and deletes the file system and data when the cluster is deleted.

For more information, see [Best practices: moving a cluster to a new AWS ParallelCluster minor or patch version](best-practices.md#best-practices-cluster-upgrades).

The format is `[efs efs-name]`. *efs-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[efs customfs]
shared_dir = efs
encrypted = false
performance_mode = generalPurpose
```

**Topics**
+ [`efs_fs_id`](#efs-efs-fs-id)
+ [`efs_kms_key_id`](#efs-efs-kms-key-id)
+ [`encrypted`](#efs-encrypted)
+ [`performance_mode`](#efs-performance-mode)
+ [`provisioned_throughput`](#efs-provisioned-throughput)
+ [`shared_dir`](#efs-shared-dir)
+ [`throughput_mode`](#efs-throughput-mode)

## `efs_fs_id`
<a name="efs-efs-fs-id"></a>

**(Optional)** Defines the Amazon EFS file system ID for an existing file system.

Specifying this option voids all other Amazon EFS options except for [`shared_dir`](cluster-definition.md#cluster-shared-dir).

If you set this option, only the following types of file systems are supported:
+ File systems that don't have a mount target in the stack's Availability Zone.
+ File systems that have an existing mount target in the stack's Availability Zone with both inbound and outbound NFS traffic allowed from `0.0.0.0/0`.

The sanity check for validating [`efs_fs_id`](#efs-efs-fs-id) requires the IAM role to have the following permissions:
+ `elasticfilesystem:DescribeMountTargets`
+ `elasticfilesystem:DescribeMountTargetSecurityGroups`
+ `ec2:DescribeSubnets`
+ `ec2:DescribeSecurityGroups`
+ `ec2:DescribeNetworkInterfaceAttribute`

To avoid errors, you must add these permissions to your IAM role, or set `sanity_check = false`.
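For reference, a minimal IAM policy statement granting the permissions listed above might look like the following sketch. Scope the `Resource` element more narrowly where your account's policies allow.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeMountTargets",
        "elasticfilesystem:DescribeMountTargetSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeNetworkInterfaceAttribute"
      ],
      "Resource": "*"
    }
  ]
}
```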

**Important**  
When you set a mount target with inbound and outbound NFS traffic allowed from `0.0.0.0/0`, it exposes the file system to NFS mounting requests from anywhere in the mount target's Availability Zone. AWS doesn't recommend creating a mount target in the stack's Availability Zone. Instead, let AWS handle this step. If you want to have a mount target in the stack's Availability Zone, consider using a custom security group by providing a [`vpc_security_group_id`](vpc-section.md#vpc-security-group-id) option under the [`[vpc]` section](vpc-section.md). Then, add that security group to the mount target and turn off `sanity_check` to create the cluster.

There is no default value.

```
efs_fs_id = fs-12345
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `efs_kms_key_id`
<a name="efs-efs-kms-key-id"></a>

**(Optional)** Identifies the AWS Key Management Service (AWS KMS) customer managed key to be used to protect the encrypted file system. If this is set, the [`encrypted`](#efs-encrypted) setting must be set to `true`. This corresponds to the [KmsKeyId](https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html#efs-CreateFileSystem-request-KmsKeyId) parameter in the *Amazon EFS API Reference*.

There is no default value.

```
efs_kms_key_id = 1234abcd-12ab-34cd-56ef-1234567890ab
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `encrypted`
<a name="efs-encrypted"></a>

**(Optional)** Indicates whether the file system is encrypted. This corresponds to the [Encrypted](https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html#efs-CreateFileSystem-request-Encrypted) parameter in the *Amazon EFS API Reference*.

The default value is `false`.

```
encrypted = true
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `performance_mode`
<a name="efs-performance-mode"></a>

**(Optional)** Defines the performance mode of the file system. This corresponds to the [PerformanceMode](https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html#efs-CreateFileSystem-request-PerformanceMode) parameter in the *Amazon EFS API Reference*.

Valid options are the following values:
+ `generalPurpose`
+ `maxIO`

 Both values are case sensitive.

We recommend the `generalPurpose` performance mode for most file systems.

File systems that use the `maxIO` performance mode can scale to higher levels of aggregate throughput and operations per second. However, there's a trade-off of slightly higher latencies for most file operations.

After the file system is created, this parameter can't be changed.

The default value is `generalPurpose`.

```
performance_mode = generalPurpose
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `provisioned_throughput`
<a name="efs-provisioned-throughput"></a>

**(Optional)** Defines the provisioned throughput of the file system, measured in MiB/s. This corresponds to the [ProvisionedThroughputInMibps](https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html#efs-CreateFileSystem-response-ProvisionedThroughputInMibps) parameter in the *Amazon EFS API Reference*.

If you use this parameter, you must set [`throughput_mode`](#efs-throughput-mode) to `provisioned`.

The quota on throughput is `1024` MiB/s. To request a quota increase, contact AWS Support.

The minimum value is `0.0` MiB/s.

```
provisioned_throughput = 1024
```
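The constraints above (the `provisioned` throughput mode requirement and the 0.0–1024 MiB/s range) can be checked together. This is a hypothetical validation sketch, not part of AWS ParallelCluster.

```python
from typing import Optional

# Hypothetical check pairing throughput_mode with provisioned_throughput,
# mirroring the constraints documented above.
def validate_efs_throughput(throughput_mode: str,
                            provisioned_throughput: Optional[float]) -> None:
    if provisioned_throughput is None:
        return
    if throughput_mode != "provisioned":
        raise ValueError("provisioned_throughput requires throughput_mode = provisioned")
    if not 0.0 <= provisioned_throughput <= 1024.0:
        raise ValueError("provisioned_throughput must be between 0.0 and 1024 MiB/s")

validate_efs_throughput("provisioned", 1024)  # passes silently
```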

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `shared_dir`
<a name="efs-shared-dir"></a>

**(Required)** Defines the Amazon EFS mount point on the head and compute nodes.

This parameter is required. The `[efs]` section is used only if `shared_dir` is specified.

Don't use `NONE` or `/NONE` as the shared directory.

The following example mounts Amazon EFS at `/efs`.

```
shared_dir = efs
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `throughput_mode`
<a name="efs-throughput-mode"></a>

**(Optional)** Defines the throughput mode of the file system. This corresponds to the [ThroughputMode](https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html#efs-CreateFileSystem-request-ThroughputMode) parameter in the *Amazon EFS API Reference*.

Valid options are the following values:
+ `bursting`
+ `provisioned`

The default value is `bursting`.

```
throughput_mode = provisioned
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

# `[fsx]` section
<a name="fsx-section"></a>

Defines configuration settings for an attached FSx for Lustre file system. For more information, see [Amazon FSx CreateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystem.html) in the *Amazon FSx API Reference*.

FSx for Lustre is supported when [`base_os`](cluster-definition.md#base-os) is `alinux2`, `centos7`, `ubuntu1804`, or `ubuntu2004`.

When using Amazon Linux, the kernel must be `4.14.104-78.84.amzn1.x86_64` or a later version. For instructions, see [Installing the Lustre client](https://docs.aws.amazon.com/fsx/latest/LustreGuide/install-lustre-client.html) in the *Amazon FSx for Lustre User Guide*.

**Note**  
FSx for Lustre isn't currently supported when using `awsbatch` as a scheduler.

**Note**  
Support for FSx for Lustre on `centos8` was removed in AWS ParallelCluster version 2.10.4. Support for FSx for Lustre on `ubuntu2004` was added in AWS ParallelCluster version 2.11.0. Support for FSx for Lustre on `centos8` was added in AWS ParallelCluster version 2.10.0. Support for FSx for Lustre on `alinux2`, `ubuntu1604`, and `ubuntu1804` was added in AWS ParallelCluster version 2.6.0. Support for FSx for Lustre on `centos7` was added in AWS ParallelCluster version 2.4.0.

If using an existing file system, it must be associated to a security group that allows inbound TCP traffic to port `988`. Setting the source to `0.0.0.0/0` on a security group rule provides client access from all the IP ranges within your VPC security group for the protocol and port range for that rule. To further limit access to your file systems, we recommend using more restrictive sources for your security group rules. For example, you can use more specific CIDR ranges, IP addresses, or security group IDs. This is done automatically when not using [`vpc_security_group_id`](vpc-section.md#vpc-security-group-id).

To use an existing Amazon FSx file system for long-term permanent storage that's independent of the cluster life cycle, specify [`fsx_fs_id`](#fsx-fs-id).

If you don't specify [`fsx_fs_id`](#fsx-fs-id), AWS ParallelCluster creates the FSx for Lustre file system from the `[fsx]` settings when it creates the cluster and deletes the file system and data when the cluster is deleted.

For more information, see [Best practices: moving a cluster to a new AWS ParallelCluster minor or patch version](best-practices.md#best-practices-cluster-upgrades).

The format is `[fsx fsx-name]`. *fsx-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[fsx fs]
shared_dir = /fsx
fsx_fs_id = fs-073c3803dca3e28a6
```

To create and configure a new file system, use the following parameters:

```
[fsx fs]
shared_dir = /fsx
storage_capacity = 3600
imported_file_chunk_size = 1024
export_path = s3://bucket/folder
import_path = s3://bucket
weekly_maintenance_start_time = 1:00:00
```

**Topics**
+ [`auto_import_policy`](#fsx-auto-import-policy)
+ [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days)
+ [`copy_tags_to_backups`](#fsx-copy-tags-to-backups)
+ [`daily_automatic_backup_start_time`](#fsx-daily-automatic-backup-start-time)
+ [`data_compression_type`](#fsx-data-compression-type)
+ [`deployment_type`](#fsx-deployment-type)
+ [`drive_cache_type`](#fsx-drive-cache-type)
+ [`export_path`](#fsx-export-path)
+ [`fsx_backup_id`](#fsx-backup-id)
+ [`fsx_fs_id`](#fsx-fs-id)
+ [`fsx_kms_key_id`](#fsx-kms-key-id)
+ [`import_path`](#fsx-import-path)
+ [`imported_file_chunk_size`](#fsx-imported-file-chunk-size)
+ [`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput)
+ [`shared_dir`](#fsx-shared-dir)
+ [`storage_capacity`](#fsx-storage-capacity)
+ [`storage_type`](#fsx-storage-type)
+ [`weekly_maintenance_start_time`](#fsx-weekly-maintenance-start-time)

## `auto_import_policy`
<a name="fsx-auto-import-policy"></a>

**(Optional)** Specifies the automatic import policy for reflecting changes in the S3 bucket used to create the FSx for Lustre file system. The possible values are the following:

`NEW`  
FSx for Lustre automatically imports directory listings of any new objects that are added to the linked S3 bucket that don't currently exist in the FSx for Lustre file system. 

`NEW_CHANGED`  
FSx for Lustre automatically imports file and directory listings of any new objects that are added to the S3 bucket and any existing objects that are changed in the S3 bucket. 

This corresponds to the [AutoImportPolicy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-autoimportpolicy) property. For more information, see [Automatically import updates from your S3 bucket](https://docs.aws.amazon.com/fsx/latest/LustreGuide/autoimport-data-repo.html) in the *Amazon FSx for Lustre User Guide*. When the [`auto_import_policy`](#fsx-auto-import-policy) parameter is specified, the [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days), [`copy_tags_to_backups`](#fsx-copy-tags-to-backups), [`daily_automatic_backup_start_time`](#fsx-daily-automatic-backup-start-time), and [`fsx_backup_id`](#fsx-backup-id) parameters must not be specified.

If the `auto_import_policy` setting isn't specified, automatic imports are disabled. FSx for Lustre only updates file and directory listings from the linked S3 bucket when the file system is created.

```
auto_import_policy = NEW_CHANGED
```

**Note**  
Support for [`auto_import_policy`](#fsx-auto-import-policy) was added in AWS ParallelCluster version 2.10.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `automatic_backup_retention_days`
<a name="fsx-automatic-backup-retention-days"></a>

**(Optional)** Specifies the number of days to retain automatic backups. This is only valid for use with `PERSISTENT_1` deployment types. When the [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days) parameter is specified, the [`auto_import_policy`](#fsx-auto-import-policy), [`export_path`](#fsx-export-path), [`import_path`](#fsx-import-path), and [`imported_file_chunk_size`](#fsx-imported-file-chunk-size) parameters must not be specified. This corresponds to the [AutomaticBackupRetentionDays](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-automaticbackupretentiondays) property.

The default value is `0`, which disables automatic backups. The possible values are integers from 0 to 35, inclusive.

```
automatic_backup_retention_days = 35
```

**Note**  
Support for [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days) was added in AWS ParallelCluster version 2.8.0.

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `copy_tags_to_backups`
<a name="fsx-copy-tags-to-backups"></a>

**(Optional)** Specifies whether tags for the file system are copied to backups. This is only valid for use with `PERSISTENT_1` deployment types. When the [`copy_tags_to_backups`](#fsx-copy-tags-to-backups) parameter is specified, the [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days) parameter must be specified with a value greater than 0, and the [`auto_import_policy`](#fsx-auto-import-policy), [`export_path`](#fsx-export-path), [`import_path`](#fsx-import-path), and [`imported_file_chunk_size`](#fsx-imported-file-chunk-size) parameters must not be specified. This corresponds to the [CopyTagsToBackups](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-copytagstobackups) property.

The default value is `false`.

```
copy_tags_to_backups = true
```

**Note**  
Support for [`copy_tags_to_backups`](#fsx-copy-tags-to-backups) was added in AWS ParallelCluster version 2.8.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `daily_automatic_backup_start_time`
<a name="fsx-daily-automatic-backup-start-time"></a>

**(Optional)** Specifies the time of day (UTC) to start automatic backups. This is only valid for use with `PERSISTENT_1` deployment types. When the [`daily_automatic_backup_start_time`](#fsx-daily-automatic-backup-start-time) parameter is specified, the [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days) must be specified with a value greater than 0, and the [`auto_import_policy`](#fsx-auto-import-policy), [`export_path`](#fsx-export-path), [`import_path`](#fsx-import-path), and [`imported_file_chunk_size`](#fsx-imported-file-chunk-size) parameters must not be specified. This corresponds to the [DailyAutomaticBackupStartTime](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-dailyautomaticbackupstarttime) property.

The format is `HH:MM`, where `HH` is the zero-padded hour of the day (00-23) and `MM` is the zero-padded minute of the hour. For example, 1:03 A.M. UTC is specified as follows.

```
daily_automatic_backup_start_time = 01:03
```

The default value is a random time between `00:00` and `23:59`.
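The `HH:MM` format above can be checked with a regular expression. This is an illustrative sketch, not part of AWS ParallelCluster.

```python
import re

# Hypothetical format check for the zero-padded HH:MM value described above.
TIME_RE = re.compile(r"^([01]\d|2[0-3]):[0-5]\d$")

def is_valid_backup_start_time(value: str) -> bool:
    return bool(TIME_RE.match(value))

print(is_valid_backup_start_time("01:03"))  # True
```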

**Note**  
Support for [`daily_automatic_backup_start_time`](#fsx-daily-automatic-backup-start-time) was added in AWS ParallelCluster version 2.8.0.

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `data_compression_type`
<a name="fsx-data-compression-type"></a>

**(Optional)** Specifies the FSx for Lustre data compression type. This corresponds to the [DataCompressionType](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-datacompressiontype) property. For more information, see [FSx for Lustre data compression](https://docs.aws.amazon.com/fsx/latest/LustreGuide/data-compression.html) in the *Amazon FSx for Lustre User Guide*.

The only valid value is `LZ4`. To disable data compression, remove the [`data_compression_type`](#fsx-data-compression-type) parameter.

```
data_compression_type = LZ4
```

**Note**  
Support for [`data_compression_type`](#fsx-data-compression-type) was added in AWS ParallelCluster version 2.11.0.

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `deployment_type`
<a name="fsx-deployment-type"></a>

**(Optional)** Specifies the FSx for Lustre deployment type. This corresponds to the [DeploymentType](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-deploymenttype) property. For more information, see [FSx for Lustre deployment options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html) in the *Amazon FSx for Lustre User Guide*. Choose a scratch deployment type for temporary storage and shorter-term processing of data. `SCRATCH_2` is the latest generation of scratch file systems. It offers higher burst throughput over baseline throughput and the in-transit encryption of data.

The valid values are `SCRATCH_1`, `SCRATCH_2`, and `PERSISTENT_1`.

`SCRATCH_1`  
The default deployment type for FSx for Lustre. With this deployment type, the [`storage_capacity`](#fsx-storage-capacity) setting has possible values of 1200, 2400, and any multiple of 3600. Support for `SCRATCH_1` was added in AWS ParallelCluster version 2.4.0.

`SCRATCH_2`  
The latest generation of scratch file systems. It supports up to six times the baseline throughput for spiky workloads. It also supports in-transit encryption of data for supported instance types in supported AWS Regions. For more information, see [Encrypting data in transit](https://docs.aws.amazon.com/fsx/latest/LustreGuide/encryption-in-transit-fsxl.html) in the *Amazon FSx for Lustre User Guide*. With this deployment type, the [`storage_capacity`](#fsx-storage-capacity) setting has possible values of 1200 and any multiple of 2400. Support for `SCRATCH_2` was added in AWS ParallelCluster version 2.6.0.

`PERSISTENT_1`  
Designed for longer-term storage. The file servers are highly available and the data is replicated within the file systems' AWS Availability Zone. It supports in-transit encryption of data for supported instance types. With this deployment type, the [`storage_capacity`](#fsx-storage-capacity) setting has possible values of 1200 and any multiple of 2400. Support for `PERSISTENT_1` was added in AWS ParallelCluster version 2.6.0.

The default value is `SCRATCH_1`.

```
deployment_type = SCRATCH_2
```

**Note**  
Support for [`deployment_type`](#fsx-deployment-type) was added in AWS ParallelCluster version 2.6.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)
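The per-deployment-type `storage_capacity` rules quoted above can be expressed directly. This is a hypothetical validation sketch, not part of AWS ParallelCluster.

```python
# Hypothetical check mirroring the storage_capacity rules quoted above:
# SCRATCH_1 accepts 1200, 2400, and any multiple of 3600;
# SCRATCH_2 and PERSISTENT_1 accept 1200 and any multiple of 2400.
def is_valid_storage_capacity(deployment_type: str, capacity_gib: int) -> bool:
    if deployment_type == "SCRATCH_1":
        return capacity_gib in (1200, 2400) or (
            capacity_gib > 0 and capacity_gib % 3600 == 0)
    if deployment_type in ("SCRATCH_2", "PERSISTENT_1"):
        return capacity_gib == 1200 or (
            capacity_gib > 0 and capacity_gib % 2400 == 0)
    return False

print(is_valid_storage_capacity("SCRATCH_2", 4800))  # True
```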

## `drive_cache_type`
<a name="fsx-drive-cache-type"></a>

**(Optional)** Specifies that the file system has an SSD drive cache. This can only be set if the [`storage_type`](#fsx-storage-type) setting is set to `HDD`. This corresponds to the [DriveCacheType](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-drivecachetype) property. For more information, see [FSx for Lustre deployment options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html) in the *Amazon FSx for Lustre User Guide*.

The only valid value is `READ`. To disable the SSD drive cache, don’t specify the `drive_cache_type` setting.

```
drive_cache_type = READ
```

**Note**  
Support for [`drive_cache_type`](#fsx-drive-cache-type) was added in AWS ParallelCluster version 2.10.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `export_path`
<a name="fsx-export-path"></a>

**(Optional)** Specifies the Amazon S3 path where the root of your file system is exported. When the [`export_path`](#fsx-export-path) parameter is specified, the [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days), [`copy_tags_to_backups`](#fsx-copy-tags-to-backups), [`daily_automatic_backup_start_time`](#fsx-daily-automatic-backup-start-time), and [`fsx_backup_id`](#fsx-backup-id) parameters must not be specified. This corresponds to the [ExportPath](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-exportpath) property. File data and metadata isn't automatically exported to the `export_path`. For information about exporting data and metadata, see [Exporting changes to the data repository](https://docs.aws.amazon.com/fsx/latest/LustreGuide/export-changed-data-meta-dra.html) in the *Amazon FSx for Lustre User Guide*.

The default value is `s3://import-bucket/FSxLustre[creation-timestamp]`, where `import-bucket` is the bucket provided in the [`import_path`](#fsx-import-path) parameter.

```
export_path = s3://bucket/folder
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `fsx_backup_id`
<a name="fsx-backup-id"></a>

**(Optional)** Specifies the ID of the backup to use for restoring the file system from an existing backup. When the [`fsx_backup_id`](#fsx-backup-id) parameter is specified, the [`auto_import_policy`](#fsx-auto-import-policy), [`deployment_type`](#fsx-deployment-type), [`export_path`](#fsx-export-path), [`fsx_kms_key_id`](#fsx-kms-key-id), [`import_path`](#fsx-import-path), [`imported_file_chunk_size`](#fsx-imported-file-chunk-size), [`storage_capacity`](#fsx-storage-capacity), and [`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput) parameters must not be specified. These parameters are read from the backup.

This corresponds to the [BackupId](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-fsx-filesystem.html#cfn-fsx-filesystem-backupid) property.

```
fsx_backup_id = backup-fedcba98
```

**Note**  
Support for [`fsx_backup_id`](#fsx-backup-id) was added in AWS ParallelCluster version 2.8.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `fsx_fs_id`
<a name="fsx-fs-id"></a>

**(Optional)** Attaches an existing FSx for Lustre file system.

If this option is specified, only the [`shared_dir`](#fsx-shared-dir) and [`fsx_fs_id`](#fsx-fs-id) settings in the [`[fsx]` section](#fsx-section) are used and any other settings in the [`[fsx]` section](#fsx-section) are ignored.

```
fsx_fs_id = fs-073c3803dca3e28a6
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `fsx_kms_key_id`
<a name="fsx-kms-key-id"></a>

**(Optional)** Specifies the key ID of your AWS Key Management Service (AWS KMS) customer managed key.

This key is used to encrypt the data in your file system at rest.

This must be used with a custom [`ec2_iam_role`](cluster-definition.md#ec2-iam-role). For more information, see [Disk encryption with a custom KMS Key](tutorials_04_encrypted_kms_fs.md). This corresponds to the [KmsKeyId](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystem.html#FSx-CreateFileSystem-request-KmsKeyId) parameter in the *Amazon FSx API Reference*.

```
fsx_kms_key_id = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```

**Note**  
Support for [`fsx_kms_key_id`](#fsx-kms-key-id) was added in AWS ParallelCluster version 2.6.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `import_path`
<a name="fsx-import-path"></a>

**(Optional)** Specifies the S3 bucket to load data from into the file system and serve as the export bucket. For more information, see [`export_path`](#fsx-export-path). If you specify the [`import_path`](#fsx-import-path) parameter, the [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days), [`copy_tags_to_backups`](#fsx-copy-tags-to-backups), [`daily_automatic_backup_start_time`](#fsx-daily-automatic-backup-start-time), and [`fsx_backup_id`](#fsx-backup-id) parameters must not be specified. This corresponds to the [ImportPath](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystemLustreConfiguration.html#FSx-Type-CreateFileSystemLustreConfiguration-ImportPath) parameter in the *Amazon FSx API Reference*.

Import occurs on cluster creation. For more information, see [Importing data from your data repository](https://docs.aws.amazon.com/fsx/latest/LustreGuide/importing-files.html) in the *Amazon FSx for Lustre User Guide*. On import, only file metadata (name, ownership, timestamp, and permissions) is imported. File data isn't imported from the S3 bucket until the file is first accessed. For information about pre-loading the file contents, see [Pre-loading files into your file system](https://docs.aws.amazon.com/fsx/latest/LustreGuide/preload-file-contents-hsm-dra.html) in the *Amazon FSx for Lustre User Guide*.

If a value isn't provided, the file system is empty.

```
import_path = s3://bucket
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `imported_file_chunk_size`
<a name="fsx-imported-file-chunk-size"></a>

**(Optional)** Determines the stripe count and the maximum amount of data for each file (in MiB) stored on a single physical disk for files that are imported from a data repository (using [`import_path`](#fsx-import-path)). The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system. When the [`imported_file_chunk_size`](#fsx-imported-file-chunk-size) parameter is specified, the [`automatic_backup_retention_days`](#fsx-automatic-backup-retention-days), [`copy_tags_to_backups`](#fsx-copy-tags-to-backups), [`daily_automatic_backup_start_time`](#fsx-daily-automatic-backup-start-time), and [`fsx_backup_id`](#fsx-backup-id) parameters must not be specified. This corresponds to the [ImportedFileChunkSize](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-importedfilechunksize) property.

The chunk size default is `1024` (1 GiB), and it can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
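The interaction between chunk size and stripe count can be sketched as follows. This is an illustrative Python calculation only, not part of ParallelCluster; the disk count is a made-up example value.

```python
import math

def stripe_count(file_size_mib, chunk_size_mib=1024, total_disks=10):
    """Rough estimate: an imported file is striped across one disk per
    chunk, capped by the number of disks that make up the file system.
    total_disks=10 is a hypothetical example value."""
    chunks = math.ceil(file_size_mib / chunk_size_mib)
    return min(chunks, total_disks)

# A 10 GiB file with the default 1 GiB chunk size spans 10 chunks.
print(stripe_count(10 * 1024))   # -> 10
# A 100 GiB file is capped by the 10 disks in this example.
print(stripe_count(100 * 1024))  # -> 10
```

A larger chunk size therefore concentrates each file on fewer disks; a smaller one spreads it wider, up to the disk count.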

```
imported_file_chunk_size = 1024
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `per_unit_storage_throughput`
<a name="fsx-per-unit-storage-throughput"></a>

**(Required for `PERSISTENT_1` deployment types)** For the `PERSISTENT_1` [`deployment_type`](#fsx-deployment-type), describes the amount of read and write throughput for each 1 tebibyte (TiB) of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the [`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput) (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of [`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput) yields 120 MB/s of file system throughput. You pay for the amount of throughput that you provision. This corresponds to the [PerUnitStorageThroughput](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-perunitstoragethroughput) property.

The possible values depend on the value of the [`storage_type`](#fsx-storage-type) setting.

`storage_type = SSD`  
The possible values are 50, 100, and 200.

`storage_type = HDD`  
The possible values are 12 and 40.

```
per_unit_storage_throughput = 200
```
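The throughput calculation described above can be expressed as a short Python sketch. This is illustrative only, not part of ParallelCluster.

```python
def fsx_throughput_mbs(capacity_tib, per_unit_storage_throughput):
    """Aggregate baseline throughput (MB/s) scales linearly with
    storage capacity (TiB) times the provisioned MB/s/TiB."""
    return capacity_tib * per_unit_storage_throughput

# The example from the text: 2.4 TiB at 50 MB/s/TiB.
print(fsx_throughput_mbs(2.4, 50))  # -> 120.0
```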

**Note**  
Support for [`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput) was added in AWS ParallelCluster version 2.6.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `shared_dir`
<a name="fsx-shared-dir"></a>

**(Required)** Defines the mount point for the FSx for Lustre file system on the head and compute nodes.

Don't use `NONE` or `/NONE` as the shared directory.

The following example mounts the file system at `/fsx`.

```
shared_dir = /fsx
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `storage_capacity`
<a name="fsx-storage-capacity"></a>

**(Required)** Specifies the storage capacity of the file system, in GiB. This corresponds to the [StorageCapacity](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-fsx-filesystem.html#cfn-fsx-filesystem-storagecapacity) property.

The storage capacity possible values vary based on the [`deployment_type`](#fsx-deployment-type) setting.

`SCRATCH_1`  
The possible values are 1200, 2400, and any multiple of 3600.

`SCRATCH_2`  
The possible values are 1200 and any multiple of 2400.

`PERSISTENT_1`  
The possible values vary based on the values of other settings.    
`storage_type = SSD`  
The possible values are 1200 and any multiple of 2400.  
`storage_type = HDD`  
The possible values vary based on the [`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput) setting.    
`per_unit_storage_throughput = 12`  
The possible values are any multiple of 6000.  
`per_unit_storage_throughput = 40`  
The possible values are any multiple of 1800.

```
storage_capacity = 7200
```
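The capacity rules for the scratch deployment types can be checked with a small helper. This is an illustrative sketch, not part of ParallelCluster, and it covers only the `SCRATCH_1` and `SCRATCH_2` rules quoted above.

```python
def valid_scratch2_capacity(gib):
    """SCRATCH_2 (and PERSISTENT_1 with SSD): 1200, or any multiple of 2400."""
    return gib == 1200 or (gib > 0 and gib % 2400 == 0)

def valid_scratch1_capacity(gib):
    """SCRATCH_1: 1200, 2400, or any multiple of 3600."""
    return gib in (1200, 2400) or (gib > 0 and gib % 3600 == 0)

print(valid_scratch2_capacity(7200))  # -> True
print(valid_scratch2_capacity(3600))  # -> False
```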

**Note**  
For AWS ParallelCluster version 2.5.0 and 2.5.1, [`storage_capacity`](#fsx-storage-capacity) supported possible values of 1200, 2400, and any multiple of 3600. For versions earlier than AWS ParallelCluster version 2.5.0, [`storage_capacity`](#fsx-storage-capacity) had a minimum size of 3600.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `storage_type`
<a name="fsx-storage-type"></a>

**(Optional)** Specifies the storage type of the file system. This corresponds to the [StorageType](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-fsx-filesystem.html#cfn-fsx-filesystem-storagetype) property. The possible values are `SSD` and `HDD`. The default is `SSD`.

The storage type changes the possible values of other settings.

`storage_type = SSD`   
Specifies a solid-state drive (SSD) storage type.  
`storage_type = SSD` changes the possible values of several other settings.    
[`drive_cache_type`](#fsx-drive-cache-type)  
This setting cannot be specified.  
[`deployment_type`](#fsx-deployment-type)  
This setting can be set to `SCRATCH_1`, `SCRATCH_2`, or `PERSISTENT_1`.  
[`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput)  
This setting must be specified if [`deployment_type`](#fsx-deployment-type) is set to `PERSISTENT_1`. The possible values are 50, 100, or 200.  
[`storage_capacity`](#fsx-storage-capacity)  
This setting must be specified. The possible values vary based on [`deployment_type`](#fsx-deployment-type).    
`deployment_type = SCRATCH_1`  
[`storage_capacity`](#fsx-storage-capacity) can be 1200, 2400, or any multiple of 3600.  
`deployment_type = SCRATCH_2` or `deployment_type = PERSISTENT_1`  
[`storage_capacity`](#fsx-storage-capacity) can be 1200 or any multiple of 2400.

`storage_type = HDD`  
Specifies a hard disk drive (HDD) storage type.  
`storage_type = HDD` changes the possible values of other settings.    
[`drive_cache_type`](#fsx-drive-cache-type)  
This setting can be specified.  
[`deployment_type`](#fsx-deployment-type)  
This setting must be set to `PERSISTENT_1`.  
[`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput)  
This setting must be specified. The possible values are 12 and 40.  
[`storage_capacity`](#fsx-storage-capacity)  
This setting must be specified. The possible values vary based on the [`per_unit_storage_throughput`](#fsx-per-unit-storage-throughput) setting.    
`per_unit_storage_throughput = 12`  
[`storage_capacity`](#fsx-storage-capacity) can be any multiple of 6000.  
`per_unit_storage_throughput = 40`  
[`storage_capacity`](#fsx-storage-capacity) can be any multiple of 1800.

```
storage_type = SSD
```

**Note**  
Support for the [`storage_type`](#fsx-storage-type) setting was added in AWS ParallelCluster version 2.10.0.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `weekly_maintenance_start_time`
<a name="fsx-weekly-maintenance-start-time"></a>

**(Optional)** Specifies a preferred time to perform weekly maintenance, in the UTC time zone. This corresponds to the [WeeklyMaintenanceStartTime](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-weeklymaintenancestarttime) property.

The format is `[day of week]:[hour of day]:[minute of hour]`, where day `1` is Monday. For example, Monday at midnight is specified as follows.

```
weekly_maintenance_start_time = 1:00:00
```
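Building a window string in this format can be sketched in Python. This is illustrative only, not part of ParallelCluster; it simply zero-pads the hour and minute.

```python
def maintenance_window(day_of_week, hour, minute):
    """Build a [day]:[hour]:[minute] string; day 1 = Monday, times in UTC."""
    return f"{day_of_week}:{hour:02d}:{minute:02d}"

print(maintenance_window(1, 0, 0))    # -> 1:00:00  (Monday at midnight)
print(maintenance_window(7, 23, 30))  # -> 7:23:30  (Sunday, 23:30 UTC)
```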

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

# `[queue]` section
<a name="queue-section"></a>

Defines configuration settings for a single queue. [`[queue]` sections](#queue-section) are only supported when [`scheduler`](cluster-definition.md#scheduler) is set to `slurm`.

The format is `[queue <queue-name>]`. *queue-name* must start with a lowercase letter, contain no more than 30 characters, and only contain lowercase letters, numbers, and hyphens (-).

```
[queue q1]
compute_resource_settings = i1,i2
placement_group = DYNAMIC
enable_efa = true
disable_hyperthreading = false
compute_type = spot
```
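The queue-name rules above can be expressed as a regular expression. This validation sketch is illustrative only; it isn't part of ParallelCluster.

```python
import re

# Starts with a lowercase letter; then lowercase letters, digits, or
# hyphens; 30 characters at most in total.
QUEUE_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,29}$")

def valid_queue_name(name):
    return QUEUE_NAME_RE.fullmatch(name) is not None

print(valid_queue_name("q1"))      # -> True
print(valid_queue_name("Queue1"))  # -> False (uppercase not allowed)
```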

**Note**  
Support for the [`[queue]` section](#queue-section) was added in AWS ParallelCluster version 2.9.0.

**Topics**
+ [`compute_resource_settings`](#queue-compute-resource-settings)
+ [`compute_type`](#queue-compute-type)
+ [`disable_hyperthreading`](#queue-disable-hyperthreading)
+ [`enable_efa`](#queue-enable-efa)
+ [`enable_efa_gdr`](#queue-enable-efa-gdr)
+ [`placement_group`](#queue-placement-group)

## `compute_resource_settings`
<a name="queue-compute-resource-settings"></a>

**(Required)** Identifies the [`[compute_resource]` sections](compute-resource-section.md) containing the compute resources configurations for this queue. The section names must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

Up to three (3) [`[compute_resource]` sections](compute-resource-section.md) are supported for each [`[queue]` section](#queue-section).

For example, the following setting specifies that the sections that start `[compute_resource cr1]` and `[compute_resource cr2]` are used.

```
compute_resource_settings = cr1, cr2
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `compute_type`
<a name="queue-compute-type"></a>

**(Optional)** Defines the type of instances to launch for this queue. This setting replaces the [`cluster_type`](cluster-definition.md#cluster-type) setting.

Valid options are `ondemand` and `spot`.

The default value is `ondemand`.

For more information about Spot Instances, see [Working with Spot Instances](spot.md).

**Note**  
Using Spot Instances requires that the `AWSServiceRoleForEC2Spot` service-linked role exist in your account. To create this role in your account using the AWS CLI, run the following command:  

```
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
```
For more information, see [Service-linked role for Spot Instance requests](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#service-linked-roles-spot-instance-requests) in the *Amazon EC2 User Guide*.

The following example uses Spot Instances for the compute nodes in this queue.

```
compute_type = spot
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `disable_hyperthreading`
<a name="queue-disable-hyperthreading"></a>

**(Optional)** Disables hyperthreading on the nodes in this queue. Not all instance types can disable hyperthreading. For a list of instance types that support disabling hyperthreading, see [CPU cores and threads for each CPU core per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-supported-instances-values) in the *Amazon EC2 User Guide*. If the [`disable_hyperthreading`](cluster-definition.md#disable-hyperthreading) setting in the [`[cluster]` section](cluster-definition.md) is defined, then this setting cannot be defined.

The default value is `false`.

```
disable_hyperthreading = true
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `enable_efa`
<a name="queue-enable-efa"></a>

**(Optional)** If set to `true`, specifies that Elastic Fabric Adapter (EFA) is enabled for the nodes in this queue. To view the list of EC2 instances that support EFA, see [Supported instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html#efa-instance-types) in the *Amazon EC2 User Guide for Linux Instances*. If the [`enable_efa`](cluster-definition.md#enable-efa) setting in the [`[cluster]` section](cluster-definition.md) is defined, then this setting cannot be defined. A cluster placement group should be used to minimize latencies between instances. For more information, see [`placement`](cluster-definition.md#placement) and [`placement_group`](cluster-definition.md#placement-group).

```
enable_efa = true
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `enable_efa_gdr`
<a name="queue-enable-efa-gdr"></a>

**(Optional)** Starting with AWS ParallelCluster version 2.11.3, this setting has no effect. Elastic Fabric Adapter (EFA) support for GPUDirect RDMA (remote direct memory access) is always enabled for the compute nodes when it's supported by the instance type.

**Note**  
AWS ParallelCluster versions 2.10.0 through 2.11.2: If set to `true`, specifies that Elastic Fabric Adapter (EFA) GPUDirect RDMA (remote direct memory access) is enabled for the nodes in this queue. Setting this to `true` requires that the [`enable_efa`](#queue-enable-efa) setting is also set to `true`. EFA GPUDirect RDMA is supported on `p4d.24xlarge` instances running `alinux2`, `centos7`, `ubuntu1804`, or `ubuntu2004`. If the [`enable_efa_gdr`](cluster-definition.md#enable-efa-gdr) setting in the [`[cluster]` section](cluster-definition.md) is defined, then this setting cannot be defined. A cluster placement group should be used to minimize latencies between instances. For more information, see [`placement`](cluster-definition.md#placement) and [`placement_group`](cluster-definition.md#placement-group).

The default value is `false`.

```
enable_efa_gdr = true
```

**Note**  
Support for `enable_efa_gdr` was added in AWS ParallelCluster version 2.10.0.

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `placement_group`
<a name="queue-placement-group"></a>

**(Optional)** If present, defines the placement group for this queue. This setting replaces the [`placement_group`](cluster-definition.md#placement-group) setting.

Valid options are the following values:
+ `DYNAMIC`
+ An existing Amazon EC2 cluster placement group name

When set to `DYNAMIC`, a unique placement group for this queue is created and deleted as part of the cluster stack.

For more information about placement groups, see [Placement groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) in the *Amazon EC2 User Guide*. If the same placement group is used for different instance types, it’s more likely that the request might fail due to an insufficient capacity error. For more information, see [Insufficient instance capacity](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html#troubleshooting-launch-capacity) in the *Amazon EC2 User Guide*.

There is no default value.

Not all instance types support cluster placement groups. For example, `t2.micro` doesn't support cluster placement groups. For information about the list of instance types that support cluster placement groups, see [Cluster placement group rules and limitations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-cluster) in the *Amazon EC2 User Guide*. See [Placement groups and instance launch issues](troubleshooting.md#placement-groups-and-instance-launch-issues) for tips when working with placement groups.

```
placement_group = DYNAMIC
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

# `[raid]` section
<a name="raid-section"></a>

Defines configuration settings for a RAID array that's built from a number of identical Amazon EBS volumes. The RAID drive is mounted on the head node and is exported to compute nodes with NFS.

The format is `[raid raid-name]`. *raid-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[raid rs]
shared_dir = raid
raid_type = 1
num_of_raid_volumes = 2
encrypted = true
```

**Topics**
+ [`shared_dir`](#raid-shared-dir)
+ [`ebs_kms_key_id`](#raid-ebs_kms_key_id)
+ [`encrypted`](#raid-encrypted)
+ [`num_of_raid_volumes`](#num-of-raid-volumes)
+ [`raid_type`](#raid-type)
+ [`volume_iops`](#raid-volume-iops)
+ [`volume_size`](#raid-volume-size)
+ [`volume_throughput`](#raid-volume-throughput)
+ [`volume_type`](#raid-volume-type)

## `shared_dir`
<a name="raid-shared-dir"></a>

**(Required)** Defines the mount point for the RAID array on the head and compute nodes.

The RAID drive is created only if this parameter is specified.

Don't use `NONE` or `/NONE` as the shared directory.

The following example mounts the array at `/raid`.

```
shared_dir = raid
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ebs_kms_key_id`
<a name="raid-ebs_kms_key_id"></a>

**(Optional)** Specifies a custom AWS KMS key to use for encryption.

This parameter must be used together with `encrypted = true`, and it must have a custom [`ec2_iam_role`](cluster-definition.md#ec2-iam-role).

For more information, see [Disk encryption with a custom KMS Key](tutorials_04_encrypted_kms_fs.md).

```
ebs_kms_key_id = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `encrypted`
<a name="raid-encrypted"></a>

**(Optional)** Specifies whether the file system is encrypted.

The default value is `false`.

```
encrypted = false
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `num_of_raid_volumes`
<a name="num-of-raid-volumes"></a>

**(Optional)** Defines the number of Amazon EBS volumes to assemble the RAID array from.

Minimum number of volumes is `2`.

Maximum number of volumes is `5`.

The default value is `2`.

```
num_of_raid_volumes = 2
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `raid_type`
<a name="raid-type"></a>

**(Required)** Defines the RAID type for the RAID array.

The RAID drive is created only if this parameter is specified.

Valid options are the following values:
+ `0`
+ `1`

For more information on RAID types, see [RAID info](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html) in the *Amazon EC2 User Guide*.

The following example creates a RAID `0` array:

```
raid_type = 0
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `volume_iops`
<a name="raid-volume-iops"></a>

**(Optional)** Defines the number of IOPS for `io1`, `io2`, and `gp3` type volumes.

The default value, the supported values, and the maximum ratio of `volume_iops` to `volume_size` vary by [`volume_type`](#raid-volume-type) and [`volume_size`](#raid-volume-size).

`volume_type` = `io1`  
Default `volume_iops` = 100  
Supported values `volume_iops` = 100–64000 †  
Maximum `volume_iops` to `volume_size` ratio = 50 IOPS per GiB. 5000 IOPS requires a `volume_size` of at least 100 GiB.

`volume_type` = `io2`  
Default `volume_iops` = 100  
Supported values `volume_iops` = 100–64000 (256000 for `io2` Block Express volumes) †  
Maximum `volume_iops` to `volume_size` ratio = 500 IOPS per GiB. 5000 IOPS requires a `volume_size` of at least 10 GiB.

`volume_type` = `gp3`  
Default `volume_iops` = 3000  
Supported values `volume_iops` = 3000–16000  
Maximum `volume_iops` to `volume_size` ratio = 500 IOPS per GiB. 5000 IOPS requires a `volume_size` of at least 10 GiB.

```
volume_iops = 3000
```

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

† Maximum IOPS is guaranteed only on [Instances built on the Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) provisioned with more than 32,000 IOPS. Other instances guarantee up to 32,000 IOPS. Older `io1` volumes might not reach full performance unless you [modify the volume](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html). `io2` Block Express volumes support `volume_iops` values up to 256000. For more information, see [`io2` Block Express volumes (In preview)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#io2-block-express) in the *Amazon EC2 User Guide*.
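The IOPS-to-size ratios above imply a minimum volume size for a given IOPS target. The helper below is an illustrative sketch, not part of ParallelCluster, using the maximum ratios quoted in this section.

```python
def min_volume_size_gib(volume_type, volume_iops):
    """Smallest volume_size (GiB) that satisfies the maximum
    volume_iops-to-volume_size ratio for each volume type."""
    max_ratio = {"io1": 50, "io2": 500, "gp3": 500}[volume_type]
    # Ceiling division: round up to the next whole GiB.
    return -(-volume_iops // max_ratio)

# The examples from the text: 5000 IOPS.
print(min_volume_size_gib("io1", 5000))  # -> 100
print(min_volume_size_gib("gp3", 5000))  # -> 10
```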

## `volume_size`
<a name="raid-volume-size"></a>

**(Optional)** Defines the size of the volume to be created, in GiB.

The default value and the supported values vary by [`volume_type`](#raid-volume-type).

`volume_type` = `standard`  
Default `volume_size` = 20 GiB  
Supported values `volume_size` = 1–1024 GiB

`volume_type` = `gp2`, `io1`, `io2`, and `gp3`  
Default `volume_size` = 20 GiB  
Supported values `volume_size` = 1–16384 GiB

`volume_type` = `sc1` and `st1`  
Default `volume_size` = 500 GiB  
Supported values `volume_size` = 500–16384 GiB

```
volume_size = 20
```

**Note**  
Before AWS ParallelCluster version 2.10.1, the default value for all volume types was 20 GiB.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `volume_throughput`
<a name="raid-volume-throughput"></a>

**(Optional)** Defines the throughput for `gp3` volume types, in MiB/s.

The default value is `125`.

Supported values `volume_throughput` = 125–1000 MiB/s

The ratio of `volume_throughput` to `volume_iops` can be no more than 0.25. The maximum throughput of 1000 MiB/s requires that the `volume_iops` setting is at least 4000.

```
volume_throughput = 1000
```
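The 0.25 ratio constraint can be turned into a minimum-IOPS calculation. This is an illustrative sketch only, not part of ParallelCluster.

```python
def min_iops_for_throughput(throughput_mibs, max_ratio=0.25):
    """gp3: volume_throughput / volume_iops must not exceed 0.25,
    so a given throughput needs at least throughput / 0.25 IOPS."""
    return int(throughput_mibs / max_ratio)

print(min_iops_for_throughput(1000))  # -> 4000 (matches the text)
print(min_iops_for_throughput(125))   # -> 500
```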

**Note**  
Support for `volume_throughput` was added in AWS ParallelCluster version 2.10.1.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `volume_type`
<a name="raid-volume-type"></a>

**(Optional)** Defines the type of volume to build.

Valid options are the following values:

`gp2`, `gp3`  
General purpose SSD

`io1`, `io2`  
Provisioned IOPS SSD

`st1`  
Throughput optimized HDD

`sc1`  
Cold HDD

`standard`  
Previous generation magnetic

For more information, see [Amazon EBS volume types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) in the *Amazon EC2 User Guide*.

The default value is `gp2`.

```
volume_type = io2
```

**Note**  
Support for `gp3` and `io2` was added in AWS ParallelCluster version 2.10.1.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

# `[scaling]` section
<a name="scaling-section"></a>

**Topics**
+ [`scaledown_idletime`](#scaledown-idletime)

Specifies settings that define how the compute nodes scale.

The format is `[scaling scaling-name]`. *scaling-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[scaling custom]
scaledown_idletime = 10
```

## `scaledown_idletime`
<a name="scaledown-idletime"></a>

**(Optional)** Specifies the amount of time in minutes without a job, after which the compute node terminates.

This parameter isn't used if `awsbatch` is the scheduler.

The default value is `10`.

```
scaledown_idletime = 10
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

# `[vpc]` section
<a name="vpc-section"></a>

Specifies Amazon VPC configuration settings. For more information about VPCs, see [What is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) and [Security best practices for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html) in the *Amazon VPC User Guide*.

The format is `[vpc vpc-name]`. *vpc-name* must start with a letter, contain no more than 30 characters, and only contain letters, numbers, hyphens (-), and underscores (_).

```
[vpc public]
vpc_id = vpc-xxxxxx
master_subnet_id = subnet-xxxxxx
```

**Topics**
+ [`additional_sg`](#additional-sg)
+ [`compute_subnet_cidr`](#compute-subnet-cidr)
+ [`compute_subnet_id`](#compute-subnet-id)
+ [`master_subnet_id`](#master-subnet-id)
+ [`ssh_from`](#ssh-from)
+ [`use_public_ips`](#use-public-ips)
+ [`vpc_id`](#vpc-id)
+ [`vpc_security_group_id`](#vpc-security-group-id)

## `additional_sg`
<a name="additional-sg"></a>

**(Optional)** Provides the ID of an additional Amazon VPC security group to attach to all instances.

There is no default value.

```
additional_sg = sg-xxxxxx
```

## `compute_subnet_cidr`
<a name="compute-subnet-cidr"></a>

**(Optional)** Specifies a Classless Inter-Domain Routing (CIDR) block. Use this parameter if you want AWS ParallelCluster to create a compute subnet.

```
compute_subnet_cidr = 10.0.100.0/24
```
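Because AWS ParallelCluster creates the compute subnet from this block, it must fall entirely within the VPC's address range. Python's standard `ipaddress` module offers a quick way to sanity-check that before running `pcluster`; the VPC CIDR below (`10.0.0.0/16`) is a hypothetical example:

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")        # hypothetical VPC CIDR
compute_cidr = ipaddress.ip_network("10.0.100.0/24")  # value from the example above

# The compute subnet must fall entirely within the VPC's address range.
print(compute_cidr.subnet_of(vpc_cidr))  # True when the subnet fits
print(compute_cidr.num_addresses)        # 256 addresses in a /24
```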

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `compute_subnet_id`
<a name="compute-subnet-id"></a>

**(Optional)** Specifies the ID of an existing subnet in which to provision the compute nodes.

If not specified, [`compute_subnet_id`](#compute-subnet-id) uses the value of [`master_subnet_id`](#master-subnet-id).

If the subnet is private, you must set up NAT for web access.

```
compute_subnet_id = subnet-xxxxxx
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `master_subnet_id`
<a name="master-subnet-id"></a>

**(Required)** Specifies the ID of an existing subnet in which to provision the head node.

```
master_subnet_id = subnet-xxxxxx
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `ssh_from`
<a name="ssh-from"></a>

**(Optional)** Specifies a CIDR-formatted IP range to allow SSH access from.

This parameter is used only when AWS ParallelCluster creates the security group.

The default value is `0.0.0.0/0`.

```
ssh_from = 0.0.0.0/0
```
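To confirm that a given client address is covered by the CIDR range you set here, you can again use the `ipaddress` module. The addresses below are illustrative (from the TEST-NET-3 documentation range), not values from your configuration:

```python
import ipaddress

ssh_from = ipaddress.ip_network("203.0.113.0/24")   # example office range
client = ipaddress.ip_address("203.0.113.25")

# SSH is permitted only when the client address falls inside ssh_from.
print(client in ssh_from)
```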

[Update policy: This setting can be changed during an update.](using-pcluster-update.md#update-policy-setting-supported)

## `use_public_ips`
<a name="use-public-ips"></a>

**(Optional)** Defines whether to assign public IP addresses to compute instances.

If set to `true`, an Elastic IP address is associated with the head node.

If set to `false`, whether the head node has a public IP address depends on the value of the "Auto-assign Public IP" subnet configuration parameter.

For examples, see [networking configuration](networking.md).

The default value is `true`.

```
use_public_ips = true
```

**Important**  
By default, all AWS accounts are limited to five (5) Elastic IP addresses for each AWS Region. For more information, see [Elastic IP address limit](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-limit) in the *Amazon EC2 User Guide*.

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update.md#update-policy-compute-fleet)

## `vpc_id`
<a name="vpc-id"></a>

**(Required)** Specifies the ID of the Amazon VPC in which to provision the cluster.

```
vpc_id = vpc-xxxxxx
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update.md#update-policy-fail)

## `vpc_security_group_id`
<a name="vpc-security-group-id"></a>

**(Optional)** Specifies the use of an existing security group for all instances.

There is no default value.

```
vpc_security_group_id = sg-xxxxxx
```

The security group created by AWS ParallelCluster allows SSH access using port 22 from the addresses specified in the [`ssh_from`](#ssh-from) setting, or all IPv4 addresses (`0.0.0.0/0`) if the [`ssh_from`](#ssh-from) setting isn't specified. If Amazon DCV is enabled, then the security group allows access to Amazon DCV using port 8443 (or whatever the [`port`](dcv-section.md#dcv-section-port) setting specifies) from the addresses specified in the [`access_from`](dcv-section.md#dcv-section-access-from) setting, or all IPv4 addresses (`0.0.0.0/0`) if the [`access_from`](dcv-section.md#dcv-section-access-from) setting isn't specified.
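The port-opening behavior described above can be summarized in a short sketch. This is a hypothetical helper for illustration, not `pcluster` code:

```python
def ingress_rules(ssh_from=None, dcv_enabled=False, dcv_port=8443, dcv_access_from=None):
    """Return (port, source CIDR) pairs matching the behavior described above."""
    # Port 22 is opened to ssh_from, or to all IPv4 addresses if unset.
    rules = [(22, ssh_from or "0.0.0.0/0")]
    # If Amazon DCV is enabled, its port is opened to access_from, or to all
    # IPv4 addresses if unset.
    if dcv_enabled:
        rules.append((dcv_port, dcv_access_from or "0.0.0.0/0"))
    return rules
```

For example, with no settings specified, only port 22 is open to `0.0.0.0/0`; with DCV enabled and `ssh_from = 10.0.0.0/8`, port 22 is open to `10.0.0.0/8` and port 8443 to `0.0.0.0/0`.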

**Warning**  
You can change the value of this parameter and update the cluster if [`[cluster]`](cluster-definition.md) [`fsx_settings`](cluster-definition.md#fsx-settings) isn't specified, or if both `fsx_settings` and an existing external FSx for Lustre file system are specified for [`fsx_fs_id`](fsx-section.md#fsx-fs-id) in [`[fsx fs]`](fsx-section.md).  
You can't change the value of this parameter if an AWS ParallelCluster managed FSx for Lustre file system is specified in `fsx_settings` and `[fsx fs]`.

[Update policy: If AWS ParallelCluster managed Amazon FSx for Lustre file systems aren't specified in the configuration, this setting can be changed during an update.](using-pcluster-update.md#update-policy-no-managed-fsx-lustre)

# Examples
<a name="examples"></a>

The following example configurations demonstrate AWS ParallelCluster configurations using the Slurm, SGE, Torque, and AWS Batch schedulers.

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

**Contents**
+ [Slurm Workload Manager (`slurm`)](#example.slurm)
+ [Son of Grid Engine (`sge`) and Torque Resource Manager (`torque`)](#example.torque)
+ [AWS Batch (`awsbatch`)](#example.awsbatch)

## Slurm Workload Manager (`slurm`)
<a name="example.slurm"></a>

The following example launches a cluster with the `slurm` scheduler. The example configuration launches 1 cluster with 2 job queues. The first queue, `spot`, initially has 2 `t3.micro` Spot instances available. It can scale up to a maximum of 10 instances, and scale down to a minimum of 1 instance when no jobs have been run for 10 minutes (adjustable using the [`scaledown_idletime`](scaling-section.md#scaledown-idletime) setting). The second queue, `ondemand`, starts with no instances and can scale up to a maximum of 5 `t3.micro` On-Demand instances.

```
[global]
update_check = true
sanity_check = true
cluster_template = slurm

[aws]
aws_region_name = <your AWS Region>

[vpc public]
master_subnet_id = <your subnet>
vpc_id = <your VPC>

[cluster slurm]
key_name = <your EC2 keypair name>
base_os = alinux2                   # optional, defaults to alinux2
scheduler = slurm
master_instance_type = t3.micro     # optional, defaults to t3.micro
vpc_settings = public
queue_settings = spot,ondemand

[queue spot]
compute_resource_settings = spot_i1
compute_type = spot                 # optional, defaults to ondemand

[compute_resource spot_i1]
instance_type = t3.micro
min_count = 1                       # optional, defaults to 0
initial_count = 2                   # optional, defaults to 0

[queue ondemand]
compute_resource_settings = ondemand_i1

[compute_resource ondemand_i1]
instance_type = t3.micro
max_count = 5                       # optional, defaults to 10
```

## Son of Grid Engine (`sge`) and Torque Resource Manager (`torque`)
<a name="example.torque"></a>

**Note**  
This example only applies to AWS ParallelCluster versions up to and including version 2.11.4. Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers.

The following example launches a cluster with the `torque` or `sge` scheduler. To use SGE, change `scheduler = torque` to `scheduler = sge`. The example configuration allows a maximum of 5 concurrent nodes, and scales down to two when no jobs have run for 10 minutes.

```
[global]
update_check = true
sanity_check = true
cluster_template = torque

[aws]
aws_region_name = <your AWS Region>

[vpc public]
master_subnet_id = <your subnet>
vpc_id = <your VPC>

[cluster torque]
key_name = <your EC2 keypair name>
base_os = alinux2                   # optional, defaults to alinux2
scheduler = torque                  # optional, defaults to sge
master_instance_type = t3.micro     # optional, defaults to t3.micro
vpc_settings = public
initial_queue_size = 2              # optional, defaults to 0
maintain_initial_size = true        # optional, defaults to false
max_queue_size = 5                  # optional, defaults to 10
```

**Note**  
Starting with version 2.11.5, AWS ParallelCluster doesn't support the use of SGE or Torque schedulers. If you use these schedulers in earlier versions, you can continue to use them, but they aren't eligible for future updates or for troubleshooting support from the AWS service and AWS Support teams.

## AWS Batch (`awsbatch`)
<a name="example.awsbatch"></a>

The following example launches a cluster with the `awsbatch` scheduler. It's set to select the best instance type based on your job resource needs.

The example configuration allows a maximum of 40 concurrent vCPUs, and scales down to zero when no jobs have run for 10 minutes (adjustable using the [`scaledown_idletime`](scaling-section.md#scaledown-idletime) setting).

```
[global]
update_check = true
sanity_check = true
cluster_template = awsbatch

[aws]
aws_region_name = <your AWS Region>

[vpc public]
master_subnet_id = <your subnet>
vpc_id = <your VPC>

[cluster awsbatch]
scheduler = awsbatch
compute_instance_type = optimal # optional, defaults to optimal
min_vcpus = 0                   # optional, defaults to 0
desired_vcpus = 0               # optional, defaults to 4
max_vcpus = 40                  # optional, defaults to 20
base_os = alinux2               # optional, defaults to alinux2, controls the base_os of
                                # the head node and the docker image for the compute fleet
key_name = <your EC2 keypair name>
vpc_settings = public
```