

# Configuration of multiple queues
<a name="configuration-of-multiple-queues-v3"></a>

With AWS ParallelCluster version 3, you can configure multiple queues by setting the [`Scheduler`](Scheduling-v3.md#yaml-Scheduling-Scheduler) to `slurm` and specifying more than one queue under [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues) in the configuration file. In this mode, different instance types can coexist in the compute fleet; each queue's instance types are specified in its [`ComputeResources`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-ComputeResources) section. [`ComputeResources`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-ComputeResources) with different instance types are scaled up or down as needed by the [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues).

Multiple *queues* within a single cluster are generally preferred over multiple clusters when the workloads share the same underlying infrastructure and resources (like shared storage, networking, or login nodes). If workloads have similar compute, storage, and networking needs, using multiple queues within a single cluster is more efficient because it allows for resource sharing and avoids unnecessary duplication. This approach simplifies management and reduces overhead, while still allowing for efficient job scheduling and resource allocation. On the other hand, multiple *clusters* should be used when there are strong security, data, or operational isolation requirements between workloads. For example, if you need to manage and operate workloads independently, with different schedules, update cycles, or access policies, multiple clusters are more appropriate.


**Cluster queue and compute resource quotas**  

| Resource | Quota | 
| --- | --- | 
|  [`Slurm queues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues)  |  50 queues per cluster  | 
|  [`Compute resources`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-ComputeResources)  |  50 compute resources per queue; 50 compute resources per cluster  | 

**Node Counts**

Each compute resource in [`ComputeResources`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-ComputeResources) for a queue must have a unique [`Name`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-Name) and [`InstanceType`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-InstanceType). [`MinCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MinCount) and [`MaxCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MaxCount) have default values that define the range of instances for a compute resource, and you can also specify your own values. Each compute resource is composed of static nodes numbered from 1 to the value of [`MinCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MinCount), plus dynamic nodes numbered from 1 to the value of [`MaxCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MaxCount) minus [`MinCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MinCount).
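For example, a compute resource configured with `MinCount: 2` and `MaxCount: 10` keeps two static nodes running at all times and can scale out with up to eight dynamic nodes. The following hypothetical fragment illustrates this:

```
Scheduling:
  Scheduler: slurm
  SlurmQueues:
  - Name: queue1
    ComputeResources:
    - Name: c5xlarge
      InstanceType: c5.xlarge
      MinCount: 2    # two static nodes, always running
      MaxCount: 10   # up to eight additional dynamic nodes
```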

**Example Configuration**

The following is an example of a [Scheduling](Scheduling-v3.md) section for a cluster configuration file. In this configuration there are two queues named `queue1` and `queue2` and each of the queues has [`ComputeResources`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-ComputeResources) with a specified [`MaxCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MaxCount).

```
Scheduling:
  Scheduler: slurm
  SlurmQueues:
  - Name: queue1
    ComputeResources:
    - InstanceType: c5.xlarge
      MaxCount: 5
      Name: c5xlarge
    - InstanceType: c4.xlarge
      MaxCount: 5
      Name: c4xlarge
  - Name: queue2
    ComputeResources:
    - InstanceType: c5.xlarge
      MaxCount: 5
      Name: c5xlarge
```

**Hostnames**

Instances launched into the compute fleet are assigned dynamically, and a hostname is generated for each node. By default, AWS ParallelCluster uses the following hostname format:

`$HOSTNAME=$QUEUE-$STATDYN-$COMPUTE_RESOURCE-$NODENUM`
+ `$QUEUE` is the name of the queue. For example, if the [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues) section has an entry with [`Name`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-Name) set to `queue-name`, then `$QUEUE` is `queue-name`.
+ `$STATDYN` is `st` for static nodes or `dy` for dynamic nodes.
+ `$COMPUTE_RESOURCE` is the [`Name`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-Name) of the compute resource in [`ComputeResources`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-ComputeResources) corresponding to this node.
+ `$NODENUM` is the number of the node. `$NODENUM` is between one (1) and the value of [`MinCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MinCount) for static nodes, and between one (1) and the value of [`MaxCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MaxCount) minus [`MinCount`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-MinCount) for dynamic nodes.

From the example configuration file above, a given node from `queue1` and compute resource `c5xlarge` has a hostname: `queue1-dy-c5xlarge-1`.

Both hostnames and fully-qualified domain names (FQDN) are created using Amazon Route 53 hosted zones. The FQDN is `$HOSTNAME.$CLUSTERNAME.pcluster`, where `$CLUSTERNAME` is the name of the cluster.
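Putting the pieces together, the default hostname and FQDN for a node can be composed as in this illustrative shell sketch (`mycluster` is a hypothetical cluster name):

```shell
# Components of the default hostname format
QUEUE=queue1            # $QUEUE: queue name
STATDYN=dy              # $STATDYN: "st" (static) or "dy" (dynamic)
COMPUTE_RESOURCE=c5xlarge   # $COMPUTE_RESOURCE: compute resource Name
NODENUM=1               # $NODENUM: node number

HOSTNAME="$QUEUE-$STATDYN-$COMPUTE_RESOURCE-$NODENUM"
echo "$HOSTNAME"                      # queue1-dy-c5xlarge-1
echo "$HOSTNAME.mycluster.pcluster"   # FQDN in the Route 53 hosted zone
```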

The same format is also used for the Slurm node names.

You can use the default Amazon EC2 hostname of the instance backing a compute node, instead of the default AWS ParallelCluster hostname format, by setting the [`UseEc2Hostnames`](Scheduling-v3.md#yaml-Scheduling-SlurmSettings-Dns-UseEc2Hostnames) parameter to `true`. However, Slurm node names continue to use the default AWS ParallelCluster format.
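This setting lives under `SlurmSettings`/`Dns` in the cluster configuration, as in this minimal fragment:

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    Dns:
      UseEc2Hostnames: true
```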