


# Use the Nvidia RAPIDS Accelerator for Apache Spark
<a name="emr-spark-rapids"></a>

With Amazon EMR release 6.2.0 and later, you can use the Nvidia [RAPIDS Accelerator for Apache Spark](https://docs.nvidia.com/spark-rapids/user-guide/latest/overview.html) plugin to accelerate Spark with EC2 graphics processing unit (GPU) instance types. The RAPIDS Accelerator GPU-accelerates Apache Spark 3.0 data science pipelines without code changes, and speeds up data processing and model training while substantially lowering infrastructure costs.

The following sections guide you through setting up your EMR cluster to use the Spark-RAPIDS plugin for Spark.

## Choose an instance type
<a name="emr-spark-rapids-instancetypes"></a>

To use the Spark-RAPIDS plugin for Spark, the core and task instance groups must use EC2 GPU instance types that meet the Spark-RAPIDS [hardware requirements](https://nvidia.github.io/spark-rapids/). To view a complete list of the GPU instance types that Amazon EMR supports, see [Supported instance types](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-supported-instance-types.html) in the *Amazon EMR Management Guide*. The instance type for the primary instance group can be either a GPU or non-GPU type, but ARM instance types aren't supported.

## Set up the application configurations for your cluster
<a name="emr-spark-rapids-appconfig"></a>

**1. Enable Amazon EMR to install the plugin on your new cluster**

To install the plugin, supply the following configuration when you create your cluster:

```
{
	"Classification":"spark",
	"Properties":{
		"enableSparkRapids":"true"
	}
}
```

**2. Configure YARN to use GPU**

For details on how to use GPU on YARN, see [Using GPUs on YARN](https://hadoop.apache.org/docs/r3.2.1/hadoop-yarn/hadoop-yarn-site/UsingGpus.html) in the Apache Hadoop documentation. The following examples show sample YARN configurations for Amazon EMR 6.x and 7.x releases:

------
#### [ Amazon EMR 7.x ]

**Sample YARN configuration for Amazon EMR 7.x**

```
{
    "Classification":"yarn-site",
    "Properties":{
        "yarn.nodemanager.resource-plugins":"yarn.io/gpu",
        "yarn.resource-types":"yarn.io/gpu",
        "yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices":"auto",
        "yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables":"/usr/bin",
        "yarn.nodemanager.linux-container-executor.cgroups.mount":"true",
        "yarn.nodemanager.linux-container-executor.cgroups.mount-path":"/spark-rapids-cgroup",
        "yarn.nodemanager.linux-container-executor.cgroups.hierarchy":"yarn",
        "yarn.nodemanager.container-executor.class":"org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor"
    }
},{
    "Classification":"container-executor",
    "Properties":{
        
    },
    "Configurations":[
        {
            "Classification":"gpu",
            "Properties":{
                "module.enabled":"true"
            }
        },
        {
            "Classification":"cgroups",
            "Properties":{
                "root":"/spark-rapids-cgroup",
                "yarn-hierarchy":"yarn"
            }
        }
    ]
}
```

------
#### [ Amazon EMR 6.x ]

**Sample YARN configuration for Amazon EMR 6.x**

```
{
    "Classification":"yarn-site",
    "Properties":{
        "yarn.nodemanager.resource-plugins":"yarn.io/gpu",
        "yarn.resource-types":"yarn.io/gpu",
        "yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices":"auto",
        "yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables":"/usr/bin",
        "yarn.nodemanager.linux-container-executor.cgroups.mount":"true",
        "yarn.nodemanager.linux-container-executor.cgroups.mount-path":"/sys/fs/cgroup",
        "yarn.nodemanager.linux-container-executor.cgroups.hierarchy":"yarn",
        "yarn.nodemanager.container-executor.class":"org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor"
    }
},{
    "Classification":"container-executor",
    "Properties":{
        
    },
    "Configurations":[
        {
            "Classification":"gpu",
            "Properties":{
                "module.enabled":"true"
            }
        },
        {
            "Classification":"cgroups",
            "Properties":{
                "root":"/sys/fs/cgroup",
                "yarn-hierarchy":"yarn"
            }
        }
    ]
}
```

------

**3. Configure Spark to use RAPIDS**

Here are the required configurations to enable Spark to use the RAPIDS plugin:

```
{
	"Classification":"spark-defaults",
	"Properties":{
		"spark.plugins":"com.nvidia.spark.SQLPlugin",
		"spark.executor.resource.gpu.discoveryScript":"/usr/lib/spark/scripts/gpu/getGpusResources.sh",
		"spark.executor.extraLibraryPath":"/usr/local/cuda/targets/x86_64-linux/lib:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/compat/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/docker/usr/lib/hadoop/lib/native:/docker/usr/lib/hadoop-lzo/lib/native"
	}
}
```

The [XGBoost4J-Spark library](https://xgboost.readthedocs.io/en/latest/jvm/xgboost4j_spark_tutorial.html) in the XGBoost documentation is also available when the Spark RAPIDS plugin is activated on your cluster. You can use the following configuration to integrate XGBoost with your Spark job:

```
{
	"Classification":"spark-defaults",
	"Properties":{
		"spark.submit.pyFiles":"/usr/lib/spark/jars/xgboost4j-spark_3.0-1.4.2-0.3.0.jar"
	}
}
```

For additional Spark configurations that you can use to tune your GPU-accelerated EMR cluster, see the [RAPIDS Accelerator for Apache Spark tuning guide](https://docs.nvidia.com/spark-rapids/user-guide/latest/tuning-guide.html) in the Nvidia.github.io documentation.
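As a worked example of how the sample `spark-defaults` values used throughout this page fit together (our own arithmetic, not part of the tuning guide): with 2 executor cores, 1 CPU per task, and half a GPU per task, each executor runs two concurrent tasks that share its single GPU.

```python
# Values from the sample spark-defaults configuration on this page.
executor_cores = 2     # spark.executor.cores
task_cpus = 1          # spark.task.cpus
gpu_per_executor = 1   # spark.executor.resource.gpu.amount
gpu_per_task = 0.5     # spark.task.resource.gpu.amount

# CPU slots determine task concurrency per executor.
tasks_per_executor = executor_cores // task_cpus

# The GPU must not be oversubscribed: total fractional GPU demand of
# concurrent tasks has to fit within the executor's GPU allocation.
assert tasks_per_executor * gpu_per_task <= gpu_per_executor

print(tasks_per_executor)  # 2 concurrent tasks share each executor's GPU
```

Raising `spark.task.resource.gpu.amount` above 0.5 here would cut task concurrency, since fewer tasks could fit on the single GPU at once.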

**4. Configure the YARN capacity scheduler**

The `DominantResourceCalculator` must be configured to enable GPU scheduling and isolation. For more information, see [Using GPUs on YARN](https://hadoop.apache.org/docs/r3.2.1/hadoop-yarn/hadoop-yarn-site/UsingGpus.html) in the Apache Hadoop documentation.

```
{
	"Classification":"capacity-scheduler",
	"Properties":{
		"yarn.scheduler.capacity.resource-calculator":"org.apache.hadoop.yarn.util.resource.DominantResourceCalculator"
	}
}
```
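To see why this setting matters, here is a small sketch (our own illustration, not YARN source code) of dominant-resource accounting. The default calculator considers memory only, so a container that consumes the node's entire GPU but little memory looks almost free; the dominant-resource calculator scores a request by its largest share across all resource types.

```python
# A node's total capacity across three resource types.
cluster = {"memory_mb": 64 * 1024, "vcores": 16, "gpus": 1}

def dominant_share(request, capacity):
    """Largest fractional share the request takes of any resource type."""
    return max(request[r] / capacity[r] for r in request)

# A typical GPU container: modest memory and CPU, but the whole GPU.
gpu_container = {"memory_mb": 8 * 1024, "vcores": 2, "gpus": 1}

# Memory-only accounting would call this container 12.5% of the node;
# dominant-resource accounting correctly sees it consumes 100% of the GPUs.
print(dominant_share(gpu_container, cluster))  # 1.0
```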

**5. Create a JSON file with your configurations**

You can create a JSON file that contains the configurations to use the RAPIDS plugin for your Spark cluster. You supply this file later when you launch your cluster.

You can store the file locally or on S3. For more information about how to supply application configurations for your cluster, see [Configure applications](emr-configure-apps.md).
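Before launching, it can save a failed cluster start to sanity-check the file. The helper below is hypothetical (not part of any AWS tooling): it confirms the file parses as JSON and that no classification is duplicated at the top level.

```python
import json
import tempfile

def check_configurations(path):
    """Hypothetical pre-flight check for a my-configurations.json file:
    confirm it parses and no top-level classification appears twice."""
    with open(path) as f:
        configs = json.load(f)
    names = [c["Classification"] for c in configs]
    assert len(names) == len(set(names)), "duplicate classification"
    return names

# Try it on a minimal stand-in for my-configurations.json:
sample = [{"Classification": "spark", "Properties": {"enableSparkRapids": "true"}}]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
print(check_configurations(f.name))  # ['spark']
```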

Use the following sample files as templates to build your own configurations.

------
#### [ Amazon EMR 7.x ]

**Sample `my-configurations.json` file for Amazon EMR 7.x**

```
[
    {
        "Classification":"spark",
        "Properties":{
            "enableSparkRapids":"true"
        }
    },
    {
        "Classification":"yarn-site",
        "Properties":{
            "yarn.nodemanager.resource-plugins":"yarn.io/gpu",
            "yarn.resource-types":"yarn.io/gpu",
            "yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices":"auto",
            "yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables":"/usr/bin",
            "yarn.nodemanager.linux-container-executor.cgroups.mount":"true",
            "yarn.nodemanager.linux-container-executor.cgroups.mount-path":"/spark-rapids-cgroup",
            "yarn.nodemanager.linux-container-executor.cgroups.hierarchy":"yarn",
            "yarn.nodemanager.container-executor.class":"org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor"
        }
    },
    {
        "Classification":"container-executor",
        "Properties":{
            
        },
        "Configurations":[
            {
                "Classification":"gpu",
                "Properties":{
                    "module.enabled":"true"
                }
            },
            {
                "Classification":"cgroups",
                "Properties":{
                    "root":"/spark-rapids-cgroup",
                    "yarn-hierarchy":"yarn"
                }
            }
        ]
    },
    {
        "Classification":"spark-defaults",
        "Properties":{
            "spark.plugins":"com.nvidia.spark.SQLPlugin",
            "spark.executor.resource.gpu.discoveryScript":"/usr/lib/spark/scripts/gpu/getGpusResources.sh",
            "spark.executor.extraLibraryPath":"/usr/local/cuda/targets/x86_64-linux/lib:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/compat/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/docker/usr/lib/hadoop/lib/native:/docker/usr/lib/hadoop-lzo/lib/native",
            "spark.submit.pyFiles":"/usr/lib/spark/jars/xgboost4j-spark_3.0-1.4.2-0.3.0.jar",
            "spark.rapids.sql.concurrentGpuTasks":"1",
            "spark.executor.resource.gpu.amount":"1",
            "spark.executor.cores":"2",
            "spark.task.cpus":"1",
            "spark.task.resource.gpu.amount":"0.5",
            "spark.rapids.memory.pinnedPool.size":"0",
            "spark.executor.memoryOverhead":"2G",
            "spark.locality.wait":"0s",
            "spark.sql.shuffle.partitions":"200",
            "spark.sql.files.maxPartitionBytes":"512m"
        }
    },
    {
        "Classification":"capacity-scheduler",
        "Properties":{
            "yarn.scheduler.capacity.resource-calculator":"org.apache.hadoop.yarn.util.resource.DominantResourceCalculator"
        }
    }
]
```

------
#### [ Amazon EMR 6.x ]

**Sample `my-configurations.json` file for Amazon EMR 6.x**

```
[
    {
        "Classification":"spark",
        "Properties":{
            "enableSparkRapids":"true"
        }
    },
    {
        "Classification":"yarn-site",
        "Properties":{
            "yarn.nodemanager.resource-plugins":"yarn.io/gpu",
            "yarn.resource-types":"yarn.io/gpu",
            "yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices":"auto",
            "yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables":"/usr/bin",
            "yarn.nodemanager.linux-container-executor.cgroups.mount":"true",
            "yarn.nodemanager.linux-container-executor.cgroups.mount-path":"/sys/fs/cgroup",
            "yarn.nodemanager.linux-container-executor.cgroups.hierarchy":"yarn",
            "yarn.nodemanager.container-executor.class":"org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor"
        }
    },
    {
        "Classification":"container-executor",
        "Properties":{
            
        },
        "Configurations":[
            {
                "Classification":"gpu",
                "Properties":{
                    "module.enabled":"true"
                }
            },
            {
                "Classification":"cgroups",
                "Properties":{
                    "root":"/sys/fs/cgroup",
                    "yarn-hierarchy":"yarn"
                }
            }
        ]
    },
    {
        "Classification":"spark-defaults",
        "Properties":{
            "spark.plugins":"com.nvidia.spark.SQLPlugin",
            "spark.executor.resource.gpu.discoveryScript":"/usr/lib/spark/scripts/gpu/getGpusResources.sh",
            "spark.executor.extraLibraryPath":"/usr/local/cuda/targets/x86_64-linux/lib:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/compat/lib:/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/docker/usr/lib/hadoop/lib/native:/docker/usr/lib/hadoop-lzo/lib/native",
            "spark.submit.pyFiles":"/usr/lib/spark/jars/xgboost4j-spark_3.0-1.4.2-0.3.0.jar",
            "spark.rapids.sql.concurrentGpuTasks":"1",
            "spark.executor.resource.gpu.amount":"1",
            "spark.executor.cores":"2",
            "spark.task.cpus":"1",
            "spark.task.resource.gpu.amount":"0.5",
            "spark.rapids.memory.pinnedPool.size":"0",
            "spark.executor.memoryOverhead":"2G",
            "spark.locality.wait":"0s",
            "spark.sql.shuffle.partitions":"200",
            "spark.sql.files.maxPartitionBytes":"512m"
        }
    },
    {
        "Classification":"capacity-scheduler",
        "Properties":{
            "yarn.scheduler.capacity.resource-calculator":"org.apache.hadoop.yarn.util.resource.DominantResourceCalculator"
        }
    }
]
```

------

## Add a bootstrap action for your cluster
<a name="emr-spark-rapids-bootstrap"></a>

For more information about how to supply a bootstrap action script when you create your cluster, see [Bootstrap action basics](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html#bootstrapUses) in the *Amazon EMR Management Guide*.

The following sample scripts show how to make a bootstrap action file for Amazon EMR 6.x and 7.x:

------
#### [ Amazon EMR 7.x ]

**Sample `my-bootstrap-action.sh` file for Amazon EMR 7.x**

To use YARN to manage GPU resources with Amazon EMR 7.x releases, you must manually mount CGroup v1 on your cluster. You can do this with a bootstrap action script, as shown in this example.

```
#!/bin/bash
set -ex
 
sudo mkdir -p /spark-rapids-cgroup/devices
sudo mount -t cgroup -o devices cgroupv1-devices /spark-rapids-cgroup/devices
sudo chmod a+rwx -R /spark-rapids-cgroup
```

------
#### [ Amazon EMR 6.x ]

**Sample `my-bootstrap-action.sh` file for Amazon EMR 6.x**

For Amazon EMR 6.x releases, you must open CGroup permissions to YARN on your cluster. You can do this with a bootstrap action script, as shown in this example.

```
#!/bin/bash
set -ex
 
sudo chmod a+rwx -R /sys/fs/cgroup/cpu,cpuacct
sudo chmod a+rwx -R /sys/fs/cgroup/devices
```

------

## Launch your cluster
<a name="emr-spark-rapids-launchcluster"></a>

The final step is to launch your cluster with the cluster configurations described above. Here's an example command to launch a cluster through the Amazon EMR CLI:

```
aws emr create-cluster \
--release-label emr-7.12.0 \
--applications Name=Hadoop Name=Spark \
--service-role EMR_DefaultRole_V2 \
--ec2-attributes KeyName=my-key-pair,InstanceProfile=EMR_EC2_DefaultRole \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.4xlarge \
                  InstanceGroupType=CORE,InstanceCount=1,InstanceType=g4dn.2xlarge \
                  InstanceGroupType=TASK,InstanceCount=1,InstanceType=g4dn.2xlarge \
--configurations file:///my-configurations.json \
--bootstrap-actions Name='My Spark Rapids Bootstrap action',Path=s3://amzn-s3-demo-bucket/my-bootstrap-action.sh
```
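If you launch clusters programmatically, the CLI command above can be expressed with boto3's EMR `run_job_flow` API. This is a sketch: the key pair, roles, instance types, and bucket mirror the placeholders in the CLI example and must be replaced with your own values, and the API call itself is left commented out so the structure can be inspected without AWS credentials.

```python
# Programmatic equivalent (a sketch) of the create-cluster CLI example.
job_flow = {
    "Name": "spark-rapids-cluster",  # hypothetical cluster name
    "ReleaseLabel": "emr-7.12.0",
    "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}],
    "ServiceRole": "EMR_DefaultRole_V2",
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "Instances": {
        "Ec2KeyName": "my-key-pair",
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceCount": 1, "InstanceType": "m4.4xlarge"},
            {"InstanceRole": "CORE", "InstanceCount": 1, "InstanceType": "g4dn.2xlarge"},
            {"InstanceRole": "TASK", "InstanceCount": 1, "InstanceType": "g4dn.2xlarge"},
        ],
    },
    # In practice, load the full contents of my-configurations.json here:
    "Configurations": [
        {"Classification": "spark", "Properties": {"enableSparkRapids": "true"}}
    ],
    "BootstrapActions": [
        {
            "Name": "My Spark Rapids Bootstrap action",
            "ScriptBootstrapAction": {
                "Path": "s3://amzn-s3-demo-bucket/my-bootstrap-action.sh"
            },
        }
    ],
}

# import boto3
# boto3.client("emr").run_job_flow(**job_flow)
print(job_flow["ReleaseLabel"])
```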