

This is a machine-translated version of the English original. If there is any ambiguity or inconsistency between this content and the English version, the English version takes precedence.

# Detecting faces in an image
<a name="faces-detect-images"></a>

Amazon Rekognition Image provides the [DetectFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectFaces.html) operation, which looks for key facial features such as the eyes, nose, and mouth to detect faces in an input image. Amazon Rekognition Image detects the 100 largest faces in an image.

You can provide the input image as an image byte array (Base64-encoded image bytes), or you can specify an Amazon S3 object. In this procedure, you upload an image (JPEG or PNG) to your S3 bucket and specify the object key name.
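The two input forms can be sketched as follows. This is a minimal illustration in Python; the `detect_faces` call shown in the comment assumes a configured boto3 Rekognition client named `client`, and the bucket and file names are placeholders:

```python
# Sketch: DetectFaces accepts either raw image bytes or an S3 object
# reference in its Image parameter.

def image_param_from_bytes(image_bytes: bytes) -> dict:
    # The AWS SDKs Base64-encode the bytes for you; pass them raw.
    return {"Bytes": image_bytes}

def image_param_from_s3(bucket: str, key: str) -> dict:
    return {"S3Object": {"Bucket": bucket, "Name": key}}

# For example, with a boto3 client:
#   client.detect_faces(Image=image_param_from_s3("amzn-s3-demo-bucket", "input.jpg"))
```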

**To detect faces in an image**

1. If you haven't already:

   1. Create or update a user with `AmazonRekognitionFullAccess` and `AmazonS3ReadOnlyAccess` permissions. For more information, see [Step 1: Set up an AWS account and create a user](setting-up.md#setting-up-iam).

   1. Install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md).

1. Upload an image that contains one or more faces to your S3 bucket.

   For instructions, see [Uploading objects into Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UploadingObjectsintoAmazonS3.html) in the *Amazon Simple Storage Service User Guide*.

1. Use the following examples to call `DetectFaces`.

------
#### [ Java ]

   This example displays the estimated age range for detected faces and lists the JSON for all detected facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket used to store the image.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   package aws.example.rekognition.image;
   
   import com.amazonaws.services.rekognition.AmazonRekognition;
   import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
   import com.amazonaws.services.rekognition.model.AmazonRekognitionException;
   import com.amazonaws.services.rekognition.model.Image;
   import com.amazonaws.services.rekognition.model.S3Object;
   import com.amazonaws.services.rekognition.model.AgeRange;
   import com.amazonaws.services.rekognition.model.Attribute;
   import com.amazonaws.services.rekognition.model.DetectFacesRequest;
   import com.amazonaws.services.rekognition.model.DetectFacesResult;
   import com.amazonaws.services.rekognition.model.FaceDetail;
   import com.fasterxml.jackson.databind.ObjectMapper;
   import java.util.List;
   
   
   public class DetectFaces {
      
      
      public static void main(String[] args) throws Exception {
   
         String photo = "input.jpg";
      String bucket = "amzn-s3-demo-bucket";
   
         AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
   
   
         DetectFacesRequest request = new DetectFacesRequest()
            .withImage(new Image()
               .withS3Object(new S3Object()
                  .withName(photo)
                  .withBucket(bucket)))
            .withAttributes(Attribute.ALL);
         // Replace Attribute.ALL with Attribute.DEFAULT to get default values.
   
         try {
            DetectFacesResult result = rekognitionClient.detectFaces(request);
            List < FaceDetail > faceDetails = result.getFaceDetails();
   
            for (FaceDetail face: faceDetails) {
               if (request.getAttributes().contains("ALL")) {
                  AgeRange ageRange = face.getAgeRange();
                  System.out.println("The detected face is estimated to be between "
                     + ageRange.getLow().toString() + " and " + ageRange.getHigh().toString()
                     + " years old.");
                  System.out.println("Here's the complete set of attributes:");
               } else { // non-default attributes have null values.
                  System.out.println("Here's the default set of attributes:");
               }
   
               ObjectMapper objectMapper = new ObjectMapper();
               System.out.println(objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(face));
            }
   
         } catch (AmazonRekognitionException e) {
            e.printStackTrace();
         }
   
      }
   
   }
   ```

------
#### [ Java V2 ]

   This code is taken from the AWS Documentation SDK examples GitHub repository. See the complete example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/DetectFaces.java).

   ```
   import java.util.List;
   
   //snippet-start:[rekognition.java2.detect_labels.import]
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.S3Object;
   import software.amazon.awssdk.services.rekognition.model.DetectFacesRequest;
   import software.amazon.awssdk.services.rekognition.model.DetectFacesResponse;
   import software.amazon.awssdk.services.rekognition.model.Image;
   import software.amazon.awssdk.services.rekognition.model.Attribute;
   import software.amazon.awssdk.services.rekognition.model.FaceDetail;
   import software.amazon.awssdk.services.rekognition.model.AgeRange;
   
   //snippet-end:[rekognition.java2.detect_labels.import]
   
   public class DetectFaces {
   
       public static void main(String[] args) {
           final String usage = "\n" +
               "Usage: " +
               "   <bucket> <image>\n\n" +
               "Where:\n" +
            "   bucket - The name of the Amazon S3 bucket that contains the image (for example, amzn-s3-demo-bucket).\n" +
               "   image - The name of the image located in the Amazon S3 bucket (for example, Lake.png). \n\n";
   
           if (args.length != 2) {
               System.out.println(usage);
               System.exit(1);
           }
   
           String bucket = args[0];
           String image = args[1];
           Region region = Region.US_WEST_2;
           RekognitionClient rekClient = RekognitionClient.builder()
               .region(region)
               .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
               .build();
   
        detectFacesinImage(rekClient, bucket, image);
        rekClient.close();
    }

    // snippet-start:[rekognition.java2.detect_labels_s3.main]
    public static void detectFacesinImage(RekognitionClient rekClient, String bucket, String image) {
   
           try {
               S3Object s3Object = S3Object.builder()
                   .bucket(bucket)
                   .name(image)
                   .build() ;
   
               Image myImage = Image.builder()
                   .s3Object(s3Object)
                   .build();
   
               DetectFacesRequest facesRequest = DetectFacesRequest.builder()
                       .attributes(Attribute.ALL)
                       .image(myImage)
                       .build();
   
                   DetectFacesResponse facesResponse = rekClient.detectFaces(facesRequest);
                   List<FaceDetail> faceDetails = facesResponse.faceDetails();
                   for (FaceDetail face : faceDetails) {
                       AgeRange ageRange = face.ageRange();
                       System.out.println("The detected face is estimated to be between "
                                   + ageRange.low().toString() + " and " + ageRange.high().toString()
                                   + " years old.");
   
                       System.out.println("There is a smile : "+face.smile().value().toString());
                   }
   
           } catch (RekognitionException e) {
               System.out.println(e.getMessage());
               System.exit(1);
           }
       }
    // snippet-end:[rekognition.java2.detect_labels_s3.main]
   }
   ```

------
#### [ AWS CLI ]

   This example displays the JSON output from the `detect-faces` AWS CLI operation. Replace `image-name` with the name of an image file. Replace `amzn-s3-demo-bucket` with the name of the Amazon S3 bucket that contains the image file.

   ```
   aws rekognition detect-faces --image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"image-name"}}' \
                                --attributes "ALL" --profile profile-name --region region-name
   ```

   If you are accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with a backslash (that is, \") to address any parser errors you may encounter. For an example, see the following:

   ```
   aws rekognition detect-faces --image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" --attributes "ALL" --profile profile-name --region region-name
   ```

------
#### [ Python ]

   This example displays the estimated age range for detected faces and lists the JSON for all detected facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket used to store the image. Replace the value of `profile_name` in the line that creates the Rekognition session with the name of your developer profile.

   ```
   import boto3
   import json
   
   def detect_faces(photo, bucket, region):
       
       session = boto3.Session(profile_name='profile-name',
                               region_name=region)
       client = session.client('rekognition', region_name=region)
   
       response = client.detect_faces(Image={'S3Object':{'Bucket':bucket,'Name':photo}},
                                      Attributes=['ALL'])
   
       print('Detected faces for ' + photo)
       for faceDetail in response['FaceDetails']:
           print('The detected face is between ' + str(faceDetail['AgeRange']['Low'])
                 + ' and ' + str(faceDetail['AgeRange']['High']) + ' years old')
   
           print('Here are the other attributes:')
           print(json.dumps(faceDetail, indent=4, sort_keys=True))
   
           # Access predictions for individual face details and print them
           print("Gender: " + str(faceDetail['Gender']))
           print("Smile: " + str(faceDetail['Smile']))
           print("Eyeglasses: " + str(faceDetail['Eyeglasses']))
           print("Face Occluded: " + str(faceDetail['FaceOccluded']))
           print("Emotions: " + str(faceDetail['Emotions'][0]))
   
       return len(response['FaceDetails'])
       
   def main():
       photo='photo'
       bucket='amzn-s3-demo-bucket'
       region='region'
       face_count=detect_faces(photo, bucket, region)
       print("Faces detected: " + str(face_count))
   
   if __name__ == "__main__":
       main()
   ```

------
#### [ .NET ]

   This example displays the estimated age range for detected faces and lists the JSON for all detected facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket used to store the image.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   using System;
   using System.Collections.Generic;
   using Amazon.Rekognition;
   using Amazon.Rekognition.Model;
   
   public class DetectFaces
   {
       public static void Example()
       {
           String photo = "input.jpg";
           String bucket = "amzn-s3-demo-bucket";
   
           AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
   
           DetectFacesRequest detectFacesRequest = new DetectFacesRequest()
           {
               Image = new Image()
               {
                   S3Object = new S3Object()
                   {
                       Name = photo,
                       Bucket = bucket
                   },
               },
               // Attributes can be "ALL" or "DEFAULT". 
               // "DEFAULT": BoundingBox, Confidence, Landmarks, Pose, and Quality.
               // "ALL": See https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Rekognition/TFaceDetail.html
               Attributes = new List<String>() { "ALL" }
           };
   
           try
           {
               DetectFacesResponse detectFacesResponse = rekognitionClient.DetectFaces(detectFacesRequest);
               bool hasAll = detectFacesRequest.Attributes.Contains("ALL");
               foreach(FaceDetail face in detectFacesResponse.FaceDetails)
               {
                   Console.WriteLine("BoundingBox: top={0} left={1} width={2} height={3}", face.BoundingBox.Left,
                       face.BoundingBox.Top, face.BoundingBox.Width, face.BoundingBox.Height);
                   Console.WriteLine("Confidence: {0}\nLandmarks: {1}\nPose: pitch={2} roll={3} yaw={4}\nQuality: {5}",
                       face.Confidence, face.Landmarks.Count, face.Pose.Pitch,
                       face.Pose.Roll, face.Pose.Yaw, face.Quality);
                   if (hasAll)
                       Console.WriteLine("The detected face is estimated to be between " +
                           face.AgeRange.Low + " and " + face.AgeRange.High + " years old.");
               }
           }
           catch (Exception e)
           {
               Console.WriteLine(e.Message);
           }
       }
   }
   ```

------
#### [ Ruby ]

   This example displays the estimated age range for detected faces and lists various facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket used to store the image.

   ```
      # Add to your Gemfile
      # gem 'aws-sdk-rekognition'
      require 'aws-sdk-rekognition'
      credentials = Aws::Credentials.new(
         ENV['AWS_ACCESS_KEY_ID'],
         ENV['AWS_SECRET_ACCESS_KEY']
      )
       bucket = 'amzn-s3-demo-bucket' # the bucket name, without s3://
       photo  = 'input.jpg' # the name of the file
      client   = Aws::Rekognition::Client.new credentials: credentials
      attrs = {
        image: {
          s3_object: {
            bucket: bucket,
            name: photo
          },
        },
        attributes: ['ALL']
      }
      response = client.detect_faces attrs
      puts "Detected faces for: #{photo}"
      response.face_details.each do |face_detail|
        low  = face_detail.age_range.low
        high = face_detail.age_range.high
        puts "The detected face is between: #{low} and #{high} years old"
        puts "All other attributes:"
        puts "  bounding_box.width:     #{face_detail.bounding_box.width}"
        puts "  bounding_box.height:    #{face_detail.bounding_box.height}"
        puts "  bounding_box.left:      #{face_detail.bounding_box.left}"
        puts "  bounding_box.top:       #{face_detail.bounding_box.top}"
        puts "  age.range.low:          #{face_detail.age_range.low}"
        puts "  age.range.high:         #{face_detail.age_range.high}"
        puts "  smile.value:            #{face_detail.smile.value}"
        puts "  smile.confidence:       #{face_detail.smile.confidence}"
        puts "  eyeglasses.value:       #{face_detail.eyeglasses.value}"
        puts "  eyeglasses.confidence:  #{face_detail.eyeglasses.confidence}"
        puts "  sunglasses.value:       #{face_detail.sunglasses.value}"
        puts "  sunglasses.confidence:  #{face_detail.sunglasses.confidence}"
        puts "  gender.value:           #{face_detail.gender.value}"
        puts "  gender.confidence:      #{face_detail.gender.confidence}"
        puts "  beard.value:            #{face_detail.beard.value}"
        puts "  beard.confidence:       #{face_detail.beard.confidence}"
        puts "  mustache.value:         #{face_detail.mustache.value}"
        puts "  mustache.confidence:    #{face_detail.mustache.confidence}"
        puts "  eyes_open.value:        #{face_detail.eyes_open.value}"
        puts "  eyes_open.confidence:   #{face_detail.eyes_open.confidence}"
         puts "  mouth_open.value:       #{face_detail.mouth_open.value}"
         puts "  mouth_open.confidence:  #{face_detail.mouth_open.confidence}"
        puts "  emotions[0].type:       #{face_detail.emotions[0].type}"
        puts "  emotions[0].confidence: #{face_detail.emotions[0].confidence}"
        puts "  landmarks[0].type:      #{face_detail.landmarks[0].type}"
        puts "  landmarks[0].x:         #{face_detail.landmarks[0].x}"
        puts "  landmarks[0].y:         #{face_detail.landmarks[0].y}"
        puts "  pose.roll:              #{face_detail.pose.roll}"
        puts "  pose.yaw:               #{face_detail.pose.yaw}"
        puts "  pose.pitch:             #{face_detail.pose.pitch}"
        puts "  quality.brightness:     #{face_detail.quality.brightness}"
        puts "  quality.sharpness:      #{face_detail.quality.sharpness}"
        puts "  confidence:             #{face_detail.confidence}"
        puts "------------"
        puts ""
      end
   ```

------
#### [ Node.js ]

   This example displays the estimated age range for detected faces and lists various facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket used to store the image.

   Replace the value of `profile_name` in the line that creates the Rekognition session with the name of your developer profile.

   If you are using the TypeScript definitions, you may need to use `import AWS from 'aws-sdk'` instead of `const AWS = require('aws-sdk')` in order to run the program with Node.js. You can consult the [AWS SDK for JavaScript](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/) for more details. Depending on how you have your configurations set up, you may also need to specify your Region with `AWS.config.update({region:region});`.

   ```
   
   
   // Load the SDK
   var AWS = require('aws-sdk');
   const bucket = 'amzn-s3-demo-bucket' // the bucket name, without s3://
   const photo  = 'photo-name' // the name of the file
   
   var credentials = new AWS.SharedIniFileCredentials({profile: 'profile-name'});
   AWS.config.credentials = credentials;
   AWS.config.update({region:'region-name'});
   
   const client = new AWS.Rekognition();
   const params = {
     Image: {
       S3Object: {
         Bucket: bucket,
         Name: photo
       },
     },
     Attributes: ['ALL']
   }
   
   client.detectFaces(params, function(err, response) {
       if (err) {
         console.log(err, err.stack); // an error occurred
       } else {
         console.log(`Detected faces for: ${photo}`)
         response.FaceDetails.forEach(data => {
           let low  = data.AgeRange.Low
           let high = data.AgeRange.High
           console.log(`The detected face is between: ${low} and ${high} years old`)
           console.log("All other attributes:")
           console.log(`  BoundingBox.Width:      ${data.BoundingBox.Width}`)
           console.log(`  BoundingBox.Height:     ${data.BoundingBox.Height}`)
           console.log(`  BoundingBox.Left:       ${data.BoundingBox.Left}`)
           console.log(`  BoundingBox.Top:        ${data.BoundingBox.Top}`)
           console.log(`  Age.Range.Low:          ${data.AgeRange.Low}`)
           console.log(`  Age.Range.High:         ${data.AgeRange.High}`)
           console.log(`  Smile.Value:            ${data.Smile.Value}`)
           console.log(`  Smile.Confidence:       ${data.Smile.Confidence}`)
           console.log(`  Eyeglasses.Value:       ${data.Eyeglasses.Value}`)
           console.log(`  Eyeglasses.Confidence:  ${data.Eyeglasses.Confidence}`)
           console.log(`  Sunglasses.Value:       ${data.Sunglasses.Value}`)
           console.log(`  Sunglasses.Confidence:  ${data.Sunglasses.Confidence}`)
           console.log(`  Gender.Value:           ${data.Gender.Value}`)
           console.log(`  Gender.Confidence:      ${data.Gender.Confidence}`)
           console.log(`  Beard.Value:            ${data.Beard.Value}`)
           console.log(`  Beard.Confidence:       ${data.Beard.Confidence}`)
           console.log(`  Mustache.Value:         ${data.Mustache.Value}`)
           console.log(`  Mustache.Confidence:    ${data.Mustache.Confidence}`)
           console.log(`  EyesOpen.Value:         ${data.EyesOpen.Value}`)
           console.log(`  EyesOpen.Confidence:    ${data.EyesOpen.Confidence}`)
           console.log(`  MouthOpen.Value:        ${data.MouthOpen.Value}`)
           console.log(`  MouthOpen.Confidence:   ${data.MouthOpen.Confidence}`)
           console.log(`  Emotions[0].Type:       ${data.Emotions[0].Type}`)
           console.log(`  Emotions[0].Confidence: ${data.Emotions[0].Confidence}`)
           console.log(`  Landmarks[0].Type:      ${data.Landmarks[0].Type}`)
           console.log(`  Landmarks[0].X:         ${data.Landmarks[0].X}`)
           console.log(`  Landmarks[0].Y:         ${data.Landmarks[0].Y}`)
           console.log(`  Pose.Roll:              ${data.Pose.Roll}`)
           console.log(`  Pose.Yaw:               ${data.Pose.Yaw}`)
           console.log(`  Pose.Pitch:             ${data.Pose.Pitch}`)
           console.log(`  Quality.Brightness:     ${data.Quality.Brightness}`)
           console.log(`  Quality.Sharpness:      ${data.Quality.Sharpness}`)
           console.log(`  Confidence:             ${data.Confidence}`)
           console.log("------------")
           console.log("")
         }) // for response.faceDetails
       } // if
     });
   ```

------

## DetectFaces operation request
<a name="detectfaces-request"></a>

The input to `DetectFaces` is an image. In this example, the image is loaded from an Amazon S3 bucket. The `Attributes` parameter specifies that all facial attributes should be returned. For more information, see [Working with images](images.md).

```
{
    "Image": {
        "S3Object": {
            "Bucket": "amzn-s3-demo-bucket",
            "Name": "input.jpg"
        }
    },
    "Attributes": [
        "ALL"
    ]
}
```

## DetectFaces operation response
<a name="detectfaces-response"></a>

`DetectFaces` returns the following information for each detected face:


+ **Bounding box**: The coordinates of the bounding box that surrounds the face.
+ **Confidence**: The level of confidence that the bounding box contains a face.
+ **Facial landmarks**: An array of facial landmarks. For each landmark (such as the left eye, right eye, and mouth), the response provides the x and y coordinates.
+ **Facial attributes**: A set of facial attributes, such as whether the face is occluded, returned as a `FaceDetail` object. The set includes: age range, beard, emotions, eye direction, eyeglasses, eyes open, face occluded, gender, mouth open, mustache, smile, and sunglasses. For each such attribute, the response provides a value. The value can be of different types, such as a Boolean (whether a person is wearing sunglasses), a string (whether the person is male or female), or an angle value (the pitch and yaw of the eye direction). Also, for most attributes, the response provides a confidence in the detected value of the attribute. Note that while the face occluded and eye direction attributes are supported when using `DetectFaces`, they aren't supported when analyzing videos with `StartFaceDetection` and `GetFaceDetection`.
+ **Quality**: Describes the brightness and sharpness of the face. For information about ensuring the best possible face detection, see [Recommendations for facial comparison input images](recommendations-facial-input-images.md).
+ **Pose**: Describes the rotation of the face inside the image.

The request can indicate the array of facial attributes that you want returned. The `DEFAULT` subset of facial attributes (`BoundingBox`, `Confidence`, `Pose`, `Quality`, and `Landmarks`) is always returned. You can request the return of specific facial attributes, in addition to the default list, by using `["DEFAULT", "FACE_OCCLUDED", "EYE_DIRECTION"]`, or just one attribute, such as `["FACE_OCCLUDED"]`. You can request all facial attributes by using `["ALL"]`. Requesting more attributes may increase the response time.
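Because non-default attributes are present in the response only when you request them, response-handling code can read them defensively. A minimal Python sketch; the face record is a trimmed, illustrative fragment shaped like a `DetectFaces` response:

```python
# Sketch: fields in the DEFAULT subset (such as Confidence) are always
# returned, but attributes such as AgeRange appear only when requested,
# so read them with .get() rather than direct indexing.

def describe_face(face: dict) -> str:
    parts = ["confidence " + str(face["Confidence"])]  # always present
    age = face.get("AgeRange")                         # present only if requested
    if age is not None:
        parts.append("age {}-{}".format(age["Low"], age["High"]))
    return ", ".join(parts)

face = {"Confidence": 99.99, "AgeRange": {"Low": 18, "High": 26}}
print(describe_face(face))  # confidence 99.99, age 18-26
```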

The following is an example response from a `DetectFaces` API call.

```
{
  "FaceDetails": [
    {
      "BoundingBox": {
        "Width": 0.7919622659683228,
        "Height": 0.7510867118835449,
        "Left": 0.08881539851427078,
        "Top": 0.151064932346344
      },
      "AgeRange": {
        "Low": 18,
        "High": 26
      },
      "Smile": {
        "Value": false,
        "Confidence": 89.77348327636719
      },
      "Eyeglasses": {
        "Value": true,
        "Confidence": 99.99996948242188
      },
      "Sunglasses": {
        "Value": true,
        "Confidence": 93.65237426757812
      },
      "Gender": {
        "Value": "Female",
        "Confidence": 99.85968780517578
      },
      "Beard": {
        "Value": false,
        "Confidence": 77.52591705322266
      },
      "Mustache": {
        "Value": false,
        "Confidence": 94.48904418945312
      },
      "EyesOpen": {
        "Value": true,
        "Confidence": 98.57169342041016
      },
      "MouthOpen": {
        "Value": false,
        "Confidence": 74.33953094482422
      },
      "Emotions": [
        {
          "Type": "SAD",
          "Confidence": 65.56403350830078
        },
        {
          "Type": "CONFUSED",
          "Confidence": 31.277774810791016
        },
        {
          "Type": "DISGUSTED",
          "Confidence": 15.553778648376465
        },
        {
          "Type": "ANGRY",
          "Confidence": 8.012762069702148
        },
        {
          "Type": "SURPRISED",
          "Confidence": 7.621500015258789
        },
        {
          "Type": "FEAR",
          "Confidence": 7.243380546569824
        },
        {
          "Type": "CALM",
          "Confidence": 5.8196024894714355
        },
        {
          "Type": "HAPPY",
          "Confidence": 2.2830512523651123
        }
      ],
      "Landmarks": [
        {
          "Type": "eyeLeft",
          "X": 0.30225440859794617,
          "Y": 0.41018882393836975
        },
        {
          "Type": "eyeRight",
          "X": 0.6439348459243774,
          "Y": 0.40341562032699585
        },
        {
          "Type": "mouthLeft",
          "X": 0.343580037355423,
          "Y": 0.6951127648353577
        },
        {
          "Type": "mouthRight",
          "X": 0.6306480765342712,
          "Y": 0.6898072361946106
        },
        {
          "Type": "nose",
          "X": 0.47164231538772583,
          "Y": 0.5763645172119141
        },
        {
          "Type": "leftEyeBrowLeft",
          "X": 0.1732882857322693,
          "Y": 0.34452149271965027
        },
        {
          "Type": "leftEyeBrowRight",
          "X": 0.3655243515968323,
          "Y": 0.33231860399246216
        },
        {
          "Type": "leftEyeBrowUp",
          "X": 0.2671719491481781,
          "Y": 0.31669262051582336
        },
        {
          "Type": "rightEyeBrowLeft",
          "X": 0.5613729953765869,
          "Y": 0.32813435792922974
        },
        {
          "Type": "rightEyeBrowRight",
          "X": 0.7665090560913086,
          "Y": 0.3318614959716797
        },
        {
          "Type": "rightEyeBrowUp",
          "X": 0.6612788438796997,
          "Y": 0.3082450032234192
        },
        {
          "Type": "leftEyeLeft",
          "X": 0.2416982799768448,
          "Y": 0.4085965156555176
        },
        {
          "Type": "leftEyeRight",
          "X": 0.36943578720092773,
          "Y": 0.41230902075767517
        },
        {
          "Type": "leftEyeUp",
          "X": 0.29974061250686646,
          "Y": 0.3971870541572571
        },
        {
          "Type": "leftEyeDown",
          "X": 0.30360740423202515,
          "Y": 0.42347756028175354
        },
        {
          "Type": "rightEyeLeft",
          "X": 0.5755768418312073,
          "Y": 0.4081145226955414
        },
        {
          "Type": "rightEyeRight",
          "X": 0.7050536870956421,
          "Y": 0.39924031496047974
        },
        {
          "Type": "rightEyeUp",
          "X": 0.642906129360199,
          "Y": 0.39026668667793274
        },
        {
          "Type": "rightEyeDown",
          "X": 0.6423097848892212,
          "Y": 0.41669243574142456
        },
        {
          "Type": "noseLeft",
          "X": 0.4122826159000397,
          "Y": 0.5987403392791748
        },
        {
          "Type": "noseRight",
          "X": 0.5394935011863708,
          "Y": 0.5960900187492371
        },
        {
          "Type": "mouthUp",
          "X": 0.478581964969635,
          "Y": 0.6660456657409668
        },
        {
          "Type": "mouthDown",
          "X": 0.483366996049881,
          "Y": 0.7497162818908691
        },
        {
          "Type": "leftPupil",
          "X": 0.30225440859794617,
          "Y": 0.41018882393836975
        },
        {
          "Type": "rightPupil",
          "X": 0.6439348459243774,
          "Y": 0.40341562032699585
        },
        {
          "Type": "upperJawlineLeft",
          "X": 0.11031254380941391,
          "Y": 0.3980775475502014
        },
        {
          "Type": "midJawlineLeft",
          "X": 0.19301874935626984,
          "Y": 0.7034031748771667
        },
        {
          "Type": "chinBottom",
          "X": 0.4939905107021332,
          "Y": 0.8877836465835571
        },
        {
          "Type": "midJawlineRight",
          "X": 0.7990140914916992,
          "Y": 0.6899225115776062
        },
        {
          "Type": "upperJawlineRight",
          "X": 0.8548634648323059,
          "Y": 0.38160091638565063
        }
      ],
      "Pose": {
        "Roll": -5.83309268951416,
        "Yaw": -2.4244730472564697,
        "Pitch": 2.6216139793395996
      },
      "Quality": {
        "Brightness": 96.16363525390625,
        "Sharpness": 95.51618957519531
      },
      "Confidence": 99.99872589111328,
      "FaceOccluded": {
        "Value": true,
        "Confidence": 99.99726104736328
      },
      "EyeDirection": {
        "Yaw": 16.299732,
        "Pitch": -6.407457,
        "Confidence": 99.968704
      }
    }
  ],
  "ResponseMetadata": {
    "RequestId": "8bf02607-70b7-4f20-be55-473fe1bba9a2",
    "HTTPStatusCode": 200,
    "HTTPHeaders": {
      "x-amzn-requestid": "8bf02607-70b7-4f20-be55-473fe1bba9a2",
      "content-type": "application/x-amz-json-1.1",
      "content-length": "3409",
      "date": "Wed, 26 Apr 2023 20:18:50 GMT"
    },
    "RetryAttempts": 0
  }
}
```

Note the following:
+ The `Pose` data describes the rotation of the detected face. You can use a combination of the `BoundingBox` and `Pose` data to draw the bounding box around faces that your application displays.
+ `Quality` describes the brightness and sharpness of the face. You might find this useful for comparing faces across images and finding the best face.
+ The preceding response shows all the facial `landmarks` the service can detect, along with all the facial attributes and emotions. To get all of these in the response, you must specify the `attributes` parameter with a value of `ALL`. By default, the `DetectFaces` API returns only the following five facial attributes: `BoundingBox`, `Confidence`, `Pose`, `Quality`, and `landmarks`. The default landmarks returned are: `eyeLeft`, `eyeRight`, `nose`, `mouthLeft`, and `mouthRight`.
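One way to draw such a box: `BoundingBox` values are ratios of the overall image dimensions, so they must be scaled to pixels before drawing. A minimal sketch (the 640x480 image size and box values are illustrative):

```python
# Sketch: convert a ratio-based BoundingBox to pixel coordinates by scaling
# each field by the image's width or height.

def to_pixels(box: dict, image_width: int, image_height: int) -> tuple:
    left = round(box["Left"] * image_width)
    top = round(box["Top"] * image_height)
    width = round(box["Width"] * image_width)
    height = round(box["Height"] * image_height)
    return left, top, width, height

box = {"Width": 0.5, "Height": 0.25, "Left": 0.1, "Top": 0.2}
print(to_pixels(box, 640, 480))  # (64, 96, 320, 120)
```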

  