


# Searching faces in a streaming video
<a name="rekognition-video-stream-processor-search-faces"></a>

Amazon Rekognition Video can search for faces in a collection that match faces detected in a streaming video. For more information about collections, see [Searching faces in a collection](collections.md).

**Topics**
+ [Creating the Amazon Rekognition Video face search stream processor](#streaming-video-creating-stream-processor)
+ [Starting the Amazon Rekognition Video face search stream processor](#streaming-video-starting-stream-processor)
+ [Using stream processors to search faces (Java V2 example)](#using-stream-processors-v2)
+ [Using stream processors to search faces (Java V1 example)](#using-stream-processors)
+ [Reading streaming video analysis results](streaming-video-kinesis-output.md)
+ [Displaying Rekognition results locally with Kinesis video streams](displaying-rekognition-results-locally.md)
+ [Reference: Kinesis face recognition record](streaming-video-kinesis-output-reference.md)

The following diagram shows how Amazon Rekognition Video detects and recognizes faces in a streaming video.

![\[Diagram showing the workflow of using Amazon Rekognition Video to process a video stream from Amazon Kinesis.\]](http://docs.aws.amazon.com/zh_cn/rekognition/latest/dg/images/VideoRekognitionStream.png)


## Creating the Amazon Rekognition Video face search stream processor
<a name="streaming-video-creating-stream-processor"></a>

Before you can analyze a streaming video, you first create an Amazon Rekognition Video stream processor ([CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html)). The stream processor contains information about the Kinesis data stream and the Kinesis video stream. It also contains the identifier for the collection that contains the faces you want to recognize in the input streaming video. You also specify a name for the stream processor. The following is a JSON example for the `CreateStreamProcessor` request.

```
{
       "Name": "streamProcessorForCam",
       "Input": {
              "KinesisVideoStream": {
                     "Arn": "arn:aws:kinesisvideo:us-east-1:nnnnnnnnnnnn:stream/inputVideo"
              }
       },
       "Output": {
              "KinesisDataStream": {
                     "Arn": "arn:aws:kinesis:us-east-1:nnnnnnnnnnnn:stream/outputData"
              }
       },
       "RoleArn": "arn:aws:iam::nnnnnnnnnnn:role/roleWithKinesisPermission",
       "Settings": {
              "FaceSearch": {
                     "CollectionId": "collection-with-100-faces",
                     "FaceMatchThreshold": 85.5
              }
       }
}
```

The following is an example response from `CreateStreamProcessor`.

```
{
       "StreamProcessorArn": "arn:aws:rekognition:us-east-1:nnnnnnnnnnnn:streamprocessor/streamProcessorForCam"
}
```

## Starting the Amazon Rekognition Video face search stream processor
<a name="streaming-video-starting-stream-processor"></a>

You start analyzing streaming video by calling [StartStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartStreamProcessor.html) with the stream processor name that you specified in `CreateStreamProcessor`. The following is a JSON example for the `StartStreamProcessor` request.

```
{
       "Name": "streamProcessorForCam"
}
```

If the stream processor successfully starts, an HTTP 200 response is returned, along with an empty JSON body.

## Using stream processors to search faces (Java V2 example)
<a name="using-stream-processors-v2"></a>

The following example code shows how to call various stream processor operations, such as [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html) and [StartStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartStreamProcessor.html), using the AWS SDK for Java version 2.

This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/CreateStreamProcessor.java).

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.CreateStreamProcessorRequest;
import software.amazon.awssdk.services.rekognition.model.CreateStreamProcessorResponse;
import software.amazon.awssdk.services.rekognition.model.FaceSearchSettings;
import software.amazon.awssdk.services.rekognition.model.KinesisDataStream;
import software.amazon.awssdk.services.rekognition.model.KinesisVideoStream;
import software.amazon.awssdk.services.rekognition.model.ListStreamProcessorsRequest;
import software.amazon.awssdk.services.rekognition.model.ListStreamProcessorsResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.StreamProcessor;
import software.amazon.awssdk.services.rekognition.model.StreamProcessorInput;
import software.amazon.awssdk.services.rekognition.model.StreamProcessorSettings;
import software.amazon.awssdk.services.rekognition.model.StreamProcessorOutput;
import software.amazon.awssdk.services.rekognition.model.StartStreamProcessorRequest;
import software.amazon.awssdk.services.rekognition.model.DescribeStreamProcessorRequest;
import software.amazon.awssdk.services.rekognition.model.DescribeStreamProcessorResponse;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 * <p>
 * For more information, see the following documentation topic:
 * <p>
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class CreateStreamProcessor {
    public static void main(String[] args) {
        final String usage = """
                
                Usage:    <role> <kinInputStream> <kinOutputStream> <collectionName> <StreamProcessorName>
                
                Where:
                   role - The ARN of the AWS Identity and Access Management (IAM) role to use. \s
                   kinInputStream - The ARN of the Kinesis video stream.\s
                   kinOutputStream - The ARN of the Kinesis data stream.\s
                   collectionName - The name of the collection to use that contains content. \s
                   StreamProcessorName - The name of the Stream Processor. \s
                """;

        if (args.length != 5) {
            System.out.println(usage);
            System.exit(1);
        }

        String role = args[0];
        String kinInputStream = args[1];
        String kinOutputStream = args[2];
        String collectionName = args[3];
        String streamProcessorName = args[4];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        processCollection(rekClient, streamProcessorName, kinInputStream, kinOutputStream, collectionName,
                role);
        startSpecificStreamProcessor(rekClient, streamProcessorName);
        listStreamProcessors(rekClient);
        describeStreamProcessor(rekClient, streamProcessorName);
        deleteSpecificStreamProcessor(rekClient, streamProcessorName);
    }

    public static void listStreamProcessors(RekognitionClient rekClient) {
        ListStreamProcessorsRequest request = ListStreamProcessorsRequest.builder()
                .maxResults(15)
                .build();

        ListStreamProcessorsResponse listStreamProcessorsResult = rekClient.listStreamProcessors(request);
        for (StreamProcessor streamProcessor : listStreamProcessorsResult.streamProcessors()) {
            System.out.println("StreamProcessor name - " + streamProcessor.name());
            System.out.println("Status - " + streamProcessor.status());
        }
    }

    private static void describeStreamProcessor(RekognitionClient rekClient, String StreamProcessorName) {
        DescribeStreamProcessorRequest streamProcessorRequest = DescribeStreamProcessorRequest.builder()
                .name(StreamProcessorName)
                .build();

        DescribeStreamProcessorResponse describeStreamProcessorResult = rekClient
                .describeStreamProcessor(streamProcessorRequest);
        System.out.println("Arn - " + describeStreamProcessorResult.streamProcessorArn());
        System.out.println("Input kinesisVideo stream - "
                + describeStreamProcessorResult.input().kinesisVideoStream().arn());
        System.out.println("Output kinesisData stream - "
                + describeStreamProcessorResult.output().kinesisDataStream().arn());
        System.out.println("RoleArn - " + describeStreamProcessorResult.roleArn());
        System.out.println(
                "CollectionId - "
                        + describeStreamProcessorResult.settings().faceSearch().collectionId());
        System.out.println("Status - " + describeStreamProcessorResult.status());
        System.out.println("Status message - " + describeStreamProcessorResult.statusMessage());
        System.out.println("Creation timestamp - " + describeStreamProcessorResult.creationTimestamp());
        System.out.println("Last update timestamp - " + describeStreamProcessorResult.lastUpdateTimestamp());
    }

    private static void startSpecificStreamProcessor(RekognitionClient rekClient, String StreamProcessorName) {
        try {
            StartStreamProcessorRequest streamProcessorRequest = StartStreamProcessorRequest.builder()
                    .name(StreamProcessorName)
                    .build();

            rekClient.startStreamProcessor(streamProcessorRequest);
            System.out.println("Stream Processor " + StreamProcessorName + " started.");

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    private static void processCollection(RekognitionClient rekClient, String StreamProcessorName,
                                          String kinInputStream, String kinOutputStream, String collectionName, String role) {
        try {
            KinesisVideoStream videoStream = KinesisVideoStream.builder()
                    .arn(kinInputStream)
                    .build();

            KinesisDataStream dataStream = KinesisDataStream.builder()
                    .arn(kinOutputStream)
                    .build();

            StreamProcessorOutput processorOutput = StreamProcessorOutput.builder()
                    .kinesisDataStream(dataStream)
                    .build();

            StreamProcessorInput processorInput = StreamProcessorInput.builder()
                    .kinesisVideoStream(videoStream)
                    .build();

            FaceSearchSettings searchSettings = FaceSearchSettings.builder()
                    .faceMatchThreshold(75f)
                    .collectionId(collectionName)
                    .build();

            StreamProcessorSettings processorSettings = StreamProcessorSettings.builder()
                    .faceSearch(searchSettings)
                    .build();

            CreateStreamProcessorRequest processorRequest = CreateStreamProcessorRequest.builder()
                    .name(StreamProcessorName)
                    .input(processorInput)
                    .output(processorOutput)
                    .roleArn(role)
                    .settings(processorSettings)
                    .build();

            CreateStreamProcessorResponse response = rekClient.createStreamProcessor(processorRequest);
            System.out.println("The ARN for the newly created stream processor is "
                    + response.streamProcessorArn());

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    private static void deleteSpecificStreamProcessor(RekognitionClient rekClient, String StreamProcessorName) {
        rekClient.stopStreamProcessor(a -> a.name(StreamProcessorName));
        rekClient.deleteStreamProcessor(a -> a.name(StreamProcessorName));
        System.out.println("Stream Processor " + StreamProcessorName + " deleted.");
    }
}
```

## Using stream processors to search faces (Java V1 example)
<a name="using-stream-processors"></a>

The following example code shows how to call various stream processor operations, such as [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html) and [StartStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartStreamProcessor.html), using Java V1. The example includes a stream processor manager class (StreamManager) that provides methods for calling stream processor operations. A starter class (Starter) creates a StreamManager object and calls various operations.

**To configure the example:**

1. Set the values of the Starter class member fields to the values that you want.

1. In the Starter class function `main`, uncomment the function calls that you want.

### Starter class
<a name="streaming-started"></a>

```
//Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

// Starter class. Use to create a StreamManager class
// and call stream processor operations.
package com.amazonaws.samples;
import com.amazonaws.samples.*;

public class Starter {

	public static void main(String[] args) {
		
		
    	String streamProcessorName="Stream Processor Name";
    	String kinesisVideoStreamArn="Kinesis Video Stream Arn";
    	String kinesisDataStreamArn="Kinesis Data Stream Arn";
    	String roleArn="Role Arn";
    	String collectionId="Collection ID";
    	Float matchThreshold=50F;

		try {
			StreamManager sm= new StreamManager(streamProcessorName,
					kinesisVideoStreamArn,
					kinesisDataStreamArn,
					roleArn,
					collectionId,
					matchThreshold);
			//sm.createStreamProcessor();
			//sm.startStreamProcessor();
			//sm.deleteStreamProcessor();
			//sm.stopStreamProcessor();
			//sm.listStreamProcessors();
			//sm.describeStreamProcessor();
		}
		catch(Exception e){
			System.out.println(e.getMessage());
		}
	}
}
```

### StreamManager class
<a name="streaming-manager"></a>

```
//Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

// Stream manager class. Provides methods for calling
// Stream Processor operations.
package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.CreateStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.CreateStreamProcessorResult;
import com.amazonaws.services.rekognition.model.DeleteStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.DeleteStreamProcessorResult;
import com.amazonaws.services.rekognition.model.DescribeStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.DescribeStreamProcessorResult;
import com.amazonaws.services.rekognition.model.FaceSearchSettings;
import com.amazonaws.services.rekognition.model.KinesisDataStream;
import com.amazonaws.services.rekognition.model.KinesisVideoStream;
import com.amazonaws.services.rekognition.model.ListStreamProcessorsRequest;
import com.amazonaws.services.rekognition.model.ListStreamProcessorsResult;
import com.amazonaws.services.rekognition.model.StartStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.StartStreamProcessorResult;
import com.amazonaws.services.rekognition.model.StopStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.StopStreamProcessorResult;
import com.amazonaws.services.rekognition.model.StreamProcessor;
import com.amazonaws.services.rekognition.model.StreamProcessorInput;
import com.amazonaws.services.rekognition.model.StreamProcessorOutput;
import com.amazonaws.services.rekognition.model.StreamProcessorSettings;

public class StreamManager {

    private String streamProcessorName;
    private String kinesisVideoStreamArn;
    private String kinesisDataStreamArn;
    private String roleArn;
    private String collectionId;
    private float matchThreshold;

    private AmazonRekognition rekognitionClient;
    

    public StreamManager(String spName,
    		String kvStreamArn,
    		String kdStreamArn,
    		String iamRoleArn,
    		String collId,
    		Float threshold){
    	streamProcessorName=spName;
    	kinesisVideoStreamArn=kvStreamArn;
    	kinesisDataStreamArn=kdStreamArn;
    	roleArn=iamRoleArn;
    	collectionId=collId;
    	matchThreshold=threshold;
    	rekognitionClient=AmazonRekognitionClientBuilder.defaultClient();
    	
    }
    
    public void createStreamProcessor() {
    	//Setup input parameters
        KinesisVideoStream kinesisVideoStream = new KinesisVideoStream().withArn(kinesisVideoStreamArn);
        StreamProcessorInput streamProcessorInput =
                new StreamProcessorInput().withKinesisVideoStream(kinesisVideoStream);
        KinesisDataStream kinesisDataStream = new KinesisDataStream().withArn(kinesisDataStreamArn);
        StreamProcessorOutput streamProcessorOutput =
                new StreamProcessorOutput().withKinesisDataStream(kinesisDataStream);
        FaceSearchSettings faceSearchSettings =
                new FaceSearchSettings().withCollectionId(collectionId).withFaceMatchThreshold(matchThreshold);
        StreamProcessorSettings streamProcessorSettings =
                new StreamProcessorSettings().withFaceSearch(faceSearchSettings);

        //Create the stream processor
        CreateStreamProcessorResult createStreamProcessorResult = rekognitionClient.createStreamProcessor(
                new CreateStreamProcessorRequest().withInput(streamProcessorInput).withOutput(streamProcessorOutput)
                        .withSettings(streamProcessorSettings).withRoleArn(roleArn).withName(streamProcessorName));

        //Display result
        System.out.println("Stream Processor " + streamProcessorName + " created.");
        System.out.println("StreamProcessorArn - " + createStreamProcessorResult.getStreamProcessorArn());
    }

    public void startStreamProcessor() {
        StartStreamProcessorResult startStreamProcessorResult =
                rekognitionClient.startStreamProcessor(new StartStreamProcessorRequest().withName(streamProcessorName));
        System.out.println("Stream Processor " + streamProcessorName + " started.");
    }

    public void stopStreamProcessor() {
        StopStreamProcessorResult stopStreamProcessorResult =
                rekognitionClient.stopStreamProcessor(new StopStreamProcessorRequest().withName(streamProcessorName));
        System.out.println("Stream Processor " + streamProcessorName + " stopped.");
    }

    public void deleteStreamProcessor() {
        DeleteStreamProcessorResult deleteStreamProcessorResult = rekognitionClient
                .deleteStreamProcessor(new DeleteStreamProcessorRequest().withName(streamProcessorName));
        System.out.println("Stream Processor " + streamProcessorName + " deleted.");
    }

    public void describeStreamProcessor() {
        DescribeStreamProcessorResult describeStreamProcessorResult = rekognitionClient
                .describeStreamProcessor(new DescribeStreamProcessorRequest().withName(streamProcessorName));

        //Display various stream processor attributes.
        System.out.println("Arn - " + describeStreamProcessorResult.getStreamProcessorArn());
        System.out.println("Input kinesisVideo stream - "
                + describeStreamProcessorResult.getInput().getKinesisVideoStream().getArn());
        System.out.println("Output kinesisData stream - "
                + describeStreamProcessorResult.getOutput().getKinesisDataStream().getArn());
        System.out.println("RoleArn - " + describeStreamProcessorResult.getRoleArn());
        System.out.println(
                "CollectionId - " + describeStreamProcessorResult.getSettings().getFaceSearch().getCollectionId());
        System.out.println("Status - " + describeStreamProcessorResult.getStatus());
        System.out.println("Status message - " + describeStreamProcessorResult.getStatusMessage());
        System.out.println("Creation timestamp - " + describeStreamProcessorResult.getCreationTimestamp());
        System.out.println("Last update timestamp - " + describeStreamProcessorResult.getLastUpdateTimestamp());
    }

    public void listStreamProcessors() {
        ListStreamProcessorsResult listStreamProcessorsResult =
                rekognitionClient.listStreamProcessors(new ListStreamProcessorsRequest().withMaxResults(100));

        //List all stream processors (and state) returned from Rekognition
        for (StreamProcessor streamProcessor : listStreamProcessorsResult.getStreamProcessors()) {
            System.out.println("StreamProcessor name - " + streamProcessor.getName());
            System.out.println("Status - " + streamProcessor.getStatus());
        }
    }
}
```

# Reading streaming video analysis results
<a name="streaming-video-kinesis-output"></a>

You can use the Amazon Kinesis Data Streams Client Library to consume analysis results that are sent to the Amazon Kinesis Data Streams output stream. For more information, see [Reading data from a Kinesis data stream](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html). Amazon Rekognition Video places a JSON frame record for each analyzed frame into the Kinesis output stream. Amazon Rekognition Video doesn't analyze every frame that's passed to it through the Kinesis video stream.

A frame record that's sent to the Kinesis data stream contains information about which Kinesis video stream fragment the frame is in, where the frame is in the fragment, and the faces that are recognized in the frame. It also includes status information for the stream processor. For more information, see [Reference: Kinesis face recognition record](streaming-video-kinesis-output-reference.md).

The Amazon Kinesis Video Streams Parser Library contains example tests that consume Amazon Rekognition Video results and integrate them with the original Kinesis video stream. For more information, see [Displaying Rekognition results locally with Kinesis video streams](displaying-rekognition-results-locally.md).

Amazon Rekognition Video streams its analysis information to the Kinesis data stream. The following is a JSON example for a single record.

```
{
  "InputInformation": {
    "KinesisVideo": {
      "StreamArn": "arn:aws:kinesisvideo:us-west-2:nnnnnnnnnnnn:stream/stream-name",
      "FragmentNumber": "91343852333289682796718532614445757584843717598",
      "ServerTimestamp": 1510552593.455,
      "ProducerTimestamp": 1510552593.193,
      "FrameOffsetInSeconds": 2
    }
  },
  "StreamProcessorInformation": {
    "Status": "RUNNING"
  },
  "FaceSearchResponse": [
    {
      "DetectedFace": {
        "BoundingBox": {
          "Height": 0.075,
          "Width": 0.05625,
          "Left": 0.428125,
          "Top": 0.40833333
        },
        "Confidence": 99.975174,
        "Landmarks": [
          {
            "X": 0.4452057,
            "Y": 0.4395594,
            "Type": "eyeLeft"
          },
          {
            "X": 0.46340984,
            "Y": 0.43744427,
            "Type": "eyeRight"
          },
          {
            "X": 0.45960626,
            "Y": 0.4526856,
            "Type": "nose"
          },
          {
            "X": 0.44958648,
            "Y": 0.4696949,
            "Type": "mouthLeft"
          },
          {
            "X": 0.46409217,
            "Y": 0.46704912,
            "Type": "mouthRight"
          }
        ],
        "Pose": {
          "Pitch": 2.9691637,
          "Roll": -6.8904796,
          "Yaw": 23.84388
        },
        "Quality": {
          "Brightness": 40.592964,
          "Sharpness": 96.09616
        }
      },
      "MatchedFaces": [
        {
          "Similarity": 88.863960,
          "Face": {
            "BoundingBox": {
              "Height": 0.557692,
              "Width": 0.749838,
              "Left": 0.103426,
              "Top": 0.206731
            },
            "FaceId": "ed1b560f-d6af-5158-989a-ff586c931545",
            "Confidence": 99.999201,
            "ImageId": "70e09693-2114-57e1-807c-50b6d61fa4dc",
            "ExternalImageId": "matchedImage.jpeg"
          }
        }
      ]
    }
  ]
}
```

In the JSON example, note the following:
+ **InputInformation** – Information about the Kinesis video stream that's used to stream video into Amazon Rekognition Video. For more information, see [InputInformation](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-inputinformation).
+ **StreamProcessorInformation** – Status information for the Amazon Rekognition Video stream processor. The only possible value for the `Status` field is RUNNING. For more information, see [StreamProcessorInformation](streaming-video-kinesis-output-reference-streamprocessorinformation.md).
+ **FaceSearchResponse** – Contains information about faces in the streaming video that match faces in the input collection. [FaceSearchResponse](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-facesearchresponse) contains a [DetectedFace](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-detectedface) object, which is a face that was detected in the analyzed video frame. For each detected face, the array `MatchedFaces` contains an array of matching face objects found in the input collection ([MatchedFace](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-facematch)), along with a similarity score.
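After a consumer deserializes a record like the one above, a common next step is to select, for each detected face, the collection match with the highest `Similarity`. The following is a minimal sketch of that selection; the nested `MatchedFace` record here is a hypothetical stand-in for a deserialized model type, not an SDK class:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class BestMatchPicker {
    // Hypothetical stand-in for one entry of a FaceSearchResponse MatchedFaces array.
    record MatchedFace(String faceId, double similarity) {}

    /** Returns the matched face with the highest similarity score, if any. */
    static Optional<MatchedFace> bestMatch(List<MatchedFace> matchedFaces) {
        return matchedFaces.stream()
                .max(Comparator.comparingDouble(MatchedFace::similarity));
    }

    public static void main(String[] args) {
        // Values taken from the example record above.
        List<MatchedFace> matches = Arrays.asList(
                new MatchedFace("ed1b560f-d6af-5158-989a-ff586c931545", 88.86),
                new MatchedFace("70e09693-2114-57e1-807c-50b6d61fa4dc", 72.10));
        bestMatch(matches).ifPresent(m -> System.out.println(m.faceId() + " " + m.similarity()));
    }
}
```

An empty `MatchedFaces` array (a detected face with no match in the collection) yields an empty `Optional` rather than an error.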

## Mapping the Kinesis video stream to the Kinesis data stream
<a name="mapping-streams"></a>

You might want to map the Kinesis video stream frames to the analyzed frames that are sent to the Kinesis data stream. For example, during the display of a streaming video, you might want to display boxes around the faces of recognized people. The bounding box coordinates are sent as part of the Kinesis face recognition record to the Kinesis data stream. To display the bounding boxes correctly, you need to map the time information that's sent with the Kinesis face recognition record to the corresponding frames in the source Kinesis video stream.

The technique that you use to map the Kinesis video stream to the Kinesis data stream depends on whether you're streaming live media (such as a live streaming video) or archived media (such as a stored video).

### Mapping when you're streaming live media
<a name="mapping-streaming-video"></a>

**To map a Kinesis video stream frame to a Kinesis data stream frame**

1. Set the `FragmentTimeCodeType` input parameter of the [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html) operation to `RELATIVE`.

1. Call `PutMedia` to deliver live media into the Kinesis video stream.

1. When you receive a Kinesis face recognition record from the Kinesis data stream, store the values of `ProducerTimestamp` and `FrameOffsetInSeconds` from the [KinesisVideo](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo) field.

1. Calculate the timestamp that corresponds to the Kinesis video stream frame by adding the `ProducerTimestamp` and `FrameOffsetInSeconds` field values together.
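The arithmetic in steps 3 and 4 is a single addition. As a minimal sketch (the values in `main` are taken from the example record earlier in this guide):

```java
public class LiveStreamFrameMapper {
    /**
     * Computes the absolute timestamp (in seconds) of an analyzed frame by
     * adding the fragment's producer-side Unix timestamp to the frame's
     * offset inside that fragment, as described in steps 3 and 4.
     */
    public static double frameTimestamp(double producerTimestamp, double frameOffsetInSeconds) {
        return producerTimestamp + frameOffsetInSeconds;
    }

    public static void main(String[] args) {
        // ProducerTimestamp and FrameOffsetInSeconds from the example record.
        double producerTimestamp = 1510552593.193;
        double frameOffsetInSeconds = 2;
        System.out.println(frameTimestamp(producerTimestamp, frameOffsetInSeconds));
    }
}
```

You then seek to that timestamp in the Kinesis video stream to retrieve the frame that the record describes.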

### Mapping when you're streaming archived media
<a name="map-stored-video"></a>

**To map a Kinesis video stream frame to a Kinesis data stream frame**

1. Call [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html) to deliver archived media into the Kinesis video stream.

1. When you receive an `Acknowledgement` object in the response from the `PutMedia` operation, store the `FragmentNumber` field value from the [Payload](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html#API_dataplane_PutMedia_ResponseSyntax) field. `FragmentNumber` is the fragment number of the MKV cluster.

1. When you receive a Kinesis face recognition record from the Kinesis data stream, store the `FrameOffsetInSeconds` field value from the [KinesisVideo](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo) field.

1. Calculate the mapping by using the `FrameOffsetInSeconds` and `FragmentNumber` values that you stored in steps 2 and 3. `FrameOffsetInSeconds` is the offset into the fragment with the specific `FragmentNumber` that's sent to the Amazon Kinesis data stream. For more information about getting the video frames for a given fragment number, see [Amazon Kinesis Video Streams Archived Media](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_Operations_Amazon_Kinesis_Video_Streams_Archived_Media.html).
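For archived media, the record's `FragmentNumber` is the join key between the `PutMedia` acknowledgements (step 2) and the Rekognition records (step 3). The sketch below assumes you can associate each acknowledged fragment number with that fragment's start position in the archived video (for example, by using the Archived Media API); the `ArchivedStreamFrameMapper` class and its values are illustrative, not part of any SDK:

```java
import java.util.HashMap;
import java.util.Map;

public class ArchivedStreamFrameMapper {
    // Maps a fragment number (from PutMedia acknowledgements, step 2) to that
    // fragment's start position, in seconds, within the archived video.
    private final Map<String, Double> fragmentStartTimes = new HashMap<>();

    public void recordAcknowledgement(String fragmentNumber, double fragmentStartSeconds) {
        fragmentStartTimes.put(fragmentNumber, fragmentStartSeconds);
    }

    /**
     * Step 4: the position of the analyzed frame within the archived video,
     * computed from the stored fragment start time plus the record's
     * FrameOffsetInSeconds (step 3). Returns -1 if the fragment is unknown.
     */
    public double framePosition(String fragmentNumber, double frameOffsetInSeconds) {
        Double start = fragmentStartTimes.get(fragmentNumber);
        return (start == null) ? -1 : start + frameOffsetInSeconds;
    }

    public static void main(String[] args) {
        ArchivedStreamFrameMapper mapper = new ArchivedStreamFrameMapper();
        // Illustrative fragment number and start time.
        mapper.recordAcknowledgement("91343852333289682796718532614445757584843717598", 120.0);
        System.out.println(
                mapper.framePosition("91343852333289682796718532614445757584843717598", 2.0)); // prints 122.0
    }
}
```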

# Displaying Rekognition results locally with Kinesis video streams
<a name="displaying-rekognition-results-locally"></a>

You can view the results of Amazon Rekognition Video displayed in your feed from Amazon Kinesis Video Streams by using the [KinesisVideo - Rekognition examples](https://github.com/aws/amazon-kinesis-video-streams-parser-library#kinesisvideo---rekognition-examples) sample tests of the Amazon Kinesis Video Streams Parser Library. The `KinesisVideoRekognitionIntegrationExample` displays bounding boxes over detected faces and renders the video locally through JFrame. This process assumes that you have successfully connected a media input from a device camera to a Kinesis video stream and started an Amazon Rekognition stream processor. For more information, see [Streaming using a GStreamer plugin](streaming-using-gstreamer-plugin.md).

## Step 1: Installing the Kinesis video streams parser library
<a name="step-1-install-parser-library"></a>

To create a directory and download the GitHub repository, run the following command:

```
$ git clone https://github.com/aws/amazon-kinesis-video-streams-parser-library.git
```

Navigate to the library directory and run the following Maven command to perform a clean install:

```
$ mvn clean install
```

## Step 2: Configuring the Kinesis video streams and Rekognition integration example test
<a name="step-2-configure-kinesis-video-rekognition-example-test"></a>

Open the `KinesisVideoRekognitionIntegrationExampleTest.java` file. Remove the `@Ignore` after the class header. Populate the data fields with the information from your Amazon Kinesis and Amazon Rekognition resources. For more information, see [Setting up your Amazon Rekognition Video and Amazon Kinesis resources](setting-up-your-amazon-rekognition-streaming-video-resources.md). If you're streaming video to your Kinesis video stream, remove the `inputStream` parameter.

See the following code example:

```
RekognitionInput rekognitionInput = RekognitionInput.builder()
  .kinesisVideoStreamArn("arn:aws:kinesisvideo:us-east-1:123456789012:stream/rekognition-test-video-stream")
  .kinesisDataStreamArn("arn:aws:kinesis:us-east-1:123456789012:stream/AmazonRekognition-rekognition-test-data-stream")
  .streamingProcessorName("rekognition-test-stream-processor")
  // Refer how to add face collection :
  // https://docs.aws.amazon.com/rekognition/latest/dg/add-faces-to-collection-procedure.html
  .faceCollectionId("rekognition-test-face-collection")
  .iamRoleArn("rekognition-test-IAM-role")
  .matchThreshold(0.95f)
  .build();                
            
KinesisVideoRekognitionIntegrationExample example = KinesisVideoRekognitionIntegrationExample.builder()
  .region(Regions.US_EAST_1)
  .kvsStreamName("rekognition-test-video-stream")
  .kdsStreamName("AmazonRekognition-rekognition-test-data-stream")
  .rekognitionInput(rekognitionInput)
  .credentialsProvider(new ProfileCredentialsProvider())
  // NOTE: Comment out or delete the inputStream parameter if you are streaming video, otherwise
  // the test will use a sample video. 
  //.inputStream(TestResourceUtil.getTestInputStream("bezos_vogels.mkv"))
  .build();
```

## Step 3: Running the Kinesis video streams and Rekognition integration example test
<a name="step-3-run-kinesis-video-rekognition-example-test"></a>

Make sure that your Kinesis video stream is receiving media input if you're streaming to it, and start analyzing your stream with an Amazon Rekognition Video stream processor running. For more information, see [Overview of Amazon Rekognition Video stream processor operations](streaming-video.md#using-rekognition-video-stream-processor). Run the `KinesisVideoRekognitionIntegrationExampleTest` class as a JUnit test. After a short delay, a new window opens with a video feed from your Kinesis video stream, with bounding boxes drawn over detected faces.

**Note**  
The faces in the collection used in this example must have an external image ID (the file name) specified in this format for the bounding box labels to display meaningful text: PersonName1-Trusted, PersonName2-Intruder, PersonName3-Neutral, and so on. The labels can also be color-coded and are customizable in the FaceType.java file.
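A label in that naming convention can be split into the person's name and trust category at the last hyphen. The following is a minimal sketch of such parsing; it is not the parser library's actual `FaceType` implementation:

```java
public class ExternalImageIdParser {
    /**
     * Splits an external image ID such as "PersonName1-Trusted" into its
     * name and category parts at the last hyphen. If no hyphen is present,
     * returns the whole ID as the name with an empty category.
     */
    public static String[] parse(String externalImageId) {
        int i = externalImageId.lastIndexOf('-');
        if (i < 0) {
            return new String[] { externalImageId, "" };
        }
        return new String[] { externalImageId.substring(0, i), externalImageId.substring(i + 1) };
    }

    public static void main(String[] args) {
        String[] parts = parse("PersonName1-Trusted");
        System.out.println(parts[0] + " / " + parts[1]); // prints PersonName1 / Trusted
    }
}
```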

# Reference: Kinesis face recognition record
<a name="streaming-video-kinesis-output-reference"></a>

You can use Amazon Rekognition Video to recognize faces in a streaming video. For each analyzed frame, Amazon Rekognition Video outputs a JSON frame record to a Kinesis data stream. Amazon Rekognition Video doesn't analyze every frame that's passed to it through the Kinesis video stream.

A JSON frame record contains information about the input and output streams, the status of the stream processor, and the faces that are recognized in the analyzed frame. This section contains reference information for the JSON frame record.

The following is the JSON syntax for a Kinesis data stream record. For more information, see [Working with streaming video events](streaming-video.md).

**Note**  
The Amazon Rekognition Video API works by comparing the faces in your input stream against a collection of faces, and returning the closest matches found, along with a similarity score.

```
{
    "InputInformation": {
        "KinesisVideo": {
            "StreamArn": "string",
            "FragmentNumber": "string",
            "ProducerTimestamp": number,
            "ServerTimestamp": number,
            "FrameOffsetInSeconds": number
        }
    },
    "StreamProcessorInformation": {
        "Status": "RUNNING"
    },
    "FaceSearchResponse": [
        {
            "DetectedFace": {
                "BoundingBox": {
                    "Width": number,
                    "Top": number,
                    "Height": number,
                    "Left": number
                },
                "Confidence": number,
                "Landmarks": [
                    {
                        "Type": "string",
                        "X": number,
                        "Y": number
                    }
                ],
                "Pose": {
                    "Pitch": number,
                    "Roll": number,
                    "Yaw": number
                },
                "Quality": {
                    "Brightness": number,
                    "Sharpness": number
                }
            },
            "MatchedFaces": [
                {
                    "Similarity": number,
                    "Face": {
                        "BoundingBox": {
                            "Width": number,
                            "Top": number,
                            "Height": number,
                            "Left": number
                        },
                        "Confidence": number,
                        "ExternalImageId": "string",
                        "FaceId": "string",
                        "ImageId": "string"
                    }
                }
            ]
        }
    ]
}
```

## JSON record
<a name="streaming-video-kinesis-output-reference-processorresult"></a>

The JSON record contains information about a frame that was processed by Amazon Rekognition Video. The record includes information about the streaming video, the status of the analyzed frame, and information about the faces that are recognized in the frame.

**InputInformation**

Information about the Kinesis video stream that's used to stream video into Amazon Rekognition Video.

Type: [InputInformation](#streaming-video-kinesis-output-reference-inputinformation) object

**StreamProcessorInformation**

Information about the Amazon Rekognition Video stream processor. This includes status information for the current state of the stream processor.

Type: [StreamProcessorInformation](streaming-video-kinesis-output-reference-streamprocessorinformation.md) object

**FaceSearchResponse**

Information about the faces detected in the streaming video frame and the matching faces found in the input collection.

Type: Array of [FaceSearchResponse](#streaming-video-kinesis-output-reference-facesearchresponse) objects

## InputInformation
<a name="streaming-video-kinesis-output-reference-inputinformation"></a>

Information about the source video stream that Amazon Rekognition Video uses. For more information, see [Working with streaming video events](streaming-video.md).

**KinesisVideo**

Type: [KinesisVideo](#streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo) object

## KinesisVideo
<a name="streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo"></a>

Information about the Kinesis video stream that streams the source video into Amazon Rekognition Video. For more information, see [Working with streaming video events](streaming-video.md).

**StreamArn**

The Amazon Resource Name (ARN) of the Kinesis video stream.

Type: String

**FragmentNumber**

The fragment of streaming video that contains the frame that this record represents.

Type: String

**ProducerTimestamp**

The producer-side Unix timestamp of the fragment. For more information, see [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html).

Type: Number

**ServerTimestamp**

The server-side Unix timestamp of the fragment. For more information, see [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html).

Type: Number

**FrameOffsetInSeconds**

The offset of the frame (in seconds) inside the fragment.

Type: Number

# StreamProcessorInformation
<a name="streaming-video-kinesis-output-reference-streamprocessorinformation"></a>

Status information about the stream processor.

**Status**

The current status of the stream processor. One possible value is RUNNING.

Type: String

## FaceSearchResponse
<a name="streaming-video-kinesis-output-reference-facesearchresponse"></a>

Information about a face detected in a streaming video frame and the faces in a collection that match the detected face. You specify the collection in a call to [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html). For more information, see [Working with streaming video events](streaming-video.md).

**DetectedFace**

Face details for a face detected in an analyzed video frame.

Type: [DetectedFace](#streaming-video-kinesis-output-reference-detectedface) object

**MatchedFaces**

An array of face details for faces in a collection that match the face detected in `DetectedFace`.

Type: Array of [MatchedFace](#streaming-video-kinesis-output-reference-facematch) objects

## DetectedFace
<a name="streaming-video-kinesis-output-reference-detectedface"></a>

Information about a face that's detected in a streaming video frame. Matching faces in the input collection are available in the [MatchedFace](#streaming-video-kinesis-output-reference-facematch) object field.

**BoundingBox**

The bounding box coordinates for a face that's detected within an analyzed video frame. The BoundingBox object has the same properties as the BoundingBox object that's used for image analysis.

Type: [BoundingBox](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_BoundingBox.html) object

**Confidence**

The confidence level (1-100) that Amazon Rekognition Video has that the detected face is actually a face. 1 is the lowest confidence, 100 is the highest.

Type: Number

**Landmarks**

An array of facial landmarks.

Type: Array of [Landmark](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Landmark.html) objects

**Pose**

Indicates the pose of the face as determined by its pitch, roll, and yaw.

Type: [Pose](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Pose.html) object

**Quality**

Identifies the face image brightness and sharpness.

Type: [ImageQuality](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_ImageQuality.html) object

## MatchedFace
<a name="streaming-video-kinesis-output-reference-facematch"></a>

Information about a face that matches a face detected in an analyzed video frame.

**Face**

Face match information for a face in the input collection that matches the face in the [DetectedFace](#streaming-video-kinesis-output-reference-detectedface) object.

Type: [Face](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Face.html) object

**Similarity**

The level of confidence (1-100) that the faces match. 1 is the lowest confidence, 100 is the highest.

Type: Number