


# Starting a conversation stream to an Amazon Lex V2 bot
<a name="start-stream-conversation"></a>

You use the [StartConversation](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_StartConversation.html) operation to start a stream between the user and the Amazon Lex V2 bot in your application. The `POST` request from your application establishes a connection between your application and the Amazon Lex V2 bot. This enables your application and the bot to start exchanging information with each other through events.

**Note**  
When using `StartConversation`, if the duration of the speech audio exceeds the configured value of `max-length-ms`, Amazon Lex V2 cuts off the audio at the specified duration.

The `StartConversation` operation is supported only in the following SDKs:
+ [AWS SDK for C++](https://docs.aws.amazon.com/goto/SdkForCpp/runtime.lex.v2-2020-08-07/StartConversation)
+ [AWS SDK for Java V2](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/lexruntimev2/LexRuntimeV2AsyncClient.html)
+ [AWS SDK for JavaScript v3](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-lex-runtime-v2/index.html#aws-sdkclient-lex-runtime-v2)
+ [AWS SDK for Ruby V3](https://docs.aws.amazon.com/goto/SdkForRubyV3/runtime.lex.v2-2020-08-07/StartConversation)

The first event that your application must send to the Amazon Lex V2 bot is a [ConfigurationEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_ConfigurationEvent.html). This event includes information such as the response type format. The following parameters can be used in the configuration event:
+ **responseContentType** – Determines whether the bot responds to user input with text or with speech.
+ **sessionState** – Information related to the streaming session with the bot, such as the predetermined intent or the dialog state.
+ **welcomeMessages** – Specifies the welcome messages to play to the user at the start of their conversation with the bot. These messages play before the user provides any input. To activate welcome messages, you must also specify values for the `sessionState` and `dialogAction` parameters.
+ **disablePlayback** – Determines whether the bot should wait for a cue from the client before it starts listening for caller input. By default, playback is activated, so the value of this field is `false`.
+ **requestAttributes** – Provides additional information for the request.
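The dependency between `welcomeMessages` and `sessionState`/`dialogAction` described above can be sketched as a small validation helper. This is an illustrative check only, not part of the AWS SDK; the field names mirror the `ConfigurationEvent` data type, but the class and method are hypothetical.

```java
import java.util.List;

// Hypothetical helper: checks the consistency rules for a configuration event
// before it is sent. Not an AWS SDK class.
public class ConfigurationEventCheck {

    // Returns true when the configuration is consistent: a response content
    // type is always required, and welcome messages require a session state
    // with a dialog action, as described above.
    public static boolean isValid(String responseContentType,
                                  List<String> welcomeMessages,
                                  boolean hasSessionStateDialogAction) {
        if (responseContentType == null || responseContentType.isEmpty()) {
            return false; // the bot must know whether to answer with text or audio
        }
        if (welcomeMessages != null && !welcomeMessages.isEmpty()
                && !hasSessionStateDialogAction) {
            return false; // welcome messages play only when dialogAction is also set
        }
        return true;
    }
}
```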

For information about how to specify values for the preceding parameters, see the [ConfigurationEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_ConfigurationEvent.html) data type of the [StartConversation](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_StartConversation.html) operation.

You can have only one configuration event for each stream between the bot and your application. After your application sends the configuration event, the bot can accept additional communications from your application.

If you've specified that your users communicate with the Amazon Lex V2 bot with audio, your application can send the following events to the bot during the conversation:
+ [AudioInputEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_AudioInputEvent.html) – Contains an audio chunk with a maximum size of 320 bytes. Your application must use multiple audio input events to send a message to the bot. Each audio input event in the stream must have the same audio format.
+ [DTMFInputEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_DTMFInputEvent.html) – Sends a DTMF input to the bot. Each DTMF keypress corresponds to a single event.
+ [PlaybackCompletionEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_PlaybackCompletionEvent.html) – Informs the server that the response to a user's input has been played back to them. You must use a playback completion event if you're sending audio responses to the user. You can't use it if the `disablePlayback` field of the configuration event is `true`.
+ [DisconnectionEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_DisconnectionEvent.html) – Informs the bot that the user has disconnected from the conversation.
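The 320-byte limit on audio input events means captured audio has to be split into chunks before each chunk is wrapped in an `AudioInputEvent`. A minimal sketch of that chunking (the SDK event wrapper itself is omitted; the `AudioChunker` class is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: splits captured audio into chunks of at most 320 bytes,
// one chunk per AudioInputEvent, as required by the streaming API.
public class AudioChunker {
    public static final int MAX_CHUNK_BYTES = 320;

    public static List<byte[]> toChunks(byte[] audio) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < audio.length; offset += MAX_CHUNK_BYTES) {
            int end = Math.min(offset + MAX_CHUNK_BYTES, audio.length);
            chunks.add(Arrays.copyOfRange(audio, offset, end));
        }
        return chunks;
    }
}
```

The `EventWriter` class later in this topic reads the microphone stream in 320-byte buffers, which achieves the same effect without materializing the full recording.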

If you've specified that your users communicate with the bot with text, your application can send the following events to the bot during the conversation:
+ [TextInputEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_TextInputEvent.html) – Text sent from your application to the bot. You can have up to 512 characters in each text input event.
+ [PlaybackCompletionEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_PlaybackCompletionEvent.html) – Informs the server that the response to a user's input has been played back to them. You must use this event if you're playing back audio to the user. You can't use it if the `disablePlayback` field of the configuration event is `true`.
+ [DisconnectionEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_DisconnectionEvent.html) – Informs the bot that the user has disconnected from the conversation.
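A client that accepts free-form user text has to respect the 512-character limit per text input event. One way to do that, sketched below with a hypothetical `TextSplitter` helper (a production client might prefer to split on whitespace rather than mid-word):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: breaks user text into pieces of at most 512 characters
// so that each piece fits in a single TextInputEvent.
public class TextSplitter {
    public static final int MAX_EVENT_CHARS = 512;

    public static List<String> toEvents(String text) {
        List<String> pieces = new ArrayList<>();
        for (int i = 0; i < text.length(); i += MAX_EVENT_CHARS) {
            pieces.add(text.substring(i, Math.min(i + MAX_EVENT_CHARS, text.length())));
        }
        return pieces;
    }
}
```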

You must encode every event that you send to the Amazon Lex V2 bot in the correct format. For more information, see [Event stream encoding](event-stream-encoding.md).

Each event has an event ID. To help troubleshoot any issues that might occur in the stream, assign a unique event ID to each input event. You can then use these IDs to troubleshoot any processing failures with the bot.

Amazon Lex V2 also uses timestamps for each event. You can use these timestamps, in addition to the event IDs, to help troubleshoot any network transmission issues.
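The event IDs and timestamps described above can be generated with a simple counter, mirroring the `eventId` and `clientTimestampMillis` fields that the SDK examples later in this topic set on each event. The `EventIds` class is an illustrative sketch, not an SDK type:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper: generates a unique event ID and a client timestamp for
// each input event, e.g. "AudioEvent-1", "AudioEvent-2", ...
public class EventIds {
    private static final AtomicLong counter = new AtomicLong(0);

    public static String nextId(String eventType) {
        // A monotonically increasing suffix makes each ID unique within the stream.
        return eventType + "-" + counter.incrementAndGet();
    }

    public static long clientTimestampMillis() {
        return System.currentTimeMillis();
    }
}
```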

During the user's conversation with the Amazon Lex V2 bot, the bot can send the following outbound events in response to the user:
+ [IntentResultEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_IntentResultEvent.html) – Contains the intent that Amazon Lex V2 determined from the user utterance. Each intent result event includes:
  + **inputMode** – The type of the user utterance. Valid values are `Speech`, `DTMF`, or `Text`.
  + **interpretations** – The interpretations that Amazon Lex V2 determined from the user utterance.
  + **requestAttributes** – If you haven't modified the request attributes by using a Lambda function, these are the same attributes that were passed at the start of the conversation.
  + **sessionId** – The session identifier used for the conversation.
  + **sessionState** – The state of the user's session with Amazon Lex V2.
+ [TranscriptEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_TranscriptEvent.html) – If the user provides input to your application, this event contains the transcript of the user's utterance to the bot. Your application doesn't receive a `TranscriptEvent` when there's no user input.

  The value of the transcript event sent to your application depends on whether you specified audio (speech and DTMF) or text as the conversation mode:
  + Transcript of speech input – If the user is speaking with the bot, the transcript event is the transcription of the user's audio. It's a transcription of all of the speech from the time the user starts speaking to the time they finish speaking.
  + Transcript of DTMF input – If the user is typing on a keypad, the transcript event contains all of the digits that the user pressed in their input.
  + Transcript of text input – If the user is providing text input, the transcript event contains all of the text in the user's input.
+ [TextResponseEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_TextResponseEvent.html) – Contains the bot response in text format. A text response is returned by default. If you've configured Amazon Lex V2 to return audio responses, this text is used to generate the audio response. Each text response event contains an array of message objects that the bot returns to the user.
+ [AudioResponseEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_AudioResponseEvent.html) – Contains the audio synthesized from the text generated in the `TextResponseEvent`. To receive audio response events, you must configure Amazon Lex V2 to provide an audio response. All audio response events have the same audio format. Each event contains an audio chunk with a maximum size of 100 bytes. Amazon Lex V2 sends your application an empty audio chunk with the `bytes` field set to `null` to indicate the end of the audio response events.
+ [PlaybackInterruptionEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_PlaybackInterruptionEvent.html) – When a user interrupts a response that the bot has sent to your application, Amazon Lex V2 triggers this event to stop the playback of the response.
+ [HeartbeatEvent](https://docs.aws.amazon.com/lexv2/latest/APIReference/API_runtime_HeartbeatEvent.html) – Amazon Lex V2 sends this event back periodically to keep the connection between your application and the bot from timing out.
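Because an audio response arrives as a series of chunks terminated by a `null` `bytes` field, a client typically accumulates the chunks until that terminator appears. A minimal sketch of that accumulation, assuming a hypothetical `AudioPromptAccumulator` class (the `BotResponseHandler` example later in this topic does the same thing with a streaming `InputStream` instead):

```java
import java.io.ByteArrayOutputStream;

// Hypothetical helper: collects the audio chunks of successive
// AudioResponseEvents until a null chunk marks the end of the prompt.
public class AudioPromptAccumulator {
    private ByteArrayOutputStream current = new ByteArrayOutputStream();

    // Feed one event's audio chunk. Returns the complete prompt when the
    // chunk is null (end of the audio response); returns null otherwise.
    public byte[] onAudioChunk(byte[] chunk) {
        if (chunk == null) {
            byte[] prompt = current.toByteArray();
            current = new ByteArrayOutputStream(); // reset for the next prompt
            return prompt;
        }
        current.write(chunk, 0, chunk.length);
        return null;
    }
}
```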

## Timing of audio conversation events with an Amazon Lex V2 bot
<a name="audio-conversation-sequence"></a>

The following diagrams show a streaming audio conversation between a user and an Amazon Lex V2 bot. The application continuously streams audio to the bot, and the bot looks for user input in the audio. In this example, both the user and the bot communicate with speech. Each diagram corresponds to one user utterance and the bot's response to that utterance.

The following diagram shows the beginning of the conversation between the application and the bot. The stream starts at time zero (t0).

![\[Timeline showing audio input events from application and various response events from bot during a conversation.\]](http://docs.aws.amazon.com/zh_cn/lexv2/latest/dg/images/Streaming-Page-1.png)


The following list describes the events in the preceding diagram:
+ t0: The application sends a configuration event to the bot to start the stream.
+ t1: The application streams audio data. This data is broken up into a series of input events from the application.
+ t2: For *user utterance 1*, the bot detects an audio input event when the user starts speaking.
+ t2: While the user is speaking, the bot sends a heartbeat event to maintain the connection. It sends these events intermittently to make sure that the connection doesn't time out.
+ t3: The bot detects the end of the user's utterance.
+ t4: The bot sends a transcript event with the transcription of the user's speech back to the application. This is the beginning of *bot response to user utterance 1*.
+ t5: The bot sends an intent result event to indicate the action that the user wants to perform.
+ t6: The bot starts to provide its response as text in a text response event.
+ t7: The bot sends a series of audio response events to the application to play for the user.
+ t8: The bot sends another heartbeat event to intermittently maintain the connection.

The following diagram is a continuation of the previous one. It shows the application sending a playback completion event to the bot to indicate that it has stopped playing the audio response for the user. The application plays *bot response to user utterance 1* for the user. The user responds to *bot response to user utterance 1* with *user utterance 2*.

![\[Timeline of audio input events from user and response events from bot, showing interaction flow.\]](http://docs.aws.amazon.com/zh_cn/lexv2/latest/dg/images/Streaming-Page-2.png)


The following list describes the events in the preceding diagram:
+ t10: The application sends a playback completion event to indicate that it has finished playing the bot's message for the user.
+ t11: The application sends the user's response back to the bot as *user utterance 2*.
+ t12: For *bot response to user utterance 2*, the bot waits for the user to stop speaking and then starts to provide an audio response.
+ t13: While the bot is sending *bot response to user utterance 2* to the application, it detects the beginning of *user utterance 3*. The bot stops *bot response to user utterance 2* and sends a playback interruption event.
+ t14: The bot sends the playback interruption event to the application to indicate that the user interrupted the prompt.

The following diagram shows *bot response to user utterance 3* and the continuation of the conversation after the bot responds to the user utterance.

![\[Diagram showing events flow between application, bot, and user utterances over time.\]](http://docs.aws.amazon.com/zh_cn/lexv2/latest/dg/images/Streaming-Page-3.png)


# Using the API to start a streaming conversation
<a name="using-streaming-api"></a>

When you start streaming to an Amazon Lex V2 bot, you perform the following tasks:

1. Create an initial connection with the server.

1. Configure the security credentials and the bot details. The bot details include whether the bot accepts DTMF and audio input, or text input.

1. Send events to the server. These events are either text data or audio data from the user.

1. Process events from the server. In this step, you determine whether the bot output is presented to the user as text or as speech.

The following code examples initiate a streaming conversation between an Amazon Lex V2 bot and your local computer. You can modify this code to meet your needs.

The following code is a request example that uses the AWS SDK for Java to start a connection with a bot and configure the bot details and credentials.

```
package com.lex.streaming.sample;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lexruntimev2.LexRuntimeV2AsyncClient;
import software.amazon.awssdk.services.lexruntimev2.model.ConversationMode;
import software.amazon.awssdk.services.lexruntimev2.model.StartConversationRequest;

import java.net.URISyntaxException;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

/**
 * The following code creates a connection with the Amazon Lex bot and configures the bot details and credentials.  
 * Prerequisite: To use this example, you must be familiar with the Reactive streams programming model.
 * For more information, see
 * https://github.com/reactive-streams/reactive-streams-jvm.
 * This example uses AWS SDK for Java for Amazon Lex V2.
 * <p>
 * The following sample application interacts with an Amazon Lex bot with the streaming API. It uses the Audio
 * conversation mode to return audio responses to the user's input.
 * <p>
 * The code in this example accomplishes the following:
 * <p>
 * 1. Configure details about the conversation between the user and the Amazon Lex bot. These details include the conversation mode and the specific bot the user is speaking with.
 * 2. Create an events publisher that passes the audio events to the Amazon Lex bot after you establish the connection. The code we provide in this example tells your computer to pick up the audio from
 * your microphone and send that audio data to Amazon Lex.
 * 3. Create a response handler that handles the audio responses from the Amazon Lex bot and plays back the audio to you.
 */
public class LexBidirectionalStreamingExample {

    public static void main(String[] args) throws URISyntaxException, InterruptedException {
        String botId = "";
        String botAliasId = "";
        String localeId = "";
        String accessKey = "";
        String secretKey = "";
        String sessionId = UUID.randomUUID().toString();
        Region region = Region.region_name; // Choose an AWS Region where the Amazon Lex Streaming API is available.

        AwsCredentialsProvider awsCredentialsProvider = StaticCredentialsProvider
                .create(AwsBasicCredentials.create(accessKey, secretKey));

        // Create a new SDK client. You need to use an asynchronous client.
        System.out.println("step 1: creating a new Lex SDK client");
        LexRuntimeV2AsyncClient lexRuntimeServiceClient = LexRuntimeV2AsyncClient.builder()
                .region(region)
                .credentialsProvider(awsCredentialsProvider)
                .build();


        // Configure the bot, alias and locale that you'll use to have a conversation.
        System.out.println("step 2: configuring bot details");
        StartConversationRequest.Builder startConversationRequestBuilder = StartConversationRequest.builder()
                .botId(botId)
                .botAliasId(botAliasId)
                .localeId(localeId);

        // Configure the conversation mode of the bot. By default, the
        // conversation mode is audio.
        System.out.println("step 3: choosing conversation mode");
        startConversationRequestBuilder = startConversationRequestBuilder.conversationMode(ConversationMode.AUDIO);

        // Assign a unique identifier for the conversation.
        System.out.println("step 4: choosing a unique conversation identifier");
        startConversationRequestBuilder = startConversationRequestBuilder.sessionId(sessionId);

        // Start the initial request.
        StartConversationRequest startConversationRequest = startConversationRequestBuilder.build();

        // Create a stream of audio data to the Amazon Lex bot. The stream will start after the connection is established with the bot.
        EventsPublisher eventsPublisher = new EventsPublisher();

        // Create a class to handle responses from bot. After the server processes the user data you've streamed, the server responds
        // on another stream.
        BotResponseHandler botResponseHandler = new BotResponseHandler(eventsPublisher);

        // Start a connection and pass in the publisher that streams the audio and process the responses from the bot.
        System.out.println("step 5: starting the conversation ...");
        CompletableFuture<Void> conversation = lexRuntimeServiceClient.startConversation(
                startConversationRequest,
                eventsPublisher,
                botResponseHandler);

        // Wait until the conversation finishes. The conversation finishes if the dialog state reaches the "Closed" state.
        // The client stops the connection. If an exception occurs during the conversation, the
        // client sends a disconnection event.
        conversation.whenComplete((result, exception) -> {
            if (exception != null) {
                eventsPublisher.disconnect();
            }
        });

        // The conversation finishes when the dialog state is closed and the last prompt has been played.
        // Sleep in 100-millisecond intervals to keep the JVM from exiting. You won't need this in your
        // production code because your JVM is likely to always run.
        while (!botResponseHandler.isConversationComplete()) {
            Thread.sleep(100);
        }

        // When the conversation finishes, the following code block stops publishing more data and informs the Amazon Lex bot that there is no more data to send.
        if (botResponseHandler.isConversationComplete()) {
            System.out.println("conversation is complete.");
            eventsPublisher.stop();
        }
    }
}
```

The following code is a request example that uses the AWS SDK for Java to send events to a bot. The code in this example sends audio events captured through the microphone on your computer.

```
package com.lex.streaming.sample;

import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import software.amazon.awssdk.services.lexruntimev2.model.StartConversationRequestEventStream;

/**
 * You use the events publisher to send events to the Amazon Lex bot. When you establish a connection, the bot
 * calls the subscribe() method, which enables the events publisher to start sending events to
 * the bot. The bot uses the "request" method of the subscription to ask for more events. For more information about the request method, see https://github.com/reactive-streams/reactive-streams-jvm. 
 */
public class EventsPublisher implements Publisher<StartConversationRequestEventStream> {

    private AudioEventsSubscription audioEventsSubscription;

    @Override
    public void subscribe(Subscriber<? super StartConversationRequestEventStream> subscriber) {
        if (audioEventsSubscription == null) {

            audioEventsSubscription = new AudioEventsSubscription(subscriber);
            subscriber.onSubscribe(audioEventsSubscription);

        } else {
            throw new IllegalStateException("received unexpected subscription request");
        }
    }

    public void disconnect() {
        if (audioEventsSubscription != null) {
            audioEventsSubscription.disconnect();
        }
    }

    public void stop() {
        if (audioEventsSubscription != null) {
            audioEventsSubscription.stop();
        }
    }

    public void playbackFinished() {
        if (audioEventsSubscription != null) {
            audioEventsSubscription.playbackFinished();
        }
    }
}
```

The following code is a request example that uses the AWS SDK for Java to process responses from a bot. The code in this example configures Amazon Lex V2 to play back an audio response to you.

```
package com.lex.streaming.sample;

import javazoom.jl.decoder.JavaLayerException;
import javazoom.jl.player.advanced.AdvancedPlayer;
import javazoom.jl.player.advanced.PlaybackEvent;
import javazoom.jl.player.advanced.PlaybackListener;
import software.amazon.awssdk.core.async.SdkPublisher;
import software.amazon.awssdk.services.lexruntimev2.model.AudioResponseEvent;
import software.amazon.awssdk.services.lexruntimev2.model.DialogActionType;
import software.amazon.awssdk.services.lexruntimev2.model.IntentResultEvent;
import software.amazon.awssdk.services.lexruntimev2.model.PlaybackInterruptionEvent;
import software.amazon.awssdk.services.lexruntimev2.model.StartConversationResponse;
import software.amazon.awssdk.services.lexruntimev2.model.StartConversationResponseEventStream;
import software.amazon.awssdk.services.lexruntimev2.model.StartConversationResponseHandler;
import software.amazon.awssdk.services.lexruntimev2.model.TextResponseEvent;
import software.amazon.awssdk.services.lexruntimev2.model.TranscriptEvent;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.CompletableFuture;

/**
 * The following class is responsible for processing events sent from the Amazon Lex bot. The bot sends multiple audio events,
 * so the following code concatenates those audio events and uses a publicly available Java audio player to play out the message to
 * the user.
 */
public class BotResponseHandler implements StartConversationResponseHandler {

    private final EventsPublisher eventsPublisher;

    private boolean lastBotResponsePlayedBack;
    private boolean isDialogStateClosed;
    private AudioResponse audioResponse;


    public BotResponseHandler(EventsPublisher eventsPublisher) {
        this.eventsPublisher = eventsPublisher;
        this.lastBotResponsePlayedBack = false;// At the start, we have not played back last response from bot.
        this.isDialogStateClosed = false; // At the start, the dialog state is open.
    }

    @Override
    public void responseReceived(StartConversationResponse startConversationResponse) {
        System.out.println("successfully established the connection with server. request id:" + startConversationResponse.responseMetadata().requestId()); // The response contains a 2XX status and the request ID.
    }

    @Override
    public void onEventStream(SdkPublisher<StartConversationResponseEventStream> sdkPublisher) {

        sdkPublisher.subscribe(event -> {
            if (event instanceof PlaybackInterruptionEvent) {
                handle((PlaybackInterruptionEvent) event);
            } else if (event instanceof TranscriptEvent) {
                handle((TranscriptEvent) event);
            } else if (event instanceof IntentResultEvent) {
                handle((IntentResultEvent) event);
            } else if (event instanceof TextResponseEvent) {
                handle((TextResponseEvent) event);
            } else if (event instanceof AudioResponseEvent) {
                handle((AudioResponseEvent) event);
            }
        });
    }

    @Override
    public void exceptionOccurred(Throwable throwable) {
        System.err.println("got an exception:" + throwable);
    }

    @Override
    public void complete() {
        System.out.println("on complete");
    }

    private void handle(PlaybackInterruptionEvent event) {
        System.out.println("Got a PlaybackInterruptionEvent: " + event);
    }

    private void handle(TranscriptEvent event) {
        System.out.println("Got a TranscriptEvent: " + event);
    }


    private void handle(IntentResultEvent event) {
        System.out.println("Got an IntentResultEvent: " + event);
        isDialogStateClosed = DialogActionType.CLOSE.equals(event.sessionState().dialogAction().type());
    }

    private void handle(TextResponseEvent event) {
        System.out.println("Got an TextResponseEvent: " + event);
        event.messages().forEach(message -> {
            System.out.println("Message content type:" + message.contentType());
            System.out.println("Message content:" + message.content());
        });
    }

    private void handle(AudioResponseEvent event) {//Synthesize speech
        // System.out.println("Got a AudioResponseEvent: " + event);
        if (audioResponse == null) {
            audioResponse = new AudioResponse();
            //Start an audio player in a different thread.
            CompletableFuture.runAsync(() -> {
                try {
                    AdvancedPlayer audioPlayer = new AdvancedPlayer(audioResponse);

                    audioPlayer.setPlayBackListener(new PlaybackListener() {
                        @Override
                        public void playbackFinished(PlaybackEvent evt) {
                            super.playbackFinished(evt);

                            // Inform the Amazon Lex bot that the playback has finished.
                            eventsPublisher.playbackFinished();
                            if (isDialogStateClosed) {
                                lastBotResponsePlayedBack = true;
                            }
                        }
                    });
                    audioPlayer.play();
                } catch (JavaLayerException e) {
                    throw new RuntimeException("got an exception when using audio player", e);
                }
            });
        }

        if (event.audioChunk() != null) {
            audioResponse.write(event.audioChunk().asByteArray());
        } else {
            // The audio prompt has ended when the audio response has no
            // audio bytes.
            try {
                audioResponse.close();
                audioResponse = null;  // Prepare for the next audio prompt.
            } catch (IOException e) {
                throw new UncheckedIOException("got an exception when closing the audio response", e);
            }
        }
    }

    // The conversation with the Amazon Lex bot is complete when the bot marks the Dialog as DialogActionType.CLOSE
    // and any prompt playback is finished. For more information, see
    // https://docs.aws.amazon.com/lexv2/latest/dg/API_runtime_DialogAction.html.
    public boolean isConversationComplete() {
        return isDialogStateClosed && lastBotResponsePlayedBack;
    }

}
```

To configure the bot to respond to input events with audio, you must first subscribe to audio events from Amazon Lex V2 and then configure the bot to provide audio responses to the user's input events.

The following AWS SDK for Java example subscribes to audio events from Amazon Lex V2.

```
package com.lex.streaming.sample;

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lexruntimev2.model.AudioInputEvent;
import software.amazon.awssdk.services.lexruntimev2.model.ConfigurationEvent;
import software.amazon.awssdk.services.lexruntimev2.model.DisconnectionEvent;
import software.amazon.awssdk.services.lexruntimev2.model.PlaybackCompletionEvent;
import software.amazon.awssdk.services.lexruntimev2.model.StartConversationRequestEventStream;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class AudioEventsSubscription implements Subscription {
    private static final AudioFormat MIC_FORMAT = new AudioFormat(8000, 16, 1, true, false);
    private static final String AUDIO_CONTENT_TYPE = "audio/lpcm; sample-rate=8000; sample-size-bits=16; channel-count=1; is-big-endian=false";
    //private static final String RESPONSE_TYPE = "audio/pcm; sample-rate=8000";
    private static final String RESPONSE_TYPE = "audio/mpeg";
    private static final int BYTES_IN_AUDIO_CHUNK = 320;
    private static final AtomicLong eventIdGenerator = new AtomicLong(0);

    private final AudioInputStream audioInputStream;
    private final Subscriber<? super StartConversationRequestEventStream> subscriber;
    private final EventWriter eventWriter;
    private CompletableFuture eventWriterFuture;


    public AudioEventsSubscription(Subscriber<? super StartConversationRequestEventStream> subscriber) {
        this.audioInputStream = getMicStream();
        this.subscriber = subscriber;
        this.eventWriter = new EventWriter(subscriber, audioInputStream);
        configureConversation();
    }

    private AudioInputStream getMicStream() {
        try {
            DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, MIC_FORMAT);
            TargetDataLine targetDataLine = (TargetDataLine) AudioSystem.getLine(dataLineInfo);

            targetDataLine.open(MIC_FORMAT);
            targetDataLine.start();

            return new AudioInputStream(targetDataLine);
        } catch (LineUnavailableException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void request(long demand) {
        // If a thread to write events has not been started, start it.
        if (eventWriterFuture == null) {
            eventWriterFuture = CompletableFuture.runAsync(eventWriter);
        }
        eventWriter.addDemand(demand);
    }

    @Override
    public void cancel() {
        subscriber.onError(new RuntimeException("stream was cancelled"));
        try {
            audioInputStream.close();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public void configureConversation() {
        String eventId = "ConfigurationEvent-" + String.valueOf(eventIdGenerator.incrementAndGet());

        ConfigurationEvent configurationEvent = StartConversationRequestEventStream
                .configurationEventBuilder()
                .eventId(eventId)
                .clientTimestampMillis(System.currentTimeMillis())
                .responseContentType(RESPONSE_TYPE)
                .build();

        System.out.println("writing config event");
        eventWriter.writeConfigurationEvent(configurationEvent);
    }

    public void disconnect() {

        String eventId = "DisconnectionEvent-" + String.valueOf(eventIdGenerator.incrementAndGet());

        DisconnectionEvent disconnectionEvent = StartConversationRequestEventStream
                .disconnectionEventBuilder()
                .eventId(eventId)
                .clientTimestampMillis(System.currentTimeMillis())
                .build();

        eventWriter.writeDisconnectEvent(disconnectionEvent);

        try {
            audioInputStream.close();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }

    }
    //Notify the subscriber that we've finished.
    public void stop() {
        subscriber.onComplete();
    }

    public void playbackFinished() {
        String eventId = "PlaybackCompletion-" + String.valueOf(eventIdGenerator.incrementAndGet());

        PlaybackCompletionEvent playbackCompletionEvent = StartConversationRequestEventStream
                .playbackCompletionEventBuilder()
                .eventId(eventId)
                .clientTimestampMillis(System.currentTimeMillis())
                .build();

        eventWriter.writePlaybackFinishedEvent(playbackCompletionEvent);
    }

    private static class EventWriter implements Runnable {
        private final BlockingQueue<StartConversationRequestEventStream> eventQueue;
        private final AudioInputStream audioInputStream;
        private final AtomicLong demand;
        private final Subscriber subscriber;

        private boolean conversationConfigured;

        public EventWriter(Subscriber subscriber, AudioInputStream audioInputStream) {
            this.eventQueue = new LinkedBlockingQueue<>();

            this.demand = new AtomicLong(0);
            this.subscriber = subscriber;
            this.audioInputStream = audioInputStream;
        }

        public void writeConfigurationEvent(ConfigurationEvent configurationEvent) {
            eventQueue.add(configurationEvent);
        }

        public void writeDisconnectEvent(DisconnectionEvent disconnectionEvent) {
            eventQueue.add(disconnectionEvent);
        }

        public void writePlaybackFinishedEvent(PlaybackCompletionEvent playbackCompletionEvent) {
            eventQueue.add(playbackCompletionEvent);
        }

        void addDemand(long l) {
            this.demand.addAndGet(l);
        }

        @Override
        public void run() {
            try {

                while (true) {
                    long currentDemand = demand.get();

                    if (currentDemand > 0) {
                        // Try to read from queue of events.
                        // If nothing is in queue at this point, read the audio events directly from audio stream.
                        for (long i = 0; i < currentDemand; i++) {

                            if (eventQueue.peek() != null) {
                                subscriber.onNext(eventQueue.take());
                                demand.decrementAndGet();
                            } else {
                                writeAudioEvent();
                            }
                        }
                    }
                }
            } catch (InterruptedException e) {
                throw new RuntimeException("interrupted when reading data to be sent to server");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private void writeAudioEvent() {
            byte[] bytes = new byte[BYTES_IN_AUDIO_CHUNK];

            int numBytesRead = 0;
            try {
                numBytesRead = audioInputStream.read(bytes);
                if (numBytesRead != -1) {
                    byte[] byteArrayCopy = Arrays.copyOf(bytes, numBytesRead);

                    String eventId = "AudioEvent-" + String.valueOf(eventIdGenerator.incrementAndGet());

                    AudioInputEvent audioInputEvent = StartConversationRequestEventStream
                            .audioInputEventBuilder()
                            .audioChunk(SdkBytes.fromByteBuffer(ByteBuffer.wrap(byteArrayCopy)))
                            .contentType(AUDIO_CONTENT_TYPE)
                            .clientTimestampMillis(System.currentTimeMillis())
                            .eventId(eventId).build();

                    //System.out.println("sending audio event:" + audioInputEvent);
                    subscriber.onNext(audioInputEvent);
                    demand.decrementAndGet();
                    //System.out.println("sent audio event:" + audioInputEvent);
                } else {
                    subscriber.onComplete();
                    System.out.println("audio stream has ended");
                }

            } catch (IOException e) {
                System.out.println("got an exception when reading from audio stream");
                System.err.println(e);
                subscriber.onError(e);
            }
        }
    }
}
```

The following AWS SDK for Java example configures the Amazon Lex V2 bot to provide audio responses to input events.

```
package com.lex.streaming.sample;

import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.Optional;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class AudioResponse extends InputStream{

    // Used to convert byte, which is signed in Java, to positive integer (unsigned)
    private static final int UNSIGNED_BYTE_MASK = 0xFF;
    private static final long POLL_INTERVAL_MS = 10;

    private final LinkedBlockingQueue<Integer> byteQueue = new LinkedBlockingQueue<>();

    private volatile boolean closed;

    @Override
    public int read() throws IOException {
        try {
            Optional<Integer> maybeInt;
            while (true) {
                maybeInt = Optional.ofNullable(this.byteQueue.poll(POLL_INTERVAL_MS, TimeUnit.MILLISECONDS));

                // If we get an integer from the queue, return it.
                if (maybeInt.isPresent()) {
                    return maybeInt.get();
                }

                // If the stream is closed and there is nothing queued up, return -1.
                if (this.closed) {
                    return -1;
                }
            }
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
    }

    /**
     * Writes data into the stream to be offered on future read() calls.
     */
    public void write(byte[] byteArray) {
        // Don't write into the stream if it is already closed.
        if (this.closed) {
            throw new UncheckedIOException(new IOException("Stream already closed when attempting to write into it."));
        }

        for (byte b : byteArray) {
            this.byteQueue.add(b & UNSIGNED_BYTE_MASK);
        }
    }

    @Override
    public void close() throws IOException {
        this.closed = true;
        super.close();
    }
}
```