

# Using Snap with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-snap"></a>

This document explains how to use Snap's Camera Kit SDK with the IVS Broadcast SDK.

## Web
<a name="integrating-snap-web"></a>

This section assumes you are already familiar with [publishing and subscribing to video using the Web Broadcast SDK](getting-started-pub-sub-web.md).

To integrate Snap's Camera Kit SDK with the IVS Real-Time Streaming Web Broadcast SDK, you need to:

1. Install the Camera Kit SDK and Webpack. (Our example uses Webpack as the bundler, but you can use any bundler of your choice.)

1. Create `index.html`.

1. Add setup elements.

1. Create `index.css`.

1. Display and set up participants.

1. Display connected cameras and microphones.

1. Create a Camera Kit session.

1. Fetch lenses and populate a lens selector.

1. Render the output of the Camera Kit session to a canvas.

1. Create a function to populate the Lens dropdown.

1. Provide Camera Kit with a media source for rendering and publish a `LocalStageStream`.

1. Create `package.json`.

1. Create a Webpack config file.

1. Set up an HTTPS server and test.

Each of these steps is described below.

### Install the Camera Kit SDK and Webpack
<a name="integrating-snap-web-install-camera-kit"></a>

In this example, we use Webpack as our bundler; however, you can use any bundler.

```
npm i @snap/camera-kit webpack webpack-cli
```

### Create index.html
<a name="integrating-snap-web-create-index"></a>

Next, create the HTML boilerplate and import the Web Broadcast SDK as a script tag. In the following code, be sure to replace `<SDK version>` with the Broadcast SDK version you are using.

#### HTML
<a name="integrating-snap-web-create-index-code"></a>

```
<!--
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */
-->
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <title>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</title>

  <!-- Fonts and Styling -->
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,300italic,700,700italic" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.css" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/milligram/1.4.1/milligram.css" />
  <link rel="stylesheet" href="./index.css" />

  <!-- Stages in Broadcast SDK -->
  <script src="https://web-broadcast.live-video.net/<SDK version>/amazon-ivs-web-broadcast.js"></script>
</head>

<body>
  <!-- Introduction -->
  <header>
    <h1>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</h1>

    <p>This sample is used to demonstrate basic HTML / JS usage. <b><a href="https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/multiple-hosts.html">Use the AWS CLI</a></b> to create a <b>Stage</b> and a corresponding <b>ParticipantToken</b>. Multiple participants can load this page and put in their own tokens. You can <b><a href="https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#glossary" target="_blank">read more about stages in our public docs.</a></b></p>
  </header>
  <hr />
  
  <!-- Setup Controls -->
 
  <!-- Display Local Participants -->
  
  <!-- Lens Selector -->

  <!-- Display Remote Participants -->

  <!-- Load All Desired Scripts -->
```

### Add setup elements
<a name="integrating-snap-web-add-setup-elements"></a>

Create the HTML for selecting a camera, microphone, and lens, and for specifying a participant token:

#### HTML
<a name="integrating-snap-web-setup-controls-code"></a>

```
<!-- Setup Controls -->
  <div class="row">
    <div class="column">
      <label for="video-devices">Select Camera</label>
      <select disabled id="video-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="audio-devices">Select Microphone</label>
      <select disabled id="audio-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="token">Participant Token</label>
      <input type="text" id="token" name="token" />
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="join-button">Join Stage</button>
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="leave-button">Leave Stage</button>
    </div>
  </div>
```

Below that, add more HTML to display camera feeds from local and remote participants:

#### HTML
<a name="integrating-snap-web-local-remote-participants-code"></a>

```
 <!-- Local Participant -->
<div class="row local-container">
    <canvas id="canvas"></canvas>

    <div class="column" id="local-media"></div>
    <div class="static-controls hidden" id="local-controls">
      <button class="button" id="mic-control">Mute Mic</button>
      <button class="button" id="camera-control">Mute Camera</button>
    </div>
  </div>

  
  <hr style="margin-top: 5rem"/>
  
  <!-- Remote Participants -->
  <div class="row">
    <div id="remote-media"></div>
  </div>
```

Load additional logic, including the helper methods for setting up the cameras and the bundled JavaScript file. (Later in this section, you will create these JavaScript files and bundle them into a single file, so you can import Camera Kit as a module. The bundled JavaScript file will contain the logic for setting up Camera Kit, applying a lens, and publishing the camera feed with the lens applied to a stage.) Add closing tags for the `body` and `html` elements to complete the creation of `index.html`.

#### HTML
<a name="integrating-snap-web-load-all-scripts-code"></a>

```
<!-- Load all Desired Scripts -->
  <script src="./helpers.js"></script>
  <script src="./media-devices.js"></script>
  <!-- <script type="module" src="./stages-simple.js"></script> -->
  <script src="./dist/bundle.js"></script>
</body>
</html>
```

### Create index.css
<a name="integrating-snap-web-create-index-css"></a>

Create a CSS source file to style the page. We will not go into detail on this code, so we can focus on the logic for managing a stage and integrating with Snap's Camera Kit SDK.

#### CSS
<a name="integrating-snap-web-create-index-css-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

html,
body {
  margin: 2rem;
  box-sizing: border-box;
  height: 100vh;
  max-height: 100vh;
  display: flex;
  flex-direction: column;
}

hr {
  margin: 1rem 0;
}

table {
  display: table;
}

canvas {
  margin-bottom: 1rem;
  background: green;
}

video {
  margin-bottom: 1rem;
  background: black;
  max-width: 100%;
  max-height: 150px;
}

.log {
  flex: none;
  height: 300px;
}

.content {
  flex: 1 0 auto;
}

.button {
  display: block;
  margin: 0 auto;
}

.local-container {
  position: relative;
}

.static-controls {
  position: absolute;
  margin-left: auto;
  margin-right: auto;
  left: 0;
  right: 0;
  bottom: -4rem;
  text-align: center;
}

.static-controls button {
  display: inline-block;
}

.hidden {
  display: none;
}

.participant-container {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-direction: column;
  margin: 1rem;
}

video {
  border: 0.5rem solid #555;
  border-radius: 0.5rem;
}
.placeholder {
  background-color: #333333;
  display: flex;
  text-align: center;
  margin-bottom: 1rem;
}
.placeholder span {
  margin: auto;
  color: white;
}
#local-media {
  display: inline-block;
  width: 100vw;
}

#local-media video {
  max-height: 300px;
}

#remote-media {
  display: flex;
  justify-content: center;
  align-items: center;
  flex-direction: row;
  width: 100%;
}

#lens-selector {
  width: 100%;
  margin-bottom: 1rem;
}
```

### Display and set up participants
<a name="integrating-snap-web-setup-participants"></a>

Next, create `helpers.js`, which contains helper methods for displaying and setting up participants:

#### JavaScript
<a name="integrating-snap-web-setup-participants-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

function setupParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);

  const participantContainerId = isLocal ? 'local' : id;
  const participantContainer = createContainer(participantContainerId);
  const videoEl = createVideoEl(participantContainerId);

  participantContainer.appendChild(videoEl);
  groupContainer.appendChild(participantContainer);

  return videoEl;
}

function teardownParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);
  const participantContainerId = isLocal ? 'local' : id;

  const participantDiv = document.getElementById(
    participantContainerId + '-container'
  );
  if (!participantDiv) {
    return;
  }
  groupContainer.removeChild(participantDiv);
}

function createVideoEl(id) {
  const videoEl = document.createElement('video');
  videoEl.id = id;
  videoEl.autoplay = true;
  videoEl.playsInline = true;
  videoEl.srcObject = new MediaStream();
  return videoEl;
}

function createContainer(id) {
  const participantContainer = document.createElement('div');
  participantContainer.classList = 'participant-container';
  participantContainer.id = id + '-container';

  return participantContainer;
}
```

### Display connected cameras and microphones
<a name="integrating-snap-web-display-cameras-microphones"></a>

Next, create `media-devices.js`, which contains helper methods for displaying the cameras and microphones connected to the device:

#### JavaScript
<a name="integrating-snap-web-display-cameras-microphones-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

/**
 * Returns an initial list of devices populated on the page selects
 */
async function initializeDeviceSelect() {
  const videoSelectEl = document.getElementById('video-devices');
  videoSelectEl.disabled = false;

  const { videoDevices, audioDevices } = await getDevices();
  videoDevices.forEach((device, index) => {
    videoSelectEl.options[index] = new Option(device.label, device.deviceId);
  });

  const audioSelectEl = document.getElementById('audio-devices');

  audioSelectEl.disabled = false;
  audioDevices.forEach((device, index) => {
    audioSelectEl.options[index] = new Option(device.label, device.deviceId);
  });
}

/**
 * Returns all devices available on the current device
 */
async function getDevices() {
  // Prevents issues on Safari/FF so devices are not blank
  await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const devices = await navigator.mediaDevices.enumerateDevices();
  // Get all video devices
  const videoDevices = devices.filter((d) => d.kind === 'videoinput');
  if (!videoDevices.length) {
    console.error('No video devices found.');
  }

  // Get all audio devices
  const audioDevices = devices.filter((d) => d.kind === 'audioinput');
  if (!audioDevices.length) {
    console.error('No audio devices found.');
  }

  return { videoDevices, audioDevices };
}

async function getCamera(deviceId) {
  // Request the camera that is currently selected on the page (or any camera if no deviceId is given)
  return navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
    audio: false,
  });
}

async function getMic(deviceId) {
  return navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
  });
}
```

### Create a Camera Kit session
<a name="integrating-snap-web-camera-kit-session"></a>

Create `stages.js`, which contains the logic for applying a lens to the camera feed and publishing that feed to a stage. We recommend copying and pasting the following code block into `stages.js`. You can then review the code piece by piece to understand what is happening in the sections that follow.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);

  const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};

async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);

  // Create StageStreams for Audio and Video
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };

  stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events
  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
      lensSelector.disabled = false;
    } else {
      controls.classList.add('hidden');
      lensSelector.disabled = true;
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Be sure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

In the first part of this file, we import the Broadcast SDK and the Camera Kit Web SDK and initialize the variables we will use with each SDK. We create a Camera Kit session by calling `createSession` after [bootstrapping the Camera Kit Web SDK](https://kit.snapchat.com/reference/CameraKit/web/0.7.0/index.html#bootstrapping-the-sdk). Note that the canvas element object is passed to the session; this tells Camera Kit to render into that canvas.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code-2"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
```

### Fetch lenses and populate the lens selector
<a name="integrating-snap-web-fetch-apply-lens"></a>

To fetch lenses, replace the lens group ID placeholder with your own lens group ID, which can be found in the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/). Populate the Lens selection dropdown using the `populateLensSelector()` function that we will create later.

#### JavaScript
<a name="integrating-snap-web-fetch-apply-lens-code"></a>

```
session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);
```

### Render the output of the Camera Kit session to a canvas
<a name="integrating-snap-web-render-output-to-canvas"></a>

Use the [captureStream](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream) method to return a `MediaStream` of the canvas contents. The canvas contains a video stream of the camera feed with a lens applied. Also, add event listeners for the buttons that mute the camera and microphone, as well as event listeners for joining and leaving a stage. In the event listener for joining a stage, we pass in the Camera Kit session and the `MediaStream` from the canvas so it can be published to the stage.

#### JavaScript
<a name="integrating-snap-web-render-output-to-canvas-code"></a>

```
const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};
```

### Create a function to populate the Lens dropdown
<a name="integrating-snap-web-populate-lens-dropdown"></a>

Create the following function to populate the **Lens** selector with the lenses fetched earlier. The **Lens** selector is a UI element on the page that lets you select from a list of lenses to apply to the camera feed. Also, create the `handleLensChange` callback function to apply the specified lens when it is selected from the **Lens** dropdown.

#### JavaScript
<a name="integrating-snap-web-populate-lens-dropdown-code"></a>

```
const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};
```

### Provide Camera Kit with a media source for rendering and publish a LocalStageStream
<a name="integrating-snap-web-publish-localstagestream"></a>

To publish a video stream with a lens applied, create a function that calls `setCameraKitSource`, passing in the `MediaStream` captured earlier from the canvas. The `MediaStream` from the canvas doesn't do anything yet, because we haven't incorporated our local camera feed. We can incorporate the local camera feed by calling the `getCamera` helper method and assigning it to `localCamera`. We can then pass our local camera feed (via `localCamera`) and the session object to `setCameraKitSource`. The `setCameraKitSource` function converts our local camera feed into a [media source for CameraKit](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#creating-a-camerakitsource) by calling `createMediaStreamSource`. The media source for `CameraKit` is then [transformed](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#2d-transforms) to mirror the front-facing camera. The lens effect is then applied to the media source and rendered to the output canvas by calling `session.play()`.

Now that the lens is applied to the `MediaStream` captured from the canvas, we can proceed to publish it to the stage. This is done by creating a `LocalStageStream` with the video tracks from the `MediaStream`. An instance of `LocalStageStream` can then be passed in to the `StageStrategy` to be published.

#### JavaScript
<a name="integrating-snap-web-publish-localstagestream-code"></a>

```
async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);
  // Create StageStreams for Audio and Video
  // cameraStageStream = new LocalStageStream(localCamera.getVideoTracks()[0]);
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };
```

The rest of the code below creates and manages our stage:

#### JavaScript
<a name="integrating-snap-web-create-manage-stage-code"></a>

```
stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events

  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
    } else {
      controls.classList.add('hidden');
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Be sure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

### Create package.json
<a name="integrating-snap-web-package-json"></a>

Create `package.json` and add the following JSON configuration. This file defines our dependencies and includes a script command for bundling our code.

#### JSON Configuration
<a name="integrating-snap-web-package-json-code"></a>

```
{
  "dependencies": {
    "@snap/camera-kit": "^0.10.0"
  },
  "name": "ivs-stages-with-snap-camerakit",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "build": "webpack"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "devDependencies": {
    "webpack": "^5.95.0",
    "webpack-cli": "^5.1.4"
  }
}
```

### Create the Webpack config file
<a name="integrating-snap-web-webpack-config"></a>

Create `webpack.config.js` and add the following code. This bundles the code we have created so far, so we can use the import statement to use Camera Kit.

#### JavaScript
<a name="integrating-snap-web-webpack-config-code"></a>

```
const path = require('path');
module.exports = {
  entry: ['./stages.js'],
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
};
```

Finally, run `npm run build` to bundle your JavaScript as defined in the Webpack config file. For testing purposes, you can then serve the HTML and JavaScript from your local computer. In this example, we use Python's `http.server` module.

### Set up an HTTPS server and test
<a name="integrating-snap-web-https-server-test"></a>

To test our code, we need to set up an HTTPS server. Using an HTTPS server for local development and testing of your web application's integration with the Snap Camera Kit SDK helps avoid CORS (Cross-Origin Resource Sharing) issues.

Open a terminal and navigate to the directory where you created all the code up to this point. Run the following command to generate a self-signed SSL/TLS certificate and private key:

```
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

This creates two files: `key.pem` (the private key) and `cert.pem` (the self-signed certificate). Create a new Python file named `https_server.py` and add the following code:

#### Python
<a name="integrating-snap-web-https-server-test-code"></a>

```
import http.server
import ssl

# Set the directory to serve files from
DIRECTORY = '.'

# Create the HTTPS server
server_address = ('', 4443)
httpd = http.server.HTTPServer(
    server_address, http.server.SimpleHTTPRequestHandler)

# Wrap the socket with SSL/TLS
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('cert.pem', 'key.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print(f'Starting HTTPS server on https://localhost:4443, serving {DIRECTORY}')
httpd.serve_forever()
```

Open a terminal, navigate to the directory where you created the `https_server.py` file, and run the following command:

```
python3 https_server.py
```

This starts an HTTPS server at https://localhost:4443 that serves files from the current directory. Make sure the `cert.pem` and `key.pem` files are in the same directory as the `https_server.py` file.

Open your browser and navigate to https://localhost:4443. Because this is a self-signed SSL/TLS certificate, your web browser will not trust it, so you will get a warning. Since this is only for testing purposes, you can bypass the warning. You should then see the AR effect of the Snap Lens you specified earlier applied to your camera feed on screen.

Note that this setup, which uses Python's built-in `http.server` and `ssl` modules, is fine for local development and testing purposes, but it is not recommended for a production environment. The self-signed SSL/TLS certificate used in this setup is not trusted by web browsers and other clients, which means users will encounter security warnings when accessing the server. Also, although we use Python's built-in `http.server` and `ssl` modules in this example, you can choose to use another HTTPS server solution.
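
For example, if you have Node.js installed, a minimal sketch like the following serves the same files over HTTPS using only built-in Node.js modules. The file name `https_server.js` and the port are our own choices here, and the sketch assumes `cert.pem` and `key.pem` were generated as shown above and sit in the same directory; like the Python version, it is intended only for local testing.

```
// https_server.js - minimal HTTPS static file server for local testing only
const fs = require('fs');
const path = require('path');
const https = require('https');

const PORT = 4443;
const MIME_TYPES = {
  '.html': 'text/html',
  '.js': 'text/javascript',
  '.css': 'text/css',
  '.json': 'application/json',
};

const server = https.createServer(
  {
    key: fs.readFileSync('key.pem'),
    cert: fs.readFileSync('cert.pem'),
  },
  (req, res) => {
    // Serve files from the current directory; default to index.html
    const urlPath = req.url === '/' ? '/index.html' : req.url.split('?')[0];
    const filePath = path.join(process.cwd(), urlPath);

    fs.readFile(filePath, (err, data) => {
      if (err) {
        res.writeHead(404);
        res.end('Not found');
        return;
      }
      const contentType =
        MIME_TYPES[path.extname(filePath)] || 'application/octet-stream';
      res.writeHead(200, { 'Content-Type': contentType });
      res.end(data);
    });
  }
);

server.listen(PORT, () => {
  console.log(`Starting HTTPS server on https://localhost:${PORT}`);
});
```

Run it with `node https_server.js` from the project directory, then browse to https://localhost:4443 as described above.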

## Android
<a name="integrating-snap-android"></a>

To integrate Snap's Camera Kit SDK with the IVS Android Broadcast SDK, you must install the Camera Kit SDK, initialize a Camera Kit session, apply a lens, and feed the Camera Kit session's output to a custom image input source.

To install the Camera Kit SDK, add the following to your module's `build.gradle` file. Replace `$cameraKitVersion` with the [latest Camera Kit SDK version](https://docs.snap.com/camera-kit/integrate-sdk/mobile/changelog-mobile).

### Java
<a name="integrating-snap-android-install-camerakit-sdk-code"></a>

```
implementation "com.snap.camerakit:camerakit:$cameraKitVersion"
```

Initialize and get a `cameraKitSession`. Camera Kit also provides a convenient wrapper for Android's [CameraX](https://developer.android.com/media/camera/camerax) API, so you don't have to write complicated logic to use CameraX with Camera Kit. You can use the `CameraXImageProcessorSource` object as a [Source](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-source/index.html) for the [ImageProcessor](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-image-processor/index.html), which lets you start streaming camera-preview frames.

### Java
<a name="integrating-snap-android-initialize-camerakitsession-code"></a>

```
 protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.activity_main);

        // Camera Kit provides an implementation of ImageProcessor that is backed by the CameraX library:
        // https://developer.android.com/training/camerax
        CameraXImageProcessorSource imageProcessorSource = new CameraXImageProcessorSource( 
            this /*context*/, this /*lifecycleOwner*/
        );
        imageProcessorSource.startPreview(true /*cameraFacingFront*/);

        cameraKitSession = Sessions.newBuilder(this)
                .imageProcessorSource(imageProcessorSource)
                .attachTo(findViewById(R.id.camerakit_stub))
                .build();
    }
```

### Fetch and apply a lens
<a name="integrating-snap-android-fetch-apply-lenses"></a>

You can configure lenses and their ordering in the carousel in the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/):

#### Java
<a name="integrating-snap-android-configure-lenses-code"></a>

```
// Fetch lenses from repository and apply them
 // Replace LENS_GROUP_ID with Lens Group ID from https://camera-kit.snapchat.com
cameraKitSession.getLenses().getRepository().get(new Available(LENS_GROUP_ID), available -> {
     Log.d(TAG, "Available lenses: " + available);
     Lenses.whenHasFirst(available, lens -> cameraKitSession.getLenses().getProcessor().apply(lens, result -> {
          Log.d(TAG,  "Apply lens [" + lens + "] success: " + result);
      }));
});
```

To broadcast, send the processed frames to the underlying `Surface` of a custom image source. Use a `DeviceDiscovery` object and create a `CustomImageSource` to return a `SurfaceSource`. You can then render the output of a `CameraKit` session to the underlying `Surface` provided by the `SurfaceSource`.

#### Kotlin
<a name="integrating-snap-android-broadcast-code"></a>

```
val publishStreams = ArrayList<LocalStageStream>()

val deviceDiscovery = DeviceDiscovery(applicationContext)
// createImageInputSource returns a SurfaceSource backed by an underlying Surface
val customSource = deviceDiscovery.createImageInputSource(BroadcastConfiguration.Vec2(720f, 1280f))

// Render the output of the Camera Kit session to the Surface of the custom image source
cameraKitSession.processor.connectOutput(outputFrom(customSource.inputSurface))

// After rendering the output from a Camera Kit session to the Surface, you can
// then return it as a LocalStageStream to be published by the Broadcast SDK
val customStream: ImageLocalStageStream = ImageLocalStageStream(customSource)
publishStreams.add(customStream)

override fun stageStreamsToPublishForParticipant(stage: Stage, participantInfo: ParticipantInfo): List<LocalStageStream> = publishStreams
```