

# Stream live media (SDKs)
<a name="webrtc-sdks"></a>

In Amazon Kinesis Video Streams WebRTC, peers are devices that are configured for real-time, two-way streaming via a signaling channel. Amazon Kinesis Video Streams with WebRTC SDKs are easy-to-use software libraries that you can download and install on the devices and application clients that you want to configure as peers over a given signaling channel.

Amazon Kinesis Video Streams with WebRTC includes the following SDKs:
+ [Amazon Kinesis Video Streams with WebRTC SDK in C for embedded devices](kvswebrtc-sdk-c.md)
+ [Amazon Kinesis Video Streams with WebRTC SDK in JavaScript for web applications](kvswebrtc-sdk-js.md)
+ [Amazon Kinesis Video Streams WebRTC SDK for Android](kvswebrtc-sdk-android.md)
+ [Amazon Kinesis Video Streams WebRTC SDK for iOS](kvswebrtc-sdk-ios.md)

Each SDK includes corresponding samples and step-by-step instructions that can help you build and run those applications. You can use these samples for low-latency, live, two-way audio and video streaming and data exchange between any combination of web, Android, or iOS applications or embedded devices. For example, you can stream live audio and video from an embedded camera device to Android or web applications, or between two Android applications.

# Amazon Kinesis Video Streams with WebRTC SDK in C for embedded devices
<a name="kvswebrtc-sdk-c"></a>

The following step-by-step instructions describe how to download, build, and run the Kinesis Video Streams with WebRTC SDK in C for embedded devices and its corresponding samples.

The following codecs are supported:
+ **Audio:**
  + G.711 A-Law
  + G.711 U-Law
  + Opus
+ **Video:**
  + H.264
  + H.265
  + VP8

## Download the SDK
<a name="gs-download-sdk"></a>

To download the Kinesis Video Streams with WebRTC SDK in C for embedded devices, run the following command:

```
$ git clone https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c.git
```

## Build the SDK
<a name="gs-build-sdk"></a>

**Important**  
Before you complete these steps on macOS, depending on which macOS version you have, you must run `xcode-select --install` to download the package with the command line tools and headers. Then open `/Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg` and follow the installer to install the command line tools and headers. You only need to do this once, before invoking `cmake`. If you already have the command line tools and headers installed, you don't need to run this command again.

Complete the following steps:


1. Install CMake and other dependencies:
   + On macOS, run `brew install cmake pkg-config srtp`
   + On Ubuntu, run `sudo apt-get install pkg-config cmake libcap2 libcap-dev`

1. Obtain the access key and the secret key of the AWS account that you want to use for this demo.

1. Run the following command to create a `build` directory in your downloaded WebRTC C SDK, and execute `cmake` from it:

   ```
   $ mkdir -p amazon-kinesis-video-streams-webrtc-sdk-c/build; cd amazon-kinesis-video-streams-webrtc-sdk-c/build; cmake ..
   ```

1. Now that you're in the `build` directory you created in the previous step, run `make` to build the WebRTC C SDK and its provided samples.
**Note**  
The `kvsWebrtcClientMasterGstSample` sample is NOT built if GStreamer isn't installed on the system. To make sure it is built on macOS, run: `brew install gstreamer gst-plugins-base gst-plugins-good`

## Run the SDK samples
<a name="gs-run-c-sample"></a>

After you complete the procedure above, you end up with the following sample applications in your `build` directory:
+ `kvsWebrtcClientMaster` - This application sends sample H264/Opus frames (path: `/samples/h264SampleFrames` and `/samples/opusSampleFrames`) via the signaling channel. It also accepts incoming audio, if enabled in the browser. When audio is enabled in the browser, it prints the metadata of the received audio packets in your terminal.
+ `kvsWebrtcClientViewer` - This application accepts sample H264/Opus frames and prints them out. 
+ `kvsWebrtcClientMasterGstSample` - This application sends sample H264/Opus frames from a GStreamer pipeline.

To run any of these samples, complete the following steps:

1. Set up your environment with your AWS account credentials:

   ```
   export AWS_ACCESS_KEY_ID=YourAccessKey
   export AWS_SECRET_ACCESS_KEY=YourSecretKey
   export AWS_DEFAULT_REGION=YourAWSRegion
   ```

   If you're using temporary AWS credentials, also export your session token:

   ```
   export AWS_SESSION_TOKEN=YourSessionToken
   ```

   If you have a custom CA certificate path to set, you can set it using:

   ```
   export AWS_KVS_CACERT_PATH=../certs/cert.pem
   ```
**Note**  
By default, the SSL CA certificate path is set to `../certs/cert.pem`, which points to the file in this repository in [GitHub](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/certs/cert.pem).

1. Run any of the sample applications by passing it the name that you want to give your signaling channel. The application creates the signaling channel using the name that you provide. For example, to create a signaling channel called `myChannel` and start sending sample H264/Opus frames via this channel, run the following command:

   ```
   ./kvsWebrtcClientMaster myChannel
   ```

   When the command line application prints `Connection established`, you can proceed to the next step.

1. Now that your signaling channel is created and the connected master is streaming media to it, you can view this stream. For example, you can view this live stream in a web application. To do so, open the WebRTC SDK Test Page using the steps in [Use the sample application](kvswebrtc-sdk-js.md#build-sdk-js) and set the following values using the same AWS credentials and the same signaling channel that you specified for the master above:
   + Access key ID
   + Secret access key
   + Signaling channel name
   + Client ID (optional)

   Choose **Start viewer** to start live video streaming of the sample H264/Opus frames.

## Video tutorial
<a name="sdk-c-video"></a>

This video demonstrates how to connect your camera and get started with Amazon Kinesis Video Streams with WebRTC.




# Amazon Kinesis Video Streams with WebRTC SDK in JavaScript for web applications
<a name="kvswebrtc-sdk-js"></a>

You can find the Kinesis Video Streams with WebRTC SDK in JavaScript for web applications and its corresponding samples in [GitHub](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js).

**Topics**
+ [Install the SDK](#install-sdk-js)
+ [WebRTC JavaScript SDK documentation](#docs-sdk-js)
+ [Use the sample application](#build-sdk-js)
+ [Edit the sample application](#run-sdk-js)

## Install the SDK
<a name="install-sdk-js"></a>

Whether and how you install the Kinesis Video Streams with WebRTC SDK in JavaScript depends on whether the code executes in `Node.js` modules or browser scripts.

------
#### [ NodeJS module ]

The preferred way to install the Kinesis Video Streams with WebRTC SDK in JavaScript for Node.js is to use [npm, the Node.js package manager](https://www.npmjs.com/).

The package is hosted at [https://www.npmjs.com/package/amazon-kinesis-video-streams-webrtc](https://www.npmjs.com/package/amazon-kinesis-video-streams-webrtc?activeTab=readme).

To install this SDK in your `Node.js` project, use the terminal to navigate to the same directory as your project's `package.json` file, and type the following:

```
npm install amazon-kinesis-video-streams-webrtc
```

You can import the SDK classes like typical Node.js modules:

```
// JavaScript
const SignalingClient = require('amazon-kinesis-video-streams-webrtc').SignalingClient;
// TypeScript
import { SignalingClient } from 'amazon-kinesis-video-streams-webrtc';
```

------
#### [ Browser ]

You don't have to install the SDK to use it in browser scripts. You can load the hosted SDK package directly from AWS with a script in your HTML pages.

To use the SDK in the browser, add the following script element to your HTML pages:

```
<script src="https://unpkg.com/amazon-kinesis-video-streams-webrtc/dist/kvs-webrtc.min.js"></script>
```

After the SDK loads in your page, the SDK is available from the global variable `KVSWebRTC` (or `window.KVSWebRTC`). 

For example, `window.KVSWebRTC.SignalingClient`.
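As an illustration, here is a hedged sketch of constructing a viewer-side signaling client from the global. The channel ARN, endpoint, and credential values below are placeholders (not real resources), and the option names follow the SDK's readme:

```javascript
// A minimal sketch, assuming kvs-webrtc.min.js has already loaded in the page.
// All ARN, endpoint, and credential values here are placeholders.
const viewerOptions = {
    channelARN: 'arn:aws:kinesisvideo:us-west-2:111122223333:channel/myChannel/1234567890123',
    channelEndpoint: 'wss://v-EXAMPLE.kinesisvideo.us-west-2.amazonaws.com',
    clientId: 'myViewer',
    region: 'us-west-2',
    credentials: {
        accessKeyId: 'YourAccessKey',
        secretAccessKey: 'YourSecretKey',
    },
};

// In a page that loaded the script, window.KVSWebRTC is defined:
if (typeof window !== 'undefined' && window.KVSWebRTC) {
    const { SignalingClient, Role } = window.KVSWebRTC;
    const signalingClient = new SignalingClient({ ...viewerOptions, role: Role.VIEWER });
    signalingClient.on('open', () => {
        // Create an RTCPeerConnection and send an SDP offer here.
    });
    signalingClient.open();
}
```

In a Node.js project, the same options would be passed to the `SignalingClient` imported from the package instead of the global.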

------

## WebRTC JavaScript SDK documentation
<a name="docs-sdk-js"></a>

The documentation for the SDK methods is in the GitHub readme, under [Documentation](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js?tab=readme-ov-file#documentation).

The [Usage](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js?tab=readme-ov-file#usage) section contains additional information about integrating this SDK with the AWS SDK for JavaScript to build a web-based viewer application.

See the `examples` directory for an example of a complete application, including both a master and viewer role.
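For example, before constructing a `SignalingClient`, a viewer application typically resolves the channel's signaling endpoints with the AWS SDK for JavaScript. The following is a hedged sketch of that step; the helper name `getViewerEndpoints` is ours (not part of either SDK), and it assumes an AWS SDK for JavaScript v2 `KinesisVideo` client with `.promise()`-style calls:

```javascript
// Resolve the WSS/HTTPS signaling endpoints a viewer needs before creating a
// SignalingClient. `kinesisVideoClient` is assumed to be an AWS SDK for
// JavaScript v2 KinesisVideo client; the helper name is hypothetical.
async function getViewerEndpoints(kinesisVideoClient, channelName) {
    // Look up the channel's ARN from its name.
    const { ChannelInfo } = await kinesisVideoClient
        .describeSignalingChannel({ ChannelName: channelName })
        .promise();

    // Request viewer-role endpoints over both supported protocols.
    const { ResourceEndpointList } = await kinesisVideoClient
        .getSignalingChannelEndpoint({
            ChannelARN: ChannelInfo.ChannelARN,
            SingleMasterChannelEndpointConfiguration: {
                Protocols: ['WSS', 'HTTPS'],
                Role: 'VIEWER',
            },
        })
        .promise();

    // Map protocol name to endpoint, e.g. { WSS: 'wss://...', HTTPS: 'https://...' }.
    const endpoints = {};
    for (const entry of ResourceEndpointList) {
        endpoints[entry.Protocol] = entry.ResourceEndpoint;
    }
    return { channelARN: ChannelInfo.ChannelARN, endpoints };
}
```

The returned `WSS` endpoint and channel ARN are then passed to the `SignalingClient` constructor, and the `HTTPS` endpoint to the Kinesis Video Signaling client for ICE server configuration.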

## Use the sample application
<a name="build-sdk-js"></a>

Kinesis Video Streams with WebRTC also hosts a sample application that you can use to either create a new signaling channel or connect to an existing channel and use it as a master or viewer.

The Kinesis Video Streams with WebRTC sample application is hosted on [GitHub Pages](https://awslabs.github.io/amazon-kinesis-video-streams-webrtc-sdk-js/examples/index.html).

The code for the sample application is in the `examples` directory.

**Topics**
+ [Stream peer-to-peer from the sample application to the AWS Management Console](#sdk-js-stream-console)
+ [Stream peer-to-peer from the sample application to the sample application](#sdk-js-stream-test)
+ [Stream peer-to-peer with WebRTC Ingestion from the sample page to the sample page](#sdk-js-stream-ingestion)

### Stream peer-to-peer from the sample application to the AWS Management Console
<a name="sdk-js-stream-console"></a>



1. Open the [Kinesis Video Streams with WebRTC sample application](https://awslabs.github.io/amazon-kinesis-video-streams-webrtc-sdk-js/examples/index.html) and complete the following:
   + AWS Region. For example, `us-west-2`. 
   + The AWS access key and the secret key for your IAM user or role. Leave the session token blank if you are using long-term AWS credentials.
   + The name of the signaling channel to which you want to connect.

     If you want to connect to a new signaling channel, choose **Create Channel** to create a signaling channel with the value provided in the box. 
**Note**  
Your signaling channel name must be unique for the current account and Region. You can use letters, numbers, underscores (_), and hyphens (-), but not spaces.
   + Whether you want to send audio, video, or both.
   + WebRTC Ingestion and Storage. Expand the node and choose one of the following: 
     + Select **Automatically determine ingestion mode**. 
     + Make sure **Automatically determine ingestion mode** isn't selected and set the manual override to **OFF**. 
**Note**  
**Automatically determine ingestion mode** has the application call the [DescribeMediaStorageConfiguration](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_DescribeMediaStorageConfiguration.html) API to determine which mode to run in (peer-to-peer or WebRTC ingestion). This additional API call adds a small amount of startup time.  
If you know ahead of time which mode this signaling channel runs in, use the manual override to skip this API call.
   + ICE candidate generation. Leave `STUN`/`TURN` selected and leave `Trickle ICE` enabled.

1. Choose **Start Master** to connect to the signaling channel.

   Allow access to your camera and/or microphone, if needed.

1. Open the [Kinesis Video Streams console](https://console.aws.amazon.com/kinesisvideo/home/) in the AWS Management Console.

   Make sure the correct region is selected.

1. In the left navigation, select **[Signaling channels](https://console.aws.amazon.com/kinesisvideo/home#/signalingChannels/)**.

   Select the name of the signaling channel that you created above. Use the search bar, if needed.

1. Expand the **Media playback viewer** section.

1. Choose the **play** button on the video player. This joins the WebRTC session as a `viewer`.

   The media that is being sent on the demo page should display in the AWS Management Console.
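The automatic mode detection described in the note above can be sketched as follows. This is a hypothetical helper (the name `determineIngestionMode` and the returned mode strings are ours), assuming an AWS SDK for JavaScript v2 `KinesisVideo` client:

```javascript
// Decide which mode to run in by calling DescribeMediaStorageConfiguration,
// as the sample page's "Automatically determine ingestion mode" option does.
// `kinesisVideoClient` is assumed to be an AWS SDK for JavaScript v2
// KinesisVideo client; the helper name and mode strings are hypothetical.
async function determineIngestionMode(kinesisVideoClient, channelARN) {
    const { MediaStorageConfiguration } = await kinesisVideoClient
        .describeMediaStorageConfiguration({ ChannelARN: channelARN })
        .promise();

    // Status is 'ENABLED' when the channel is configured for WebRTC ingestion.
    return MediaStorageConfiguration.Status === 'ENABLED'
        ? 'WEBRTC_INGESTION'
        : 'PEER_TO_PEER';
}
```

Skipping this call via the manual override saves one API round trip at startup when the mode is already known.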

### Stream peer-to-peer from the sample application to the sample application
<a name="sdk-js-stream-test"></a>

1. Open the [Kinesis Video Streams with WebRTC sample application](https://awslabs.github.io/amazon-kinesis-video-streams-webrtc-sdk-js/examples/index.html) and complete the following information:
   + AWS Region. For example, `us-west-2`. 
   + The AWS access key and the secret key for your IAM user or role. Leave the session token blank if you are using long-term AWS credentials.
   + The name of the signaling channel to which you want to connect.

     If you want to connect to a new signaling channel, choose **Create Channel** to create a signaling channel with the value provided in the box. 
**Note**  
Your signaling channel name must be unique for the current account and Region. You can use letters, numbers, underscores (_), and hyphens (-), but not spaces.
   + Whether you want to send audio, video, or both.
   + WebRTC Ingestion and Storage. Expand the node and choose one of the following: 
     + Select **Automatically determine ingestion mode**. 
     + Make sure **Automatically determine ingestion mode** isn't selected and set the manual override to **OFF**. 
**Note**  
**Automatically determine ingestion mode** has the application call the [DescribeMediaStorageConfiguration](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_DescribeMediaStorageConfiguration.html) API to determine which mode to run in (peer-to-peer or WebRTC ingestion). This additional API call adds a small amount of startup time.  
If you know ahead of time which mode this signaling channel runs in, use the manual override to skip this API call.
   + ICE candidate generation. Leave `STUN`/`TURN` selected and leave `Trickle ICE` enabled.

1. Choose **Start Master** to connect to the signaling channel as the `master` role.

   Allow access to your camera and/or microphone, if needed.

1. Open another browser tab and open the [Kinesis Video Streams with WebRTC sample application](https://awslabs.github.io/amazon-kinesis-video-streams-webrtc-sdk-js/examples/index.html). All of the information from the previous run should load.

1. Scroll down and choose **Start Viewer** to connect to the signaling channel as the `viewer` role.

   You should see the media being exchanged between the `master` and `viewer`.

### Stream peer-to-peer with WebRTC Ingestion from the sample page to the sample page
<a name="sdk-js-stream-ingestion"></a>

1. Follow [Ingest media from a browser](ingest-media.md#ingest-browser) to connect a master participant and make sure it is connected to the storage session.

1. Follow [Add viewers to the ingestion session](ingest-media.md#ingest-add-viewers) to add viewer participants.

   Viewer participants will connect to and receive media from the storage session. They can send optional audio back to the storage session.

   The storage session handles mixing the media received from master and viewer participants and sending it to the appropriate destinations.

1. You can view and consume ingested media through [Kinesis Video Streams playback](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-playback.html).

## Edit the sample application
<a name="run-sdk-js"></a>

To edit the SDK and sample application for development purposes, follow the instructions below.

**Prerequisite**

Node.js version 16 or later

**Note**  
We recommend downloading the latest long term support (LTS) version from [https://nodejs.org/en/download](https://nodejs.org/en/download).

**Edit the sample application**

1. Download the Kinesis Video Streams with WebRTC SDK in JavaScript.

   Type the following in the terminal:

   ```
   git clone https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js.git
   ```

1. Navigate to the directory with the `package.json` file. The file is located in the repository's root directory.

   Type the following in the terminal:

   ```
   cd amazon-kinesis-video-streams-webrtc-sdk-js
   ```

1. Install dependencies. 

   Type the following [npm CLI](https://docs.npmjs.com/cli/v10/commands) command in the terminal:

   ```
   npm install
   ```

1. Start the web server to start serving web pages. 

   Type the following [npm CLI](https://docs.npmjs.com/cli/v10/commands) command in the terminal:

   ```
   npm run develop
   ```

1. In your browser, visit [http://localhost:3001/](http://localhost:3001/).

   You can make edits to the web page by editing the files in the `examples` directory.

# Amazon Kinesis Video Streams WebRTC SDK for Android
<a name="kvswebrtc-sdk-android"></a>

The following step-by-step instructions describe how to download, build, and run the Kinesis Video Streams with WebRTC SDK for Android and its corresponding samples.

**Note**  
Amazon Kinesis Video Streams doesn't support IPv6 addresses on Android. For more information, see [how to disable IPv6 on your Android device](https://www.cactusvpn.com/tutorials/how-to-disable-ipv6-on-android/).

## Download the SDK
<a name="download-sdk-android"></a>

To download the WebRTC SDK for Android, run the following command:

```
$ git clone https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-android.git
```

## Build the SDK
<a name="build-sdk-android"></a>

To build the WebRTC SDK in Android, complete the following steps:

1. Import the Android WebRTC SDK into the Android Studio integrated development environment (IDE) by opening `amazon-kinesis-video-streams-webrtc-sdk-android/build.gradle` with **Open as Project**. 

1. If you open the project for the first time, it automatically syncs. If not, initiate a sync. If you see a build error, install any required SDKs by choosing **Install missing SDK package(s)**, then choose **Accept** and complete the installation.

1. Configure Amazon Cognito (user pool and identity pool) settings. For detailed steps, see [Configure Amazon Cognito for the SDK](#build-sdk-android-cognito). This generates the authentication and authorization settings required to build the Android WebRTC SDK.

1. In your Android IDE, open `awsconfiguration.json` (from `src/main/res/raw/`). The file looks like the following:

   ```
   {
     "Version": "1.0",
     "CredentialsProvider": {
       "CognitoIdentity": {
         "Default": {
           "PoolId": "REPLACE_ME",
           "Region": "REPLACE_ME"
         }
       }
     },
     "IdentityManager": {
       "Default": {}
     },
     "CognitoUserPool": {
       "Default": {
         "AppClientSecret": "REPLACE_ME",
         "AppClientId": "REPLACE_ME",
         "PoolId": "REPLACE_ME",
         "Region": "REPLACE_ME"
       }
     }
   }
   ```

   Update `awsconfiguration.json` with the values generated by running the steps in [Configure Amazon Cognito for the SDK](#build-sdk-android-cognito).

1. Make sure your Android device is connected to the computer where you're running the Android IDE. In the Android IDE, select the connected device and then build and run the WebRTC Android SDK.

   This step installs an app called `AWSKinesisVideoWebRTCDemoApp` on your Android device. Using this app, you can verify live WebRTC audio/video streaming between mobile, web, and IoT device clients.

## Run the sample application
<a name="run-sdk-android"></a>


Complete the following steps:

1. On your Android device, open **AWSKinesisVideoWebRTCDemoApp** and log in using an existing Amazon Cognito account, or create a new one.

1. In **AWSKinesisVideoWebRTCDemoApp**, navigate to the **Channel Configuration** page and either create a new signaling channel or choose an existing one.
**Note**  
Currently, using the sample application in this SDK, you can run only one signaling channel at a time in **AWSKinesisVideoWebRTCDemoApp**.

1. Optional: choose a unique **Client Id** if you want to connect to this channel as a viewer. The client ID is required only if multiple viewers are connected to a channel. It helps the channel's master identify the respective viewers.

1. Choose the AWS Region and whether you want to send audio or video data, or both.

1. To verify peer-to-peer streaming, do any of the following:
**Note**  
Ensure that you specify the same signaling channel name, AWS Region, viewer ID, and AWS account ID on all clients that you're using in this demo.
   + Peer-to-peer streaming between two Android devices: master and viewer
     + Using procedures above, download, build, and run the Android WebRTC SDK on two Android devices. 
     + Open **AWSKinesisVideoWebRTCDemoApp** on one Android device in master mode (choose **START MASTER**) to start a new session (signaling channel).
**Note**  
Currently, there can only be one master for any given signaling channel.
     + Open **AWSKinesisVideoWebRTCDemoApp** on your second Android device in viewer mode to connect to the signaling channel (session) started in the step above (choose **START VIEWER**).

       Verify that the viewer can see the master's audio/video data.
   + Peer-to-peer streaming between the embedded SDK master and an Android device viewer
     + Download, build, and run the [Amazon Kinesis Video Streams with WebRTC SDK in C for embedded devices](kvswebrtc-sdk-c.md) in master mode on a camera device. 
     + Using procedures above, download, build, and run the Android WebRTC SDK on an Android device. Open **AWSKinesisVideoWebRTCDemoApp** on this Android device in viewer mode and verify that the viewer can see the embedded SDK master's audio/video data.
   + Peer-to-peer streaming between Android device as master and web browser as viewer
     + Using procedures above, download, build, and run the Android WebRTC SDK on an Android device. Open **AWSKinesisVideoWebRTCDemoApp** on this Android device in master mode (choose **START MASTER**) to start a new session (signaling channel).
     + Download, build, and run the [Amazon Kinesis Video Streams with WebRTC SDK in JavaScript for web applications](kvswebrtc-sdk-js.md) as viewer and verify that the viewer can see the Android master's audio/video. 

## Configure Amazon Cognito for the SDK
<a name="build-sdk-android-cognito"></a>

### Prerequisites
<a name="androidsdk-prerequisites"></a>
+ We recommend [Android Studio](https://developer.android.com/studio/index.html) for examining, editing, and running the application code. We recommend using the latest stable version.
+ In the sample code, you provide Amazon Cognito credentials.

Follow these procedures to set up an Amazon Cognito user pool and identity pool.

### Set up a user pool
<a name="setup-user-pool"></a>

**To set up a user pool**

1. Sign in to the [Amazon Cognito console](https://console.aws.amazon.com/cognito/home) and verify that the Region is correct.

1. In the navigation on the left, choose **User pools**.

1. In the **User pools** section, choose **Create user pool**.

1. Complete the following sections:

   1. **Step 1: Configure sign-in experience** - In the **Cognito user pool sign-in options** section, select the appropriate options.

      Select **Next**.

   1. **Step 2: Configure security requirements** - Select the appropriate options.

      Select **Next**.

   1. **Step 3: Configure sign-up experience** - Select the appropriate options.

      Select **Next**.

   1. **Step 4: Configure message delivery** - Select the appropriate options.

      In the **IAM role selection** field, select an existing role or create a new role.

      Select **Next**.

   1. **Step 5: Integrate your app** - Select the appropriate options.

      In the **Initial app client** field, choose **Confidential client**.

      Select **Next**.

   1. **Step 6: Review and create** - Review your selections from the previous sections, then choose **Create user pool**.

1. On the **User pools** page, select the pool that you just created.

   Copy the **User pool ID** and make note of this for later. In the `awsconfiguration.json` file, this is `CognitoUserPool.Default.PoolId`.

1. Select the **App integration** tab and go to the bottom of the page.

1. In the **App client list** section, choose the **App client name** you just created.

   Copy the **Client ID** and make note of this for later. In the `awsconfiguration.json` file, this is `CognitoUserPool.Default.AppClientId`.

1. Show the **Client secret** and make note of this for later. In the `awsconfiguration.json` file, this is `CognitoUserPool.Default.AppClientSecret`.

### Set up an identity pool
<a name="setup-identity-pool"></a>

**To set up an identity pool**

1. Sign in to the [Amazon Cognito console](https://console.aws.amazon.com/cognito/home) and verify that the Region is correct.

1. In the navigation on the left, choose **Identity pools**.

1. Choose **Create identity pool**.

1. Configure the identity pool.

   1. **Step 1: Configure identity pool trust** - Complete the following sections:
      + **User access** - Select **Authenticated access**
      + **Authenticated identity sources** - Select **Amazon Cognito user pool**

      Select **Next**.

   1. **Step 2: Configure permissions** - In the **Authenticated role** section, complete the following fields:
      + **IAM role** - Select **Create a new IAM role**
      + **IAM role name** - Enter a name and make note of it for a later step.

      Select **Next**.

   1. **Step 3: Connect identity providers** - In the **User pool details** section complete the following fields: 
      + **User pool ID** - Select the user pool you created earlier.
      + **App client ID** - Select the app client ID you created earlier.

      Select **Next**.

   1. **Step 4: Configure properties** - Type a name in the **Identity pool name** field.

      Select **Next**.

   1. **Step 5: Review and create** - Review your selections in each of the sections, then select **Create identity pool**.

1. On the **Identity pools** page, select your new identity pool.

   Copy the **Identity pool ID** and make note of this for later. In the `awsconfiguration.json` file, this is `CredentialsProvider.CognitoIdentity.Default.PoolId`.

1. Update the permissions for the IAM role.

   1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. In the navigation on the left, choose **Roles**.

   1. Find and select the role you created above.
**Note**  
Use the search bar, if needed.

   1. Select the attached permissions policy.

      Select **Edit**.

   1. Select the **JSON** tab and replace the policy with the following:

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "cognito-identity:*",
                      "kinesisvideo:*"
                  ],
                  "Resource": [
                      "*"
                  ]
              }
          ]
      }
      ```

      Select **Next**.

   1. Select the box next to **Set this new version as the default** if it isn't already selected.

      Select **Save changes**.

# Amazon Kinesis Video Streams WebRTC SDK for iOS
<a name="kvswebrtc-sdk-ios"></a>

The following step-by-step instructions describe how to download, build, and run the Kinesis Video Streams WebRTC SDK for iOS and its corresponding samples.

## Download the SDK
<a name="download-sdk-ios"></a>

To download the WebRTC SDK for iOS, run the following command:

```
$ git clone https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-ios.git
```

## Build the SDK
<a name="build-sdk-ios"></a>

Complete the following steps:

1. Import the iOS WebRTC SDK into the Xcode integrated development environment (IDE) on a Mac by opening `AWSKinesisVideoWebRTCDemoApp.xcworkspace` (path: amazon-kinesis-video-streams-webrtc-sdk-ios/Swift/AWSKinesisVideoWebRTCDemoApp.xcworkspace).

1. If you open the project for the first time, it automatically builds. If not, initiate a build.

   You might see the following error: 

   ```
   error: The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation.
   ```

   If you see this, do the following:

   1. Change your current working directory to `amazon-kinesis-video-streams-webrtc-sdk-ios/Swift` and run the following in the command line:

      ```
      pod cache clean --all
      pod install
      ```

   1. Change your current working directory to `amazon-kinesis-video-streams-webrtc-sdk-ios` and run the following at the command line:

      ```
      $ git checkout Swift/Pods/AWSCore/AWSCore/Service/AWSService.m
      ```

   1. Build again.

1. Configure Amazon Cognito (user pool and identity pool) settings. For detailed steps, see [Configure Amazon Cognito for the SDK](kvswebrtc-sdk-android.md#build-sdk-android-cognito). This generates the authentication and authorization settings required to build the iOS WebRTC SDK.

1. In your IDE, open the `awsconfiguration.json` file (from `/Swift/KVSiOSApp`). The file looks like the following:

   ```
   {
       "Version": "1.0",
       "CredentialsProvider": {
           "CognitoIdentity": {
               "Default": {
                   "PoolId": "REPLACEME",
                   "Region": "REPLACEME"
               }
           }
       },
       "IdentityManager": {
           "Default": {}
       },
       "CognitoUserPool": {
           "Default": {
               "AppClientSecret": "REPLACEME",
               "AppClientId": "REPLACEME",
               "PoolId": "REPLACEME",
               "Region": "REPLACEME"
           }
       }
   }
   ```

   Update `awsconfiguration.json` with the values generated by running the steps in [Configure Amazon Cognito for the SDK](kvswebrtc-sdk-android.md#build-sdk-android-cognito).

1. In your IDE, open the `Constants.swift` file (from `/Swift/KVSiOSApp`). The file looks like the following:

   ```
   import Foundation
   import AWSCognitoIdentityProvider
   
   let CognitoIdentityUserPoolRegion = AWSRegionType.USWest2
   let CognitoIdentityUserPoolId = "REPLACEME"
   let CognitoIdentityUserPoolAppClientId = "REPLACEME"
   let CognitoIdentityUserPoolAppClientSecret = "REPLACEME"
   
   let AWSCognitoUserPoolsSignInProviderKey = "UserPool"
   let CognitoIdentityPoolID = "REPLACEME"
   
   let AWSKinesisVideoEndpoint = "https://kinesisvideo.us-west-2.amazonaws.com"
   let AWSKinesisVideoKey = "kinesisvideo"
   
   let VideoProtocols =  ["WSS", "HTTPS"]
   
   let ConnectAsMaster = "connect-as-master"
   let ConnectAsViewer = "connect-as-viewer"
   
   let MasterRole = "MASTER"
   let ViewerRole = "VIEWER"
   
   let ClientID = "ConsumerViewer"
   ```

   Update `Constants.swift` with the values generated by running the steps in [Configure Amazon Cognito for the SDK](kvswebrtc-sdk-android.md#build-sdk-android-cognito).

1. Make sure your iOS device is connected to the Mac computer where you're running Xcode. In Xcode, select the connected device and then build and run the WebRTC iOS SDK.

   This step installs an app called `AWSKinesisVideoWebRTCDemoApp` on your iOS device. Using this app, you can verify live WebRTC audio/video streaming between mobile, web, and IoT device clients.

## Run the sample application
<a name="run-sdk-ios"></a>


Complete the following steps:

1. On your iOS device, open **AWSKinesisVideoWebRTCDemoApp** and log in using an existing Amazon Cognito account, or create a new one.

1. In **AWSKinesisVideoWebRTCDemoApp**, navigate to the **Channel Configuration** page and either create a new signaling channel or choose an existing one.
**Note**  
Currently, the sample application in this SDK supports only one signaling channel at a time in **AWSKinesisVideoWebRTCDemoApp**.

1. (Optional) Choose a unique **Client Id** if you want to connect to this channel as a viewer. The client ID is required only if multiple viewers are connected to a channel. It helps the channel's master identify each viewer.

1. Choose the AWS Region and whether you want to send audio or video data, or both.

1. To verify peer-to-peer streaming, do any of the following:
**Note**  
Ensure that you specify the same signaling channel name, AWS Region, viewer ID, and AWS account ID on all clients that you're using in this demo.
   + Peer-to-peer streaming between two iOS devices: master and viewer
     + Using the procedures above, download, build, and run the iOS WebRTC SDK on two iOS devices.
     + Open **AWSKinesisVideoWebRTCDemoApp** on one iOS device in master mode (choose **START MASTER**) to start a new session (signaling channel).
**Note**  
Currently, there can only be one master for any given signaling channel.
     + Open **AWSKinesisVideoWebRTCDemoApp** on your second iOS device in viewer mode (choose **START VIEWER**) to connect to the signaling channel (session) started in the preceding step.

       Verify that the viewer can see the master's audio/video data.
   + Peer-to-peer streaming between the embedded SDK master and an iOS device viewer
     + Download, build, and run the [Amazon Kinesis Video Streams with WebRTC SDK in C for embedded devices](kvswebrtc-sdk-c.md) in master mode on a camera device. 
     + Using the procedures above, download, build, and run the iOS WebRTC SDK on an iOS device. Open **AWSKinesisVideoWebRTCDemoApp** on this iOS device in viewer mode and verify that the iOS viewer can see the embedded SDK master's audio/video data.
   + Peer-to-peer streaming between iOS device as master and web browser as viewer
     + Using the procedures above, download, build, and run the iOS WebRTC SDK on an iOS device. Open **AWSKinesisVideoWebRTCDemoApp** on this iOS device in master mode (choose **START MASTER**) to start a new session (signaling channel).
     + Download, build, and run the [Amazon Kinesis Video Streams with WebRTC SDK in JavaScript for web applications](kvswebrtc-sdk-js.md) as a viewer and verify that the JavaScript viewer can see the iOS master's audio/video.

## Configure Amazon Cognito for the SDK
<a name="build-sdk-ios-cognito"></a>

### Prerequisites
<a name="iossdk-prerequisites"></a>
+ We recommend using the latest version of Xcode for examining, editing, and running the application code.
+ In the sample code, you provide Amazon Cognito credentials.

Follow these procedures to set up an Amazon Cognito user pool and identity pool.

### Set up a user pool
<a name="setup-user-pool"></a>

**To set up a user pool**

1. Sign in to the [Amazon Cognito console](https://console.aws.amazon.com/cognito/home) and verify the region is correct.

1. In the navigation pane on the left, choose **User pools**.

1. In the **User pools** section, choose **Create user pool**.

1. Complete the following sections:

   1. **Step 1: Configure sign-in experience** - In the **Cognito user pool sign-in options** section, select the appropriate options.

      Select **Next**.

   1. **Step 2: Configure security requirements** - Select the appropriate options.

      Select **Next**.

   1. **Step 3: Configure sign-up experience** - Select the appropriate options.

      Select **Next**.

   1. **Step 4: Configure message delivery** - Select the appropriate options.

      In the **IAM role selection** field, select an existing role or create a new role.

      Select **Next**.

   1. **Step 5: Integrate your app** - Select the appropriate options.

      In the **Initial app client** field, choose **Confidential client**.

      Select **Next**.

   1. **Step 6: Review and create** - Review your selections from the previous sections, then choose **Create user pool**.

1. On the **User pools** page, select the pool that you just created.

   Copy the **User pool ID** and make note of this for later. In the `awsconfiguration.json` file, this is `CognitoUserPool.Default.PoolId`.

1. Select the **App integration** tab and go to the bottom of the page.

1. In the **App client list** section, choose the **App client name** you just created.

   Copy the **Client ID** and make note of this for later. In the `awsconfiguration.json` file, this is `CognitoUserPool.Default.AppClientId`.

1. Show the **Client secret** and make note of this for later. In the `awsconfiguration.json` file, this is `CognitoUserPool.Default.AppClientSecret`.

### Set up an identity pool
<a name="setup-identity-pool"></a>

**To set up an identity pool**

1. Sign in to the [Amazon Cognito console](https://console.aws.amazon.com/cognito/home) and verify the region is correct.

1. In the navigation pane on the left, choose **Identity pools**.

1. Choose **Create identity pool**.

1. Configure the identity pool.

   1. **Step 1: Configure identity pool trust** - Complete the following sections:
      + **User access** - Select **Authenticated access**
      + **Authenticated identity sources** - Select **Amazon Cognito user pool**

      Select **Next**.

   1. **Step 2: Configure permissions** - In the **Authenticated role** section, complete the following fields:
      + **IAM role** - Select **Create a new IAM role**
      + **IAM role name** - Enter a name and make note of it for a later step.

      Select **Next**.

   1. **Step 3: Connect identity providers** - In the **User pool details** section complete the following fields: 
      + **User pool ID** - Select the user pool you created earlier.
      + **App client ID** - Select the app client ID you created earlier.

      Select **Next**.

   1. **Step 4: Configure properties** - Type a name in the **Identity pool name** field.

      Select **Next**.

   1. **Step 5: Review and create** - Review your selections in each of the sections, then select **Create identity pool**.

1. On the **Identity pools** page, select your new identity pool.

   Copy the **Identity pool ID** and make note of this for later. In the `awsconfiguration.json` file, this is `CredentialsProvider.CognitoIdentity.Default.PoolId`.

1. Update the permissions for the IAM role.

   1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. In the navigation pane on the left, choose **Roles**.

   1. Find and select the role you created above.
**Note**  
Use the search bar, if needed.

   1. Select the attached permissions policy.

      Select **Edit**.

   1. Select the **JSON** tab and replace the policy with the following:

      ```
      {
          "Version": "2012-10-17",		 	 	 
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "cognito-identity:*",
                      "kinesisvideo:*"
                  ],
                  "Resource": [
                      "*"
                  ]
              }
          ]
      }
      ```

      Select **Next**.

   1. Select the box next to **Set this new version as the default** if it isn't already selected.

      Select **Save changes**.

# Client metrics for the C SDK
<a name="kvswebrtc-reference"></a>

Applications built with Amazon Kinesis Video Streams with WebRTC consist of various moving parts, including networking, signaling, candidate exchange, peer connection, and data exchange. Kinesis Video Streams with WebRTC in C supports various client-side metrics that enable you to monitor and track the performance and usage of these components in your applications. The supported metrics fall into two major categories: custom metrics defined specifically for the Kinesis Video Streams implementation of signaling and networking, and media- and data-related protocol-specific metrics that are derived from the [W3C](https://www.w3.org/TR/webrtc-stats/) standard. Note that only a subset of the W3C standard metrics is currently supported for Kinesis Video Streams with WebRTC in C.

**Topics**
+ [Signaling metrics](#kvswebrtc-reference-signaling)
+ [W3C standard metrics supported for C SDK](#kvswebrtc-reference-w3cstandard)

## Signaling metrics
<a name="kvswebrtc-reference-signaling"></a>

Signaling metrics can be used to understand how the signaling client behaves while your application is running. You can use the `STATUS signalingClientGetMetrics (SIGNALING_CLIENT_HANDLE, PSignalingClientMetrics)` API to obtain these signaling metrics. Here's an example usage pattern:

```
SIGNALING_CLIENT_HANDLE signalingClientHandle;
SignalingClientMetrics signalingClientMetrics;
STATUS retStatus = signalingClientGetMetrics(signalingClientHandle, &signalingClientMetrics);
printf("Signaling client connection duration: %" PRIu64 " ms",
       (signalingClientMetrics.signalingClientStats.connectionDuration / HUNDREDS_OF_NANOS_IN_A_MILLISECOND));
```

The definition of `signalingClientStats` can be found in [Stats.h](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/src/include/com/amazonaws/kinesis/video/webrtcclient/Stats.h).

The following signaling metrics are currently supported:



| Metric | Description | 
| --- | --- | 
| cpApiCallLatency | Latency of control plane API calls, calculated using an exponential moving average (EMA). The associated calls include describeChannel, createChannel, getChannelEndpoint, and deleteChannel. | 
| dpApiCallLatency | Latency of data plane API calls, calculated using an exponential moving average (EMA). The associated calls include getIceConfig. | 
| signalingClientUptime | This indicates the time for which the client object exists. Every time this metric is invoked, the most recent uptime value is emitted. | 
| connectionDuration | If a connection is established, this emits the duration for which the connection has been alive. Otherwise, a value of 0 is emitted. This differs from signalingClientUptime because connections come and go, while signalingClientUptime reflects the lifetime of the client object itself. | 
| numberOfMessagesSent | This value is updated when the peer sends an offer, answer, or an ICE candidate. | 
| numberOfMessagesReceived | Unlike numberOfMessagesSent, this metric is updated for any type of signaling message. The types of signaling messages are available in `SIGNALING_MESSAGE_TYPE`. | 
| iceRefreshCount | This is incremented when getIceConfig is invoked. The rate at which this is invoked is based on the TTL in the ICE configuration received. Each time a fresh set of ICE configurations is received, a timer is set to refresh them before the credentials in the configuration expire, minus a grace period. | 
| numberOfErrors | The counter tracks the number of errors generated within the signaling client. Errors generated while getting the ICE configuration, getting the signaling state, tracking signaling metrics, sending signaling messages, and connecting the signaling client to the WebSocket in order to send/receive messages are tracked. | 
| numberOfRuntimeErrors | The metric includes errors incurred while the core of the signaling client is running. Scenarios like reconnect failures, message receive failures, and ICE configuration refresh errors are tracked here. | 
| numberOfReconnects | The metric is incremented on every reconnect. This is a useful metric for understanding the stability of the network connection in the setup. | 
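
The cpApiCallLatency and dpApiCallLatency values above are maintained as exponential moving averages (EMA). The following sketch shows the shape of such an update; the smoothing factor and function name are illustrative assumptions for this example, not the SDK's internal constants:

```
#include <assert.h>

/* Hypothetical EMA update for an API-call latency metric such as
 * cpApiCallLatency. EMA_ALPHA is an illustrative smoothing factor,
 * not the SDK's actual constant. */
#define EMA_ALPHA 0.2

double emaUpdateLatency(double currentEma, double newSampleMs)
{
    /* Weight the newest sample by alpha and decay the running average. */
    return EMA_ALPHA * newSampleMs + (1.0 - EMA_ALPHA) * currentEma;
}
```

With this shape, a single slow API call nudges the reported latency rather than replacing it, which smooths out transient network spikes.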

## W3C standard metrics supported for C SDK
<a name="kvswebrtc-reference-w3cstandard"></a>

A subset of the [W3C](https://www.w3.org/TR/webrtc-stats/) standard metrics is currently supported for the applications built with the WebRTC C SDK. These fall into the following categories:
+ Networking:
  + [Ice Candidate](https://www.w3.org/TR/webrtc-stats/#icecandidate-dict*): these metrics provide information about the selected local and remote candidates for data exchange between the peers. This includes server source of the candidate, IP address, type of candidate selected for the communication, and candidate priority. These metrics are useful as a snapshot report.
  + [Ice Server](https://www.w3.org/TR/webrtc-stats/#ice-server-dict*): these metrics are for gathering operational information about the different ICE servers supported. This is useful when trying to understand the server that is primarily being used for communication and connectivity checks. In some instances, it is also useful to examine these metrics if the gathering of candidates fails. 
  + [Ice Candidate Pair](https://www.w3.org/TR/webrtc-stats/#candidatepair-dict*): these metrics are for understanding the number of bytes/packets that are being exchanged between the peers and also time-related measurements.
+ Media and data:
  + [Remote Inbound RTP](https://www.w3.org/TR/webrtc-stats/#remoteinboundrtpstats-dict*): these metrics represent the endpoint perspective of the data stream sent by the sender.
  + [Outbound RTP](https://www.w3.org/TR/webrtc-stats/#dom-rtcoutboundrtpstreamstats): these metrics provide information about the outgoing RTP stream. They can also be very useful when analyzing choppy streaming or streaming stops.
  + [Inbound RTP](https://www.w3.org/TR/webrtc-stats/#dom-rtcinboundrtpstreamstats): these metrics provide information about the incoming media. 
  + [Data channel metrics](https://www.w3.org/TR/webrtc-stats/#dcstats-dict*): these metrics can help you analyze the number of messages and bytes sent and received over a data channel. The metrics can be pulled by using the channel ID.

You can use the `STATUS rtcPeerConnectionGetMetrics (PRtcPeerConnection, PRtcRtpTransceiver, PRtcStats)` API to gather metrics related to ICE, RTP and the data channel. Here's a usage example:

```
RtcStats rtcStats;
rtcStats.requestedTypeOfStats = RTC_STATS_TYPE_LOCAL_CANDIDATE;
STATUS retStatus = rtcPeerConnectionGetMetrics(pRtcPeerConnection, NULL, &rtcStats);
printf("Local Candidate address: %s\n", rtcStats.rtcStatsObject.localIceCandidateStats.address);
```

Here's another example that shows the usage pattern for getting transceiver-related stats:

```
RtcStats rtcStats;
PRtcRtpTransceiver pVideoRtcRtpTransceiver;
rtcStats.requestedTypeOfStats = RTC_STATS_TYPE_OUTBOUND_RTP;
STATUS retStatus = rtcPeerConnectionGetMetrics(pRtcPeerConnection, pVideoRtcRtpTransceiver, &rtcStats);
printf("Number of packets discarded on send: %" PRIu64 "\n", rtcStats.rtcStatsObject.outboundRtpStreamStats.packetsDiscardedOnSend);
```

In the above example, if the second argument to `rtcPeerConnectionGetMetrics()` is NULL, data for the first transceiver in the list is returned.

The definition of `rtcStatsObject` can be found in [Stats.h](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/src/include/com/amazonaws/kinesis/video/webrtcclient/Stats.h), and the definition of `RtcStats` can be found in [Include.h](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/src/include/com/amazonaws/kinesis/video/webrtcclient/Include.h).

Sample usages of the APIs and the different metrics can be found in the [samples](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/tree/master/samples) directory in the WebRTC C SDK repository and in the [Kinesis Video Stream demos repository](https://github.com/aws-samples/amazon-kinesis-video-streams-demos/tree/master/canary/webrtc-c/src).

The following [W3C](https://www.w3.org/TR/webrtc-stats/) standard metrics are currently supported for the applications built with the WebRTC C SDK.

**Topics**
+ [Networking](#kvswebrtc-reference-ice)
+ [Media](#kvswebrtc-reference-media)
+ [Data channel](#kvswebrtc-reference-datachannel)

### Networking
<a name="kvswebrtc-reference-ice"></a>

ICE Server Metrics:



| Metric | Description | 
| --- | --- | 
| URL | URL of the STUN/TURN server being tracked | 
| Port | Port number used by the client | 
| Protocol | Transport protocol extracted from the ICE server URI. If the value is UDP, ICE tries TURN over UDP; otherwise, ICE tries TURN over TCP/TLS. If the URI does not contain a transport, ICE tries TURN over both UDP and TCP/TLS. For a STUN server, this field is empty. | 
| Total Requests Sent  | The value is updated for every srflx candidate request and when sending binding requests from TURN candidates. | 
| Total Responses Received | The value is updated every time a STUN binding response is received. | 
| Total Round Trip Time | The value is updated every time an equivalent response is received for a request. The request packet is tracked in a hash map with the checksum as the key. | 
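
Total Round Trip Time is accumulated by matching each response to its request through a checksum-keyed map, as the row above describes. A minimal sketch of that bookkeeping follows; the fixed-size table and all names here are illustrative, not the SDK's internals:

```
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RTT_SLOTS 64

/* Illustrative entry: send time of an in-flight STUN request,
 * keyed by the request packet's checksum. */
typedef struct { uint32_t checksum; uint64_t sentAtMs; int used; } RttEntry;

typedef struct {
    RttEntry entries[RTT_SLOTS];
    uint64_t totalRoundTripTimeMs;
} RttTracker;

void rttOnRequestSent(RttTracker* t, uint32_t checksum, uint64_t nowMs)
{
    /* Record the send time under the checksum key. */
    RttEntry* e = &t->entries[checksum % RTT_SLOTS];
    e->checksum = checksum;
    e->sentAtMs = nowMs;
    e->used = 1;
}

void rttOnResponseReceived(RttTracker* t, uint32_t checksum, uint64_t nowMs)
{
    /* On a matching response, add the elapsed time to the total. */
    RttEntry* e = &t->entries[checksum % RTT_SLOTS];
    if (e->used && e->checksum == checksum) {
        t->totalRoundTripTimeMs += nowMs - e->sentAtMs;
        e->used = 0;
    }
}
```

The same idea underlies the candidate pair totalRoundTripTime metric later in this section.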

ICE Candidate Stats: Only the information about the selected candidate (local and remote) is included.



| Metric | Description | 
| --- | --- | 
| address | This indicates the IP address of the local and remote candidate. | 
| port | Port number of the candidate | 
| protocol | Protocol used to obtain the candidate. The valid values are UDP/TCP. | 
| candidateType | Type of candidate selected - host, srflx or relay. | 
| priority | Priority of the selected local and remote candidate. | 
| url | Source of the selected local candidate. This indicates whether the selected candidate was obtained from a STUN server or a TURN server. | 
| relayProtocol | If TURN server is used to obtain the selected local candidate, this field indicates what protocol was used to obtain it. Valid values are TCP/UDP. | 

ICE Candidate Pair Stats: Only the information about the selected candidate pairs is included.



| Metric | Description | 
| --- | --- | 
| localCandidateId | The ID of the selected local candidate in the pair. | 
| remoteCandidateId | The ID of the selected remote candidate in the pair.  | 
| state | State of the candidate pair being inspected. | 
| nominated | Set to TRUE since the stats are being pulled for selected candidate pair. | 
| packetsSent | Number of packets sent. This is calculated in the iceAgentSendPacket() call within the writeFrame() call. This information can also be extracted from outgoing RTP stats, but because ICE candidate pair stats include a lastPacketSentTimestamp, it might be useful for calculating the number of packets sent between two points in time. | 
| packetsReceived | This is updated every time the incomingDataHandler is called. | 
| bytesSent | This is calculated in the iceAgentSendPacket() call within the writeFrame() call. This is useful when calculating a bit rate. Currently, this also includes the header and padding because the ICE layer is oblivious to the RTP packet format. | 
| bytesReceived | This is updated every time the incomingDataHandler is called. Currently, this also includes the header and padding since the ICE layer is oblivious to the RTP packet format. | 
| lastPacketSentTimestamp | This is updated every time a packet is sent. This can be used in conjunction with packetsSent and a start time recorded at the application layer to deduce the current packet transfer rate. | 
| lastPacketReceivedTimestamp | This is updated on receiving data in incomingDataHandler(). This can be used in conjunction with packetsReceived to deduce the current packet receive rate. The start time has to be recorded at the application layer in the transceiverOnFrame() callback. | 
| firstRequestTimestamp | Recorded when the very first STUN binding request is sent out successfully in iceAgentSendStunPacket(). This can be used along with lastRequestTimestamp and requestsSent to find average time between STUN binding requests. | 
| lastRequestTimestamp | Recorded every time a STUN binding request is sent out successfully in iceAgentSendStunPacket(). | 
| lastResponseTimestamp | Recorded every time a STUN binding response is received. | 
| totalRoundTripTime | Updated when a binding response is received for a request. The request and response are mapped in a hash table based on checksum.  | 
| currentRoundTripTime | Most recent round trip time updated when a binding response is received for a request on the candidate pair.  | 
| requestsReceived | A counter that is updated on every STUN binding request received. | 
| requestsSent | A counter that is updated on every STUN binding request sent out in iceAgentSendStunPacket().  | 
| responsesSent | A counter that is updated on every STUN binding response sent out in response to a binding request in handleStunPacket(). | 
| responsesReceived | A counter that is updated on every STUN binding response received in handleStunPacket().  | 
| packetsDiscardedOnSend |  Updated when packet sending fails; in other words, this is updated when iceUtilsSendData() fails. This is useful for determining the percentage of packets dropped in a specific duration. | 
| bytesDiscardedOnSend | Updated when packet sending fails. In other words, this is updated when iceUtilsSendData() fails. This is useful when determining percentage of packets dropped in a specific duration. Note that the counter also includes the header of the packets. | 
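
As the lastPacketSentTimestamp row notes, combining that timestamp with packetsSent lets you derive a send rate between two readings of the stats. A hedged sketch of that calculation; the snapshot struct is illustrative, not an SDK type:

```
#include <assert.h>
#include <stdint.h>

/* Illustrative snapshot of two candidate-pair counters taken at
 * different points in time. */
typedef struct {
    uint64_t packetsSent;
    uint64_t lastPacketSentTimestampMs;
} PairSnapshot;

/* Average packets per second between two snapshots of the pair. */
double pairPacketsPerSecond(PairSnapshot earlier, PairSnapshot later)
{
    uint64_t deltaMs = later.lastPacketSentTimestampMs - earlier.lastPacketSentTimestampMs;
    if (deltaMs == 0) {
        return 0.0;
    }
    return (double)(later.packetsSent - earlier.packetsSent) * 1000.0 / (double)deltaMs;
}
```

The same pattern applies to bytesSent and bytesReceived when you need a bit rate instead of a packet rate.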

### Media
<a name="kvswebrtc-reference-media"></a>

Outbound RTP Stats



| Metric | Description | 
| --- | --- | 
| voiceActivityFlag | This is currently part of RtcEncoderStats defined in Include.h. The flag is set to TRUE if the last audio packet contained voice. The flag is currently not set in the samples. | 
| packetsSent | This indicates the total number of RTP packets sent out for the selected SSRC. This is a part of the [sent RTP stats](https://www.w3.org/TR/webrtc-stats/#sentrtpstats-dict*) and is included as part of outbound stats. This is incremented every time writeFrame() is called. | 
| bytesSent | Total number of bytes excluding RTP header and padding that is sent. This is updated on every writeFrame call. | 
| encoderImplementation | This is updated by the application layer as part of RtcEncoderStats object.  | 
| packetsDiscardedOnSend | This field is updated if the ICE agent fails to send the encrypted RTP packet for any reason in the iceAgentSendPacket call. | 
| bytesDiscardedOnSend | This field is also updated if the ICE agent fails to send the encrypted RTP packet for any reason in the iceAgentSendPacket call. | 
| framesSent | This is incremented only if the media stream track type is `MEDIA_STREAM_TRACK_KIND_VIDEO`. | 
| hugeFramesSent | This counter is updated for frames that are 2.5 times the average size of frames. The size of the frame is obtained by calculating the fps (based on the last known frame count time and number of frames encoded in a time interval) and using the targetBitrate in RtcEncoderStats set by the application.  | 
| framesEncoded | This counter is updated only for video track after successful encoding of the frame. It is updated on every writeFrame call. | 
| keyFramesEncoded | This counter is updated only for video track after successful encoding of the key frame. It is updated on every writeFrame call. | 
| framesDiscardedOnSend | This is updated when frame sending fails due to an iceAgentSendPacket call failure. A frame consists of a group of packets; currently, framesDiscardedOnSend is updated if any packet is discarded while sending because of an error. | 
| frameWidth | This ideally represents the frame width of the last encoded frame. Currently, this is set to a value by the application as part of RtcEncoderStats and is of little significance. | 
| frameHeight | This ideally represents the frame height of the last encoded frame. Currently, this is set to a value by the application as part of RtcEncoderStats and is of little significance. | 
| frameBitDepth |  This represents the bit depth per pixel width of the last encoded frame. Currently, this is set by the application as part of RtcEncoderStats and translated into outbound stats.  | 
| nackCount | This value is updated every time a NACK is received on an RTP packet and a re-attempt to send the packet is made. The stack supports re-transmission of packets on receiving a NACK. | 
| firCount | The value is updated on receiving a FIR packet (onRtcpPacket->onRtcpFIRPacket). It indicates how often the stream falls behind and has to skip frames in order to catch up. FIR packet is currently not decoded to extract the fields, so, even though the count is set, no action is taken. | 
| pliCount | The value is updated on receiving a PLI packet (onRtcpPacket->onRtcpPLIPacket). It indicates that some amount of encoded video data has been lost for one or more frames. | 
| sliCount | The value is updated on receiving a SLI packet (onRtcpPacket->onRtcpSLIPacket). It indicates how often packet loss affects a single frame. | 
| qualityLimitationResolutionChanges | Currently, the stack supports this metric; however, the frame width and height are not monitored for every encoded frame. | 
| lastPacketSentTimestamp | The timestamp at which the last packet was sent. It is updated on every writeFrame call. | 
| headerBytesSent | Total number of RTP header and padding bytes sent for this SSRC excluding the actual RTP payload. | 
| bytesDiscardedOnSend | This is updated when frame sending fails due to an iceAgentSendPacket call failure. A frame consists of a group of packets, which in turn consist of bytes; currently, bytesDiscardedOnSend is updated if any packet is discarded while sending because of an error. | 
| retransmittedPacketsSent | The number of packets that are retransmitted on reception of PLI/SLI/NACK. Currently, the stack only counts packets resent in response to NACK, because PLI- and SLI-based retransmissions are not supported. | 
| retransmittedBytesSent | The number of bytes that are retransmitted on reception of PLI/SLI/NACK. Currently, the stack only counts bytes resent in response to NACK, because PLI- and SLI-based retransmissions are not supported. | 
| targetBitrate | This is set in the application level. | 
| totalEncodedBytesTarget | This is increased by the target frame size in bytes every time a frame is encoded. This is updated using size parameter in Frame structure. | 
| framesPerSecond | This is calculated based on the time recorded for the last known encoded frame and the number of frames sent within a second. | 
| totalEncodeTime | This is set to an arbitrary value in the application and is translated to outbound stats internally. | 
| totalPacketSendDelay | This is currently set to 0 since iceAgentSendPacket sends packet immediately. | 
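
The hugeFramesSent rule in the table above can be sketched as follows. This restates the 2.5x threshold from the table; the parameter names and the helper itself are illustrative, not the SDK's internals:

```
#include <assert.h>
#include <stdint.h>

/* A frame counts as "huge" when it exceeds 2.5x the average frame
 * size implied by the target bitrate and the observed frame rate. */
int isHugeFrame(uint32_t frameSizeBytes, uint64_t targetBitrateBps, double fps)
{
    /* Average bytes per frame: bits per second / 8 / frames per second. */
    double avgFrameBytes = (double)targetBitrateBps / 8.0 / fps;
    return (double)frameSizeBytes > 2.5 * avgFrameBytes;
}
```

For example, at a 1 Mbps target bitrate and 25 fps, the implied average frame is 5,000 bytes, so frames above 12,500 bytes would count as huge.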

Remote inbound RTP Stats:



| Metric | Description | 
| --- | --- | 
| roundTripTime | The value is extracted from the RTCP receiver report on receiving an RTCP packet of type 201 (receiver report). The report includes details such as the last sender report and the delay since the last sender report, which are used to calculate round trip time. Sender reports are generated roughly every 200 milliseconds and contain information such as the number of packets sent and bytes sent, extracted from outbound stats. | 
| totalRoundTripTime | Sum of the round trip times calculated. | 
| fractionLost | Represents the fraction of RTP packets lost for the SSRC since the previous sender/receiver report was sent. | 
| reportsReceived | Updated every time a receiver report type packet is received. | 
| roundTripTimeMeasurements | Indicates the total number of reports received for the SSRC that contain a valid round trip time. However, this value is currently incremented regardless, so its meaning is the same as reportsReceived. | 
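
The roundTripTime computation described in the table follows the standard RTCP receiver-report arithmetic: the arrival time of the report, minus the echoed last-sender-report timestamp (LSR), minus the receiver's reported delay since that report (DLSR). A sketch using milliseconds for readability (RTCP actually carries these fields in NTP middle-32 format):

```
#include <assert.h>
#include <stdint.h>

/* Round trip time derived from an RTCP receiver report.
 * All values are in milliseconds here for readability. */
uint64_t rtcpRoundTripMs(uint64_t reportArrivalMs,
                         uint64_t lastSenderReportMs,
                         uint64_t delaySinceLastSenderReportMs)
{
    return reportArrivalMs - lastSenderReportMs - delaySinceLastSenderReportMs;
}
```

Subtracting DLSR removes the time the remote peer spent holding the report, leaving only network transit time.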

Inbound RTP Stats:



| Metric | Description | 
| --- | --- | 
| packetsReceived | The counter is updated when a packet is received for a specific SSRC. | 
| jitter | This metric indicates the packet jitter, measured in seconds, for the specific SSRC. | 
| jitterBufferDelay | This metric denotes the sum of time spent by each packet in the jitter buffer. | 
| jitterBufferEmittedCount | The total number of audio samples or video frames that have come out of the jitter buffer.  | 
| packetsDiscarded | The counter is updated when the jitter buffer is full and the packet cannot be pushed into it. This can be used to calculate the percentage of packets discarded in a fixed duration. | 
| framesDropped | This value is updated when the onFrameDroppedFunc() is invoked.  | 
| lastPacketReceivedTimestamp | Represents the timestamp at which the last packet was received for this SSRC. | 
| headerBytesReceived | The counter is updated on receiving an RTP packet. | 
| bytesReceived | Number of bytes received. This does not include the header bytes. This metric can be used to calculate the incoming bit rate. | 
| packetsFailedDecryption | This is incremented when the decryption of the SRTP packet fails. | 
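
As the bytesReceived row suggests, an incoming bit rate can be computed from two readings of the counter taken over a known interval. A hedged sketch of that arithmetic:

```
#include <assert.h>
#include <stdint.h>

/* Incoming bit rate in kbit/s from two bytesReceived readings taken
 * intervalMs apart. bytes * 8 bits / milliseconds = kbit/s. */
double incomingBitrateKbps(uint64_t bytesPrev, uint64_t bytesNow, uint64_t intervalMs)
{
    if (intervalMs == 0) {
        return 0.0;
    }
    return (double)(bytesNow - bytesPrev) * 8.0 / (double)intervalMs;
}
```

Because bytesReceived excludes header bytes, this gives the media payload rate rather than the on-the-wire rate.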

### Data channel
<a name="kvswebrtc-reference-datachannel"></a>

Data channel metrics:



| Metric | Description | 
| --- | --- | 
| label | Label is the name of the data channel being inspected. | 
| protocol | Since our stack uses SCTP, the protocol is set to a constant SCTP. | 
| dataChannelIdentifier | The even or odd identifier used to uniquely identify a data channel. This is updated to an odd value if the SDK is the offerer and an even value if the SDK is the answerer. | 
| state | State of the data channel when the stats are queried. Currently, the two states supported are `RTC_DATA_CHANNEL_STATE_CONNECTING` (when the channel is created) and `RTC_DATA_CHANNEL_STATE_OPEN` (set in the onOpen() event). | 
| messagesSent | The counter is updated when the SDK sends messages over the data channel. | 
| bytesSent | The counter is updated with the bytes in the message that is sent out. This can be used to determine how many bytes were not sent due to failures, that is, the percentage of bytes successfully sent. | 
| messagesReceived | The metric is incremented in the onMessage() callback. | 
| bytesReceived | The metric is generated in the onMessage() callback. | 
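
The dataChannelIdentifier parity rule above can be sketched like this; the helper and its counter parameter are illustrative conveniences for the example, not SDK API:

```
#include <assert.h>
#include <stdint.h>

/* Per the table above: the offerer takes odd identifiers and the
 * answerer takes even ones. nextEvenId is an even counter that the
 * caller advances per channel. */
uint16_t pickDataChannelId(int sdkIsOfferer, uint16_t nextEvenId)
{
    return sdkIsOfferer ? (uint16_t)(nextEvenId + 1) : nextEvenId;
}
```

Splitting the identifier space by parity keeps the two peers from ever assigning the same ID to different channels.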