

# Amazon Connect Contact Lens
Contact Lens

**Note**  
**Powered by Amazon Bedrock**: AWS implements [automated abuse detection](https://docs.aws.amazon.com/bedrock/latest/userguide/abuse-detection.html). Because Amazon Connect Contact Lens is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI).

Amazon Connect Contact Lens provides contact center analytics and quality management capabilities that enable you to monitor, measure, and continuously improve contact quality and agent performance for a better overall customer experience.
+ [Analyze conversations using conversational analytics](analyze-conversations.md). You can uncover trends and improve customer service by understanding sentiment, conversation characteristics, emerging contact themes, self-service user experiences, and agent compliance risks. 

  Conversational analytics helps you protect your customer's privacy by enabling you to [automatically redact sensitive data](sensitive-data-redaction.md) from conversation transcripts, audio files, and email messages.

  
+ [Evaluate performance](evaluations.md). You can review conversations alongside contact details, recordings, transcripts, and summaries, without the need to switch applications. You can define and assess agent performance criteria (for example, script adherence, sensitive data collection, and customer greetings) and automatically pre-populate evaluation forms.
+ [Set up and review agent screen recordings](agent-screen-recording.md). You can review agent actions handling customer contacts by reviewing screen recordings. This helps you ensure adherence to quality standards, compliance requirements, and best practices. It also helps you identify coaching opportunities and bottlenecks so you can streamline workflows.
+ [Search for completed and in-progress contacts](contact-search.md). You can search for contacts going back up to two years.
+ [Monitor live and recorded conversations](monitoring-amazon-connect.md). You can monitor live conversations (both voice and chat) and barge live voice conversations. This is especially helpful for agents in training.
+ [Transfer](transfer-contacts-admin.md), [reschedule](reschedule-contacts-admin.md), or [end](end-contacts-admin.md) in-progress contacts. While on the **Contact details** page, you can manage in-progress contacts.
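The contact search capability above is also available programmatically through the `SearchContacts` API. The following is a minimal sketch using the AWS SDK for Python (boto3); the instance ID is a placeholder, and the helper only builds the request for a recent time window.

```python
from datetime import datetime, timedelta, timezone

def build_search_params(instance_id, days_back=7):
    """Build a SearchContacts request covering the last `days_back` days."""
    now = datetime.now(timezone.utc)
    return {
        "InstanceId": instance_id,
        "TimeRange": {
            "Type": "INITIATION_TIMESTAMP",
            "StartTime": now - timedelta(days=days_back),
            "EndTime": now,
        },
        "MaxResults": 25,
    }

# Usage (requires boto3 and AWS credentials; not run here):
#   connect = boto3.client("connect")
#   response = connect.search_contacts(**build_search_params("your-instance-id"))
#   for contact in response["Contacts"]:
#       print(contact["ContactId"], contact.get("Channel"))
```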

# Analyze conversations using conversational analytics in Amazon Connect Contact Lens
Analyze conversations using conversational analytics

With Contact Lens conversational analytics, you can analyze conversations between customers and agents or customers and conversational AI, across voice, chat, and email, using natural language processing. Conversational analytics performs sentiment analysis, detects issues, and enables you to automatically categorize contacts. 

**Speech analytics support**
+ **Real-time call analytics**: Use to detect and resolve customer issues more proactively while the call is in progress. For example, it can [analyze and alert](add-rules-for-alerts.md) you when a customer is getting frustrated because the agent is unable to resolve a complicated problem. This allows you to provide more immediate assistance. 
+ **Post-call analytics**: Use to understand trends of customer conversations, self-service interactions, and agent compliance. This helps you identify opportunities to improve conversational AI and coach agents after the call.
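For real-time call analytics, a supervisor tool can poll the in-progress analysis with the `ListRealtimeContactAnalysisSegments` API. The following is a minimal sketch, assuming the boto3 `connect-contact-lens` client; the instance and contact IDs are placeholders, and the helper that surfaces negative customer sentiment is illustrative.

```python
def negative_customer_turns(segments):
    """From ListRealtimeContactAnalysisSegments output, return the transcript
    contents of customer turns whose sentiment is NEGATIVE."""
    turns = []
    for segment in segments:
        transcript = segment.get("Transcript")
        if not transcript:
            continue  # skip non-transcript segments, such as matched categories
        if (transcript.get("ParticipantRole") == "CUSTOMER"
                and transcript.get("Sentiment") == "NEGATIVE"):
            turns.append(transcript["Content"])
    return turns

# Usage (requires boto3 and AWS credentials; not run here):
#   client = boto3.client("connect-contact-lens")
#   response = client.list_realtime_contact_analysis_segments(
#       InstanceId="your-instance-id", ContactId="in-progress-contact-id")
#   print(negative_customer_turns(response["Segments"]))
```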

**Chat analytics support**
+ **Real-time chat analytics**: As with real-time call analytics, you can detect and resolve customer issues more proactively while the chat is in progress, and [receive an alert](add-rules-for-alerts-chat.md). For example, managers can get a real-time email alert when customer sentiment for a chat contact turns negative, allowing them to join the in-progress contact and help resolve the customer issue. 
+ **Post-chat analytics**: Use to understand trends of customer conversations with both bots and agents. It provides information specific to a chat interaction, such as the agent greeting time, and agent and customer response times. The response times and sentiments help you investigate the customer's experience with the bot versus the agent, and identify areas for improvement. 
+ Each processed chat message is charged the same way. While not all messages have every feature applied (for example, summarization is applied to `text/plain` messages only), if Contact Lens conversational analytics is enabled on the contact, the message is counted for billing. For more information about pricing, see [Amazon Connect Pricing](https://aws.amazon.com/connect/pricing/).

**Email analytics support**
+ **Email analytics**: Use to analyze email conversations between customers and agents. Contact Lens automatically categorizes email contacts, redacts sensitive data from email transcripts, and generates contact summaries. This helps you understand email conversation trends and ensure compliance across your email channel.
+ Because email contacts are asynchronous, with one participant acting at a time, the real-time and post-contact distinction that applies to voice and chat does not apply to email. An email analysis is initiated as soon as the [Flow block in Amazon Connect: Set recording, analytics and processing behavior](set-recording-analytics-processing-behavior.md) is used when an email contact is received or sent.

You can protect your customer's privacy by redacting sensitive data, such as name, address, and credit card information from transcripts and audio recordings. 

## Sample Contact details page for a call


The following image shows the conversational analytics for a voice call. Notice that it includes **Talk time** metrics.

![\[A sample contact details page with talk time metrics.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-contactdetails-call1b.png)


1. **Customer sentiment trend**: This graph shows how customer sentiment changes as the contact progresses. For more information, see [Investigate sentiment scores](sentiment-scores.md).

1. **Customer sentiment**: This graph shows the distribution of customer sentiment for the entire call. This is calculated by counting the number of conversation turns or chat messages in which the customer's sentiment was Positive, Neutral, or Negative.

1. **Talk time**: This graph shows the distribution of talk time and non-talk time during the entire call. The talk time is further split into agent and customer talk time. 

The following image shows the next section on the **Contact details** page for a voice call: the audio analysis and transcript. Notice that personally identifiable information (PII) has been [redacted from the transcript](sensitive-data-redaction.md). 

![\[The audio analysis and transcript for the contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-contactdetails-call2b.png)


## Sample Contact details page for real-time chat analytics


The following image shows the conversational analytics for a real-time chat. Notice that it includes Key highlights and customer sentiment.

![\[A contact details page with conversational analytics for a real-time chat.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-realtime-chat.png)


## Sample Contact details page for post-chat analytics


The following image shows post-chat analytics. Notice that it includes chat response metrics, such as **Agent greeting time** (the time from the agent joining the chat to when they send the first response), **Customer response time**, and **Agent response time**.

![\[A contact details page with summary and conversational analytics for a chat.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-contactdetails-chat1b.png)


The following image shows the next section on the **Contact details** page for a chat: the interaction analysis and transcript. Notice that you can investigate the customer's interaction with a bot versus the agent.

![\[The contact details page, the interaction analysis and transcript for a chat.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-contactdetails-chat2b.png)


## Sample Contact details page for email analytics


The following image shows the conversational analytics for an email contact. Email analytics includes categorization, sensitive data redaction, and contact summaries. Because email contacts are asynchronous, there are no real-time analytics or sentiment scores.

![\[A sample contact details page with conversational analytics for an email contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-contactdetails-email.png)


# Enable conversational analytics in Amazon Connect Contact Lens
Enable conversational analytics

You can enable Contact Lens conversational analytics in a few steps:

1. Enable Contact Lens on your Amazon Connect instance.

1. Add a [Set recording and analytics behavior](set-recording-behavior.md) block to a flow, and configure it to enable conversational analytics for voice, chat, email, or a combination of channels.

The following image shows a block that's configured for call recording and speech analytics. The **Call recording** option is set to **Agent and customer**. In the **Analytics** section, the options are selected for automated interactions and agent interactions.

![\[The properties page for a set recording and analytics behavior block.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/set-recording-and-analytics-behavior.png)


The procedures in this topic describe the steps to enable conversational analytics for calls, chats, or emails.

**Topics**
+ [Important things to know](#important-set-behaviorblock)
+ [Enable Contact Lens for your Amazon Connect instance](#enable-cl)
+ [Enable call recording and speech analytics](#enable-callrecording-speechanalytics)
+ [Enable chat analytics](#enable-chatanalytics)
+ [Enable email analytics](#enable-emailanalytics)
+ [Enable redaction](#enable-redaction)
+ [Review redaction for accuracy](#review-sensitive-data-redaction)
+ [Disable sentiment analysis](#disable-sentiment-analysis-voice-and-chat)
+ [Dynamically enable redaction based on the customer's language](#dynamically-enable-analytics-contact-flow)
+ [Design a flow for key highlights](#call-summarization-agent)
+ [What if the flow block fails to enable conversational analytics?](#troubleshoot-contactlens-enablement)
+ [Multi-party calls](#multiparty-calls-contactlens)

## Important things to know
Important things to know
+ **Collect data after transferring a contact**: If you want to continue using conversational analytics to collect data after transferring a contact to another agent or queue, you need to add another [Set recording and analytics behavior](set-recording-behavior.md) block with **Enable analytics** enabled for the flow. This is because a transfer generates a second contact ID and contact record. Conversational analytics needs to run on that contact record as well.
**Note**  
For [queue-to-queue transfers](queue-to-queue-transfer.md), the configuration information for conversational analytics is copied to the transferred contact.
+ When you choose a language that is supported by sentiment analysis, AND select **Enable Contact Lens speech analytics**, **Enable chat analytics**, or **Enable email analytics** in the [Set recording and analytics behavior](set-recording-behavior.md) block, sentiment analysis is enabled by default. You can choose to [disable sentiment analysis](#disable-sentiment-analysis-voice-and-chat). 
+ Where you place the [Set recording and analytics behavior](set-recording-behavior.md) block in a flow affects the agent's experience with key highlights. For more information, see [Design a flow for key highlights](#call-summarization-agent).

## Enable Contact Lens for your Amazon Connect instance
Enable Contact Lens for your Amazon Connect instance

Before you can enable conversational analytics, you first need to enable Contact Lens for your instance. 

1. Open the Amazon Connect console at [https://console.aws.amazon.com/connect/](https://console.aws.amazon.com/connect/).

1. On the instances page, choose the instance alias. The instance alias is also your **instance name**, which appears in your Amazon Connect URL. The following image shows the **Amazon Connect virtual contact center instances** page, with a box around the instance alias.  
![\[The Amazon Connect virtual contact center instances page, the instance alias.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/instance.png)

1. In the Amazon Connect console, in the navigation pane, choose **Analytics tools**, and then choose **Enable Contact Lens**.

1. Choose **Save**.
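The steps above use the Amazon Connect console. As a hedged alternative sketch, the same instance-level setting can be toggled programmatically with the `UpdateInstanceAttribute` API; the instance ID below is a placeholder.

```python
def contact_lens_attribute(enabled):
    """Build the UpdateInstanceAttribute request body for the CONTACT_LENS flag.

    The API expects the value as the string "true" or "false".
    """
    return {
        "AttributeType": "CONTACT_LENS",
        "Value": "true" if enabled else "false",
    }

# Usage (requires boto3 and AWS credentials; not run here):
#   connect = boto3.client("connect")
#   connect.update_instance_attribute(
#       InstanceId="your-instance-id", **contact_lens_attribute(enabled=True))
```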

## Enable call recording and speech analytics
Enable call recording and speech analytics

After Contact Lens is enabled for your instance, you can add [Set recording and analytics behavior](set-recording-behavior.md) blocks to your flows. You then enable conversational analytics when you configure the **Set recording and analytics behavior** block.

1. In the flow designer, add a [Set recording and analytics behavior](set-recording-behavior.md) block to your flow. 

   For information about which flow types you can use with this block and other tips, see [Set recording and analytics behavior](set-recording-behavior.md).

1. Open the **Set recording and analytics behavior** properties page. Under **Call recording**, choose **On**, **Agent and Customer**.

   Both agent and customer call recordings are required to use conversational analytics for voice contacts.

1. Under **Analytics**, choose **Enable Contact Lens conversational analytics**, **Enable speech analytics**. 

   If you don't see this option, Amazon Connect Contact Lens hasn't been enabled for your instance. For instructions to enable it, see [Enable Contact Lens for your Amazon Connect instance](#enable-cl).

1. Choose one of the following:

   1. **Post-call analytics**: Contact Lens analyzes the call recording after the conversation and After Contact Work (ACW) is complete. This option provides the best transcription accuracy.

   1. **Real-time analytics**: Contact Lens provides both real-time insights during the call, and post-call analytics after the conversation has ended and After Contact Work (ACW) is complete.

      If you choose this option, we recommend setting up alerts based on keywords and phrases that the customer may utter during the call. Contact Lens analyzes the conversation in real time to detect the specified keywords or phrases, and alerts supervisors. From there, supervisors can listen in on the live call and provide guidance to the agent to help them resolve the issue faster.

      For information about setting up alerts, see [Alert supervisors in real-time for calls](add-rules-for-alerts.md).

      If your instance was created before October 2018, additional configuration is needed to access real-time call analytics. For more information, see [Service-linked role permissions](connect-slr.md#slr-permissions).

1. Choose from the [list of available languages](supported-languages.md#supported-languages-contact-lens).

   For instructions about specifying the language dynamically, see [Dynamically enable redaction based on the customer's language](#dynamically-enable-analytics-contact-flow).

1. Optionally, enable redaction of sensitive data. For more information, see the next section, [Enable redaction](#enable-redaction).

1. Choose **Save**.

1. If the contact is going to be transferred to another agent or queue, repeat these steps to add another [Set recording and analytics behavior](set-recording-behavior.md) block with **Enable Contact Lens conversational analytics** enabled. 

## Enable chat analytics
Enable chat analytics

1. In the [Set recording and analytics behavior](set-recording-behavior.md) block, under **Analytics**, choose **Enable Contact Lens conversational analytics**, and **Enable chat analytics**.
**Note**  
By choosing this option, you receive both real-time and post-chat analytics.

   If you don't see this option, Amazon Connect Contact Lens hasn't been enabled for your instance. For instructions to enable it, see [Enable Contact Lens for your Amazon Connect instance](#enable-cl).

1. Choose from the [list of available languages](supported-languages.md#supported-languages-contact-lens).

   For instructions on choosing the language and redaction dynamically, see [Dynamically enable redaction based on the customer's language](#dynamically-enable-analytics-contact-flow).

1. Optionally, enable redaction of sensitive data. For more information, see the next section, [Enable redaction](#enable-redaction).

1. Choose **Save**.

1. If the contact is going to be transferred to another agent or queue, repeat these steps to add another [Set recording and analytics behavior](set-recording-behavior.md) block with **Enable Contact Lens conversational analytics** enabled. 

## Enable email analytics
Enable email analytics

You can enable Contact Lens conversational analytics for email contacts to automatically categorize emails, redact sensitive data, and generate contact summaries.

1. In the flow designer, add a [Set recording, analytics and processing behavior](set-recording-analytics-processing-behavior.md) block to your inbound email flow. Place the block before the email contact is routed to a queue or agent.

1. Open the block properties. For **Action**, choose **Set recording and analytics behavior**.

1. For **Channel**, choose **Email**.

1. Under **Analytics**, choose **Enable Contact Lens conversational analytics**, and **Enable email analytics**.

   If you don't see this option, Amazon Connect Contact Lens hasn't been enabled for your instance. For instructions to enable it, see [Enable Contact Lens for your Amazon Connect instance](#enable-cl).

1. Choose from the [list of available languages](supported-languages.md#supported-languages-contact-lens).

1. Optionally, enable redaction of sensitive data. For more information, see [Enable redaction](#enable-redaction).

1. Optionally, under **Contact Lens Generative AI capabilities**, enable **Contact summary** to generate summaries for email contacts.

1. Choose **Save**.

1. If the email contact is going to be transferred to another agent or queue, repeat these steps to add another [Set recording, analytics and processing behavior](set-recording-analytics-processing-behavior.md) block with **Enable Contact Lens conversational analytics** enabled.

## Enable redaction of sensitive data
Enable redaction

When you configure the [Set recording and analytics behavior](set-recording-behavior.md) block for conversational analytics, you also have the option to enable redaction of sensitive data in a flow. When redaction is enabled you can choose from the following options:
+ Redact all personally identifiable information (PII) data (all PII entities supported).
+ Choose which PII entities to redact from the list of supported entities.

If you accept the default settings, Contact Lens conversational analytics redacts all personally identifiable information (PII) it identifies and replaces it with **[PII]** in the transcript. The following image shows the default settings, where **Redact sensitive data**, **Redact All PII data**, and **Replace with placeholder PII** are selected.

![\[The default settings for sensitive data redaction.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-enable-redaction-default.png)


### Select PII entities to redact


Under the **Data redaction** section, you can select specific PII entities to redact. The following image shows that **Credit/Debit Card Number** is going to be redacted.

![\[The data redaction section, a list of entities you can redact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-select-entities-to-redact.png)


### Choose data redaction replacement


Under the **Data redaction replacement** section, you can choose the mask that replaces the redacted data. For example, in the following image, the **Replace with placeholder PII** option indicates that **PII** replaces the data.

![\[The option to replace data with PII.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-dataredactionreplacement.png)


For more information about using redaction, see [Use sensitive data redaction](sensitive-data-redaction.md).

## Review sensitive data redaction for accuracy
Review redaction for accuracy

The redaction feature is designed to identify and remove sensitive data. However, due to the predictive nature of machine learning, it may not identify and remove all instances of sensitive data in a transcript generated by Contact Lens. We recommend you review any redacted output to ensure it meets your needs.

**Important**  
The redaction feature does not meet the requirements for de-identification under medical privacy laws such as the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), so we recommend you continue to treat the output as protected health information after redaction.

For the location of redacted files and examples, see [Output file locations](example-contact-lens-output-locations.md).

## Disable sentiment analysis
Disable sentiment analysis

When you choose a language that is supported by sentiment analysis, AND choose **Enable speech analytics** or **Enable chat analytics**, sentiment analysis is enabled by default for all agents and customers. For a list of languages supported by sentiment analysis, see [AI features](supported-languages.md#supported-languages-contact-lens). 

The following image shows the sentiment analysis option is enabled on the **Set recording and analytics behavior** block. 

![\[The Sentiment analysis option when it is enabled.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/sentiment-analysis-enabled.png)


The following image shows a language that is not supported by sentiment analysis. We recommend opening the **Sentiment** section to verify whether it is enabled or disabled. 

![\[The Sentiment analysis option when it is disabled because the language isn't supported.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/sentiment-analysis-verify.png)


To disable sentiment analysis for all agents and customers, deselect the **Enable Sentiment Analysis** option, as shown in the following image.

![\[The sentiment analysis option when it is disabled.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/sentiment-analysis-disabled.png)


## Dynamically enable redaction based on the customer's language
Dynamically enable redaction based on the customer's language

You can dynamically enable the redaction of the output files based on the language of the customer. For example, for customers using en-US, you may want only a redacted file, whereas for those using en-GB, you may want both the original and redacted output files. Set the following contact attributes:
+ Redaction: Choose one of the following values (they are case sensitive):
  + None
  + RedactedOnly
  + RedactedAndOriginal
+ Language: Choose from the [list of available languages](supported-languages.md#supported-languages-contact-lens).

You can set these attributes in the following ways:
+ User defined: use a **Set contact attributes** block. For general instructions about using this block, see [How to reference contact attributes](how-to-reference-attributes.md). Define the **Destination key** and **Value** for redaction and language as needed. 

  The following image shows an example of how you can configure the **Set contact attributes** block to use contact attributes for redaction. Choose the **Use text** option, set **Destination key** to **redaction\_option**, and set **Value** to **RedactedAndOriginal**. 
**Note**  
 **Value** is case sensitive.   
![\[The set contact attributes block, the use text option, the value is case sensitive.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-contact-attributes-enable-redaction1.png)

  The following image shows how to use contact attributes for language. Choose the **Use text** option, set **Destination key** to **language**, and set **Value** to **en-US**.  
![\[The set contact attributes block, the use text option, the value is case sensitive.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-contact-attributes-enable-redaction2.png)
+ [Use a Lambda function](attribs-with-lambda.md). This is similar to how you set up user-defined contact attributes. An AWS Lambda function can return the redaction and language settings as key-value pairs in its response. The following example shows a Lambda response in JSON: 

  ```
  {
     "redaction_option": "RedactedOnly",
     "language": "en-US"
  }
  ```
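A minimal Python sketch of such a Lambda function follows. The lookup logic, the default language, and the policy of choosing a redaction option per language are illustrative assumptions, not part of the Contact Lens configuration.

```python
def lambda_handler(event, context):
    """Return flow attributes controlling redaction and language.

    Values are case sensitive; valid redaction options are
    None, RedactedOnly, and RedactedAndOriginal.
    """
    # Amazon Connect passes contact details under event["Details"]; default to
    # en-US when no language attribute is set on the contact.
    attributes = (
        event.get("Details", {}).get("ContactData", {}).get("Attributes", {})
    )
    language = attributes.get("language", "en-US")

    # Illustrative policy: keep only the redacted file for en-US contacts,
    # and keep both the original and redacted files otherwise.
    redaction = "RedactedOnly" if language == "en-US" else "RedactedAndOriginal"

    return {"redaction_option": redaction, "language": language}
```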

## Design a flow for key highlights
Design a flow for key highlights

Transcripts are visible to agents using the Contact Control Panel (CCP) depending on whether conversational analytics is enabled in the [Set recording and analytics behavior](set-recording-behavior.md) block in the inbound flow, in a transfer flow, or both.

This section provides three use cases for enabling conversational analytics in the [Set recording and analytics behavior](set-recording-behavior.md) block, and describes how they affect the agent's experience with key highlights.

### Use case 1: Conversational analytics is enabled in an inbound flow only

+ A contact enters the inbound flow, and there are no call transfers. Following is the agent experience:

  The agent receives the full transcript during After Contact Work (ACW). The transcript includes everything said by the agent and the customer, from the moment the agent accepts the initial call, until the call has ended, as shown in the following image.  
![\[The contact control panel, the transcript of the conversation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-summarization-use1.png)
+ A contact enters the inbound flow, and there is a call transfer. Following is the agent experience:
  + Agent 1 receives a call transcript after they leave the conference/warm transfer, during ACW.

    The transcript includes everything said by agent 1 and the customer, from the moment agent 1 accepts the initial call until agent 1 leaves the conference/warm transfer portion of the call. The transcript includes the flow (transfer/queue flow) prompt messages, as shown in the following image.   
![\[The flow transfer prompt in the transcript.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-summarization-use2.png)
  + Agent 2 receives a call transcript at the time of accepting the conference/warm transfer call from agent 1.

    The transcript includes everything said by agent 1 and the customer, from the moment agent 1 accepts the initial call until agent 1 leaves the conference/warm transfer portion of the call. The transcript includes the flow (transfer/queue flow) prompt messages, and the warm transfer conversation, as shown in the following image.   
![\[The transcript, the flow transfer prompt and the warm transfer between two agents.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-summarization-use2b.png)

    Because conversational analytics is not enabled in the transfer flow, agent 2 doesn't see the remainder of the transcript when the call has ended and they enter ACW. The following image of ACW for agent 2 shows the transcript is empty.   
![\[An empty transcript.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-summarization-use2c.png)

### Use case 2: Conversational analytics is enabled in an inbound flow and a transfer flow (quick connect)

+ A contact enters the inbound flow, and there are no call transfers. Following is the agent experience:
  + Agent 1 receives a full call transcript (unredacted) during ACW. 

    The transcript includes everything said by agent 1 and the customer from the moment the agent accepts the call, until the call has ended. This is shown in the following image of the CCP for agent 1.  
![\[The CCP for agent 1, a full call transcript.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-summarization-use3.png)
+ A contact enters the inbound flow, and there is a call transfer. Following is the agent experience:
  + Agent 1 receives a call transcript after they leave the conference/warm transfer, during ACW.

    The transcript includes everything said by agent 1 and the customer from the moment agent 1 accepts the call, until agent 1 leaves the conference/warm transfer portion of the call. The transcript includes flow (transfer/queue flow) prompt messages.

    The full call transcript until warm transfer is shown in the following image.  
![\[A full call transcript until agent 1 leaves the conference.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-summarization-use2b.png)
  + Agent 2 receives a call transcript at the time of accepting the conference/warm transfer call from agent 1.

    The transcript includes everything said by agent 1 and the customer, from the moment agent 1 accepts the call, until agent 1 leaves the conference/warm transfer portion of the call. The transcript includes the flow (transfer/queue flow) prompt messages. 
  + Because conversational analytics is enabled in the transfer flow, agent 2 receives a call transcript after the call is completed, during ACW. 

    The transcript includes only the remaining portion of the call between agent 2 and customer, after agent 1 has left the call. The transcript includes everything said by agent 2 and the customer, from the moment they are conferenced/warm transferred in, until the call has ended. An example transcript is shown in the following image.  
![\[A transcript of the call between agent 2 and the customer.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-summarization-use3b.png)

## What if the flow block fails to enable conversational analytics?
What if the flow block fails to enable conversational analytics?

It's possible that the [Set recording and analytics behavior](set-recording-behavior.md) block can fail to enable conversational analytics on a contact. If conversational analytics isn't enabled for a contact, [check the flow logs](search-contact-flow-logs.md) for the error.

## Multi-party calls and conversational analytics
Multi-party calls

Contact Lens conversational analytics supports calls with up to two participants (agent and customer). If more than two parties are on a call, or a call is transferred to a third party, the quality of the transcription and analytics, such as sentiment, redaction, and categories, can degrade. We recommend that you disable conversational analytics for multi-party or third-party calls. To do this, add another [Set recording and analytics behavior](set-recording-behavior.md) block to the flow and disable conversational analytics. For more information about the behavior of the flow block, see [Configuration tips](set-recording-behavior.md#set-recording-behavior-tips). 

# Assign permissions to use Contact Lens conversational analytics in Amazon Connect
Assign permissions

To keep customer data secure, you set security profile permissions to determine who can access information generated by Contact Lens conversational analytics. 

Following is a description of the required security profile permissions, as well as some permissions that are helpful to have but not required. Several of these are Search permissions, which are needed so you can find the contacts you want to analyze. They aren't specific to Contact Lens conversational analytics.

## Conversational analytics permissions
Conversational analytic permissions
+ **Contact Lens - conversational analytics**
  + On the **Contact details** page you can view graphs that summarize conversational analytics (customer sentiment, talk time for voice contacts), as well as sentiment colors and indicators for each conversation turn on transcripts and recordings. For example, the following image shows how this information is displayed on the **Contact details** page for a voice contact.

    **Contact Lens - conversational analytics - View** permission is also required to view sentiment indicators on conversation recordings and transcripts.   
![\[Graphs on the contact details page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-conversationalanalytics-permission.png)  
![\[Graphs on the contact details page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-conversationalanalytics-permission-2.png)
+ **Call recordings (unredacted)**

  On the **Contact details** and **Contact search** pages for a contact, listen to unredacted call recordings.
+ **Call recordings (redacted)**

  On the **Contact details** and **Contact search** pages for a contact, listen to call recordings in which the sensitive data has been redacted.
+ **Contact transcripts (unredacted)**

  On the **Contact details** and **Contact search** pages for a contact, view unredacted chat and email conversations, and unredacted voice transcripts produced by Contact Lens.
+ **Contact transcripts (redacted)**

  On the **Contact details** and **Contact search** pages for a contact, view chat and voice transcripts in which the sensitive data has been redacted.

**Important**  
If you have permissions to:  
Both **Contact transcripts (unredacted) - Access** and **Contact transcripts (redacted) - Access**
– OR –  
Both **Call recordings (unredacted) - Access** and **Call recordings (redacted) - Access**
Note the following behavior:  
When redaction is enabled on the flow, redacted content is displayed on the **Contact details** and **Contact search** pages.
When redaction is disabled on the flow or the contact is not analyzed by Contact Lens, unredacted content is displayed on the **Contact details** and **Contact search** pages.
You cannot access both the redacted and unredacted version of a conversation at the same time.
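The display behavior above can be sketched as a small decision function. This is a minimal illustration only; the function name and parameters are hypothetical, not part of Amazon Connect:

```python
def content_to_display(redaction_enabled: bool, analyzed_by_contact_lens: bool) -> str:
    """Which version of a recording or transcript is shown to a user who
    holds BOTH the redacted and unredacted Access permissions."""
    # Redacted content is shown only when Contact Lens analyzed the contact
    # and redaction was enabled on the flow; otherwise unredacted is shown.
    if analyzed_by_contact_lens and redaction_enabled:
        return "redacted"
    return "unredacted"

print(content_to_display(True, True))    # redaction on, analyzed -> redacted
print(content_to_display(False, True))   # redaction off -> unredacted
print(content_to_display(True, False))   # not analyzed -> unredacted
```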

## Search permissions
Search permissions
+ **Contact search**

  This permission is required to access the **Contact search** page, where you can search for contacts and review the analyzed recordings and transcripts. In addition, you can run fast, full-text searches on call transcripts, and search by sentiment score and non-talk time. 
+ **View my contacts**

  This permission is required if you need to access the **Contact search** page and review only those contacts that you handled, along with their analyzed recordings and transcripts.
**Important**  
If both **Contact search** and **View my contacts** permissions are granted, the user has access to all contacts.
+ **Search contacts by conversation characteristics**

  This permission isn't required by Contact Lens conversational analytics but it's helpful as it provides more search options.

  On the **Contact Search** page:
  + For voice contacts, you can access additional filters that allow you to return results by sentiment score and non-talk time.
  + For chat contacts, you can access an additional filter to search for contacts by response time. 
  + For both voice and chat, you can search conversations that fall into specific contact categories. 

  For more information, see [Search for sentiment score/shift](search-conversations.md#sentiment-search), [Search for non-talk time](search-conversations.md#nontalk-time-search), and [Search a contact category](search-conversations.md#contact-category-search).

  The following image shows the **Filters** section of the **Contact Search** page, and the **Filters** dropdown menu. Filters with **CL** next to them are available only to users who have this security profile permission.   
![\[The add filters dropdown menu, filters with CL next to them.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-search-contact-category-3.png)
+ **Search contacts by keywords**

  This permission isn't required by Contact Lens conversational analytics but it's helpful as it provides more search options.
  + On the **Contact Search** page, you can access additional filters that allow you to search contacts by **Words or phrases**, such as "*thank you for your business*." For more information, see [Search for words or phrases](search-conversations.md#keyword-search).  
![\[The add filters dropdown menu, the Words or phrases CL filter.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-search-words-phrases.png)

# Conversational analytics metrics in Amazon Connect
Conversational analytics metrics

The following metrics are derived from Contact Lens conversational analytics. These metrics are available only when [Contact Lens is enabled for your instance](enable-analytics.md#enable-cl) and [conversational analytics](enable-analytics.md#enable-callrecording-speechanalytics) is enabled on the contact. 

These metrics are displayed on the Real-time and Historical metrics reports. For instructions about how to add these metrics to your report, see [How to create a historical metrics report](create-historical-metrics-report.md#historical-reports-howto-create).

Also check out the [Contact Lens conversational analytics dashboard](contact-lens-conversational-analytics-dashboard.md) for data visualizations about the trends of contact drivers over time. 

## Agent talk time percent


This metric measures the talk time by an agent in a voice conversation as a percent of the total conversation duration. 

**Metric type**: Percent

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `PERCENT_TALK_TIME_AGENT`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Agent talk time percent

**Calculation logic**:
+ Sum all the intervals in which an agent was engaged in conversation (talk time agent). 
+ Divide the sum by the total conversation duration. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 
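The calculation logic above can be sketched in a few lines. This is an illustrative computation only, not Amazon Connect's implementation; interval data in this form is a hypothetical input:

```python
def percent_talk_time_agent(agent_intervals, conversation_duration):
    """PERCENT_TALK_TIME_AGENT sketch: agent talk time as a percent of the
    total conversation duration. Intervals are (start, end) pairs in seconds."""
    talk_time = sum(end - start for start, end in agent_intervals)
    return 100.0 * talk_time / conversation_duration

# Agent spoke for 30 s + 45 s of a 300 s conversation -> 25.0 percent.
print(percent_talk_time_agent([(0, 30), (100, 145)], 300))
```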

## Average agent greeting time


This metric provides the average first response time of agents on chat, indicating how quickly they engage with customers after joining the chat. 

**Metric type**: String (*hh:mm:ss*)

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_GREETING_TIME_AGENT`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average agent greeting time

**Calculation logic**:
+ This metric is calculated by dividing the total time it takes for an agent to initiate their first response by the number of chat contacts. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Average agent interruptions


This metric quantifies the average frequency of agent interruptions during customer interactions. 

**Metric type**: Count

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_INTERRUPTIONS_AGENT`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average agent interruptions

**Calculation logic**:
+ This metric is calculated by dividing the total number of agent interruptions by the total number of contacts.

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Average agent interruption time


This metric measures the average of total agent interruption time while talking to a contact. 

**Metric type**: String (*hh:mm:ss*)

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_INTERRUPTION_TIME_AGENT`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average agent interruption time

**Calculation logic**:
+ Sum the interruption intervals within each conversation.
+ Divide the sum by the number of conversations that experienced at least one interruption. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 
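The two-step calculation above can be sketched as follows. This is an illustrative computation only; the per-contact list of interruption durations is a hypothetical input, and the `hh:mm:ss` formatter reflects the metric's String type:

```python
def avg_interruption_time_agent(conversations):
    """AVG_INTERRUPTION_TIME_AGENT sketch: `conversations` is a list of
    lists of agent-interruption durations (seconds), one list per contact."""
    interrupted = [sum(durations) for durations in conversations if durations]
    if not interrupted:
        return 0.0
    # Average over contacts that had at least one interruption.
    return sum(interrupted) / len(interrupted)

def as_hms(seconds):
    """Format seconds as hh:mm:ss, matching the metric's String type."""
    s = round(seconds)
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}"

# Two interrupted contacts (8 s and 4 s total) and one clean contact.
print(as_hms(avg_interruption_time_agent([[5, 3], [4], []])))  # 00:00:06
```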

## Average agent talk time


This metric measures the average time that was spent talking in a conversation by an agent. 

**Metric type**: String (*hh:mm:ss*)

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_TALK_TIME_AGENT`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average agent talk time

**Calculation logic**:
+ Sum the durations of all intervals during which the agent was speaking. 
+ Divide the sum by the total number of contacts. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Average conversation duration


This metric measures the average conversation duration of voice contacts with agents.

**Metric type**: String (*hh:mm:ss*)

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_CONVERSATION_DURATION`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average conversation duration

**Calculation logic**:
+ Measure the total time from the start of the conversation until the last word spoken by either the agent or the customer.
+ Divide that value by the total number of contacts to provide an average representation of the conversation time spent on the call. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Average customer talk time


This metric measures the average time that was spent talking in a conversation by a customer. 

**Metric type**: String (*hh:mm:ss*)

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_TALK_TIME_CUSTOMER`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average customer talk time

**Calculation logic**:
+ Sum the durations of all intervals during which the customer was speaking. 
+ Divide the sum by the total number of contacts. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Average non-talk time


This metric provides the average of total non-talk time in a voice conversation. Non-talk time refers to the combined duration of hold time and periods of silence exceeding 3 seconds, during which neither the agent nor the customer is engaged in conversation. 

**Metric type**: String (*hh:mm:ss*)

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_NON_TALK_TIME`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average non-talk time

**Calculation logic**:
+ Sum all the intervals in which both participants remained silent.
+ Divide the sum by the number of contacts. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 
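The non-talk time definition above (silences longer than 3 seconds where neither participant speaks) can be sketched as follows. This is an illustrative computation only, not Amazon Connect's implementation; the merged speech-interval input is a hypothetical representation:

```python
def non_talk_time(speech_intervals, duration, min_silence=3.0):
    """Non-talk time for one voice contact: periods longer than `min_silence`
    seconds where neither participant is speaking. `speech_intervals` are
    merged, non-overlapping (start, end) pairs in seconds."""
    total = 0.0
    prev_end = 0.0
    for start, end in sorted(speech_intervals):
        gap = start - prev_end
        if gap > min_silence:
            total += gap
        prev_end = max(prev_end, end)
    # Silence (or hold) after the last speech interval also counts.
    if duration - prev_end > min_silence:
        total += duration - prev_end
    return total

def avg_non_talk_time(contacts):
    """AVG_NON_TALK_TIME sketch: sum per-contact non-talk time, then divide
    by the number of contacts. Each contact is (speech_intervals, duration)."""
    return sum(non_talk_time(iv, dur) for iv, dur in contacts) / len(contacts)

# One contact: speech at 0-10 s and 20-30 s of a 30 s call -> one 10 s silence.
print(avg_non_talk_time([([(0, 10), (20, 30)], 30)]))  # 10.0
```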

## Average talk time


This metric measures the average time that was spent talking during a voice contact across either the customer or the agent. 

**Metric type**: String (*hh:mm:ss*)

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_TALK_TIME`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Average talk time

**Calculation logic**:
+ Sum all the intervals in which either an agent, a customer, or both were engaged in conversation.
+ Divide the sum by the total number of contacts. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Customer talk time percent


This metric provides the talk time by a customer in a voice conversation as a percent of the total conversation duration. 

**Metric type**: Percent

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `PERCENT_TALK_TIME_CUSTOMER`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Customer talk time percent

**Calculation logic**:
+ Sum all the intervals in which a customer was engaged in conversation.
+ Divide the sum by the total conversation duration. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Non-talk time percent


This metric provides the non-talk time in a voice conversation as a percent of the total conversation duration. 

**Metric type**: Percent

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `PERCENT_NON_TALK_TIME`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Non-talk time percent

**Calculation logic**:
+ Sum all the intervals in which participants remained silent (non-talk time).
+ Divide the sum by the total conversation duration. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

## Talk time percent


This metric provides the talk time in a voice conversation as a percent of the total conversation duration. 

**Metric type**: Percent

**Metric category**: Conversational analytics driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `PERCENT_TALK_TIME`

**How to access using the Amazon Connect admin website**: 
+ Historical metrics reports: Talk time percent

**Calculation logic**:
+ Sum all the intervals in which either an agent, a customer, or both were engaged in conversation (talk time). 
+ Divide the sum by the total conversation duration. 

**Notes**:
+ This metric is available only for contacts analyzed by Contact Lens conversational analytics. 

# Amazon Connect Contact Lens notification types
Contact Lens notification types

Contact Lens provides the following notification types:
+ Contact Lens Post Call/Chat Rules Matched: An EventBridge event is delivered whenever a Contact Lens rule is matched and has triggered the EventBridge rule action. 

  This event contains useful information about the Contact Lens rule that is triggered including the category assigned, and details of the agent, contact and queue.
+ Contact Lens Real Time Call/Chat Rules Matched: An EventBridge event is delivered in real time whenever a Contact Lens rule is matched and has triggered the EventBridge rule action. 

  This event contains useful information about the Contact Lens rule that is triggered including the category assigned, and details of the agent, contact and queue.
+ Contact Lens Analysis State Change: An EventBridge event is delivered when Contact Lens is unable to analyze a contact recording. The event contains the Event Reason Code, which provides details on why the recording couldn't be processed.

You can use these notification types in a variety of scenarios. For example, use Contact Lens Analysis State Change events to signal unexpected errors in the processing of a contact file. The EventBridge event details can then be stored in a CloudWatch log for additional review, used to trigger additional workflows, or used to alert relevant support teams for further investigation. 
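To route these notifications, you attach an event pattern to an EventBridge rule. The following is an illustrative pattern only; the `detail-type` string is an assumption based on the notification name above, so verify it against the events your instance actually emits:

```json
{
  "source": ["aws.connect"],
  "detail-type": ["Contact Lens Post Call Rules Matched"]
}
```

A rule with this pattern can then target a CloudWatch log group, an AWS Lambda function, or another supported EventBridge target.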

The Contact Lens events for speech and chat analytics enable numerous new use cases, such as surfacing and visualizing additional insights. For example:
+ Generating alerts on real-time customer sentiment drops across all call and chat conversations.
+ Aggregating and reporting on recurring issues and topics.
+ Measuring the impact of the latest marketing campaign by detecting how many customers referenced it during a call.
+ Customizing agent compliance standards for each Region and line of business, and enrolling agents in additional training where required.

# Add custom vocabularies to Contact Lens using the Amazon Connect admin website
Add custom vocabularies

You can improve the accuracy of speech recognition for product names, brand names, and domain-specific terminology by expanding and tailoring the vocabulary of the speech-to-text engine in Contact Lens. 

This topic explains how to add custom vocabularies using the Amazon Connect admin website. You can also add them using the [CreateVocabulary](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateVocabulary.html) and [AssociateDefaultVocabulary](https://docs.aws.amazon.com/connect/latest/APIReference/API_AssociateDefaultVocabulary.html) APIs. 

## Things to know about custom vocabularies
Things to know about vocabularies
+ You must set a vocabulary as the **default** for it to be applied to the analyses to generate transcripts. The following image shows the **Custom vocabularies** page. Choose the ellipsis, and then choose **Set as default**.  
![\[The custom vocabularies page, the location of the ellipses, the set as default option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-custom-vocab-default.png)
+ You can have one vocabulary per language applied to the analyses. This means only one file per language can be in the **Ready (default)** state.
+ You can upload and activate up to 20 vocabulary files. You can activate all 20 files at the same time.
+ Transcription is a one-time event. A newly uploaded vocabulary isn't applied retroactively to existing transcriptions.
+ Your text file must be in LF format. If you use any other format, such as CRLF, Amazon Transcribe does not accept your custom vocabulary.
+ The sample vocabulary file can be downloaded only when you choose an English language setting.
+ For limits to the size of a vocabulary file and other requirements, see [Custom vocabularies](https://docs.aws.amazon.com/transcribe/latest/dg/custom-vocabulary.html) in the *Amazon Transcribe Developer Guide*.
+ Custom vocabularies apply to speech analytics only. They do not apply to chat conversations because the transcripts already exist. 

## Required permissions
Required permissions

Before you can add custom vocabularies to Amazon Connect, you need the **Analytics and Optimization**, **Contact Lens - custom vocabularies** permission assigned to your security profile.

By default, in new instances of Amazon Connect the **Admin** and **CallCenterManager** security profiles have this permission.

For information about how to add more permissions to an existing security profile, see [Update security profiles in Amazon Connect](update-security-profiles.md).

## Add a custom vocabulary
Add a custom vocabulary

1. Log in to Amazon Connect with a user account that has the required permissions to add custom vocabularies.

1. Navigate to **Analytics and optimization**, **Custom vocabularies**.

1. Choose **Add custom vocabulary**.

1. On the **Add custom vocabulary** page, enter a name for the vocabulary, choose an English language, and then choose **Download a sample** file.
**Note**  
The sample vocabulary file can be downloaded only when you choose an English language setting. Otherwise, an error message is displayed, as shown in the following image.  

![\[The error message that processing the vocabulary file failed.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-custom-vocab-sample-error.png)


   The following image shows what the sample vocabulary file looks like. The header contains `Phrase`, `IPA`, `SoundsLike`, `DisplayAs`. The header is required.  
![\[A sample vocabulary file, the header.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-custom-vocab-header.png)

1. Separate each entry in the file with a single TAB character. For details about how to add words and acronyms to your vocabulary file, see [Creating a custom vocabulary using a table](https://docs.aws.amazon.com/transcribe/latest/dg/custom-vocabulary-create-table.html) in the *Amazon Transcribe Developer Guide*.

   The following image shows words in a sample vocabulary file. Words in the Phrase column are required. Words in the `IPA`, `SoundsLike`, and `DisplayAs` columns are optional.  
![\[A sample vocabulary file, words in the phrase column are required.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-custom-vocab-phrase-column.png)

   To enter multiple words in the **Phrase** column, separate each word with a hyphen (-); do not use spaces. 
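The file-format rules above (tab-separated columns, required header, LF line endings, hyphenated multi-word phrases) can be enforced with a small helper. This is an illustrative sketch only; the function name and dict-based input are hypothetical, and the file must still meet the requirements in the *Amazon Transcribe Developer Guide*:

```python
def write_vocabulary(path, entries):
    """Write a custom vocabulary file: tab-separated columns with the
    required header, LF line endings (CRLF files are rejected).
    `entries` are dicts; only the Phrase column is required."""
    header = ["Phrase", "IPA", "SoundsLike", "DisplayAs"]
    lines = ["\t".join(header)]
    for entry in entries:
        phrase = entry["Phrase"]
        if " " in phrase:
            # Multi-word phrases must be hyphenated, never spaced.
            raise ValueError(f"use hyphens, not spaces: {phrase!r}")
        lines.append("\t".join(entry.get(col, "") for col in header))
    with open(path, "w", newline="\n") as f:  # newline="\n" forces LF endings
        f.write("\n".join(lines) + "\n")

write_vocabulary("vocab.txt", [
    {"Phrase": "Amazon-Connect", "DisplayAs": "Amazon Connect"},
    {"Phrase": "A.W.S.", "DisplayAs": "AWS"},
])
```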

## Vocabulary states
Vocabulary states
+ **Ready (default)**: The vocabulary is being applied to the analyses to generate transcripts. It is applied to both real-time and post-call analyses.
+ **Ready**: The vocabulary is not being applied to analyses, but it is a valid file and available. To apply it to analyses, set it to default. 
+ **Processing**: Amazon Connect is validating your uploaded vocabulary and trying to apply it to the analyses to generate transcripts.
+ **Deleting**: You chose to **Remove** the vocabulary, and Amazon Connect is deleting it now. 

  It takes about 90 minutes for Amazon Connect to delete a vocabulary.

If you attempt to upload a vocabulary that does not validate, it results in a **Failed** state. For example, if you add multiple-word phrases to the **Phrase** column, and separate them with spaces instead of hyphens, it will fail. 

## Download and view a custom vocabulary
View a custom vocabulary

To view a custom vocabulary that has been uploaded, you download and open the file. Only files in the **Ready** state can be downloaded and viewed.

1. Navigate to **Analytics and optimization**, **Custom vocabularies**.

1. Choose **More**, **Download**. The location of **Download** is shown in the following image.  
![\[The custom vocabularies page, a list of vocabularies, the more dropdown menu, the download option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-custom-vocab-download.png)

1. Open the download to view the contents.

1. You can change the contents, and then choose **Save and upload**. 

# Create Contact Lens rules using the Amazon Connect admin website
Create rules with Contact Lens

Contact Lens rules allow you to automatically categorize contacts, receive alerts, or generate tasks based on keywords used during a call, chat, or email; sentiment scores; customer attributes; and other criteria. 

This topic explains how to create rules using the Amazon Connect admin website. To create and manage rules programmatically, see [Rules actions](https://docs.aws.amazon.com/connect/latest/APIReference/rules-api.html) and the [Amazon Connect Rules Function language](https://docs.aws.amazon.com/connect/latest/APIReference/connect-rules-language.html) in the *Amazon Connect API Reference Guide*. 

**Tip**  
For a list of rules feature specifications (for example, how many rules you can create), see [Amazon Connect Rules feature specifications](feature-limits.md#rules-feature-specs).

## Step 1: Define rule conditions for conversational analytics
Step 1: Define rule conditions

1. On the navigation menu, choose **Analytics and optimization**, **Rules**.

1. Select **Create a rule**, **Conversational analytics**.

1. Under **When**, use the dropdown list to choose **post-call analysis**, **real-time analysis**, **post-chat analysis**, or **email analysis**.  
![\[The new rule page, the when dropdown menu.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rule-define-conditions.png)

1. Choose **Add condition**. 

   You can combine criteria from a large set of conditions to build very specific Contact Lens rules. Following are the available conditions: 
   + **Words or phrases**: Choose from [Exact match, Pattern match, or Semantic match](exact-match-pattern-match-semantic-match.md) to trigger an alert or task when keywords are uttered.
   + **Natural Language - Semantic Match**: Provide a natural language statement (for example, "customer called to cancel their account") to match with conversation transcripts using generative AI, and take an action (for example, triggering a task or performing an evaluation). For more information, see [Generative AI-powered semantic match](natural-language-semantic-match.md).
   + **After contact work (ACW)**: Build rules to measure agent efficiency in completing after contact work.
   + **Agent hierarchy**: Build rules that run on a specific agent hierarchy. Agent hierarchies may represent geographical locations, departments, products, or teams.

     To see a list of agent hierarchies so you can add them to rules, you need the **Agent hierarchy - View** permission in your security profile.
   + **Agent**: Build rules that run on a subset of agents. For example, create a rule to ensure newly hired agents comply with company standards.

     To see agent names so you can add them to rules, you need **Users - View** permissions in your security profile. 
   + **AI agent**: Identify contacts where a particular Connect AI agent performed self-service or agent assistance. You can select multiple AI agents, or select a specific version of an agent.

     To see AI agent names so you can add them to rules, you need **AI agents - View** permissions in your security profile.
   + **AI agent - Escalation**: Identify contacts when a Connect AI agent used for customer self-service escalated to a human.

     To see AI agent names so you can add them to rules, you need **AI agents - View** permissions in your security profile.
   + **Agent interaction duration**: Build rules to identify contacts that had an agent interaction longer or shorter than what was expected. This feature applies to calls only.
   + **Contact segment attributes**: You can identify contacts within rules by using custom contact segment attributes whose values are populated from other systems or from custom logic. You can [define an attribute](predefined-attributes.md#predefined-attributes-create-web-admin) and set its value in flows. Custom segment attributes are present only on that specific contact ID, not the entire contact chain. For example, you can build a rule that identifies that a contact was pre-authenticated in the IVR before being connected with the agent.

     To see the list of contact segment attributes to add to a rule, you need **Predefined attributes - View** permissions.
   + **Disconnect reason**: Build rules that check for why a contact disconnected. For example, if the agent disconnected prior to the customer, or if the contact was transferred.
   + **Highest loudness score**: Build rules that check for the peak loudness score (in decibels) during the conversation for the agent or the customer. Higher loudness (for example, over 70 dB) may be associated with excitement or anger, while speech below a certain loudness score (for example, 30 dB or lower) might be hard to understand.
   + **Hold time**: Build rules to identify contacts that had unusual hold times to identify opportunities to handle contacts more efficiently. You can set rules using longest hold time, total hold time, and number of holds. You can also check for hold time as a percentage of the total time the customer was connected with the agent (customer hold time divided by agent interaction duration and customer hold time).
   + **Initiation method**: Build rules that check whether a contact was inbound, outbound, transferred, etc.
   + **Contact attributes**: Build rules that run on the values of custom [contact attributes](what-is-a-contact-attribute.md). For example, you can build rules specifically for a particular line of business or for specific customers, such as based on their membership level, their current country of residence, or if they have an outstanding order. 

     You can add up to five contact attributes to a rule.
   + **Sentiment - Time period**: Build rules that run on the sentiment analysis results (positive, negative, or neutral) over a trailing window of time. 

     For example, you can build a rule for when customer sentiment has remained negative for a set period of time. If the participant joined the contact later, the time period set here applies to when the participant was present.

     When rules are applied to contacts that don't have sentiment data, neutral sentiment is used.
   + **Sentiment - Entire contact**: Build rules that run on the value of sentiment scores over an entire contact. For example, when customer sentiment has remained low for the entire contact, you can create a task for a customer experience analyst to review the call transcript and follow up.

     When rules are applied to contacts that don't have sentiment data, neutral sentiment is used.
   + **Interruptions**: Build rules that detect when the agent has interrupted the customer more than a specified number of times. This feature applies to calls only.
   + **Non-talk time**: Build rules that check for no speech detected. This may include periods of a customer being put on hold. You can check for total non-talk time, longest non-talk time period within a conversation, or percentage of non-talk time during the conversation. High non-talk time, such as a percentage of non-talk time exceeding 50 percent of the conversation, may indicate an opportunity to improve processes or agent coaching opportunities. This feature applies to calls only.
   + **Response time**: Build rules to identify contacts where the participant had a response time longer or shorter than what was expected: Average or Maximum. 

     For example, you can set a rule on the **Agent greeting time**, also known as **First response time**: after the agent joined the chat, how long it took them to send the first greeting message. This helps you identify when an agent took too long to engage with the customer.
   + **Potential disconnect issue**: Build rules that check for technical issues (such as network connectivity or device problems). You can use this to exclude contacts from automated agent performance evaluations when there were connectivity issues outside the agent's control.
   + **Queues**: Build rules that run on a subset of queues or check if the contact was not queued. Often organizations use queues to indicate a line of business, topic, or domain. For example, you could build rules specifically for your sales queues, tracking the impact of a recent marketing campaign, or, alternatively, rules for your customer support queues, tracking overall sentiment. For self-service interactions, you can check if the contact was never queued, potentially indicating successful self-service with an AI agent.

     To see queue names so that you can add them to rules, you need **Queues - View** permissions in your security profile.
   + **Routing profile**: Identify contacts handled by agents mapped to a specific routing profile. The routing profile may indicate agent department or skill proficiency. For example, you might evaluate agents with a New hires routing profile, who are trained on basic troubleshooting, using different evaluation criteria than for tenured multi-skilled agents.

     To see the routing profiles so you can add them to rules, you need **Routing Profiles - View** permissions in your security profile.
   + **Talk time**: Build rules using a threshold of absolute time spent talking by the agent or the customer. You can use this to identify contacts where the customer did not speak at all, leading the agent to disconnect, or where the agent exhibited call avoidance behaviors such as not speaking after picking up the phone.
   + **Agent interaction duration**: Build rules to identify contacts that had an agent interaction longer or shorter than what was expected. This feature applies to calls only.

   The following image shows a sample rule with multiple conditions for a voice contact.  
![\[A sample rule with multiple conditions for a voice contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-conditions.png)

   The following image shows a sample rule with multiple conditions for a chat contact. The rule is triggered when the **First response time** is greater than or equal to 1 minute, and the agent did not mention any of the listed greeting words or phrases in their first response.

   **First response time** = after the agent has joined the chat, how long until they sent the first message to the customer.   
![\[A sample rule with multiple conditions for a chat contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-conditions-chat.png)

1. Choose **Next**.
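One of the conditions above, the **Non-talk time** percentage check, can be sketched in code. This is an illustrative example, not part of the Contact Lens API; the helper names and the 50 percent threshold are assumptions:

```python
# Illustrative sketch (not the Contact Lens API) of a "Non-talk time"
# percentage condition. Helper names and the 50 percent threshold are
# assumptions for this example.

def non_talk_percentage(non_talk_seconds, call_duration_seconds):
    """Return the percentage of the call with no speech detected."""
    return 100.0 * sum(non_talk_seconds) / call_duration_seconds

def exceeds_threshold(non_talk_seconds, call_duration_seconds, threshold_pct=50.0):
    """True when non-talk time exceeds the threshold (for example, 50 percent)."""
    return non_talk_percentage(non_talk_seconds, call_duration_seconds) > threshold_pct

# A 10-minute call with 330 seconds of combined silence and hold time:
print(exceeds_threshold([200.0, 130.0], 600.0))  # 55 percent non-talk -> True
```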

## Step 2: Define rule actions
Step 2: Define rule actions

1. Choose **Add action**. You can choose the following actions:
   + [Create Task](contact-lens-rules-create-task.md): This option is not available for real-time chat.
   + [Send email notification](contact-lens-rules-email.md)
   + [Generate an EventBridge event](contact-lens-rules-eventbridge-event.md)  
![\[The add action dropdown menu, a list of actions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-action-no-wisdom.png)

1. Choose **Next**.

1. Review and make any edits, then choose **Save**. 

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

# Automatically categorize contacts by matching conversations with natural language statements, or specific words and phrases
Automatically categorize contacts

Contact Lens conversational analytics enables you to automatically categorize contacts to identify top drivers, customer experience, and agent behavior for your contacts. On the **Contact details** page for a chat, categories appear above the transcript, as shown in the following image. 

![\[The Contact details page, the Categories section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-category-overview-chat2.png)


Following are some of the key things you can do when you categorize contacts:
+ With generative AI-powered contact categorization, you can provide criteria to categorize contacts in natural language (for example, did the customer try to make a payment on their balance?). 
+ You can provide specific words or phrases spoken by agents or customers to match with a conversation. Contact Lens then automatically labels contacts that meet the match criteria, and provides relevant points from the conversation. 
+ You can define actions to receive alerts and generate tasks on categorized contacts.
+ You can specify additional criteria to categorize contacts, such as customer sentiment score, queues, or any custom attributes that you have added to contacts, like customer loyalty information.

## When to use words or phrases


Using specific words or phrases is useful when there is a well-defined list of words or phrases that you wish to detect, for example, monitoring agent script adherence or assessing customer interest in a product. 

## When to use natural language


Using natural language statements to match with contacts is useful when there are too many possible words or phrases, or when you want to match with context-specific criteria, for example, "The customer wanted to make a change to their subscription plan" or "The agent resolved all of the customer's issues." 

## Add rules to categorize contacts


In this section:
+ [Step 1: Define conditions](#add-category-rules-define-conditions)
+ [Step 2: Define actions](#add-category-rules-define-actions)

### Step 1: Define conditions


1. Log in to Amazon Connect with a user account that is assigned the **CallCenterManager** security profile, or that is enabled for **Rules** permissions.

1. On the navigation menu, choose **Analytics and optimization**, **Rules**. 

1. Select **Create a rule**, **Conversational analytics**. 

1. Assign a name to the rule.

1. Under **When**, use the dropdown list to choose **post-call analysis**, **real-time analysis**, **post-chat analysis**, **real-time chat analysis**, or **email analysis**.  
![\[The new rule page, the When dropdown list.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rule-define-conditions.png)

1. Choose **Add condition**, and then choose the type of match: 
   + **Words or Phrases - Exact Match**: Finds contacts that match with the exact words or phrases. Enter the words or phrases, separated by a comma.
   + **Words or Phrases - Pattern Match**: Finds contacts by looking for a pattern of words or phrases. You can also specify the distance between words. For example, if you are looking for contacts where the word "credit" was mentioned but you do not want to see any mention of the words "credit card," you can define a pattern matching category to look for the word "credit" that is not within a one-word distance of "card."
   + **Natural Language - Semantic Match**: Use generative AI to find contacts that match the provided natural language statement. The statement should be answerable with yes or no. Use this condition when you want to match contacts with context-specific criteria or when there are too many possible words or phrases to list. The following are examples: 
     + "The customer wanted to make a change to their subscription plan."
     + "The customer indicated a desire to terminate their current services."
     + "The agent offered multiple payment options."
     + "The agent assured the customer that their call was important and requested additional waiting time."
     + "The agent resolved all of the customer's issues."
**Note**  
**Natural Language - Semantic Match** conditions cannot be used for real-time analysis.
Creating rules that use generative AI requires an additional permission: **Rules - Generative AI**.

     **Pro tip**: Use the generative AI-powered **Natural Language - Semantic Match** condition if you previously used **Words or Phrases - Semantic Match**. 
   + **Words or Phrases - Semantic Match**: Finds words that may be synonyms. For example, if you enter "upset" it can match "not happy," or "hardly acceptable" can match with "unacceptable," and "unsubscribe" can match with "cancel subscription." Similarly, it can semantically match phrases. For example, "thank you so much for helping me out," "thanks a lot and this is so helpful," and "I am so happy that you are able to help me."

     This removes the need to define an exhaustive list of keywords when creating categories, and lets you cast a wider net when searching for similar phrases that are important to you. For best semantic matching results, provide keywords or phrases with a similar meaning within a semantic matching card. Currently, you can provide a maximum of four keywords and phrases per semantic matching card.

1. Using **Words or Phrases - Exact Match** as an example, enter the words or phrases, separated by a comma, that you want to highlight and choose **Add**. Each word or phrase separated by a comma gets its own line in the card.   
![\[The new rules page, the Words or phrases - Exact match section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-script.png)  
![\[The new rules page, the Words or phrases - Exact match section, the Add button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-script2.png)

   The logic that Contact Lens uses to read these phrases is: (Hello AND thank AND you AND for AND calling AND Example AND Corp) OR (we AND value AND your AND business) OR (how AND may AND I AND assist AND you).

   Alternatively, use a **Natural Language - Semantic Match** condition and enter a natural language statement in the text box that generative AI can evaluate as either true or false.  
![\[The new rules page, the Natural language - Semantic match section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-natural-language-semantic.png)

1. To add more words or phrases, choose **Add group of words or phrases**. In the following image, the first group of words or phrases are what the agent might utter, and the second group is what the customer might utter.  
![\[A Words or phrases - Exact match for agent, the word AND, a Words or phrases section for the customer.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-script3.png)

   1. The logic that Contact Lens uses to read the phrases in the first card is: (Hello AND thank AND you AND for AND calling AND Example AND Corp) OR (we AND value AND your AND business) OR (how AND may AND I AND assist AND you).

   1. The two cards are connected with an AND. This means that one of the phrases in the first card must be uttered AND one of the phrases in the second card must be uttered.

   The logic that Contact Lens uses to read the two cards of words or phrases is (card 1) AND (card 2).

1. Choose **Add condition** to apply the rules to:
   + Specific queues
   + When contact attributes have certain values
   + When sentiment scores have certain values

   For example, the following image shows a rule that applies when an agent is working the BasicQueue or Billing and Payments queues, the customer is for auto insurance, and the agent is located in Seattle.  
![\[A rule with multiple conditions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-3.png)
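The card-matching logic described above can be sketched as boolean logic: words within a phrase are ANDed, phrases within a card are ORed, and cards are ANDed together. The following is an illustrative sketch with hypothetical helper names, not the Contact Lens implementation:

```python
# Illustrative sketch of how Contact Lens reads exact-match cards: within one
# phrase every word must appear (AND), phrases in a card are ORed, and separate
# cards are ANDed. Helper names are hypothetical.

def phrase_matches(phrase, transcript_words):
    """All words of the phrase must appear in the transcript (AND)."""
    return all(word.lower() in transcript_words for word in phrase.split())

def card_matches(card, transcript_words):
    """At least one phrase in the card must match (OR)."""
    return any(phrase_matches(p, transcript_words) for p in card)

def rule_matches(cards, transcript):
    """Every card must match (AND across cards)."""
    words = {w.lower().strip(".,?!") for w in transcript.split()}
    return all(card_matches(c, words) for c in cards)

agent_card = ["Hello thank you for calling Example Corp",
              "we value your business",
              "how may I assist you"]
customer_card = ["cancel my subscription"]

print(rule_matches([agent_card, customer_card],
                   "Hello, thank you for calling Example Corp. "
                   "I want to cancel my subscription today."))  # True
```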

### Step 2: Define actions


In addition to categorizing a contact, you can define what actions Amazon Connect should take: 

1. [Generate an EventBridge event](contact-lens-rules-eventbridge-event.md)

1. [Create Task](contact-lens-rules-create-task.md)

1. [Create Case](contact-lens-rules-create-case.md)

1. [Send email notifications](contact-lens-rules-email.md)

1. [Create a rule that submits an automated evaluation](contact-lens-rules-submit-automated-evaluation.md)

### Step 3: Review and save


1. When done, choose **Save**. 

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

# When a rule or category fails to be evaluated by Amazon Connect Contact Lens
Failed categories or rules

When Amazon Connect Contact Lens evaluates a rule or category during a post-contact analysis for a voice or chat contact, it is possible that the rule or category fails to evaluate. 

Following are the possible category outcomes when a rule or category is evaluated during contact analysis:

1. **Successfully matched and applied to the contact**. When categories are displayed on the **Contact details** page, it indicates they were successfully matched and applied to the contact.

1. **Successfully evaluated but not applied to the contact**. When categories are absent from the **Contact details** page, it indicates that Contact Lens rules evaluated them successfully but they don't apply to the contact.

1. **The contact analysis was completed but a specific category was not evaluated**. When a category fails to be evaluated, it doesn't mean the category doesn't apply to the contact (based on its criteria), but rather that Contact Lens completed the contact analysis without evaluating this specific category. 

Failed categories are denoted by dashed borders, transparent backgrounds, error icons, and a **Failed** prefix, as shown in the following image. When you hover over a failed category, details about why it failed to evaluate are displayed.

![\[The failed categories on the Contact details page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/failed-categories1.png)


Failed categories occur only for rules with a semantic match condition. The two possible reasons are:

1. **Quota exceeded**: Your generative AI actions quota was exceeded for that time period. You can request a quota increase through AWS Support.

1. **Failed safety guidelines**: Category processing failed because it did not satisfy security and quality guardrails.

We recommend adding more conditions to your semantic match rules to narrow down the number of contacts they may apply to. This helps avoid quota exceeded failures.

## Contact Lens post-contact analysis output file in your S3 bucket


Failed categories appear in the analysis file under `JobDetails` > `SkippedAnalysis`.

The `SkippedAnalysis` section lists analyses that were marked as skipped, even though the overall analysis completed for that contact. Each element contains the properties `Feature` and `ReasonCode`. `POST_CONTACT_SUMMARY` is one of the existing features.

`CATEGORIZATION` is added as a new feature in the skipped analysis. There is one element in the `SkippedAnalysis` array for each unique `ReasonCode` that resulted in failed categorization. Each element includes a `SkippedEntities` property that lists all category names (and their associated rule IDs) that failed due to that reason code.

Following is an example of failed categories within `JobDetails`:

```
"JobDetails": {
    "SkippedAnalysis": [
        {
            "Feature": "CATEGORIZATION",
            "ReasonCode": "QUOTA_EXCEEDED", 
            "SkippedEntities": [
                {
                    "CategoryName": "PotentialFraud"
                    "RuleId": "a1130485-9529-4249-a1d4-5738b4883748"
                },
                {
                    "CategoryName": "Refund"
                    "RuleId": "bbbbbbb-9529-4249-a1d4-5738b4883748"
                }
            ]
        },
        {
            "Feature": "CATEGORIZATION",
            "ReasonCode": "FAILED_SAFETY_GUIDELINES", 
            "SkippedEntities": [
                {
                    "CategoryName": "ManagerEscalation"
                    "RuleId": "cccccccc-9529-4249-a1d4-5738b4883748"
                },
            ]
        },
        {
            "Feature": "POST_CONTACT_SUMMARY",
            "ReasonCode": "INSUFFICIENT_CONVERSATION_CONTENT"
        }
    ]
},
```

For more information, see [Example Contact Lens conversational analytics output files for a call](contact-lens-example-output-files.md).
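To see how you might work with this output, the following is an illustrative sketch that collects failed categories by reason code from the `JobDetails` section shown above. The function name is hypothetical; the JSON keys match the example:

```python
# Illustrative sketch: group failed categories by ReasonCode from the
# JobDetails section of a Contact Lens analysis output file. The function
# name is hypothetical; the keys match the documented example.

def failed_categories(job_details):
    """Map each ReasonCode to the category names that failed for it."""
    failures = {}
    for item in job_details.get("SkippedAnalysis", []):
        if item.get("Feature") != "CATEGORIZATION":
            continue  # e.g. POST_CONTACT_SUMMARY entries have no categories
        names = [e["CategoryName"] for e in item.get("SkippedEntities", [])]
        failures.setdefault(item["ReasonCode"], []).extend(names)
    return failures

job_details = {
    "SkippedAnalysis": [
        {"Feature": "CATEGORIZATION", "ReasonCode": "QUOTA_EXCEEDED",
         "SkippedEntities": [
             {"CategoryName": "PotentialFraud",
              "RuleId": "a1130485-9529-4249-a1d4-5738b4883748"},
             {"CategoryName": "Refund",
              "RuleId": "bbbbbbb-9529-4249-a1d4-5738b4883748"}]},
        {"Feature": "POST_CONTACT_SUMMARY",
         "ReasonCode": "INSUFFICIENT_CONVERSATION_CONTENT"},
    ]
}
print(failed_categories(job_details))
# {'QUOTA_EXCEEDED': ['PotentialFraud', 'Refund']}
```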

# Add real-time alerts to Contact Lens for supervisors based on keywords and phrases in a call
Alert supervisors in real-time for calls

After you [enable real-time analytics](enable-analytics.md) in your flow, you can add rules that automatically alert supervisors when a customer experience issue occurs. 

For example, Contact Lens can automatically send an alert when certain keywords or phrases are mentioned during the conversation, or when it detects other criteria. The supervisor sees the alert on the real-time metrics dashboard. From there, supervisors can listen in to the live call, and provide guidance to the agent over chat to help them resolve the issue faster.

The following image shows an example of what a supervisor would see on the real-time metrics report when they get an alert. In this case, Contact Lens has detected an angry customer situation. 

![\[The real-time metrics page, an alert for an angry customer.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-real-time-metrics-alert2.png)


When the supervisor listens in to a live call, Contact Lens provides them with a real-time transcript and customer sentiment trend that helps them understand the situation and assess the appropriate action. The transcript also eliminates the need for customers to repeat themselves if they are transferred to another agent. 

The following image shows a sample real-time transcript.

![\[A sample real-time transcript.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-real-time-transcript.png)


## Add rules for real-time alerts for calls


1. Log in to Amazon Connect with a user account that is assigned the **CallCenterManager** security profile, or that is enabled for **Rules** permissions.

1. On the navigation menu, choose **Analytics and optimization**, **Rules**. 

1. Select **Create a rule**, **Conversational analytics**. 

1. Assign a name to the rule.

1. Under **When**, use the dropdown list to choose **real-time analysis**.

1. Choose **Add condition**, and then choose the type of match: 
   + **Exact Match**: Finds only the exact words or phrases.
   + **Pattern Match**: Finds matches that may be less than 100 percent exact. You can also specify the distance between words. For example, you might look for contacts where the word "credit" was mentioned, but you do not want to see any mention of the words "credit card." You can define a pattern matching category to look for the word "credit" that is not within a one-word distance of the word "card." 
**Tip**  
Semantic Match isn't available for real-time analysis.

1. Enter the words or phrases, separated by a comma, that you want to highlight. Real-time rules support only keywords or phrases that **were mentioned**.   
![\[A words and phrases rule.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-alert-rules-1.png)

1. Choose **Add**. Each word or phrase separated by a comma gets its own line.  
![\[A words and phrases rule with multiple phrases, each on its own line.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-alert-rules-2.png)

   The logic that Contact Lens uses to read these words or phrases is: (Talk OR to OR your OR manager) OR (this OR is OR not OR helpful) OR (speak OR to OR your OR supervisor), etc.

1. To add more words or phrases, choose **Add group of words or phrases**. In the following image, the first group of words or phrases are what the agent might utter. The second group is what the customer might utter.  
![\[A words and phrases rule with multiple phrases for customer and agent.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-script3.png)

   1. In this first card, Contact Lens reads each line as an OR. For example: (Hello) OR (thank OR you OR for OR calling OR Example OR Corp) OR (we OR value OR your OR business).

   1. The two cards are connected with an AND. This means that one of the phrases in the first card must be uttered AND one of the phrases in the second card must be uttered.

   The logic that Contact Lens uses to read the two cards of words or phrases is (card 1) AND (card 2).

1. Choose **Add condition** to apply the rules to:
   + Specific queues
   + When contact attributes have certain values
   + When sentiment scores have certain values

   For example, the following image shows a rule that applies when an agent is working the BasicQueue or Billing and Payments queues, the customer is for auto insurance, and the agent is located in Seattle.  
![\[A words and phrases rule with multiple conditions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-3.png)

1. When done, choose **Next**. 

1. In the **Assign contact category** box, add a name for the category. For example, **Compliant** or **Not Compliant**.

1. Choose **Next**, then choose **Save and publish**.
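The real-time matching described above differs from exact match: each line in a card is read as an OR of its words, so any single word from any phrase can trigger a match. The following is an illustrative sketch, not the Contact Lens implementation:

```python
# Illustrative sketch of real-time word matching: every word in every phrase
# of a card is ORed, so a single mentioned word matches the card. The helper
# name is hypothetical.

def realtime_card_matches(card, utterance):
    """True if any word from any phrase in the card was mentioned."""
    spoken = {w.lower().strip(".,?!") for w in utterance.split()}
    return any(word.lower() in spoken
               for phrase in card
               for word in phrase.split())

card = ["Talk to your manager", "this is not helpful", "speak to your supervisor"]
print(realtime_card_matches(card, "Please connect me to a supervisor."))  # True
```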

# Add real-time alerts to Contact Lens for supervisors based on keywords and phrases in a chat
Alert supervisors in real-time in chat

After you [enable real-time analytics](enable-analytics.md) in your flow, you can add rules that automatically alert supervisors when a customer experience issue occurs. 

For example, Contact Lens can automatically send an alert when certain keywords or phrases are mentioned during the chat, or when it detects other criteria. The supervisor can then view the **Contact details** page for a real-time chat to view the issue. From there, supervisors can join the chat, and provide guidance to the agent over chat to help them resolve the issue faster.

The following image shows an example of what a supervisor would see on the **Contact details** page when they get an alert for a real-time chat. In this case, Contact Lens has detected an angry customer situation. 

![\[The contact details page, an alert for an angry real-time chat customer.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-realtime-alert-chat.png)


When the supervisor monitors a chat, Contact Lens provides them with a real-time transcript and customer sentiment trend that helps them understand the situation and assess the appropriate action. The transcript also eliminates the need for customers to repeat themselves if they are transferred to another agent. 

## Add rules for real-time alerts for chats


1. Log in to Amazon Connect with a user account that is assigned the **CallCenterManager** security profile, or that is enabled for **Rules** permissions.

1. On the navigation menu, choose **Analytics and optimization**, **Rules**. 

1. Select **Create a rule**, **Conversational analytics**. 

1. Assign a name to the rule.

1. Under **When**, use the dropdown list to choose **real-time analysis**.

1. Choose **Add condition**, and then choose the type of match. The following image shows a rule configured for a **Sentiment - Time period** condition.   
![\[The conditions for a real-time chat analysis rule.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-realtime-chat-rule2.png)

   Choose from the following options:
   + **Exact Match**: Finds only the exact words or phrases.
   + **Pattern Match**: Finds matches that may be less than 100 percent exact. You can also specify the distance between words. For example, you might look for contacts where the word "credit" was mentioned, but you do not want to see any mention of the words "credit card." You can define a pattern matching category to look for the word "credit" that is not within a one-word distance of the word "card." 
**Tip**  
Semantic Match isn't available for real-time analysis.

1. Enter the words or phrases, separated by a comma, that you want to highlight. Real-time rules support only keywords or phrases that **were mentioned**.   
![\[A words and phrases rule.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-alert-rules-1.png)

1. Choose **Add**. Each word or phrase separated by a comma gets its own line.  
![\[A words and phrases rule with multiple phrases, each on its own line.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-alert-rules-2.png)

   The logic that Contact Lens uses to read these words or phrases is: (Talk OR to OR your OR manager) OR (this OR is OR not OR helpful) OR (speak OR to OR your OR supervisor), etc.

1. To add more words or phrases, choose **Add group of words or phrases**. In the following image, the first group of words or phrases are what the agent might mention. The second group is what the customer might mention.  
![\[A words and phrases rule with multiple phrases for customer and agent.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-script3.png)

   1. In this first card, Contact Lens reads each line as an OR. For example: (Hello) OR (thank OR you OR for OR calling OR Example OR Corp) OR (we OR value OR your OR business).

   1. The two cards are connected with an AND. This means that one of the phrases in the first card must be mentioned AND one of the phrases in the second card must be mentioned.

   The logic that Contact Lens uses to read the two cards of words or phrases is (card 1) AND (card 2).

1. Choose **Add condition** to apply the rules to:
   + Specific queues
   + When contact attributes have certain values
   + When sentiment scores have certain values

   For example, the following image shows a rule that applies when an agent is working the BasicQueue or Billing and Payments queues, the customer is for auto insurance, and the agent is located in Seattle.  
![\[A words and phrases rule with multiple conditions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-3.png)

1. When done, choose **Next**. 

1. In the **Assign contact category** box, add a name for the category. For example, **Compliant** or **Not Compliant**.

1. Choose **Add action** to specify what action Amazon Connect should take when the conditions are met. You can configure supervisor alerts by using email notifications or by developing a custom integration with EventBridge.  
![\[The Generate an EventBridge event and Send email notification options.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-realtime-chat-rule3.png)

1. If you chose **Send email notification**, see [Create rules that send email notifications](contact-lens-rules-email.md) for more details about completing the page and for information about email limits. 

   If you chose **Generate an EventBridge event**, see [Create a rule that generates an EventBridge event](contact-lens-rules-eventbridge-event.md) for more details about completing the page and for information about subscribing to EventBridge event types.

# Create rules that send email notifications
Create rules that send email notifications

You can create rules that send email notifications to people in your organization. This helps you to respond more expediently to potential issues in your contact center. For example, you can create a rule to notify:
+ A team supervisor when there is an account escalation or cancellation.
+ A group of people in your contact center as a result of certain words being mentioned during a conversation.
+ A designated person in your contact center when a disagreement occurs during the call.
+ The agent who handled the contact that was analyzed or evaluated with Amazon Connect conversational analytics.

**Important**  
All emails are sent from `no-reply@amazonconnect.com`. 
SAML users don't have primary email addresses; they have username logins. A username login is typically an email address, but it doesn't have to be. For these users, the **Email address** field is empty inside Amazon Connect. For SAML users to receive email notifications, they must have a secondary email configured. If a secondary email is not configured, the user won't receive the email.

**To create a rule that sends an email notification**

1. Log in to Amazon Connect with a user account that has the [required permissions](permissions-for-rules.md) to create rules.

1. Navigate to **Analytics and optimization**, **Rules**.

1. On the **Rules** page, choose **Create a rule**, and then from the dropdown list, choose **Conversational analytics** or **Evaluation forms**.  
![\[The rules page, the create a rule dropdown list, the contact lens option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-create-rule.png)

1. On the **New rule** page, define the conditions for the rule. For more information, see:
   + [Define rule conditions for conversational analytics](build-rules-for-contact-lens.md#rule-conditions)
   + [Define rule conditions for evaluation forms](create-evaluation-rules.md#rule-conditions-eval).

1. When you define actions for the rule, choose **Send email notification** for the action.  
![\[The new rule page, the add action dropdown list, the send email notification action.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-email-action.png)

1. In the **Send email notification** section, choose who is going to receive the email by using one of these options: 
   + **Select recipients by login**: Routes the email to the specified user.
**Important**  
SAML users must have a secondary email configured in order to receive the notification. If a secondary email is not configured, the user won't receive the email.
   + **Select recipients by tags**: Routes the email dynamically based on the agent's tag values.
   + **Select the agent who handled the contact**: Routes the email to the agent who handled the contact.

   In the following image, the rule sends a notification email to the agent who handled the contact.   
![\[The Send email notification section, the Select the agent who handled the contact option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-email-tag.png)

1. In **Subject**, add the email subject. In **Body**, add the contents of the email notification.

   Use **@ to add dynamic variables** that are populated during execution of the rule. For conversational analytics rules and evaluation forms rules, you can add **rule name**, **instance URL**, **contact**, **agent**, and **queue** information for the contact that matched the rule. Evaluation forms rules additionally enable you to insert the **evaluation ID**.   
![\[The body of the email, the list of available variables.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/rules-send-email-dynamic-variables.png)
**Note**  
Other rule types support different variables:  
Real-time metrics rules enable you to insert the **rule name**, **instance URL**, and the list of **agents, queues, flows, or routing profiles** that breached the threshold to trigger the alert.
Rules for cases allow you to insert **rule name, instance URL** and **case ID**.

1. Choose **Next**. Review your selections, and then choose **Save**.

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

## Email limits
Email limits
+ Amazon Connect has a default limit of 500 emails a day. When that limit is exceeded, the Amazon Connect instance is blocked for 24 hours from sending more email. This is because the emails are subject to bounce and complaint limits. For more information, see the **Bounce** and **Complaint** sections in [Understanding email deliverability in Amazon SES](https://docs.aws.amazon.com/ses/latest/dg/send-email-concepts-deliverability.html). 
+ All emails are sent from `no-reply@amazonconnect.com`, which you cannot customize.
+ SAML users don't have primary email addresses; they have username logins. A username login is typically an email address, but it doesn't have to be. For these users, the **Email address** field is empty inside Amazon Connect. SAML users must have a secondary email configured to receive email notifications. If a secondary email is not configured, the user won't receive the email.

If the default option for sending emails does not meet your requirements, contact your Technical Account Manager or Support to discuss options with the Amazon Connect service team.

# Create a rule that generates an EventBridge event
Create a rule that generates an EventBridge event

In real-time or post-call/chat, you can get events and use them to trigger notifications or alerts, or to aggregate reports outside of Amazon Connect. There's a lot you can do with this data. For example: 
+ Get real-time alerts in a QuickSight dashboard.
+ Create aggregated reports outside of Amazon Connect.
+ Join the data with your CRM.
+ Connect your notification solution to EventBridge so that, by the end of the day, all events of a certain type go to a specific inbox. The payload tells you the contact, agent, and queue. 

**Note**  
 For real-time metrics rules, the resources that triggered the rule are listed under **resources**. For example, if you create a rule that alerts you on queue metrics, such as average queue answer time, the queues that breached the threshold are listed there. 

**To create a rule that generates an EventBridge event**

1. When you create your rule, choose **Generate EventBridge event** for the action.  
![\[The new rule page, the take these actions section, the add action dropdown list, the Generate an EventBridge event action.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-events-example1.png)

1. For **Action name**, enter the name for the event payload.
**Note**  
The value you assign for **Action name** is visible in the EventBridge payload. When you aggregate events, the action name provides an additional dimension that you can use to process them. For example, you have 200 category names, but only 50 have a specific action name, such as NOTIFY\_CUSTOMER\_RETENTION.  
![\[The take these actions section, the assign contact category section, the Generate an EventBridge event section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-add-eb-action.png)

1. Choose **Next**. Review your selections, and then choose **Save**.

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

1. To leverage the EventBridge data, subscribe to the EventBridge event type. See the next procedure.

## Subscribe to EventBridge event types
Subscribe to EventBridge event types

To subscribe to EventBridge event types, create a custom EventBridge rule that matches the following:
+ "source" = "aws.connect"
+ "detail-type" = "Contact Lens Post Call Rules Matched" or one of the following:
  + **Contact Lens Realtime Rules Matched**
  + **Contact Lens Realtime Chat Rules Matched**
  + **Contact Lens Post Chat Rules Matched**
  +  **Contact Lens Evaluation Rules Matched**
  + **Metrics Rules Matched**

The following image shows these settings in the Event pattern section of the new rule page.

![\[The Event pattern section of the new EventBridge rule page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-eb-rules-events.png)
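
For reference, a matching event pattern in the EventBridge JSON editor might look like the following sketch; include only the detail types you want to subscribe to:

```
{
  "source": ["aws.connect"],
  "detail-type": [
    "Contact Lens Post Call Rules Matched",
    "Contact Lens Post Chat Rules Matched",
    "Contact Lens Realtime Rules Matched",
    "Contact Lens Realtime Chat Rules Matched",
    "Contact Lens Evaluation Rules Matched",
    "Metrics Rules Matched"
  ]
}
```

EventBridge event patterns express values as arrays; an event matches when its field equals any value in the array.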


### Example EventBridge payloads
Example EventBridge payloads

Following is an example of what the EventBridge payload looks like when the detail type is **Contact Lens Post Call Rules Matched**. 

```
{
 "version": "0", // set by EventBridge
 "id": "aaaaaaaa-bbbb-cccc-dddd-bf3703467718", // set by EventBridge
 "source": "aws.connect",
 "detail-type": "Contact Lens Post Call Rules Matched", 
 "account": "your AWS account ID",
 "time": "2020-04-27T18:43:48Z",
 "region": "us-east-1", // set by EventBridge
 "resources": ["arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN"],
 "detail": {
    "version": "1.0",
    "ruleName": "ACCOUNT_CANCELLATION", // Rule name
    "actionName": "NOTIFY_CUSTOMER_RETENTION",  
    "instanceArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN",
    "contactArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN/contact/contact-ARN",
    "agentArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN/agent/agent-ARN",
    "queueArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN/queue/queue-ARN"
    }
}
```

Following is an example of what the payload looks like when the detail type is **Contact Lens Realtime Rules Matched**. 

```
{
 "version": "0", // set by EventBridge
 "id": "aaaaaaaa-bbbb-cccc-dddd-bf3703467718", // set by EventBridge
 "source": "aws.connect",
 "detail-type": "Contact Lens Realtime Rules Matched", 
 "account": "your AWS account ID",
 "time": "2020-04-27T18:43:48Z",
 "region": "us-east-1", // set by EventBridge
 "resources": ["arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN"],
 "detail": {
    "version": "1.0",
    "ruleName": "ACCOUNT_CANCELLATION", // Rule name
    "actionName": "NOTIFY_CUSTOMER_RETENTION",
    "instanceArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN",
    "contactArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN/contact/contact-ARN",
    "agentArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN/agent/agent-ARN",
    "queueArn": "arn:aws:connect:us-east-1:your AWS account ID:instance/instance-ARN/queue/queue-ARN"
    }
}
```
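
These payloads can be consumed programmatically, for example from an AWS Lambda function that EventBridge invokes as a target. The following is a minimal, illustrative Python sketch (the function name is ours, not part of any Amazon Connect API) that pulls the rule details out of such an event:

```python
# Illustrative sketch: extract the rule details from a Contact Lens
# rules-matched event like the example payloads shown above.
def handle_rules_matched_event(event):
    """Return the rule-related fields from the event's "detail" section."""
    detail = event.get("detail", {})
    return {
        "ruleName": detail.get("ruleName"),
        "actionName": detail.get("actionName"),
        "contactArn": detail.get("contactArn"),
        "agentArn": detail.get("agentArn"),
        "queueArn": detail.get("queueArn"),
    }
```

You could use the extracted `actionName` as the routing dimension described earlier, for example to decide which inbox or downstream system receives the notification.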

# Create a rule that generates a task
Create a rule that generates a task

Amazon Connect rules enable you to generate tasks. This helps you create traceable actions with owners, and gives you visibility into task completion and productivity out of the box.

Following are some examples:
+ Review a contact when a customer appears potentially fraudulent. For example, you can create a follow-up task when a customer utters words or phrases that suggest potential fraud.
+ Follow up when the customer mentions specific topics, so you can reach out later to upsell or provide additional support.
+ Evaluate agent performance in specific scenarios, for example, when customer sentiment was very low during the conversation and the customer expressed frustration.
+ Take operational actions, such as assigning additional agents to queues where the average queue answer time in the last hour has exceeded acceptable thresholds.

**To create a rule that creates a task**

1. When you create your rule, choose **Create Task** for the action.  
![\[The new rule page, the add action dropdown menu, the create task option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-add-task-example1.png)

1. Complete the task fields as follows:  
![\[The new rule page, the assign contact category section, the Create task section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-add-tasks-example2.png)

   1. **Category name**: The category name appears in the contact record. Max length: 200 characters.

   1. **Name**: The name appears in the agent's Contact Control Panel (CCP). Max length: 512 characters. 

   1. **Description**: The description appears in the agent's Contact Control Panel (CCP). Max length: 4096 characters.
**Note**  
 In Name and Description, use **@ to add dynamic variables** that are populated during execution of the rule. For conversational analytics rules and evaluation forms rules, you can add **rule name, instance URL, contact, agent** and **queue** information for the contact that matched the rule. Evaluation forms rules additionally enable you to insert the **evaluation ID**.   

![\[The task action with dynamic variables.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/rules-create-task-dynamic-variables.png)

Other rule types support different variables:  
Real-time metrics rules enable you to enter the **rule name, instance URL, and list of agents, queues, flows, or routing profiles** that breached the threshold to trigger the alert.
Rules for cases allow you to insert the **rule name, instance URL**, and **case ID**.

   1. **Task reference name**: This is a default reference that automatically appears in the agent's CCP.
      + For real-time rules, the task reference links to the Real-time details page. 
      + For post-call/chat rules, the task reference links to the **Contact details** page. 

   1. **Additional Reference name**: Max length: 4096 characters. You can add up to 25 references.

   1. **Select a flow**: Choose the flow that is designed to route the task to the appropriate owner of the task. The flow must be saved and published for it to appear in your list of options in the dropdown.

1. The following image shows an example of how this information appears in the agent's CCP.  
![\[A task in the agent's Contact Control Panel.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-add-tasks-ccp.png)

   In this example, the agent sees the following values for **Name**, **Description**, and **Task reference name**:

   1. **Name** = **Action-Required-Contact Lens- ba2cf8fe....** 

   1. **Description** = **Test**

   1. **Task reference name** = taskRef and the URL to the Real-time details page

1. Choose **Next**. Review, and then choose **Save**. 

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

## Voice and task contact records are linked
Voice and task contact records are linked

When a rule creates a task, a contact record is automatically generated for the task. It's linked to the contact record of the voice call or chat that met the criteria for the rule to create the task.

For example, a call comes into your contact center and generates contact record 1 (CTR1):

![\[Information on the initial contact record when a call comes in.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-attributes-example1.png)


The Rules engine generates a task. In the contact record for the task, the voice contact record appears as the **Previous contact ID**. In addition, the task contact record inherits contact attributes from the voice contact record, as illustrated in the following image:

![\[Contact record 2 for the task.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-attributes-example2.png)


## About dynamic values for ContactId, AgentId, QueueId, RuleName
About dynamic values in brackets

The dynamic values in brackets [ ] are called [contact attributes](what-is-a-contact-attribute.md). Contact attributes enable you to store temporary information about the contact so you can use it in a flow.

When you add contact attributes in brackets [ ], such as ContactId, AgentId, QueueId, or RuleName, the value is passed from one contact record to another. You can use contact attributes in your flow to branch and route the contact accordingly.

For more information, see [Use contact attributes](connect-contact-attributes.md).

# Create a rule in Contact Lens that ends associated tasks from a case
Create a rule that ends associated tasks from a case

**To create a rule that ends associated tasks**

1. When you create your rule, choose **A new case is updated** as the event source.  
![\[The define condition page, the A new case is updated event source.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-update-case-1.png)

1. When you create your rule, choose **End tasks** for the action.  
![\[The new rule page, the add action dropdown menu, the end tasks option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-ends-tasks-2.png)  
![\[The end tasks option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-ends-tasks-3.png)

1. Choose **Next**. Review and then choose **Save**.

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

# Create a rule in Contact Lens that creates a case
Create a rule that creates a case

**To create a rule that creates a case**

1. When you create your rule, choose **Post-call analysis is available**, **Post-chat analysis is available**, or **Email analysis is available** as the event source.  
![\[The define condition page, choose Post-call analysis is available, Post-chat analysis is available, or Email analysis is available as event source.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-create-case-1.png)

1. Choose **Next**.

1. On the actions page, choose **Create case** for the action.  
![\[The new rule page, the add action dropdown menu, the create case option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-create-case-2.png)

1. In the **Create case** card, select a **Case template**.  
![\[In the Create case card, select a Case template.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-create-case-3.png)

1. Fill out the **required fields** and add **optional case fields** to populate case data.
**Note**  
A customer profile must be associated with a contact for this action to work. For more information, see [Enable Cases](enable-cases.md).  
![\[Fill out the required fields and add optional case fields to populate case data.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-create-case-4.png)

1. Choose **Next**. Review and then choose **Save**.

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

# Create a rule in Contact Lens that updates a case
Create a rule that updates a case

**To create a rule that updates a case**

1. When you create your rule, choose **A case is updated** as the event source and choose **Next**.  
![\[The new rule page, the add action dropdown menu, the a case is updated option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-update-case-1.png)

1. When you create your rule, choose **Update case** for the action.  
![\[The new rule page, the add action dropdown menu, the update case option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-update-case-2.png)

1. Select any case field that you want to update from the dropdown and define its new value.  
![\[Select any case field that you want to update from the dropdown and define its new value.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-update-case-3.png)  
![\[Select any case field that you want to update from the dropdown and define its new value.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-update-case-4.png)

1. Choose **Next**. Review and then choose **Save**.

1. After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

   You cannot apply rules to past, stored conversations. 

# Create a rule in Contact Lens that submits an automated evaluation
Create a rule that submits an automated evaluation

Contact Lens enables you to automatically fill and submit evaluations by using insights and metrics from conversational analytics. 

## Step 1: Configure automation on the evaluation form


Before you can create a rule that submits an automated evaluation, you need to configure automation on the evaluation form. For detailed instructions, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate) in [Create an evaluation form](create-evaluation-forms.md).

Following is an overview of the steps:

1.  Set up automation on every question in an evaluation form.

1.  Turn on **Enable automated submission of evaluations** before activating the evaluation form.

1.  When you activate the evaluation form with automation configured, a prompt is displayed for you to create a rule, as shown in the following image.   
![\[A prompt to create a rule.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/create-a-rule-to-submit-automated-evaluations-1.png)

1.  Choose **Create a rule**. 

1. On the **Rules** page, define a rule that specifies which contacts are automatically evaluated using the selected evaluation form. The following procedure provides instructions.

## Step 2: Define a rule that specifies which contacts are automatically evaluated


You can trigger automated evaluations with two types of rules:
+ A **Conversational analytics** rule that automatically evaluates the contact after Contact Lens completes its analysis.
+ An **Evaluation forms** rule that can be used to trigger a situation-specific evaluation form as an outcome of a generic evaluation form. For example, if the answer to the evaluation question *Was the customer interested in purchasing a product?* is *Yes*, then you can trigger another evaluation form measuring *Agent sales performance*.

### Trigger automated evaluations with a conversational analytics rule


This is the default rule type that is selected when you create a rule to submit an automated evaluation during form activation. You can also create such a rule by selecting **Create a rule**, **Conversational analytics** on the **Rules** page.

1. Choose **A Contact Lens post-call analysis is available** or **A Contact Lens post-chat analysis is available** as the event source. These two options are highlighted in the following image.  
![\[The post-call analysis and post-chat analysis options.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/defined-conditions-evaluations.png)

1. Define conditions to identify contacts to be automatically evaluated, and then choose **Next**.

   Example conditions that you can use to identify the specific set of agents or contacts on which the evaluation form is applicable are: 
   + Agents
   + Agent hierarchy
   + AI agent
   + Queues
   + Initiation method

   In addition, you can exclude contacts that may have ended prematurely due to connectivity or other issues using conditions such as:
   + Interaction duration (for example, over 30 seconds)
   + Talk time (for example, the customer speaks for over 10 seconds)
   + Potential disconnect issue (for example, when no disconnect issue exists, or there is no known connectivity or device issue during the conversation)

1. On the **Define actions** page, provide a category name to identify the rule.

1. Choose **Add action**, select **Submit automated evaluation**, and select the form that you want to use for automatically submitting an evaluation. (This action is already selected on the page if you created the rule when you activated the form.)

1. Choose **Next**. Review and then choose **Save and Publish**.

After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

**Important**  
You cannot apply rules to past, stored conversations.

### Trigger automated evaluations with an evaluation forms rule


1. Go to the **Rules** page. Select **Create a rule**, **Evaluation forms**.

1. Under **When**, select the event source as **A Contact Lens evaluation result is available**.

1. Choose **Add condition** to trigger a situation-specific evaluation. For example:
   + A specific answer on another evaluation, shown in the following image.  
![\[A specific answer on another evaluation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/add-condition-1.png)
   + The score of another evaluation form, shown in the following image.  
![\[The score of another evaluation form.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/add-condition-2.png)

1. Choose **Add action**, select **Submit automated evaluation**, and select the form that you want to use for automatically submitting an evaluation.

1. Choose **Next**. Review and then choose **Save and Publish**.

## Frequently Asked Questions (FAQ)


1.  **Can an automated evaluation override an evaluation that has been manually submitted?** 

    No, an automated evaluation cannot override a manually submitted evaluation. If an evaluation already exists, then the automated evaluation will fail for that contact and account administrators can see such failure notifications within CloudWatch.

1.  **How do I identify automated evaluations?** 

    If an evaluation is automatically submitted, it is marked as "submitted by Contact Lens automation" on the **Contact details** page. If an automated evaluation is edited and re-submitted by an evaluator, the "submitted by" contains the name of the evaluator. 

1.  **Can I automatically evaluate a contact using multiple evaluation forms?** 

    Yes, you can automatically submit evaluations on a contact using multiple evaluation forms. You need to create multiple rules to submit automated evaluations using the different evaluation forms.

# Use a Word or phrase condition in a Contact Lens rule
Exact match, pattern match, and semantic match

Within a Contact Lens **conversational analytics** rule, you have the option to specify a Words or phrases condition. You can choose Exact Match, Semantic Match, or Pattern Match for the words or phrases. This topic explains each type of match.

**Note**  
None of the three match types is case sensitive. For example, if you specify the word "billing", it also matches a transcript containing the word "Billing".

## How to use exact match


**Exact Match** is an exact word match, which can be either singular or plural.

You can add the keywords or phrases by using either of the following methods:
+ Selecting **Enter keywords or phrases** and entering values manually in the text box. Separate multiple values with commas.  
![\[Enter keywords or phrases option in the UI.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/exact-match-1.png)
+ Selecting **Import from word collection** to import pre-defined words and phrases from word collections.  
![\[Import from word collection option in the UI.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/exact-match-2.png)

Word collections come in two types: user word collections and system word collections. System word collections are pre-defined by Amazon Connect and cannot be edited by users. User word collections can be created, read, updated, and deleted (CRUD) by users. For more information, see [Manage word collections when you create conversational analytics rules in Contact Lens](manage-word-collections.md).

## How to use pattern match


If you want to match related words, append an asterisk (\*) to the criteria. For example, if you want to match all variations of "neighbor" (neighbors, neighborhood), type **neighbo\***.
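
As an illustrative sketch only (this is not the Contact Lens matching engine), the trailing asterisk can be approximated with a regular expression that treats the keyword as a prefix followed by any additional word characters:

```python
import re

# Illustrative approximation of Pattern Match's trailing-asterisk behavior:
# "neighbo*" matches neighbor, neighbors, neighborhood, and so on.
def keyword_to_regex(keyword):
    if keyword.endswith("*"):
        # Prefix match: the keyword minus "*", plus any word characters.
        return re.compile(r"\b" + re.escape(keyword[:-1]) + r"\w*\b", re.IGNORECASE)
    # Plain keyword: match the whole word, case-insensitively.
    return re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
```

The `re.IGNORECASE` flag mirrors the fact that none of the match types is case sensitive.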

With **Pattern Match** you can specify the following:
+ **List of values**: This is useful when you want to build expressions with interchangeable values. For example, the expression might be: 

  *I'm calling about a power outage in ["Beijing" or "London" or "New York" or "Paris" or "Tokyo"]*

  Then in your list of values you would add the cities: Beijing, London, New York, Paris, Tokyo. 

  The advantage of using values is that you can create one expression, instead of multiple. This reduces the number of cards that you need to create.
+ **Number**: This option is used most frequently in compliance scripts, or when you're looking for context where you know there's a number (in digits [0-9]) somewhere in between. This way you can put all of your criteria into one expression instead of two. For example, an agent compliance script might say:

  *I have been in this industry for [num] years and would like to discuss this topic with you.*

  Or a customer might say: 

  *I have been a member for [num] years.*
**Note**  
When extracting numbers from chat or audio transcripts, only numerical digits (0-9) are recognized.
For voice contacts, certain languages may not convert spoken numbers into digital format during [number transcription](https://docs.aws.amazon.com/transcribe/latest/dg/how-numbers.html). This means number pattern matching might not work in these cases. For a list of which languages support number transcription, see [Supported languages and language-specific features](https://docs.aws.amazon.com/transcribe/latest/dg/supported-languages.html) in the *Amazon Transcribe Developer Guide*. 
+ **Proximity definition**: Finds matches that may be less than 100 percent exact. You can also specify the distance between words. For example, if you are looking for contacts where the word "credit" was mentioned but you do not want to see any mention of the words "credit card," you can define a pattern matching category to look for the word "credit" that is not within a one-word distance of "card."

  For example, a proximity definition might be:

  *credit [is not within 1 word from] card*
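
To make the proximity idea concrete, the following is an illustrative Python sketch (our own approximation, not how Contact Lens implements the match) of a condition like *credit [is not within 1 word from] card*:

```python
import re

# Illustrative approximation of a proximity condition:
#   credit [is not within 1 word from] card
def credit_not_near_card(transcript, distance=1):
    """Return True if "credit" appears somewhere not within
    `distance` words of "card"."""
    words = re.findall(r"[a-z']+", transcript.lower())
    for i, word in enumerate(words):
        if word != "credit":
            continue
        # Look at the neighboring words within the given distance.
        window = words[max(0, i - distance): i + distance + 1]
        if "card" not in window:
            return True  # this mention of "credit" is far enough from "card"
    return False
```

A transcript such as "I lost my credit card" would not match, while "my credit score dropped" would.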

**Tip**  
For a list of languages supported by pattern match, see [AI features](supported-languages.md#supported-languages-contact-lens). 

## How to use semantic match


Semantic matching is supported only for post-call/chat analysis.
+ An "intent" is an example utterance. It can be a phrase or a sentence.
+ You can enter up to four intents in one card (group).
+ We recommend using semantically similar intents within one card to get the best results. For example, suppose there's a category for "politeness" that includes two intents: "greetings" and "goodbyes". We recommend separating these intents into two cards:
  + Card 1: "How are you today" and "How’s everything going". They are semantically similar greetings.
  + Card 2: "Thanks for contacting us" and "Thank you for being our customer." They are semantically similar goodbyes.

  Separating the intents into two cards provides more accuracy than putting them all into one card.

# Use Generative AI to semantically match contacts with natural language statements
Generative AI-powered semantic match

Within a Contact Lens **conversational analytics** rule, you have the option to specify a **Natural language - semantic match** condition that uses generative AI to find contacts that match a natural language statement. Natural language - Semantic match is used when you want to match contacts with context-specific criteria (for example, the customer’s issue was resolved during the call) or when there are too many possible words or phrases to use the **Words or phrases** conditions. 

Pro tip: If you previously used the Words or phrases - Semantic match condition, use the generative AI-powered Natural language - semantic match instead.

## How to use Natural language - semantic match



1. Log in to Amazon Connect with a user that has the **Rules** and **Rules - Generative AI** permissions.

1. On the navigation menu, choose **Analytics and optimization**, and then **Rules**.

1. Select **Create a Rule**, and then choose **Conversational analytics**.  
![\[The Create a rule option with Conversational analytics selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/create-natural-semantic-match-rule.png)

1. Select either **A Contact Lens post-call analysis is available** or **A Contact Lens post-chat analysis is available** as the event source.

1. Select **Add condition** and then choose **Natural language - semantic match**.  
![\[The Natural language - semantic match condition option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/choose-natural-semantic-match.png)

1. Enter a natural language statement that generative AI can evaluate as true or false by matching it against the conversation transcript.  
![\[A natural language statement entered in the condition.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/enter-natural-language-statement.png)

1. Add any additional conditions, such as queues or custom contact attributes.

1. Choose **Next** and provide a category name (with no spaces) that is used to label contacts that match the natural language statement, for example, **CustomerAddressChange**.

1. You can specify additional actions, such as [generating tasks](contact-lens-rules-create-task.md), [sending email notifications](contact-lens-rules-email.md), and [automatically submitting evaluations](contact-lens-rules-submit-automated-evaluation.md).

1. Choose **Next** to review the rule before you **Save and Publish** it. If you are not ready to publish, you can also **Save as draft**.

## Guidelines to use semantic-match


The following list details how best to use semantic-match:
+ The statement should be something that can be evaluated as true or false. 
+ Natural language - semantic match only uses the transcript of the conversation. If you want to use other contact attributes (for example, queues) in your match criteria, then those need to be specified as separate conditions within the rule.
+ If possible, use the term 'agent' instead of terms like 'colleague', 'employee', 'representative', 'advocate', or 'associate'. Similarly use the term 'customer', instead of terms like 'member', 'caller', 'guest', or 'subscriber'.
+ Only use double quotes if you want to check for exact words being spoken by the agent or the customer. For example, If the instruction is to check for the agent saying "Have a nice day", then the generative AI will not detect "Have a nice afternoon". Instead the natural language statement should say "The agent wished the customer a nice day". 

**Example statements to use with semantic-match**
+ The customer wanted to make a change to their subscription plan.
+ The customer conveyed gratitude towards the agent's support.
+ The customer indicated a desire to terminate their current services.
+ The customer requested a subsequent interaction.
+ The customer asked the agent to repeat information, indicating a lack of understanding.
+ The customer asked to talk to the agent’s manager.
+ The agent asked the customer for additional information or validation before providing a definitive answer.
+ The agent offered multiple payment options.
+ The agent assured the customer that their call was important and requested additional waiting time.
+ The agent resolved all of the customer’s issues.

# Manage word collections when you create conversational analytics rules in Contact Lens
Manage word collections

A *word collection* is a set of pre-built words and phrases that you can use to define the exact match condition when you create conversational analytics rules. When you add exact match conditions to a rule, you can choose a list of words and phrases from a dropdown menu.

## Required permissions


Contact Lens Rules - Word Collections uses the same set of security profile permissions as Contact Lens Rules. For more information, see [Security profile permissions for Contact Lens rules](permissions-for-rules.md).

## How to access the word collection management page


1. When you create or update a conversational analytics rule, choose the gear icon at the top right of the **Exact match** condition card, as shown in the following image.  
![\[Enter keywords or phrases option in the UI.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/word-collections-permissions-how-to-access-1.png)

1. On the **Word collections** management page, you can view existing word collections and create new word collections.  
![\[Enter keywords or phrases option in the UI.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/word-collections-permissions-how-to-access-2.png)

## How to create a user word collection



1. On the **Word collections** management page, choose **Create a word collection**.  
![\[Enter keywords or phrases option in the UI.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/create-user-word-collections-1.png)

1. Enter the name of the word collection, add words and phrases, then choose **Save**.  
![\[Enter keywords or phrases option in the UI.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/create-user-word-collections-2.png)

## Word collection limits

+ Amazon Connect has a default limit of 100 user word collections per instance.
+ Each word collection can have a maximum of 100 words or phrases.
+ Each word or phrase is limited to no more than 512 characters.
+ You can manage only user word collections. You cannot manage or edit system word collections.

# Enter a script in a Contact Lens rule for agents to follow
Enter a script in a rule

Enter a script in a Contact Lens rule when you need agents to use exact wording in customer calls. 

To enter a script in a rule, enter phrases. For example, if you want to highlight when agents say *Thank you for being a member. We appreciate your business*, enter two phrases: 
+ Thank you for being a member.
+ We appreciate your business.

To apply the rule to certain lines of business, add a condition for the queues or contact attributes it applies to. For example, the following image shows a rule that applies when an agent is working in the BasicQueue or Billing and Payments queues, the contact is for auto insurance, and the agent is located in Seattle.

![\[The new rule page, the Words or phrases - Exact match section, multiple conditions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-category-rules-3.png)


# Security profile permissions for Contact Lens rules
Required permissions

To view, edit, or add rules for automatic categorization, you must be assigned to a security profile that has **Analytics and Optimization: Rules** permissions.

To view, edit, or add rules that use generative AI (using the **Natural language - semantic match** condition), your security profile must additionally be assigned the **Analytics and Optimization: Rules - Generative AI** permission.

To see agent names so you can add them to rules, you need **Users and permissions: Users - View** permissions in your security profile. 

To see the queue names so you can add them to rules, you need **Routing: Queues - View** permissions in your security profile. 

For more information, see [Assign permissions to use Contact Lens conversational analytics in Amazon Connect](permissions-for-contact-lens.md).

# Design a flow to use contact attributes in a rule in Contact Lens
About contact attributes in a rule

You can have up to 5 contact attributes in a rule.

Contact attributes are retrieved at the beginning of the real-time contact analysis session, and the values retrieved at that time are used for rule evaluation during the whole session. Contact attribute updates made after the session starts aren't picked up.

You can design flows to use the contact attributes you specify in a rule, and then route the task accordingly. For example, a call or chat arrives in your contact center. When Contact Lens analyzes the call or chat, it gets a hit on the **Compliance** rule. The contact record that's created for the call, for example, includes information similar to the following image. It shows the **Category** = **Compliance**, and it has two custom contact attributes: **CustomerType** = **VIP**, **AgentLocation** = **NYC**. 

![\[The contact record when the Compliance rule is triggered.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-attributes-example1.png)


The Rules engine generates a task. The contact record for the task inherits the contact attributes from the voice contact record, as illustrated in the following image.

![\[The contact record for the task, the custom contact attributes.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-rules-attributes-example2.png)


The voice contact record appears as the **Previous contact ID**. 

The flow that you specify in the rule should be designed to use the contact attributes and route the task to the appropriate owner. For example, you may want to route tasks where **CustomerType = VIP** to a specific agent.

For more information, see [Use contact attributes](connect-contact-attributes.md).

# Rules are applied to new contacts when Contact Lens analyzes conversations
Rules are applied to new contacts

After you add rules, they are applied to new contacts that occur after the rule was added. Rules are applied when Amazon Connect conversational analytics analyzes conversations.

You cannot apply rules to past, stored conversations. 

# Error notifications: When Contact Lens can't analyze a contact
Error notifications: When Contact Lens can't analyze a contact

It's possible that Contact Lens can't analyze a contact file, even though analysis is enabled on the flow. When this happens, Contact Lens sends error notifications using Amazon EventBridge events. 

Events are emitted on a [best effort](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html) basis.

## Subscribe to EventBridge notifications


To subscribe to these notifications, create a custom EventBridge rule that matches the following:
+ "source" = "aws.connect"
+ "detail-type" = "Contact Lens Analysis State Change"

You can also add to the pattern to be notified when a specific event code occurs. For more information, see [Event Patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/filtering-examples-structure.html) in the *Amazon EventBridge User Guide*.
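As a sketch, such an event pattern could be assembled programmatically before you register the rule (for example, with the EventBridge `PutRule` API or the console). The `reasonCode` filter shown here is an optional illustration, not a required field:

```python
import json

# Event pattern for a rule matching Contact Lens analysis state-change
# events. The "reasonCode" filter is optional; include it to be notified
# only for a specific failure (shown here for illustration).
pattern = {
    "source": ["aws.connect"],
    "detail-type": ["Contact Lens Analysis State Change"],
    "detail": {"reasonCode": ["RECORDING_FILE_CANNOT_BE_READ"]},
}

# Serialize the pattern; this JSON string is what you would supply as
# the rule's event pattern.
event_pattern_json = json.dumps(pattern)
print(event_pattern_json)
```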

The format of a notification looks like the following sample: 

```
{
    "version": "0", // set by CloudWatch Events
    "id": "55555555-1111-1111-1111-111111111111", // set by CloudWatch Events
    "source": "aws.connect",
    "detail-type": "Contact Lens Analysis State Change",
    "account": "111122223333",
    "time": "2020-04-27T18:43:48Z",
    "region": "us-east-1", // set by CloudWatch Events
    "resources": [
        "arn:aws:connect:us-east-1:111122223333:instance/abcd1234-defg-5678-h9j0-7c822889931e",
        "arn:aws:connect:us-east-1:111122223333:instance/abcd1234-defg-5678-h9j0-7c822889931e/contact/efgh4567-pqrs-5678-t9c0-111111111111"
    ],
    "detail": {
        "instance": "arn:aws:connect:us-east-1:111122223333:instance/abcd1234-defg-5678-h9j0-7c822889931e",
        "contact": "arn:aws:connect:us-east-1:111122223333:instance/abcd1234-defg-5678-h9j0-7c822889931e/contact/efgh4567-pqrs-5678-t9c0-111111111111",
        "channel": "VOICE",
        "state": "FAILED",
        "reasonCode": "RECORDING_FILE_CANNOT_BE_READ"
    }
}
```
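A minimal sketch of a handler (for example, a Lambda target of the EventBridge rule) that consumes such an event might extract the failing contact and the reason for logging or alerting. The function name and field handling below are illustrative assumptions, not part of the service:

```python
def handle_analysis_state_change(event):
    """Extract the failing contact and reason from a
    'Contact Lens Analysis State Change' event (illustrative sketch)."""
    detail = event.get("detail", {})
    contact_arn = detail.get("contact", "")
    # The contact ID is the final path segment of the contact ARN.
    contact_id = contact_arn.rsplit("/", 1)[-1] if contact_arn else None
    return {
        "contactId": contact_id,
        "channel": detail.get("channel"),
        "state": detail.get("state"),
        "reasonCode": detail.get("reasonCode"),
    }

# Exercise the handler with the "detail" shape from the sample above.
sample = {
    "detail": {
        "contact": "arn:aws:connect:us-east-1:111122223333:instance/abcd1234-defg-5678-h9j0-7c822889931e/contact/efgh4567-pqrs-5678-t9c0-111111111111",
        "channel": "VOICE",
        "state": "FAILED",
        "reasonCode": "RECORDING_FILE_CANNOT_BE_READ",
    }
}
result = handle_analysis_state_change(sample)
```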

## Event codes


 The following table lists the event codes that may result when Contact Lens can't analyze a contact.


| Event reason code | Description | 
| --- | --- | 
| INVALID\_ANALYSIS\_CONFIGURATION  | Contact Lens received invalid values when the flow was initiated, such as an unsupported or invalid language code, or an unsupported value for redaction behavior.  | 
| RECORDING\_FILE\_CANNOT\_BE\_READ  | Contact Lens can't get the recording file. This might be because the file isn't present in the S3 bucket, or there are problems with permissions.  | 
| RECORDING\_FILE\_TOO\_SMALL  |  The recording file is too small for analysis (less than 105 ms). If the file doesn't have the expected format, an INVALID error occurs. An empty JSON object is also unexpected.  | 
|  RECORDING\_FILE\_TOO\_LARGE  | The recording file exceeds the duration limit for analysis.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/connect/latest/adminguide/contact-lens-error-notifications.html)  | 
|  RECORDING\_FILE\_INVALID  | The recording file is invalid.  | 
|  RECORDING\_FILE\_CANNOT\_BE\_READ  | An error occurred when Contact Lens tried to read the recording file.  | 
|  RECORDING\_FILE\_EMPTY  | The recording file is empty.  | 
|  RECORDING\_SAMPLE\_RATE\_NOT\_SUPPORTED  | The sample rate of the audio file is not supported. Contact Lens currently supports audio files with an 8 kHz sample rate, which is the sample rate for Amazon Connect recordings.  | 

# Error notifications when an Amazon Connect rule fails to run
Error notifications: When Amazon Connect Rules action fails to run

It's important to know when a specific rule action has failed in a production environment, and what caused the failure. Then you can proactively mitigate such failures in the future.

To get real-time insights into the actions that failed to run, you integrate Amazon Connect Rules with Amazon EventBridge events. This enables you to be notified when, for example, the "Create task" action failed to run because the **Concurrent active tasks per instance** service quota was reached. When this happens, Amazon Connect sends error notifications using Amazon EventBridge events.

Events are emitted on a [best effort](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html) basis.

## Subscribe to EventBridge notifications


To subscribe to these notifications, create a custom EventBridge rule that matches the following:
+ "source" = "aws.connect"
+ "detail-type" = "Contact Lens Rules Action Execution Failed"

You can also add to the pattern to be notified when a specific event code occurs. For more information, see [Event Patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/filtering-examples-structure.html) in the *Amazon EventBridge User Guide*.

The format of a notification looks like the following sample: 

```
{
  "version": "0",
  "id": "8d122163-6c07-f8cb-06e7-373a1bcf8fc6",
  "source": "aws.connect",
  "detail-type": "Amazon Connect Rules Action Execution Failed",
  "account": "123456789012",
  "time": "2022-01-05T01:30:42Z",
  "region": "us-east-1",
  "resources": ["arn:aws:connect:us-east-1:123456789012:instance/cb54730f-5aac-4376-b2f4-7c822889931e"],
  "detail": {
    "ruleId": "7410c94b-21c2-4db0-a707-c6d751edbe8f",
    "actionType": "CREATE_TASK",
    "triggerEvent": "THIRD_PARTY",
    "instanceArn": "arn:aws:connect:us-east-1:123456789012:instance/cb54730f-5aac-4376-b2f4-7c822889931e",
    "reasonCode": "ResourceNotFoundException",
    "error": "ContactFlowId provided does not belong to connect instance",
    "additionalInfo": "{\n  \"message\": \"Not Found\",\n  \"code\": \"ResourceNotFoundException\",\n  \"statusCode\": 404,\n  \"time\": \"2022-01-03T20:23:07.073Z\",\n  \"requestId\": \"048e4403-71c1-47d6-96fc-825744f518e7\",\n  \"retryable\": false,\n  \"retryDelay\": 28.217537834500316\n}"
  }
}
```
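Note that `additionalInfo` arrives as a JSON-encoded string inside the event detail, so a consumer must decode it a second time. A brief sketch of that decoding (the helper and selected fields are assumptions for illustration):

```python
import json

def decode_failure(detail):
    """Decode the JSON-encoded additionalInfo payload carried inside a
    failed-action event's detail (illustrative sketch)."""
    info = json.loads(detail.get("additionalInfo", "{}"))
    return {
        "reasonCode": detail.get("reasonCode"),
        "statusCode": info.get("statusCode"),
        "retryable": info.get("retryable"),
    }

# additionalInfo is a string, as in the sample event above.
detail = {
    "reasonCode": "ResourceNotFoundException",
    "additionalInfo": '{"message": "Not Found", "statusCode": 404, "retryable": false}',
}
summary = decode_failure(detail)
```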

## Supported action types

+ `CREATE_TASK`
+ `GENERATE_EVENTBRIDGE_EVENT`
+ `SEND_NOTIFICATION`

For information about `ASSIGN_CONTACT_CATEGORY`, see [Error notifications: When Contact Lens can't analyze a contact](contact-lens-error-notifications.md).

## Supported trigger events

+ `REAL_TIME_CALL`
+ `REAL_TIME_CHAT`
+ `POST_CALL`
+ `POST_CHAT`
+ `THIRD_PARTY`

## Reason codes for failed actions


When an action fails, the error notification service collects the reason codes from the supported actions. For more information about the reason codes for Task and EventBridge action failures, see the following topics:
+ For reason codes for Task action failures, see [Errors](https://docs.aws.amazon.com/connect/latest/APIReference/API_StartTaskContact.html#API_StartTaskContact_Errors) in the **StartTaskContact** API topic in the *Amazon Connect API Reference Guide*.
+ For reason codes for EventBridge action failures, see [Errors](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEvents.html#API_PutEvents_Errors) in the **PutEvents** API topic in the *Amazon EventBridge API Reference Guide*.

# Specify variables for certain parameters when creating or managing rules using Amazon Connect APIs
API fields that support variable injection

When you create or manage rules programmatically using Amazon Connect APIs (such as [CreateRule](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateRule.html) or [UpdateRule](https://docs.aws.amazon.com/connect/latest/APIReference/API_UpdateRule.html)), you can specify variables for certain parameters. The variables are resolved at runtime when the action is triggered, based on the value of the [EventSourceName](https://docs.aws.amazon.com/connect/latest/APIReference/API_RuleTriggerEventSource.html) parameter. 

For example, let's say you're setting up a task action and you want to add more context. Following is an example of how you could use variable injections to include the ID of the contact and the ID of the agent in the `Description` field of the task: 
+ Customer is unhappy about the phone call. A swear word was detected during the conversation with agent `$.ContactLens.PostCall.Agent.AgentId` in the contact `$.ContactLens.PostCall.ContactId`

When the action is triggered, this string would resolve to "Customer is unhappy about the phone call. A swear word was detected during a conversation with agent 12345678-1234-1234-1234-EXAMPLEID012 in the contact 87654321-1234-1234-1234-EXAMPLEID345".

The following table lists each event source, and the JSONPath to use for fields that support variable injection. 


| EventSourceName | JSONPath Reference | 
| --- | --- | 
|  OnPostCallAnalysisAvailable  |  \$.ContactLens.PostCall.ContactId \$.ContactLens.PostCall.Agent.AgentId \$.ContactLens.PostCall.Queue.QueueId  | 
|  OnRealTimeCallAnalysisAvailable  |  \$.ContactLens.RealTimeCall.ContactId \$.ContactLens.RealTimeCall.Agent.AgentId \$.ContactLens.RealTimeCall.Queue.QueueId  | 
|  OnPostChatAnalysisAvailable  |  \$.ContactLens.PostChat.ContactId \$.ContactLens.PostChat.Agent.AgentId \$.ContactLens.PostChat.Queue.QueueId  | 
|  OnSalesforceCaseCreate  |  \$.ThirdParty.Salesforce.CaseCreate.CaseNumber \$.ThirdParty.Salesforce.CaseCreate.Name \$.ThirdParty.Salesforce.CaseCreate.Email \$.ThirdParty.Salesforce.CaseCreate.Phone \$.ThirdParty.Salesforce.CaseCreate.Company \$.ThirdParty.Salesforce.CaseCreate.Type \$.ThirdParty.Salesforce.CaseCreate.Reason \$.ThirdParty.Salesforce.CaseCreate.Origin \$.ThirdParty.Salesforce.CaseCreate.Subject \$.ThirdParty.Salesforce.CaseCreate.Priority \$.ThirdParty.Salesforce.CaseCreate.CreatedDate \$.ThirdParty.Salesforce.CaseCreate.Description  | 
|  OnZendeskTicketCreate  |  \$.ThirdParty.Zendesk.TicketCreate.Id \$.ThirdParty.Zendesk.TicketCreate.Priority \$.ThirdParty.Zendesk.TicketCreate.CreatedAt  | 
|  OnZendeskTicketStatusUpdate  |  \$.ThirdParty.Zendesk.TicketStatusUpdate.Id \$.ThirdParty.Zendesk.TicketStatusUpdate.Priority \$.ThirdParty.Zendesk.TicketStatusUpdate.CreatedAt  | 
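Conceptually, the runtime resolution behaves like substituting each supported JSONPath variable with its value from the triggering event. The following is only an illustrative sketch of that behavior, not the service's implementation:

```python
def inject_variables(template, values):
    """Replace supported JSONPath variables in a template string with
    their runtime values (illustrative sketch of the behavior)."""
    for path, value in values.items():
        template = template.replace(path, value)
    return template

# Resolve a task description template for OnPostCallAnalysisAvailable.
template = (
    "A swear word was detected during the conversation with agent "
    "$.ContactLens.PostCall.Agent.AgentId in the contact "
    "$.ContactLens.PostCall.ContactId"
)
resolved = inject_variables(template, {
    "$.ContactLens.PostCall.Agent.AgentId": "12345678-1234-1234-1234-EXAMPLEID012",
    "$.ContactLens.PostCall.ContactId": "87654321-1234-1234-1234-EXAMPLEID345",
})
```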

# Search conversations analyzed by Contact Lens
Search conversations

You can search the analyzed and transcribed recordings based on: 
+ Speaker (agent or customer)
+ Keywords
+ Sentiment score
+ Non-talk time (for calls only)
+ Response time (for chats only)

In addition, you can search conversations that are in specific contact categories (that is, the conversation has been categorized based on uttered keywords and phrases).

These criteria are described in the following sections.

**Important**  
When Contact Lens is enabled on a contact, after a call or chat ends **and** the agent completes After Contact Work (ACW), Contact Lens analyzes (and for calls, transcribes) the recording of the customer-agent conversation. The agent must choose **Close contact** first.  
Chat transcripts are indexed for search when Contact Lens is enabled; they are not indexed for search if Contact Lens is not enabled.

## Required permissions for searching conversations
Required permissions

Before you can search conversations, you need the following permissions in your security profile. They allow you to do the type of search you want. 
+ Enable one of the following permissions to access the **Contact Search** page:
  + **Contact search**: Allows you to search for all contacts.
  + **View my contacts**: Allows you to search for only those contacts that you handled as an agent.
+ **Search contacts by conversation characteristics**. This includes non-talk time, sentiment score, and contact category.
+ **Search contacts by keywords**

For more information, see [Assign permissions](permissions-for-contact-lens.md).

## Search for words or phrases
Search for words or phrases

For keyword search, Contact Lens uses the `standard` analyzer in Amazon OpenSearch Service. This analyzer is not case sensitive. For example, if you enter *thank you for your business 2 CANCELLED Flights*, the search looks for:

 [thank, you, for, your, business, 2, cancelled, flights]

If you enter *"thank you for your business", two, "CANCELLED Flights"*, the search looks for:

 [thank you for your business, two, cancelled flights]
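The splitting above can be approximated locally. This sketch mimics the case-insensitive tokenization and the quoted-phrase handling described; it is a simplification of the OpenSearch `standard` analyzer, which also applies Unicode and punctuation rules:

```python
import re

def approximate_terms(query):
    """Approximate how a keyword query is split into search terms:
    double-quoted spans are kept as whole phrases, everything else is
    split on whitespace and commas, and all terms are lowercased.
    A simplification of the OpenSearch 'standard' analyzer."""
    terms = []
    # Match quoted phrases first; everything between them is split freely.
    for phrase, rest in re.findall(r'"([^"]+)"|([^",]+)', query):
        if phrase:
            terms.append(phrase.lower())
        else:
            terms.extend(word.lower() for word in rest.split())
    return terms

unquoted = approximate_terms("thank you for your business 2 CANCELLED Flights")
quoted = approximate_terms('"thank you for your business", two, "CANCELLED Flights"')
```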

**To search conversations for words or phrases**

1. In Amazon Connect, log in with a user account that is assigned the **CallCenterManager** security profile, or that is enabled for the **Search contacts by keywords** permission.

1. Choose **Analytics and optimization**, **Contact search**.

1. In the **Filter** section, specify the time period that you want to search, and specify the channel.
**Tip**  
When searching by date, you can search up to 8 weeks at a time. 

1. Choose **Click here to add filter**, and in the dropdown menu, choose **Words or phrases**.   
![\[The contact search page, the filters section, the add filter dropdown, the Words or phrases option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-search-words-phrases.png)

1. In the **Used by** section, choose whose part of the conversation you want to search. Note the following:
   + **System** applies to chat, where the participant may be a Lex bot or prompt.
   + To search for words or phrases that are used by all participants, select **Agent**, **Customer**, **System**.
   + If no boxes are selected, the search returns words or phrases used by any of the participants.

1. In the **Logic** section, choose from the following options:
   + Choose **Match any** to return contacts that have any of the words present in the transcripts.

     For example, the following query means match (hello OR cancellation OR "example airline"). And, because no **Used by** boxes are selected, it means "find contacts where any of these words were used by any of the participants."  
![\[The Words or phrases dialog box, the Match any option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/match-any.png)
   + Choose **Match all** to return contacts that have all of the words present in the transcripts. 

     For example, the following query means match ("thank you for your business" AND cancellation AND "example airline"). And, because all the participant boxes are selected, it means "find contacts where all of these words and phrases were used by all of the participants."  
![\[The Words or phrases dialog box, the Match all option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/match-all.png)

1. In the **Words or phrases** section, enter the words to search, separated by commas. If you enter a phrase, surround it with quotation marks.

   You can enter up to 128 characters.

## Search for sentiment score or evaluate sentiment shift
Search for sentiment score/shift

With Contact Lens, you can search conversations for sentiment scores or sentiment shifts on a scale of -5 (most negative) to +5 (most positive). This enables you to identify patterns and factors for why calls go well or poorly.

![\[The contact search page, the sentiment score filter.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-sentiment-score-shift.png)


For example, suppose you want to identify and investigate all the contacts where the customer sentiment ended negatively. You might search for all contacts where the sentiment score is **<=** (less than or equal to) -1. 

For more information, see [Investigate sentiment scores](sentiment-scores.md).

**To search for sentiment scores or evaluate sentiment shift**

1. In Amazon Connect, log in with a user account that is assigned the **CallCenterManager** security profile, or that is enabled for the **Search contacts by conversation characteristics** permission.

1. On the **Contact search** page, specify whether you want the sentiment score for words or phrases spoken by the customer or agent.

1. In **Type of score analysis**, specify what type of scores to return:
   + **Sentiment score**: This returns the average score for the customer or agent's portion of the conversation.

     In addition to searching for sentiment scores when the agent or customer are on the contact, you can filter the search by when the customer is: 
     + **With agent on the chat**
     + **Without agent on the chat**: This is the time the customer spends chatting with a bot or prompts, and waiting in queue.   
![\[The sentiment score filter, the participant dropdown, customer without agent on the chat option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-search-sentiment-participant.png)
   + **Sentiment shift**: Identify where the customer or agent's sentiment changed during the contact.

     For example, the following image shows an example of searching for contacts where the customer's sentiment score begins at less than or equal to -1 and ends at greater than or equal to +1. In addition, the customer is on a chat with the agent present.  
![\[The sentiment score filter, the sentiment shift option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-search-sentiment-score.png)

## Search for non-talk time
Search for non-talk time

To help you identify which calls to investigate, you can search for non-talk time. For example, you might want to find all calls where the non-talk time is greater than 20%, and then investigate them. 

Non-talk time includes hold time and any silence where both participants aren't talking for longer than three seconds. This duration can't be customized.
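As a sketch of how this metric is defined, the following computes a non-talk percentage from speech intervals, counting any silence longer than three seconds (hold time would be added the same way). This illustrates the definition above, not Contact Lens's implementation:

```python
def non_talk_percentage(call_seconds, speech_intervals, min_gap=3.0):
    """Percentage of a call that is non-talk time: gaps longer than
    min_gap seconds where neither participant is speaking.
    speech_intervals: merged, sorted (start, end) tuples in seconds."""
    non_talk = 0.0
    cursor = 0.0
    for start, end in speech_intervals:
        gap = start - cursor
        if gap > min_gap:          # count only silences over the threshold
            non_talk += gap
        cursor = max(cursor, end)
    tail = call_seconds - cursor   # silence after the last speech segment
    if tail > min_gap:
        non_talk += tail
    return 100.0 * non_talk / call_seconds

# Example: a 100-second call with speech at 0-40s and 50-98s has one
# 10-second mid-call silence -> 10% non-talk time.
pct = non_talk_percentage(100.0, [(0.0, 40.0), (50.0, 98.0)])
```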

Use the drop-down arrow to specify whether to search conversations for the duration or percentage of non-talk time. These options are shown in the following image. 

 For information about how to use this metric, see [Investigate non-talk time](non-talk-time.md).

![\[The non-talk time filter, the duration and percentage options.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/non-talk-time.png)


## Search by response time for chat conversations


You can search by the:
+ Average response time of the agent or customer during the chat
+ Maximum response time of the agent or customer during the chat

You specify whether the duration is less or greater than or equal to a specific time. For information about how to use this metric, see [Investigate response time during chats in Contact Lens](response-time.md).

For the supported minimum and maximum response times, see [Amazon Connect Rules feature specifications](feature-limits.md#rules-feature-specs).

The following image shows a search for contacts where the agent's average response time was greater than or equal to 1 minute. 

![\[The response time filter.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/response-time.png)


## Search a contact category
Search a contact category

1. On the **Contact search** page, choose **Add filter**, **Contact category**.

1. In the **Contact categories** box, use the dropdown box to list all the current categories that are available for you to search. Or, if you start typing, the input is used to match existing categories and to filter those that don't match.
   + **Match any**: Searches for contacts that match any of the selected categories.
   + **Match all**: Searches for contacts that match all of the selected categories.
   + **Match none**: Searches for contacts that did not match any of the selected categories. Note that this would only return contacts that were analyzed by Contact Lens conversational analytics.

   The following image shows a dropdown menu with all the current categories listed.  
![\[The contact category filter, the match all option, the contact categories.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-search-contact-category2.png)

# Review analyzed conversations using Contact Lens
Review analyzed conversations

By using Amazon Connect Contact Lens, you can review the transcript and identify what part of the contact is of interest. You won't need to listen to an entire call or read an entire chat transcript to find out what's interesting about it. You can focus on specific parts of the audio or transcript. Both are highlighted for you wherever there are points of interest. 

For example, you might scan the transcript of the contact and see a red sentiment emoji for a customer turn, which indicates the customer is expressing a negative sentiment. You can choose the timestamp and jump to that portion of audio recording or chat interaction.

The following image shows an example of a voice contact.

![\[An analysis of a voice contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-category-hit.png)


The following image shows an example of a chat contact. **System Message** applies to chat, where the participant may be a Lex bot or prompt.

![\[An analysis of a chat contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-category-hit-chat.png)


**To review analyzed conversations**

1. Log in to Amazon Connect with a user account that has **Contact search** and **Contact Lens - conversational analytics** permissions in the security profile.

1. In Amazon Connect, choose **Analytics and optimization**, **Contact search**.

1. Use the filters on the page to narrow your search for a contact. For date, you can search up to 14 days at a time. For more information about searching for contacts, see [Search for completed and in-progress contacts](contact-search.md). 

1. Choose the contact ID to view the contact details for the contact.

1. In the **Recording** and **Transcript** sections of the **Contact details** page, review what was spoken or written, when, and their sentiment.

1. For calls, if desired, choose the play prompt to listen to the recording. Or, click on the relevant part of the recording to listen to the portion you're interested in.

1. For chats, if desired, use the graph to navigate to the portion of the transcript you're interested in.

# Navigate transcripts and audio in Amazon Connect Contact Lens
Navigate transcripts and audio

Supervisors are often required to review the contacts for many agents, for quality assurance purposes. The turn-by-turn transcript and sentiment data helps you quickly identify and navigate to the portion of the recording that is of interest to you. 

The following image of a contact record shows features that enable you to quickly navigate transcripts and audio to find areas that need your attention. While the image shows a voice contact, the same features apply to chat contacts.

![\[An analysis of a voice contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-navigate-transcripts2.png)


1. Use [Show key highlights](#contact-lens-contact-summarization) to review only the issue, outcome, and/or action item.

1. Use [Autoscroll](#autoscroll) for voice contacts to jump around the audio or transcript. The two always stay in sync.

1. Scan for [sentiment emojis](#sentiment-emojis) to quickly identify a part of the transcript you want to read or listen to.

1. Choose the timestamp to jump to that part of the audio recording or transcript. The timestamp is calculated from the start of the customer interaction within the contact.

## Show key highlights
Show key highlights

It can be time-consuming to review contact transcripts that are hundreds of lines long. To make this process faster and more efficient, Contact Lens provides the option for you to view key highlights. The highlights show only those lines where Contact Lens has identified an issue, outcome, or action item in the transcript. 
+ **Issue** represents the call driver. For example, "I'm thinking of upgrading to your online subscription plan." 
+ **Outcome** represents the likely conclusion or outcome of the contact. For example, "Based on your current plan I would recommend the online essentials plans that we have."
+ **Action item** represents the action item the agent takes. For example, "Please keep an eye out for an email with a price quote. I will send it to you shortly."

Each contact has no more than one issue, one outcome, and one action item. Not all contacts will have all three. 

**Note**  
If Contact Lens displays the message **There are no key highlights for this transcript**, it means no issue, outcome, or action item was identified.

You don't need to configure key highlights. They work out of the box, without any training of the machine learning model. 

## Turn on autoscroll to synchronize the transcript and audio


For voice contacts, use **Autoscroll** to jump around the audio or transcript, and the two always stay in sync. For example:
+ When you listen to a conversation, the transcript moves along with it, showing you sentiment emojis and any detected issue.
+ You can scroll through the transcript, and choose the timestamp for the turn to listen to that specific point in the recording.

Because the audio and transcript are aligned, the transcript can help you understand what the agent and customer are saying. This is especially useful when:
+ The audio is bad, maybe due to a connection issue. The transcript can help you understand what's being said.
+ There's a dialect or language variant. Our models are trained on different accents so the transcript can help you understand what's being said.

## Scan for sentiment emojis


Sentiment emojis help you quickly scan a transcript so you can listen to that part of the conversation.

For example, where you see red emojis for customer turns and then a green emoji, you might choose the timestamp to jump to that specific point of the conversation to check how that agent helped the customer.

## Tap or click category tags to navigate through the transcript


When you tap or click a category tag, Contact Lens auto-navigates to the corresponding points of interest in the transcript. There are also category markers in the visualization of the interaction that indicate which parts of the recording have utterances related to the category. 

The following image shows part of a **Contact details** page for a chat. 

![\[A transcript of chat, a category, the relevant section of the transcript.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-category-tag-navigation.png)


# View generative AI-powered post-contact summaries in Amazon Connect
View generative AI-powered post-contact summaries

**Note**  
**Powered by Amazon Bedrock**: AWS implements [automated abuse detection](https://docs.aws.amazon.com//bedrock/latest/userguide/abuse-detection.html). Because generative AI-powered post-contact summaries is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI).

You can save valuable time with generative AI-powered post-contact summaries that provide essential information from customer conversations in a structured, concise, and easy to read format. You can quickly review the summaries and understand the context instead of reading through transcripts and monitoring calls. 

You can access generative AI-powered post-contact summaries multiple ways:
+ **Agents** can access post-contact summaries for voice and email contacts on the Contact Control Panel (CCP). They can use the summaries to quickly complete their After Contact Work (ACW). To learn about the agent's experience, see [View post-contact summaries on the CCP](#summaries-on-agentws).
+ **Managers and supervisors** can access summaries for voice, chat, and email contacts on the Amazon Connect admin website, on the **Contact details** and the **Contact search** pages. They can use the summaries to quickly understand the issues and outcomes for the contacts they are reviewing. To learn about the managers experience, see [View post-contact summaries on the Amazon Connect admin website](#summaries-on-website).
+ **Developers** can directly ingest the summaries from the [APIs](contact-lens-api.md) into third-party systems. They can also [integrate with Amazon Kinesis Data Streams](contact-analysis-segment-streams.md) for streaming. This latter option is useful when you have higher loads and want to avoid having your TPS throttled.

**Topics**
+ [Enable post-contact summaries](#gen-ai-getstarted)
+ [Enable contact summaries for email](#enable-email-summaries)
+ [View post-contact summaries on the CCP](#summaries-on-agentws)
+ [View post-contact summaries on the Amazon Connect admin website](#summaries-on-website)
+ [Why a summary is not generated](#summary-not-generated)

## Enable post-contact summaries


**To enable post-contact summaries on the agent's CCP for voice contacts**

1. Add a [Set recording and analytics behavior](set-recording-behavior.md) block to your flow. 

1.  Configure the **Properties** page of the block:

   1. Set **Call recording** to **On**. Choose **Agent and customer**, as shown in the following image.  
![\[The properties page of the Set recording and analytics behavior block configured for call recording.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/call-recording-summaries.png)

   1. Set **Analytics** to **On**. 

   1. Choose **Enable speech analytics**. 

   1. Choose **Real-time and post-call analytics**.

   1. Under **Contact Lens Generative AI capabilities**, choose **Post-contact summary**. 

   The following image shows the **Analytics** section of a **Properties** page that is configured to enable post-contact summaries on the agent's CCP:   
![\[The properties page of the Set recording and analytics behavior block.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/set-block-post-contact-summaries-ccp.png)

1. Assign the following permissions to the agent's security profile:
   + **Contact Control Panel (CCP) - Contact Lens data - Access**
   + **Analysis and Optimization - Contact Lens–post-contact summary - View**
   + **Analysis and Optimization - Recorded conversations (redacted)** or **Recorded conversations (unredacted)** - **All** or **Access** (**Access** is the least privilege and is recommended)
   + **Analysis and Optimization - View my contacts** or **Contact search**

**To enable post-contact summaries on Amazon Connect admin website**

1. Configure the **Properties** page of the [Set recording and analytics behavior](set-recording-behavior.md) as follows: 

   1. Set **Analytics** to **On**. 

   1. Choose either **Enable speech analytics**, **Enable chat analytics**, or both.

      If you choose speech analytics, then choose either:
      + **Post-call analytics**
      + **Real-time and post-call analytics**: Choose this option if users want to view post-contact summaries for in-progress contacts (that is, the call has ended but the agent is still in ACW).

   1. Note that granular redaction is not supported for post-contact summaries. When granular redaction is selected, the post-contact summary redacts all PII identified in text and replaces it with a [PII] tag.

   1. Under **Contact Lens Generative AI capabilities**, choose **Post-contact summary**. 

1. Assign the following permissions to the user's security profile:
   + **Analysis and Optimization - Contact search** or **View my contacts**
   + **Analysis and Optimization - Contact Lens–post-contact summary - View**
   + **Analysis and Optimization - Recorded conversations (redacted)** or **Recorded conversations (unredacted)** - **All** or **Access** (**Access** is the least privilege and is recommended)

## Enable contact summaries for email


**To enable contact summaries for email contacts**

1. Add a [Set recording, analytics and processing behavior](set-recording-analytics-processing-behavior.md) block to your inbound email flow.

1. Configure the **Properties** page of the block:

   1. For **Channel**, choose **Email**.

   1. Set **Analytics** to **On**.

   1. Choose **Enable email analytics**.

   1. Under **Contact Lens Generative AI capabilities**, choose **Contact summary**.

1. Choose **Save**.

## View post-contact summaries on the CCP

To help agents perform their After contact work (ACW), Amazon Connect displays a generative AI-powered post-contact summary on their CCP for voice contacts. The following image shows an example summary.

![\[The Contact Control Panel showing a generative AI-powered post-contact summary during After Contact Work (ACW).\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/genai-summary-ccp1.png)


1. The agent is in ACW. They can browse the transcript while a "Generating summary" banner is displayed at the top of the page.

1. When the summary is ready, a banner appears indicating that it's available. If the agent clicks the banner, the CCP scrolls to the top of the page, where the summary is displayed.

1. The banner disappears after the agent clicks it.

**Note**  
Generative AI-powered post-contact summaries support voice, chat, and email contacts on the CCP. 

## View post-contact summaries on the Amazon Connect admin website

To help managers and other users review contacts, they can view post-contact summaries on the Amazon Connect admin website. The following image shows an example of generative AI-powered post-contact summaries on the **Contact details** page. 

![\[The Contact details page showing a generative AI-powered post-contact summary with structured information about the customer conversation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/genai-summary2.png)


The following image shows an example of generative AI-powered post-contact summaries on the **Contact search** page.

![\[The Contact search page displaying generative AI-powered post-contact summaries for multiple customer interactions in a list view format.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/genai-summary-contactsearch2.png)


Each contact has at most one summary. Not all contacts have a summary generated; for more information, see [Why a summary is not generated](#summary-not-generated).

## Why a summary is not generated


If a summary is not generated, an error message is displayed on the **Contact details** and **Contact search** pages. In addition, the ReasonCode for the error appears in the `ContactSummary` object in the Contact Lens output file, similar to the following example:

```
"JobDetails": {
    "SkippedAnalysis": [
      {
        "Feature": "POST_CONTACT_SUMMARY",
        "ReasonCode": "INSUFFICIENT_CONVERSATION_CONTENT"
      }
    ]
  },
```

Following is a list of error messages that may be displayed on the **Contact details** or **Contact search** pages if a summary is not generated, along with the associated reason code that appears in the Contact Lens output file. 
+ **Summary could not be generated due to exceeding quota of concurrent summaries**. ReasonCode: `QUOTA_EXCEEDED`.

  If you receive this message, we recommend that you [submit a ticket](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase&limitType=service-code-connect) to increase the [Concurrent post-contact summary jobs](amazon-connect-service-limits.md#contactlens-quotas) quota. 
+ **Summary could not be generated due to not enough eligible conversation**. ReasonCode: `INSUFFICIENT_CONVERSATION_CONTENT`.

  For voice, there must be at least one utterance from each participant. For chat, there must be at least one message of a supported type from each participant. Supported message types are `text/plain` and `text/markdown`. Messages of other types, such as `application/json`, are not used for the summary. 
+ **Contact Flow had invalid Contact Lens configuration for PostContact Summary, such as unsupported or invalid language code**. ReasonCode: `INVALID_ANALYSIS_CONFIGURATION`.

  This error is returned if the enabled summary is incompatible with other Contact Lens settings, particularly if it's enabled for an unsupported locale.
+ **Summary cannot be provided because it failed to satisfy security and quality guardrails**. ReasonCode: `FAILED_SAFETY_GUIDELINES`.

  This error can occur in Amazon Connect for Concurrent post-contact summary jobs. Amazon Connect passes contact data to Amazon Bedrock for summary generation. If the contact data contains unredacted Personally Identifiable Information (PII), Amazon Bedrock's safety guidelines are triggered. As a result, Amazon Bedrock refuses to generate the summary to protect sensitive information, leading to the error in Amazon Connect.
+ **Internal system error**. ReasonCode: `INTERNAL_ERROR`.
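When you post-process Contact Lens output files, you can surface these reason codes programmatically. The following is a minimal sketch; the field names follow the `JobDetails` example above, while the function name and the suggested-action messages are illustrative, not part of the Contact Lens schema:

```python
import json

# Illustrative mapping from documented reason codes to suggested follow-up actions.
ACTIONS = {
    "QUOTA_EXCEEDED": "Request an increase to the concurrent post-contact summary jobs quota.",
    "INSUFFICIENT_CONVERSATION_CONTENT": "No action: the contact lacked eligible utterances or messages.",
    "INVALID_ANALYSIS_CONFIGURATION": "Check the flow's Contact Lens settings, such as the language code.",
    "FAILED_SAFETY_GUIDELINES": "Consider enabling redaction so unredacted PII is not sent for summarization.",
    "INTERNAL_ERROR": "Retry later or contact AWS Support.",
}

def skipped_summary_reasons(output_file_json: str) -> list:
    """Return (reason_code, suggested_action) pairs for skipped post-contact summaries."""
    job_details = json.loads(output_file_json).get("JobDetails", {})
    results = []
    for skipped in job_details.get("SkippedAnalysis", []):
        if skipped.get("Feature") == "POST_CONTACT_SUMMARY":
            code = skipped.get("ReasonCode", "UNKNOWN")
            results.append((code, ACTIONS.get(code, "Unrecognized reason code.")))
    return results
```

For example, feeding in the `JobDetails` snippet above would return the `INSUFFICIENT_CONVERSATION_CONTENT` code together with its suggested action.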

# View key highlights of customer conversations in the Contact Control Panel (CCP)


It can be time-consuming to review contact transcripts that are hundreds of lines long. To make this process faster and more efficient, Contact Lens automatically identifies and labels key parts of customer conversations, then displays highlights of the conversations. Managers can view those highlights on the **Contact details** page. Agents can view the highlights in the Contact Control Panel (CCP). 

**Tip**  
For a list of supported languages, see the *Key highlights* column in the [Amazon Connect Contact Lens supported languages](supported-languages.md#supported-languages-contact-lens) topic.

After you enable Contact Lens, it identifies key parts of a customer conversation, assigns labels (such as issue, outcome, or action item) to those parts, and displays highlights of the customer conversation. You can expand the highlights to view the full transcript of the contact. 

The following example shows the key highlights on the **Contact details** page. 

![\[Key highlights on the Contact details page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-key-highlights.png)


1. Toggle **Show key highlights** on and off as needed.

1. **Issue** represents the contact driver. For example, "I'm thinking of upgrading to your online subscription plan." 

1. **Action item** represents the action item the agent takes. For example, "Please keep an eye out for an email with a price quote. I will send it to you shortly."

1. **Outcome** represents the likely conclusion or outcome of the contact. For example, "Based on your current plan I would recommend our online essentials plan."

Each contact has at most one issue, one outcome, and one action item; some contacts don't have all three.

**Note**  
The message **There are no key highlights for this transcript** appears when Contact Lens can't identify an issue, outcome, or action item.

To learn about the agent's experience—what part of the transcript is displayed in the Contact Control Panel (CCP), and when—see [Design a flow for key highlights](enable-analytics.md#call-summarization-agent).

# Use theme detection in Amazon Connect Contact Lens to discover issues with contacts

Use theme detection to discover previously unknown or emerging contact themes from thousands of customer interactions. For example, you can spot common reasons for customer outreach such as "cancel reservation" or "delayed order." You can then take appropriate actions to improve the customer experience by expediting issue resolution, and improving IVR options, knowledge base articles, and agent training.

## Important things to know
+ Theme detection is available in the following languages supported by Amazon Connect Contact Lens:     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/connect/latest/adminguide/use-theme-detection.html)
+ Theme detection is supported on contacts that were created on or after January 30, 2023.
+ The **Generate themes report** button is enabled only when your saved search contains at least 300 contacts with issues detected by Contact Lens. 
+ The theme detection report is generated for the 3,000 most recent contacts.
+ Theme detection reports are available for 30 days after they are created. After 30 days, the reports are deleted from the database and cannot be retrieved. 
+ The most recent 20 theme reports for a saved search are available in the **View theme reports** dropdown menu, as shown in the following image.  
![\[The contact search page, the view theme reports dropdown menu.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-view-theme-reports.png)

## How to generate a theme report

1. Log in to Amazon Connect using an account that has the following security profile permissions:
   + **Contact search - Access**
   + **Contact Lens - theme detection - Create**
   + **Contact Lens - theme detection - View**

1. In Amazon Connect, on the left navigation menu, choose **Analytics and optimization**, **Contact search**.

1. On the **Contact search** page, apply filters to select a group of contacts that have been analyzed by Contact Lens.
**Important**  
Your search query must return at least 300 contacts with issues detected by Contact Lens. Otherwise, the **Generate themes report** button is not enabled.

1. Choose **Save search** to save your results. Assign a name to your search.

1. Choose **Generate themes report**.

   Contact Lens applies machine learning to automatically group contacts with similar issues. When the report is generated, a banner displays a link to the theme report. An example banner is shown in the following image.  
![\[The contact search page, the theme detection banner.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-theme-detection-banner.png)

1. Click or tap the link for the theme report.

   The theme report is displayed. It includes theme labels and a list of contacts, as shown in the following image.   
![\[A theme report with several theme labels.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-theme-detection-drilldown.png)

1. Click or tap the theme labels to view associated contacts, listen to specific recordings, and read transcripts for deeper analysis.

# Investigate sentiment scores during contact conversations using Contact Lens

## What are sentiment scores?


A sentiment score is an analysis of text, and a rating of whether it includes mostly positive, negative, or neutral language. Supervisors can use sentiment scores to search conversations and identify contacts that are associated with varying degrees of customer experiences, positive or negative. It helps them identify which of their contacts to investigate. 

You can view a sentiment score for the entire conversation, as well as the sentiment trend across the whole contact.

## How to investigate sentiment scores


When working to improve your contact center, you may want to focus on the following: 
+ Contacts that start with a positive sentiment score but end with a negative score.

  If you want to focus on a limited set of contacts to sample for quality assurance, for example, you can look at contacts where you know the customer had a positive sentiment at the start but ended with a negative sentiment. That shows you they left the conversation unhappy about something. 
+ Contacts that start with a negative sentiment score but end with a positive score.

  Analyzing these contacts will help you identify what experiences you can recreate in your contact center. You can share successful techniques with other agents.

An additional way to look at sentiment progression is to check the sentiment trendline, which shows how the customer's sentiment varies as the contact progresses. For example, the following image shows a conversation whose sentiment score is very low at the beginning, rises, and then drops again at the end.

![\[Customer sentiment trend.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-sentiment-trend.png)


For more information, see [Search for sentiment score or evaluate sentiment shift](search-conversations.md#sentiment-search).

## How sentiment scores are determined


Amazon Connect Contact Lens analyzes the sentiment of each speaker turn in a conversation as positive, negative, or neutral. It then considers two factors for each participant turn to assign a score that ranges from -5 to +5 for each period of the call: 
+ Frequency: the number of times the sentiment is positive, negative, or neutral.
+ Sentiment streaks: consecutive turns with the same sentiment.

The overall sentiment score is the average of the scores assigned during each portion of the call.
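The exact scoring formula is not published; the following sketch only illustrates the final averaging step, assuming you already have per-period scores in the documented -5 to +5 range (the function name is illustrative):

```python
def overall_sentiment_score(period_scores: list) -> float:
    """Average per-period sentiment scores (each in [-5, +5]) into one overall score."""
    if not period_scores:
        raise ValueError("at least one period score is required")
    for score in period_scores:
        if not -5 <= score <= 5:
            raise ValueError(f"score {score} is outside the documented -5 to +5 range")
    return sum(period_scores) / len(period_scores)

# A call that starts negative and ends positive averages out near neutral.
print(overall_sentiment_score([-5, -2.5, 2.5, 5]))  # 0.0
```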

# Investigate non-talk time during calls using Amazon Connect Contact Lens

## What is non-talk time?


Amazon Connect Contact Lens also identifies the amount of *non-talk time* in a call. Non-talk time equals hold time plus any silence longer than 3 seconds during which neither participant is talking. This duration can't be customized.
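The silence-detection part of this definition can be sketched as follows, assuming you have utterance begin/end offsets in milliseconds (as in the Contact Lens output schema); hold time, which also counts toward the real metric, is not modeled here:

```python
def non_talk_intervals(utterances, min_silence_ms=3000):
    """Find gaps longer than min_silence_ms where neither participant is talking.

    Each utterance is a (begin_ms, end_ms) pair from either participant,
    measured from the start of the call.
    """
    intervals = []
    prev_end = 0
    for begin, end in sorted(utterances):
        if begin - prev_end > min_silence_ms:
            intervals.append((prev_end, begin))
        prev_end = max(prev_end, end)
    return intervals

# A 5-second gap between utterances is flagged; a 0.5-second gap is not.
print(non_talk_intervals([(0, 2000), (2500, 4000), (9000, 10000)]))  # [(4000, 9000)]
```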

The following image shows the location of non-talk time data on the **Contact details** page.

![\[The contact details page, the talk time section, the non-talk time data.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-nontalk-time-overview.png)


## How to investigate non-talk time


Non-talk time can help you identify calls that have gone poorly. This may be because:
+ The customer was asking a question that's new for your contact center.
+ The agent is well trained, but it's taking them a long time to do something. This indicates there may be an issue with the tools the agent is using. For example, the tools aren't responsive enough or aren't easy to use.
+ The agent is fairly new and didn't have a ready answer. This indicates they need more training.

You can decide whether to focus on these contacts to improve your contact center. For example, you can go to that section of the audio, and then look at the transcript to see what was going on.

In the following example, the non-talk time occurred while the agent was searching for the caller's trip ID. This could indicate an issue with the agent's tools; or, if the agent is new, that they need more training.

![\[The contact audio recording and transcript, the location of non-talk time.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-non-talk-time-transcript.png)


For more information, see [Search for non-talk time](search-conversations.md#nontalk-time-search).

# Investigate response time during chats in Contact Lens

Use the response time metric to understand the responsiveness of the agent or customer during a chat contact.

Contact Lens calculates the following metrics:
+ **Agent greeting time**. This is the first response time for the agent, which is how fast the agent engaged with the customer after the agent joined the chat. A long first response time may explain, for example, if a customer has a negative sentiment in the beginning of conversation.
+ **Avg agent response time** and **Avg customer response time**. The agent response time helps you check an agent's performance against your organization's baseline.
+ **Max agent response time** and **Max customer response time**.

  The customer's max response time may explain an agent's response time. For example, if a customer didn't reply for five minutes and then sent a message, it's possible the agent took longer than usual to respond because they were handling other chats at the same time. 

We recommend examining the response time metrics in conjunction with the interactions graph that shows gaps in conversation and participant sentiment.
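The response time metrics above can be sketched from a time-ordered chat transcript. The message format below (timestamp, role pairs) and the function name are illustrative, not the Contact Lens schema:

```python
from statistics import mean

def agent_response_metrics(messages, agent_joined_at):
    """Compute agent greeting time and average/max agent response times (seconds).

    `messages` is a time-ordered list of (timestamp_s, role) pairs, where role
    is "AGENT" or "CUSTOMER".
    """
    greeting = None
    response_times = []
    pending_customer_ts = None  # timestamp of the first unanswered customer message
    for ts, role in messages:
        if role == "AGENT":
            if greeting is None and ts >= agent_joined_at:
                greeting = ts - agent_joined_at
            if pending_customer_ts is not None:
                response_times.append(ts - pending_customer_ts)
                pending_customer_ts = None
        elif role == "CUSTOMER" and pending_customer_ts is None:
            pending_customer_ts = ts
    return {
        "agent_greeting_time": greeting,
        "avg_agent_response_time": mean(response_times) if response_times else None,
        "max_agent_response_time": max(response_times) if response_times else None,
    }
```

For example, if the agent joined at second 5 and first replied at second 10, the greeting time is 5 seconds, independent of how long the customer had already been waiting.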

You can click or tap the longest response time value on the graph to be directed to the associated message in the transcript. 

The following image of the **Contact details** page shows metrics for chat conversations. Note that **Agent greeting time** is how long after joining the chat the agent took to send their first response. 

![\[The contact details page, chat metrics.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-contactdetails-chat1b.png)


For more information, see [Search by response time for chat conversations](search-conversations.md#response-time-search).

# Investigate the loudness of agents and customers in calls using Contact Lens

A loudness score measures how loudly the customer or agent is speaking during a call. Contact Lens displays an analysis of the conversation that lets you identify where the customer or agent may be talking loudly and have a negative sentiment.

## How to use loudness scores


We recommend using loudness scores together with sentiments. Look for areas of the conversation where the loudness score is high and the sentiment is low. Then read that portion of the transcript or listen to that section of the call. 

For example, the following is an image of a recording and transcript analysis. Spiked vertical bars indicate where the customer is talking loudly. The horizontal red bars indicate their sentiment is negative.

![\[The contact details page, loudness scores.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-amplitude.png)


# Use sensitive data redaction to protect customer privacy using Contact Lens

To help you protect your customer's privacy, Contact Lens conversational analytics lets you automatically redact sensitive data from conversation transcripts, audio files, and email transcripts. It redacts sensitive data, such as names, addresses, and credit card information, using natural language understanding. 

When you enable conversational analytics on the **Set recording and analytics behavior** block, you then have the option to enable redaction. For more information, see [Enable redaction of sensitive data](enable-analytics.md#enable-redaction).

For voice contacts, sensitive data redaction is applied after a call disconnects. For email contacts, redaction is applied after the email contact ends.

**Important**  
The redaction feature is designed to identify and remove sensitive data. However, due to the predictive nature of machine learning, it may not identify and remove all instances of sensitive data in a transcript generated by Contact Lens. We recommend you review any redacted output to ensure it meets your needs.   
The redaction feature does not meet the requirements for de-identification under medical privacy laws like the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), so we recommend that you continue to treat redacted content as protected health information.

For a list of the languages supported by Contact Lens redaction, see [Languages supported by Amazon Connect features](supported-languages.md).

## About redacted files

Redacted voice files are stored in your voice Amazon S3 bucket, for example: connect-*instanceARN*/Analysis.

Redacted chat files are stored in your chat Amazon S3 bucket, for example: connect-*instanceARN*/Analysis/Chat.

Redacted email files are stored in your email Amazon S3 bucket, for example: connect-*instanceARN*/Analysis/Email.

You can access all files (redacted, unredacted, and raw) by using the Amazon S3 console.

Following is a list of what you can access by using the Amazon Connect admin website (such as on the **Contact details** page), assuming you have the appropriate [security profile permissions](permissions-for-contact-lens.md): 
+ Access redacted voice, chat, and email files. 
+ Download redacted voice recordings.

**Note**  
Currently, you cannot download redacted chat files and voice transcripts.

When redaction is enabled, Contact Lens generates the following files:
+ A redacted file. This file is generated by default when Redaction is enabled. It's the output schema, with sensitive data redacted. For an example file, see [Example redacted file for a call analyzed by Contact Lens conversational analytics](contact-lens-example-output-files.md#example-redacted-file).
+ An original (raw), analyzed file. This file is generated only when you choose **Get redacted and original transcripts with redacted audio** in the [Set recording and analytics behavior](set-recording-behavior.md) block. For an example file, see [Example original file for a call analyzed by Contact Lens conversational analytics](contact-lens-example-output-files.md#example-original-output-file).
**Important**  
For voice contacts, the original analyzed file is the only place where the complete conversation is stored. If you delete it, there will be no record of the sensitive data that was redacted. 
+ A redacted audio file (wav) for voice contacts. Sensitive data in audio files is redacted as silence. These silent times are not flagged in the Amazon Connect admin website or elsewhere as non-talk time. 

Use your file retention policies to determine how long to keep these files. 
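If you automate retention or auditing of these files, you can enumerate the analysis keys under the prefixes shown above. The following is a minimal, hedged sketch assuming a boto3-style `list_objects_v2` client; the bucket name and prefix are placeholders:

```python
def list_analysis_keys(s3_client, bucket, prefix="connect-myinstance/Analysis/"):
    """Return all object keys under the given analysis prefix, following pagination.

    `s3_client` is assumed to expose a boto3-style list_objects_v2 method; in
    tests you can pass a stub with the same shape.
    """
    keys, token = [], None
    while True:
        kwargs = {"Bucket": bucket, "Prefix": prefix}
        if token:
            kwargs["ContinuationToken"] = token
        resp = s3_client.list_objects_v2(**kwargs)
        keys.extend(obj["Key"] for obj in resp.get("Contents", []))
        if not resp.get("IsTruncated"):
            return keys
        token = resp.get("NextContinuationToken")
```

With a real client, you would pass `boto3.client("s3")` and your instance's bucket name.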

# Use Contact Lens APIs for voice and chat analytics

Contact Lens includes two APIs that support conversational analytics. Use these APIs to build solutions that make your contact center more efficient. 
+ [ListRealtimeContactAnalysisSegments](https://docs.aws.amazon.com/contact-lens/latest/APIReference/API_ListRealtimeContactAnalysisSegments.html): Use for voice contacts.
+ [ListRealtimeContactAnalysisSegmentsV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_ListRealtimeContactAnalysisSegmentsV2.html): Use for chat contacts.

These conversational analytics APIs are polling APIs with a standard request/response exchange, so you don't need to integrate with any other service. However, there are [rate limitations](amazon-connect-service-limits.md#connect-contactlens-api-quotas). If needed, you can avoid these limitations by using the [streaming API](contact-analysis-segment-streams.md), which requires integration with Amazon Kinesis Data Streams. 
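Because these are polling APIs, callers typically page through results with `NextToken`. The following sketch shows that loop in boto3 style for chat contacts; the client is injected so the shape can be exercised without AWS access, and the `OutputType`/`SegmentTypes` values shown are assumptions to check against the API reference:

```python
def fetch_all_segments(client, instance_id, contact_id):
    """Page through ListRealtimeContactAnalysisSegmentsV2 until NextToken is exhausted.

    `client` is assumed to expose a boto3-style method with the shape below; in
    tests you can pass a stub with the same interface.
    """
    segments, token = [], None
    while True:
        kwargs = {
            "InstanceId": instance_id,
            "ContactId": contact_id,
            "OutputType": "Raw",
            "SegmentTypes": ["Transcript"],
        }
        if token:
            kwargs["NextToken"] = token
        resp = client.list_realtime_contact_analysis_segments_v2(**kwargs)
        segments.extend(resp.get("Segments", []))
        token = resp.get("NextToken")
        if not token:
            return segments
```

With a real client, you would pass `boto3.client("connect")` and your instance and contact IDs.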

Following are two use cases for the call and chat analytics API.

## Better contact transfers


When a contact is transferred from one agent to another agent, you can transfer a transcript of the conversation to the new agent. The new agent then has context for why the customer is contacting your contact center, and the customer doesn't need to repeat information they already provided. Use the [ListRealtimeContactAnalysisSegments](https://docs.aws.amazon.com/contact-lens/latest/APIReference/API_ListRealtimeContactAnalysisSegments.html) API for voice contacts and the [ListRealtimeContactAnalysisSegmentsV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_ListRealtimeContactAnalysisSegmentsV2.html) API for chats to get the entire transcript of the conversation up to a certain point, and share it with the new agent. 

## Highlight key parts of the conversation as labels, issues, action items, and outcomes


With key highlights, agents can quickly make notes after the contact ends, and supervisors can quickly identify contacts for quality and agent performance management. This makes agents and supervisors more productive.

# Access Contact Lens analytics for voice and chat contacts using Amazon Kinesis Data Streams

Contact analysis segment streams enable you to access Contact Lens analytics for voice and chat contacts. Streaming overcomes the scaling limitations of the existing [call and chat analytics APIs](contact-lens-api.md). For voice contacts, it also provides access to a data segment called `Utterance` that gives you partial transcripts, enabling you to meet ultra-low latency requirements to assist agents on live calls. 

This section explains how to integrate with Amazon Kinesis Data Streams for streaming.

Through streaming, you can receive the following event types: 
+ STARTED events, published at the beginning of a contact analysis session.
+ SEGMENTS events, published during the contact analysis session. These events contain a list of segments with analyzed information.
+ COMPLETED or FAILED events, published at the end of a contact analysis session.

**Topics**
+ [Enable contact analysis segment streams](enable-contact-analysis-segment-streams.md)
+ [Voice: Data model for conversational analytics segment streams](real-time-contact-analysis-segment-streams-data-model.md)
+ [Chat: Data model for conversational analytics segment streams](chat-real-time-contact-analysis-segment-streams-data-model.md)
+ [Voice: Sample conversational analytics segment stream](sample-real-time-contact-analysis-segment-stream.md)
+ [Chat: Sample conversational analytics segment stream](chat-sample-real-time-contact-analysis-segment-stream.md)

# Enable contact analysis segment streams to analyze Contact Lens conversations

Contact analysis segment streams are not enabled by default. This topic explains how to enable them. 

## Step 1: Create a data stream in Amazon Kinesis Data Streams

Create the data stream on the same account and Region where your Amazon Connect instance resides. For instructions, see [Step 1: Create a Data Stream](https://docs.aws.amazon.com/streams/latest/dev/tutorial-stock-data-kplkcl-create-stream.html) in the *Amazon Kinesis Data Streams Developer Guide*.

**Tip**  
We recommend creating a separate stream for each type of data. While it's possible to use the same stream for contact analysis segment streams, agent events, and contact records, it is much easier to manage and get data from the stream when you use a separate stream for each one. For more information, see the [Amazon Kinesis Data Streams Developer Guide](https://docs.aws.amazon.com/streams/latest/dev/introduction.html). 

## Step 2: Set up server-side encryption for the Kinesis stream (optional but recommended)

There are several ways you can do this. 
+ Option 1: Use the Kinesis AWS managed key (`aws/kinesis`). This works with no additional setup from you.
+ Option 2: Use the same customer managed key for call recordings, chat transcripts, or exported reports in your Amazon Connect instance.

  Enable encryption, and use a customer managed key for call recordings, chat transcripts, or exported reports in your Amazon Connect instance. Then choose the same KMS key for your Kinesis data stream. This key already has the permission (grant) required to be used.
+ Option 3: Use a different customer managed key.

  Use an existing customer managed key or create a new one, and add the permissions required for the Amazon Connect role to use the key. To add permissions using AWS KMS grants, see the following example:

  ```
  aws kms create-grant \
      --key-id your key ID \
      --grantee-principal arn:aws:iam::your AWS account ID:role/aws-service-role/connect.amazonaws.com/AWSServiceRoleForAmazonConnect_11111111111111111111 \
      --operations GenerateDataKey \
      --retiring-principal arn:aws:iam::your AWS account ID:role/adminRole
  ```

  Where `grantee-principal` is the ARN of the service-linked role associated to your Amazon Connect instance. To find the ARN of the service-linked role, in the Amazon Connect console, go to **Overview**, **Distribution settings**, **Service-linked role**. 
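The grant in the CLI example above can also be created programmatically. The following Python sketch builds the equivalent parameters; the function name is illustrative, and the resulting dictionary would be passed to a boto3 KMS client's `create_grant` call (boto3 is not imported here).

```python
def build_connect_kms_grant(key_id: str, connect_role_arn: str, admin_role_arn: str) -> dict:
    """Build keyword arguments for kms_client.create_grant(**params),
    mirroring the CLI example above. The grantee principal must be the
    service-linked role ARN associated to your Amazon Connect instance."""
    return {
        "KeyId": key_id,
        "GranteePrincipal": connect_role_arn,
        # GenerateDataKey is the only operation the CLI example grants.
        "Operations": ["GenerateDataKey"],
        "RetiringPrincipal": admin_role_arn,
    }

# With boto3 you would then call, for example:
# boto3.client("kms").create_grant(**build_connect_kms_grant(...))
```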

## Step 3: Associate the Kinesis stream

Use the Amazon Connect [AssociateInstanceStorageConfig](https://docs.aws.amazon.com/connect/latest/APIReference/API_AssociateInstanceStorageConfig.html) API to associate the following resource types:
+ For voice contacts, use `REAL_TIME_CONTACT_ANALYSIS_VOICE_SEGMENTS`
+ For chat contacts, use `REAL_TIME_CONTACT_ANALYSIS_CHAT_SEGMENTS`

**Note**  
`REAL_TIME_CONTACT_ANALYSIS_SEGMENTS` is deprecated, but it is still supported and applies to voice contacts only. Use `REAL_TIME_CONTACT_ANALYSIS_VOICE_SEGMENTS` for voice contacts moving forward.  
If you have previously associated a stream with `REAL_TIME_CONTACT_ANALYSIS_SEGMENTS`, no action is needed to update the stream to `REAL_TIME_CONTACT_ANALYSIS_VOICE_SEGMENTS`.

Specify the Kinesis stream where real-time contact analysis segments will be published. You'll need the instance ID and the Kinesis stream ARN. The following code shows an example:

```
// Build request
const request: Connect.Types.AssociateInstanceStorageConfigRequest = {
  InstanceId: 'your Amazon Connect instance ID',
  ResourceType: 'REAL_TIME_CONTACT_ANALYSIS_VOICE_SEGMENTS or REAL_TIME_CONTACT_ANALYSIS_CHAT_SEGMENTS',
  StorageConfig: {
    StorageType: 'KINESIS_STREAM',
    KinesisStreamConfig: {
      StreamArn: 'the ARN of your Kinesis stream',
    },
  }
};
```

### AWS CLI

The following example is for chat contacts.

**Tip**  
If you don't include the AWS Region (`--region`), the command uses the default Region from your CLI profile.  
The `--storage-config` parameter value must not be enclosed in single quotes ('). Otherwise, the command raises an error.

```
aws connect associate-instance-storage-config \
--region "us-west-2" \
--instance-id your Amazon Connect instance ID \
--resource-type REAL_TIME_CONTACT_ANALYSIS_CHAT_SEGMENTS \
--storage-config StorageType=KINESIS_STREAM,KinesisStreamConfig={StreamArn=the ARN of your Kinesis stream}
```

### AWS SDK

The following example is for voice contacts.

```
import { Connect } from 'aws-sdk';

async function associate (): Promise <void> {
  const clientConfig: Connect.ClientConfiguration = {
    region: 'the Region of your Amazon Connect instance',
  };

  const connect = new Connect(clientConfig);

  // Build request
  const request: Connect.Types.AssociateInstanceStorageConfigRequest = {
    InstanceId: 'your Amazon Connect instance ID',
    ResourceType: 'REAL_TIME_CONTACT_ANALYSIS_VOICE_SEGMENTS',
    StorageConfig: {
      StorageType: 'KINESIS_STREAM',
      KinesisStreamConfig: {
        StreamArn: 'the ARN of your Kinesis stream',
      },
    }
  };

  try {
    // Execute request
    const response: Connect.Types.AssociateInstanceStorageConfigResponse = await connect.associateInstanceStorageConfig(request).promise();

    // Process response
    console.log(`raw response: ${JSON.stringify(response, null, 2)}`);
  } catch (err) {
    console.error(`Error calling associateInstanceStorageConfig. err.code: ${err.code}, ` +
      `err.message: ${err.message}, err.statusCode: ${err.statusCode}, err.retryable: ${err.retryable}`);
  }
}

associate().then(r => console.log('Done'));
```

## Step 4: Enable Contact Lens for your Amazon Connect instance

For instructions, see [Enable conversational analytics in Amazon Connect Contact Lens](enable-analytics.md).

## Step 5 (Optional): Review a sample segment stream

We recommend you review a [voice](sample-real-time-contact-analysis-segment-stream.md) or [chat](chat-sample-real-time-contact-analysis-segment-stream.md) sample segment stream to familiarize yourself with what it looks like.

# Data model for conversational analytics segment streams to analyze voice contacts in Contact Lens

Real-time contact analysis segment streams are generated in JSON. Event JSON blobs are published to the associated stream for every contact that has real-time conversational analytics enabled. The following types of events can be published for a conversational analytics session for a voice contact:
+ STARTED events—Each conversational analytics session publishes one STARTED event at the beginning of the session.
+ SEGMENTS events—Each conversational analytics session may publish zero or more SEGMENTS events during the session. These events contain a list of segments with analyzed information. For voice contacts, the list of segments may include "`Utterance`", "`Transcript`", "`Categories`", or "`PostContactSummary`" segments.
+ COMPLETED or FAILED events—Each conversational analytics session publishes one COMPLETED or FAILED event at the end of the session.
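A consumer of the associated Kinesis stream can branch on the `EventType` common property after decoding each record. The following Python sketch is illustrative only (the function names are not part of any AWS SDK); it assumes the payload arrives as the base64-encoded JSON blob delivered over the Kinesis API, though some SDKs decode the base64 layer for you.

```python
import base64
import json

def decode_record(kinesis_record: dict) -> dict:
    """Decode one Kinesis record's base64-encoded JSON payload into an event dict."""
    return json.loads(base64.b64decode(kinesis_record["Data"]))

def summarize_event(event: dict) -> str:
    """Branch on the EventType common property of a segment stream event."""
    etype = event["EventType"]
    if etype == "STARTED":
        return f"analysis started for contact {event['ContactId']}"
    if etype == "SEGMENTS":
        return f"{len(event.get('Segments', []))} segment(s) for contact {event['ContactId']}"
    # COMPLETED or FAILED: exactly one of these ends every session.
    return f"analysis {etype} for contact {event['ContactId']}"
```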

## Common properties included in all events for voice contacts

Every event includes the following properties:

**Version**  
The version of the event schema.   
Type: String

**Channel**  
The type of channel for this contact.  
Type: String  
Valid values: `VOICE`, `CHAT`, `TASK`  
For more information about channels, see [Channels and concurrency for routing contacts in Amazon Connect](channels-and-concurrency.md).

**AccountId**  
The identifier of the account where this contact takes place.  
Type: String

**ContactId**  
The identifier of the contact being analyzed.  
Type: String

**InstanceId**  
The identifier of the instance where this contact takes place.  
Type: String 

**LanguageCode**  
The language code associated to this contact.  
Type: String   
Valid values: the language code for one of the [supported languages for Contact Lens real-time call analytics](supported-languages.md#supported-languages-contact-lens). 

**EventType**  
The type of event published.  
Type: String  
Valid values: `STARTED`, `SEGMENTS`, `COMPLETED`, `FAILED` 

## STARTED event

`STARTED` events include only the common properties:
+ Version
+ Channel
+ AccountId
+ ContactId
+ LanguageCode
+ EventType: STARTED

## SEGMENTS event

`SEGMENTS` events include the following properties:
+ Version
+ Channel
+ AccountId
+ ContactId
+ LanguageCode
+ EventType: SEGMENTS
+ Segments: In addition to the common properties, `SEGMENTS` events include a list of segments with analyzed information.

  Type: Array of [Segment](#segment) objects
+ PostContactSummary: Information about the post-contact summary for a voice contact segment.

  Type: [PostContactSummary](https://docs.aws.amazon.com/connect/latest/APIReference/API_connect-contact-lens_PostContactSummary.html) object 

  Required: No

**Segment**  
An analyzed segment for a real-time analysis session.  
Each segment is an object with the following optional properties. Only one of these properties is present, depending on the segment type:  
+ Utterance
+ Transcript
+ Categories
+ PostContactSummary

**Utterance**  
The analyzed utterance.  
Required: No  
+ **Id**

  The identifier of the utterance.

  Type: String
+ **TranscriptId**

  The identifier of the transcript associated to this utterance.

  Type: String
+ **ParticipantId**

  The identifier of the participant.

  Type: String
+ **ParticipantRole**

  The role of the participant: for example, customer, agent, or system.

  Type: String
+ **PartialContent**

  The content of the utterance.

  Type: String
+ **BeginOffsetMillis**

  The beginning offset in the contact for this transcript.

  Type: Integer
+ **EndOffsetMillis**

  The end offset in the contact for this transcript.

  Type: Integer

**Transcript**  
The analyzed transcript.  
Type: [Transcript](https://docs.aws.amazon.com/contact-lens/latest/APIReference/API_Transcript.html) object   
Required: No

**Categories**  
The matched category rules.  
Type: [Categories](https://docs.aws.amazon.com/contact-lens/latest/APIReference/API_Categories.html) object  
Required: No

**PostContactSummary**  
Information about the post-contact summary for a voice contact segment.  
Type: [PostContactSummary](https://docs.aws.amazon.com/connect/latest/APIReference/API_connect-contact-lens_PostContactSummary.html) object  
Required: No
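Because each Segment object carries exactly one of these properties, a consumer can classify segments with a small helper. The following sketch is a hypothetical example, not an AWS API:

```python
# The four properties a voice Segment object may carry, per the data model above.
VOICE_SEGMENT_TYPES = ("Utterance", "Transcript", "Categories", "PostContactSummary")

def voice_segment_type(segment: dict) -> str:
    """Return which single segment property is present, raising if the
    segment doesn't match the documented one-property shape."""
    present = [key for key in VOICE_SEGMENT_TYPES if key in segment]
    if len(present) != 1:
        raise ValueError(f"expected exactly one segment property, got {present}")
    return present[0]
```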

## COMPLETED event

`COMPLETED` events include only the following common properties:
+ Version
+ Channel
+ AccountId
+ ContactId
+ LanguageCode
+ EventType: COMPLETED

## FAILED event

`FAILED` events include only the following common properties:
+ Version
+ Channel
+ AccountId
+ ContactId
+ LanguageCode
+ EventType: FAILED

# Data model for conversational analytics segment streams to analyze chats in Contact Lens

Conversational analytics segment streams for chat contacts are generated in JSON. Event JSON blobs are published to the associated stream for every contact that has real-time conversational analytics enabled. The following types of events can be published for a conversational analytics session for a chat contact:
+ STARTED events—Each conversational analytics session publishes one STARTED event at the beginning of the session.
+ SEGMENTS events—Each conversational analytics session may publish zero or more SEGMENTS events during the session. These events contain a list of segments with analyzed information. For chat contacts, the list of segments may include "`Attachments`," "`Transcript`," "`Categories`," "`Events`," "`Issues`," or "`PostContactSummary`" segments.
+ COMPLETED or FAILED events—Each conversational analytics session publishes one COMPLETED or FAILED event at the end of the session.

## Common properties included in all events for chat contacts

Every event includes the following properties:

**Version**  
The version of the event schema. For chat contacts, this is 2.0.0.  
Type: String

**Channel**  
The type of channel for this contact.  
Type: String  
Valid values: `VOICE`, `CHAT`, `TASK`  
For more information about channels, see [Channels and concurrency for routing contacts in Amazon Connect](channels-and-concurrency.md).

**AccountId**  
The identifier of the account where this contact takes place.  
Type: String

**InstanceId**  
The identifier of the instance where this contact takes place.  
Type: String 

**ContactId**  
The identifier of the contact being analyzed.  
Type: String

**StreamingEventType**  
The type of event published.  
Type: String   
Valid values: `STARTED`, `SEGMENTS`, `COMPLETED`, `FAILED`

**StreamingSettings**  
The Contact Lens settings for this contact.  
Type: [StreamingSettings](#streamingsettingsobject) object 

## StreamingSettings object

**LanguageCode**  
The language code associated to this contact.  
Type: String   
Valid values: the language code for one of the [supported languages for Contact Lens real-time call analytics](supported-languages.md#supported-languages-contact-lens). 

**Output**  
The Contact Lens output type enabled for this contact.  
Type: String  
Valid values: `Raw`, `Redacted`, `RedactedAndRaw` 

**RedactionTypes**  
The type of redaction enabled for this contact.  
Type: Array of Strings  
Valid values: `PII` 

**RedactionTypesMetadata**  
The redaction metadata for each redaction type.  
Type: RedactionType string to [RedactionMetadata](#redactionmetadata) object   
Valid values: `PII` 

## RedactionMetadata object

Provides information on redaction settings.

**RedactionMaskMode**  
The data redaction replacement setting.  
Type: String   
Valid values: `PII`, `EntityType`

## STARTED event

`STARTED` events include only the common properties:
+ Version
+ Channel
+ AccountId
+ ContactId
+ StreamingEventType: STARTED
+ StreamingSettings

## SEGMENTS event

`SEGMENTS` events include the following properties:
+ Version
+ Channel
+ AccountId
+ OutputType
  + The Contact Lens output type of the current segment
  + Type: String
  + Valid values: `Raw`, `Redacted`
+ ContactId
+ StreamingEventType: SEGMENTS
+ StreamingSettings
+ Segments
  + A list of segments with analyzed information.
  + Type: Array of [Segment](#chat-segment) objects

**Segment**  
An analyzed segment for a real-time analysis session.  
Each segment is an object with the following optional properties. Only one of these properties is present, depending on the segment type:  
+  [Attachments](#chat-attachments)
+  [Categories](#chat-category)
+  [Event](#chat-event)
+  [Issues](#chat-issues)
+  [Transcript](#chat-transcript)
+ [PostContactSummary](#chat-postcontactsummary)

**Attachments**  
The analyzed attachments.  
Required: No  
Type: [RealTimeContactAnalysisSegmentAttachments](https://docs.aws.amazon.com/connect/latest/APIReference/API_RealTimeContactAnalysisSegmentAttachments.html) object

**Categories**  
The matched category rules.  
Type: [RealTimeContactAnalysisSegmentCategories](https://docs.aws.amazon.com/connect/latest/APIReference/API_RealTimeContactAnalysisSegmentCategories.html) object  
Required: No

**Event**  
Segment type describing a contact event.  
Type: [RealTimeContactAnalysisSegmentEvent](https://docs.aws.amazon.com/connect/latest/APIReference/API_RealTimeContactAnalysisSegmentEvent.html) object  
Required: No

**Issues**  
Segment type containing a list of detected issues.  
Type: [RealTimeContactAnalysisSegmentIssues](https://docs.aws.amazon.com/connect/latest/APIReference/API_RealTimeContactAnalysisSegmentIssues.html) object  
Required: No

**Transcript**  
The analyzed transcript segment.  
Type: [RealTimeContactAnalysisSegmentTranscript](https://docs.aws.amazon.com/connect/latest/APIReference/API_RealTimeContactAnalysisSegmentTranscript.html) object  
Required: No

**PostContactSummary**  
Information about the post-contact summary for a real-time contact segment for chat.  
Type: [RealTimeContactAnalysisSegmentPostContactSummary](https://docs.aws.amazon.com/connect/latest/APIReference/API_RealTimeContactAnalysisSegmentPostContactSummary.html) object   
Required: No

## COMPLETED event

`COMPLETED` events include only the following common properties:
+ Version
+ Channel
+ AccountId
+ InstanceId
+ ContactId
+ StreamingEventType: COMPLETED
+ StreamingSettings

## FAILED event

`FAILED` events include only the following common properties:
+ Version
+ Channel
+ AccountId
+ InstanceId
+ ContactId
+ StreamingEventType: FAILED
+ StreamingSettings
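When `Output` is `RedactedAndRaw`, a reasonable reading of this data model is that SEGMENTS events are published in both forms, distinguished by the event-level `OutputType` (`Raw` or `Redacted`). The following filter is an illustrative sketch under that assumption, not AWS-provided code:

```python
def keep_chat_event(event: dict, want_output: str = "Redacted") -> bool:
    """Keep lifecycle events unconditionally; for SEGMENTS events, keep
    only the desired OutputType so a consumer doesn't double-process
    contacts whose Output setting is 'RedactedAndRaw'."""
    if event.get("StreamingEventType") != "SEGMENTS":
        # STARTED, COMPLETED, and FAILED are each published once per session.
        return True
    return event.get("OutputType") == want_output
```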

# Sample conversational analytics segment streams to analyze calls using Contact Lens

This topic provides sample segment streams for STARTED, SEGMENTS, COMPLETED, and FAILED events that can occur during a voice contact. 

## Sample STARTED event

+ EventType: STARTED
+ Published at the beginning of the conversational analytics session.

```
{
    "Version": "1.0.0",
    "Channel": "VOICE",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "LanguageCode": "en-US", // the language code of the contact
    "EventType": "STARTED"
}
```

## Sample SEGMENTS event

+ EventType: SEGMENTS
+ Published during a conversational analytics session. This event contains a list of segments with analyzed information. The list of segments may include "`Utterance`," "`Transcript`," "`Categories`," or "`PostContactSummary`" segments.

```
{
    "Version": "1.0.0",
    "Channel": "VOICE",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "LanguageCode": "en-US", // the language code of the contact
    "EventType": "SEGMENTS",
    "Segments": [
        {
            "Utterance": {
                "Id": "7b48ca3d-73d3-443a-bf34-a9e8fcc01747",
                "TranscriptId": "121d1581-905f-4169-9804-b841bb4df04a",
                "ParticipantId": "AGENT",
                "ParticipantRole": "AGENT",
                "PartialContent": "Hello, thank you for calling Example Corp. My name is Adam.",
                "BeginOffsetMillis": 19010,
                "EndOffsetMillis": 22980
            }
        },
        {
            "Utterance": {
                "Id": "75acb743-2154-486b-aaeb-c960ae290e88",
                "TranscriptId": "121d1581-905f-4169-9804-b841bb4df04a",
                "ParticipantId": "AGENT",
                "ParticipantRole": "AGENT",
                "PartialContent": "How can I help you?",
                "BeginOffsetMillis": 23000,
                "EndOffsetMillis": 24598
            }
        },
        {
            "Transcript": {
                "Id": "121d1581-905f-4169-9804-b841bb4df04a",
                "ParticipantId": "AGENT",
                "ParticipantRole": "AGENT",
                "Content": "Hello, thank you for calling Example Corp. My name is Adam. How can I help you?",
                "BeginOffsetMillis": 19010,
                "EndOffsetMillis": 24598,
                "Sentiment": "NEUTRAL"
            }
        },
        {
            "Transcript": {
                "Id": "4295e927-43aa-4447-bbfc-8fccc2027530",
                "ParticipantId": "CUSTOMER",
                "ParticipantRole": "CUSTOMER",
                "Content": "I'm having trouble submitting the application, number AX876293 on the portal. I tried but couldn't connect to my POC on the portal. So, I'm calling on this toll free number",
                "BeginOffsetMillis": 19010,
                "EndOffsetMillis": 22690,
                "Sentiment": "NEGATIVE",
                "IssuesDetected": [
                    {
                        "CharacterOffsets": {
                            "BeginOffsetChar": 0,
                            "EndOffsetChar": 81
                        }
                    }
                ]
            }
        },
        {
            "Categories": {
                "MatchedCategories": [
                    "CreditCardRelated",
                    "CardBrokenIssue"
                ],
                "MatchedDetails": {
                    "CreditCardRelated": {
                        "PointsOfInterest": [
                            {
                                "BeginOffsetMillis": 19010,
                                "EndOffsetMillis": 22690
                            }
                        ]
                    },
                    "CardBrokenIssue": {
                        "PointsOfInterest": [
                            {
                                "BeginOffsetMillis": 25000,
                                "EndOffsetMillis": 29690
                            }
                        ]
                    }
                }
            }
        },
        {
            "PostContactSummary": {
                "Content": "Customer contacted Example Corp because of an issue with their application",
                "Status": "COMPLETED"
            }
        }
    ]
}
```
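Information like matched categories and detected issue spans can be pulled straight out of a SEGMENTS event such as the sample above. The helpers below are illustrative only; they rely on the `Categories` and `Transcript` segment shapes shown in the sample.

```python
def matched_categories(event: dict) -> list:
    """Collect MatchedCategories from every Categories segment in a SEGMENTS event."""
    names = []
    for seg in event.get("Segments", []):
        names.extend(seg.get("Categories", {}).get("MatchedCategories", []))
    return names

def issue_texts(event: dict) -> list:
    """Slice detected-issue spans out of Transcript segments using CharacterOffsets."""
    texts = []
    for seg in event.get("Segments", []):
        transcript = seg.get("Transcript")
        if not transcript:
            continue
        for issue in transcript.get("IssuesDetected", []):
            offsets = issue["CharacterOffsets"]
            texts.append(transcript["Content"][offsets["BeginOffsetChar"]:offsets["EndOffsetChar"]])
    return texts
```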

## Sample COMPLETED event

+ EventType: COMPLETED
+ Published at the end of the conversational analytics session if the analysis completed successfully.

```
{
    "Version": "1.0.0",
    "Channel": "VOICE",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "LanguageCode": "en-US", // the language code of the contact
    "EventType": "COMPLETED"
}
```

## Sample FAILED event

+ EventType: FAILED
+ Published at the end of the conversational analytics session if the analysis failed.

```
{
    "Version": "1.0.0",
    "Channel": "VOICE",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "LanguageCode": "en-US", // the language code of the contact
    "EventType": "FAILED"
}
```

# Sample conversational analytics streams to analyze chats in Contact Lens

This topic provides sample segment streams for STARTED, SEGMENTS, COMPLETED, and FAILED events that occur during a chat contact. 

## Sample STARTED event

+ EventType: STARTED
+ Published at the beginning of the conversational analytics session.

```
{
    "Version": "2.0.0",
    "Channel": "CHAT",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "StreamingEventType": "STARTED",
    "StreamingSettings": {
      "LanguageCode": "en-US", // the language code of the contact
      "Output": "RedactedAndRaw",
      "RedactionTypes": [
          "PII"
      ],
      "RedactionTypesMetadata": {
        "PII": {
            "RedactionMaskMode": "PII"
         }
       }
    }
}
```

## Sample SEGMENTS event

+ EventType: [SEGMENTS](chat-real-time-contact-analysis-segment-streams-data-model.md#chat-segment-streams-data-model-segments-event) 
+ Published during a conversational analytics session. This event contains a list of [RealtimeContactAnalysisSegment](https://docs.aws.amazon.com/connect/latest/APIReference/API_RealtimeContactAnalysisSegment.html) objects with analyzed information. The list of segments may include `"Transcript"`, `"Categories"`, `"Issue"`, `"Event"`, `"Attachment"`, or `"PostContactSummary"` segments.

```
{
    "Version": "2.0.0",
    "Channel": "CHAT",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "OutputType": "Redacted",
    "StreamingEventType": "SEGMENTS",
    "StreamingSettings": {
        "LanguageCode": "en-US", // the language code of the contact
        "Output": "RedactedAndRaw",
        "RedactionTypes": [
            "PII"
        ],
        "RedactionTypesMetadata": {
            "PII": {
                "RedactionMaskMode": "PII"
            }
        }
    },
    "Segments": [{
        "Transcript": {
            "Id": "07a2d668-5c9e-4f69-b2fe-986261b0743a",
            "ParticipantId": "a309ac1e-ca87-44ca-bb5d-197eca8ed77a",
            "ParticipantRole": "AGENT",
            "DisplayName": "[PII]",
            "Content": "Hello, thank you for contacting Example Corp. My name is Ray.",
            "ContentType": "text/markdown",
            "Time": {
                "AbsoluteTime": "2024-03-14T19:39:26.715Z"
            },
            "Sentiment": "NEUTRAL"
        }
    }, {
        "Categories": {
            "MatchedDetails": {
                "Hi": {
                    "PointsOfInterest": [{
                        "TranscriptItems": [{
                            "Id": "5205b050-8aa9-4645-a381-a308801649ab",
                            "CharacterOffsets": {
                                "BeginOffsetChar": 0,
                                "EndOffsetChar": 40
                            }
                        }]
                    }]
                }
            }
        }
    }, {
        "Issues": {
            "IssuesDetected": [{
                "TranscriptItems": [{
                    "Content": "I have an issue with my bank account",
                    "Id": "0e5574a7-2aeb-4eab-8bb5-3a7f66a2284a",
                    "CharacterOffsets": {
                        "BeginOffsetChar": 7,
                        "EndOffsetChar": 43
                    }
                }]
            }]
        }
    }, {
        "Attachments": {
            "Id": "06ddc1eb-2302-4a8e-a73f-37687fe41aa9",
            "ParticipantId": "7810b1de-cca8-4153-b522-2498416255af",
            "ParticipantRole": "CUSTOMER",
            "DisplayName": "Customer",
            "Attachments": [{
                "AttachmentName": "Lily.jpg",
                "ContentType": "image/jpeg",
                "AttachmentId": "343e34da-391a-4541-8b7e-3909d931fcfa",
                "Status": "APPROVED"
            }],
            "Time": {
                "AbsoluteTime": "2024-03-14T19:39:26.715Z"
            }
        }
    }, {
        "Event": {
            "Id": "fbe61c5f-d0d8-4345-912a-4e81f5734d3b",
            "ParticipantId": "7810b1de-cca8-4153-b522-2498416255af",
            "ParticipantRole": "CUSTOMER",
            "DisplayName": "Customer",
            "EventType": "application/vnd.amazonaws.connect.event.participant.left",
            "Time": {
                "AbsoluteTime": "2024-03-14T19:40:00.614Z"
            }
        }
    },
    {
        "PostContactSummary": {
            "Content": "Customer contacted Example Corp because of an issue with their bank account",
            "Status": "COMPLETED"
        }
    }]
}
```

## Sample COMPLETED event

+ EventType: COMPLETED
+ Published at the end of the conversational analytics session if the analysis completed successfully.

```
{
    "Version": "2.0.0",
    "Channel": "CHAT",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "StreamingEventType": "COMPLETED",
    "StreamingSettings": {
        "LanguageCode": "en-US", // the language code of the contact
        "Output": "RedactedAndRaw",
        "RedactionTypes": ["PII"],
        "RedactionTypesMetadata": {
            "PII": {
                "RedactionMaskMode": "PII"
            }
        }
    }
}
```

## Sample FAILED event

+ EventType: FAILED
+ Published at the end of the conversational analytics session if the analysis failed.

```
{
    "Version": "2.0.0",
    "Channel": "CHAT",
    "AccountId": "123456789012", // your AWS account ID
    "InstanceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",  // your Amazon Connect instance ID
    "ContactId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", // the ID of the contact
    "StreamingEventType": "FAILED",
    "StreamingSettings": {
        "LanguageCode": "en-US",
        "Output": "RedactedAndRaw",
        "RedactionTypes": ["PII"],
        "RedactionTypesMetadata": {
            "PII": {
                "RedactionMaskMode": "PII"
            }
        }
    }
}
```

# Output file locations for files analyzed by Contact Lens conversational analytics

Following are examples of what the path looks like for Contact Lens conversational analytics output files when they are stored in the Amazon S3 bucket for your instance. 
+ Original analyzed transcript file (JSON)
  + /connect-instance-bucket/**Analysis/Voice**/2020/02/04/*contact's ID*_analysis_2020-02-04T21:14:16Z.json
  + /connect-instance-bucket/**Analysis/Chat**/2020/02/04/*contact's ID*_analysis_2020-02-04T21:14:16Z.json
  + /connect-instance-bucket/**Analysis/Email**/2026/03/10/*contact's ID*_analysis_20260310T22:35_UTC.json
+ Redacted analyzed transcript file (JSON)
  + /connect-instance-bucket/**Analysis/Voice/Redacted**/2020/02/04/*contact's ID*_**analysis_redacted**_2020-02-04T21:14:16Z.json
  + /connect-instance-bucket/**Analysis/Chat/Redacted**/2020/02/04/*contact's ID*_**analysis_redacted**_2020-02-04T21:14:16Z.json
  + /connect-instance-bucket/**Analysis/Email/Redacted**/2026/03/10/*contact's ID*_**analysis_redacted**_20260310T22:35_UTC.json
+ Redacted audio file
  + /connect-instance-bucket/**Analysis/Voice/Redacted**/2020/02/04/*contact's ID*_**call_recording_redacted**_2020-02-04T21:14:16Z.**wav**

**Important**  
To delete a recording, you must delete the files for both the redacted and unredacted recordings. 

# Example Contact Lens conversational analytics output files for a call

The following sections provide examples of the output that results when Contact Lens conversational analytics detects issues, matches categories, indicates loudness, redacts sensitive data, or skips analysis.


## Example original file for a call analyzed by Contact Lens conversational analytics

The following example shows the schema for a call that Contact Lens conversational analytics has analyzed. The example shows loudness, issue detection, call drivers, and the information that will be redacted.

Note the following about the analyzed file:
+ It doesn't indicate which sensitive data were redacted. All data are referred to as PII (personally identifiable information).
+ Each turn includes a `Redaction` section only if it includes PII.
+ If a `Redaction` section exists, it includes the offset in milliseconds. In a .wav file, the redacted portion will be silence. If desired, you can use the offset to replace the silence with something else, such as a beep. 
+ If two or more PII redactions exist in a turn, the first offset applies to the first PII, the second offset applies to the second PII, and so on.
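Since the `Redaction` offsets are given in milliseconds, replacing the redacted silence with a beep means converting each offset window into sample indices in the .wav file. The following sketch shows that conversion; the function name and the 8 kHz default sample rate are assumptions for illustration (use your recording's actual rate).

```python
def millis_to_sample_range(begin_ms: int, end_ms: int, sample_rate_hz: int = 8000) -> tuple:
    """Map a redaction offset window in milliseconds to (begin, end) sample
    indices in the audio file, e.g. to overwrite the silence with a beep."""
    # Integer sample index = milliseconds * samples-per-second / 1000.
    return (begin_ms * sample_rate_hz // 1000, end_ms * sample_rate_hz // 1000)
```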

```
{
  "Version": "1.1.0",    
  "AccountId": "your AWS account ID",
  "Channel": "VOICE",
  "ContentMetadata": {
      "Output": "Raw" 
  },
  "JobStatus": "COMPLETED",
  "JobDetails": {
    "SkippedAnalysis": [
        {
            "Feature": "CATEGORIZATION",
            "ReasonCode": "QUOTA_EXCEEDED", 
            "SkippedEntities": [
                {
                    "CategoryName": "PotentialFraud",
                    "RuleId": "a1130485-9529-4249-a1d4-5738b4883748"
                },
                {
                    "CategoryName": "Refund",
                    "RuleId": "bbbbbbb-9529-4249-a1d4-5738b4883748"
                }
            ]
        },
        {
            "Feature": "CATEGORIZATION",
            "ReasonCode": "FAILED_SAFETY_GUIDELINES", 
            "SkippedEntities": [
                {
                    "CategoryName": "ManagerEscalation",
                    "RuleId": "cccccccc-9529-4249-a1d4-5738b4883748"
                }
            ]
        }
    ]
  },
  "LanguageCode": "en-US",
  "Participants": [
      {
          "ParticipantId": "CUSTOMER",
          "ParticipantRole": "CUSTOMER"
      },
      
      {
          "ParticipantId": "AGENT",
          "ParticipantRole": "AGENT"
      }
  ],
  "Categories": {
      "MatchedCategories": ["Cancellation"],
      "MatchedDetails": {
          "Cancellation": {
              "PointsOfInterest": [
                  {
                      "BeginOffsetMillis": 7370,
                      "EndOffsetMillis": 11190
                  }
              ]
          }
      }
  },
  "ConversationCharacteristics": {
     "ContactSummary": {
          "PostContactSummary": {
           "Content": "The customer and agent's conversation did not have any clear issues, outcomes or next steps. Agent verified customer information and finished the call."
           }
      },
     "TotalConversationDurationMillis": 32110,
      "Sentiment": {
          "OverallSentiment": {
              "AGENT": 0,
              "CUSTOMER": 3.1
          },
          "SentimentByPeriod": {
              "QUARTER": {
                  "AGENT": [
                      {
                          "BeginOffsetMillis": 0,
                          "EndOffsetMillis": 7427,
                          "Score": 0
                      },
                      {
                          "BeginOffsetMillis": 7427,
                          "EndOffsetMillis": 14855,
                          "Score": -5
                      },
                      {
                          "BeginOffsetMillis": 14855,
                          "EndOffsetMillis": 22282,
                          "Score": 0
                      },
                      {
                          "BeginOffsetMillis": 22282,
                          "EndOffsetMillis": 29710,
                          "Score": 5
                      }
                  ],
                  "CUSTOMER": [
                      {
                          "BeginOffsetMillis": 0,
                          "EndOffsetMillis": 8027,
                          "Score": -2.5
                      },
                      {
                          "BeginOffsetMillis": 8027,
                          "EndOffsetMillis": 16055,
                          "Score": 5
                      },
                      {
                          "BeginOffsetMillis": 16055,
                          "EndOffsetMillis": 24082,
                          "Score": 5
                      },
                      {
                          "BeginOffsetMillis": 24082,
                          "EndOffsetMillis": 32110,
                          "Score": 5
                      }
                  ]
              }
          }
      },
      "Interruptions": {
        "InterruptionsByInterrupter": {
            "CUSTOMER": [
                {
                    "BeginOffsetMillis": 10710,
                    "DurationMillis": 3790,
                    "EndOffsetMillis": 14500
                }
            ],
            "AGENT": [
                {
                    "BeginOffsetMillis": 10710,
                    "DurationMillis": 3790,
                    "EndOffsetMillis": 14500
                }
            ]
        },
        "TotalCount": 2,
        "TotalTimeMillis": 7580
      },
      "NonTalkTime": {
          "TotalTimeMillis": 0,
          "Instances": []
      },
      "TalkSpeed": {
          "DetailsByParticipant": {
              "AGENT": {
                  "AverageWordsPerMinute": 239
              },
              "CUSTOMER": {
                  "AverageWordsPerMinute": 163
              }
          }
      },
      "TalkTime": {
          "TotalTimeMillis": 28698,
          "DetailsByParticipant": {
              "AGENT": {
                  "TotalTimeMillis": 15079
              },
              "CUSTOMER": {
                  "TotalTimeMillis": 13619
              }
          }
      }
  },
  "CustomModels": [
      {    // set via https://docs.aws.amazon.com/connect/latest/adminguide/add-custom-vocabulary.html             
           "Type": "TRANSCRIPTION_VOCABULARY",
           "Name": "ProductNames",  
           "Id": "4e14b0db-f00a-451a-8847-f6dbf76ae415" // optional field
      }
  ],
  "Transcript": [
      {
          "BeginOffsetMillis": 0,
          "Content": "Okay.",
          "EndOffsetMillis": 90,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              79.27
          ]
      },
      {
          "BeginOffsetMillis": 160,
          "Content": "Just hello. My name is Peter and help.",
          "EndOffsetMillis": 4640,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              66.56,
              40.06,
              85.27,
              82.22,
              77.66
          ],
          "Redaction": {
              "RedactedTimestamps": [
                  {
                      "BeginOffsetMillis": 3290,
                      "EndOffsetMillis": 3620
                  }
              ]
          }
      },
      {
          "BeginOffsetMillis": 4640,
          "Content": "Hello. Peter, how can I help you?",
          "EndOffsetMillis": 6610,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              70.23,
              73.05,
              71.8
          ],
          "Redaction": {
              "RedactedTimestamps": [
                  {
                      "BeginOffsetMillis": 5100,
                      "EndOffsetMillis": 5450
                  }
              ]
          }
      },
      {
          "BeginOffsetMillis": 7370,
          "Content": "I need to cancel. I want to cancel my plan subscription.",
          "EndOffsetMillis": 11190,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "NEGATIVE",
          "LoudnessScore": [
              77.18,
              79.59,
              85.23,
              81.08,
              73.99
          ],
          "IssuesDetected": [
              {
                  "CharacterOffsets": {
                      "BeginOffsetChar": 0,
                      "EndOffsetChar": 55
                  },
                  "Text": "I need to cancel. I want to cancel my plan subscription"
              }
          ]
      },
      {
          "BeginOffsetMillis": 11220,
          "Content": "That sounds very bad. I can offer a 20% discount to make you stay with us.",
          "EndOffsetMillis": 15210,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEGATIVE",
          "LoudnessScore": [
              75.92,
              75.79,
              80.31,
              80.44,
              76.31
          ]
      },
      {
          "BeginOffsetMillis": 15840,
          "Content": "That sounds interesting. Thank you accept.",
          "EndOffsetMillis": 18120,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              73.77,
              79.17,
              77.97,
              79.29
          ]
      },
      {
          "BeginOffsetMillis": 18310,
          "Content": "Alright, I made all the changes to the account and now these discounts applied.",
          "EndOffsetMillis": 21820,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              83.88,
              86.75,
              86.97,
              86.11
          ],
          "OutcomesDetected": [
              {
                  "CharacterOffsets": {
                      "BeginOffsetChar": 9,
                      "EndOffsetChar": 77
                  },
                  "Text": "I made all the changes to the account and now these discounts applied"
              }
          ]
      },
      {
          "BeginOffsetMillis": 22610,
          "Content": "Awesome. Thank you so much.",
          "EndOffsetMillis": 24140,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              79.11,
              81.7,
              78.15
          ]
      },
      {
          "BeginOffsetMillis": 24120,
          "Content": "No worries. I will send you all the details later today and call you back next week to check up on you.",
          "EndOffsetMillis": 29710,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              87.07,
              83.96,
              76.38,
              88.38,
              87.69,
              76.6
          ],
          "ActionItemsDetected": [
              {
                  "CharacterOffsets": {
                      "BeginOffsetChar": 12,
                      "EndOffsetChar": 102
                  },
                  "Text": "I will send you all the details later today and call you back next week to check up on you"
              }
          ]
      },
      {
          "BeginOffsetMillis": 30580,
          "Content": "Thank you. Sir. Have a nice evening.",
          "EndOffsetMillis": 32110,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              81.42,
              82.29,
              73.29
          ]
      }
  ]
}
```
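The `Redaction` offsets in the example above can be extracted programmatically, for example to locate the silenced portions of a .wav recording before overlaying a beep. A minimal sketch (the helper name and file path are illustrative, not part of the Contact Lens API):

```python
import json

def redaction_intervals(analysis: dict) -> list[tuple[int, int]]:
    """Collect (begin, end) millisecond offsets of every redacted
    portion from the Transcript array of a Contact Lens analysis file."""
    intervals = []
    for turn in analysis.get("Transcript", []):
        for span in turn.get("Redaction", {}).get("RedactedTimestamps", []):
            intervals.append((span["BeginOffsetMillis"], span["EndOffsetMillis"]))
    return sorted(intervals)

# Example: load a local copy of the analysis file (path is hypothetical)
# with open("analysis.json") as f:
#     print(redaction_intervals(json.load(f)))
```

Each returned interval marks a stretch of silence in the .wav file that you could overwrite with a beep tone of the same duration.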

## Example redacted file for a call analyzed by Contact Lens conversational analytics

This section shows an example redacted file for a call after it's been analyzed by Contact Lens conversational analytics. It is identical to the original analyzed file except that sensitive data is redacted. In this example, three entities were selected for redaction: `CREDIT_DEBIT_NUMBER`, `NAME`, and `USERNAME`.

In this example, `RedactionMaskMode` is set to `PII`. When an entity is redacted, Contact Lens replaces it with `[PII]`. If it were set to `ENTITY_TYPE`, Contact Lens would replace the data with the name of the entity, for example, `[CREDIT_DEBIT_NUMBER]`.

```
{
  "Version": "1.1.0", 
  "AccountId": "your AWS account ID",
  "ContentMetadata": {
      "Output": "Redacted",
      "RedactionTypes": ["PII"],
      "RedactionTypesMetadata": {
          "PII": {
              "RedactionEntitiesRequested": ["CREDIT_DEBIT_NUMBER", "NAME", "USERNAME"],
              "RedactionMaskMode": "PII" // if you were to choose ENTITY_TYPE instead, the redaction would say, for example, [NAME]
          }
      }
  },
  "Channel": "VOICE",
  "JobStatus": "COMPLETED",
  "JobDetails": {
    "SkippedAnalysis": [
        {
            "Feature": "CATEGORIZATION",
            "ReasonCode": "QUOTA_EXCEEDED", 
            "SkippedEntities": [
                {
                    "CategoryName": "PotentialFraud",
                    "RuleId": "a1130485-9529-4249-a1d4-5738b4883748"
                },
                {
                    "CategoryName": "Refund",
                    "RuleId": "bbbbbbb-9529-4249-a1d4-5738b4883748"
                }
            ]
        },
        {
            "Feature": "CATEGORIZATION",
            "ReasonCode": "FAILED_SAFETY_GUIDELINES", 
            "SkippedEntities": [
                {
                    "CategoryName": "ManagerEscalation",
                    "RuleId": "cccccccc-9529-4249-a1d4-5738b4883748"
                }
            ]
        }
    ]
  },
  "LanguageCode": "en-US",
  "Participants": [
      {
          "ParticipantId": "CUSTOMER",
          "ParticipantRole": "CUSTOMER"
      },
      
      {
          "ParticipantId": "AGENT",
          "ParticipantRole": "AGENT"
      }
  ],
  "Categories": {
      "MatchedCategories": ["Cancellation"],
      "MatchedDetails": {
          "Cancellation": {
              "PointsOfInterest": [
                  {
                      "BeginOffsetMillis": 7370,
                      "EndOffsetMillis": 11190
                  }
              ]
          }
      }
  }, 
  "ConversationCharacteristics": {
       "ContactSummary": {
             "PostContactSummary": {
               "Content": "The customer and agent's conversation did not have any clear issues, outcomes or next steps. Agent verified customer information and finished the call."
              }
      },
      "TotalConversationDurationMillis": 32110,
      "Sentiment": {
          "OverallSentiment": {
              "AGENT": 0,
              "CUSTOMER": 3.1
          },
          "SentimentByPeriod": {
              "QUARTER": {
                  "AGENT": [
                      {
                          "BeginOffsetMillis": 0,
                          "EndOffsetMillis": 7427,
                          "Score": 0
                      },
                      {
                          "BeginOffsetMillis": 7427,
                          "EndOffsetMillis": 14855,
                          "Score": -5
                      },
                      {
                          "BeginOffsetMillis": 14855,
                          "EndOffsetMillis": 22282,
                          "Score": 0
                      },
                      {
                          "BeginOffsetMillis": 22282,
                          "EndOffsetMillis": 29710,
                          "Score": 5
                      }
                  ],
                  "CUSTOMER": [
                      {
                          "BeginOffsetMillis": 0,
                          "EndOffsetMillis": 8027,
                          "Score": -2.5
                      },
                      {
                          "BeginOffsetMillis": 8027,
                          "EndOffsetMillis": 16055,
                          "Score": 5
                      },
                      {
                          "BeginOffsetMillis": 16055,
                          "EndOffsetMillis": 24082,
                          "Score": 5
                      },
                      {
                          "BeginOffsetMillis": 24082,
                          "EndOffsetMillis": 32110,
                          "Score": 5
                      }
                  ]
              }
          }
      },
      "Interruptions": {
        "InterruptionsByInterrupter": {
            "CUSTOMER": [
                {
                    "BeginOffsetMillis": 10710,
                    "DurationMillis": 3790,
                    "EndOffsetMillis": 14500
                }
            ],
            "AGENT": [
                {
                    "BeginOffsetMillis": 10710,
                    "DurationMillis": 3790,
                    "EndOffsetMillis": 14500
                }
            ]
        },
        "TotalCount": 2,
        "TotalTimeMillis": 7580
      },  
      "NonTalkTime": {
          "TotalTimeMillis": 0,
          "Instances": []
      },
      "TalkSpeed": {
          "DetailsByParticipant": {
              "AGENT": {
                  "AverageWordsPerMinute": 239
              },
              "CUSTOMER": {
                  "AverageWordsPerMinute": 163
              }
          }
      },
      "TalkTime": {
          "TotalTimeMillis": 28698,
          "DetailsByParticipant": {
              "AGENT": {
                  "TotalTimeMillis": 15079
              },
              "CUSTOMER": {
                  "TotalTimeMillis": 13619
              }
          }
      }
  },
  "CustomModels": [
      {   // set via https://docs.aws.amazon.com/connect/latest/adminguide/add-custom-vocabulary.html
           "Type": "TRANSCRIPTION_VOCABULARY",
           "Name": "ProductNames",
           "Id": "4e14b0db-f00a-451a-8847-f6dbf76ae415" // optional field
      }
  ],  
  "Transcript": [
      {
          "BeginOffsetMillis": 0,
          "Content": "Okay.",
          "EndOffsetMillis": 90,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              79.27
          ]
      },
      {
          "BeginOffsetMillis": 160,
          "Content": "Just hello. My name is [PII] and help.",  
          "EndOffsetMillis": 4640,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              66.56,
              40.06,
              85.27,
              82.22,
              77.66
          ],
          "Redaction": {
              "RedactedTimestamps": [
                  {
                      "BeginOffsetMillis": 3290,
                      "EndOffsetMillis": 3620
                  }
              ]
          }
      },
      {
          "BeginOffsetMillis": 4640,
          "Content": "Hello. [PII], how can I help you?",
          "EndOffsetMillis": 6610,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              70.23,
              73.05,
              71.8
          ],
          "Redaction": {
              "RedactedTimestamps": [
                  {
                      "BeginOffsetMillis": 5100,
                      "EndOffsetMillis": 5450
                  }
              ]
          }
      },
      {
          "BeginOffsetMillis": 7370,
          "Content": "I need to cancel. I want to cancel my plan subscription.",
          "EndOffsetMillis": 11190,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "NEGATIVE",
          "LoudnessScore": [
              77.18,
              79.59,
              85.23,
              81.08,
              73.99
          ],
          "IssuesDetected": [
              {
                  "CharacterOffsets": {
                      "BeginOffsetChar": 0,
                      "EndOffsetChar": 55
                  },
                  "Text": "I need to cancel. I want to cancel my plan subscription"
              }
          ]
      },
      {
          "BeginOffsetMillis": 11220,
          "Content": "That sounds very bad. I can offer a 20% discount to make you stay with us.",
          "EndOffsetMillis": 15210,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEGATIVE",
          "LoudnessScore": [
              75.92,
              75.79,
              80.31,
              80.44,
              76.31
          ]
      },
      {
          "BeginOffsetMillis": 15840,
          "Content": "That sounds interesting. Thank you accept.",
          "EndOffsetMillis": 18120,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              73.77,
              79.17,
              77.97,
              79.29
          ]
      },
      {
          "BeginOffsetMillis": 18310,
          "Content": "Alright, I made all the changes to the account and now these discounts applied.",
          "EndOffsetMillis": 21820,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "NEUTRAL",
          "LoudnessScore": [
              83.88,
              86.75,
              86.97,
              86.11
          ],
          "OutcomesDetected": [
              {
                  "CharacterOffsets": {
                      "BeginOffsetChar": 9,
                      "EndOffsetChar": 77
                  },
                  "Text": "I made all the changes to the account and now these discounts applied"
              }
          ]
      },
      {
          "BeginOffsetMillis": 22610,
          "Content": "Awesome. Thank you so much.",
          "EndOffsetMillis": 24140,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              79.11,
              81.7,
              78.15
          ]
      },
      {
          "BeginOffsetMillis": 24120,
          "Content": "No worries. I will send you all the details later today and call you back next week to check up on you.",
          "EndOffsetMillis": 29710,
          "Id": "the ID of the turn",
          "ParticipantId": "AGENT",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              87.07,
              83.96,
              76.38,
              88.38,
              87.69,
              76.6
          ],
          "ActionItemsDetected": [
              {
                  "CharacterOffsets": {
                      "BeginOffsetChar": 12,
                      "EndOffsetChar": 102
                  },
                  "Text": "I will send you all the details later today and call you back next week to check up on you"
              }
          ]
      },
      {
          "BeginOffsetMillis": 30580,
          "Content": "Thank you. Sir. Have a nice evening.",
          "EndOffsetMillis": 32110,
          "Id": "the ID of the turn",
          "ParticipantId": "CUSTOMER",
          "Sentiment": "POSITIVE",
          "LoudnessScore": [
              81.42,
              82.29,
              73.29
          ]
      }
  ]    
}
```

# Example Contact Lens output files for a chat analyzed by Contact Lens conversational analytics

This section shows an example schema for a chat conversation that has been analyzed by Contact Lens conversational analytics. The example shows inferred sentiment, matched categories, contact summary, and response time.

The original, analyzed file contains the full chat transcript. The same content that appears in the chat **Transcript** field on the **Contact details** page is present in the `Transcript` field of the original Contact Lens analysis file. In addition, the analyzed file may contain more fields, such as a `Redaction` section that indicates there is redacted data in the redacted analysis file.

**Note**  
 Some `ConversationCharacteristics` include `DetailsByParticipantRole` maps, with participant roles as keys. However, not all roles from the `Participants` list (such as `CUSTOMER` or `AGENT`) are guaranteed to have corresponding keys in the `DetailsByParticipantRole` objects. The presence of a key for a participant depends on whether there was eligible data for Contact Lens analysis.

## Categories

`PointsOfInterest` differs between post-chat and post-call categories:
+ Post-call `PointsOfInterest` contains offsets in milliseconds.
+ Post-chat `PointsOfInterest` contains an array of `TranscriptItems`; each item has an `Id` and `CharacterOffsets`.

`PointsOfInterest` is an array, and each element in it contains an array of `TranscriptItems`: each `PointOfInterest` corresponds to one category match, but a single match can span multiple transcript items.

For both calls and chats, the `PointsOfInterest` array can be empty. An empty array means that the category matched the whole contact. For example, if you create a rule that matches a category when `Hello` is not mentioned in the contact, there is no specific portion of the transcript to pinpoint for this condition.

**Note**  
Currently, categories are inferred for `text/plain` and `text/markdown` chat messages only.
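The snippets that triggered a chat category match can be recovered by joining each `PointsOfInterest` entry back to the transcript by item `Id` and slicing the content with its `CharacterOffsets`. A sketch, assuming transcript items expose `Id` and `Content` as in the example files on this page (the helper name is illustrative):

```python
def matched_category_text(analysis: dict) -> dict[str, list[str]]:
    """Map each matched category name to the transcript snippets
    that triggered the match."""
    # Index transcript items by Id for fast lookup.
    by_id = {item["Id"]: item.get("Content", "")
             for item in analysis.get("Transcript", [])}
    results = {}
    details = analysis.get("Categories", {}).get("MatchedDetails", {})
    for category, detail in details.items():
        snippets = []
        for poi in detail.get("PointsOfInterest", []):
            # A single match can span multiple transcript items.
            for ref in poi.get("TranscriptItems", []):
                offsets = ref["CharacterOffsets"]
                content = by_id.get(ref["Id"], "")
                snippets.append(
                    content[offsets["BeginOffsetChar"]:offsets["EndOffsetChar"]])
        results[category] = snippets
    return results
```

Note that a category with an empty `PointsOfInterest` array (a whole-contact match) simply yields an empty snippet list.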

## Key highlights

**Key highlights** are located in the `ConversationCharacteristics.ContactSummary.SummaryItemsDetected` array. The array contains at most one item: only one set of issue, outcome, and action items is identified.

Each object in the array has the following fields: `IssuesDetected`, `OutcomesDetected`, and `ActionItemsDetected`.

Each of these fields contains an array of `TranscriptItems`, each with an `Id` and `CharacterOffsets`. They identify the transcript items, and the specific parts of them, that contain the corresponding contact summary element: issue, outcome, or action item.

**Note**  
Currently, key highlights are inferred for `text/plain` chat messages only.
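The structure above can be unpacked with a small helper that pulls the at-most-one summary item apart by highlight type. A sketch (the function name and returned key names are illustrative):

```python
def key_highlights(analysis: dict) -> dict[str, list]:
    """Return the transcript item references for each highlight type
    (issues, outcomes, action items) from SummaryItemsDetected."""
    items = (analysis.get("ConversationCharacteristics", {})
                     .get("ContactSummary", {})
                     .get("SummaryItemsDetected", []))
    if not items:
        return {}
    item = items[0]  # the array contains at most one item
    return {
        "issues": item.get("IssuesDetected", []),
        "outcomes": item.get("OutcomesDetected", []),
        "action_items": item.get("ActionItemsDetected", []),
    }
```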

## Sentiment

### Overall sentiment


The `DetailsByParticipantRole` field contains the sentiment score for contact participants, similar to the Contact Lens speech analytics file.

The `DetailsByInteraction` field has the `CUSTOMER` sentiment score for the parts of the chat interaction `WithAgent` and `WithoutAgent`. If there were no customer messages in those parts of the interaction, the respective field is absent.

**Note**  
Currently, sentiment is inferred for `text/plain` and `text/markdown` chat messages only.

### Sentiment shift


The `DetailsByParticipantRole` field contains an object that describes the sentiment shift for contact participants (that is, `AGENT`, `CUSTOMER`): `BeginScore` and `EndScore`. 

The `DetailsByInteraction` field has `CUSTOMER` sentiment shift for parts of the chat interaction `WithAgent` and `WithoutAgent`. If there were no customer messages in those parts of the interaction, the respective field will be absent.

Sentiment shift provides information about how the participant's sentiment changed throughout the chat interaction.

## Response time


`AgentGreetingTimeMillis` measures the time between when the `AGENT` joined the chat and the moment when they ended their first message to the customer.

`DetailsByParticipantRole` has the following characteristics for each participant:
+ `Average`: The average response time for the participant.
+ `Maximum`: The longest response time for the participant. If multiple transcript items share the same maximum response time, they are all listed.

To calculate the `Average` and `Maximum` response times for a given participant, that participant must respond to a message from the other participant (the `AGENT` needs to respond to the `CUSTOMER`, or vice versa).

For example, if there was only one message from `CUSTOMER` and then only one message from `AGENT` before the chat ended, Contact Lens will calculate a response time for the `AGENT`, but not for the `CUSTOMER`. 

**Note**  
Currently, response time is inferred for `text/plain` and `text/markdown` chat messages only.
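The pairing rule can be illustrated with a small sketch that walks an ordered message timeline and records a response time only when a participant's message follows one from the other participant. This is an illustration of the rule described above, not Contact Lens's exact algorithm:

```python
def response_times(messages: list[tuple[str, int]]) -> dict:
    """messages: ordered (participant_role, timestamp_millis) pairs.
    A response time is recorded only when a participant's message
    follows a message from the other participant."""
    observed: dict[str, list[int]] = {}
    for (prev_role, prev_ts), (role, ts) in zip(messages, messages[1:]):
        if role != prev_role:
            observed.setdefault(role, []).append(ts - prev_ts)
    return {role: {"Average": sum(v) / len(v), "Maximum": max(v)}
            for role, v in observed.items()}
```

With a single `CUSTOMER` message followed by a single `AGENT` message, only the `AGENT` gets a response time, matching the example in the text.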

## Redaction


Note the following about the original analysis file for chats:
+ A transcript item includes a `Redaction` section only if there is data to be redacted. The section contains character offsets for the data that is redacted in the redacted analysis file.
+ If two or more pieces of a message are redacted, the first offset applies to the first redacted piece, the second offset applies to the second redacted piece, and so on.

`DisplayNames` for `AGENT` and `CUSTOMER` are redacted because they contain PII. This applies to `AttachmentName`, too.

`CharacterOffsets` account for changes to the `Content` length caused by redaction in the redacted analysis file. That is, `CharacterOffsets` describe the redacted content, not the original content.
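Because the offsets describe the redacted content, slicing the redacted `Content` with them yields the mask tokens directly. A sketch, assuming the offsets use the `BeginOffsetChar`/`EndOffsetChar` convention seen in `CharacterOffsets` elsewhere on this page:

```python
def redacted_spans(content: str, offsets: list[dict]) -> list[str]:
    """Given the redacted Content of a transcript item and its redaction
    character offsets, return the masked slices, for example "[PII]"."""
    return [content[o["BeginOffsetChar"]:o["EndOffsetChar"]] for o in offsets]
```

Each slice should equal a mask such as `[PII]` (or `[NAME]` and so on when `RedactionMaskMode` is `ENTITY_TYPE`), which is a quick way to sanity-check offsets against a redacted file.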

## Example original chat file


```
{
    "AccountId": "123456789012",
    "Categories": {
        "MatchedCategories": [
            "agent-intro"
        ],
        "MatchedDetails": {
            "agent-intro": {
                "PointsOfInterest": [
                    {
                        "TranscriptItems": [
                            {
                                "CharacterOffsets": {
                                    "BeginOffsetChar": 0,
                                    "EndOffsetChar": 73
                                },
                                "Id": "e4949dd1-aaa1-4fbd-84e7-65c95b2d3d9a"
                            }
                        ]
                    }
                ]
            }
        }
    },
    "Channel": "CHAT",
    "ChatTranscriptVersion": "2019-08-26",
    "ContentMetadata": {
        "Output": "Raw"
    },
    "ConversationCharacteristics": {
        "ContactSummary": {
            "PostContactSummary": {
                "Content": "The customer and agent's conversation did not have any clear issues, outcomes or next steps. Agent verified customer information and finished the call."
            },
            "SummaryItemsDetected": [
                {
                    "ActionItemsDetected": [],
                    "IssuesDetected": [
                        {
                            "TranscriptItems": [
                                {
                                    "CharacterOffsets": {
                                        "BeginOffsetChar": 72,
                                        "EndOffsetChar": 244
                                    },
                                    "Id": "2b8ba020-53ee-4053-b5b7-35364ac1c7df"
                                }
                            ]
                        }
                    ],
                    "OutcomesDetected": [
                        {
                            "TranscriptItems": [
                                {
                                    "CharacterOffsets": {
                                        "BeginOffsetChar": 0,
                                        "EndOffsetChar": 150
                                    },
                                    "Id": "72cc8c8d-2199-422a-b363-01d6d3fdc851"
                                }
                            ]
                        }
                    ]
                }
            ]
        },
        "ResponseTime": {
            "AgentGreetingTimeMillis": 2511,
            "DetailsByParticipantRole": {
                "AGENT": {
                    "Average": {
                        "ValueMillis": 5575
                    },
                    "Maximum": {
                        "TranscriptItems": [
                            {
                                "Id": "21acf0fc-7259-4a08-b4cd-688eb56587d3"
                            }
                        ],
                        "ValueMillis": 7309
                    }
                },
                "CUSTOMER": {
                    "Average": {
                        "ValueMillis": 5875
                    },
                    "Maximum": {
                        "TranscriptItems": [
                            {
                                "Id": "c71ad383-f876-4bb3-b254-7837b6a3d395"
                            }
                        ],
                        "ValueMillis": 11366
                    }
                }
            }
        },
        "Sentiment": {
            "DetailsByTranscriptItemGroup": [
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "e4949dd1-aaa1-4fbd-84e7-65c95b2d3d9a"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3673d926-6e75-4620-a6f0-7ea571790a15"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "46d37141-32d8-4f2e-a664-bcd3f34a68b3"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3c4a2a1e-6790-46a6-8ad4-4a0980b04795"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "f9cd41b6-3f68-4e83-a47d-664395f324c0"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "21acf0fc-7259-4a08-b4cd-688eb56587d3"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "2b8ba020-53ee-4053-b5b7-35364ac1c7df"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "28d0a1ce-64d1-4625-bbef-4cfeb97b6742"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "ef9b8622-32d5-4cfd-9ccc-a242502267bc"
                        },
                        {
                            "Id": "03a9de67-f9e1-4884-a1a3-ecea78a4ce9e"
                        },
                        {
                            "Id": "cfee5ece-a671-4a11-9ec2-89aba4b7d688"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "72cc8c8d-2199-422a-b363-01d6d3fdc851"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "61bb2591-fe87-44e4-bba0-a3619c4cef1f"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "1761f27e-0989-4b6d-a046-fc03d2c6bc9c"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 3.3333333333333335,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "8cdff161-dc25-44e6-986f-fc0e08ee0a7d"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -1.6666666666666667,
                    "Sentiment": "NEGATIVE",
                    "TranscriptItems": [
                        {
                            "Id": "bcc51949-3a79-4398-be1b-a27345a8a8ad"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -3.75,
                    "Sentiment": "NEGATIVE",
                    "TranscriptItems": [
                        {
                            "Id": "7d5c07d7-3d26-4b34-ae91-39aeaeef685c"
                        },
                        {
                            "Id": "e0efbd17-9139-439b-8c80-ebf2b9b703b9"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -3.75,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "8fbb8dd4-9fd4-4991-83dc-5f06eeead9aa"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -2.5,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3b856fd9-0eeb-4fb2-93ed-95ec4aeae3a6"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "ecb8c498-96d7-448b-8360-366eeddb4090"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "d334058f-e3de-4cf1-a361-32e4e61f1839"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3ec6adb5-3f11-409c-af39-40cf7ba6f078"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "c71ad383-f876-4bb3-b254-7837b6a3d395"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "4b292b64-4a33-45ff-89df-d5a175d16d70"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "2da5a3c2-9d1b-458c-ae53-759a4e63198d"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "e23a2331-f3fc-4d3c-8a51-1541451186c9"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 3.75,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "5a27cc39-9b73-4ebe-9275-5e6723788a1b"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 3.75,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "540368c7-ec19-4fc0-8c86-0a5ee62d31a0"
                        }
                    ]
                }
            ],
            "OverallSentiment": {
                "DetailsByInteraction": {
                    "DetailsByParticipantRole": {
                        "CUSTOMER": {
                            "WithAgent": 0
                        }
                    }
                },
                "DetailsByParticipantRole": {
                    "AGENT": 1.1538461538461537,
                    "CUSTOMER": 0
                }
            },
            "SentimentShift": {
                "DetailsByInteraction": {
                    "DetailsByParticipantRole": {
                        "CUSTOMER": {
                            "WithAgent": {
                                "BeginScore": -3,
                                "EndScore": 3.75
                            }
                        }
                    }
                },
                "DetailsByParticipantRole": {
                    "AGENT": {
                        "BeginScore": 0,
                        "EndScore": 2.5
                    },
                    "CUSTOMER": {
                        "BeginScore": -3.75,
                        "EndScore": 3.75
                    },
                    "SYSTEM": {
                        "BeginScore": 2.5,
                        "EndScore": 0
                    }
                }
            }
        }
    },
    "CustomerMetadata": {
        "ContactId": "b49644f6-672f-445c-b209-f76b36482830",
        "InputS3Uri": "path to the json file in s3",
        "InstanceId": "f23fc323-3d6d-48aa-95dc-EXAMPLE012"
    },
    "JobStatus": "COMPLETED",
    "LanguageCode": "en-US",
    "Participants": [
        {
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER"
        },
        {
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM"
        },
        {
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT"
        }
    ],
    "Transcript": [
        {
            "AbsoluteTime": "2022-10-27T03:31:50.735Z",
            "ContentType": "application/vnd.amazonaws.connect.event.participant.joined",
            "DisplayName": "John",
            "Id": "740c494d-9df7-4400-91c0-3e4df33922c8",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "EVENT"
        },
        {
            "AbsoluteTime": "2022-10-27T03:31:53.390Z",
            "Content": "Hello, thanks for contacting us. This is an example of what the Amazon Connect virtual contact center can enable you to do.",
            "ContentType": "text/plain",
            "DisplayName": "SYSTEM_MESSAGE",
            "Id": "78aa8229-714a-4c87-916b-ce7d8d567ab2",
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:31:55.131Z",
            "Content": "The time in queue is less than 5 minutes.",
            "ContentType": "text/plain",
            "DisplayName": "SYSTEM_MESSAGE",
            "Id": "1276382b-facb-49c5-8d34-62e3b0f50002",
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:31:56.618Z",
            "Content": "You are now being placed in queue to chat with an agent.",
            "ContentType": "text/plain",
            "DisplayName": "SYSTEM_MESSAGE",
            "Id": "88c2363e-8206-4781-a353-c15e1ccacc12",
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:00.951Z",
            "ContentType": "application/vnd.amazonaws.connect.event.participant.joined",
            "DisplayName": "Jane",
            "Id": "c05cca74-d50b-4aa5-b46c-fdb5ae8c814c",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "EVENT"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:03.462Z",
            "Content": "Hello, thanks for reaching Example Corp. This is Jane. How may I help you?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "e4949dd1-aaa1-4fbd-84e7-65c95b2d3d9a",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 46,
                        "EndOffsetChar": 53
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:08.102Z",
            "Content": "I'd like to see if I can get a refund or an exchange, because I ordered one of your grow-it-yourself indoor herb garden kits and nothing sprouted after a couple weeks so I think something is wrong with the seeds and this product may be defective.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "bcc51949-3a79-4398-be1b-a27345a8a8ad",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:14.137Z",
            "Content": "My wife is blind and sensitive to the sun so I was going to surprise her for her birthday with all the herbs that she loves so you guys actually really let me down.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "7d5c07d7-3d26-4b34-ae91-39aeaeef685c",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:18.781Z",
            "Content": "I should be taking my business elsewhere. I don't see why I should be giving money to a company that isn't even going to sell a product that works.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "e0efbd17-9139-439b-8c80-ebf2b9b703b9",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:24.123Z",
            "Content": "Ok. Can I get your first and last name please?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "3673d926-6e75-4620-a6f0-7ea571790a15",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:29.879Z",
            "Content": "Yeah. My first name is John and last name is Doe.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "8fbb8dd4-9fd4-4991-83dc-5f06eeead9aa",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 21,
                        "EndOffsetChar": 26
                    },
                    {
                        "BeginOffsetChar": 44,
                        "EndOffsetChar": 49
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:34.670Z",
            "Content": "Could you please provide me with the order ID number?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "46d37141-32d8-4f2e-a664-bcd3f34a68b3",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:39.726Z",
            "Content": "Yes, just . Looking ...",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "3b856fd9-0eeb-4fb2-93ed-95ec4aeae3a6",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:44.887Z",
            "Content": "Not a problem, take your time.",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "3c4a2a1e-6790-46a6-8ad4-4a0980b04795",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:52.978Z",
            "Content": "Okay, that should be #5376897. You know, if the product was fine I wouldn't have to scrounge through emails.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "ecb8c498-96d7-448b-8360-366eeddb4090",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:59.441Z",
            "Content": "alright, perfect. And could you also just confirm the shipping address for me?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "f9cd41b6-3f68-4e83-a47d-664395f324c0",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 77,
                        "EndOffsetChar": 78
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:05.455Z",
            "Content": "123 Any Street, Any Town, and the zip code is 98109.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "d334058f-e3de-4cf1-a361-32e4e61f1839",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 0,
                        "EndOffsetChar": 27
                    },
                    {
                        "BeginOffsetChar": 49,
                        "EndOffsetChar": 54
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:12.764Z",
            "Content": "Thank you very much. Just waiting on my system here. .. I'll also need the last four digits of your debit card.",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "21acf0fc-7259-4a08-b4cd-688eb56587d3",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:17.412Z",
            "Content": "Ok. Last four for my debit care are 9008",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "3ec6adb5-3f11-409c-af39-40cf7ba6f078",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 27,
                        "EndOffsetChar": 31
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:22.486Z",
            "Content": "It's just too bad. I thought this was going to be the best gift idea. How can you guys be sending out defective seeds? Isn't that your whole business?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "2b8ba020-53ee-4053-b5b7-35364ac1c7df",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:38.961Z",
            "Content": "I apologize for the experience you had Mr. Doe, its very uncommon that our customer will have this issue. We will look into this and get this sorted out for you right away.",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "28d0a1ce-64d1-4625-bbef-4cfeb97b6742",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 41,
                        "EndOffsetChar": 46
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:44.192Z",
            "Content": "Well, my wife's birthday already passed, so. There's not too much you can do. But I would still like to grow the herbs for her, if possible.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "4b292b64-4a33-45ff-89df-d5a175d16d70",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:51.310Z",
            "Content": "Totally understandable. Let me see what we can do for you. Please give me couple of minutes as I check the system.",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "ef9b8622-32d5-4cfd-9ccc-a242502267bc",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:56.287Z",
            "Content": "Thank you sir one moment please.",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "03a9de67-f9e1-4884-a1a3-ecea78a4ce9e",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:01.224Z",
            "Content": "Alright are you still there Mr Doe?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "cfee5ece-a671-4a11-9ec2-89aba4b7d688",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 30,
                        "EndOffsetChar": 35
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:07.093Z",
            "Content": "Yeah.",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "2da5a3c2-9d1b-458c-ae53-759a4e63198d",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:12.562Z",
            "Content": "We are not only refunding the cost of the grow-it-yourself indoor herb kit but we will also be sending you a replacement. Would you be okay with this?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "72cc8c8d-2199-422a-b363-01d6d3fdc851",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:17.029Z",
            "Content": "Yeah! That would be great. I just want my wife to be able to have these herbs in her room. And I'm always happy to get my money back!",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "e23a2331-f3fc-4d3c-8a51-1541451186c9",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:22.269Z",
            "Content": "Awesome! We really want to keep our customers happy and satisfied, and again I want to apologize for your less than satisfactory experience with the last product you ordered from us.",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "61bb2591-fe87-44e4-bba0-a3619c4cef1f",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:26.353Z",
            "Content": "Okay! No problem. Sounds great. Thank you for all your help!",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "5a27cc39-9b73-4ebe-9275-5e6723788a1b",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:31.431Z",
            "Content": "Is there anything else I can help you out with John?",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "1761f27e-0989-4b6d-a046-fc03d2c6bc9c",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 48,
                        "EndOffsetChar": 53
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:36.704Z",
            "Content": "Nope!",
            "ContentType": "text/markdown",
            "DisplayName": "John",
            "Id": "540368c7-ec19-4fc0-8c86-0a5ee62d31a0",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:41.448Z",
            "Content": "Ok great! Have a great day.",
            "ContentType": "text/markdown",
            "DisplayName": "Jane",
            "Id": "8cdff161-dc25-44e6-986f-fc0e08ee0a7d",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:42.799Z",
            "ContentType": "application/vnd.amazonaws.connect.event.participant.left",
            "DisplayName": "John",
            "Id": "d1ba54ba-61d4-4a48-9a9a-6cd17d70b8fb",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "EVENT"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:43.192Z",
            "ContentType": "application/vnd.amazonaws.connect.event.chat.ended",
            "Id": "2d9a0e4f-faec-485f-97af-2767dde1f30a",
            "Type": "EVENT"
        }
    ],
    "Version": "CHAT-2022-11-30"
}
```
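Each redacted message in the transcript carries a `Redaction.CharacterOffsets` array that identifies the character spans of sensitive data within `Content`. The following sketch shows one way to apply those offsets as a mask when post-processing the file. The `item` below is a hypothetical example (not taken from the transcript above), and the code assumes `BeginOffsetChar` is inclusive and `EndOffsetChar` is exclusive.

```python
def mask_redactions(item, mask="[PII]"):
    """Replace each redacted character span in a transcript item's
    Content with a mask token. Spans are applied right-to-left so
    that earlier offsets remain valid after each substitution."""
    content = item.get("Content", "")
    offsets = item.get("Redaction", {}).get("CharacterOffsets", [])
    for span in sorted(offsets, key=lambda s: s["BeginOffsetChar"], reverse=True):
        begin, end = span["BeginOffsetChar"], span["EndOffsetChar"]
        content = content[:begin] + mask + content[end:]
    return content

# Hypothetical transcript item, for illustration only.
item = {
    "Content": "My name is John Doe.",
    "Redaction": {
        "CharacterOffsets": [
            {"BeginOffsetChar": 11, "EndOffsetChar": 19}
        ]
    },
}

print(mask_redactions(item))  # My name is [PII].
```

Items without a `Redaction` key pass through unchanged, which matches the transcript above, where only some messages carry redaction offsets.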

## Example redacted chat file


```
{
    "AccountId": "123456789012",
    "Categories": {
        "MatchedCategories": [
            "agent-intro"
        ],
        "MatchedDetails": {
            "agent-intro": {
                "PointsOfInterest": [
                    {
                        "TranscriptItems": [
                            {
                                "CharacterOffsets": {
                                    "BeginOffsetChar": 0,
                                    "EndOffsetChar": 71
                                },
                                "Id": "e4949dd1-aaa1-4fbd-84e7-65c95b2d3d9a"
                            }
                        ]
                    }
                ]
            }
        }
    },
    "Channel": "CHAT",
    "ChatTranscriptVersion": "2019-08-26",
    "ContentMetadata": {
        "Output": "Redacted",
        "RedactionTypes": [
            "PII"
        ],
        "RedactionTypesMetadata": {
            "PII": {
                "RedactionMaskMode": "PII"
            }
        }
    },
    "ConversationCharacteristics": {
        "ContactSummary": {
            "SummaryItemsDetected": [
                {
                    "ActionItemsDetected": [],
                    "IssuesDetected": [
                        {
                            "TranscriptItems": [
                                {
                                    "CharacterOffsets": {
                                        "BeginOffsetChar": 72,
                                        "EndOffsetChar": 244
                                    },
                                    "Id": "2b8ba020-53ee-4053-b5b7-35364ac1c7df"
                                }
                            ]
                        }
                    ],
                    "OutcomesDetected": [
                        {
                            "TranscriptItems": [
                                {
                                    "CharacterOffsets": {
                                        "BeginOffsetChar": 0,
                                        "EndOffsetChar": 150
                                    },
                                    "Id": "72cc8c8d-2199-422a-b363-01d6d3fdc851"
                                }
                            ]
                        }
                    ]
                }
            ],
            "PostContactSummary": {
                "Content": "The customer and agent's conversation did not have any clear issues, outcomes or next steps. Agent verified customer information and finished the call."
            }
        },
        "ResponseTime": {
            "AgentGreetingTimeMillis": 2511,
            "DetailsByParticipantRole": {
                "AGENT": {
                    "Average": {
                        "ValueMillis": 5575
                    },
                    "Maximum": {
                        "TranscriptItems": [
                            {
                                "Id": "21acf0fc-7259-4a08-b4cd-688eb56587d3"
                            }
                        ],
                        "ValueMillis": 7309
                    }
                },
                "CUSTOMER": {
                    "Average": {
                        "ValueMillis": 5875
                    },
                    "Maximum": {
                        "TranscriptItems": [
                            {
                                "Id": "c71ad383-f876-4bb3-b254-7837b6a3d395"
                            }
                        ],
                        "ValueMillis": 11366
                    }
                }
            }
        },
        "Sentiment": {
            "DetailsByTranscriptItemGroup": [
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "e4949dd1-aaa1-4fbd-84e7-65c95b2d3d9a"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3673d926-6e75-4620-a6f0-7ea571790a15"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "46d37141-32d8-4f2e-a664-bcd3f34a68b3"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3c4a2a1e-6790-46a6-8ad4-4a0980b04795"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "f9cd41b6-3f68-4e83-a47d-664395f324c0"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "21acf0fc-7259-4a08-b4cd-688eb56587d3"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "2b8ba020-53ee-4053-b5b7-35364ac1c7df"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "28d0a1ce-64d1-4625-bbef-4cfeb97b6742"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "ef9b8622-32d5-4cfd-9ccc-a242502267bc"
                        },
                        {
                            "Id": "03a9de67-f9e1-4884-a1a3-ecea78a4ce9e"
                        },
                        {
                            "Id": "cfee5ece-a671-4a11-9ec2-89aba4b7d688"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "72cc8c8d-2199-422a-b363-01d6d3fdc851"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "61bb2591-fe87-44e4-bba0-a3619c4cef1f"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "1761f27e-0989-4b6d-a046-fc03d2c6bc9c"
                        }
                    ]
                },
                {
                    "ParticipantRole": "AGENT",
                    "ProgressiveScore": 3.3333333333333335,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "8cdff161-dc25-44e6-986f-fc0e08ee0a7d"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -1.6666666666666667,
                    "Sentiment": "NEGATIVE",
                    "TranscriptItems": [
                        {
                            "Id": "bcc51949-3a79-4398-be1b-a27345a8a8ad"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -3.75,
                    "Sentiment": "NEGATIVE",
                    "TranscriptItems": [
                        {
                            "Id": "7d5c07d7-3d26-4b34-ae91-39aeaeef685c"
                        },
                        {
                            "Id": "e0efbd17-9139-439b-8c80-ebf2b9b703b9"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -3.75,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "8fbb8dd4-9fd4-4991-83dc-5f06eeead9aa"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": -2.5,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3b856fd9-0eeb-4fb2-93ed-95ec4aeae3a6"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "ecb8c498-96d7-448b-8360-366eeddb4090"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "d334058f-e3de-4cf1-a361-32e4e61f1839"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "3ec6adb5-3f11-409c-af39-40cf7ba6f078"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "c71ad383-f876-4bb3-b254-7837b6a3d395"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "4b292b64-4a33-45ff-89df-d5a175d16d70"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 0,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "2da5a3c2-9d1b-458c-ae53-759a4e63198d"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 1.6666666666666667,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "e23a2331-f3fc-4d3c-8a51-1541451186c9"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 3.75,
                    "Sentiment": "POSITIVE",
                    "TranscriptItems": [
                        {
                            "Id": "5a27cc39-9b73-4ebe-9275-5e6723788a1b"
                        }
                    ]
                },
                {
                    "ParticipantRole": "CUSTOMER",
                    "ProgressiveScore": 3.75,
                    "Sentiment": "NEUTRAL",
                    "TranscriptItems": [
                        {
                            "Id": "540368c7-ec19-4fc0-8c86-0a5ee62d31a0"
                        }
                    ]
                }
            ],
            "OverallSentiment": {
                "DetailsByInteraction": {
                    "DetailsByParticipantRole": {
                        "CUSTOMER": {
                            "WithAgent": 0
                        }
                    }
                },
                "DetailsByParticipantRole": {
                    "AGENT": 1.1538461538461537,
                    "CUSTOMER": 0
                }
            },
            "SentimentShift": {
                "DetailsByInteraction": {
                    "DetailsByParticipantRole": {
                        "CUSTOMER": {
                            "WithAgent": {
                                "BeginScore": -3,
                                "EndScore": 3.75
                            }
                        }
                    }
                },
                "DetailsByParticipantRole": {
                    "AGENT": {
                        "BeginScore": 0,
                        "EndScore": 2.5
                    },
                    "CUSTOMER": {
                        "BeginScore": -3.75,
                        "EndScore": 3.75
                    }
                }
            }
        }
    },
    "CustomerMetadata": {
        "ContactId": "b49644f6-672f-445c-b209-f76b36482830",
        "InputS3Uri": "path to the json file in s3",
        "InstanceId": "f23fc323-3d6d-48aa-EXAMPLE012"
    },
    "JobStatus": "COMPLETED",
    "LanguageCode": "en-US",
    "Participants": [
        {
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER"
        },
        {
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM"
        },
        {
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT"
        }
    ],
    "Transcript": [
        {
            "AbsoluteTime": "2022-10-27T03:31:50.735Z",
            "ContentType": "application/vnd.amazonaws.connect.event.participant.joined",
            "DisplayName": "[PII]",
            "Id": "740c494d-9df7-4400-91c0-3e4df33922c8",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "EVENT"
        },
        {
            "AbsoluteTime": "2022-10-27T03:31:53.390Z",
            "Content": "Hello, thanks for contacting us. This is an example of what the Amazon Connect virtual contact center can enable you to do.",
            "ContentType": "text/plain",
            "DisplayName": "SYSTEM_MESSAGE",
            "Id": "78aa8229-714a-4c87-916b-ce7d8d567ab2",
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:31:55.131Z",
            "Content": "The time in queue is less than 5 minutes.",
            "ContentType": "text/plain",
            "DisplayName": "SYSTEM_MESSAGE",
            "Id": "1276382b-facb-49c5-8d34-62e3b0f50002",
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:31:56.618Z",
            "Content": "You are now being placed in queue to chat with an agent.",
            "ContentType": "text/plain",
            "DisplayName": "SYSTEM_MESSAGE",
            "Id": "88c2363e-8206-4781-a353-c15e1ccacc12",
            "ParticipantId": "2b2288b4-ff6e-4996-8d8e-260fd5a8ac02",
            "ParticipantRole": "SYSTEM",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:00.951Z",
            "ContentType": "application/vnd.amazonaws.connect.event.participant.joined",
            "DisplayName": "[PII]",
            "Id": "c05cca74-d50b-4aa5-b46c-fdb5ae8c814c",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "EVENT"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:03.462Z",
            "Content": "Hello, thanks for reaching Example Corp. This is [PII]. How may I help you?",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "e4949dd1-aaa1-4fbd-84e7-65c95b2d3d9a",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 46,
                        "EndOffsetChar": 51
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:08.102Z",
            "Content": "I'd like to see if I can get a refund or an exchange, because I ordered one of your grow-it-yourself indoor herb garden kits and nothing sprouted after a couple weeks so I think something is wrong with the seeds and this product may be defective.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "bcc51949-3a79-4398-be1b-a27345a8a8ad",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:14.137Z",
            "Content": "My wife is blind and sensitive to the sun so I was going to surprise her for her birthday with all the herbs that she loves so you guys actually really let me down.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "7d5c07d7-3d26-4b34-ae91-39aeaeef685c",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:18.781Z",
            "Content": "I should be taking my business elsewhere. I don't see why I should be giving money to a company that isn't even going to sell a product that works.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "e0efbd17-9139-439b-8c80-ebf2b9b703b9",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:24.123Z",
            "Content": "Ok. Can I get your first and last name please?",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "3673d926-6e75-4620-a6f0-7ea571790a15",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:29.879Z",
            "Content": "Yeah. My first name is [PII] and last name [PII].",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "8fbb8dd4-9fd4-4991-83dc-5f06eeead9aa",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 21,
                        "EndOffsetChar": 26
                    },
                    {
                        "BeginOffsetChar": 44,
                        "EndOffsetChar": 49
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:34.670Z",
            "Content": "Could you please provide me with the order ID number?",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "46d37141-32d8-4f2e-a664-bcd3f34a68b3",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:39.726Z",
            "Content": "Yes, just . Looking ...",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "3b856fd9-0eeb-4fb2-93ed-95ec4aeae3a6",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:44.887Z",
            "Content": "Not a problem, take your time.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "3c4a2a1e-6790-46a6-8ad4-4a0980b04795",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:52.978Z",
            "Content": "Okay, that should be #5376897. You know, if the product was fine I wouldn't have to scrounge through emails.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "ecb8c498-96d7-448b-8360-366eeddb4090",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:32:59.441Z",
            "Content": "alright, perfect. And could you also just confirm the shipping address for me, [PII]",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "f9cd41b6-3f68-4e83-a47d-664395f324c0",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 77,
                        "EndOffsetChar": 82
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:05.455Z",
            "Content": "[PII], and the zip code [PII].",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "d334058f-e3de-4cf1-a361-32e4e61f1839",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 0,
                        "EndOffsetChar": 5
                    },
                    {
                        "BeginOffsetChar": 27,
                        "EndOffsetChar": 32
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:12.764Z",
            "Content": "Thank you very much. Just waiting on my system here. .. I'll also need the last four digits of your debit card.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "21acf0fc-7259-4a08-b4cd-688eb56587d3",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:17.412Z",
            "Content": "Ok. Last four for my debit card [PII]",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "3ec6adb5-3f11-409c-af39-40cf7ba6f078",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 27,
                        "EndOffsetChar": 32
                    }
                ]
            },
            "Type": "MESSAGE"
        },        
        {
            "AbsoluteTime": "2022-10-27T03:33:33.852Z",
            "Content": "It's just too bad. I thought this was going to be the best gift idea. How can you guys be sending out defective seeds? Isn't that your whole business?",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "c71ad383-f876-4bb3-b254-7837b6a3d395",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:38.961Z",
            "Content": "I apologize for the experience you had Mr [PII], its very uncommon that our customer will have this issue. We will look into this and get this sorted out for you right away.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "28d0a1ce-64d1-4625-bbef-4cfeb97b6742",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 41,
                        "EndOffsetChar": 46
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:44.192Z",
            "Content": "Well, my wife's birthday already passed, so. There's not too much you can do. But I would still like to grow the herbs for her, if possible.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "4b292b64-4a33-45ff-89df-d5a175d16d70",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:51.310Z",
            "Content": "Totally understandable. Let me see what we can do for you. Please give me couple of minutes as I check the system.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "ef9b8622-32d5-4cfd-9ccc-a242502267bc",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:33:56.287Z",
            "Content": "Thank you sir one moment please.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "03a9de67-f9e1-4884-a1a3-ecea78a4ce9e",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:01.224Z",
            "Content": "Alright are you still there Mr [PII]?",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "cfee5ece-a671-4a11-9ec2-89aba4b7d688",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 30,
                        "EndOffsetChar": 35
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:07.093Z",
            "Content": "Yeah.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "2da5a3c2-9d1b-458c-ae53-759a4e63198d",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:12.562Z",
            "Content": "We are not only refunding the cost of the grow-it-yourself indoor herb kit but we will also be sending you a replacement. Would you be okay with this?",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "72cc8c8d-2199-422a-b363-01d6d3fdc851",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:17.029Z",
            "Content": "Yeah! That would be great. I just want my wife to be able to have these herbs in her room. And I'm always happy to get my money back!",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "e23a2331-f3fc-4d3c-8a51-1541451186c9",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:22.269Z",
            "Content": "Awesome! We really want to keep our customers happy and satisfied, and again I want to apologize for your less than satisfactory experience with the last product you ordered from us.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "61bb2591-fe87-44e4-bba0-a3619c4cef1f",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:26.353Z",
            "Content": "Okay! No problem. Sounds great. Thank you for all your help!",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "5a27cc39-9b73-4ebe-9275-5e6723788a1b",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:31.431Z",
            "Content": "Is there anything else I can help you out with Mr [PII]?",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "1761f27e-0989-4b6d-a046-fc03d2c6bc9c",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Redaction": {
                "CharacterOffsets": [
                    {
                        "BeginOffsetChar": 48,
                        "EndOffsetChar": 53
                    }
                ]
            },
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:36.704Z",
            "Content": "Nope!",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "540368c7-ec19-4fc0-8c86-0a5ee62d31a0",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:41.448Z",
            "Content": "Ok great! Have a great day.",
            "ContentType": "text/plain",
            "DisplayName": "[PII]",
            "Id": "8cdff161-dc25-44e6-986f-fc0e08ee0a7d",
            "ParticipantId": "f36a545d-67b2-4fd4-89fb-896136b609a7",
            "ParticipantRole": "AGENT",
            "Type": "MESSAGE"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:42.799Z",
            "ContentType": "application/vnd.amazonaws.connect.event.participant.left",
            "DisplayName": "[PII]",
            "Id": "d1ba54ba-61d4-4a48-9a9a-6cd17d70b8fb",
            "ParticipantId": "e9b36a6d-12aa-4c21-9745-1881648ecfc8",
            "ParticipantRole": "CUSTOMER",
            "Type": "EVENT"
        },
        {
            "AbsoluteTime": "2022-10-27T03:34:43.192Z",
            "ContentType": "application/vnd.amazonaws.connect.event.chat.ended",
            "Id": "2d9a0e4f-faec-485f-97af-2767dde1f30a",
            "Type": "EVENT"
        }
    ],
    "Version": "CHAT-2022-11-30"
}
```
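
The per-role `OverallSentiment` values in the example can be reproduced from the sentence groups in `Sentiment.DetailsByTranscriptItemGroup` if each group's label maps to a numeric score of +5 (`POSITIVE`), 0 (`NEUTRAL`), and -5 (`NEGATIVE`) — an inference from the example values, not a documented constant. The following sketch recomputes the overall score as the per-role mean under that assumption (field names are taken from the example output; the score map is hypothetical):

```python
from statistics import mean

# Assumed score map: the sentence-group sentiment labels appear to
# correspond to +5 / 0 / -5 (inferred from the example, not documented).
SENTIMENT_SCORE = {"POSITIVE": 5, "NEUTRAL": 0, "NEGATIVE": -5}

def overall_sentiment(analysis: dict) -> dict:
    """Recompute per-role overall sentiment as the mean score of the
    sentence groups under Sentiment.DetailsByTranscriptItemGroup."""
    groups = analysis["ConversationCharacteristics"]["Sentiment"][
        "DetailsByTranscriptItemGroup"]
    scores_by_role: dict = {}
    for group in groups:
        scores_by_role.setdefault(group["ParticipantRole"], []).append(
            SENTIMENT_SCORE[group["Sentiment"]])
    return {role: mean(scores) for role, scores in scores_by_role.items()}
```

Applied to the example above, this yields 15/13 ≈ 1.1538 for the agent (13 sentence groups, 3 of them `POSITIVE`) and 0 for the customer, matching the `OverallSentiment.DetailsByParticipantRole` values in the file.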

# Example Contact Lens output files for an email analyzed by Contact Lens conversational analytics
Example Contact Lens output files for emails

This section shows an example output file for an email contact that has been analyzed by Contact Lens conversational analytics. The example shows matched categories and a contact chain summary.

Note the following about email analytics output files:
+ The `Channel` field is set to `EMAIL`.
+ The `Version` field uses the `EMAIL` prefix (for example, `EMAIL-2026-01-01`).
+ Email output files do not include sentiment scores, sentiment shift, loudness, or non-talk time data.
+ The `Categories` section includes an `EventSource` field set to `OnEmailAnalysisAvailable`.
+ Contact summaries use `ContactChainSummary` instead of `PostContactSummary`, because email analytics summarizes the full email thread (contact chain).
+ The `CustomerMetadata.InputFiles` section references the email message and plain text files stored in Amazon S3.

## Example email analytics output file


The following example shows the output for an email contact with categorization, redaction, and contact chain summary enabled.

```
{
  "Version": "EMAIL-2026-01-01",
  "AccountId": "123456789012",
  "Channel": "EMAIL",
  "Configuration": {
    "ChannelConfiguration": {
      "AnalyticsModes": [
        "ContactLens"
      ]
    },
    "LanguageLocale": "en-US",
    "RedactionConfiguration": {
      "Behavior": "Enable",
      "Policy": "RedactedAndOriginal",
      "Entities": [],
      "MaskMode": "EntityType"
    },
    "SummaryConfiguration": {
      "SummaryModes": [
        "ContactChain"
      ]
    }
  },
  "CustomerMetadata": {
    "ContactId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "InstanceId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    "InputFiles": {
      "EmailMessageS3URI": "connect/your-instance/EmailMessages/2026/01/15/a1b2c3d4_message.json",
      "EmailMessagePlainTextS3URI": "connect/your-instance/EmailMessages/2026/01/15/a1b2c3d4_plain_text.json"
    }
  },
  "Categories": {
    "MatchedCategories": [
      "refund-request",
      "shipping-issue"
    ],
    "MatchedDetails": {
      "refund-request": {
        "PointsOfInterest": [
          {
            "Contacts": [
              {
                "ContactId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
              }
            ]
          }
        ],
        "EventSource": "OnEmailAnalysisAvailable"
      },
      "shipping-issue": {
        "PointsOfInterest": [],
        "EventSource": "OnEmailAnalysisAvailable"
      }
    }
  },
  "ConversationCharacteristics": {
    "ContactSummary": {
      "ContactChainSummary": {
        "Content": "The customer reported that their order arrived damaged and requested a full refund including shipping costs. The agent confirmed the refund would be processed within 3-5 business days and offered a replacement unit."
      }
    }
  },
  "JobDetails": {}
}
```
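If you process these output files programmatically (for example, after they land in your S3 bucket), a sketch like the following can pull out the matched categories and the contact chain summary. The helper function and the minimal document are illustrative, not part of any Connect SDK:

```python
import json

def summarize_email_analysis(output_json: str) -> dict:
    """Extract key fields from a Contact Lens email analytics output file."""
    doc = json.loads(output_json)
    if doc.get("Channel") != "EMAIL":
        raise ValueError("Not an email analytics output file")
    summary = (doc.get("ConversationCharacteristics", {})
                  .get("ContactSummary", {})
                  .get("ContactChainSummary", {})
                  .get("Content"))
    return {
        "contact_id": doc["CustomerMetadata"]["ContactId"],
        "matched_categories": doc.get("Categories", {}).get("MatchedCategories", []),
        "contact_chain_summary": summary,
    }

# A minimal document with the same shape as the example above.
example = json.dumps({
    "Version": "EMAIL-2026-01-01",
    "Channel": "EMAIL",
    "CustomerMetadata": {"ContactId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"},
    "Categories": {"MatchedCategories": ["refund-request", "shipping-issue"]},
    "ConversationCharacteristics": {
        "ContactSummary": {"ContactChainSummary": {"Content": "Refund approved."}}
    },
})
result = summarize_email_analysis(example)
```

Note that fields such as sentiment scores are absent by design for the `EMAIL` channel, so code that also handles voice or chat output should treat them as optional.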

# Troubleshoot issues in Amazon Connect Contact Lens
Troubleshoot

## Why don't I see or hear unredacted content?


If your organization is using the Contact Lens redaction feature, by default only redacted content appears in the Amazon Connect admin website. 

You must have permissions to view unredacted content. For more information, see [Assign permissions to use Contact Lens conversational analytics in Amazon Connect](permissions-for-contact-lens.md). 

# Evaluate agent and self-service interaction performance in Amazon Connect
Evaluate performance

**Tip**  
**New user?** Check out the [Amazon Connect Agent Evaluation Forms Workshop](https://catalog.workshops.aws/amazon-connect-evaluation-forms/en-US). This online course guides you through creating a working example of an evaluation form.  
**IT administrators**: To enable Amazon Connect evaluation capabilities, go to the Amazon Connect console, choose your instance alias, choose **Data storage**, **Content evaluations**, **Edit**. You'll be prompted to create or choose an S3 bucket. After the bucket is created, you can store evaluations and export them.

Amazon Connect performance evaluations enable you to define custom performance evaluation criteria to assess, monitor, and improve how agents and automated systems (bots, AI agents) interact with customers and resolve issues. You can then monitor performance by reviewing aggregated insights in dashboards, and drill down into individual contacts where you can see evaluations alongside recordings, transcripts, conversation summaries, and analytics in a single view. With integrated coaching, you can provide feedback to agents highlighting their strengths and opportunities to improve. 

You can perform manual evaluations for all contact types (voice, chat, email, and task). You can perform automated evaluations for voice and chat contacts analyzed by Amazon Connect conversational analytics, covering both agent interactions and automated interactions (handled by bots or AI agents). For more details on automated evaluations, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

To perform manual evaluations, you can search for a contact, choose the appropriate evaluation form, review the contact audio, screen recording or transcript, and then evaluate how the human, AI agent, or bot interacted with the customer. You can then use those insights to improve the customer experience by providing agent coaching feedback and optimizing bots, AI agents and self-service workflows.

**To evaluate performance**

1. Log in to Amazon Connect with a user account that has [permissions to perform evaluations](evaluation-and-coaching-permissions.md). 

1. Access the contact that you want to evaluate. There are a few ways you can do this. For example, someone may have shared the contact URL with you, or assigned you a task that has the URL. Or, you may have the contact ID, which lets you search for the contact record by doing the following: on the navigation pane, choose **Analytics and optimization**, **Contact search**, and then search for the contact that you want to evaluate.

1. On the **Contact details** page, choose **Evaluations** or the **<** icon.  
![\[The Contact details page, the Evaluations button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-evaluatebutton.png)

1. The **Evaluations** panel lists any evaluations that are in progress or completed for the contact.  
![\[The evaluations pane, the status of two evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-startevaluation.png)

1. To start an evaluation, choose an evaluation form from the dropdown menu, and then choose **Start evaluation**. If you haven't set up an evaluation form yet, you need to create one first. For more information, see [Create an evaluation form](create-evaluation-forms.md).

1. To navigate an especially long evaluation form, use the arrows next to each section to collapse or expand it.   
![\[The evaluations pane, the arrow to collapse or expand a section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-exampleevaluation.png)

1. Choose **Save** to save a form in progress. The status of the form becomes **Draft**. You can return to it any time to continue, or you can delete it and start over.  
![\[The evaluations pane, the status of an evaluation set to draft.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-draft.png)

1. When you're done, choose **Submit**. If you have skipped optional questions in the form, you will see a warning asking you to confirm that you want to submit the evaluation. Choose **Yes**. The evaluation is now **Completed**.  
![\[Skip optional questions and submit the evaluation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-draft-submit.png)

# Assign security profile permissions for performance evaluations and coaching
Assign permissions for evaluations and coaching

To allow users to create, automate, and access evaluation forms, assign the following **Analytics and optimization** security profile permissions: 
+ **Evaluation forms - perform contact evaluations**: Allows a user, such as a Quality Assurance team member, to use an evaluation form to review a contact. For an example image, see [Evaluate agent and self-service interaction performance in Amazon Connect](evaluations.md). 

  This permission allows users to [search](search-evaluations.md) evaluations by evaluation form, score, last updated date/range, evaluator, and status. It also allows them to view the evaluation form audit trail.
  + **View** permissions enable users to view submitted evaluations. You can grant this permission both to users who perform evaluations (such as managers) and to users who need to view their evaluations (such as agents).
  + **Create** permissions enable users to create new evaluations, and to view and edit draft evaluations.
  + **Edit** permissions enable users to edit submitted evaluations.
  + **Delete** permissions enable users to delete both draft and submitted evaluations.
+ **Evaluation forms - manage form definitions**: Allows admins and managers to [create](create-evaluation-forms.md) and [manage](evaluationform-audit-trail.md) evaluation forms.
+ **Rules**: Permissions to create, view, edit, and delete rules are required to [automatically categorize contacts](rules.md) based on certain agent behaviors and customer outcomes. These contact categories can be used to [configure automation](create-evaluation-forms.md#step-automate) on evaluation forms. In addition, rules permissions are needed to [create a rule to submit automated evaluations](contact-lens-rules-submit-automated-evaluation.md).
+ **Evaluation forms - ask AI assistant**: Provides access to the **Ask AI** button while performing evaluations. The **Ask AI** button enables the user to get [generative AI-powered recommendations](generative-ai-performance-evaluations.md) for answers to questions in evaluation forms.
+ **Evaluation forms - manage calibration sessions**: Allows admins to create and manage calibration sessions to drive consistency and accuracy in how managers evaluate agent performance.
+ **Sample contacts**: Allows managers to randomly sample agents' contacts for evaluation. For example, a manager can select all agents in their hierarchy and get 5 random contacts per agent from the last week for evaluation.

To allow users to manage or access coaching sessions, assign the following **Analytics and optimization** security profile permissions: 
+ **Coaching - my coaching sessions**: Access coaching sessions where you are assigned as a coach or a participant.
  + **View**: View coaching sessions where you are the coach or the participant. If you are the participant, you can acknowledge the coaching session with this permission.
  + **Create**: Create new coaching sessions with yourself as the coach.
  + **Edit**: Edit coaching sessions where you are the coach.
  + **Delete**: Delete coaching sessions where you are the coach.
+ **Coaching - manage coaching sessions**: Access coaching sessions performed by yourself or others. This permission is for admins or quality managers.
  + **View**: View any coaching session.
  + **Create**: Create new coaching sessions. You can choose yourself as the coach or assign other users as the coach.
  + **Edit**: Edit any coaching session.
  + **Delete**: Delete any coaching session.

The **Admin** security profile has these permissions by default. 

For information about how to add more permissions to an existing security profile, see [Update security profiles in Amazon Connect](update-security-profiles.md).

# View an evaluation audit trail in Amazon Connect


 An evaluation can be amended and submitted multiple times. When an evaluator submits changes to an existing evaluation, managers can view an audit trail that records:
+ Who submitted the original evaluation
+ Who re-submitted the evaluation
+ What changes they made (for example, changing answers or answer notes in an evaluation)

Contact center managers can use this information to perform internal audits and uncover opportunities to improve consistency across evaluators.

**To view an evaluation audit trail**

1. Log in to Amazon Connect with a user account that has the **Analytics and optimization** - **[Evaluation forms - perform contact evaluations](evaluation-and-coaching-permissions.md)** permission on their security profile. 

1. Access a contact with an evaluation that was edited after it was submitted.

1. Choose the evaluation you want to investigate. The following image shows the **Evaluations** page with a link to a completed evaluation.  
![\[A link to a completed evaluation that you can choose to view the audit trail.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-audit-example.png)

1. The **Overview** section of the evaluation contains **Change history**. It indicates the number of times the evaluation has been submitted. Choose the link as shown in the following image.  
![\[The Change history property.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-audit-change-history.png)

1. You can view the audit trail of subsequent submissions after the initial submission. Choose the arrow next to a re-submission to view details of the edits. The following image shows an example of an audit trail of changes that were made to an evaluation after it was submitted.  
![\[An audit trail of an evaluation that was changed after it was submitted.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluation-audit.png)

# Create an evaluation form in Amazon Connect
Create an evaluation form

In Amazon Connect, you can create [many different evaluation forms](feature-limits.md#evaluationforms-feature-specs). For example, you may need a different evaluation form for each business unit, and for different queues. You can also create different evaluation forms for evaluating the agent interaction and the self-service interaction with a Lex bot or AI agent.

Each form can contain multiple sections and questions. 
+ You can assign [weights](about-scoring-and-weights.md) to each question and section to indicate how much their score impacts the overall score of the evaluation form.
+ You can configure automation on each question so that answers to those questions are automatically filled using insights and metrics from Contact Lens conversational analytics.

This topic explains how to create a form and configure automation using the Amazon Connect admin website. To create and manage forms programmatically, see [Evaluation actions](https://docs.aws.amazon.com/connect/latest/APIReference/evaluation-api.html) in the *Amazon Connect API Reference*.
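As a hedged sketch of what a programmatic form definition might look like, the following request body mirrors the general shape of the `CreateEvaluationForm` action (one section containing one single-selection question). The IDs and titles are hypothetical, and you should verify field names and required properties against the API Reference before use:

```python
# Illustrative CreateEvaluationForm-style request body; confirm the exact
# schema in the Amazon Connect API Reference before relying on it.
form_request = {
    "InstanceId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # hypothetical instance ID
    "Title": "Sales evaluation",
    "ScoringStrategy": {"Mode": "QUESTION_ONLY", "Status": "ENABLED"},
    "Items": [
        {
            "Section": {
                "Title": "Greeting",
                "RefId": "section-greeting",
                "Items": [
                    {
                        "Question": {
                            "Title": "Did the agent state their name?",
                            "RefId": "q-greeting-1",
                            "QuestionType": "SINGLESELECT",
                        }
                    }
                ],
            }
        }
    ],
}

# With boto3 this dict would be passed to connect.create_evaluation_form(**form_request);
# the call itself is omitted so the sketch stays runnable without AWS credentials.
```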

**Topics**
+ [Step 1: Create an evaluation form with a title](#step-title)
+ [Step 2: Add sections and questions](#step-sections)
+ [Step 3: Add answers](#step-answers)
+ [Step 4: Conditionally enable questions](#step-conditionally-enable-questions)
+ [Step 5: Assign scores and ranges to answers](#step-assignscores)
+ [Step 6: Enable automated evaluations](#step-automate)
+ [Step 7: Preview the evaluation form](#step-preview)
+ [Step 8: Assign weights for final score](#step-weights)
+ [Step 9: Activate an evaluation form](#step-activateform)

## Step 1: Create an evaluation form with a title
Create an evaluation form with a title

The following steps explain how to create or duplicate an evaluation form and set a title.

1. Log in to Amazon Connect with a user account that has the following security profile permission: **Analytics and Optimization** - **Evaluation forms - manage form definitions** - **Create**.

1. Choose **Analytics and optimization**, then choose **Evaluation forms**. 

1. On the **Evaluation forms** page, choose **Create new form**. 

   —or—

   Select an existing form and choose **Duplicate**.

1. Enter a title for the form, such as *Sales evaluation*, or change the existing title. Add any tags to control access to the form (see [Set up tag-based access controls on performance evaluations](https://docs.aws.amazon.com/connect/latest/adminguide/tag-based-access-control-performance-evaluations.html)). When finished, choose **Ok**.   
![\[The evaluation forms page, the set form title section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-title.png)

   The following tabs appear at the top of the evaluation form page:
   + **Sections and questions**. Add sections, questions, and answers to the form.
   + **Scoring**. Enable scoring on the form. You can also apply scoring to sections or questions.

1. Choose **Save** at any time while creating your form. This enables you to navigate away from the page and return to the form later.

1. Continue to the next step to add sections and questions.

## Step 2: Add sections and questions
Add sections and questions

1. While on the **Sections and questions** tab, add a title to section 1, for example, *Greeting*.   
![\[The evaluation form page, the sections and queues tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-greetingtitle.png)

1. Choose **Add question** to add a question. 

1. In the **Question title** box, enter the question that will appear on the evaluation form. For example, *Did the agent state their name and say they are here to assist?*   
![\[The evaluation form page, the question title box.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-greetingquestion1.png)

1. In the **Instructions to evaluators** box, add information to help the evaluators or generative AI to answer the question.

   For example, for the question *Did the agent try to validate the customer identity?* you may provide additional instructions such as, *The agent is required to always ask a customer their membership ID and postal code before addressing the customer's questions*.

1. In the **Question type** box, choose one of the following options to appear on the form:
   + **Single selection**: The evaluator can choose from a list of options, such as **Yes**, **No**, or **Good**, **Fair**, **Poor**.
   + **Multiple selection**: The evaluator can choose multiple answers from a list of options, such as a list of products that the customer was interested in purchasing, or non-compliant agent behaviors. 
   + **Text field**: The evaluator can enter free-form text. 
   + **Number**: The evaluator can enter a number from a range that you specify, such as 1-10. 
   + **Date**: The evaluator can choose a date as an answer. 

1. Continue to the next step to add answers.

## Step 3: Add answers
Add answers

1. On the **Answers** tab, add answer options that you want to display to evaluators, such as **Yes**, **No**.

1. To add more answers, choose **Add option**. 

   The following image shows example answers for a **Single selection** question.  
![\[The Answers tab, the "Add option" command.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-greetingquestion1-answer.png)

   The following image shows an answer range for a **Number** question.  
![\[The Answers tab, the Min value and Max value boxes.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-questionscoring4.png)

1. You can also mark a question as optional. This enables managers to skip the question (or mark it as **Not applicable**) while performing an evaluation.   
![\[The option to mark a question "not applicable".\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-questionscoring-not-applicable.png)

## Step 4: Conditionally enable questions
Conditionally enable questions

Evaluation forms can have questions that are conditionally enabled or disabled, based on answers to other questions. For example, you can configure a follow-up question to appear in the form only if it is needed.

1. Choose a question that needs a follow-up question. The question type must be **Single selection** or **Multiple selection**, and it must not be an optional question (do not select the **Optional question** checkbox).

   For example, in the following image, question 1.1 is *What was the reason for the call?* and the **Optional question** checkbox is not selected.   
![\[The Question type is Single selection and the Optional question checkbox is not selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions1.png)

1. Add a follow-up question, and select its **Optional question** checkbox.

   In the following image, the follow-up question is question 1.2 *Did the agent check if the customer attempted new account registration online?* and the **Optional question** checkbox is selected.   
![\[A follow up question, and the Optional question checkbox is selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions2.png)

1. Choose the **Conditionally enable question** tab and then turn on **Conditional question**. The toggle is shown in the following image.   
![\[The Conditionally enable question tab, the Conditional question toggle.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions3.png)

1. Configure the follow-up question to be enabled only if the answer to question 1.1, *What was the reason for the call?*, is **New account registration**. These options are shown in the following image.  
![\[The Conditional question is one of Other.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/conditionalquestions4.png)

   With this configuration, the follow-up question *Did the agent check if the customer attempted new account registration online?* is dynamically added to the form only if the answer to *What was the reason for the call?* is **New account registration**. In all other cases this question is not present in the form and does not need to be answered.

1. To verify that this configuration works as expected, use the **Preview** action. 

Following are a few things to keep in mind when creating conditional questions:
+ A question that is conditionally enabled is disabled by default, and appears only when its condition is met.
+ A question that is conditionally disabled is enabled by default, and is hidden only when its condition is met.
+ You can only use **Single selection** or **Multiple selection** questions to conditionally enable or disable other questions. The controlling question cannot be optional.
+ You can choose one or more answer options to trigger the condition of a conditional question. 

**Note**  
If Gen AI-powered automation is enabled on a question that is conditionally enabled, then the use of Gen AI on that question counts towards the usage limit of questions that can be evaluated on a contact using Gen AI. It counts even if the question was conditionally disabled.  
For the default limit of the **Number of evaluation questions that can be answered automatically on a contact using generative AI**, see [Contact Lens service quotas](amazon-connect-service-limits.md#contactlens-quotas). 
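The enable/disable rules above can be summarized in a small sketch (the function and parameter names are illustrative, not part of any Connect API):

```python
def question_visible(shown_by_default, trigger_answers, selected_answer):
    """Return whether a conditional question appears on the evaluation form.

    A conditionally *enabled* question is hidden by default and shown when the
    controlling answer matches; a conditionally *disabled* question is shown
    by default and hidden when the controlling answer matches.
    """
    matched = selected_answer in trigger_answers
    return not matched if shown_by_default else matched

# Question 1.2 is conditionally enabled by the answer "New account registration"
# to question 1.1 (names taken from the example above).
visible = question_visible(False, {"New account registration"},
                           "New account registration")
hidden = question_visible(False, {"New account registration"}, "Other")
```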

## Step 5: Assign scores and ranges to answers
Assign scores and ranges to answers

1. Go to the top of the form. Choose the **Scoring** tab, and then select the **Enable scoring** checkbox.  
![\[The evaluation forms page, the scoring tab, the Enable scoring checkbox.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-enablescoring.png)

   This enables scoring for the entire form. It also enables you to add ranges for answers to **Number** question types.

1. Return to the **Sections and questions** tab. Now you have the option to assign scores to **Single selection** questions, and to add ranges for **Number** question types.  
![\[The Sections and questions tab, the scoring tab specific to the question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoring-feature.png)

1. When you create a **Number** type question, on the **Scoring** tab, choose **Add range** to enter a range of values. Indicate the worst to best score for the answer. 

   The following image shows an example of ranges and scoring for a **Number** question type.   
![\[The Scoring tab specific to the question, the answer ranges.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-questionscoring5.png)
   + If the agent interrupted the customer 0 times, they get a score of 10 (best).
   + If the agent interrupted the customer 1-4 times, they get a score of 5. 
   + If the agent interrupted the customer 5-10 times, they get a score of 1 (worst). 
**Note**  
You can configure a score of **0 (Automatic fail)** for an answer option. You can choose to apply **Automatic fail** to the section, the subsection, or the entire form. This means that selecting the answer on an evaluation will assign a score of zero to the corresponding section, the subsection, or the entire form. The **Automatic fail** option is shown in the following image.  

![\[The Automatic fail option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automaticfail.png)


1. After you assign scores to all the answers, choose **Save**.

1. When you're finished assigning scores, continue to the next step to automate the answers to certain questions, or continue to [preview the evaluation form](#step-preview). 
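The interruption-count ranges above amount to a simple lookup. The following sketch (the function name and data shape are illustrative) maps a numeric answer to its configured score:

```python
def score_numeric_answer(value, ranges):
    """Map a numeric answer to a score using inclusive (min, max, score) ranges."""
    for low, high, score in ranges:
        if low <= value <= high:
            return score
    raise ValueError(f"{value} is outside every configured range")

# Worst-to-best scoring for "How many times did the agent interrupt the customer?"
interruption_ranges = [
    (0, 0, 10),   # no interruptions: best score
    (1, 4, 5),
    (5, 10, 1),   # frequent interruptions: worst score
]
best = score_numeric_answer(0, interruption_ranges)
middle = score_numeric_answer(3, interruption_ranges)
```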

## Step 6: Enable automated evaluations
Enable automated evaluations

Amazon Connect enables you to automatically answer questions within evaluation forms (for example, did the agent adhere to the greeting script?) using insights and metrics from conversational analytics. Automation can be used to:
+ **Assist evaluators with performance evaluations**: Evaluators are provided with automated answers to questions on evaluation forms while performing evaluations. Evaluators can override automated answers before submission.
+ **Automatically fill and submit evaluations**: Administrators can configure evaluation forms to automate responses to all questions within an evaluation form and automatically submit evaluations for up to 100% of customer interactions. Evaluators can edit and re-submit evaluations (if needed).

The available automation options vary depending on whether you are evaluating the agent interaction or an automated interaction (for example, self-service with a Lex bot or AI agent). You can choose between agent and automated interactions under **Additional settings**, **Contact interaction type**.

For both assisting evaluators and automatically submitting evaluations, you first need to set up automation on individual questions within an evaluation form. Amazon Connect provides three ways of automating evaluations:
+ **Contact categories**: *Single selection* questions (for example, did the agent properly greet the customer (Yes/ No)?), and *Multiple selection* questions (for example, what parts of the greeting script did the agent state correctly?) can be automatically answered using contact categories defined with rules. For more information, see [Create Contact Lens rules using the Amazon Connect admin website](build-rules-for-contact-lens.md).
+ **Generative AI**: Both *Single selection* and *Text field* questions can be automatically answered using generative AI.
**Note**  
Currently integrated generative AI cannot be used to automate evaluations of self-service (automated) interactions with Lex bots and AI agents.
+ **Metrics**: *Numeric* questions (for example, what was the longest that the customer was put on hold?) can be automatically answered using metrics such as longest hold time, sentiment score, etc.

Following are examples of each type of automation for each type of question.

**Example automation for a Single selection question using Contact Lens categories**
+ The following image shows that the answer to the evaluation question is yes when Contact Lens has categorized the contact with the label **ProperGreeting**. To label contacts as **ProperGreeting**, you must first set up a rule that detects the words or phrases expected as part of a proper greeting, for example, the agent mentioned "Thank you for calling" in the first 30 seconds of the interaction.  
![\[A question section, the automation tab with Contact Lens categories.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation1.png)

  For information about setting up contact categories, see [Automatically categorize contacts](rules.md).

**Example automation for an *optional* Single selection question using contact categories**
+ The following image shows example automation of an optional Single selection question. The first check is whether the question is applicable or not. A rule is created to check whether the contact is about opening a new account. If so, the contact is categorized as **CallReasonNewAccountOpening**. If the call is not about opening a new account, the question is marked as **Not Applicable**.

  The subsequent conditions run only if the question is applicable. The answer is marked as **Yes** or **No** based on the contact category **NewAccountDisclosures**. This category checks whether the agent provided the customer with disclosures about opening a new account.  
![\[A question section, the automation tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation1a.png)

  For information about setting up contact categories, see [Automatically categorize contacts](rules.md).

**Example automation for an *optional* Single selection question using Generative AI**
+ The following image shows example automation using generative AI. Generative AI automatically answers the evaluation question by interpreting the question title and the evaluation criteria specified in the question's instructions, and using them to analyze the conversation transcript. Phrasing the evaluation question in complete sentences and clearly specifying the evaluation criteria within the instructions improves the accuracy of generative AI answers. For information, see [Evaluate agent performance in Amazon Connect using generative AI](generative-ai-performance-evaluations.md).  
![\[A question section, the generative AI Contact Lens option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation-genai.png)

**Example automation for a Multiple selection question using Contact Lens categories**
+ Multiple selection questions can be used to capture answer reasoning for a single selection question. They can also be used to trigger conditional questions by checking for customer scenarios, such as call reasons. The following example shows how you can leverage rules that capture customer call reasons to automatically fill answers to a multiple selection question. Unlike single selection questions, all of the conditions are evaluated sequentially to answer a multiple selection question. In the following example, if the categories **StatusCheck** and **ChangeExistingRequest** are both present on the contact, then the answer would be both “Checking status of existing service request” and “Changing a service request”.  
![\[A question section, the automation tab with Contact Lens categories.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation1b.png)

  For information about setting up contact categories, see [Automatically categorize contacts](rules.md).
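The sequential condition evaluation for a multiple selection question can be sketched as follows; the category and answer strings mirror the example, while the function itself is illustrative:

```python
def answer_multiple_selection(matched_categories, condition_map):
    """Evaluate every condition in order and collect all matching answers."""
    return [answer for category, answer in condition_map
            if category in matched_categories]

# Conditions map a Contact Lens category to the answer option it selects.
condition_map = [
    ("StatusCheck", "Checking status of existing service request"),
    ("ChangeExistingRequest", "Changing a service request"),
]
answers = answer_multiple_selection({"StatusCheck", "ChangeExistingRequest"},
                                    condition_map)
```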

**Example automation for a Numeric question**
+ If the agent interaction duration was less than 30 seconds, score the question as a 10.   
![\[A question section, the scoring tab, a numeric question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation2.png)
+ On the **Automation** tab, choose the metric that is used to automatically evaluate the question.  
![\[A question section, the automation tab, a metric to automatically evaluate the question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation3.png)
+ You can automate responses to numeric questions using Contact Lens metrics (such as sentiment score of the customers, non-talk time percentage, and number of interruptions) and contact metrics (such as longest hold duration, number of holds, and agent interaction duration).

After an evaluation form is activated with automation configured on some of the questions, you receive automated responses to those questions when you start an evaluation from within the Amazon Connect admin website.

**To automatically fill and submit evaluations**

1. Set up automation on every question within an evaluation form as previously described.

1. Turn on **Enable fully automated submission of evaluations** before activating the evaluation form. This toggle is shown in the following image.  
![\[The Enable fully automated evaluations toggle set to On.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-automation4.png)

1. Activate the evaluation form.

1. Upon activation, you are asked to create a rule in Contact Lens that submits an automated evaluation. For more information, see [Create a rule in Contact Lens that submits an automated evaluation](contact-lens-rules-submit-automated-evaluation.md). The rule enables you to specify which contacts should be automatically evaluated using the evaluation form.

## Step 7: Preview the evaluation form
Preview the evaluation form

The **Preview** button is active only after you have assigned scores to answers for all of the questions.

![\[The evaluation form page, the preview button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-previewbutton.png)


The following image shows the form preview. Use the arrows to collapse sections and make the form easier to preview. You can edit the form while viewing the preview, as shown in the following image.

![\[The preview of the evaluation form.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-previewmode.png)


## Step 8: Assign weights for final score
Assign weights for final score

When scoring is enabled for the evaluation form, you can assign *weights* to sections or questions. The weight raises or lowers the impact of a section or question on the final score of the evaluation.

![\[The evaluation form page, the scoring tab, the score weights section, the question option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoring.png)


### Weight distribution mode
Weight distribution mode

With **Weight distribution mode**, you choose whether to assign weight by section or question: 
+ **Weight by section**: You can evenly distribute the weight of each question in the section.
+ **Weight by question**: You can lower or raise the weight of specific questions.

When you change the weight of a section or question, the other weights are automatically adjusted so the total is always 100 percent.

For example, in the following image, question 2.1 was manually set to 50 percent. The weights that display in italics were adjusted automatically. In addition, you can turn on **Exclude optional questions from scoring**, which assigns all optional questions a weight of zero and redistributes the weight among the remaining questions.

![\[Score weights for a question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-weightdistribution3.png)
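
The exact redistribution algorithm is not documented; the following is a minimal Python sketch of one way to keep the total at 100 percent, assuming the other weights are scaled proportionally:

```python
def redistribute_weights(weights, fixed_question, fixed_weight):
    # Illustrative sketch only: the console's exact redistribution algorithm
    # isn't documented, but the invariant it maintains is that all weights
    # always sum to 100 percent.
    others = {q: w for q, w in weights.items() if q != fixed_question}
    remaining = 100 - fixed_weight
    total = sum(others.values())
    adjusted = {q: round(w / total * remaining, 2) for q, w in others.items()}
    adjusted[fixed_question] = fixed_weight
    return adjusted

# Four questions start at 25 percent each; manually setting question 2.1 to
# 50 percent scales the other three down proportionally.
weights = redistribute_weights(
    {"1.1": 25, "1.2": 25, "2.1": 25, "2.2": 25}, "2.1", 50
)
```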


## Step 9: Activate an evaluation form
Activate an evaluation form

Choose **Activate** to make the form available to evaluators. Evaluators will no longer be able to choose the previous version of the form from the dropdown list when starting new evaluations. For any evaluations that were completed using previous versions, you can still view the version of the form on which the evaluation was based.

If you are still working on setting up the evaluation form and want to save your work at any point, choose **Save**, **Save draft**.

If you want to check whether the form has been correctly set up, but not activate it, choose **Save**, **Save and validate**.

# Set up tag-based access controls on performance evaluations


Amazon Connect enables businesses to restrict access to specific performance evaluation forms, preventing unauthorized access to evaluation form templates and completed evaluations. Businesses can provide managers access to modify or use only the evaluation form templates that are relevant to their business line or function, improving security and making it easier for managers to select the right form while completing evaluations. Additionally, both managers and agents can be restricted from viewing certain completed evaluations. For example, you can restrict agents from viewing test evaluations completed with a form template that is not yet finalized.

You can start by tagging evaluation forms, for example "Department: New customer". When you tag an evaluation form, all subsequent evaluations completed with that form carry the same tag. You can then enable tag-based access controls on evaluation forms and evaluations within the security profiles of users whose access you want to restrict to specific evaluation forms and evaluations. Once tag-based access control on evaluation forms is enabled, users can modify only the permitted evaluation forms on the **Evaluation forms** page. On Contact Search, users can only search for evaluation forms to which they have access, and use those forms to start evaluations. Similarly, within Amazon Connect **Dashboards**, users can only view aggregated scores for evaluation forms to which they have access. Tag-based access control on evaluations restricts users to viewing only permitted evaluations on the **Contact Details** page. For example, if a specific evaluation should only be visible to certain personas, such as fraud investigation, then you can restrict agents from viewing those evaluations on the Contact Details page.
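
As an illustration of these access semantics (a sketch with hypothetical data structures, not Amazon Connect's implementation), a security profile grants access to a form when every tag configured in the profile's access control also appears on the form:

```python
def can_access_form(profile_acl_tags, form_tags):
    # Hypothetical sketch of tag-based access control: access is granted when
    # every tag in the security profile's access control is present on the form.
    return all(form_tags.get(key) == value for key, value in profile_acl_tags.items())

sales_profile = {"Department": "Sales"}
sales_form = {"Department": "Sales", "Product": "Auto Insurance"}
retention_form = {"Department": "Retention"}

print(can_access_form(sales_profile, sales_form))      # True
print(can_access_form(sales_profile, retention_form))  # False
```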

**Important**  
Once you enable tag-based access control on evaluations, users lose access to any evaluations created before the evaluation form was tagged. If you are already using performance evaluations, we recommend that you first tag evaluation forms and accumulate evaluations over several months, prior to enabling tag-based access to evaluations.
We recommend using a single tag on an evaluation form (for example, "Department: New customer") while configuring tag-based access. While assigning and permitting access on multiple tags is possible, it creates complexity. This is discussed in more detail below.

## Tagging evaluation forms


You can tag evaluation forms while creating a new evaluation form, or by updating an existing evaluation form. The tags that you can add to an evaluation form depend on the tag-based access controls granted by your security profiles:
+ If your security profile has no tag-based access controls configured for evaluation forms, then you can create or update a form with any tags.
+ If you have one security profile with tag-based access control enabled on evaluation forms, then the evaluation form tags from your security profile are added automatically when you create evaluation forms through the Amazon Connect UI. You cannot update tags on evaluation forms in this scenario.
+ If you have multiple security profiles, you must add all the tags from one of your security profiles to the evaluation form while creating or updating it. For example, if one of your security profiles grants you access to "Department: Sales" and another grants you access to "Department: Retention", then you must add either the "Department: Sales" or "Department: Retention" tag to the evaluation form. While creating an evaluation form, tags from one of your security profiles are automatically added.

The following are the steps to add tags to an evaluation form.

**While creating an evaluation form**
+ You will be prompted to add tags to an evaluation form when you create it (see [Create an evaluation form](create-evaluation-forms.md)).  
![\[The evaluation forms page, the set form title section with tags field.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-title.png)

**While editing an evaluation form**

1. Open the evaluation form with a security profile that has the permission **Evaluation forms - manage form definitions** - **Edit**.

1. Choose the edit icon next to **Tags**.  
![\[The edit tags icon in the evaluation form.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-tags-edit-form-tags.png)

1. Update the tags.  
![\[The update tags dialog.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-tags-update-form-tags.png)

**Note**  
Tag changes are applied immediately to all versions of the form. Updating tags does not require you to save or activate the form.

## Tag inheritance from evaluation forms to evaluations


When you create an evaluation in the Amazon Connect UI, the tags from the evaluation form are copied to the evaluation upon creation. For example, if the evaluation form is tagged as "Department: Sales", then an evaluation created with this form also carries the same tag. If the evaluation form contains multiple tags (Department: Sales, Product: Dishwasher), those are also carried over to the evaluation, provided you have access to create an evaluation with those tags (discussed in more detail in the next section).
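
The inheritance behavior can be sketched as follows (hypothetical structures, for illustration only): tags are copied at creation time, so later changes to the form's tags do not affect existing evaluations.

```python
def create_evaluation(form):
    # Tags are copied from the form at the moment the evaluation is created.
    return {"form_id": form["id"], "tags": dict(form["tags"])}

form = {"id": "sales-scorecard", "tags": {"Department": "Sales", "Product": "Dishwasher"}}
evaluation = create_evaluation(form)

# Updating the form's tags afterward does not change the existing evaluation.
form["tags"]["Product"] = "Refrigerator"
print(evaluation["tags"]["Product"])  # Dishwasher
```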

**Note**  
Tags are copied over only to new evaluations. If you have existing evaluations, then adding or updating tags on evaluation forms does not change the tags on historically completed evaluations.

## Set up tag-based access to evaluation forms and evaluations


1. Log in to **Amazon Connect** with a user profile that has the **Security Profiles - View** and **Edit** permissions.

1. Go to the **Users > Security Profiles** page, and select the security profile that you want to modify.

1. Choose **Show advanced options**.

1. Select **Allow: Tag-based access control**.

1. Under resources, select **Evaluation forms** and **Contact Evaluations**.

1. Enter the tag that you want to restrict the users' security profile to.  
![\[The tag-based access control setup screen.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-tags-tbac-setup.png)

If you have existing evaluations, then enabling tag-based access to contact evaluations will result in individuals who already have access to evaluations losing access to historical evaluations. To retain access to historical evaluations you can:
+ Start by tagging forms. Any evaluations performed subsequently then carry the same tag. Once you have accumulated several months of evaluations, you can enable tag-based access.
+ Your technical administrator can use the [TagResource](https://docs.aws.amazon.com/connect/latest/APIReference/API_TagResource.html) API to tag any historical evaluations.
+ Enable tag-based access on **evaluation forms** but not **contact evaluations**. This may be desirable when access to contacts is already restricted. For example, supervisors may already be restricted to contacts within their own hierarchy, and you may want to grant your supervisors access to all evaluations on those contacts.
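
For the TagResource option above, the following is a hedged sketch using boto3. The evaluation ARN is a hypothetical placeholder, and the actual API call requires AWS credentials, so it is shown commented out:

```python
def build_tag_request(evaluation_arn, tags):
    # connect:TagResource takes lowercase parameter names.
    return {"resourceArn": evaluation_arn, "tags": tags}

params = build_tag_request(
    # Hypothetical placeholder ARN; substitute the ARN of your evaluation.
    "arn:aws:connect:us-east-1:123456789012:instance/EXAMPLE/contact-evaluation/EXAMPLE",
    {"Department": "New customer"},
)

# With credentials configured, apply the tags with:
#   import boto3
#   boto3.client("connect").tag_resource(**params)
```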

If you have enabled tag-based access control on **Contact Evaluations**, we recommend configuring consistent tag-based access on **Evaluation Forms**. We also recommend that users' security profiles have access to all tags on the forms that they need to use. For example, if a user needs to use a form with the tags "Department: New customer" and "Product: Auto Insurance", the user's security profile should have access control enabled for both these tags across both **Evaluation Forms** and **Contact Evaluations**. If they have only one of the tags, then creating an evaluation manually in the UI fails.

## Restricting access to automated evaluation forms under testing


Tag-based-access-control can be used to run automated evaluation tests in production, without revealing evaluation results to agents and supervisors. This is useful if you are already using evaluation forms in production. An example setup is as follows:
+ On the **Evaluation forms** page, tag evaluation forms that are live and should be visible to agents and supervisors as "Live: Yes".
+ On **Users > Security Profiles**, turn on tag-based access control on **Evaluation Forms** and **Evaluations**, restricting agent and supervisor access to forms with the tag "Live: Yes".
**Note**  
Before enabling tag-based access control, you may want to let sufficient history accumulate (for example, two months of evaluations), because enabling it results in a loss of access to untagged historical evaluations.
+ Automated evaluation forms that are still under testing can be tagged as "Live: No", preventing them from being visible to agents and supervisors.
+ Quality managers responsible for creating evaluation forms can be granted access to evaluation forms without tag-based restrictions. Alternatively, you can assign two security profiles to quality managers:
  + The first would grant them access to **Evaluation Forms** and **Evaluations** with the tag "Live: No"
  + The second would grant them access to **Evaluation Forms** and **Evaluations** with the tag "Live: Yes"
+ Once you are ready to go live with automated evaluations, you can duplicate the form and change the tag to "Live: Yes". The original form used during testing should continue to carry the tag "Live: No". This ensures that supervisors and agents cannot see historical aggregated evaluation scores in **Dashboards** from when the form was under testing.

## Tag-based access control while setting up rules to submit automated evaluations


You can only create a rule to submit automated evaluations using a form that you have access to. For example, suppose there is an automated evaluation form **Auto Insurance Sales Scorecard** with the tags "Department: New customer" and "Product: Auto Insurance", and your security profile grants you access to the tag "Department: New customer" for evaluation forms. Then you can set up a rule to auto-submit evaluations using the form **Auto Insurance Sales Scorecard**.

## Tag-based access control while setting up calibration sessions


As an administrator of a calibration session, you can only create a calibration session with evaluation forms that you have access to.

# View an evaluation form audit trail in Amazon Connect
View an evaluation form audit trail

1. Select the evaluation form that you want to research.  
![\[The evaluation forms page, a box to the left of an evaluation form.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-select.png)

1. At the bottom of the page, under **Example Evaluation**, use the dropdown menu to view previous versions, who accessed them, and when. The following image shows an example audit trail.   
![\[An example audit trail for an evaluation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-version.png)

1. Optionally, choose one of the forms to open it.

## What do Active, Draft, and Locked mean?
Active, Draft, Locked

A form is in one of the following states:
+ **Active**. A published version of the form that is available to evaluators.
+ **Draft**. An inactive, locked version of the form. A draft is unlocked only when you are working on it.
+ **Locked**. An evaluation form is locked when you activate or publish it. Even after you deactivate the form, it stays locked and becomes a historical version of the form. However, you can activate the historical version to save it as a new version.

# Evaluate agent performance in Amazon Connect using generative AI
Evaluate performance using generative AI

**Note**  
**Powered by Amazon Bedrock**: AWS implements automated abuse detection. Because generative AI features in Contact Lens are built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI).

Managers can specify their evaluation criteria in natural language and use generative AI to automate evaluations of up to 100% of customer interactions. Generative AI enables you to automate evaluations of additional agent behaviors (for example, was the agent able to resolve the customer’s issue?), so managers can comprehensively monitor and improve regulatory compliance, agent adherence to quality standards, and sensitive data collection, while reducing the time spent on evaluating agent performance. Along with answers, you are also provided with context and justification, and references to specific points in the transcript that you can use to provide agent coaching.

You can use generative AI to assist managers with filling evaluations, or use it to automatically fill and submit evaluations. For more information about setting up automated evaluations, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

Evaluation questions are answered using generative AI, which interprets the question title and the evaluation criteria specified within the instructions to evaluators associated with each question, and uses these to analyze the conversation transcript. For more information, see [Step 2: Add sections and questions](create-evaluation-forms.md#step-sections).

## Process to automate evaluations using generative AI


The following is an overview of the automation process:

1. Get a high-level understanding of which of the evaluation questions should be answered with generative AI by reading [Guidelines to improve generative AI accuracy](#guidelines-to-improve-generative-ai-accuracy).

1. Assign permissions to select users within your quality management team to use the Ask AI assistant. These users will see the **Ask AI** button next to each question while performing evaluations, and can use it to get answer recommendations. These users can provide feedback on which questions are receiving accurate answers using generative AI. For more information, see [Assign security profile permissions for performance evaluations and coaching](evaluation-and-coaching-permissions.md).

1. To improve accuracy, you can provide additional evaluation criteria within [instructions to evaluators](create-evaluation-forms.md#step-sections). For more information, see [Guidelines to improve generative AI accuracy](#guidelines-to-improve-generative-ai-accuracy).

1. Once you have a good understanding of which questions can be accurately answered with generative AI, you can do a broader rollout by pre-configuring, on the evaluation form, whether a question will receive an automated answer using generative AI.

1. Once you have set up automation, any user performing evaluations using the evaluation form gets automated generative AI answers to the pre-configured questions (without requiring additional permissions). For more information, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

1. You can set up automation such that an evaluator first reviews the generative AI answers before submission, or you can automatically fill and submit evaluations.

## Use Ask AI to get generative AI answer recommendations


1. Log in to Amazon Connect with a user account that has [permissions to perform evaluations](evaluation-and-coaching-permissions.md) and [Ask AI assistant](evaluation-and-coaching-permissions.md).

1.  Choose the **Ask AI** button below a question to receive a generative AI-powered recommendation for the answer, along with context and justification (reference points from the transcript that were used to provide answers). 

   1. The answer is automatically selected based on the generative AI recommendation, but you can change it.

   1.  You can get generative AI-powered recommendations by choosing **Ask AI** for up to 10 questions per contact. For more information, see [Contact Lens service quotas](amazon-connect-service-limits.md#contactlens-quotas).

1. You can choose the time associated with a transcript reference to be directed to that point in the conversation.  
![\[Generative AI-powered recommendations while evaluating agent performance.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/get-generative-ai-powered-recommendations-performance.png)

## Provide additional criteria for answering evaluation form questions using generative AI


 While configuring an evaluation form, you can provide criteria for answering questions within the **instructions to evaluators** associated with each evaluation form question. Apart from driving consistency in evaluations by evaluators, these instructions are also used to provide generative AI-powered evaluations. 

![\[New account opening scorecard.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/provide-criteria-for-answering-evaluation-form-questions.png)


## Set up automated evaluations using generative AI on the evaluation form


You can pre-configure on an evaluation form whether a question will be automatically answered using generative AI. Then, when you start an evaluation using the evaluation form in the Amazon Connect UI, answers to these questions are automatically filled using generative AI (without requiring you to choose **Ask AI**). You can also use generative AI to automatically fill and submit evaluations. For automatically submitted evaluations, you can use generative AI to answer up to 10 questions per contact (see [Contact Lens service quotas](amazon-connect-service-limits.md#contactlens-quotas)). This limit does not apply to automation using contact categories or metrics (for example, longest hold duration).

To learn more about setting up automated evaluations using generative AI, see [Guidelines to improve generative AI accuracy](#guidelines-to-improve-generative-ai-accuracy).

## Set up generative AI-powered evaluations in non-English languages


By default, if you do not set the language of an evaluation form, the generative AI model automatically detects the language of your evaluation form questions and tries to provide answers in the same language, if the model understands that language. However, answer justifications are typically provided in English.

To consistently receive both AI-generated answers and answer justifications in your preferred language, you can set the language of an evaluation form, choosing from **English**, **Spanish**, **Portuguese**, **French**, **German**, and **Italian**. By explicitly setting the language of an evaluation, you can also perform cross-language evaluations, where generative AI fills an evaluation form in English even when the conversation transcript is in another language, such as Spanish. This enables multilingual contact centers to use a standardized evaluation framework across languages.

To set the language of the evaluation form:

1. Select the **Additional settings** tab while creating or updating an evaluation form.

1. Choose **Form language** from the dropdown.

1. Ensure your form’s questions, instructions, and answer choices are in the same language as the selected **Form language**, for optimal AI performance.

![\[The evaluation form page, the Additional settings tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-languageexample1.png)


## Guidelines to improve generative AI accuracy


**Selecting questions for getting generative AI recommendations**

1. Use generative AI to respond to questions that can be answered using information from the conversation transcript, without the need to validate information through third-party applications such as CRM systems.

1. Using generative AI to answer questions requiring numeric responses, such as "How long did the agent interact with the customer?" is not recommended. Instead, consider [setting up automation](create-evaluation-forms.md#step-automate) for such evaluation form questions using Contact Lens or contact metrics.

1. Avoid using generative AI to answer highly subjective questions, for example, "Was the agent attentive during the call?" 

**Improving phrasing of questions and associated instructions**

1. Use complete sentences to word questions. For example, replacing *ID validation* with "Did the agent attempt to validate the customer’s identity?" enables the generative AI to better understand the question.

1. We recommend that you provide detailed criteria for answering the question within the **instructions to evaluators**, especially if it's not possible to answer the question based on the question text alone. For example, for the question "Did the agent try to validate the customer identity?" you may want to provide additional instructions such as, *The agent is required to always ask a customer their membership ID and postal code before addressing the customer’s questions*.

1.  If answering a question requires knowledge of some business specific terms, then specify those terms in the instruction. For example, if the agent needs to specify the name of the department in the greeting, then list the required department name(s) that the agent needs to state as part of the **instructions to evaluators** associated with the question.

1.  If possible, use the term 'agent' instead of terms like 'colleague', 'employee', 'representative', 'advocate', or 'associate'. Similarly use the term 'customer', instead of terms like 'member', 'caller', 'guest', or 'subscriber'.

1. Only use double quotes in your instruction if you want to check for exact words being spoken by the agent or the customer. For example, if the instruction is to check for the agent saying `"Have a nice day"`, then the generative AI will not detect *Have a nice afternoon*. Instead, the instruction should say: `The agent wished the customer a nice day`.
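
As a rough illustration of the exact-wording point above (a sketch, not how the service implements matching), a double-quoted phrase behaves like a verbatim search, which a paraphrase does not satisfy:

```python
import re

def exact_phrase_spoken(transcript, phrase):
    # Verbatim, case-insensitive search for the quoted phrase.
    return re.search(re.escape(phrase), transcript, re.IGNORECASE) is not None

print(exact_phrase_spoken("Thanks for calling. Have a nice day!", "Have a nice day"))        # True
print(exact_phrase_spoken("Thanks for calling. Have a nice afternoon!", "Have a nice day"))  # False
```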

# Performance evaluations of self-service interactions in Amazon Connect
Performance evaluations of self-service interactions

Amazon Connect provides you with the ability to automatically evaluate the quality of self-service interactions and get aggregated insights to improve customer experience. Managers can define custom criteria to assess the quality of self-service interactions, which can be filled manually or automatically using insights from conversational analytics and other Amazon Connect data. For example, you can automatically assess whether the AI agent repeatedly fails to understand the customer, resulting in poor customer sentiment and a transfer to a human agent. Managers can review these insights in aggregate and on individual contacts, alongside self-service interaction recordings and transcripts, to identify opportunities to improve bot or AI agent performance.

**Note**  
Performance evaluations of self-service interactions is only available as part of Amazon Connect (with unlimited AI). For more information, see [Amazon Connect pricing](https://aws.amazon.com/connect/pricing/).

To automatically evaluate self-service interactions, you need to first [Enable conversational analytics in Amazon Connect Contact Lens](enable-analytics.md). Performance evaluations can evaluate the entire self-service interaction, irrespective of whether it's handled by touch tone, Lex bots, Amazon Connect AI agents, or custom bots within Amazon Connect. The steps to set up automated evaluations of self-service interactions are as follows:
+ [Step 1: Create a draft evaluation form](#step-create-draft-form-self-service)
+ [Step 2: Set up automation](#step-setup-automation-self-service)
+ [Step 3: Set up a rule to automatically submit evaluations of self-service interactions](#step-setup-rule-self-service)

## Step 1: Create a draft evaluation form


You can define custom criteria to evaluate self-service interactions. These criteria can measure self-service resolution, customer experience, or bot/AI agent behaviors.

An example evaluation form is as follows:

Section 1: Self-service success  
+ **1.1** Was the contact handled during self-service, without transferring to a human agent? (Single selection)
+ **1.2** Was the customer able to self-serve at least one of their needs? (Single selection)

Section 2: Customer experience  
+ **2.1** What was the overall customer sentiment score during self-service? (Number)
+ **2.2** Did the customer express frustration during self-service? (Single selection)

Section 3: AI agent behaviors  
+ **3.1** Did the AI agent fail to understand the customer and ask them to repeat themselves? (Single selection)
+ **3.2** Was the AI agent rude or aggressive towards the customer at any point? (Single selection)

For additional details, see [Create an evaluation form in Amazon Connect](create-evaluation-forms.md).

## Step 2: Set up automation


You can automate evaluations of self-service interactions using Amazon Connect rules (including generative AI-powered semantic match rules) and using integrated metrics such as customer sentiment. Note that currently, you cannot use the integrated generative AI within the evaluation form to automatically evaluate self-service interactions.

### Automation using rules


Start with setting up a rule:

1. On the navigation menu, choose **Analytics and optimization**, **Rules**.

1. Select **Create a rule**, **Conversational analytics**.

1. Under **When**, use the dropdown list to choose **post-call analysis** or **post-chat analysis**.

Example rules that you can create:

Self-service containment  
+ Add a new condition checking that the queue was not assigned and the contact was handled during the automated interaction.
+ You can also use natural language intent to confirm that the customer did not request a human agent during the automated interaction with the Lex bot or AI agent.
Amazon Connect understands the following keywords within semantic match rules:  
+ **System:** Denotes a bot or AI agent
+ **Agent:** Refers to the human agent
+ **Customer:** The person interacting with the contact center
+ **Automated interaction:** The part of the customer interaction where a human agent was not present in the conversation, including self-service interaction with a bot or AI agent, and wait time in the queue
+ **Human agent interaction:** Customer interaction with the human agent

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-containment-rule.png)

+ If you are using an Amazon Connect AI agent, you can also check whether the self-service AI agent escalated to a human agent.

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-ai-agent-escalation-check.png)


Self-service success for at least one intent  
Create a rule using **natural language - semantic match** condition:  
"During the automated interaction, the system successfully fulfilled at least one of the customer requests, such as providing information or completing another service request."

Bot/AI agent failing to understand the customer  
Create a rule using **natural language - semantic match** condition:  
"The system failed to understand the customer and asked the customer to repeat themselves."

Customer expressed frustration  
Create a rule using **natural language - semantic match** condition:  
"Customer expressed frustration during the automated interaction."

After you set up a rule, you can use it to answer single selection or multiple selection questions in your evaluation form. For example, if you created a rule to check for self-service containment, then you can use that to answer a question on whether the contact was handled during self-service.

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-use-rules-in-form.png)
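
Conceptually, this mapping works like the following sketch (hypothetical names, for illustration only): when a rule matches, Contact Lens applies its category to the contact, and the category's presence drives the single-selection answer.

```python
def answer_from_category(contact_categories, rule_category):
    # A single-selection question is answered based on whether the contact
    # was tagged with the rule's category.
    return "Yes" if rule_category in contact_categories else "No"

# A contact matched the hypothetical self-service containment rule.
print(answer_from_category({"SelfServiceContainment"}, "SelfServiceContainment"))  # Yes
print(answer_from_category({"CustomerFrustration"}, "SelfServiceContainment"))     # No
```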


### Automation using metrics


You can use contact metrics to automatically answer questions on the self-service experience. For example, you can check customer sentiment during the automated interaction. To use metrics, ensure that the question type is set to **Number**.

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-metrics-automation.png)


After you have set up automation on every question, turn on **Enable automated submission of evaluations** and activate the form. You are then guided to create a rule to automatically submit the evaluation form.

For additional details, see [Step 6: Enable automated evaluations](create-evaluation-forms.md#step-automate).

## Step 3: Set up a rule to automatically submit evaluations of self-service interactions


You can use the following conditions to identify specific self-service interactions.

AI Agent  
To trigger a self-service interaction evaluation, you can identify whether specific AI agents were active on the contact. You can also check for a specific AI agent version.  

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-ai-agent-identification.png)


Custom contact attributes and contact segment attributes  
You can also use **custom contact attributes** and **contact segment attributes** set within flows to identify specific workflows, bots, customer intents, or outcomes. For example, you might set a contact attribute within flows, `pizzaOrderBot = true`, if a Lex bot called "Pizza Order Bot" is invoked during the conversation.  

![\[alt text not found\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/self-service-eval-custom-contact-attributes.png)


After you have defined conditions:

1. On the **Define actions** page, provide a category name to identify the rule.

1. Choose **Add action**, select **Submit automated evaluation**, and select the form that you want to use for automatically submitting an evaluation. (This action is already selected on the page if you created the rule when you activated the form.)

For more information, see [Create a rule in Contact Lens that submits an automated evaluation](contact-lens-rules-submit-automated-evaluation.md).

# Use scoring and weights on agent evaluation forms in Amazon Connect

When scoring is enabled for an evaluation form, you can assign *weights* to sections or questions. A weight raises or lowers the impact of that section or question on the final score of the evaluation.

## Example score

Let's say you are assigning a score to a question that is critically important to your business. If the answer is Yes, the agent gets 10 points; if No, they get 0 points. This is shown in the following image.

![\[The evaluation form page, the scoring tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoringexample1.png)


The answer to the first question is more important to your business than the answer to *Did the agent close with "Is there anything else I can assist you with today?"*, which is also worth 0-10 points, as shown in the following image. 

![\[The evaluation form page, the scoring tab.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoringexample2.png)


To differentiate the scores of the questions, you give one question more weight than the other. 

The following image shows that the answer to *Did the agent recite the compliance script for the medication* accounts for 50% of the agent's score, whereas the answer to *Did the agent close with "Is there anything else I can assist you with today"* accounts for only 5%.

![\[The evaluation form page, the scoring tab, the score weights section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-scoringexample3.png)


The total weight must always equal 100%.

## Weight distribution mode

With **Weight distribution mode**, you choose whether to assign weight by section or question: 
+ **Weight by section**: You can evenly distribute the weight of each question in the section.
+ **Weight by question**: You can lower or raise the weight of specific questions.

When you change the weight of a section or question, the other weights are automatically adjusted so the total is always 100 percent.

For example, in the following image, three of the questions were manually set to 10 percent. The weights that display in italics were adjusted automatically. 

![\[Score weights for a question.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-weightdistribution3.png)


## Weights of optional questions

When a question is optional or applicable only in certain scenarios, choose **Enable "Not Applicable"** as an answer option to the question. The following image shows this setting on the **Answers** tab.

![\[The Answers tab, the Enable "Not Applicable" option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-weightsoptional.png)


After an evaluation is completed, Amazon Connect calculates the evaluation score:
+ Questions that are answered as **Not Applicable** do not count toward the form's final score. 
+ Their weight is redistributed proportionally among the remaining questions so that the total sum of weights across all questions remains 100%. 

For example, consider the following table. It represents a form with four questions (Q1, Q2, Q3, and Q4) that have weights of 40%, 20%, 20%, and 20% respectively. Each question has three answer options (A1, A2, and A3) with scores of 10, 5, and 0. An evaluation with answers Q1:A1, Q2:A2, Q3:A2, Q4:A3 would be scored as shown in the table.


| Question | Question weight | Answer | Answer score | Weighted answer score | 
| --- | --- | --- | --- | --- | 
|  Q1  |  40%  | A1  | 10  | 40%  | 
|  Q2  |  20%  | A2  | 5  | 10%  | 
|  Q3  |  20%  | A2  | 5  | 10%  | 
|  Q4  |  20%  | A3  | 0  | 0%  | 

The form's evaluation score = 40% + 10% + 10% + 0% = 60%.
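The weighted sum above can be sketched in a few lines of Python. The function name is illustrative only; each answer's score is normalized against the 10-point maximum from the example table:

```python
def weighted_score(questions, max_points=10):
    """questions: list of (weight_percent, answer_score) pairs."""
    return sum(weight * (score / max_points) for weight, score in questions)

# Q1:A1=10, Q2:A2=5, Q3:A2=5, Q4:A3=0, weighted 40/20/20/20
print(weighted_score([(40, 10), (20, 5), (20, 5), (20, 0)]))  # 60.0
```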

However, if the answer to question Q4 is changed to **Not Applicable**, then the evaluation is scored as follows:


| Question | Question weight | Answer | Additional question weight | Redistributed question weight | Answer score | Weighted answer score | 
| --- | --- | --- | --- | --- | --- | --- | 
|  Q1  |  40%  | A1  | 10% | 50% | 10  | 50%  | 
|  Q2  |  20%  | A2  | 5% | 25% | 5  | 12.5%  | 
|  Q3  |  20%  | A2  | 5% | 25% | 5  | 12.5%  | 
|  Q4  |  20%  | Not Applicable | - | - | -  | - | 

Here's what's going on:
+ Question Q4 is effectively removed from the calculation. Its weight (20%) is distributed among the remaining 3 questions in proportion to their weights.
+ Question Q1 has double the weight of questions Q2 and Q3, so it receives double the amount of added weight. 
+ The form's evaluation score = 50% + 12.5% + 12.5% = 75%.
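The proportional redistribution described above can be sketched as follows, with `None` standing in for a **Not Applicable** answer (the function name is illustrative only):

```python
def redistribute(weights, answers):
    """Drop Not Applicable (None) questions and rescale the rest to total 100%."""
    kept = [(w, a) for w, a in zip(weights, answers) if a is not None]
    total = sum(w for w, _ in kept)
    return [(w * 100 / total, a) for w, a in kept]

scored = redistribute([40, 20, 20, 20], [10, 5, 5, None])  # Q4 is Not Applicable
print([round(w, 1) for w, _ in scored])    # [50.0, 25.0, 25.0]
print(sum(w * a / 10 for w, a in scored))  # 75.0
```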

# Notify supervisors and agents about performance evaluations

You can create rules that automatically send emails or tasks to supervisors and agents based on evaluation results. 
+ Supervisor notifications can drive timely coaching based on performance evaluations. For example, you can notify supervisors if an agent receives an evaluation score below a certain threshold. 
+ Agent notifications can be used to prompt agents to review and acknowledge their evaluations.

**Topics**
+ [Step 1: Define rule conditions for evaluation forms](#rule-conditions-eval)
+ [Step 2: Define rule actions](#rule-actions-eval)
+ [Example rule with multiple conditions](#rule-example-eval)

## Step 1: Define rule conditions for evaluation forms

1. On the navigation menu, choose **Analytics and optimization**, **Rules**.

1. Select **Create a rule**, **Evaluation forms**.

1. Under **When**, use the dropdown list to choose **A Contact Lens evaluation result is available**, as shown in the following image.  
![\[The option When an evaluation result is available.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-rule-condition.png)

1. Choose **Add condition**.   
![\[The list of conditions for when an evaluation result is available.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-rule-condition-all.png)

   You can combine criteria from a set of conditions to build very specific Contact Lens rules. The following are some of the available conditions: 
   + **Evaluation - Form score**: Build rules that run when the score for a specific evaluation form is met. 
   + **Evaluation - Section score**: Build rules that run when the score for a specific section is met. 
   + **Evaluation - Question answer**: Build rules that run when the score for a specific question and answer is met. 
   + **Evaluation - Results available**: Build rules that run on any evaluation submissions. 
   + **Agent hierarchy**: Build rules that run on a specific agent hierarchy. Agent hierarchies may represent geographical locations, departments, products, or teams.

     To see the list of agent hierarchies so you can add them to rules, you need **Agent hierarchy - View** permissions in your security profile.
   + **Agent**: Build rules that run on a subset of agents. For example, receive notifications on agents belonging to your team.

     To see agent names so you can add them to rules, you need **Users - View** permissions in your security profile. 
   + **Queues**: Build rules that run on a subset of queues. Often organizations use queues to indicate a line of business, topic, or domain. For example, you could build rules specifically for the evaluations of those agents assigned to sales queues.

     To see the queue names so you can add them to rules, you need **Queues - View** permissions in your security profile. 
   + **Contact attributes**: Build rules that run on the values of custom [contact attributes](what-is-a-contact-attribute.md). For example, you can build rules for agent evaluations for a particular line of business or for specific customers, such as based on their membership level, their current country of residence, or if they have an outstanding order. 
   + **Contact segment attributes**: You can identify contacts within rules using custom contact segment attributes with values populated from other systems or using custom logic. You can [define an attribute](predefined-attributes.md#predefined-attributes-create-web-admin) and set its value in flows. Custom segment attributes are present only on that specific contact ID, not the entire contact chain. For example, you can build a rule that identifies that a customer closed their account during the conversation.

     To see the list of contact segment attributes to add to a rule, you need **Predefined attributes - View** permission.

1. Choose **Next**.

## Step 2: Define rule actions

1. Choose **Add action**. You can choose the following actions:
   + [Create Task](contact-lens-rules-create-task.md)
   + [Send email notification](contact-lens-rules-email.md)
   + [Generate an EventBridge event](contact-lens-rules-eventbridge-event.md)  
![\[The add action dropdown menu, a list of actions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-add-action-no-wisdom.png)

1. Choose **Next**.

1. Review and make any edits, then choose **Save**. 

1. Rules apply to new evaluation submissions that occur after the rule was added. You cannot apply rules to past, stored evaluations.

## Example rule with multiple conditions

The following image shows a sample rule with six conditions. If any of these conditions are met, the action is triggered.

![\[A rule with six conditions.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-multiple-conditions.png)


1. **Evaluation - Form score**: Does the Compliance Form have a score greater than or equal to 50%?

1. **Evaluation - Section score**: In a Compliance Form, does the Greeting section have a score greater than or equal to 70%?

1. **Evaluation - Question score**: Does the Compliance Form question *Did the agent greet the customer properly* equal **Yes**?

1. **Evaluation - Results available**: Have any results been generated for the Compliance Form?

1. **Queues**: Is this for the **BasicQueue**?

1. **Contact attributes**: Does CustomerType equal VIP?

# Provide agent coaching in Amazon Connect

Amazon Connect provides integrated coaching tools that help supervisors deliver structured, data-driven feedback to agents based on performance evaluations. For upcoming one-on-one sessions with agents, supervisors can share detailed coaching feedback with concrete examples, and set performance goals directly within Amazon Connect. Quality management teams can also assign coaching to supervisors with due dates when they identify improvement opportunities, such as showing greater empathy towards customer issues. Once coaching is completed, agents can acknowledge the feedback in Amazon Connect, ensuring that they understand next steps for improvement. Past coaching feedback is centrally accessible, making it easier for agents, supervisors, and quality managers to track agent progress over time.

**Note**  
This feature is available as part of Amazon Connect performance evaluations.

## Assign permissions for coaching


Permissions can be configured as follows:

1. **Admins and quality managers**: Provide **coaching – manage coaching sessions** permissions. These permissions grant them access to all coaching sessions in your Amazon Connect instance. With this permission, they can assign agent coaching to agents' supervisors.

1. **Supervisors**: Provide **coaching – my coaching sessions** (View, Create, Delete, Edit) permissions. These permissions enable them to create and manage agent coaching with themselves as the coach.

1. **Agents**: Provide **coaching – my coaching sessions – View** permission. This permission enables the agent to view and acknowledge coaching where they are the participant.

For more information, see [Assign security profile permissions for performance evaluations and coaching](evaluation-and-coaching-permissions.md).

## Provide coaching to agents


1. Log in to Amazon Connect with a security profile that can [search contacts](contact-search.md) and perform coaching.

1. Select **Analytics and Optimization** > **Contact search** from the navigation bar on the left.

1. From **Contact Search**, find contacts that have been evaluated for the agent that you want to coach. For example, you can find contacts where the evaluation score is less than 70%:  
![\[The Contact Search page with an evaluation score filter applied.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-evaluation-score-filter.png)

1. Open a contact that has been evaluated, and view the evaluations on the right pane.

1. Open an evaluation and click **Coach on this evaluation**.  
![\[The Coach on this evaluation button on an evaluation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-coach-on-this-evaluation-button.png)

1. You can add the entire evaluation, or a specific section or question, to a coaching session:  
![\[Adding evaluation items to a coaching session.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-add-evaluation-items.png)

1. You can link the evaluation, its sections, or its questions to an existing coaching session, or create a new session. Items can be linked as strengths or growth opportunities.  
![\[The dialog for adding a question to a coaching session.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-add-question-to-coaching-dialog.png)

1. After you add an evaluation or its items for coaching, a link is provided to view the coaching session.

1. You can link up to 10 evaluations or evaluation items to a single coaching session as examples of agent strengths or growth opportunities. To link additional evaluations, repeat steps 2 through 7.

1. You can edit the coaching session by specifying dates, times, and location, providing detailed feedback, and setting improvement goals on coaching topics.  
![\[The edit coaching session page with fields for dates, times, location, feedback, and goals.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-edit-coaching-session.png)
**Note**  
**Session due date** is mandatory.

1. Click **Submit** to save the coaching session as a draft.

1. When the coaching session is ready, click **Share** to make the coaching session visible to the agent. If the agent has an email configured within Amazon Connect (or has a secondary email for a SAML instance), they will receive an email notification with a link to view the coaching session.

1. At the time of coaching, you can access the coaching session on **Analytics and Optimization** > **Coaching sessions**. This page displays all past and upcoming coaching sessions.

1. After the coaching session is finished, click **Mark as Complete** and optionally add a note.

1. Agents can acknowledge the coaching along with their own coaching notes.

## Search for coaching sessions


You can view all past and upcoming coaching sessions from the **Analytics and Optimization** > **Coaching sessions** page.

This page provides advanced search capabilities. You can search for coaching sessions:
+ Performed by a particular coach
+ Where a specific agent was the participant
+ Created by a specific quality manager
+ On a specific topic
+ That are past due date but not completed
+ That are pending completion (shared or draft status)
+ That are completed, but not yet acknowledged by the participant
+ And more

![\[The coaching sessions search page with filter options.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/coaching-search-filters.png)


# Acknowledge performance evaluations in Amazon Connect

When an agent performance evaluation is submitted, you can automatically notify the agent to review their evaluation. For example, you can set up a [rule to send an email](contact-lens-rules-email.md) to the agent when an evaluation is available. You can also walk an agent through their evaluation during coaching.

After the agent has reviewed the performance evaluation, they can acknowledge their review of the evaluation and write an optional note in the Amazon Connect admin website. This acknowledgement enables managers to track whether agents are reviewing the feedback provided on their performance evaluations.

This topic explains the steps for agents to view and acknowledge an evaluation.

**To acknowledge an evaluation**

1. After you have received a performance evaluation for a contact, use your agent account to log in to the Amazon Connect admin website at https://*instance name*.my.connect.aws/.

1. Access the contact evaluation that you want to acknowledge. There are a few ways you can do this:
   + Someone may have shared the contact URL with you.

   - OR - 
   + You may have been assigned a task or received an email notification containing the URL for the contact that received an evaluation.

   - OR - 
   + You may have the contact ID and evaluation form name. You can use this information to search for the contact that received the evaluation using the following steps.

     1. On the navigation pane, choose **Analytics and optimization**, **Contact search**.

     1. Search for the contact that was evaluated but not yet acknowledged. The following image shows the filters to search for **Acknowledged** = **No**.  
![\[The Filters section of the Contact search page, set to Acknowledged = No.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack1.png)

1. On the **Contact details** page, choose **Evaluations** or expand the evaluation panel by choosing the **<** icon, as shown in the following image.  
![\[The Evaluations button, and the icon to expand the evaluation pane.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack2.png)

1. The **Evaluations** panel lists any evaluations that are in progress or completed for the contact. To acknowledge an evaluation, choose an evaluation from the list of **Completed evaluations**. The following image shows one evaluation that has been completed: **Customer servicing scorecard**.  
![\[The Evaluations pane, the completed evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack3.png)

1. Choose the evaluation you want to review. At the bottom of the evaluation, choose **Acknowledge**, as shown in the following image. 
**Note**  
Only the agent who was evaluated can acknowledge the evaluation.  
![\[The Evaluations pane, the completed evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack4.png)

1. In the **Acknowledge evaluation result** dialog box, provide an optional comment. For example, *Manager walked through the evaluation during coaching on March 5th, 2025*. 

   When you're finished, choose **Confirm**.   
![\[The Acknowledge evaluation result section, the Confirm button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack5.png)

1. A message is displayed that the evaluation acknowledgement is **Completed**, as shown in the following image.   
![\[A message that the evaluation is successfully acknowledged.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack6.png)

1. You can only acknowledge an evaluation after it is submitted. If an evaluation is re-submitted, it again becomes eligible for acknowledgement.

1. To view the acknowledgement note, select the acknowledged evaluation, and then choose the **view note** link.  
![\[The Acknowledgement note.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-ack7.png)

# Random sampling of contacts for evaluation in Amazon Connect

Amazon Connect provides managers with a random sample of their agents' contacts for evaluation, removing manager bias and streamlining the evaluation process. On Contact Search, managers can specify the number of contacts that they need to evaluate for each agent, as required by union agreements, regulations, or internal guidelines. They then receive the required number of contacts, randomly selected from the specified timeframe, for example, 3 contacts per agent from the last week. In addition, managers can apply additional filters within Contact Search to ensure that the provided contacts are suitable for evaluation, for example, contacts that are longer than 180 seconds, have associated audio or screen recordings and transcripts, and have not yet been evaluated. Once the sample is generated, you can select an evaluation form and create draft evaluations in bulk for each of the contacts within the sample. Evaluations created in this way denote that the contact was selected through random sampling, and provide auditability to ensure that the filter criteria did not introduce any bias in selection. 
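Conceptually, the per-agent selection works like the following sketch, run over contacts that have already passed the filters described above. This illustrates the sampling logic only; it is not Amazon Connect's implementation, and the function name is ours.

```python
import random

def sample_per_agent(contacts, n, seed=None):
    """Pick up to n random contacts for each agent."""
    rng = random.Random(seed)
    by_agent = {}
    for contact in contacts:
        by_agent.setdefault(contact["agent"], []).append(contact)
    return {agent: rng.sample(items, min(n, len(items)))
            for agent, items in by_agent.items()}

contacts = ([{"agent": "jane", "id": i} for i in range(10)]
            + [{"agent": "raj", "id": i} for i in range(2)])
sample = sample_per_agent(contacts, n=3, seed=7)
print({agent: len(picked) for agent, picked in sample.items()})  # {'jane': 3, 'raj': 2}
```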

**Random sampling of contacts for evaluation**

1. Log in to Amazon Connect with a user who has the following permissions on their security profile: 

   1.  Contact Search - View 

   1.  Sample contacts 

   1.  Evaluation forms – perform evaluations 

1. Select the timeframe of contacts for evaluation, such as the trailing week. You can sample contacts from a maximum period of 5 weeks.  
![\[Select timeframe\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-time-range.png)

1. Select the agent or agent hierarchy that you need to evaluate.  
![\[Filter search - Agent\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-agent-filter.png)  
![\[Add filter - Agent\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-agent-filter-select.png)

1. Apply any additional filters to select only those contacts that are suitable for evaluation.
   + **Conversational analytics**: Ensures that the contact was analyzed by conversational analytics and has a transcript.
   + **Recording**: Filters contacts that have an audio recording (voice) or screen recording (video).
   + **Interaction Duration**: Chooses contacts with a minimum and maximum agent-customer interaction duration.
   + **Evaluation Status**: Selects only contacts that have not yet been evaluated.  
![\[Add additional filters\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-search-filters.png)

1. Specify the sampling criteria, such as 5 contacts per agent, and click **apply** to generate a sample.  
![\[Sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-criteria.png)

1. You can save the set of filters and sampling criteria as a saved search.  
![\[Save filters and sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-save-search.png)![\[Save filters and sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-save-search-name.png)![\[Save filters and sampling criteria\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-save-search-banner.png)

1. Once the sample is generated, you can create draft evaluations in bulk across all the contacts.
   + Select **Create Draft Evaluations**
   + Select the **Evaluation Form**  
![\[Create draft evaluations\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-create-draft-eval-empty.png)  
![\[Select evaluation form\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-create-draft-eval-form-select.png)

   This associates the draft evaluations with the sample name.
**Note**  
This step is required if you need to retrieve the contact sample in the future.  

![\[Creating draft evaluations\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-in-progress-banner.png)


![\[Draft evaluations successfully created\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-success-banner.png)


## Retrieving and viewing sampled contacts for evaluation


To retrieve the contact sample in the future, go to Contact Search and apply the **Evaluation – contact samples** filter. Note that contact samples are specific to the user who generated the sample.

![\[Create draft evaluations\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-contact-samples-filter.png)


## Auditing sampling criteria


When you open an evaluation, it indicates whether contact sampling was used to create the evaluation. You can click **Yes** to audit the filter criteria used to generate the contact sample, ensuring that the filters did not introduce any bias (for example, negative customer sentiment) during the contact selection process.

![\[Create draft evaluations - contact details\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-evals-list.png)


![\[Create draft evaluations - evaluation overview\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-sampled-eval.png)


![\[Create draft evaluations - contact sample details\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-randomsampling-sampled-eval-details.png)


# Request reviews of (appeal) performance evaluations in Amazon Connect

When an agent performance evaluation is submitted, you can automatically notify the agent to review their evaluation. For example, you can set up a [rule to send an email](contact-lens-rules-email.md) to the agent when an evaluation is available. Once they have reviewed an evaluation, they can [acknowledge](acknowledge-evaluations.md) it. If they disagree with the feedback in an evaluation, they can request a review of (appeal) the evaluation. When a review is requested, designated managers are automatically notified by email. They can then revise the evaluation, or add notes that justify the original evaluation, before completing the review. Upon completion, the user who requested the review and the evaluated agent are notified by email.

## How do I enable review requests (appeals)?


Amazon Connect enables you to specify which evaluation forms support review requests. To enable review requests on an evaluation form:

1. Log in to Amazon Connect with a user account that has the following security profile permission: **Analytics and Optimization** - **Evaluation forms - manage form definitions** - **Create**

1. Choose **Analytics and optimization**, then choose **Evaluation forms**.

1. Open an existing form by clicking the hyperlink for the **Last version**, or create a new evaluation form.

1. Click the **Additional settings** tab.

1. Click **Allow review requests**.

1. You can specify the time window during which a review can be requested on an evaluation. The time window is measured from the original submission of the evaluation.

1. You can also choose one or more recipients who will be notified by email when a review is requested. The email has a link to the contact with the evaluation for which a review is requested. Note that for users to receive emails on a SAML-authenticated instance, a secondary email must be provided in the user's profile in Amazon Connect.

1. Once you **Activate** the form, subsequent evaluations performed using the form support review requests.

![\[Additional settings tab showing Allow review requests option\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-enable.png)


## Who can request reviews of an evaluation?


To request reviews of evaluations, users need the **Evaluation forms - request evaluation reviews - Create and View** permissions, in addition to access to the underlying contacts and evaluations. Permission to request reviews can be granted to agents, or to their supervisors, who can request evaluation reviews from the quality management team on behalf of their agents. Supervisors granted the **request evaluation reviews** permission can request a review on any evaluation that they can access.

Users granted the **Evaluation forms - request evaluation reviews - Delete** permission can delete a request before the review has started.

## Who can review an evaluation?


Users with the **Evaluation forms - review evaluations - Create and View** permissions can perform reviews. If certain personas need to be consulted on reviews but should not be granted permission to perform reviews themselves, you can grant them **Evaluation forms - review evaluations - View** permissions.

## Requesting a review


1. On the **Contact details** page, open a completed evaluation for which you want to request a review.

1. Select **request a review** at the bottom of the evaluation.

1. Explain why you are requesting a review (you cannot leave this blank), then click **confirm**.

1. The evaluation shows under **Review requested** on the evaluations pane.

1. You can cancel a request if the review has not yet started.

![\[Request a review button on evaluation\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-request.png)


![\[Request review dialog with explanation field\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-requestcomment.png)


![\[Evaluation showing Review requested status\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-requested.png)


## Searching for pending reviews


As mentioned above, you can configure in the evaluation form who is automatically notified by email when a review is requested. These notification emails contain links to contacts with evaluations for which a review is requested. Additionally, users with the appropriate permissions can search for contacts with evaluations for which a review is requested, or which are already under review:

1. Log in to Amazon Connect with a user account that has [permissions to access contact records](contact-search.md#required-permissions-search-contacts) and the **Evaluation forms - perform evaluations** permission.

1. On the navigation bar, choose **Analytics and optimization**, **Contact search**.

1. Use the time range filter to search for contacts from the relevant time window, for example, the last month.

1. Use the evaluation status filter with the value **Review requested** to search for contacts with evaluations where a review has been requested but not yet picked up for review.

1. Use the evaluation status filter with the value **Under review** to search for contacts with evaluations that have been picked up for review.

![\[Contact search with evaluation status filter\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-searchrequested.png)


## Starting and completing reviews


1. Open the evaluations pane on the **Contact details** page.

1. Choose an evaluation listed under **Review requested**.

1. Choose **Start review**.

1. The original evaluation is listed under **Under review** and can be viewed by choosing it.

1. The in-progress review is listed under **Evaluation reviews**. Users with the **Evaluation forms - review evaluations - Create** permission can edit the evaluation, for example by changing answers or amending the notes. You can **Save** your review at any time, and choose **Resolve review** to finalize it.

1. Resolving the review sends an automated email notification to the user who requested the review.

![\[Evaluation review in progress\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-review-view.png)


# Search for contacts using evaluation forms in Amazon Connect
Search evaluation forms

1. Log in to Amazon Connect with a user account that has [permissions to access contact records](contact-search.md#required-permissions-search-contacts) and the **Evaluation forms - perform evaluations** permission. 

1. In Amazon Connect choose **Analytics and optimization**, **Contact search**. 

1. Use the filters on the page to narrow your search. For date, you can search up to 8 weeks at a time.  
![\[The search filters for evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluationforms-searchfilters1.png)

# Use a reference ID to represent questions in a report about contact center agent performance
Use reference ID for questions

A *reference ID* is a token that appears in the JSON output file. It represents a specific question. When building reports, you can use it in place of the exact wording of a question. 

For example, a question might be "Did agents stick to the script?" but the next day the question might be changed to "Was there good script adherence?" Regardless of how the question is worded, the reference ID always stays the same.
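This stability is what makes reference IDs useful as report keys. The following sketch, using hypothetical export snippets and the reference ID from the example above, shows a score trend surviving a question rewording:

```python
# Hypothetical exports of the same form before and after a question was
# reworded. The questionRefId stays constant, so a report keyed on it
# keeps working even though questionText changed.
export_v1 = {"questions": [
    {"questionRefId": "q73bc5b9d",
     "questionText": "Did agents stick to the script?",
     "score": {"percentage": 80}},
]}
export_v2 = {"questions": [
    {"questionRefId": "q73bc5b9d",
     "questionText": "Was there good script adherence?",
     "score": {"percentage": 90}},
]}

def scores_by_ref_id(export):
    """Key each question's score by its stable reference ID."""
    return {q["questionRefId"]: q["score"]["percentage"]
            for q in export["questions"]}

# The same key retrieves the score from both versions despite the rewording.
trend = [scores_by_ref_id(e)["q73bc5b9d"] for e in (export_v1, export_v2)]
print(trend)  # [80, 90]
```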

# Evaluation metrics in Amazon Connect
Evaluation metrics

You can view the following metrics on the [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md). These metrics enable you to view aggregated agent performance, and get insights across agent cohorts and over time. 

## Average evaluation score


This metric provides the average evaluation score for all submitted evaluations. Evaluations for calibrations are excluded from this metric.

The average evaluation score corresponds to the grouping. For example, if the grouping contains evaluation questions, then the average evaluation score is provided per question. If the grouping does not contain an evaluation form, section, or question, then the average evaluation score is at the evaluation form level.

**Metric type**: Percent

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_EVALUATION_SCORE`

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Get the sum of evaluation scores (for forms, sections, or questions, depending on the grouping).
+ Get total number of evaluations where scoring has been completed and recorded.
+ Calculate average score: (sum of scores) / (total evaluations).
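The steps above can be checked with a few hypothetical numbers:

```python
# A quick check of the documented calculation with hypothetical scores
# from three submitted (non-calibration) evaluations.
submitted_scores = [80.0, 90.0, 100.0]        # step 1: sum of evaluation scores
total_evaluations = len(submitted_scores)     # step 2: completed evaluations
average_evaluation_score = sum(submitted_scores) / total_evaluations  # step 3
print(average_evaluation_score)  # 90.0
```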

**Notes**:
+ Excludes calibration evaluations. 
+ Score granularity depends on grouping level. 
+ Returns percentage value. 
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups. 
+ Based on submitted evaluation timestamp. 
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.

## Average weighted evaluation score


This metric provides the average weighted evaluation score for all submitted evaluations. Evaluations for calibrations are excluded from this metric.

The weights are per the evaluation form version that was used to perform the evaluation. 

The average evaluation score corresponds to the grouping. For example, if the grouping contains evaluation questions, then the average evaluation score is provided per question. If the grouping does not contain an evaluation form, section, or question, then the average evaluation score is at the evaluation form level.

**Metric type**: Percent

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `AVG_WEIGHTED_EVALUATION_SCORE`

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Get sum of weighted scores using form version weights.
+ Get total number of evaluations where scoring has been completed and recorded.
+ Calculate weighted average: (sum of weighted scores) / (total evaluations).
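The weighted steps can be sketched the same way; the scores and weights below are hypothetical, with weights taken from the form version as described above:

```python
# Hypothetical data: each evaluation is a list of (question score as a
# fraction, weight from the evaluation form version) tuples.
evaluations = [
    [(1.0, 40), (0.5, 60)],   # evaluation 1: 40 + 30 = 70
    [(1.0, 40), (1.0, 60)],   # evaluation 2: 40 + 60 = 100
]

def weighted_score(questions):
    """Step 1: sum of weighted scores using form version weights."""
    return sum(score * weight for score, weight in questions)

# Steps 2 and 3: divide by the number of completed evaluations.
avg = sum(weighted_score(q) for q in evaluations) / len(evaluations)
print(avg)  # 85.0
```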

**Notes**:
+ Uses evaluation form version-specific weights. 
+ Excludes calibration evaluations. 
+ Score granularity depends on grouping level. 
+ Returns percentage value. 
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups. 
+ Based on submitted evaluation timestamp. 
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.

## Automatic fails percent


This metric provides the percentage of performance evaluations with automatic fails. Evaluations for calibrations are excluded from this metric. 

If a question is marked as an automatic fail, then the parent section and the form are also marked as automatic fails. 

**Metric type**: Percent

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Get total automatic fails count.
+ Get total evaluations performed.
+ Calculate percentage: (automatic fails / total evaluations) x 100.

**Notes**:
+ Automatic fail cascades up (question → section → form).
+ Excludes calibration evaluations.
+ Returns percentage value.
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups.
+ Based on submitted evaluation timestamp.
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.
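The cascade described in the notes (question → section → form) can be sketched as follows; the nested form structure here is hypothetical and mirrors the sections/questions shape of the evaluation export:

```python
# One automatic-fail answer fails its section, every ancestor section,
# and therefore the form.
questions = [
    {"questionRefId": "q1", "sectionRefId": "s1", "automaticFail": False},
    {"questionRefId": "q2", "sectionRefId": "s2", "automaticFail": True},
]
sections = [
    {"sectionRefId": "s1"},
    {"sectionRefId": "s2", "parentSectionRefId": "s1"},  # s2 nested in s1
]

failed_sections = set()
for q in (q for q in questions if q["automaticFail"]):
    ref = q["sectionRefId"]
    # Walk up parent sections so the fail propagates to the top.
    while ref:
        failed_sections.add(ref)
        ref = next((s.get("parentSectionRefId") for s in sections
                    if s["sectionRefId"] == ref), None)

form_failed = bool(failed_sections)
print(sorted(failed_sections), form_failed)  # ['s1', 's2'] True
```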

## Evaluations performed


This metric provides the number of evaluations performed with evaluation status as "Submitted." Evaluations for calibrations are excluded from this metric.

**Metric type**: Integer

**Metric category**: Contact evaluation driven metric

**How to access using the Amazon Connect API**: 
+ [GetMetricDataV2](https://docs.aws.amazon.com/connect/latest/APIReference/API_GetMetricDataV2.html) API metric identifier: `EVALUATIONS_PERFORMED`

**How to access using the Amazon Connect admin website**: 
+ Dashboard: [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md)

**Calculation logic**:
+ Check that an evaluationId is present.
+ Verify that the itemType is form.
+ Count submitted evaluations (excluding calibrations).

**Notes**:
+ Counts only submitted evaluations.
+ Excludes calibration evaluations.
+ Returns integer count.
+ Requires at least one filter from: queues, routing profiles, agents, or user hierarchy groups.
+ Based on submitted evaluation timestamp.
+ Data for this metric is available starting from January 10, 2025 0:00:00 GMT.
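The following sketch builds a `GetMetricDataV2` request for this metric. The ARN and queue ID are placeholders, and the request includes one of the required filters listed in the notes above; to execute it you would pass these parameters to `boto3.client("connect").get_metric_data_v2(**params)`.

```python
from datetime import datetime, timezone

# Placeholder ARN and queue ID -- substitute values from your instance.
params = {
    "ResourceArn": "arn:aws:connect:us-west-2:123456789012:instance/EXAMPLE",
    "StartTime": datetime(2025, 1, 10, tzinfo=timezone.utc),
    "EndTime": datetime(2025, 1, 17, tzinfo=timezone.utc),
    # At least one filter (queues, routing profiles, agents, or user
    # hierarchy groups) is required for this metric.
    "Filters": [{"FilterKey": "QUEUE", "FilterValues": ["EXAMPLE-QUEUE-ID"]}],
    "Metrics": [{"Name": "EVALUATIONS_PERFORMED"}],
}
print(params["Metrics"][0]["Name"])  # EVALUATIONS_PERFORMED
```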

# Agent evaluation form output in Amazon Connect
Example evaluation form output

This section shows the export output path for evaluations, provides an example of evaluation form scores, and describes the evaluation form metadata.

**Topics**
+ [Verify your S3 bucket](#verify-evaluation-s3bucket)
+ [Example output locations](#example-evaluationform-output-locations)
+ [Known issue](#release-note-evaluation-output)
+ [Example scores](#example-evaluation-output-file)
+ [Evaluation form metadata definitions](#evaluation-form-metadata)
+ [Sample exported evaluation](#exported-evaluation)

## Verify your S3 bucket
Verify your S3 bucket

When you enable **Contact evaluations** in the Amazon Connect console, you are prompted to create or choose an S3 bucket to store the evaluations. To verify the name of the bucket, go to your instance alias, choose **Data storage**, **Contact evaluations**, then **Edit**.

## Example output locations
Example output locations

Following is the output file path for evaluation forms:
+ *contact\-evaluations\-S3\-bucket*/Evaluations/*YYYY/MM/DD/hh:mm:ss.sTZD*-*evaluation\-id*.json

For example:

`amazon-connect-s3/Evaluations/2022/04/14/05:04:20.869Z-11111111-2222-3333-4444-555555555555.json`
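A consumer listing the bucket can recover the date and evaluation ID from the key. This minimal sketch splits the example key above into its documented parts:

```python
# The example export key from this page.
key = ("amazon-connect-s3/Evaluations/2022/04/14/"
      "05:04:20.869Z-11111111-2222-3333-4444-555555555555.json")

bucket, folder, year, month, day, filename = key.split("/")
# The file name is "<hh:mm:ss.sTZD>-<evaluation id>.json"; the timestamp
# ends in "Z", so split once on "Z-" to separate it from the UUID.
time_part, rest = filename.split("Z-", 1)
evaluation_id = rest.removesuffix(".json")   # requires Python 3.9+

print(year, month, day, evaluation_id)
# 2022 04 14 11111111-2222-3333-4444-555555555555
```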

## Known issue: Two output files for the same evaluation
Known issue

Contact Lens generates two output files for the same evaluation form.
+ One file is written to the new default S3 path. You can configure the path in the AWS console.
+ Another file, which will be deprecated, is written to a different, previous S3 path. You can disregard this file.

  The previous S3 path looks like the following:
+ *s3\-bucket*/Evaluations/contact\-*contactId*/evaluation\-*evaluationId*/YYYY-MM-DDThh:mm:ss.sTZD.json

## Example scores
Example scores

The following example shows a typical score.

```
{
  "schemaVersion": "3.5",
  "evaluationId": "fb90de35-4507-479a-8b57-970290fd5c2c",
  "metadata": {
    "contactId": "badd4896-75f7-43b3-bee6-c617ed3d04cb",
    "accountId": "874551140838",
    "instanceId": "8f753c94-9cd2-4f16-85eb-945f7f0d559a",
    "agentId": "286bcec0-e722-4166-865f-84db80252218",
    "evaluationDefinitionTitle": "Compliance Evaluation Form",
    "evaluator": "jane",
    "evaluationDefinitionId": "15d8fbf1-b4b2-4ace-869b-82714e2f6e3e",
    "evaluationDefinitionVersion": 2,
    "evaluationStartTimestamp": "2025-11-14T17:57:08.649Z",
    "evaluationSubmitTimestamp": "2025-11-14T17:59:29.052Z",
    "score": {
      "percentage": 100
    },
    "creator": "jane.doe@acme.com",
    "autoEvaluated": false,
    "resubmitted": false,
    "evaluationSource": "ASSISTED_BY_AUTOMATION",
    "evaluationType": "CONTACT_EVALUATION",
    "evaluationAcknowledgerComment": "The Acknowledgment comment",
    "evaluationAcknowledgedTimestamp": "2025-12-22T05:20:39.297Z",
    "evaluationAcknowledgedByUserName": "john",
    "evaluationAcknowledgedByUserId": "286bcec0-e722-4166-865f-84db80252218"
  },
  "sections": [
    {
      "sectionRefId": "s1a1b58d6",
      "sectionTitle": "The title of the section",
      "notes": "Section note",
      "score": {
        "percentage": 100
      }
    },
    {
      "sectionRefId": "s46661c49",
      "sectionTitle": "The title of the subsection",
      "parentSectionRefId": "s1a1b58d6",
      "score": {
        "percentage": 100
      }
    }
  ],
  "questions": [
    {
      "questionRefId": "q570b206a",
      "sectionRefId": "s46661c49",
      "questionType": "NUMERIC",
      "questionText": "How do you rate the contact between 1 and 10?",
      "answer": {
        "value": "",
        "notes": "Add more information here",
        "metadata": {
          "notApplicable": true
        }
      },
      "score": {
        "notApplicable": true
      }
    },
    {
      "questionRefId": "q73bc5b9d",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent introduce themselves?",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "o6999aa94",
            "selected": true
          },
          {
            "valueText": "No",
            "valueRefId": "o284e4d9e",
            "selected": false
          },
          {
            "valueText": "Maybe",
            "valueRefId": "o1b2f0a14",
            "selected": false
          }
        ],
        "notes": "Add more information here",
        "automation": {
          "status": "SYSTEM_ANSWER",
          "systemSuggestedValue": "Yes"
        },
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 100
      }
    },
    {
      "questionRefId": "h89bc7a9t",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent offer a promotion?",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "p7888bb85",
            "selected": false
          },
          {
            "valueText": "No",
            "valueRefId": "p395f5e8f",
            "selected": true
          },
          {
            "valueText": "Maybe",
            "valueRefId": "p2c3g1b25",
            "selected": false
          }
        ],
        "notes": "Add more information here",
        "assistedSuggestion": {
          "value": "No. A promotion was not offered by the agent."
        },
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 100
      }
    },
    {
      "questionRefId": "qc2effc9d",
      "sectionRefId": "s46661c49",
      "questionType": "TEXT",
      "questionText": "Describe the outcome.",
      "answer": {
        "value": "Example answer text",
        "notes": "Add more information here",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 50
      }
    }
  ]
}
```
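A reporting job can consume this output with ordinary JSON parsing. The following sketch uses a trimmed version of the example above to collect each question's selected answer and the overall score:

```python
import json

# A trimmed version of the example output above.
export = json.loads("""
{
  "metadata": {"score": {"percentage": 100}},
  "questions": [
    {"questionRefId": "q73bc5b9d",
     "questionType": "SINGLESELECT",
     "answer": {"values": [
        {"valueText": "Yes", "valueRefId": "o6999aa94", "selected": true},
        {"valueText": "No", "valueRefId": "o284e4d9e", "selected": false}]},
     "score": {"percentage": 100}},
    {"questionRefId": "q570b206a",
     "questionType": "NUMERIC",
     "answer": {"value": "", "metadata": {"notApplicable": true}},
     "score": {"notApplicable": true}}
  ]
}
""")

def selected_answer(question):
    """Return the selected option text, or None for non-select questions."""
    values = question["answer"].get("values", [])
    return next((v["valueText"] for v in values if v["selected"]), None)

answers = {q["questionRefId"]: selected_answer(q) for q in export["questions"]}
print(export["metadata"]["score"]["percentage"], answers)
# 100 {'q73bc5b9d': 'Yes', 'q570b206a': None}
```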

## Evaluation form metadata definitions
Evaluation form metadata definitions

The following list describes the fields in the evaluation form.

**evaluationId**  
A unique identifier for the contact evaluation  
*Type* – String  
*Length constraints* – Minimum length of 1. Maximum length of 500

**metadata**    
**contactId**  
The identifier of the contact in this instance of Amazon Connect.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 256  
**accountId**  
The identifier of AWS account running the instance of Amazon Connect.  
*Type* – String  
*Length constraints* – 12 digits  
*Pattern* – `^\d{12}$`  
**instanceId**  
The identifier of the Amazon Connect instance. You can [find the instance ID](find-instance-arn.md) in the Amazon Resource Name (ARN) of the instance.  
*Length constraints* – Minimum length of 1, maximum length of 100  
**agentId**  
The identifier of the agent who performed the contact.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluationDefinitionTitle**  
The title of the evaluation form.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 128  
**evaluator**  
Name of the user who last updated the evaluation.  
*Type* – String  
**evaluationDefinitionId**  
The unique identifier for the evaluation form.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluationDefinitionVersion**  
The version of the evaluation form.  
*Type* – Integer  
*Valid range* – Minimum value of 1  
**evaluationStartTimestamp**  
The evaluation's creation timestamp.  
*Type* – Timestamp  
*Example* – 2025-11-14T17:57:08.649Z  
**evaluationSubmitTimestamp**  
The evaluation's submission timestamp.  
*Type* – Timestamp  
*Example* – 2025-11-14T17:59:29.052Z  
**score**  
The evaluation's score.  
**creator**  
The entity that created the evaluation the very first time (as opposed to "evaluator", which represents the entity that last submitted the evaluation). When the call is made from the Amazon Connect admin website, this field contains the username. When the call comes from the API, it contains the ARN of the caller.  
*Type* – String  
**autoEvaluated**  
Indicates whether the evaluation was submitted using fully automated evaluations.  
*Type* – Boolean  
**resubmitted**  
Indicates whether the evaluation has been resubmitted (edited and submitted again).  
*Type* – Boolean  
**evaluationSource**  
The type of evaluation answer source.  
*Type* – String  
Valid values:  
+ `ASSISTED_BY_AUTOMATION` - indicates that [question automation](create-evaluation-forms.md#step-automate) was used to answer some of the questions.
+ `MANUAL` - indicates that the evaluation was performed manually.
+ `AUTOMATED` - indicates that the evaluation was submitted using fully automated evaluations (see "autoEvaluated" field).  
**evaluationType**  
The type of evaluation.  
*Type* – String  
Valid values:  
+ `CONTACT_EVALUATION` - evaluation of a contact.  
**calibrationSessionId**  
The identifier of the calibration session associated with this evaluation.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluatedParticipantId**  
The identifier of the participant being evaluated.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 256  
**evaluatedParticipantRole**  
The role of the participant being evaluated.  
*Type* – String  
Valid values:  
+ `AGENT` - the agent participant.
+ `CUSTOMER` - the customer participant.
+ `SYSTEM` - the system participant.  
**acknowledgerComment**  
Comment left by the user who acknowledged the evaluation.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 3072  
**evaluationAcknowledgedByUserId**  
The identifier of the person who acknowledged the evaluation.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 500  
**evaluationAcknowledgedByUserName**  
The name of the person who acknowledged the evaluation.  
*Type* – String  
**evaluationAcknowledgedTimestamp**  
The evaluation's acknowledgment timestamp.   
*Type* – Timestamp  
*Example* – 2025-12-24T15:45:56.662Z

**sections**  
Array of the sections of the evaluation.    
**sectionRefId**  
The identifier of the section. An identifier must be unique within the evaluation form.   
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40  
**parentSectionRefId**  
The identifier of the parent section.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40  
**sectionTitle**  
The title of the section.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 128  
**notes**  
The notes left for the section.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 3072  
Notes have the following limits:  
+ Individual notes have a limit of 3072 characters. 
+ The combined notes in an evaluation have a limit of *N* x 1024 characters, where *N* is the number of questions in the evaluation.  
**score**  
The score for the section.    
**percentage**  
The score percentage for an item in a contact evaluation.  
*Type* – Double  
*Valid range* – Minimum value of 0, maximum value of 100  
**automaticFail**  
The flag that marks the item as automatic fail. If the item or a child item gets an automatic fail answer, this flag will be true.  
*Type* – Boolean  
**notApplicable**  
The flag that marks the item as not applicable for scoring. If this flag is true, the item is excluded from scoring calculations.  
*Type* – Boolean

**questions**  
Array of the questions of the evaluation.    
**questionRefId**  
The identifier of the question. An identifier must be unique within the evaluation form.  
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40.  
**sectionRefId**  
The identifier of the parent section.   
*Type* – String  
*Length constraints* – Minimum length of 1, maximum length of 40  
**questionType**  
The type of the question.  
*Type* – String  
*Valid values* – `TEXT | SINGLESELECT | NUMERIC`  
**questionText**  
The title of the question.  
*Type* – String  
*Length constraints* – Minimum length of 0, maximum length of 350  
**answer**  
The answer for the question.    
**value**  
The string/numeric value for an answer in a contact evaluation.  
*Type* – String/Double  
*Length constraints* – String: Minimum length of 0, maximum length of 128  
**notes**  
The notes left for the question.  
*Type* – String  
*Length constraints* – Minimum length of 0. Maximum length of 3072  
Notes have two character limits. Individual notes have a limit of 3072 characters. The combined notes in an evaluation have a limit of N x 1024 characters, where N is the number of questions in the evaluation.  
**metadata**  
**notApplicable**  
Flag that marks the question as not applicable.  
*Type* – Boolean  
**assistedSuggestion**  
Answer suggested by the [generative AI](generative-ai-performance-evaluations.md).  
*Type* – String  
**automation**    
**status**  
The status of the automation answer.  
*Type* – String  
*Valid values* – `UNAVAILABLE | SYSTEM_ANSWER | OVERRIDDEN_ANSWER`  
**systemSuggestedValue**  
The string or numeric value for an automation answer in a contact evaluation.  
*Type* – String or Double  
*Length constraints* – String: Minimum length of 0, maximum length of 128  
**score**  
The [score](#score) for the question.  
+ automaticFail - The flag that marks the item as critical for the form. If the item or a child item gets an automatic fail answer, this flag is true and the full form also fails (it is marked with a zero score).

  *Type* – Boolean
+ notApplicable - The flag that marks the item as not applicable for scoring. The item is excluded from scoring calculations.

  *Type* – Boolean

## Sample exported evaluation


The following example shows a typical exported evaluation.

```
{
  "schemaVersion": "3.5",
  "evaluationId": "fb90de35-4507-479a-8b57-970290fd5c2c",
  "metadata": {
    "accountId": "874551140838",
    "instanceId": "8f753c94-9cd2-4f16-85eb-945f7f0d559a",
    "contactId": "badd4896-75f7-43b3-bee6-c617ed3d04cb",
    "agentId": "286bcec0-e722-4166-865f-84db80252218",
    "evaluationDefinitionTitle": "Legal Compliance Evaluation Form",
    "evaluator": "jane",
    "evaluationDefinitionId": "15d8fbf1-b4b2-4ace-869b-82714e2f6e3e",
    "evaluationDefinitionVersion": 2,
    "evaluationStartTimestamp": "2022-11-14T17:57:08.649Z",
    "evaluationSubmitTimestamp": "2022-11-14T17:59:29.052Z",
    "score": {
      "percentage": 85
    },
    "autoEvaluated": false,
    "creator": "john",
    "resubmitted": false,
    "evaluationSource": "ASSISTED_BY_AUTOMATION",
    "evaluationType": "CONTACT_EVALUATION",
    "calibrationSessionId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "evaluationAcknowledgedByUserId": "286bcec0-e722-4166-865f-84db80252218",
    "evaluationAcknowledgedByUserName": "mike",
    "evaluationAcknowledgedTimestamp": "2022-12-24T15:45:56.662Z",
    "evaluationAcknowledgerComment": "Manager walked through the evaluation during coaching",
    "evaluatedParticipantId": "participant-123",
    "evaluatedParticipantRole": "AGENT"
  },
  "sections": [
    {
      "sectionRefId": "s1a1b58d6",
      "sectionTitle": "Communication Skills",
      "notes": "Overall communication was professional",
      "score": {
        "percentage": 90
      }
    },
    {
      "sectionRefId": "s46661c49",
      "sectionTitle": "Greeting and Introduction",
      "parentSectionRefId": "s1a1b58d6",
      "notes": "Agent followed proper greeting protocol",
      "score": {
        "percentage": 100
      }
    }
  ],
  "questions": [
    {
      "questionRefId": "q570b206a",
      "sectionRefId": "s46661c49",
      "questionType": "NUMERIC",
      "questionText": "How many times did agent interrupt the customer",
      "answer": {
        "value": "2",
        "notes": "Interruptions were minimal and appropriate",
        "metadata": {
          "notApplicable": false,
          "automation": {
            "status": "OVERRIDDEN_ANSWER",
            "systemSuggestedValue": "3"
          }
        }
      },
      "score": {
        "percentage": 80
      }
    },
    {
      "questionRefId": "q73bc5b9d",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent introduce themselves?",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "o6999aa94",
            "selected": true
          },
          {
            "valueText": "No",
            "valueRefId": "o284e4d9e",
            "selected": false
          },
          {
            "valueText": "N/A",
            "valueRefId": "system_default_null_value",
            "selected": false
          }
        ],
        "notes": "Agent provided clear introduction with name and department",
        "metadata": {
          "notApplicable": false,
          "assistedSuggestion": {
            "value": "The agent introduced themselves at the beginning of the call."
          }
        }
      },
      "score": {
        "percentage": 100
      }
    },
    {
      "questionRefId": "h89bc7a9t",
      "sectionRefId": "s46661c49",
      "questionType": "SINGLESELECT",
      "questionText": "Did the agent ask for consent to perform a credit check",
      "answer": {
        "values": [
          {
            "valueText": "Yes",
            "valueRefId": "o6999aa94",
            "selected": false
          },
          {
            "valueText": "No",
            "valueRefId": "o284e4d9e",
            "selected": true
          },
          {
            "valueText": "N/A",
            "valueRefId": "system_default_null_value",
            "selected": false
          }
        ],
        "notes": "Agent failed to obtain consent before credit check",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "percentage": 0,
        "automaticFail": true
      }
    },
    {
      "questionRefId": "qc2effc9d",
      "sectionRefId": "s46661c49",
      "questionType": "MULTISELECT",
      "questionText": "What topics were discussed during the call",
      "answer": {
        "values": [
          {
            "valueText": "Account balance",
            "valueRefId": "topic_balance",
            "selected": true
          },
          {
            "valueText": "Payment options",
            "valueRefId": "topic_payment",
            "selected": true
          },
          {
            "valueText": "Account closure",
            "valueRefId": "topic_closure",
            "selected": false
          }
        ],
        "notes": "Customer inquired about balance and payment plans",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "notApplicable": true
      }
    },
    {
      "questionRefId": "q8a9b0c1d",
      "sectionRefId": "s46661c49",
      "questionType": "TEXT",
      "questionText": "What was your general impression about the customer's satisfaction",
      "answer": {
        "value": "The customer seemed satisfied with the resolution and thanked the agent",
        "notes": "Positive customer sentiment throughout the call",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "notApplicable": true
      }
    },
    {
      "questionRefId": "q2b3c4d5e",
      "sectionRefId": "s46661c49",
      "questionType": "DATETIME",
      "questionText": "What time was the follow-up scheduled",
      "answer": {
        "value": "2024-04-16T14:30:00+01:00",
        "notes": "Follow-up appointment confirmed with customer",
        "metadata": {
          "notApplicable": false
        }
      },
      "score": {
        "notApplicable": true
      }
    }
  ]
}
```

# Monitor performance evaluation failure events
Monitor evaluation failure events

You can monitor failures of automated evaluations, as well as failed S3 exports of contact evaluations, using EventBridge and CloudWatch. You can use these events to investigate and fix failures. The following guide walks through creating custom EventBridge rules to monitor performance evaluation failure events.

## Step-by-step guide


This guide shows how to create an EventBridge rule that logs Amazon Connect failed automated evaluation submission events and failed S3 exports of contact evaluations in your AWS console.

1. Log in to your AWS account and navigate to the EventBridge console. Choose **Rules** under the **Buses** section.  
![\[The Rules tab under the Buses section in the EventBridge console.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-rules-tab.png)

1. Choose **Create rule** with the default Event bus selected.  
![\[The Create rule button with the default Event bus selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-create-rule.png)

1. Give the rule a name and select **Rule with an event pattern** for the Rule type. Choose **Next**.  
![\[The rule name and Rule with an event pattern option selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-rule-name.png)

1. With **AWS events or EventBridge partner events** selected under **Events**, select the **Use pattern form** option under **Event pattern**. This is where you will define the pattern to match for triggering the rule.

1. Type and select **Amazon Connect** under the **AWS service** dropdown to narrow down the event types. Select the desired event type in the dropdown below. Choose **Next** once the pattern is set up.

   To subscribe to EventBridge event types, create a custom EventBridge rule that matches the following:
   + `"source"` = `"aws.connect"`
   + `"detail-type"` can be one of the following:
     + `"Contact Lens Automated Evaluation Submission Failed"`
     + `"Contact Lens Evaluation Export Failed"`  
![\[The event pattern with Amazon Connect selected as the AWS service.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-eventbridge-event-pattern.png)

1. The next step allows you to configure the targets that process or receive the matched events. For simplicity, select the **CloudWatch log group** option under **Select a target** and choose a log group.

1. Choose **Next** and advance to the final **Review and create** step. Choose **Create rule** once more to complete the rule creation process.

1. Now, if the rule is in the **Enabled** state and a matching event occurs, corresponding logs should show up in the configured CloudWatch log group with the relevant IDs under the metadata section and the failure reason under the data section.  
![\[CloudWatch log group showing matched EventBridge events.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-cloudwatch-log-group.png)  
![\[CloudWatch log detail showing metadata and failure reason.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/perf-eval-cloudwatch-log-detail.png)
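The selections made in step 5 are equivalent to the following event pattern, which you can also enter directly if you use the JSON editor instead of the pattern form:

```
{
  "source": ["aws.connect"],
  "detail-type": [
    "Contact Lens Automated Evaluation Submission Failed",
    "Contact Lens Evaluation Export Failed"
  ]
}
```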

## Example EventBridge payload


The following is an example EventBridge payload when the rule is matched:

```
{  
  "version": "0",  
  "id": "00005435-d12d-c93b-d9d2-b64cba85fbb6",
  "detail-type": "Contact Lens Automated Evaluation Submission Failed",  
  "source": "aws.connect",  
  "account": "Your AWS account ID",  
  "time": "2025-10-02T10:34:56Z",  
  "region": "us-west-2",
  "resources": [],  
  "detail": {  
    "version": "1.0.0",  
    "metadata": {  
      "contactId": "4266f8e9-8420-4ee7-96cd-515d2edae1f2",
      "instanceId": "d9b0b09d-7dab-47e5-9f82-d6787fbc068c",
      "formId": "8b1365bd-1415-41a9-a491-af226e1bda4e"
    },  
    "data": {  
      "reasonCode": "ANALYSIS_FILE_ERROR",
      "message": "Automated contact evaluation submission failed due to an error when searching/retrieving/parsing the analysis file."
    }  
  }  
}
```
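A rule target (for example, a Lambda function instead of a CloudWatch log group) would receive this payload as its event. The following minimal sketch pulls the IDs from the metadata section and the failure reason from the data section, using a trimmed copy of the payload above:

```python
import json

# A trimmed copy of the example payload above.
event = json.loads("""
{
  "detail-type": "Contact Lens Automated Evaluation Submission Failed",
  "source": "aws.connect",
  "detail": {
    "metadata": {
      "contactId": "4266f8e9-8420-4ee7-96cd-515d2edae1f2",
      "instanceId": "d9b0b09d-7dab-47e5-9f82-d6787fbc068c",
      "formId": "8b1365bd-1415-41a9-a491-af226e1bda4e"
    },
    "data": {
      "reasonCode": "ANALYSIS_FILE_ERROR",
      "message": "Automated contact evaluation submission failed."
    }
  }
}
""")

def summarize_failure(event):
    """Return a one-line summary suitable for an alert or a log line."""
    meta = event["detail"]["metadata"]
    data = event["detail"]["data"]
    return (f"{event['detail-type']}: contact={meta['contactId']} "
            f"form={meta['formId']} reason={data['reasonCode']}")

print(summarize_failure(event))
```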

## Common errors


The following errors may occur when the system eventually fails to process evaluations after multiple retry attempts.

### Automated evaluation submission errors



| Error | Error message | 
| --- | --- | 
| AUTOMATED\_SUBMISSION\_FAILED | Automated contact evaluation submission failed because some of the questions could not be answered. Please verify the evaluation form and/or the Amazon Connect rule configurations. | 
| ANALYSIS\_FILE\_ERROR | Automated contact evaluation submission failed due to an error when searching/retrieving/parsing the analysis file. | 
| INTERNAL\_SERVER\_ERROR | Automated contact evaluation submission failed due to an internal server error. Please expect delayed processing. | 
| QUOTA\_EXCEEDED\_ERROR | Automated contact evaluation submission failed because the remaining quota for using Gen AI to automatically answer evaluation questions for the contact is insufficient. | 

### Evaluation S3 export errors



| Error | Error message | 
| --- | --- | 
| S3\_BUCKET\_ACCESS\_DENIED | Contact evaluation JSON export failed due to insufficient permissions. | 
| S3\_STORAGE\_NOT\_CONFIGURED | The export S3 bucket is not configured for your instance. | 
| INTERNAL\_SERVER\_ERROR | Contact evaluation JSON export failed due to an internal server error. Please expect delayed delivery of the export file. | 
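
Based on the error messages in the two tables above, internal server errors are transient (delayed processing or delivery is expected), while the other codes point at configuration or quota problems that need admin attention. The following triage helper sketches that classification; the split into transient versus actionable is an inference from the messages, not an official contract:

```python
# Reason codes from the tables above. The transient/actionable split
# is an assumption based on the wording of the error messages.
TRANSIENT = {"INTERNAL_SERVER_ERROR"}
NEEDS_ACTION = {
    "AUTOMATED_SUBMISSION_FAILED",  # verify the evaluation form / rule config
    "ANALYSIS_FILE_ERROR",          # analysis file missing or unreadable
    "QUOTA_EXCEEDED_ERROR",         # Gen AI evaluation quota exhausted
    "S3_BUCKET_ACCESS_DENIED",      # fix export bucket permissions
    "S3_STORAGE_NOT_CONFIGURED",    # configure the export S3 bucket
}

def triage(reason_code: str) -> str:
    """Classify a failure reason code for alerting purposes."""
    if reason_code in TRANSIENT:
        return "transient: expect delayed processing, no action needed"
    if reason_code in NEEDS_ACTION:
        return "needs admin action: check configuration or quota"
    return "unknown reason code"

print(triage("QUOTA_EXCEEDED_ERROR"))
```

A Lambda target for the EventBridge rule could use a function like this to decide whether to page an on-call admin or simply log the event.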

# Calibration sessions for performance evaluations


Amazon Connect Contact Lens enables you to conduct calibration sessions that drive consistency and accuracy in how managers evaluate agent performance, so that agents receive consistent feedback. During a calibration, multiple managers evaluate the same contact using the same evaluation form. You can then review differences between the managers' evaluations to align them on evaluation best practices and to identify opportunities to improve the evaluation form, for example, by rephrasing an evaluation question to be more specific so that managers answer it consistently. You can also compare managers' answers with those of a designated expert, typically the quality manager conducting the calibration session, to measure and improve manager accuracy in evaluating agent performance.

## Permissions needed for calibrations


You need the following permissions for calibrations:
+ **Creating calibration sessions:** Add the permission **Evaluation forms - manage calibration sessions** to the security profiles of the set of users that should be permitted to conduct calibration sessions for performance evaluations.
+ **Participating in a calibration session:** Any user who has the permission to perform evaluations, namely **Evaluation forms - perform evaluations**, can participate in a calibration session if they are added as one of the participants.

In addition, for both sets of users, you also need permissions to search and view contacts. For more information, see [Manage who can search for contacts and access detailed information](contact-search.md#required-permissions-search-contacts).

## Create a calibration session


**To create a calibration session**

1. Log in to Amazon Connect with a user account that has the necessary permissions in its security profile.

1. On the left navigation menu, choose **Analytics and optimization**, **Contact search**.

1. Search for a contact on which you want to perform a calibration, filtering by criteria such as minimum interaction duration or a specific queue.

1. On the **Contact details** page of a contact, choose **Evaluations** on the top right to open the **Evaluations** side panel.

1. In the side panel, select the **Calibration session** radio button, choose the desired form for the calibration using the dropdown menu, and then choose the **Setup calibration session** button.  
![\[A diagram of the calibrations session setup.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibrations-setup1.png)

1. Enter a title for the calibration session, select the participants, and optionally designate an expert participant and set a due date.  
![\[A diagram of the calibrations session setup with participants and due date.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-setup2.png)

1. After creation, the calibration session will appear in the side panel. An evaluation will be automatically generated for each participant.  
![\[A diagram of the created calibrations session for each participant.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-setup3.png)

## Edit a calibration session


**To edit a calibration session**

1. In the side panel, locate the calibration session and choose **Edit**.  
![\[A diagram of choosing to edit a calibrations session.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibrations-edit1.png)

1. In the form that opens in the side panel you can modify the calibration session title, add or remove participants, optionally designate an expert participant, and set or adjust the due date.

1. Choose **Save** to update the calibration session. The changes will be reflected in the side panel. New participants will automatically receive an evaluation, while removed participants will have their evaluations deleted. 

## Perform evaluations as a part of a calibration session


Use the following procedure to perform evaluations as a part of a calibration session:

**To perform evaluations**

1. In the side panel, locate the **Calibration evaluations assigned to you** section to view your calibration evaluations.  
![\[A diagram of calibration evaluations assigned to you.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-evaluations1.png)

1. Choose an evaluation to open it. You can respond to these evaluations in the same manner as standard evaluations, with options to save your progress or submit the completed evaluation. Note that automation is disabled on calibration sessions.  
![\[A diagram of responding to calibration evaluations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibration-evaluations2.png)

1. Calibration managers can access a list of all evaluations associated with a specific calibration session by viewing the calibration session details in the side panel. Calibration managers will also be able to view evaluations submitted by participants.

## Finalize a calibration


**To finalize a calibration**

1. Access the calibration session details view and choose **Finalize**.  
![\[A diagram showing the finalize button for calibrations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/calibrations-finalize.png)

1. Confirm the finalization when prompted. Note that once finalized, neither the session nor its evaluations can be edited.

1. Within a few seconds, a calibration report becomes available for download in .csv format. The report contains the answers of participants who submitted evaluations, along with the weighted scores for each question, each section, and the overall form; evaluator notes; and a comparison of each evaluator's scores with the expert evaluator's.

   Use the **absolute deviation from expert** field (lower is better) for each participant to determine whether an evaluator deviates significantly from the expert when answering evaluation questions. You can also use **average absolute deviation from expert** (lower is better) to see whether certain questions get inconsistent answers from participants and need improvement (for example, better phrasing or more specific questions).
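
The deviation fields in the report are straightforward arithmetic over per-question scores. The following sketch reproduces them, assuming scores on a 0–100 scale; the function names and sample data are illustrative, not the report's actual implementation:

```python
def absolute_deviation_from_expert(participant_scores, expert_scores):
    """Mean absolute difference between one participant's per-question
    scores and the expert's scores (lower is better)."""
    diffs = [abs(p - e) for p, e in zip(participant_scores, expert_scores)]
    return sum(diffs) / len(diffs)

def average_absolute_deviation_per_question(all_participants, expert_scores):
    """Per-question mean absolute deviation across all participants,
    used to spot questions that get inconsistent answers."""
    per_question = []
    for i, expert in enumerate(expert_scores):
        devs = [abs(p[i] - expert) for p in all_participants]
        per_question.append(sum(devs) / len(devs))
    return per_question

# Made-up scores for three questions: expert plus two managers.
expert = [100, 80, 60]
managers = [[100, 70, 20], [90, 80, 40]]
print(absolute_deviation_from_expert(managers[0], expert))   # (0 + 10 + 40) / 3
print(average_absolute_deviation_per_question(managers, expert))
```

In this made-up data, the third question has by far the largest per-question deviation, which is the signal that the question may need better phrasing.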

## Finding calibration sessions


Amazon Connect notifies users participating in calibration sessions via email (for example, if a user is added as a participant, or if the due date changes). If a user managing a calibration session adds themselves as the **expert** participant, they also receive emails. The email contains a link to the contact being used for calibration. For users to receive email notifications, you must assign email addresses to the users in Amazon Connect. For more information, see [Add users to Amazon Connect](user-management.md).

As a manager setting up a calibration, you can copy the contact ID to search for the contact on which the calibration session was set up. If you have not added yourself as the expert, or if user emails are not set up in Amazon Connect, you will not receive an email containing a link to that contact.

# Ingest agent activities from third-party applications to evaluate agent performance
Evaluate activities performed outside of Amazon Connect

You can import agent activities completed in third-party applications into Amazon Connect. These activities are imported as Amazon Connect tasks, which you can evaluate alongside work completed in Amazon Connect. This provides managers with a unified application for quality management.

To import activities completed in third-party applications (such as application processing or social media interactions) as completed tasks, use the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) API. When you import these activities, you can capture details relevant for performance evaluation as task attributes. Unlike tasks created in the Amazon Connect admin website, these imported tasks are already marked as completed and don't need to be accepted by the agent who completed the activity in the external application.

Managers can then evaluate these external activities alongside native Amazon Connect interactions and back-office tasks. This gives managers a unified view of agent performance in the [Agent performance evaluations dashboard](agent-performance-evaluation-dashboard.md). 

## How to ingest activities from third-party applications
Ingest activities

The following steps are typically performed by an IT admin.
+  Ensure that agents or back-office workers who you want to evaluate are users on Amazon Connect. To add new users, see [Add users to Amazon Connect](user-management.md). 
+ Use the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) API to ingest all external activities completed by these agents into Amazon Connect as completed Amazon Connect tasks. 

   You can ingest:
  + All activities completed in third-party applications (for example, by triggering ingestion whenever an activity is completed). This provides you with a comprehensive view of agent activities in a single application. 
  + A percentage of agents' external activities as a sample that you use for performance evaluation.

  Following is a sample API request for ingesting a claims authorization activity that was completed in another system.

  ```
  awscurl \
  --service connect \
  -X PUT \
  'https://connect.us-east-1.amazonaws.com/contact/create-contact' \
  --region us-east-1 \
  -d \
  '{
    "Channel":"TASK",
    "InstanceId":"8f3b9ab3-df68-4124-8573-2626b5c939ac", 
    "InitiationMethod":"API",
    "InitiateAs":"COMPLETED",
    "UserInfo": {"UserId": "arn:aws:connect:us-west-2:295154396770:instance/8f3b9ab3-df68-4124-8573-2626b5c939ac/agent/1c99b776-8e56-4aaa-a1bf-b950ffbe61e4"},
    "Name": "Processing Authorization #12345",
    "Description": "Customer Name: John Doe; Customer Condition: Asthma; Medication: Levocetrizin",
    "Attributes": {
      "Authorization": "12345",
      "ExternalContactType": "Authorization" 
    },
    "References": {
      "ThirdPartySystemURL": {
        "Type": "URL",
        "Value": "https://example.com/customer/12345"
      }
    }
  }'
  ```
+  You can add additional activity information within attributes. This information may be useful for quality managers who search for and evaluate contacts. For example, the previous API call includes a custom attribute called `ExternalContactType`, which enables managers to distinguish between different types of external activities in Contact search. 

   You can also add links to the third-party system within contact references. These links enable managers to reference additional information that's not included with the task. 
+  To enable managers to search for activities using these attributes, you need to enable search on these attributes. For more information, see [Search for contacts in Amazon Connect by using custom contact attributes or contact segment attributes](search-custom-attributes.md). 
**Note**  
Only tasks that are created after this setting is configured are searchable using these attributes.
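
If you ingest only a percentage of agents' external activities as an evaluation sample, one way to decide which activities to ingest is to hash each activity ID so that the selection is deterministic and evenly distributed. The following is a sketch of the selection logic only; it does not call the CreateContact API, and the `AUTH-` ID scheme and 10% threshold are made-up examples:

```python
import hashlib

def should_ingest(activity_id: str, sample_percent: float) -> bool:
    """Deterministically select about sample_percent of activities by
    hashing the activity ID into a bucket from 0 to 99."""
    digest = hashlib.sha256(activity_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket < sample_percent

# Example: keep roughly 10% of 1,000 hypothetical authorization IDs.
sampled = [a for a in (f"AUTH-{n}" for n in range(1000))
           if should_ingest(a, 10)]
```

For each selected activity you would then issue a CreateContact request like the sample above. Hashing the ID (rather than random sampling) means re-running the job never ingests a different subset.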

## How to evaluate external activities
Evaluate external activities

The following steps are typically performed by managers.

 Managers can evaluate ingested activities in Amazon Connect the same way that they evaluate native Amazon Connect contacts. For more information, see [Evaluate performance](evaluations.md).

 If your admin has configured search on custom contact attributes, you can search for external activities with identifiers, such as the type of activity and ID. 

The following image shows a search for `Completed` contacts, with `Attribute` = `ExternalContactType`.

![\[A contact search for completed contacts with Attribute = ExternalContactType.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluate-external-activities1.png)


The following image shows an example of what contact details look like for a completed external contact. In this image: 
+ Channel subtype = connect:ExternalTask
+ Initiation method = API
+ References includes the URL to the third-party system

![\[Contact details for an external contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/evaluate-external-activities2.png)


# Set up and review agent screen recordings in Amazon Connect Contact Lens
Set up and review agent screen recordings

To help coach your agents to provide great customer service, you can use the Contact Lens screen recording feature to gain quality management insights. It records the agent's desktop, which helps you identify opportunities to improve performance. This information is also useful for ensuring compliance.

For example, let's assume it takes most agents two minutes to process a refund, but Jane Doe takes four minutes. You can watch a recording of her desktop when she's doing a refund and discover why she is taking longer. 

The following diagram shows the high-level components of screen recording. For a sequence diagram that shows the network calls between different components, see [Network requirements](sr-system-req.md#network-requirements). 

![\[A diagram of the screen recording flow.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-flow.png)


**Topics**
+ [Amazon Connect Client Application](amazon-connect-client-app.md)
+ [System and network requirements](sr-system-req.md)
+ [Enable screen recording](enable-sr.md)
+ [Review agent screen recordings](review-screen-recordings.md)
+ [Download log files for the screen recording app](troubleshoot-sr.md)
+ [Use Amazon EventBridge events to track screen recording status](track-screen-recording-status.md)
+ [FAQ for screen recording capabilities](faq-screenrecording.md)

# Amazon Connect Client Application
Amazon Connect Client Application

Amazon Connect screen recording is supported on Windows and Chrome OS. This page provides download and installation instructions for the screen recording application on each operating system, and the minimum system requirements for agent devices.

**Topics**
+ [Windows](#windows-client)
+ [Chrome OS](#chrome-os)

## Windows


### Version information

+ Version: v2.0.3 (latest)
+ Release date: January 16, 2025
+ Download link: [AmazonConnectClientWin-v2.0.3](https://d4yqf2f7seiym.cloudfront.net/builds/AmazonConnectClientWin-v2.0.3.zip) 
+ Release note: This version supports AWS GovCloud (US) customers and has security improvements.

This link downloads the **AmazonConnectClientWin-[version].zip** file, which contains the **Amazon.Connect.Client.Service.Setup.[version].msi** file. For installation instructions, see [Enable screen recording](enable-sr.md).

To be notified when there is an update to the Amazon Connect Client Application, we recommend subscribing to the RSS feed of this administrator guide. Choose the **RSS** link that appears under the title of this page (it's next to the PDF link).

### Client install instructions


In this step you install the **Amazon.Connect.Client.Service** file onto the agent's desktop, or into the virtual environment that the agent uses. This is the Amazon Connect Client Application.

**Note**  
For Windows multi-session OS, run the installer only once on the machine. Screen recording on Windows multi-session OS is supported only in version 2.0.0 or later.
If your Amazon Connect instance is in AWS GovCloud (US-West), you must install version 2.0.3 or later.
You need to configure an allowlist of Amazon Connect domains that are allowed to communicate with the client application. Screen recordings are captured only from Amazon Connect domains specified in your allowlist.

#### Programmatic installation by using software distribution tools

+ Download the latest version of the **Amazon.Connect.Client.Service.Setup.msi** file.
+ Use your organization's software distribution mechanism, such as Software Center, Microsoft System Center Configuration Manager (SCCM), or another automated deployment tool, to install the **Amazon.Connect.Client.Service** client app on agent desktops.
+ Include the `ALLOWED_CONNECT_DOMAINS` parameter by using the following syntax:

  ```
  msiexec /i Amazon.Connect.Client.Service.Setup.msi ALLOWED_CONNECT_DOMAINS="connect-dev-instance.my.connect.aws,connect-prod-instance.my.connect.aws"
  ```

#### Manual installation

+ Download the latest version of the **Amazon.Connect.Client.Service.Setup.msi** file.
+ Double-click the installer file.
+ Enter the Amazon Connect domains allowlist when prompted. The following image shows an example of how to specify a domain in the allowlist on the **Configure Installation Settings** dialog box. For more examples, see *Guidelines for specifying your Amazon Connect domains allowlist* below.  
![\[The Configure Installation Settings dialog box.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/domain-allowlist-windows.png)
+ Choose **Install** to complete the installation.

#### Verify the Amazon Connect Client Application is running and functioning correctly


##### To verify that the application is running:

+ In Windows Task Manager, check for a background process named **Amazon.Connect.Client.Service**. This is the Amazon Connect Client Application.
+ In Windows Task Manager, under **Users processes**, check for another process named **Amazon.Connect.Client.RecordingSession** after the user accepts the very first contact where screen recording is enabled. 

  The following image shows **Amazon.Connect.Client.RecordingSession** in Task Manager.  
![\[Amazon.Connect.Client.RecordingSession in Task Manager.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/taskmanager.png)

##### To verify that the application is functioning correctly and creating log files:


1. Navigate to the following directory: `C:\ProgramData\Amazon\Amazon.Connect.Client.Service\logs`

1. Open log files that are present in the directory.

1. In a successful installation the log files contain the following line: `Checking that services are still running, result : true`

1. Navigate to the following directory: `%USERPROFILE%\AppData\Local\Amazon\Amazon.Connect.Client.RecordingSession\Logs`

1. Open log files that are present in the directory.

1. In a successful installation the log files contain the following line: `Session initiation completed with result: True`

#### Guidelines for specifying your Amazon Connect domains allowlist


Be sure to adhere to the following guidelines when you enter domains in the **Allowed Connect Domains** box. Otherwise your installation will fail.
+ Format: Comma-separated Amazon Connect domains
+ Valid characters for Amazon Connect domains: Use only A-Z, a-z, 0-9, hyphen (-), period (.)
+ Protocol prefixes such as https:// or http:// are not required.
+ Limitations:
  + Maximum 500 domain entries
  + Maximum 256 characters per domain entry
  + Maximum 128,000 characters total input length

Following are examples of how to specify your domain.

##### Correct

+ domain1.my.connect.aws,domain2.my.connect.aws
+ domain-1.my.connect.aws, 1-domain.my.connect.aws
+ domain-12.my.connect.aws

##### Incorrect

+ \_123domain.foo
+ domain:2.foo
+ \_domain.my.connect.aws
+ https://domain1.my.connect.aws
+ \_.my.connect.aws
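
The guidelines above are mechanical enough to check before running the installer. The following is a hypothetical pre-flight validator; the character set and limits come directly from the guidelines, while the function name and error strings are illustrative:

```python
import re

# Only A-Z, a-z, 0-9, hyphen, and period are valid per the guidelines.
DOMAIN_RE = re.compile(r"^[A-Za-z0-9.-]+$")

def validate_allowlist(value: str) -> list:
    """Return a list of problems with a comma-separated domain allowlist.
    An empty list means the input satisfies the documented rules."""
    problems = []
    if len(value) > 128_000:
        problems.append("total input exceeds 128,000 characters")
    entries = [e.strip() for e in value.split(",")]
    if len(entries) > 500:
        problems.append("more than 500 domain entries")
    for entry in entries:
        if len(entry) > 256:
            problems.append(f"entry exceeds 256 characters: {entry[:20]}...")
        if not DOMAIN_RE.match(entry):
            problems.append(f"invalid characters in: {entry}")
    return problems

print(validate_allowlist("domain1.my.connect.aws,domain2.my.connect.aws"))  # []
```

Note that the protocol-prefix examples above fail this check too, because `:` and `/` are outside the allowed character set.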

## Chrome OS


The Amazon Connect screen recording feature on ChromeOS requires two components:
+ Isolated Web App
+ Google Chrome Browser Extension

You can install these components on agent Chrome devices through the Google Enterprise Admin Console. The URLs to configure the installation of the Isolated Web App and Chrome browser extension are provided below, and can be set to update automatically through the web manifest configuration JSON.

### Download location and install instructions


Complete the following steps on the Google Enterprise Admin Console. Apply the policy to all agent devices where the screen recording feature needs to be enabled.

#### Install Isolated Web App

+ Web Bundle ID: `ajbye5keylrcyakugr3zttu6f524eoamjc7mc6ubw3x3547xu3hxqaacai`
+ Update Manifest URL: `https://screenrecording.connect.aws/chromeos/amazon-connect-client-iwa/releases/update_manifest.json`

**To install Isolated Web App**

1. Navigate to the [Google Admin Portal](https://admin.google.com) (https://admin.google.com) and log in with your Google enterprise admin credentials.

1. Select **Add an Isolated Web App**.

1. Copy and paste the following details, and then choose **Save**:
   + Web Bundle ID: `ajbye5keylrcyakugr3zttu6f524eoamjc7mc6ubw3x3547xu3hxqaacai`
   + Update Manifest URL: `https://screenrecording.connect.aws/chromeos/amazon-connect-client-iwa/releases/update_manifest.json`

   The following image shows an example **Add an Isolated Web App** dialog box that has been completed.  
![\[A completed Add an Isolated Web App dialog box.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/addisolatedwebapp.png)

1. Configure **Installation Policy** to `Force Install + Pin to ChromeOS Taskbar` and change **Launch on Login** to `Force Launch and Prevent Closing` to make sure the Isolated Web App starts automatically when a user logs in and when the computer restarts.  
![\[The Installation policy and Launch on login sections.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/installationpolicy.png)

1. Configure **Managed configuration** to allowlist the Amazon Connect domains that are allowed to initiate screen recording on agent machines. An example of **Managed configuration** is shown in the following image.  
![\[The Managed configuration section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/managedconfiguration.png)
   + The key name MUST be `allowListedDomain`. Domain names should not include any paths, query strings, or trailing slashes (/).
   + Replace `your-instance-alias-*` with your actual Amazon Connect instance alias.

   ```
   {
   "allowListedDomain": [
   "https://your-instance-alias-1.my.connect.aws",
   "https://your-instance-alias-2.my.connect.aws"]
   }
   ```

1. Complete the following steps to configure the Isolated Web App to allow Direct sockets, Screen recording, and Window management permissions: 
   + Navigate to **Devices**, **Chrome**, **Web capabilities**, **Add Origin**.
   + Enter `ajbye5keylrcyakugr3zttu6f524eoamjc7mc6ubw3x3547xu3hxqaacai`, and then choose **Save**.

   The following image shows where **Devices**, **Chrome**, and **Web capabilities** are located in the left navigation menu.  
![\[The left navigation menu in the Chrome OS.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/allorigins.png)

The following image shows the location of **Direct sockets**, **Screen recording**, and **Window management** on the Web capabilities page.

![\[The location of Direct sockets, Screen recording, and Window management Web capabilities page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/directsockets.png)
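
The **Managed configuration** constraints above (the key must be exactly `allowListedDomain`, and entries must not include paths, query strings, or trailing slashes) can be sanity-checked before you push the policy. The following is a hypothetical check; the function name and error strings are illustrative:

```python
import json
from urllib.parse import urlparse

def check_managed_config(raw: str) -> list:
    """Validate a Managed configuration JSON document against the
    constraints above. Returns a list of problems; empty means valid."""
    problems = []
    config = json.loads(raw)
    if list(config.keys()) != ["allowListedDomain"]:
        problems.append("key name must be exactly 'allowListedDomain'")
        return problems
    for url in config["allowListedDomain"]:
        parsed = urlparse(url)
        if parsed.path or parsed.query or url.endswith("/"):
            problems.append(f"no paths, query strings, or trailing slashes: {url}")
    return problems

good = '{"allowListedDomain": ["https://your-instance-alias-1.my.connect.aws"]}'
print(check_managed_config(good))  # []
```

Running a check like this in your policy deployment pipeline catches a misspelled key or a stray trailing slash before it silently prevents screen recording from starting.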


#### Install Google Chrome Browser Extension

+ Extension ID: cjmichfmnimgeoadokmeaiclklkdccod
+ Custom URL: `https://screenrecording.connect.aws/chromeos/amazon-connect-extension/releases/updates.xml`

**To install Google Chrome Browser Extension**

1. Navigate to **Add Chrome app or extension by ID**, as shown in the following image.  
![\[The Add Chrome app or extension by ID option in the left navigation.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/appandextensions.png)

1. In the **Add Chrome app or extension by ID** dialog box, choose **From a custom URL** and enter the following information:
   + Extension ID: `cjmichfmnimgeoadokmeaiclklkdccod`
   + Custom URL: `https://screenrecording.connect.aws/chromeos/amazon-connect-extension/releases/updates.xml`  
![\[The Add Chrome app or extension by ID dialog box, the From a custom URL option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/chromeapp.png)

1. Configure **Installation Policy** to **Force Install**, and then choose **Save**, as shown in the following image.  
![\[The Installation Policy option set to Force install.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/forceinstall.png)

# System and network requirements for screen recording in Amazon Connect
System and network requirements

This topic provides the system requirements for using screen recording, and describes the detailed dataflow it uses in each platform.

## System requirements


Here are the minimum system requirements for agent devices to perform screen recording. You'll need to scope additional memory, bandwidth, and CPU for the operating system and anything else running on the device to avoid resource contention. 
+ CPU: 2.0 GHz (4 cores or 4 vCPU recommended)
+ Memory: 4 GB
+ Network: 600 Kbps

### Supported operating systems

+ 64-bit Windows 10 and 11 based on the x86-64 architecture
+ Chrome OS version 140 or higher enrolled in a Google Enterprise Domain

**Note**  
When Windows multi-session configuration is enabled, allowing multiple agents to use a single Windows host, ensure that the workstation has the recommended resources available for each concurrent session.

## Network requirements

+ **Port used for screen recording**: The Amazon Connect Client Application communicates with the CCP through a local websocket on port 5431 (on Windows) and 25431 (on Chrome OS).
+ **URLs to add to your firewall allowlist**: To ensure smooth screen recording functionality, add the following URL patterns to your allowlist:
  + From CCP: `connect-recording-staging-*.s3.dualstack.your-region-name.amazonaws.com`. If you prefer not to use wildcards, the list of endpoints is available at https://screenrecording.connect.aws/config/connect-recording-endpoint-allowlist.json. This list may be updated in the future. Refer to the `createDate` at the top of the file to check for updates.
  + From screen recording client application: `https://your-connect-instance-alias.my.connect.aws/taps/client/auth`
+ **Sequence diagram**: The following sequence diagram shows the network calls between different components involved in screen recording.  
![\[A sequence diagram shows the network calls between different components involved in screen recording.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/sequence-diagram.png)
  + In Windows, the Amazon Connect Client is the combination of the Amazon.Connect.Client.Service background process and Amazon.Connect.Client.RecordingSession.
  + In ChromeOS, the Amazon Connect Client is the combination of Isolated Web App and Browser extension.

# Enable screen recording for your Amazon Connect instance
Enable screen recording

This topic provides steps to enable screen recording for your Amazon Connect instance, download and install the Amazon Connect Client Application, and perform key configuration steps. 

**Topics**
+ [Step 1: Enable screen recording for your instance](#install-sr-step1)
+ [Step 2: Download and install the Amazon Connect Client Application](#install-sr-step2)
+ [Step 3: Configure the Set recording and analytics behavior block](#configure-recording-block)
+ [Configuration tips](#tips-sr)

## Step 1: Enable screen recording for your instance
Step 1: Enable screen recording for your instance

**Important**  
If your Amazon Connect instance was created before October 2018 and you don't have service-linked roles set up, follow the steps in [Use service-linked roles](https://docs.aws.amazon.com/connect/latest/adminguide/connect-slr.html#migrate-slr) to migrate to the Amazon Connect service-linked role.

The steps in this section explain how to update your instance settings to enable screen recording, and how to encrypt recording artifacts.

1. Open the Amazon Connect console at [https://console.aws.amazon.com/connect/](https://console.aws.amazon.com/connect/).

1. Choose your instance alias.

1. In the navigation pane, choose **Data storage**, scroll down to **Screen recordings** and choose **Edit**, as shown in the following image.  
![\[The Screen recordings section of the Data storage page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/console-screenrecordings.png)

1. Choose **Enable screen recording**, and then choose **Create a new S3 bucket (recommended)** or **Select an existing S3 bucket**.

1. If you chose **Create a new S3 bucket (recommended)**, enter a name in the **Name** box. If you chose to use an existing bucket, select it from the **Name** list.

1. (Optional) To encrypt the recording artifacts in your Amazon S3 bucket, select **Enable encryption**, then choose a KMS key.
**Note**  
When you enable encryption, Amazon Connect uses the KMS key to encrypt any intermediate recording data while the service processes it.

1. When finished, choose **Save**.

For more information about instance settings, see [Update settings for your Amazon Connect instance](update-instance-settings.md). 

## Step 2: Download and install the Amazon Connect Client Application
Step 2: Download and install the Amazon Connect Client Application

Follow the instructions in [Amazon Connect Client Application](amazon-connect-client-app.md) to download and install the Amazon Connect Client Application for your operating system.

## Step 3: Configure the Set recording and analytics behavior block

+ Add a [Set recording and analytics behavior](set-recording-behavior.md) block immediately after the point of entry to the flow. Add the block to every flow that you want to enable for screen recording.
+ The following image shows the properties page of the [Set recording and analytics behavior](set-recording-behavior.md) block. In the **Screen Recording** section, choose **On**.  
![\[The Set recording behavior block, the Screen recording section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screenrecordingblock.png)

## Configuration tips
Configuration tips
+ To enable supervisors to search for contacts that have screen recordings, add a [Set contact attributes](set-contact-attributes.md) block before **Set recording and analytics behavior**. Add a custom attribute, for example **screen recording = true**. Supervisors can [search on this custom attribute](search-custom-attributes.md) to find contacts that have screen recordings.
+ You may want to add a [Distribute by percentage](distribute-by-percentage.md) block before **Set recording and analytics behavior**. This enables you to use screen recording for some but not all contacts.
+ You may want to use the [SuspendContactRecording](https://docs.aws.amazon.com/connect/latest/APIReference/API_SuspendContactRecording.html) and [ResumeContactRecording](https://docs.aws.amazon.com/connect/latest/APIReference/API_ResumeContactRecording.html) APIs to prevent sensitive information from being captured in the screen recording.

# Review agent screen recordings in the Amazon Connect Client Application
Review agent screen recordings

Use screen recordings to identify areas for agent coaching (for example, long contact handle duration or non-compliance with business processes) by watching an agent's actions while they handle a call, chat, or task contact. 

The screen recording is synchronized with the voice recording and contact transcript, so you can hear or read what is being said at the same time.

**Note**  
Screen recordings are only available for Completed contacts.

**Topics**
+ [Step 1: Assign permissions to review screen recordings in the Amazon Connect Client Application](#assign-permissions-sr)
+ [Step 2: Review screen recordings](#review-sr-2)
+ [Watch in Picture-in-picture mode](#picture-in-picture)

## Step 1: Assign permissions to review screen recordings in the Amazon Connect Client Application
Step 1: Assign permissions

To allow users to review screen recordings, assign the following **Analytics and optimization** security profile permission: 
+ **Screen recording - Access**: Allows a user, such as a supervisor or Quality Assurance team member, to access and review screen recordings.
**Important**  
Screen recording merges the screen recording video with the unredacted call recording file. If users have permission to view screen recordings, they can listen to the unredacted audio.
+ **Screen recording - Enable download button**: Allows a user, such as a supervisor or Quality Assurance team member, to view a download button on the **Contact details** page to download screen recording videos.

For information about how to add more permissions to an existing security profile, see [Update security profiles in Amazon Connect](update-security-profiles.md).

## Step 2: Review screen recordings
Step 2: Review screen recordings

1. Log in to Amazon Connect with a user account that has the **Analytics and optimization** - **Screen recording - Access** permission in its security profile.

   If you also have **Screen recording - Enable download button** permission, you can view a button on the **Contact details** page that enables you to download a screen recording and view it offline. 

1. On the navigation menu, choose **Analytics and optimization**, **Contact search**.

1. Search for the contact you want to review.
**Tip**  
If you've added a custom attribute to your flows to indicate when screen recording is enabled, you can [search by the custom attribute](search-custom-attributes.md) to locate contact records with screen recordings. For more information, see [Configuration tips](enable-sr.md#tips-sr). 

1. Click or tap the contact ID to view the **Contact details** page.

1. The **Recording** section contains a video player that displays the screen recording, as shown in the following image.  
![\[A screen recording.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-show-recording.png)
**Important**  
Screen recording playback in the **Contact details** page is not supported in the legacy `https://your-instance-alias.awsapps.com/` domain. We recommend using the `https://your-instance-alias.my.connect.aws/` domain to play screen recordings. For more information, see [Update your Amazon Connect domain](update-your-connect-domain.md) in this guide.

1. Use the right-side controls to zoom in and out, fit the video to the window, download video, expand to full-screen, and play picture-in-picture.  
![\[The zoom in and out controls.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-zoom.png)

1. If you don't see a video recording, check that the **Show screen recording** toggle is on. 

   If no video appears, then the screen recording may not yet be ready (that is, uploaded into the Amazon S3 bucket). If the problem persists, contact [AWS Support Center](https://console.aws.amazon.com/support/home#/).

## Watch in Picture-in-picture mode
Watch in Picture-in-picture mode

You may want to move the video elsewhere on your monitor while you watch it. For example, you can reposition the video so you can read the transcript. Use **Watch in Picture-in-picture** mode to achieve this. 

1. Choose the picture-in-picture button on the right side controls, as shown in the following image.  
![\[The picture in picture button the right side of the page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-picture-in-picture.png)

1. Choose the **X** in the top right corner to pop the window back. The following image shows the video in Picture-in-picture mode, and the location of the **X** to pop the window back.   
![\[The video in picture-in-picture mode and the location of the back to tab button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-back-tab.png)

# Download the Amazon Connect Client Application log files for troubleshooting
Download log files for the screen recording app

When you open an AWS Support ticket for issues with screen recordings, include the log files for Amazon Connect Client Application and shared worker from the agent desktop.

## Amazon Connect Client Application log files (Windows)


On the agent's desktop, navigate to:
+ `C:\ProgramData\Amazon\Amazon.Connect.Client.Service\logs`

  This location contains logs for the WebSocket connection between the browser and the Client Application, and for the WebSocket connection between **Amazon.Connect.Client** and **Amazon.Connect.RecordingSession**.
+ `%USERPROFILE%\AppData\Local\Amazon\Amazon.Connect.Client.RecordingSession\Logs`

  This location contains logs for screen recording activities. (Not applicable for version 1.x.)

## Shared Worker logs (Windows and ChromeOS)


Open your CCP. It must be open so you can view the **ClientAppInterface** shared worker.

### Chrome


1. Open a Chrome browser. For the URL type `chrome://inspect/#workers`.

1. In the **Shared workers** section, locate the shared worker named **ClientAppInterface**.

1. Choose **inspect** to open a DevTools instance.

1. Choose the **Console** tab, right-click the log dump, and then select **Save as...** to store the log dump to a local file.

### Firefox


1. Open a Firefox browser. For the URL type `about:debugging#workers`.

1. In the **Shared workers** section, choose **Inspect** for **/connect/ccp-naws/static/client-app-interface.js**.

1. Right-click the **Console** tab and select **Save all Messages to File** to store the log dump to a local file.

### Edge (Chromium)


1. Open an Edge browser. For the URL, type `edge://inspect/#workers`.

1. In the **Shared workers** section, locate the shared worker named **ClientAppInterface**.

1. Choose **inspect** to open a DevTools instance.

1. Choose the **Console** tab, right-click the log dump, and then select **Save as...** to store the log dump to a local file.

# Use Amazon EventBridge events to track screen recording status
Use Amazon EventBridge events to track screen recording status

With Amazon EventBridge, you can view the status of [agent screen recordings](agent-screen-recording.md) in near real-time. The event for each agent screen recording includes success/failure status, failure codes with descriptions, recording location, recording size, installed client version, and screen recording start and end times.

You can integrate with other AWS services to get analytical or monitoring insights of agent screen recordings:
+ Query with [Amazon CloudWatch Log Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html)
+ Get near real-time alerts in an [Amazon QuickSight](https://aws.amazon.com/quicksight/) dashboard
+ Create aggregated reports outside of Amazon Connect
+ Connect your other customized data pipeline solutions with Amazon EventBridge

**Topics**
+ [Amazon EventBridge event payload formats](#eventbridge-payload-formats)
+ [Create a rule to match Amazon EventBridge events](#create-eventbridge-rule)
+ [Configure the target of the created Amazon EventBridge rule](#configure-eventbridge-target)

## Amazon EventBridge event payload formats


### Event with screen recording status - INITIATED


For every contact with agent screen recording enabled, this event is emitted when the agent accepts the contact, which may be before recording starts.

```
{  
  "version": "0",  
  "id": "the_event_id_from_eventbridge",  
  "detail-type": "Screen Recording Status Changed",  
  "source": "aws.connect",  
  "account": "your_aws_account_id",  
  "time": "2026-01-01T00:00:00Z",  
  "region": "us-west-2",  
  "resources": [  
    "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/contact/your_contact_id",  
    "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id"  
  ],  
  "detail": {  
    "version": "1.0",  
    "recordingStatus": "INITIATED",  
    "eventDeduplicationId": "unique_uuid",  
    "instanceArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id",  
    "contactArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/contact/your_contact_id",  
    "agentArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/agent/your_agent_id",  
    "clientInfo": {  
      "appVersion": "2.0.3.0",  
    }  
  }  
}
```

### Event with screen recording status - COMPLETED


This event is emitted when screen recording ends on the agent desktop. This doesn't mean the screen recording has been successfully uploaded to your Amazon S3 bucket.

```
{  
  "version": "0",  
  "id": "the_event_id_from_eventbridge",  
  "detail-type": "Screen Recording Status Changed",  
  "source": "aws.connect",  
  "account": "your_aws_account_id",  
  "time": "2026-01-01T00:00:00Z",  
  "region": "us-west-2",  
  "resources": [  
    "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/contact/your_contact_id",  
    "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id"  
  ],  
  "detail": {  
    "version": "1.0",  
    "recordingStatus": "COMPLETED",  
    "eventDeduplicationId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeee",  
    "instanceArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id",  
    "contactArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/contact/your_contact_id",  
    "agentArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/agent/your_agent_id",  
    "clientInfo": {  
      "appVersion": "2.0.3.0",  
    },  
    "recordingInfo": {  
      "startTime": "2026-01-01T00:00:00.000Z",  
      "endTime": "2026-01-01T00:00:00.000Z",  
    }  
  }  
}
```

### Event with screen recording status - PUBLISHED


This event is emitted when the screen recording is successfully uploaded to your Amazon S3 bucket. Details include Amazon S3 bucket location, recording size, and recording duration.

```
{  
  "version": "0",  
  "id": "the_event_id_from_eventbridge",  
  "detail-type": "Screen Recording Status Changed",  
  "source": "aws.connect",  
  "account": "your_aws_account_id",  
  "time": "2026-01-01T00:00:00Z",  
  "region": "us-west-2",  
  "resources": [  
    "contactArn",  
    "instanceArn"  
  ],  
  "detail": {  
    "version": "1.0",  
    "recordingStatus": "PUBLISHED",  
    "eventDeduplicationId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeee",  
    "instanceArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id",  
    "contactArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/contact/your_contact_id",  
    "agentArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/agent/your_agent_id",  
    "clientInfo": {  
      "appVersion": "2.0.3.0",  
    },  
    "recordingInfo": {  
      "startTime": "2026-01-01T00:00:00.000Z",  
      "endTime": "2026-01-01T00:00:00.000Z",  
      "publishTime": "2026-01-01T00:00:00.000Z",  
      "location": "s3://your-bucket-name/object-prefix/object-key",  
      "durationInMillis": 100000,  
      "sizeInBytes": 1000000  
    }  
  }  
}
```

### Event with screen recording status - FAILED


This event is emitted if screen recording fails. The failure details are provided on a best-effort basis and reflect the most likely failure reason that the service can detect.

```
{  
  "version": "0",  
  "id": "the_event_id_from_eventbridge",  
  "detail-type": "Screen Recording Status Changed",  
  "source": "aws.connect",  
  "account": "your_aws_account_id",  
  "time": "2026-01-01T00:00:00Z",  
  "region": "us-west-2",  
  "resources": [  
    "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/contact/cccccccc-cccc-cccc-cccc-ccccccccccccc",  
    "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id"  
  ],  
  "detail": {  
    "version": "1.0",  
    "recordingStatus": "FAILED",  
    "eventDeduplicationId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeee",  
    "instanceArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id",  
    "contactArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/contact/cccccccc-cccc-cccc-cccc-ccccccccccccc",  
    "agentArn": "arn:aws:connect:us-west-2:your_aws_account_id:instance/your_instance_id/agent/your_agent_id",  
    "clientInfo": {  
      "appVersion": "2.0.3.0",  
    },  
    "failureInfo": {  
      "code": "UNKNOWN",  
      "message": "UNKNOWN",  
      "source": "Unknown failure"  
    },  
    "recordingInfo": {  
      "startTime": "2026-01-01T00:00:00.000Z"  
    }  
  }  
}
```
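The payloads above can be consumed by any EventBridge target. The following is a rough sketch of a consumer (the handler and its names are our own, not part of the service) that routes on `recordingStatus` and deduplicates on `eventDeduplicationId`, since EventBridge delivers events at least once:

```typescript
// Sketch only: routes screen recording status events and drops duplicate
// deliveries. The event shape follows the example payloads above.
type RecordingStatus = "INITIATED" | "COMPLETED" | "PUBLISHED" | "FAILED";

interface ScreenRecordingEvent {
  "detail-type": string;
  source: string;
  detail: {
    recordingStatus: RecordingStatus;
    eventDeduplicationId: string;
    contactArn: string;
    failureInfo?: { code: string; message: string; source: string };
  };
}

// In production this would be a durable store; an in-memory set keeps the sketch simple.
const seen = new Set<string>();

function handleEvent(event: ScreenRecordingEvent): RecordingStatus | "DUPLICATE" {
  const { recordingStatus, eventDeduplicationId } = event.detail;
  // EventBridge delivery is at-least-once, so drop repeats of the same event.
  if (seen.has(eventDeduplicationId)) return "DUPLICATE";
  seen.add(eventDeduplicationId);
  if (recordingStatus === "FAILED") {
    console.log(`Recording failed for ${event.detail.contactArn}: ${event.detail.failureInfo?.code}`);
  }
  return recordingStatus;
}
```

In a real pipeline, `handleEvent` would run in the rule's target (for example, an AWS Lambda function) and the dedup set would live in a durable store.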

## Create a rule to match Amazon EventBridge events


To subscribe to Amazon EventBridge events for screen recording status, create an Amazon EventBridge rule that matches the defined event source and event detail-type. You can create the rule in the AWS Management Console or with the AWS CDK.

### Create a rule using the AWS Console


In the AWS Console, create a new rule in Amazon EventBridge → Buses → Rules.

#### Use the default event bus


![\[The Create rule page showing the default event bus selection.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-eventbridge-event-rule.png)


#### Use a template event pattern


Select the defined event pattern from the dropdown lists.

![\[The Event source dropdown showing aws.connect selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-eventbridge-event-source.png)


![\[The Event pattern showing Screen Recording Status Changed selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-eventbridge-event-pattern.png)


If the event type is not showing up in the dropdown list, you can alternatively create the same pattern using **Custom pattern (JSON editor)** with:

```
{  
  "source": [ "aws.connect" ],  
  "detailType": [ "Screen Recording Status Changed" ]  
}
```
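The rule matches only on the top-level `source` and `detail-type` fields. As an illustration of what the pattern selects (not how EventBridge itself is implemented), the check is equivalent to:

```typescript
// Illustration only: the subset of EventBridge pattern matching this rule relies on.
function matchesScreenRecordingRule(event: { source: string; "detail-type": string }): boolean {
  return (
    event.source === "aws.connect" &&
    event["detail-type"] === "Screen Recording Status Changed"
  );
}
```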

### Create a rule using AWS CDK


Alternatively, if you manage AWS resources with AWS CDK, here is a sample TypeScript code snippet to construct an Amazon EventBridge rule:

```
import { Rule } from 'aws-cdk-lib/aws-events';  
  
const eventBridgeRule = new Rule(this, 'YourEventBridgeRuleLogicalName', {  
    ruleName: 'your-event-bridge-rule-name',  
    description: 'your rule description',  
    eventPattern: {  
        source: [ "aws.connect" ],  
        detailType: [ "Screen Recording Status Changed" ]  
    }  
});
```

## Configure the target of the created Amazon EventBridge rule


Amazon EventBridge supports a number of AWS services as targets, so you can build your own event processing pipeline with other AWS services, depending on your needs. You can define up to five targets for each rule. For more information, see [Amazon EventBridge targets](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-targets.html) in the *Amazon EventBridge User Guide*.

### Amazon CloudWatch log group as an example target


The following example uses an [Amazon CloudWatch log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) as a target.

![\[The Target configuration showing CloudWatch log group selected.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/screen-recording-eventbridge-target-cwl.png)


In AWS CDK code, create the resource and add it to the Amazon EventBridge rule:

```
import { LogGroup, RetentionDays } from "aws-cdk-lib/aws-logs";  
import { CloudWatchLogGroup } from 'aws-cdk-lib/aws-events-targets';  
   
const logGroup = new LogGroup(this, 'YourLogGroupLogicalName', {  
    logGroupName: '/aws/events/your-log-group-name',  
    retention: RetentionDays.ONE_YEAR  
});  
  
eventBridgeRule.addTarget(new CloudWatchLogGroup(logGroup));
```

#### Example Amazon CloudWatch Log Insights queries


The following example queries use the Amazon CloudWatch Logs Insights query language:
+ **Sample query on success ratio**

  ```
  fields @timestamp, @message, detail  
  | stats sum(detail.recordingStatus = "PUBLISHED") as Count_Success,  
    sum(detail.recordingStatus = "INITIATED") as Count_Total,  
    Count_Success / Count_Total as Success_Ratio
  ```
+ **Sample query to get counts of each recording status**

  ```
  fields @timestamp, @message, detail  
  | stats count(*) as Count group by detail.recordingStatus as recordingStatus
  ```
+ **Sample query on failed contacts with most common failure codes**

  ```
  fields @timestamp, @message, detail  
  | filter detail.recordingStatus = "FAILED"   
  | stats count(*) as Count group by detail.failureInfo.code as FailureCode  
  | sort by Count desc
  ```
+ **Sample query on agents with most failed contacts**

  ```
  fields @timestamp, @message, detail  
  | filter detail.recordingStatus = "FAILED"   
  | stats count(*) as Count group by detail.agentArn as AgentArn  
  | sort by Count desc
  ```
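If you route the events to your own code instead of querying logs, the same aggregations are straightforward to compute. The following is a sketch, assuming an in-memory array of event `detail` objects with the fields shown in the payload section:

```typescript
// Sketch: the same aggregations as the Logs Insights queries above,
// computed over event detail objects held in memory.
interface EventDetail {
  recordingStatus: "INITIATED" | "COMPLETED" | "PUBLISHED" | "FAILED";
  agentArn?: string;
  failureInfo?: { code: string };
}

// Success ratio: PUBLISHED events over INITIATED events, as in the first query.
function successRatio(details: EventDetail[]): number {
  const published = details.filter(d => d.recordingStatus === "PUBLISHED").length;
  const initiated = details.filter(d => d.recordingStatus === "INITIATED").length;
  return initiated === 0 ? 0 : published / initiated;
}

// Count of FAILED events per failure code, as in the third query.
function failureCodeCounts(details: EventDetail[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const d of details) {
    if (d.recordingStatus !== "FAILED") continue;
    const code = d.failureInfo?.code ?? "UNKNOWN";
    counts.set(code, (counts.get(code) ?? 0) + 1);
  }
  return counts;
}
```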

# Frequently asked questions about Amazon Connect screen recording capabilities
FAQ for screen recording capabilities

This topic provides frequently asked questions about using Amazon Connect screen recording capabilities.

**Topics**
+ [General specifications](#faq-sr-general)
+ [Configuration](#faq-sr-configuration)
+ [Performance](#faq-sr-performance)

## General specifications

+ **What is the file format of screen recordings?**

  The screen recording files are saved in MP4 format.
+ **Which Amazon Connect channels are supported?**

  You can generate screen recordings for voice, chat, and task contacts.
+ **Do you capture the entire screen?**

  Yes, the Amazon Connect Client Service records all the open applications on the agent's monitors, up to three monitors.
+ **Does screen recording support concurrent user sessions on Windows using Virtual Desktop Infrastructure (VDI) environments?**

  Yes, screen recording supports concurrent user sessions on Windows when using Amazon Connect Client Application version 2.0.0 or later.
+ **Where are the screen recording files stored in my AWS account?**

  The screen recordings are delivered to your Amazon S3 bucket and encrypted using the KMS key you specify. This is similar to how call recordings are stored and encrypted.
+ **How can I be notified when there is a new version of the client application?**
  + For Windows, to be notified when there is an update to the Amazon Connect Client Application, we recommend subscribing to the RSS feed of this administrator guide. Choose the **RSS** link that appears under the title of this page (it's next to the PDF link).
  + For ChromeOS, Isolated Web App and Chrome Extension are hosted and managed by Amazon Connect. They are automatically updated as newer versions are published.
+ **Is screen recording PCI compliant?**

  Amazon Connect, including the screen recording capability, is compliant with the Payment Card Industry Data Security Standard (PCI DSS). However, you are responsible for determining whether your specific implementation meets your compliance requirements.
**Important**  
During a video call or screen sharing session, agents are able to see the customer's video or screen share even when the customer is on hold. It is the customer's responsibility to handle PII accordingly. If you want to change this behavior, you can build a custom CCP and communication widget. For more information, see [Integrate in-app, web, video calling, and screen sharing natively into your application](config-com-widget2.md).
+ **Does screen recording work with custom CCPs and agent desktops?**

   Screen recording is designed to work with custom CCPs and agent workspaces built with the [Amazon Connect Streams JS library](https://github.com/amazon-connect/amazon-connect-streams). We recommend testing your custom solution before deploying screen recording in production. 
+ **Can I use screen recording anywhere in the world?**

  Screen recording is available in AWS GovCloud (US) and all AWS commercial Regions where Amazon Connect is available. However, your use of screen recording may be subject to privacy and other laws. Consult your compliance team before enabling this capability for your agents.

  Using screen recording in AWS GovCloud (US-West) requires client version 2.0.3 or later.
+ **Are agents alerted when screen recording is enabled for a contact?**

  By default Amazon Connect doesn't provide a notification feature. However, you can use the [Amazon Connect Streams JS library](https://github.com/amazon-connect/amazon-connect-streams/blob/master/cheat-sheet.md) to create a notice or other visual indicator on an agent's desktop to signal that screen recording is in use.
+ **What happens if an agent closes the browser during a contact, or immediately after a contact ends?**

  If the browser is closed at the beginning of a contact, before any screen capture data can be uploaded to Amazon Connect, the final screen recording may not be published. If the browser is closed immediately after a contact ends, but before the final screen capture data can be uploaded, the screen recording is published when the agent next logs in to the CCP. 
+ **Does screen recording STOP when an agent places a customer on hold?**

  No, the screen recording continues recording when an agent places a customer on hold.
+ **Is screen recording supported when agents are logged in to multiple CCP instances?**

  No, screen recording is not supported when agents are logged in to multiple CCP instances simultaneously, whether in the same or different browsers. You might see inconsistent behavior with screen recordings in these cases.

## Configuration

+ **Can I opt only for screen recording and not for call recording?**

  Yes, you can enable screen recording without call recording for a voice call. To do so, disable voice recording in the [Set recording and analytics behavior](set-recording-behavior.md) block while keeping the screen recording enabled.
+ **How do I find the Amazon S3 location of the screen recording?**

  You can find the screen recording location in the [RecordingsInfo](ctr-data-model.md#ctr-RecordingsInfo) section of the contact record. See the **Location** field.
+ **How do I enable screen recording for a percentage of my contacts?**

  You can use the [Distribute by percentage](distribute-by-percentage.md) block in the flow to enable a percentage of contacts for screen recording.
+ **What is the average size of a screen recording file per minute in S3?**

  The average size of a screen recording is 1.5 MB per minute. This size can vary depending on factors such as the video encoding.
+ **What is the frame rate for screen recording and is this configurable?**

  The screen is recorded at 5 frames per second and this is not configurable.
+ **What codec is used for screen recording?**

  Screen recording uses the OpenH264 codec.
+ **Is there a way to choose which audio (redacted or unredacted) gets used for screen recording?**

  No, currently only the unredacted audio is used for screen recording.
+ **Is there a service limit for screen recording?**

  No, there is no service limit or quota for the screen recording service.
+ **Is there a maximum duration for screen recording?**

  No, the screen recording solution imposes no maximum duration for a recording.
+ **How many agent monitors can be recorded?**

  Screen recording can record up to 3 screens/monitors.
+ **Can I configure my call/screen recording storage S3 bucket to enable bucket level encryption with a KMS key that is different from the KMS key used as part of instance data storage configuration?**

  No, the same key should be used at bucket level and also as part of instance data storage configuration.

## Performance

+ **What are the bandwidth requirements for screen recording?**

  We recommend 500 Kbps per concurrent contact with screen recording enabled. 
+ **Why do I see higher CPU usage after installing the screen recording client application on my Windows machine?**

  Screen recording is a CPU-intensive workload, so an increase in CPU utilization is expected. To avoid resource contention issues, we recommend providing sufficient resources as documented in [System requirements](sr-system-req.md#sr-requirements).
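The two figures above (about 1.5 MB of storage per recorded minute, and about 500 Kbps per concurrent recorded contact) support simple capacity estimates. A back-of-envelope sketch; the constants are the stated averages, and actual values vary with encoding and screen content:

```typescript
// Back-of-envelope sizing from the averages stated in this FAQ:
// ~1.5 MB of Amazon S3 storage per recorded minute, and
// ~500 Kbps of bandwidth per concurrent recorded contact.
const MB_PER_MINUTE = 1.5;
const KBPS_PER_CONTACT = 500;

// Rough monthly S3 storage, in MB, for a given contact volume.
function estimateMonthlyStorageMb(contactsPerMonth: number, avgMinutesPerContact: number): number {
  return contactsPerMonth * avgMinutesPerContact * MB_PER_MINUTE;
}

// Rough aggregate upload bandwidth, in Mbps, for concurrent recorded contacts.
function estimateBandwidthMbps(concurrentRecordedContacts: number): number {
  return (concurrentRecordedContacts * KBPS_PER_CONTACT) / 1000;
}
```

For example, 1,000 recorded contacts per month averaging 10 minutes each would consume roughly 15 GB of storage per month under these assumptions.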

# Search for completed and in-progress contacts in Amazon Connect
Search for completed and in-progress contacts

**Note**  
End of support notice: On May 20, 2026, AWS will end support for Amazon Connect Voice ID. After May 20, 2026, you will no longer be able to access Voice ID on the Amazon Connect console, access Voice ID features on the Amazon Connect admin website or Contact Control Panel, or access Voice ID resources. For more information, visit [Amazon Connect Voice ID end of support](https://docs.aws.amazon.com/connect/latest/adminguide/amazonconnect-voiceid-end-of-support.html). 

This topic is for administrators and contact center managers who need to search for contacts using the Amazon Connect admin website. For the APIs used to search for contacts programmatically, see [APIs to search contacts](#apis-search-contacts). 

**Topics**
+ [Important things to know](#important-contact-search)
+ [Key search features](#key-search-features)
+ [Manage who can search for contacts and access detailed information](#required-permissions-search-contacts)
+ [How to search for a contact](#how-to-search-contacts)
+ [Additional fields: Add columns to your search results](#additional-fields)
+ [Download search results](#download-search-results)
+ [APIs to search contacts](#apis-search-contacts)
+ [Search for in-progress contacts in Amazon Connect](search-in-progress-contacts.md)
+ [Search for contacts in Amazon Connect by using custom contact attributes or contact segment attributes](search-custom-attributes.md)

## Important things to know

+ You can search for contacts as far back as two years ago.
+ You can search for both completed and in-progress contacts. For contacts handled by agents, a contact is only marked as completed after the agent has completed After Contact Work (ACW).
+ The ability to search for in-progress contacts varies by channel (see [Contact events data model](contact-events.md#contact-events-data-model) for reference):
  + **Voice**
    + You can search for in-progress queued callbacks after they are queued, connected to an agent, or disconnected.
    + You can search for other voice contacts only after they are connected to an agent or have been disconnected. Queued in-progress voice contacts (with the exception of callbacks) are not shown on the **Contact search** page.
  + **Chat**: You can search for contacts after they are connected to the system, queued, connected to an agent, or disconnected.
  + **Tasks** and **Email**: You can search for all in-progress contacts after they are initiated.
+ The search results for a given query are limited to the first 10,000 results.
+ You cannot search for multiple contact IDs at the same time.

## Key search features
Key search features
+ [Search by custom contact attributes](search-custom-attributes.md) (user-defined attributes).
+ [Search for contacts that are in progress](search-in-progress-contacts.md) or completed using the **Contact status** filter.
+ Search a time range of up to 8 weeks. Within the time range filter, you can specify the **Timestamp type** used for the range: initiated, connected to agent, disconnected, or scheduled.
**Important**  
The time range filter on Contact search uses the "Initiated" timestamp type by default. Before the Timestamp type selection was introduced, the time range filter used the "Disconnected" timestamp type.
Saved searches on Contact search created before the launch of the ability to search for in-progress contacts (September 2023) were updated with the filters Contact status = "Completed" and Timestamp type = "Disconnected". These selections were implied before that launch.
+ Multi-select filters for agent names, contact queues and the name of the initial flow for the contact.
+ Filter for agent hierarchy. You can progressively apply filters to drill down into agent hierarchy levels. 
**Note**  
When you select multiple values at any hierarchy level, you cannot filter on the next hierarchy level(s).
+ Filter contacts by channel and channel subtype, such as SMS.
+ Filter to search for email contacts by email address (To, From, and CC) and email subject. Searching on an email subject is not case sensitive, and searching for a subset of words within a subject returns results. For example, if you enter **inquiry**, Amazon Connect returns emails with the subject **Customer Inquiry**.
+ Filters for [conversational analytics](analyze-conversations.md). You can search for contacts that have conversational analytics enabled. For example, **Conversational analytics: Voice - Agent interaction** returns contacts where the agent interaction has been analyzed by conversational analytics. You can [search for Contact categories](search-conversations.md#contact-category-search) by specifying the full category name. Choose to search using **Match any**, **Match all**, or **Match none**. For example, you can search for contacts with both "category A" and "category B", or with either one of the two categories. 

   For the complete list of conversational analytics filters, see [Search conversations](search-conversations.md). 

  In the **Add filter** drop-down box, the Contact Lens filters have **CL** next to them. You can apply these filters only if your organization has enabled Contact Lens.   
![\[The contact search page, the filters section, the filter dropdown menu.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-search-contact-category-1.png)

  If you want to remove the Contact Lens filters from a user's drop-down list, remove the following permissions from their security profile: 
  + **Search contacts by conversation**: This controls access to the sentiment scores, non-talk time, and category searches.
  +  **Search contacts by keywords**: This controls access to the keywords search.
  +  **Contact Lens - conversational analytics**: On the **Contact details** page, this displays graphs that summarize conversational analytics.
+ Filters for recordings. Using the **recording** filter, you can filter for contacts with a screen recording (video) or audio recording (voice). 
+ Filter for Active Region. Search for contacts by the AWS Region where they were handled. This filter is available for Amazon Connect instances using global resiliency, where contacts may be handled in a different AWS Region than the Region you are logged in to.
**Important**  
Some Amazon Connect features may be unavailable when accessing cross-Region contact data. For complete details, see [Set up Amazon Connect Global Resiliency](setup-connect-global-resiliency.md).
+ Filters for [Voice ID](voice-id.md). You can search for the Voice ID authentication and fraud detection status of contacts, if your organization has enabled Voice ID. To access this functionality, on your security profile, you need **Analytics and Optimization**, **Voice ID - attributes and search** - **View** permission.

  The following image shows the filters available to search Voice ID: **Authentication result**, **Fraud detection result**, **Speaker actions**.  
![\[The filter dropdown menu, filters for Voice ID.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/voiceid-search-filters.png)

## Manage who can search for contacts and access detailed information

Before users can search for contacts in Amazon Connect, or access detailed contact information, they need to be assigned to the **CallCenterManager** security profile, or have the following **Analytics and Optimization** permissions:
+ At least one of the following permissions is required to view contacts on **Contact search** and **Contact details** pages:
  + **Contact search - View**: Allows a user to access all contacts on **Contact search** and **Contact details** pages.
  + **View my contacts - View**: On the **Contact search** and **Contact details** pages, allows agents to view only those contacts that they handled.
+ **Restrict contact access** (Optional): Manage a user's access to results on the **Contact search** page based on their agent hierarchy group. For example:
  + Agents who are assigned to AgentGroup-1 can only view contact records for contacts handled by agents in that hierarchy group, and any groups below them. (If they have permissions for **Recorded conversations**, they can also listen to call recordings and view transcripts.)
  + Agents assigned to AgentGroup-2 can only access contact records for contacts handled by their group, and any groups below them. 
  + Managers and others who are in higher level groups can view contact records for contacts handled by all the groups below them, such as AgentGroup-1 and 2.

  For this permission, **All** = **View** since **View** is the only action granted.

  For more information about hierarchy groups, see [Organize agents into teams and groups for reporting and access by creating hierarchies](agent-hierarchy.md).
**Important**  
Deleting a hierarchy level severs the link to existing contacts. This action cannot be reversed.
When you change a user's hierarchy group, it may take a couple of minutes for their contact search results to reflect their new permissions.

  The following table lists the typical permissions and which contacts can be viewed on the **Contact search** and **Contact details** pages.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/connect/latest/adminguide/contact-search.html)
**Important**  
We do not recommend assigning permissions in any other combination than what is shown in the preceding table.
+ **Contact Lens - conversational analytics**: On the **Contact details** page for a contact, you can view graphs that summarize conversational analytics: customer sentiment trend, sentiment, and non-talk time. 
+ **Call recordings (redacted) - Access**: If your organization uses Contact Lens, you can assign this permission so agents access only those agent call recordings in which sensitive data has been redacted.
+ **Contact transcripts (redacted) - Access**: If your organization uses Contact Lens, you can assign this permission so agents access only those contact transcripts in which sensitive data has been redacted.
+ **Call recordings (unredacted) - Access**: Use this permission to manage who can access recordings on the **Contact search** and **Contact details** pages. If desired, you can use **Restrict contact access** to ensure they only have access to detailed information for those contacts handled by their hierarchy group.
+ **Contact transcripts (unredacted) - Access**: Use this permission to manage who can view unredacted chat and email conversations, and unredacted voice transcripts produced by Contact Lens on the **Contact search** and **Contact details** pages. If desired, you can use **Restrict contact access** to ensure they only have access to detailed information for those contacts handled by their hierarchy group.
+ **Evaluation forms - perform evaluations**: Allows users to [search for](search-evaluations.md) evaluations by evaluation form, score, last updated date/range, evaluator, and status. 
+ **Voice ID - attributes and search**: If your organization uses Voice ID, users with this permission can search for and view Voice ID results on the **Contact details** page. 
+ **Users - View** permission: You must have this permission to use the **Agent** filter on the **Contact search** page.

By default, the Amazon Connect **Admin** and **CallCenterManager** security profiles have these permissions.

For information about how to add more permissions to an existing security profile, see [Update security profiles in Amazon Connect](update-security-profiles.md).

## How to search for a contact


1. Log in to Amazon Connect with a user account that has [permissions to access contact records](#required-permissions-search-contacts).

1. In Amazon Connect choose **Analytics and optimization**, **Contact search**.

1. Use the filters on the page to narrow your search. For date, you can search up to 8 weeks at a time.

**Tip**  
To see if a conversation was recorded, you need to be assigned to a profile that has **Manager monitor** permissions. If a conversation was recorded, by default the search result will indicate so with an icon in the **Recording** column. You won't see this icon if you don't have permission to review the recordings.

## Additional fields: Add columns to your search results


Use the options under **Additional fields** to add columns in your search results. These options are not used to filter your search.

For example, if you want to include columns for **Agent Name** and **Routing profile** in your search output, choose those columns here.

**Tip**  
The **Is transferred out** option indicates whether the contact was transferred to an external number. For the date and time (in UTC time) when the transfer was connected, see `TransferCompletedTimestamp` in the [ContactTraceRecord](ctr-data-model.md#ctr-ContactTraceRecord). 

## Download search results


You can download up to 3,000 search results at a time. 

## APIs to search contacts

Use the following APIs to search contacts programmatically:
+ [SearchContacts](https://docs.aws.amazon.com/connect/latest/APIReference/API_SearchContacts.html)
+ [DescribeContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_DescribeContact.html)
+ [DescribeContactEvaluation](https://docs.aws.amazon.com/connect/latest/APIReference/API_DescribeContactEvaluation.html)
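
As a sketch of how a programmatic search might look, the following Python (boto3) snippet builds a `SearchContacts` request for voice contacts initiated in the last 24 hours. The instance ID, the channel filter, and the time window are placeholder assumptions; the actual SDK call is shown commented out because it requires AWS credentials.

```python
from datetime import datetime, timedelta, timezone

def build_search_request(instance_id, hours_back=24):
    """Build a SearchContacts request for voice contacts initiated
    within the last `hours_back` hours."""
    end = datetime.now(timezone.utc)
    return {
        "InstanceId": instance_id,           # placeholder instance ID
        "TimeRange": {
            "Type": "INITIATION_TIMESTAMP",  # search by initiation time
            "StartTime": end - timedelta(hours=hours_back),
            "EndTime": end,
        },
        "SearchCriteria": {"Channels": ["VOICE"]},
        "MaxResults": 100,
    }

# With the AWS SDK for Python (boto3) and valid credentials:
# import boto3
# connect = boto3.client("connect")
# response = connect.search_contacts(**build_search_request("your-instance-id"))
# for contact in response["Contacts"]:
#     print(contact["Id"], contact["Channel"])
```

You could then pass each returned contact ID to `DescribeContact` for the full contact details.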

# Search for in-progress contacts in Amazon Connect

A contact that is handled by an agent is considered **In progress** until the agent completes After Contact Work. A contact that is never handled by an agent is considered **In progress** until the contact is disconnected.

**Topics**
+ [Permissions needed to search for in-progress contacts](#permissions-inprogress)
+ [Contact states supported by Contact search](#contactstates-inprogress)
+ [How to search for in-progress contacts](#howto-search-inprogress)
+ [Filter contacts by using timestamp types](#filter-by-timestamp)
+ [View in progress contacts](#view-inprogress-contacts)
+ [Review real-time transcripts](#review-realtime-transcripts)

## Permissions needed to search for in-progress contacts

The permissions needed to search for in-progress contacts are the same as those for searching for completed contacts. For more information, see [Manage who can search for contacts and access detailed information](contact-search.md#required-permissions-search-contacts).

## Contact states supported by Contact search

The ability to search for in-progress contacts varies by channel (see [Contact events data model](contact-events.md#contact-events-data-model) for reference):
+ **Voice**
  + You can search for in-progress queued callbacks after they are queued, connected to an agent or disconnected.
  + For other voice contacts, you can search them only after they are connected to an agent, or have been disconnected. Queued in-progress voice contacts (with the exception of callbacks) are not shown on the **Contact search** page.
+ **Chat**: You can search for chat contacts after they are connected to the system, queued, connected to an agent, or disconnected.
+ **Tasks** and **Email**: You can search for all in-progress tasks and email contacts after they are initiated.

## How to search for in-progress contacts

1. Log in to Amazon Connect with a user account that has [permissions to access contact records](contact-search.md#required-permissions-search-contacts).

1. In Amazon Connect choose **Analytics and optimization**, **Contact search**.

1. Select the **Contact status** filter and change the selected value to **In progress**. The default Contact status is **Completed**.  
![\[The in progress filter.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-in-progress-filter.png)

## Filter contacts by using timestamp types

You can search for contacts in a particular contact state using **Timestamp type** within the **Time range** filter. For example, you can search for task contacts that are scheduled for the next day by selecting **Contact status = In progress**, **Timestamp type = Scheduled** and the appropriate date within **Time range**.

The following timestamp types are supported: initiated, connected (to agent), disconnected, and scheduled. When you search for contacts using a certain **Timestamp type**, the search results do not contain contacts that do not have that timestamp populated. For example, if you search with **Timestamp type = Disconnected** and **Contact status = In progress**, you will only view contacts that are in the After Contact Work state.

**Important**  
The **Time range** filter on the **Contact search** page has **Timestamp type** set to **Initiated** by default. Before the Timestamp type selection was introduced, the Timestamp type used by the **Time range** filter was **Disconnected**.
Saved searches on **Contact search** created before the launch of the ability to search for in-progress contacts (launched September 2023) have been updated with the filters **Contact status = Completed** and **Timestamp type = Disconnected**. These selections were implied before the launch of in-progress contacts.

## View in progress contacts

Choose a contact ID within the **Contact search** results to view details of an in-progress contact. 

![\[View an in-progress contact.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-in-progress-view.png)


### Important things to know
+ The **Contact details** page for an in-progress contact shows the data that was available at the time the page was opened. It does not automatically refresh as the contact progresses. You need to refresh the page manually using your browser.
+ Certain fields on the **Contact search** and **Contact details** pages may have missing or inconsistent information while the contact is in progress. After a contact is completed, the information eventually becomes consistent with the underlying contact record, after the page is manually refreshed. 
+ There may be a delay between the contact being **Completed** and the contact being marked as **Completed** on the contact record.

## Review real-time transcripts

For voice contacts with real-time call analytics enabled, you can view the transcript of a contact in real time on the **Contact details** page if you have the security profile permission **Contact transcripts (unredacted) - Access**. 

**Note**  
Redaction is not supported for in-progress voice contacts. Users who have only the **Contact transcripts (redacted) - Access** permission cannot view transcripts of in-progress voice contacts.

Choose the refresh icon on the bottom of the transcript to pull the latest available turns of the conversation. The following image shows the location of the refresh icon on the page.

![\[A transcript, the refresh icon at the bottom of the page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-real-time-transcripts.png)


# Search for contacts in Amazon Connect by using custom contact attributes or contact segment attributes

You can create search filters based on custom contact attributes (also called [user-defined contact attributes](connect-attrib-list.md#user-defined-attributes)) or contact segment attributes. 

For example, if you add `AgentLocation` and `InsurancePlanType` to your contact records as custom attributes, you can search for contacts with specific values in these attributes, such as calls handled by agents located in Seattle, or calls made by customers who purchased homeowners insurance.
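
As a hedged sketch of how such attributes get onto a contact record in the first place, the following Python (boto3-oriented) snippet tags a contact with the `AgentLocation` and `InsurancePlanType` attributes from the example above. The instance and contact IDs are placeholders, and the SDK call is commented out because it requires AWS credentials.

```python
# Custom (user-defined) contact attributes to attach to a contact so they
# can later be used as search filters. Keys are case sensitive.
custom_attributes = {
    "AgentLocation": "Seattle",
    "InsurancePlanType": "Homeowners",
}

# With the AWS SDK for Python (boto3) and valid credentials:
# import boto3
# connect = boto3.client("connect")
# connect.update_contact_attributes(
#     InstanceId="your-instance-id",      # placeholder
#     InitialContactId="the-contact-id",  # placeholder
#     Attributes=custom_attributes,
# )
```

Attributes can also be set in a flow with the Set contact attributes block; either way, the keys must be configured as searchable before they appear as filters.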

**Topics**
+ [Required permissions to configure searchable contact attributes](#permissions-search-custom-attributes)
+ [Configure searchable custom contact attributes](#configure-search-custom-attributes)
+ [Edit, add, or remove contact attributes](#edit-add-remove-attribute-keys)
+ [Filter contact search results on contact attributes](#howto-search-for-custom-attributes)
+ [Filter contact search results on contact segment attributes](#filter-contact-search-segment)

## Required permissions to configure searchable contact attributes


By default, a custom attribute isn't indexed until someone with appropriate permissions, such as an admin or manager, specifies it should be searchable. You grant permissions to select users so they can configure which custom contact attributes can be added as a search filter. 

Assign the following permissions to their security profile: 
+ Enable one of the following permissions to access the **Contact Search** page:
  + **Contact search**: Allows you to search for all contacts.
  + **View my contacts**: Allows agents to view only those contacts that they handled.
+ **Contact attributes**: Allows users to view contact attributes. Also controls access to the search filters based on contact attributes.
+ **Configure searchable contact attributes** - **All**: People who have this permission determine what custom data will be searchable (by people who have the **Contact attributes** permission). It allows them to access the following configuration page:   
![\[The search customer contact attributes page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-custom-attributes-configuration-page.png)

## Configure searchable custom contact attributes


1. On the **Contact search** page, choose **Add filter**, **Custom contact attribute**. Only people with **Configure searchable contact attributes** permissions in their security profile see this option.  
![\[The contact search page, the filters dropdown menu, the Customer contact attribute option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-custom-attributes-specify1.png)

1. The first time you choose **Custom contact attribute**, the following box appears, indicating no attributes have been configured for this Amazon Connect instance. Choose **Specify searchable attribute keys**.  
![\[The add filter option, a message that no keys have been specified for search.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-custom-attributes-specify2.png)

1. In the **Attribute key** box, type the name of your custom attribute, and then choose **Add key**.
**Important**  
You must type the exact key name. It is case sensitive.

1. When finished, choose **Save**.

Your users will be able to search on these keys for any future contacts.

## Edit, add, or remove contact attributes


To edit, add, or remove keys, choose **Attribute**, **Settings**. If you don't see the **Settings** option, you don't have the required permissions.

![\[The add filter tab, the settings gear in the upper right corner of the page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-search-custom-attributes-settings.png)


## Filter contact search results on contact attributes


Users who have the **Contact attributes** permission in their security profile can find contacts by using the contact attribute filters.

1. On the **Contact search** page, choose **Add filter**, **Custom contact attribute**, and then choose **Specify searchable attribute keys**.

1. On the **Searchable customer contact attributes** page, in the **Attribute key** box, enter the attribute key, choose **Add key**, and then choose **Save**.

1. Return to the **Contact search** page. Use **Add filter** to choose from the dropdown menu the attribute you just added. In the **Attribute value** box, enter the value you want to find. 

## Filter contact search results on contact segment attributes


After you create predefined attributes and attach them to a contact segment (explained in [Use contact segment attributes](use-contact-segment-attributes.md)), you can filter contact search results based on the segment attribute values. 

The following image shows the **Contact search** page, and the option to filter contact search results based on custom segment attribute values. 

![\[The Contact search page, the Segment attributes filter.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/attribute-management-4.png)


1. On the **Contact search** page, under the **Add filter** drop-down, select **Custom contact segment attributes**.

1. Select the predefined attribute that you want to apply to the filtering criteria. For example, the previous image shows **Business-unit-name** as the **Attribute name**.

1. If the selected predefined attribute has established values, they are listed under **Attribute value(s)** as a multi-selection choice. For example, the previous image shows Accounts, Billing, Customer Support, and Marketing as options.

1. Choose **Apply**. 

# Monitor live and recorded conversations using Amazon Connect Contact Lens

Managers can monitor, or listen in on, live conversations between agents and contacts. They can also review and download recordings of past interactions, for both automated interactions (IVR) and agent interactions. 

Amazon Connect provides two options to set up contact monitoring:
+ **Multi-party contacts**: Monitor live conversations that have up to six participants. There's no additional charge for this option.

  This option enables you to [barge](monitor-barge.md) into live conversations (voice and chats), and record chat transcripts.

  You enable this capability on the Amazon Connect console by choosing **Enable Multi-Party Calls and Enhanced Monitoring for Voice** and **Enable Multi-Party Chats and Enhanced Monitoring for Chat**, as shown in the following image.   
![\[The Telephony and chat options page, the enhanced contact monitoring capabilities section.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/barge-voice-chat-enable.png)
+ **Three-party voice contacts**: Monitor conversations that have up to three participants. This is the default behavior. There's no additional charge for this option.

  You cannot barge into calls or chats.

  You enable this capability by adding a [Set recording and analytics behavior](set-recording-behavior.md) block to your flow.

How agents manage the conferencing experience is very different between these two options. Enhanced monitoring provides more functionality for the agents. See [Comparison of enhanced contact monitoring (multi-party) and three-party functionality in Amazon Connect](three-party-multi-party-comparison.md).

**Important**  
New events are added to the agent event stream when you choose **Enhanced contact monitoring capabilities**.   
If you choose to start with the default three-party capability enabled by the [Set recording and analytics behavior](set-recording-behavior.md) block, and then later switch to **Enhanced contact monitoring capabilities**, know that new events will be added to the agent event stream. This will cause problems if you have customized your contact center based on the previous agent event stream.

**Topics**
+ [When, what, and where for contact recordings](about-recording-behavior.md)
+ [How to set up S3 Object Lock for immutable call recordings](s3-object-lock-call-recordings.md)
+ [Comparison of multi-party and three-party functionality](three-party-multi-party-comparison.md)
+ [Enable enhanced multi-party contact monitoring](monitor-conversations.md)
+ [Enable three-party call monitoring](enable-three-party-monitoring.md)
+ [Enable contact recording](set-up-recordings.md)
+ [Assign permissions](monitor-conversations-permissions.md)
+ [Monitor live conversations](monitor-conversations-howto.md)
+ [Barge live voice and chat conversations](monitor-barge.md)
+ [Review recorded conversations](review-recorded-conversations.md)
+ [Troubleshoot monitoring conversations](ts-monitoring-conversations.md)

# When, what, and where for contact recordings in Amazon Connect

This topic explains when conversations are recorded, where recordings are stored, and how to access them. It also provides best practices for managing recordings and transcripts.

**Topics**
+ [When is a conversation recorded?](#when-conversation-recorded)
+ [Where are recordings and transcripts stored?](#where-are-recordings-stored)
+ [When are recordings available?](#when-are-recordings-available)
+ [Prevent agents from accessing recordings](#recording-prevent-access)
+ [Headset requirements for listening to recordings](#recording-headset-requirements)

## When is a conversation recorded?

+ The call recording feature has options for choosing whether to record the customer and system audio during IVR interactions or any combination of customer, agent, or both during agent interactions. 
+ There are a total of two possible recordings per contact: one for automated interactions (that is, IVR) and one for agent interactions. Enabling or disabling recording for automated interactions takes effect immediately. Conversely, modifying recording for agent interactions only takes effect after the agent joins the call.
+ Agent audio is NOT transmitted to Amazon Connect when the agent is not on a call. On November 9, 2023, Amazon Connect deployed an optimization to improve agent productivity that pre-configures the microphone media stream of the agent's browser before the contact arrives. This reduces setup time for both incoming and outgoing calls. As a result, the microphone icon in the agent's browser appears to be on, even when the agent is not on a call. 
+ When a customer is on hold during agent interaction, the agent is still recorded.
+ The transfer conversation between agents is recorded.
+ When a call is transferred during a flow or IVR interaction (for example, by using the Transfer to phone number block), the recording continues to capture what the customer says and hears, even after they are transferred to an external voice system.
+ Any transfers to external numbers during the agent interaction are not recorded after the agent leaves the call.
+ If a participant mutes their own microphone, for example, to consult with someone sitting next to them, their side-bar conversation is not recorded. 

## Where are recordings and transcripts stored?

Agent and contact audio are stored on separate channels of a stereo audio file.
+ For automated (IVR) interactions, the stereo file contains customer audio in the right channel and system prompts in the left channel.
+ For agent interactions, the agent audio is stored in the right channel, and customer audio (as well as audio from conferenced third parties) in the left channel.

Recordings are stored in the Amazon S3 bucket that is [created for your instance](amazon-connect-instances.md#get-started-data-storage). Any user or application with the appropriate permissions can access the recordings in the Amazon S3 bucket. 

Encryption is enabled by default for all call recordings using Amazon S3 server-side encryption with AWS KMS. The encryption is at the object level: the report and recording objects are encrypted; there is no encryption at the bucket level.

You shouldn't disable encryption.

**Important**  
For voice conversations to be stored in an Amazon S3 bucket, you need to enable recording in the flow block using the [Set recording and analytics behavior](set-recording-behavior.md) block.
For chat conversations, if there's an S3 bucket for storing chat transcripts, then all chats are recorded and stored there. If no bucket exists, then no chats are recorded. However, if you want to monitor chat conversations, you still need to add the [Set recording and analytics behavior](set-recording-behavior.md) block to the flow.
If a recording is moved from one S3 bucket to another for any reason, such as the retention period has expired, then the recording will no longer be accessible by Amazon Connect.

**Tip**  
We recommend using the contact ID to search for recordings.  
Even though many call recordings for specific contact IDs may be named with the contact ID prefix itself (for example, 123456-aaaa-bbbb-3223-2323234.wav), there is no guarantee that the contact IDs and name of the contact recording file *always* match. By using **Contact ID** for your search on the [Contact search](search-recordings.md) page, you can find the correct recording by referring to the audio file on the contact record.
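
If you do need to locate a recording object directly in Amazon S3, the following Python sketch builds the default key prefix and matches objects on the contact ID. The `connect/<alias>/CallRecordings/YYYY/MM/DD/` layout, the bucket name, and the contact ID are assumptions based on the default configuration; verify the path prefix against your own instance's data storage settings.

```python
from datetime import date

def recording_prefix(instance_alias, contact_date):
    """Build the default S3 key prefix where Amazon Connect stores call
    recordings: connect/<alias>/CallRecordings/YYYY/MM/DD/. The layout
    is configurable, so confirm it against your own bucket."""
    return (
        f"connect/{instance_alias}/CallRecordings/"
        f"{contact_date.year}/{contact_date.month:02d}/{contact_date.day:02d}/"
    )

# With the AWS SDK for Python (boto3) and valid credentials, you could then
# list objects under the prefix and match on the contact ID:
# import boto3
# s3 = boto3.client("s3")
# resp = s3.list_objects_v2(
#     Bucket="your-recording-bucket",
#     Prefix=recording_prefix("your-instance-alias", date(2024, 5, 1)),
# )
# matches = [o["Key"] for o in resp.get("Contents", [])
#            if "your-contact-id" in o["Key"]]
```

Matching on the contact ID anywhere in the key, rather than assuming it is the file name prefix, sidesteps the caveat above that file names and contact IDs do not always match.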

## When are recordings available?


When recording for an agent interaction is enabled, the recording is placed in your S3 bucket shortly after the contact is disconnected. When IVR recording is enabled, the recording is placed in your S3 bucket shortly after the contact is disconnected, or after the call is answered by an agent. You can [review the recording](review-recorded-conversations.md) for both agent interactions and automated interactions (IVR).

**Important**  
You can also access the recording from the customer's [contact record](sample-ctr.md). However, the recording is available in the contact record only after the contact has left the [After Contact Work (ACW) state](metrics-agent-status.md#agent-status-acw). The IVR recording becomes available shortly after the call is connected to the agent, or after the contact is disconnected.

**Tip**  
Amazon Connect uses the Amazon S3 [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) and [MultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_MultipartUpload.html) APIs to upload the call recording to your S3 bucket. If you are using [S3 Event Notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html) when call recordings are uploaded successfully to your bucket, make sure you enable the notification for **All object create events**, or for both *s3:ObjectCreated:Put* and *s3:ObjectCreated:CompleteMultipartUpload* event types. 
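
As a sketch of the notification setup the tip describes, the following Python snippet builds an S3 notification configuration that subscribes an SQS queue to both object-created event types Amazon Connect uses when uploading recordings. The queue ARN and bucket name are placeholder assumptions, and the SDK call is commented out because it requires AWS credentials.

```python
# Subscribe an SQS queue (placeholder ARN) to both event types that can
# fire when Amazon Connect uploads a call recording to the bucket.
notification_configuration = {
    "QueueConfigurations": [
        {
            "QueueArn": "arn:aws:sqs:us-west-2:111122223333:recording-events",
            "Events": [
                "s3:ObjectCreated:Put",
                "s3:ObjectCreated:CompleteMultipartUpload",
            ],
        }
    ]
}

# With the AWS SDK for Python (boto3) and valid credentials:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_notification_configuration(
#     Bucket="your-recording-bucket",
#     NotificationConfiguration=notification_configuration,
# )
```

Subscribing to only one of the two event types would silently miss recordings uploaded via the other API, which is the pitfall the tip warns about.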

## Prevent agents from accessing recordings

 To prevent agents from accessing recordings outside of their agent hierarchy, assign them the **Restrict contact access** security profile permission. For more information, see [Assign permissions to review past contact center conversations in Amazon Connect](assign-permissions-to-review-recordings.md). 

## Headset requirements for listening to recordings

You need to use an output device (headset or other device) that supports stereo output so you can hear both the agent and customer audio.

Agent and customer recordings are presented in two separate channels. With a two-ear headset, each ear plays one channel. With a one-ear headset, there isn't a mechanism to mix the two channels into one. 

# How to set up S3 Object Lock for immutable call recordings

You can use Amazon S3 Object Lock in combination with your recording bucket to help prevent call recordings and IVR recordings from being deleted or overwritten for a fixed amount of time, or indefinitely. 

Object Lock adds another layer of protection against object changes and deletion. It can also help meet regulatory requirements for Write-Once-Read-Many (WORM) storage.

## Important things to know
+ You can enable Amazon S3 Object Lock on new and existing buckets.
+ You must enable versioning on your call recording bucket.
+ After you enable Amazon S3 Object Lock, you cannot remove it.
+ We recommend using a dedicated call recording bucket because all objects will be locked after the default Object Lock retention policy is applied.
+ Ensure that your retention policy is appropriate for your requirements. After the policy is configured, your call recordings will be protected from deletion for the duration specified.
+ We strongly recommend that you thoroughly test the policy in a non-production environment before implementing it in production.
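
The points above can be sketched as a default retention rule. The following Python snippet builds an Object Lock configuration that protects every new recording object for 365 days; the bucket name, the 365-day period, and the choice of COMPLIANCE mode are placeholder assumptions, not a recommended policy. The SDK call is commented out because it requires AWS credentials, and because COMPLIANCE-mode retention cannot be shortened or removed once applied.

```python
# A default Object Lock retention rule: every new object in the bucket is
# protected from deletion or overwrite for 365 days. COMPLIANCE mode cannot
# be overridden, even by the root user, until the retention period expires.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # or "GOVERNANCE" for an overridable lock
            "Days": 365,           # placeholder retention period
        }
    },
}

# With the AWS SDK for Python (boto3) and valid credentials:
# import boto3
# s3 = boto3.client("s3")
# s3.put_object_lock_configuration(
#     Bucket="your-recording-bucket",
#     ObjectLockConfiguration=object_lock_configuration,
# )
```

GOVERNANCE mode is the safer choice for testing, because users with the `s3:BypassGovernanceRetention` permission can still remove the lock.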

## Step 1: Create an S3 bucket with Object Lock enabled

For a tutorial on creating a new S3 bucket with Object Lock enabled, see [Protect Data on Amazon S3 Against Accidental Deletion or Application Bugs Using S3 Versioning, S3 Object Lock, and S3 Replication](https://aws.amazon.com/getting-started/hands-on/protect-data-on-amazon-s3/). 

## Step 1A: Enable Object Lock for an existing Amazon S3 bucket


For information about enabling Object Lock on an existing bucket, see [Enable Object Lock on an existing Amazon S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-configure.html#object-lock-configure-existing-bucket) in the *Amazon S3 User Guide*.

## Step 2: Configure Amazon Connect to use the S3 bucket for call recordings

1. Open the Amazon Connect console at [https://console.aws.amazon.com/connect/](https://console.aws.amazon.com/connect/).

1. On the instances page, choose the instance alias.  
![\[The Amazon Connect virtual contact center instances page, the instance alias.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/instance.png)

1. In the navigation pane, choose **Data storage**.

1. In the **Call recordings** section, choose **Edit**.

1. Choose **Select an existing S3 bucket**, and then in the **Name** dropdown box choose the bucket that you enabled Object Lock for.

1. Choose **Save**.

## Step 3: Test that Object Lock is enabled
Step 3: Test that Object Lock is enabled

1. Make a test call to your contact center to generate a call recording.

1. Log in to Amazon Connect at https://*your-instance*.my.connect.aws/home, with an Admin account, or an account that has [permissions to search for contacts](contact-search.md#required-permissions-search-contacts). 

1. Choose **Analytics and optimization**, **Contact search**. Search for your call recording to find the contact ID. Copy the contact ID. You're going to use it in the next step to locate the call recording in your S3 bucket.

1. Open the Amazon S3 console, select the bucket you created in Step 1, and follow the path prefix. The path to the call recording includes the year, month, and day the recording was made. After you're in the correct path prefix, search for the contact ID of the call recording.   
![\[The Amazon S3 console, the search box, the path prefix.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/s3-objectlock-pathprefix.png)

1. Select the **Show versions** toggle next to the **Search** box. This option allows you to attempt to delete the object instead of only applying a delete marker. Applying a delete marker is the standard behavior when you delete an object from an S3 bucket with versioning enabled.

1. Select the call recording (the box to the left of the recording name), and then choose **Delete**. In the confirmation box, enter **permanently delete** and select **Delete objects**.

1. Review the **Delete objects: status** notification to confirm that the delete operation has been blocked due to the Object Lock policy.   
![\[The Amazon S3 console, Delete objects status notification.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/s3-objectlock-failed.png)
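If you script the lookup in step 4, the date-based prefix can be assembled programmatically. The `connect/<instance-alias>/CallRecordings/` portion below is an assumption based on the default storage prefix; check the **Data storage** page of your instance for the actual value. The year/month/day structure matches what the console shows.

```python
from datetime import date

# Hypothetical prefix layout. Only the YYYY/MM/DD structure is documented
# behavior; the leading "connect/<alias>/CallRecordings/" part is an
# assumption -- verify it against your instance's data storage settings.
def recording_prefix(instance_alias: str, call_date: date) -> str:
    return (
        f"connect/{instance_alias}/CallRecordings/"
        f"{call_date.year}/{call_date.month:02d}/{call_date.day:02d}/"
    )

print(recording_prefix("my-instance", date(2024, 7, 9)))
# → connect/my-instance/CallRecordings/2024/07/09/
```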

# Comparison of enhanced contact monitoring (multi-party) and three-party functionality in Amazon Connect
Comparison of multi-party and three-party functionality

This topic describes how the agent's experience differs when [enhanced contact monitoring](monitor-conversations.md) (multi-party) is enabled instead of the default three-party capability.

For information about new functionality in the existing Connection and Contact API in Amazon Connect Streams, see the [Amazon Connect Streams Readme](https://github.com/amazon-connect/amazon-connect-streams/blob/master/README.md). 

Following are key features for agents who use multi-party monitoring:
+ All agents see all of the connections in a call.
+ All agents have exactly the same capabilities as any other agent on the call. This takes effect the moment an agent accepts the invitation to join the call.
+ Before a warm transfer is complete, an agent can start talking to the caller as well as disconnect any other agent on the call.

**Note**  
When calls have three or more participants, agents can add participants to the call even after a caller drops.  
The following example illustrates how previous and next contact IDs are mapped when an agent performs a series of consults followed by a transfer.  

![\[Diagram showing how contact IDs are mapped during a multi-party call.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/connect-consult-diagram.png)

The following example illustrates how previous and next contact IDs are mapped in a scenario where agents perform a series of transfers.  

![\[Diagram showing how previous and next contact IDs are mapped when agents transfer callers.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/connect-transfer-diagram.png)

The following example illustrates how previous and next contact IDs are mapped in a scenario where additional web, in-app, and video calling users are added.  

![\[Diagram showing how contact IDs are mapped when additional web, in-app, and video calling users are added.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/webrtc-diagram2.png)


The following table summarizes the differences between the agent's experience using the Contact Control Panel (CCP) for three-party calls and multi-party calls. For more information about the agent's experience with multi-party conversations, see [Host multi-party calls](multi-party-calls.md) and [Host multi-party chats](multi-party-chat.md).
+ Primary agent: the first agent on the call.
+ Secondary agent: any agent other than the first agent on the call.


| Three-party calls | Multi-party calls | 
| --- | --- | 
|  Agent can control hold, resume, and disconnect only the parties they add.  |  All agents have the same call control capabilities.  | 
|  Agent can add one other participant to an existing call, for a total of three participants (the agent, the caller, and another participant).  |  Any agent on the call can add additional participants, as long as the total number of participants on the call, including themselves, does not exceed six.  When calls have three or more participants, agents can add participants to the call even after a caller drops.   | 
|  Agent can put only the party they added on hold.  |  Any agent on the call can put any party on hold.  | 
|  When a primary agent places a secondary agent on hold, the secondary agent can't take themselves off hold.  |  Any agent on the call can take themselves off hold.  | 
|  Secondary agent can talk to the primary agent during hold.  |  Secondary agents cannot talk to each other until they are taken off hold.   | 
|  Primary agent can only mute themselves. Secondary agent can only mute themselves.  |  Any agent on the call can mute any other participant on the call.  | 
|  An agent can only unmute themselves, not another agent.  |  An agent can only unmute themselves, not another agent.  However, an agent can unmute participants who are not agents.   | 
|  When an agent disconnects (leaves or is disconnected), call control continues to be available to the remaining agent(s) on the call.  |  When an agent disconnects, control of the call is transferred to the remaining agents.   | 
|  Only the primary agent can disconnect a party on the call. The secondary agent can disconnect the caller only if the primary agent has disconnected.  |  All agents have the capability to disconnect any other party.  | 
|  The primary agent can see two connections (caller and another party), while a secondary agent sees only the transfer connection.  |  All agents can see all connections.  | 
|  An agent only sees **internal transfer** for another agent on the call.  |  An agent sees the quick connect ID for other agents, instead of just **internal transfer**.  | 
|  Not applicable.  |  When a party is being dialed, an agent on a multi-party call cannot add another party until the prior dial operation completes (party added or call leg terminated).  | 
|  Additional WebRTC users cannot be added.  |  [Additional WebRTC users can be added](enable-multiuser-inapp.md).   | 

# Enable enhanced multi-party contact monitoring in Amazon Connect
Enable enhanced multi-party contact monitoring

Enhanced contact monitoring applies to voice calls and all supported types of chats: chat/SMS, WhatsApp, and Apple Messages for Business.

## Calls
Calls

Enhanced contact monitoring enables agents to [host](multi-party-calls.md) up to 6 participants on a call. Two supervisors can [monitor](monitor-conversations-howto.md) the call. It also enables managers to [barge](monitor-barge.md) into conversations.

For example, agents can have a group of six participants in the call at the same time. Two supervisors can monitor the call. The two supervisors can do two silent monitor sessions, or one silent monitor and one barge-in session. 

The total number of participants on a call would look like this:

1. Customer - participant

1. Agent 1 - participant

1. Agent 2 - participant

1. Agent 3 - participant

1. Agent 4 - participant

1. Agent 5 - participant

1. Supervisor who can listen but not barge in the call

1. Supervisor who can listen or barge in the call

There is no limit to the number of conversations that can be monitored in an instance. 
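The limits in this example can be summarized in a small validation sketch. The constants are taken from this guide, not from an AWS API:

```python
# Enhanced-monitoring voice limits as described above: up to six call
# participants, plus up to two supervisors, of whom at most one may be
# barged in. Hard-coded from this guide; illustrative only.
MAX_PARTICIPANTS = 6
MAX_SUPERVISORS = 2
MAX_BARGED = 1

def within_voice_limits(participants: int, monitoring: int, barged: int) -> bool:
    return (
        participants <= MAX_PARTICIPANTS
        and monitoring + barged <= MAX_SUPERVISORS
        and barged <= MAX_BARGED
    )

# The eight-connection example above: six participants, one silent
# monitor, and one barged-in supervisor.
assert within_voice_limits(participants=6, monitoring=1, barged=1)
assert not within_voice_limits(participants=7, monitoring=0, barged=0)
```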

## Chats
Chats

Enhanced contact monitoring enables agents to [host](multi-party-chat.md) four additional participants on an ongoing customer service chat, for a total of six participants: the agent, the customer, and four other people. Agents can use quick connects to add participants.

Regardless of whether the enhanced contact monitoring capability is enabled for an instance, you can have up to five people monitor a chat at the same time. Only one supervisor at a time can be barged in to a given chat.

The total number of participants on the chat would look like this:

1. Customer

1. Agent

1. Supervisor who can monitor the chat and barge in

1. Supervisor who can monitor the chat but not barge in

1. Supervisor who can monitor the chat but not barge in

1. Supervisor who can monitor the chat but not barge in

1. Supervisor who can monitor the chat but not barge in
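As with voice, the chat limits above can be expressed as a quick check. The constants come from this guide, not from an AWS API; note that a barged-in supervisor counts toward the five monitors:

```python
# Chat limits as described above: up to six chat participants, up to five
# simultaneous monitors, and at most one of those monitors barged in.
MAX_CHAT_PARTICIPANTS = 6
MAX_CHAT_MONITORS = 5
MAX_CHAT_BARGED = 1

def within_chat_limits(participants: int, monitors: int, barged: int) -> bool:
    # barged supervisors are counted among the monitors
    return (
        participants <= MAX_CHAT_PARTICIPANTS
        and monitors <= MAX_CHAT_MONITORS
        and barged <= MAX_CHAT_BARGED
        and barged <= monitors
    )

# The seven-connection example above: customer + agent, five supervisors
# monitoring, one of whom is barged in.
assert within_chat_limits(participants=2, monitors=5, barged=1)
```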

## Important things to know
Important things to know
+ New events are added to the agent event stream when you choose **Enhanced contact monitoring capabilities** on the Amazon Connect console. 

  However, if you instead choose to start with the default three-party capability enabled by the [Set recording and analytics behavior](set-recording-behavior.md) block, and then later switch to **Enhanced contact monitoring capabilities**, know that new events will be added to the agent event stream. This will cause problems if you have customized your contact center based on the previous agent event stream.
+ If you do not enable **Enhanced contact monitoring capabilities** at the instance level, you need to add and configure a [Set recording and analytics behavior](set-recording-behavior.md) block to your flow in order to get the chat monitoring and barge features.
+ By default, calls can have three participants, such as two agents and a caller, or an agent, a caller, and an external party. When you enable enhanced contact monitoring, the agent's experience changes. See [Comparison of multi-party and three-party functionality](three-party-multi-party-comparison.md). 
+ All agents have a ParticipantRole of `AGENT` in the transcript. Supervisors have a ParticipantRole of `SUPERVISOR` in the transcript.
+ The initiation method for the contact where the agent is invited is TRANSFER. For information about how to distinguish in reporting how often a participant is being invited instead of being transferred to, see [Identify conferences and transfers by using Amazon Connect contact records](identify-conferences-transfers.md).
+ This feature is only available in CCPv2. That is, the URL to access the CCP is https://*instance name*.my.connect.aws/ccp-v2/ and the URL to access the agent workspace is https://*instance name*.my.connect.aws/agent-app-v2/. It's also available in custom CCP using Amazon Connect Streams.js.
+ If you are using Contact Lens, or plan to in the future, see [Multi-party calls and conversational analytics](enable-analytics.md#multiparty-calls-contactlens) before enabling multi-party calls. Contact Lens supports calls with up to two participants. We recommend that you disable Contact Lens in the [Set recording and analytics behavior](set-recording-behavior.md) block for contacts that are expected to have three or more participants.
+ In custom CCPs, use the updated Amazon Connect Streams API to enable multi-party calling, up to six parties. See the [Amazon Connect Streams](https://github.com/amazon-connect/amazon-connect-streams/blob/master/Documentation.md#connectcoreinitccp) documentation on GitHub. 
+ AWS GovCloud (US-West): You can't enable this feature using the console user interface. Instead, use the [UpdateInstanceAttribute](https://docs.aws.amazon.com//connect/latest/APIReference/API_UpdateInstanceAttribute.html) API or contact AWS Support.

## How to enable enhanced multi-party contact monitoring
How to enable enhanced multi-party contact monitoring

1. In the Amazon Connect console, on the menu pane, choose **Telephony**.

1. On the **Telephony and chat options** page, scroll to the **Enhanced contact monitoring capabilities** section.  
![\[The Telephony and chat options page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/telephony-chat-options.png)

1. Choose the options you want to enable, and then choose **Save**.

1. Log in to the Amazon Connect admin website. [Assign security profile permissions](assign-permissions-to-review-recordings.md) to managers so they can monitor and barge live conversations, and review recordings.

1. Show managers how to [monitor live conversations](monitor-conversations-howto.md), [barge live conversations](monitor-barge.md) and [review past recordings](review-recorded-conversations.md) in Amazon Connect.

# Enable three-party call monitoring in Amazon Connect
Enable three-party call monitoring

**Important**  
This topic applies only if you have **NOT** enabled **Enhanced contact monitoring capabilities** on the Amazon Connect console as explained in [Enable enhanced multi-party contact monitoring](monitor-conversations.md).  
It only applies to voice calls that are limited to three or fewer parties.  
For information about how the conferencing experience differs for agents when enhanced monitoring capabilities are enabled, see [Comparison of multi-party and three-party functionality](three-party-multi-party-comparison.md).   
We recommend choosing three-party monitoring only if you have an external system that imposes a technical constraint requiring this option. Otherwise, choose enhanced monitoring; there is no pricing difference.

You can add and configure a [Set recording and analytics behavior](set-recording-behavior.md) block in your flows to allow three participants on a contact and up to five supervisors monitoring the call. Managers cannot barge in to a call.

For example, you can have a group of 3 participants on the call at the same time. Up to 5 supervisors can monitor the call. 

The total number of participants on a call would look like this:

1. Customer - participant

1. Agent 1 - participant

1. Agent 2 - participant

1. Supervisor who can listen but not barge in the call

1. Supervisor who can listen but not barge in the call

1. Supervisor who can listen but not barge in the call

1. Supervisor who can listen but not barge in the call

1. Supervisor who can listen but not barge in the call

To view a sample flow with the **Set recording behavior** block configured, see [Sample recording behavior in Amazon Connect](sample-recording-behavior.md).

**Note**  
We recommend using the **Set recording behavior** block in an inbound or outbound whisper flow for the most accurate behavior.  
Using this block in a queue flow does not always guarantee that calls are recorded. This is because the block might run after the contact is joined to the agent.

**To set up monitoring for three-party contacts**

1. Log in to your Amazon Connect instance using an account that has permissions to edit flows.

1. On the navigation menu, choose **Routing**, **Flows**.   
![\[Amazon Connect navigation menu, Routing, flows.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/menu-contact-flows.png)

1. Open the flow that handles customer contacts you want to monitor.

1. In the flow, before the contact is connected to an agent, add a [Set recording and analytics behavior](set-recording-behavior.md) block to the flow.

1. To configure the [Set recording and analytics behavior](set-recording-behavior.md) block, under **Agent and customer voice recording**, choose **On** and then choose **Agent and Customer**. This only takes effect after the agent joins the call. 

1. Choose **Save** and then **Publish** to publish the updated flow.

1. [Assign security profile permissions](assign-permissions-to-review-recordings.md) to managers so they can monitor conversations.

1. Show managers how to monitor conversations.

# Enable contact recording
Enable contact recording

To enable the recording of voice conversations, you need to add a [Set recording and analytics behavior](set-recording-behavior.md) block to your flow. You need to do this regardless of whether your Amazon Connect instance is enabled for multi-party contacts (enhanced contact monitoring) or three-party contacts.

**Important**  
**Chats**: You only need to perform these steps for chat conversations if [enhanced contact monitoring for chat contacts](monitor-conversations.md) is not enabled for your instance. Otherwise, chat transcripts are automatically recorded because an S3 bucket was created to store them when you set up your instance. To stop recording chat transcripts, remove the S3 bucket. 

**To set up recording of conversations**

1. Log in to your Amazon Connect instance using an account that has permissions to edit flows.

1. On the navigation menu, choose **Routing**, **Flows**.   
![\[Amazon Connect navigation menu, Routing, flows.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/menu-contact-flows.png)

1. Open the flow that handles customer contacts you want to record.

1. In the flow, before the contact is connected to an agent, add a [Set recording and analytics behavior](set-recording-behavior.md) block to the flow.

1. To configure the [Set recording and analytics behavior](set-recording-behavior.md) block, choose from the following: 
   + Automated interaction call recording
     + **On** starts recording customer and IVR audio immediately.
     + **Off** pauses any ongoing IVR recording.
   + Agent and customer voice recording
     + When **On**, you can select from Agent and Customer, Agent only, or Customer only. This only takes effect after the agent joins the call. 
     + When **Off**, no recording is captured when the agent joins the call.
   + To record chat conversations, choose **Agent and Customer**.
**Important**  
You only need to perform these steps for chat conversations if [enhanced contact monitoring for chat contacts](monitor-conversations.md) is not enabled for your instance. Otherwise, chat transcripts are automatically recorded because an S3 bucket was created to store them when you set up your instance. To stop recording chat transcripts, remove the S3 bucket. 

1. Choose **Save** and then **Publish** to publish the updated flow.

1. [Assign security profile permissions](assign-permissions-to-review-recordings.md) to managers so they can review recordings.

1. Show managers how to access past recordings in Amazon Connect. See [Review recorded conversations](review-recorded-conversations.md).

**To set up recording behavior for outbound calls**

1. Create a flow, using the outbound whisper flow type.

1. Add a [Set recording and analytics behavior](set-recording-behavior.md) block to that flow.

1. Set up a queue that will be used for making outbound calls. In the **Outbound whisper flow** box, choose the flow that has [Set recording and analytics behavior](set-recording-behavior.md) in it. 

**To set up human readable logs that contain key interaction points with Amazon Lex**

1. Log in to the Amazon Connect console.

1. On the navigation menu, choose **Flows**. 

1. Scroll down the page, choose **Enable Bot Analytics and Transcripts in Amazon Connect**, and then choose **Save**. 

1. In the Amazon Connect admin website, [assign security profile permissions](assign-permissions-to-review-recordings.md#assign-permissions-to-view-automated-recordings-transcripts) to managers so they can view details of the interaction with DTMF menus and Lex bots, and/or additional information about flows.

# Assign permissions to monitor live conversations in the Amazon Connect Contact Control Panel (CCP)
Assign permissions

For managers to monitor live conversations, you assign them the **CallCenterManager** and **Agent** security profiles. To allow agent trainees to monitor live conversations, you may want to create a security profile specific for this purpose.

**To assign a manager permissions to monitor a live conversation**

1. Go to **Users**, **User management**, choose the manager, and then choose **Edit**.

1. In the Security Profiles box, assign the manager to the **CallCenterManager** security profile. This security profile also includes a setting that makes the icon to download recordings appear in the results of the **Contact search** page. 

1. Assign the manager to the **Agent** security profile so they can access the Contact Control Panel (CCP), and use it to monitor the conversation.

1. Choose **Save**. 

**To create a new security profile for monitoring live conversations**

1. Choose **Users**, **Security profiles**. 

1. Choose **Add new security profile**. 

1. Expand **Analytics and optimization**, then choose **Access metrics** and **Real-time contact monitoring**.

   **Access metrics** is needed so they can access the real-time metrics report, which is where they choose which conversations to monitor.

1. Expand **Contact Control Panel**, then choose **Access Contact Control Panel** and **Make outbound calls**.   
![\[The contact control panel section of the security profiles page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/monitor-conversations-agent-permissions2.png)

   These permissions are needed so they can monitor the conversation through the Contact Control Panel.

1. Choose **Save**. 

Next, show your managers how to monitor conversations. Continue to [Listen to live conversations or read live chats in Amazon Connect](monitor-conversations-howto.md).

# Listen to live conversations or read live chats in Amazon Connect
Monitor live conversations

Before you can listen to live conversations or read live chats, the Amazon Connect admin needs to [enable](monitor-conversations.md) the feature, [assign you permissions](monitor-conversations-permissions.md), and ensure you are assigned to a routing profile that supports the channel being monitored. After that's done, you can do these steps. 

For information about how many people can listen in to a conversation or follow a chat, see [Amazon Connect feature specifications](feature-limits.md).

1. Log in to Amazon Connect with a user account that is assigned the **CallCenterManager** security profile, or that has the **Real-time contact monitoring** security profile permission.

1. Open the Contact Control Panel (CCP) by choosing the phone icon in the top-right corner of your screen. You'll need the CCP open to connect to the conversation. 

1. To choose the agent conversation you want to monitor, in Amazon Connect choose **Analytics and optimization**, **Real-time metrics**, **Agents**. The following image shows the **Real-time metrics** page, with an arrow pointing to the **Agents** option.  
![\[The real-time metrics page, the Agents option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/real-time-metrics-agents.png)

1. To monitor voice conversations: Next to the names of agents in a live voice conversation, there is an eye icon. Choose the icon to start monitoring the conversation. The following image shows the eye icon next to the **Voice** channel.  
![\[The real-time metrics page, the Channels column, the voice channel.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/monitor-call-icon.png)
**Note**  
**Firefox users**: When using the Firefox browser to monitor and barge, you need to switch to the CCP tab after you start monitoring. The CCP conforms to Firefox microphone usage guidance, and can connect to your microphone only when the CCP tab is in focus.

   When you're monitoring a conversation, the status in your CCP changes to **Monitoring**.

1. To monitor chat conversations: For each agent you'll see the number of live chat conversations they're in. Click on the number. Then choose the conversation you want to start monitoring. 

   When you're monitoring a conversation, the status in your CCP changes to **Monitoring**.

1. To stop monitoring the conversation, in the CCP choose **End call** or **End chat**.

   When the agent ends the conversation, monitoring stops automatically.

# Barge into live voice and chat conversations between contact center agents and customers
Barge live voice and chat conversations

**Tip**  
**New user?** Check out the [Amazon Connect Supervisor Experience Workshop](https://catalog.workshops.aws/amazon-connect-supervisor-experience). This online course has a section on how to monitor contacts.

Supervisors and managers can barge into live voice and chat conversations between agents and customers. To set this up, you need to turn on the **Enhanced monitoring** capability in the Amazon Connect console, provide managers with the appropriate permissions, and show them how to barge into conversations.

**Looking for how many people can barge the same conversation at one time?** See [Amazon Connect feature specifications](feature-limits.md).

There is no limit to the number of conversations that you can barge in an instance.

The barge feature is included in Amazon Connect voice service fees. For pricing, see the [Amazon Connect Pricing](https://aws.amazon.com/connect/pricing/) page.

## Set up barge for voice and chat
Set up barge for voice and chat

In the Amazon Connect console, select the following telephony options: 
+ **Enable Multi-Party Calls and Enhanced Monitoring for Voice**. This option enables access to multi-party calling, detailed contact records, silent monitoring, and barge capabilities.
+ **Enable Multi-Party Chats and Enhanced Monitoring for Chat**. This option enables users with the appropriate security profile permissions to barge chats.

The following image shows these options on the **Telephony and chat options** page.

![\[The Telephony options page, the enhanced contact monitoring capabilities.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/barge-voice-chat-enable.png)


**Note**  
If multi-party calling is already enabled, you need to use the *UpdateInstanceAttribute* API with the `ENHANCED_CONTACT_MONITORING` attribute to also enable enhanced monitoring. Or, you can turn the feature off and then back on in the console to update your settings. For more information, see [UpdateInstanceAttribute](https://docs.aws.amazon.com/connect/latest/APIReference/API_UpdateInstanceAttribute.html) in the *Amazon Connect API Reference Guide*.
Any new instances will automatically have this feature enabled.
Before enabling **Enhanced contact monitoring capabilities**, ensure that you are using the latest version of the [Contact Control Panel](https://docs.aws.amazon.com/connect/latest/adminguide/upgrade-to-latest-ccp.html) (CCP) or [Agent workspace](https://docs.aws.amazon.com/connect/latest/adminguide/agent-user-guide.html). If you are using [StreamsJS](https://github.com/amazon-connect/amazon-connect-streams) to customize or embed the CCP, upgrade to version 2.4.2 or later.
For instances that do not have a service-linked role, you must create one in order to enable the feature. For more information on how to enable service-linked roles, see [Use service-linked roles for Amazon Connect](https://docs.aws.amazon.com/connect/latest/adminguide/connect-slr.html).
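The API call mentioned in the note above looks like the following boto3-style sketch. The instance ID is a placeholder, and the actual call is commented out so the sketch runs without AWS credentials:

```python
# Parameters for connect.update_instance_attribute(). The attribute type
# ENHANCED_CONTACT_MONITORING is documented in the Amazon Connect API
# Reference; the instance ID below is a placeholder.
def enhanced_monitoring_request(instance_id: str, enabled: bool) -> dict:
    return {
        "InstanceId": instance_id,
        "AttributeType": "ENHANCED_CONTACT_MONITORING",
        "Value": "true" if enabled else "false",
    }

request = enhanced_monitoring_request("your-instance-id", enabled=True)

# import boto3
# boto3.client("connect").update_instance_attribute(**request)
```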

## Assign security profile permissions
Assign security profile permissions

For managers to barge live conversations, you assign them the **CallCenterManager** and **Agent** security profiles. 

To allow specific supervisors to barge live conversations, we recommend that you create a security profile specific for this purpose. They need the following security profile permissions:
+ **Access metrics**. Enables you to access real-time metrics reports, which is where you choose which conversation you would like to monitor and barge.
+ **Real-time contact monitoring**: Enables you to monitor both voice and chat conversations.
+ **Real-time contact barge-in**: Enables you to barge both voice and chat conversations.
+ **Access Contact Control Panel**

## Barge live calls with contacts
Barge live calls with contacts

**Tip**  
For the number of supervisors who can monitor a call at the same time, see [Amazon Connect feature specifications](feature-limits.md). 

1. Log in to the Amazon Connect admin website at https://*instance name*.my.connect.aws/. Use an account that is assigned the **CallCenterManager** security profile or that has the required security profile permissions.

1. Open your CCP. It must be open before you can barge a call. 

1. On the Amazon Connect admin website navigation menu, choose **Analytics and optimization**, **Real-time metrics**, **Agents**.

1. Choose the eye icon that appears next to the **Voice** channel of the agent that you want to monitor, as shown in the following image. You can barge into a conversation that you are already monitoring.  
![\[The Real-time metrics page, the eye icon next to a Voice channel.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/monitor-barge-voice-channel.png)

1. This takes you to the open CCP, as shown in the following image. You can monitor the call and toggle between the **Monitor** and **Barge** states. The following image shows the **Monitor** state.  
![\[The CCP, the Monitor and Barge toggles.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/monitor-barge-voice-channel-ccp.png)

## Barge live chats with contacts
Barge live chats with contacts

1. Log in to the Amazon Connect admin website at https://*instance name*.my.connect.aws/. Use an account that is assigned the **CallCenterManager** security profile or that has the required security profile permissions.

1. Open your CCP. It must be open before you can barge a chat. 

1. On the Amazon Connect admin website navigation menu, choose **Analytics and optimization**, **Real-time metrics**, **Agents**.

1. Choose the eye icon that appears next to the **Chat** channel of the agent that you want to monitor, as shown in the following image. You can barge into a conversation that you are already monitoring.  
![\[The Real-time metrics page, the eye icon next to a chat channel.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/monitor-barge-chat-channel.png)

1. This takes you to the open CCP, as shown in the following image. You can monitor the chat conversation and toggle between the **Monitor** and **Barge** states. The following image shows the **Monitor** state.  
![\[The CCP, the Monitor and Barge toggles.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/barge-chat-ccp.png)

   Following is an example of what the CCP looks like when a supervisor barges into a chat.  
![\[The CCP, a barge message from the supervisor.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/barge-chat-message.png)

# Review recorded conversations between agents and customers using Amazon Connect
Review recorded conversations

Managers can review past conversations between agents and customers. To set this up, you need to [set up recording behavior](set-up-recordings.md), assign managers the appropriate permissions, and then show them how to access the recorded conversations. 

**When is a conversation recorded?** For detailed information about call recording behavior, see [When, what, and where for contact recordings](about-recording-behavior.md). 

**Tip**  
When call recording is enabled, the recording is placed in your S3 bucket shortly after the contact disconnects. You can then review the recording by using the steps in this topic.   
You can also access the recording from the customer's [contact record](sample-ctr.md). The recording is available in the contact record, however, only after the contact has left the [After Contact Work (ACW) state](metrics-agent-status.md#agent-status-acw).

**How do I manage access to recordings?** Use the **Call recordings (unredacted)** security profile permission to manage who can listen to recordings, and access the corresponding URLs that are generated in S3. For more information about this permission, see [Assign permissions](assign-permissions-to-review-recordings.md).

## Review recordings and transcripts of past agent conversations


This section covers the steps that a manager takes to review recordings and transcripts of past agent conversations. For chat contacts, the same transcript contains both the agent interaction and the automated interaction (for example, with chat bots).

1. Log in to Amazon Connect with a user account that has permissions to access [the contact search page](contact-search.md#required-permissions-search-contacts) and to [access recordings](assign-permissions-to-review-recordings.md).

1. In Amazon Connect, choose **Analytics and optimization**, **Contact search**. 

1. Filter the list of contacts by date, agent login, phone number, or other criteria. Choose **Search**.
**Tip**  
We recommend using the **Contact ID** filter to [search for recordings](search-recordings.md). This is the best way to ensure you get the right recording for the contact. Many recordings have the same name as the contact ID, but not all. 

1. Conversations that were recorded have icons in the **Recording/Transcript** column, as shown in the following image. If you don't have the appropriate permissions, you won't see these icons.  
![\[The voice recording icons play, download, and delete on the Contact search results page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recording-icons.png)

1. To listen to a recording of a voice conversation, or read the transcript of a chat, choose the **Play** icon, as shown in the following image.  
![\[The voice recording icons play icon on the Contact search results page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/play-recordings.png)

1. If you choose the play icon for a transcript, it appears, as shown in the following image.   
![\[A sample chat transcript.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/sample-chat-transcript.png)

### Pause, rewind, or fast-forward a recording
Pause, rewind, or fast-forward

Use the following steps to pause, rewind, or fast-forward a voice recording. 

1. On the **Contact search** results, instead of choosing the **Play** icon, choose the contact ID to open the contact record.  
![\[The location of the contact ID that you need to choose.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recordings-contactid.png)

1. On the **Contact record** page, there are more controls to navigate the recording, as shown in the following image.  
![\[The contact record page, additional controls to listen to the recording.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recording-pause-rewind-fastforward.png)

   1. Click or tap to the time you want to investigate.

   1. Adjust the playing speed.

   1. Play, pause, or skip backward or forward in 10-second increments.

### Troubleshoot problems pausing, rewinding, or fast-forwarding
Troubleshoot problems

If you are unable to pause, rewind, or fast-forward recordings on the **Contact search** page, one possible reason is that your network is blocking HTTP range requests. See [HTTP range requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests) on the MDN Web Docs site. Work with your network administrator to unblock HTTP range requests.
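To check whether range requests survive your network path, you can issue a request with a `Range` header yourself: a 206 Partial Content response means byte ranges are honored, while a plain 200 suggests the header was stripped or ignored in transit. The following is a minimal sketch; the URL passed to `make_range_request` is a placeholder, not a real recording endpoint.

```python
import urllib.request

def range_supported(status_code, headers):
    """Return True if the server honored a byte-range request.

    A server that supports range requests answers 206 Partial Content with a
    Content-Range header; a 200 response means the Range header was ignored
    (or removed by a proxy on the way)."""
    return status_code == 206 and "Content-Range" in headers

def make_range_request(url, start=0, end=1023):
    """Request only the first KiB of a resource (URL is a placeholder)."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.status, dict(resp.headers)
```

If `range_supported` returns `False` for a URL that you know serves recordings, a proxy or firewall on your network is the likely culprit.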

## Review recordings and transcripts of automated voice interactions (with IVR and bots)


IVR recordings and logs enable you to monitor and improve your automated experiences so that they better resolve end-customer needs, and to maintain audio and system execution records of the interaction for compliance purposes. To review automated interaction (IVR) recordings and logs:

1. Log in to Amazon Connect with a user account that has permissions to access [the contact search page](contact-search.md#required-permissions-search-contacts) and to [access recordings](assign-permissions-to-review-recordings.md). Note that to view information about flow execution, you also need permissions to view **Flows** and **Flow modules**.

1. On the navigation menu, choose **Analytics and optimization, Contact search**.

1. Search for the contact that you want to review. For example, you can search by queue, by the name of the initial flow for the contact, or by user-defined [custom contact attributes](search-custom-attributes.md).

1. Choose the contact ID to view the **Contact details** page.

1. Under the **Recording and Transcript** section, choose **Automated Interaction (IVR)**. This section contains an audio player that you can use to play the IVR recording, as shown in the following image. In this section you can also see the IVR prompts that were played, the customer's responses to those prompts, and transcripts of Amazon Lex interactions.   
![\[The location of the contact ID that you need to choose.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recordings-ivr.png)

1. To view only the details of the customer interaction (without additional details about which flow was executed), turn off the **Show flow details** toggle, as shown in the following image:  
![\[The location of the contact ID that you need to choose.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recordings-ivr-no-detail.png)

**Flow blocks available within the automated interaction logs and transcripts**  
You can view the following flow blocks within the Amazon Connect UI on the contact details page:
+ [Get customer input](get-customer-input.md)
+ [Store customer input](store-customer-input.md)
+ [Play prompt](play.md)
+ [Loop prompts](loop-prompts.md)
+ [Lambda functions](invoke-lambda-function-block.md)

# Assign permissions to review past contact center conversations in Amazon Connect
Assign permissions

To access recordings and transcripts on the Amazon Connect admin website, you need security profile permissions to search for and view contacts on the **Contact search** page. You also need permissions to access:
+ Recordings and transcripts of agent interactions
+ Automated interactions (IVR) recordings
+ Automated interactions (IVR) transcripts

This topic explains the required security profile permissions.

**Topics**
+ [Permissions to search and view contacts](#assign-permissions-to-search-and-view-contacts)
+ [Permissions to access recordings and transcripts of agent interactions](#assign-permissions-to-access-recordings-transcripts)
+ [Permissions to view automated interaction (IVR) recordings and transcripts](#assign-permissions-to-view-automated-recordings-transcripts)

## Permissions to search and view contacts


Contacts and underlying recordings and transcripts are accessible through the **Contact search** and **Contact details** pages. At least one of the following permissions is required to view contacts on **Contact search** and **Contact details** pages:
+ **Contact search - View**: Allows a user to access all contacts on **Contact search** and **Contact details** pages.
+ **View my contacts - View**: On the **Contact search** and **Contact details** pages, allows agents to view only those contacts that they handled.

You can also enable the **Restrict contact access** permission to restrict access to contacts based on the user's hierarchy. For example: 
+ Agents who are assigned to AgentGroup-1 can only view contact records for contacts handled by agents in that hierarchy group and any groups below them.
+ Agents assigned to AgentGroup-2 can only access contact records for contacts handled by their group, and any groups below them. 
+ Managers and others who are in higher level groups can view contact records for contacts handled by all the groups below them, such as AgentGroup-1 and 2.

For more information, see [Manage who can search for contacts and access detailed information](contact-search.md#required-permissions-search-contacts). 
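As an illustrative model only (not Connect's actual implementation), the **Restrict contact access** rule above can be expressed as a prefix check on hierarchy paths: a user can view a contact when the handling agent's path starts with the user's own path, which covers the user's group and every group below it.

```python
def can_view_contact(user_path, agent_path):
    """Model of the 'Restrict contact access' rule: a user sees a contact
    if the handling agent sits in the user's hierarchy group or any group
    below it.

    Paths are tuples from the root group down, for example
    ("Sales",) or ("Sales", "AgentGroup-1")."""
    return agent_path[: len(user_path)] == user_path

# A manager at the "Sales" level sees contacts handled by AgentGroup-1 and
# AgentGroup-2; an agent in AgentGroup-1 does not see AgentGroup-2 contacts.
```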

## Permissions to access recordings and transcripts of agent interactions


Complete the following steps to assign permissions to access agent interactions for voice, chat and email channels. 

**Note**  
For chat interactions, the same transcript contains the agent interaction and the automated interaction (for example, with chat bots).

1. Assign the **CallCenterManager** security profile so a user can listen to call recordings or review chat transcripts. This security profile also includes a setting that makes the icon to download recordings appear in the results of the **Contact search** page. The following image shows the recording play, download, and delete icons that are displayed to a user who has these permissions.  
![\[The Contact search page, showing the options for reviewing recorded conversations.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recording-permissions-listen-download-delete.png)

– OR –

1.  Assign the following individual permissions:
   + **Call recordings (redacted) - Access**: If your organization uses Amazon Connect Contact Lens, you can assign this permission so that agents can access only those agent call recordings in which sensitive data has been redacted.
   + **Contact transcripts (redacted) - Access**: If your organization uses Amazon Connect Contact Lens, you can assign this permission so that agents can access only those contact transcripts in which sensitive data has been redacted.

     The redaction feature is provided as part of Contact Lens. For more information, see [Use sensitive data redaction to protect customer privacy using Contact Lens](sensitive-data-redaction.md).
   + **Manager monitor**: This permission allows users to monitor live conversations and listen to recordings.
**Tip**  
Be sure to assign managers to the **Agent** security profile so they can access the Contact Control Panel (CCP). This enables them to monitor the conversation through the CCP.
   + **Call recordings (unredacted) - Access**: Use this permission to manage who can access recordings on the **Contact search** and **Contact details** pages, through corresponding URLs that are generated in S3. From there, these users can delete recordings. 

     Note the following:
     + If users do not have **Call recordings (unredacted) - Access** permission—or they're not logged in to Amazon Connect—they cannot listen to the call recording or access the URL in S3, even if they know how the URL is formed.
     + The **Call recordings (unredacted) - Enable download button** permission controls only whether the download button appears in the user interface. It does not control access to the recording. 
   + **Contact transcripts (unredacted) - Access**: Use this permission to manage who can view unredacted chat and email conversations, and unredacted voice transcripts produced by Contact Lens on the **Contact search** and **Contact details** pages.

     Note the following:
      + If users do not have the **Contact transcripts (unredacted) - Access** permission, or they're not logged in to Amazon Connect, they cannot view the transcript.
      + The **Contact transcripts (unredacted) - Enable download button** permission controls only whether the download button appears in the user interface. It does not control access to the transcript.
   + **Delete recorded conversations**: To enable a user to delete recordings on the **Contact search** and **Contact details** pages, choose the **Delete** permission.
   + **Automated interaction voice (IVR) recordings (unredacted)**: Use this permission to manage who can view IVR recordings on the **Contact details** page.
   + **Automated interaction voice (IVR) transcripts (unredacted)**: Use this permission to grant access to transcripts of those automated interaction voice (IVR) recordings. 

## Permissions to view automated interaction (IVR) recordings and transcripts


Assign the following permissions:
+ **Automated interaction voice (IVR) recordings (unredacted) - Access**: Grants a user access to the recording of a contact during automated interactions (with the IVR, Amazon Lex, or other bots). 
+ **Automated interaction voice (IVR) recordings (unredacted) - Enable Download Button**: Controls whether the download button appears next to the IVR recording on the **Contact details** page within Amazon Connect.

### Access automated interaction (IVR) logs and transcripts


Assign the following permissions:
+ **Automated interaction voice (IVR) transcripts (unredacted) - Access**: Grants a user access to the interaction between the customer, the IVR, and any bots. The user can see the customer's keypad inputs in response to IVR prompts and the transcript of the interaction with Amazon Lex. 

  The transcript obfuscates customer inputs entered for the [Store customer input](store-customer-input.md) flow block. It also obfuscates any [slots that are configured to be obfuscated](https://docs.aws.amazon.com/lexv2/latest/dg/monitoring-obfuscate.html) in Amazon Lex, as described in the *Amazon Lex developer guide*. Users with access to the IVR recording can still listen to the customer's voice inputs during Amazon Lex interactions.
+ **Flow - View** and **Flow modules - View**: Grant users both of these permissions so they can view flow execution details for voice contacts on the **Contact details** page, for example, which flow was executed and what the outcome was. 
**Note**  
These permissions also grant users access to the **Flows** and **Flow modules** pages on the Amazon Connect admin website.


# Download recordings and transcripts of past conversations in Amazon Connect
Download recordings

These are the steps that a manager takes to download past recordings or transcripts of conversations.
+ If the contact reached you by phone call (the Voice channel), you can download a .wav file.
+ If the contact reached you by chat (the Chat channel), you can download a .json file.

**Tip**  
To have Amazon Connect create transcripts of phone calls, see the Contact Lens feature. 

## Download a voice recording as a .wav file


1. Log in to the Amazon Connect admin website with a user account that has [permissions to access recordings](assign-permissions-to-review-recordings.md).

1. In Amazon Connect, choose **Analytics and optimization**, **Contact search**. 

1. Filter the list of contacts by date, agent login, phone number, or other criteria. Choose **Search**.

1. Conversations that were recorded have icons in the **Recording/Transcript** column. If you don't have the appropriate permissions, you won't see these icons.

   The following image shows what the icons look like for a voice recording. Note the play icon that indicates it's a voice recording.  
![\[The contact search page, the play icon, download icon, and delete icon for a voice recording.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recording-icons.png)

1. Choose the **Download** icon, as shown in the following image.   
![\[The contact search page, the download icon for a voice recording.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/download-recordings.png)

1. A voice recording is saved automatically to your **Downloads** folder as a .wav file. 

   The following image shows a list of .wav files in a Downloads folder. The name of the .wav file is the contact ID.  
![\[A list of .wav file recordings in the downloads folder.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/downloaded-wav-files.png)
**Tip**  
In the recording, you may hear only the agent, only the customer, or both the agent and customer. This is determined by how the [Set recording and analytics behavior](set-recording-behavior.md) block is configured. 
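Because recordings land in your instance's S3 bucket, you can also fetch them with the AWS SDK instead of the console download button. The following is a hedged sketch: the bucket name and prefix are placeholders for the call recording storage configured on your instance, and because file names aren't guaranteed to match the contact ID, it lists the prefix rather than constructing a key.

```python
def wav_keys(s3_client, bucket, prefix):
    """Return the .wav object keys under the Connect recordings prefix.

    s3_client is a boto3 S3 client; bucket and prefix are placeholders for
    the call recording storage location configured on your instance."""
    keys = []
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith(".wav"):
                keys.append(obj["Key"])
    return keys

# Usage sketch (assumed names):
# s3 = boto3.client("s3")
# for key in wav_keys(s3, "my-connect-bucket", "connect/my-instance/CallRecordings/"):
#     s3.download_file("my-connect-bucket", key, key.rsplit("/", 1)[-1])
```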

## Download a chat transcript as a .json file


1. On the **Contact search** page, choose the **Download** icon for the chat contact. The following image shows what the icons look like for a chat transcript.  
![\[The contact search page, the transcript icon, download icon, and delete icon.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/download-transcript.png)

   A chat transcript is saved to the Downloads folder as a .json file. 

   The following image shows a .json file in the Downloads folder. The name of the .json file is the contact ID.  
![\[A json file transcript in the downloads folder.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/downloaded-json-file.png)

1. To view a downloaded chat transcript, right-click the .json file, and then open it with another app that enables you to view the contents in a readable format. 

   The following image shows a sample downloaded transcript that has been opened using Firefox. The image shows the middle of the transcript, where the agent and customer are chatting.   
![\[A json file transcript opened with Firefox.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/download-transcript-firefox.png)

## Events in a chat transcript


If you have a process that consumes events in S3 transcripts, note that chat transcripts contain the following event content types if the event has occurred during the chat session:
+ `application/vnd.amazonaws.connect.event.participant.left`
+ `application/vnd.amazonaws.connect.event.participant.joined`
+ `application/vnd.amazonaws.connect.event.chat.ended`
+ `application/vnd.amazonaws.connect.event.transfer.succeeded`
+ `application/vnd.amazonaws.connect.event.transfer.failed`
+ `application/vnd.amazonaws.connect.event.participant.invited`
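If your process consumes these transcripts, it can separate chat messages from the event items listed above by content type. The sketch below assumes the layout found in Connect chat transcript files in S3 (a top-level `Transcript` array whose items carry a `ContentType` field); verify the field names against your own files.

```python
import json

EVENT_PREFIX = "application/vnd.amazonaws.connect.event."

def split_transcript(transcript_json):
    """Split a Connect chat transcript into message items and event items.

    Assumes the S3 transcript layout: a top-level "Transcript" array whose
    items carry a "ContentType" field (verify against your own files)."""
    messages, events = [], []
    for item in json.loads(transcript_json).get("Transcript", []):
        if item.get("ContentType", "").startswith(EVENT_PREFIX):
            events.append(item)
        else:
            messages.append(item)
    return messages, events
```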

# Search for recordings of conversations by a customer's contact ID in Amazon Connect
Search for recordings

To find a recording of a specific contact, you only need the contact ID. You don't need to know the date range, agent, or any other information about the contact. 

**Tip**  
We recommend using the contact ID to search for recordings.  
Even though many call recordings are named with the contact ID itself (for example, 123456-aaaa-bbbb-3223-2323234.wav), there is no guarantee that the contact ID and the name of the recording file always match. By using **Contact ID** for your search on the **Contact search** page, you can find the correct recording by referring to the audio file on the contact's record.

**To search for recordings**

1. Log in to Amazon Connect with a user account that has [permissions to access recordings](assign-permissions-to-review-recordings.md).

1. In Amazon Connect, choose **Analytics and optimization**, **Contact search**. 

1. In the **Contact ID** box, enter the contact ID, and then choose **Search**.

1. Conversations that were recorded have icons in the **Recording/Transcript** column. The following image shows the play, download, and delete icons. If you don't have the appropriate permissions, you won't see these icons.   
![\[The contact search page, the play, download, and delete recording icons.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/recording-icons.png)

To learn more about searching, see [Search for completed and in-progress contacts in Amazon Connect](contact-search.md).
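If you automate this lookup, a contact ID is also all you need programmatically: the sketch below assumes the boto3 `connect` client's `describe_contact` call (the DescribeContact API), with placeholder instance and contact IDs.

```python
def get_contact(connect_client, instance_id, contact_id):
    """Fetch a contact's details by contact ID using the DescribeContact API.

    connect_client is a boto3 "connect" client; instance_id and contact_id
    are placeholders for your instance ID and the contact's ID."""
    resp = connect_client.describe_contact(
        InstanceId=instance_id, ContactId=contact_id
    )
    return resp["Contact"]

# Usage sketch (assumed IDs):
# connect = boto3.client("connect")
# contact = get_contact(connect, "my-instance-id", "12345678-aaaa-bbbb-cccc-123456789012")
```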

# Troubleshoot agent conversation monitoring ability in Amazon Connect
Troubleshoot monitoring conversations

The following table explains how to resolve error messages (exception messages) that may be displayed when you use Amazon Connect to monitor live agent conversations with contacts. 


| Error message | Resolution | Exception type | Exception code | 
| --- | --- | --- | --- | 
| **You do not have access to the agent. Contact your admin to learn more.** | You must enable the service linked role for the instance. See [Use service-linked roles and role permissions for Amazon Connect](connect-slr.md) for information about enabling the role. | AccessDeniedException | 403 | 
| **One or more of the input parameters are invalid** | A developer needs to make sure that the input parameters for the `MonitorContact` action are valid. See [MonitorContact Request Syntax](https://docs.aws.amazon.com/connect/latest/APIReference/API_MonitorContact.html#API_MonitorContact_RequestSyntax).  |  InvalidRequestException  |  400  | 
| **Monitoring failed, please enable call recording** | In the flow, make sure that the [Set recording and analytics behavior](set-recording-behavior.md) block is configured to allow call recording for both the agent and customer.  |  InvalidRequestException  |  400  | 
| **User's phone number is invalid** | Check that the phone number associated with the agent's deskphone meets following requirements: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/connect/latest/adminguide/ts-monitoring-conversations.html)  |  InvalidRequestException  |  400  | 
| **The contact or agent is not in the state that can be monitored** | The contact is not in an active state. The agent or customer may have disconnected from the call or chat before the monitoring request could be processed. Choose another contact to monitor. |  InvalidRequestException  |  400  | 
| **Monitoring failed, please enable multi party conferencing feature** | The Amazon Connect instance must have the multi-party calls and enhanced monitoring feature enabled. In your instance settings, choose **Enable Multi-Party Calls and Enhanced Monitoring**. For instructions, see [Update settings for your Amazon Connect instance](update-instance-settings.md).   |  InvalidRequestException  |  400  | 
| **No AGENT participant found in the contact** | The call or chat doesn't have an active agent who is connected to it and working on the contact. Choose another contact to monitor. |  InvalidRequestException  |  400  | 
| **MonitorContact is not supported for `TASK` contacts** | The monitoring feature is supported only for voice and chat contacts. Choose a voice or chat contact to monitor. |  InvalidRequestException  |  400  | 
| **AllowedMonitorCapabilities must be provided and have `SILENT_MONITOR` value at the least** | If your Amazon Connect instance has the multi-party calls and enhanced monitoring feature enabled, the developer must make sure to pass the `AllowedMonitorCapabilities` input parameter with at least the `SILENT_MONITOR` value set. See [MonitorContact Request Syntax](https://docs.aws.amazon.com/connect/latest/APIReference/API_MonitorContact.html#API_MonitorContact_RequestSyntax). |  InvalidRequestException  |  400  | 
| **One or more of the request resources were not found** | A developer needs to make sure that the resources in the `MonitorContact` input request that's being passed exist in the Amazon Connect instance.  |  ResourceNotFoundException  |  404  | 
| **Internal service exception** | The request processing has failed because of an unknown error, exception, or failure with an internal server. Wait a bit and then try again to monitor the contact. |  InternalServiceException  |  500  | 
| **Service quota has been exceeded** | There are certain limits on how many contacts a supervisor can monitor at a time or how many supervisors can monitor one contact. Check the limits for the voice and chat contacts on the [Amazon Connect feature specifications](feature-limits.md) page. |  ServiceQuotaExceededException  |  402  | 
| **Another request with same clientToken is in progress** | In the [MonitorContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_MonitorContact.html) action, a `ClientToken` is a unique, case-sensitive identifier that developers provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. For more information about idempotency, see [Making retries safe with idempotent APIs](https://aws.amazon.com/builders-library/making-retries-safe-with-idempotent-APIs/).  |  IdempotencyException  |  409  | 
| **Access denied** | You don't have the appropriate permissions in your security profile to perform this action. For a list of the security profile permissions required for monitoring conversations see [Assign permissions to monitor live conversations in the Amazon Connect Contact Control Panel (CCP)](monitor-conversations-permissions.md). |  AccessDeniedException  |  403  | 
| **Too Many Requests** | The API TPS quotas have been exceeded. Submit a request for a TPS quota increase. For instructions, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html). |  ThrottlingException  |  429  | 
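Several of the errors above concern `MonitorContact` input parameters. The following is a minimal sketch of the call via the boto3 `connect` client: it passes `AllowedMonitorCapabilities` with at least `SILENT_MONITOR` (required when enhanced monitoring is enabled) and an explicit `ClientToken` so retries stay idempotent. The IDs are placeholders.

```python
import uuid

def start_monitoring(connect_client, instance_id, contact_id, supervisor_user_id,
                     allow_barge=False):
    """Start monitoring a contact with the MonitorContact API.

    connect_client is a boto3 "connect" client; the instance, contact, and
    supervisor user IDs are placeholders. AllowedMonitorCapabilities always
    includes SILENT_MONITOR, and BARGE only when requested."""
    capabilities = ["SILENT_MONITOR"] + (["BARGE"] if allow_barge else [])
    return connect_client.monitor_contact(
        InstanceId=instance_id,
        ContactId=contact_id,
        UserId=supervisor_user_id,
        AllowedMonitorCapabilities=capabilities,
        ClientToken=str(uuid.uuid4()),  # unique per logical request
    )
```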

# Manage contacts from the Contact details page in Amazon Connect
Manage contacts from the Contact details page

On the **Contact details** page of an in-progress contact, you can manage a contact by transferring, rescheduling, or ending the contact.

You can also perform these actions programmatically using the [TransferContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_TransferContact.html), [UpdateContactSchedule](https://docs.aws.amazon.com/connect/latest/APIReference/API_UpdateContactSchedule.html), and [StopContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_StopContact.html) operations.
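As a hedged sketch of two of those operations via the boto3 `connect` client (IDs are placeholders): TransferContact routes an in-progress contact through a flow to a queue or an agent, and StopContact disconnects it.

```python
def transfer(connect_client, instance_id, contact_id, flow_id,
             queue_id=None, user_id=None):
    """TransferContact: route an in-progress contact to a queue or agent
    through the given contact flow. Set exactly one of queue_id/user_id.
    All IDs are placeholders."""
    params = {"InstanceId": instance_id, "ContactId": contact_id,
              "ContactFlowId": flow_id}
    if queue_id:
        params["QueueId"] = queue_id
    if user_id:
        params["UserId"] = user_id
    return connect_client.transfer_contact(**params)

def end_contact(connect_client, instance_id, contact_id):
    """StopContact: disconnect an in-progress contact."""
    return connect_client.stop_contact(InstanceId=instance_id,
                                       ContactId=contact_id)
```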

This section explains how to transfer, reschedule, and end contacts by using the Amazon Connect admin website.

**Topics**
+ [Transfer in-progress contacts](transfer-contacts-admin.md)
+ [Reschedule contacts](reschedule-contacts-admin.md)
+ [End contacts](end-contacts-admin.md)

# Transfer in-progress contacts to a quick connect agent or a queue in Amazon Connect
Transfer in-progress contacts

On the **Contact details** page of an in-progress contact, you can transfer a contact to a quick connect agent or queue. This capability supports task, email, or chat contacts.

To transfer contacts programmatically, use the [TransferContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_TransferContact.html) API.

## Required permissions
Required permissions

1. Enable one of the following permissions to view contacts on the **Contact search** and **Contact details** pages:

   1. **Contact search - View**: Allows a user to view all contacts 

   1. **View my contacts - View**: Allows agents to view contacts that they themselves had handled

1. **Restrict contact access** (Optional): Restrict a user's access to contacts on the **Contact search** and **Contact details** pages to their own hierarchy group or any hierarchy groups below them. For more information about this permission, see [Manage who can search for contacts and access detailed information](contact-search.md#required-permissions-search-contacts).

1. **Transfer Contact**: Enables a user to transfer contacts on the **Analytics & Optimization** pages. The following image shows the **Contact Actions - Transfer Contact** permission.  
![\[The Contact details page, contact transferred successfully.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-transfer-permissions.png)

## How to transfer a task, email, or chat contact
How to transfer a task, email, or chat contact

1. Log in to Amazon Connect with a user account that has [permissions to access contact records](contact-search.md#required-permissions-search-contacts).

1. In Amazon Connect, choose **Analytics and optimization**, **Contact search**.

1. Search for an in-progress task or email contact to transfer:

   1. Select the **Contact status** filter and set it to **In progress**, as shown in the following image.   
![\[The Contact search page, task filter, contact status filter.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-transfer-filters.png)

   1. Set the **Channel** filter to **Tasks**, **Email**, or **Chat** to view only task, email, or chat contacts. 

   1. Choose the task, email, or chat contact to view its details.

1. On the **Contact details** page for the task, email, or chat contact, choose **Actions**, **Transfer**.  
![\[The Contact details page, contact transferred successfully.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-transfer-action.png)

1. Select an agent or queue from a list of your quick connects and choose **Transfer**.

1. When the contact is transferred successfully, the page automatically refreshes with the **Next contact** link to the contact created as a result of the transfer. The following image shows the location of the **Next contact** link.  
![\[The Contact details page, contact transferred successfully.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-transferred.png)

# Reschedule contacts from the Contact details page in Amazon Connect
Reschedule contacts

On the **Contact details** page of an in-progress contact, you can reschedule a contact that was previously scheduled. This capability is currently supported only for task contacts.

To reschedule contacts programmatically, use the [UpdateContactSchedule](https://docs.aws.amazon.com/connect/latest/APIReference/API_UpdateContactSchedule.html) API.

## Required permissions
Required permissions

1. Enable one of the following permissions to view contacts on the **Contact search** and **Contact details** pages:

   1. **Contact search - View**: Allows a user to view all contacts 

   1. **View my contacts - View**: Allows agents to view contacts that they themselves had handled

1. **Restrict contact access** (Optional): Restrict a user's access to contacts on the **Contact search** and **Contact details** pages to their own hierarchy group or any hierarchy groups below them. For more information about this permission, see [Manage who can search for contacts and access detailed information](contact-search.md#required-permissions-search-contacts).

1. **Reschedule contact**: Enables a user to reschedule contacts on the **Analytics & Optimization** pages. The following image shows the **Contact Actions - Reschedule contact** permission.  
![\[Security profiles permissions page, reschedule contact permission.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-reschedule-permissions.png)

## How to reschedule a contact
How to reschedule a contact

1. Log in to Amazon Connect with a user account that has [permissions to access contact records](contact-search.md#required-permissions-search-contacts).

1. In Amazon Connect choose **Analytics and optimization**, **Contact search**.

1. Search for an in-progress task contact to reschedule:

   1. Select the **Contact status** filter and change the selected value to **In progress**. 

   1. Select the **Time range** filter. Set the **Timestamp type** to **Scheduled** to view only scheduled contacts. Filter for the time range. The following image shows these filters.  
![\[The contact details page, filters for scheduled timestamp.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-choose.png)

1. Choose the scheduled contact to view its details. 

1. On the **Contact details** page of the task contact, choose **Actions**, **Reschedule**, as shown in the following image.  
![\[The contact details page, Reschedule option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-reschedule-action.png)

1. Select the date and time to reschedule the contact. The scheduled time must be within 6 days of when the task was initiated.

1. When the contact is rescheduled successfully, the page automatically refreshes with the new schedule time for the task.

# End contacts from the Contact details page in Amazon Connect
End contacts

On the **Contact details** page of an in-progress contact, you can end a contact. Ending a contact results in the contact being disconnected. If the contact was already connected to an agent, ending the contact starts After Contact Work (ACW) for the contact. 

To end contacts programmatically, use the [StopContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_StopContact.html) API. 

## Important things to know
Important things to know
+ If you end a task contact while ACW is in progress, the contact is terminated. Voice and chat contacts that are in the ACW state cannot be terminated by using the **End contact** action on the **Contact details** page.
+ You cannot end voice contacts when they are initiated using the following methods:
  + DISCONNECT
  + TRANSFER
  + QUEUE_TRANSFER
+ You can end chat and task contacts regardless of how they were initiated.
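The rules above can be captured in a small guard evaluated before calling StopContact. This is an illustrative sketch under the stated rules: the function names are not part of any API, and the channel and initiation-method strings follow the values used in Amazon Connect contact records.

```python
# Voice contacts with these initiation methods cannot be ended via StopContact
# (per the list above); chat and task contacts can always be ended.
BLOCKED_VOICE_METHODS = {"DISCONNECT", "TRANSFER", "QUEUE_TRANSFER"}

def can_stop_contact(channel: str, initiation_method: str) -> bool:
    """Return True if the End contact / StopContact action is allowed."""
    if channel == "VOICE":
        return initiation_method not in BLOCKED_VOICE_METHODS
    return True  # CHAT and TASK contacts

def build_stop_request(instance_id: str, contact_id: str) -> dict:
    # Parameters for boto3.client("connect").stop_contact(**params)
    return {"InstanceId": instance_id, "ContactId": contact_id}
```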

## Required permissions
Required permissions

1. Enable one of the following permissions to view contacts on the **Contact search** and **Contact details** pages:

   1. **Contact search - View**: Allows a user to view all contacts. 

   1. **View my contacts - View**: Allows agents to view contacts that they themselves have handled.

1. **Restrict contact access** (Optional): Restrict a user's access to contacts on the **Contact search** and **Contact details** pages within their own hierarchy group or any hierarchy groups below them. For more information about this permission, see [Manage who can search for contacts and access detailed information](contact-search.md#required-permissions-search-contacts).

1. **End Contact**: Enables a user to end contacts on the **Analytics & Optimization** pages. The following image shows the **Contact Actions - End contact** permission.  
![\[The end contact permission.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-end-permissions.png)

## How to end an in-progress contact
How to end an in-progress contact

1. Log in to Amazon Connect with a user account that has [permissions to access contact records](contact-search.md#required-permissions-search-contacts). 

1. In Amazon Connect choose **Analytics and optimization**, **Contact search**.

1. Select the **Contact status** filter and change the selected value to **In progress**. 

1. Choose an in-progress contact to view its details.

1. On the **Contact details** page choose **Actions**, **End**.  
![\[The contact details page, the end option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-details-contact-end-action.png)

1. Confirm the action to end the contact by choosing **End**.

1. When the contact is ended successfully, the page automatically refreshes.

# Integrate Amazon Connect Contact Lens with external voice systems
Integrate Contact Lens with external voice systems

Migrating a contact center from an external system to the cloud can be complicated. It requires moving many different components such as telephony, IVR, ACD, call recording, call analytics, and more. By integrating your external system with Contact Lens for analytics, however, you can accelerate your migration to Amazon Connect. Here's how this first step can benefit your business:
+ Contact Lens integration enhances your existing external contact center recording and analytics capabilities.
+ It provides an opportunity to train your contact center administrators, managers, and agents on Amazon Connect. 
+ Contact Lens helps uncover key trends, issues, and themes from customer interactions happening across multiple voice systems such as external contact centers or customer-facing voice solutions (for example, phone consults, financial advisors, or banking relationship managers).

The following diagram shows how the voice call audio flows between your external voice system and Contact Lens. You use the Contact Lens Connector to send a replica of your contact center audio to Contact Lens. The external call flow continues to operate as normal for your agents, while Contact Lens provides real-time and post-call analytics using the replicated call audio. 

![\[A conceptual diagram that shows how the voice call audio flows between your external voice system and Contact Lens.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-connector-diagram.png)


1. A call sent through PSTN lands on your external voice system.

1. A read-only copy of the call audio is forked into Amazon Connect.

1. A flow is started for the call. The Contact Lens connector routes the call to Amazon Connect Contact Lens.

## Requirements
Requirements

Before you start setting up Contact Lens integration, check that your Amazon Connect and external systems meet the following requirements:
+ Verify your Amazon Connect instance is created in a [supported AWS Region](regions.md#contactlens_region). Make sure your external voice system can connect to the Region.
+ Make sure the external device that initiates the SIPREC session and the voice system that is used for the call are supported. For a list of supported systems, see `ContactCenterSystemTypes` and `SessionBorderControllerTypes` in [PutVoiceConnectorExternalSystemsConfiguration](https://docs.aws.amazon.com/chime-sdk/latest/APIReference/API_voice-chime_PutVoiceConnectorExternalSystemsConfiguration.html) in the Amazon Chime SDK API Reference. Usually the device that initiates the SIPREC session is a Session Border Controller (SBC), and the voice system is your contact center.
+ Verify you have SIPREC support or the ability to add SIPREC to the source system that will send the SIPREC replica call audio to Contact Lens. 

## Set up steps
Set up steps

Following is a summary of the steps you'll take to set up Contact Lens integration with your external voice system. The linked topics provide more detail.
+ [Create an Amazon Connect instance](amazon-connect-instances.md) if you don't already have one.
  + You don't need to claim a phone number in Amazon Connect in order to integrate with Contact Lens. 
  + [Add agents](user-management.md) and [set up agent hierarchies](agent-hierarchy.md). This will help you to attribute the analytics generated by Contact Lens to specific agents. 
**Note**  
If no agent is identified for a call, the replica call in Contact Lens terminates. No recording or conversation analytics are produced. For more information, see [Provide call metadata for Contact Lens integration](callmetadata-contactlens-integration.md).
+ [Request service quota increases](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) for the following quotas in your Amazon Connect account: 
  + **Contact Lens connectors per account**
  + **Maximum active recording sessions from external voice systems per instance**
**Important**  
After your service quotas are requested and approved, Contact Lens integration will be visible in the Amazon Connect console and the Amazon Connect admin website.
+ [Create a Contact Lens connector](create-contact-lens-connector.md) in the Amazon Connect console.
+ [Configure your SBC](configure-external-voice-system.md) to send SIPREC audio to that connector host along with call metadata.
+ [Enable the Contact Lens connector on the Amazon Connect admin website](enable-contactlens-integration.md). You do this by assigning the following security profiles permissions to Admins and other users who need to access the Contact Lens connectors:
  + **Analytics and Optimization - Contact Lens connectors - View** and **Edit**. The **View** permission allows you to see the list of available Contact Lens connectors. The **Edit** permission allows you to associate flows with a Contact Lens connector.
  + **Channels and Flows - Flows - View**: This permission enables you to see the available flows you can associate with a Contact Lens connector.

  Only users who have these permissions will be able to access the Contact Lens connector on the Amazon Connect admin website.
+ Create a flow to specify how to process the call audio, including recording and live or post-call analytics, and [associate the flow with the Contact Lens connector](associate-contactlens-integration.md). 
+ Optionally, create a Lambda function that is invoked when the Amazon Connect flow is triggered. Use the Lambda function to parse the SIPREC request and additional call metadata, and take action. For more information, see [Call metadata for Contact Lens integrations](callmetadata-contactlens-integration.md).

# Create a Contact Lens connector to integrate with your external voice system
Create a Contact Lens connector

This topic explains how to create a Contact Lens connector to integrate with your external voice system. Complete the following steps.

1. Open the Amazon Connect console at [https://console.aws.amazon.com/connect/](https://console.aws.amazon.com/connect/).

1. On the instances page, choose the instance alias. The instance alias is also your **instance name**, which appears in your Amazon Connect URL. The following image shows the **Amazon Connect virtual contact center instances** page, with a box around the instance alias.  
![\[The Amazon Connect virtual contact center instances page, the instance alias.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/instance.png)

1. In the Amazon Connect console, in the navigation pane, choose **External voice systems**, **Contact Lens integrations**, and then choose **Create Contact Lens connector**, as shown in the following image.  
![\[The Contact Lens integrations page, the Create Contact Lens connector button.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-create-connector.png)

1. On the **Contact Lens connector** page, type a friendly name for the connector.

1. Under **Connector source type**, use the dropdown menu to select from a list of available connector source types. Usually this is an external Session Border Controller (SBC) that will initiate the SIPREC session. The following image shows a sample dropdown list of source types.  
![\[The Contact Lens connector page, the Connect source type dropdown list.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-connector-source-types.png)

1. Under **Voice system type**, use the dropdown list to select the voice system used for the call. Usually this is your external contact center system. The following image shows a sample dropdown list of voice system types.  
![\[The Contact Lens connector page, the voice system type dropdown list.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-voice-system-types.png)

1. Enable **Encryption** and **Logging** of the SIP and Media metric messages. 
   + Amazon Chime SDK Voice Connector uses TLS server certificates issued by Amazon Trust Services. Most modern operating systems trust Amazon Trust Services by default. If this is not the case for your SIP infrastructure and you enable encryption, you may need to add the Starfield and Amazon Trust Services root CA certificates, excluding the EU roots, to your trust stores. You can find these certificates [here](https://www.amazontrust.com/repository/). 
   + Although logging is optional, we recommend you enable it to help you debug integration issues.

1. In the **Source IP addresses** section, configure the range of source IP addresses that are allowed to send voice traffic to this connector.

1. In the **Credentials - optional** section, we recommend that you create credentials. They can help authenticate the SIPREC sessions.
**Note**  
If you do this, you'll need to provide the same credentials when you configure your external system.

1. Optionally, add tags to identify, organize, search for, filter, and control who can access this connector. For more information, see [Add tags to resources in Amazon Connect](tagging.md).

1. Choose **Create Contact Lens connector** to create the connector. After the connector is created, a success message is displayed.

1. On the **Contact Lens integrations** page you'll see the short host name. This is the host that your external voice system will send SIPREC voice traffic to. 

   When you configure your external voice system, you'll use the fully qualified domain name of the host, not this short host name.   
![\[The Contact Lens integrations page, the short host name of the connector.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-connector-shorthostname.png)

1. You're done creating the Contact Lens connector. Continue to the next step: [Configure your external voice system for integration with Contact Lens](configure-external-voice-system.md).

# Configure your external voice system for integration with Contact Lens
Configure your external voice system

After you [create a Contact Lens connector](create-contact-lens-connector.md) you need to configure your external voice system to point to the connector. Complete the following steps.

1. In the Amazon Connect console navigation pane, choose **External voice systems**, **Contact Lens integrations**. You'll see the name of available Contact Lens connectors. Select the one you want to use. The following image shows an example Contact Lens connector named **MyTestConnector**.  
![\[The Contact Lens integrations page, an example connector named MyTestConnector.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-connector-name.png)

1. On the connector details page, note the fully qualified host name. This is the name of the host in Amazon Connect that will receive the SIPREC audio. The following image shows an example fully qualified host name.  
![\[The MyTestConnector details page, the fully qualified name of the host that will receive the SIPREC audio.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-connector-detailspage.png)

1. For information about how to configure your external source system, go to the [Amazon Chime SDK resources](https://aws.amazon.com/chime/chime-sdk/resources/?whats-new-chime-sdk.sort-by=item.additionalFields.postDateTime&whats-new-chime-sdk.sort-order=desc) page, and choose **Configuration Guides**. Scroll down the page to **SIPREC/NBR Configuration Guides**, as shown in the following image.  
![\[The Configuration Guides on the Amazon Chime SDK resource page.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/configuration-guides.png)
**Note**  
If you created credentials for the connector, you need to use the same credentials for your external system.

1. After you configure your external source system, continue to the next step: [enable Contact Lens integration](enable-contactlens-integration.md).

# Model contact transfers and conferencing in Amazon Connect
Model contact transfers and conferencing

This topic is for developers who have integrated their external voice system with Amazon Connect Contact Lens. 

Your external voice system may support contact transfers (cold and warm) and conferencing multiple agents in a single call. You can signal these cases to Amazon Connect by calling the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) and [StopContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_StopContact.html) APIs. These APIs create a contact chain similar to native Amazon Connect voice contacts. Each leg of the call will get its own recording, contact record, and analytics, just like native Amazon Connect voice contacts. 

Each agent-customer interaction is modeled by an independent contact segment.
+ To model adding an agent to an ongoing call, you create a new contact segment using the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) API with initiation method `TRANSFER`. Transfer contacts are linked to the previous contact by their `previousContactId`. 
+ If enabled, call recordings are generated independently for each contact segment and delivered upon completion of that segment.
+ Contact Lens real-time and post-call analytics are generated for each contact segment independently. 
+ A contact record is generated for each independent contact segment.
+ To model an agent leaving a call, you can end a contact segment by calling the [StopContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_StopContact.html) API.

## Workflow for warm transfer
Workflow for warm transfer

Warm transfers involve putting the customer on hold while the agent introduces the caller to another party.

To model a warm transfer using the contact APIs, implement the following workflow:

1. A call in your external voice system creates an initial contact segment.

1. When the new agent joins the call, invoke the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) API. Use the initial contact segment's `contactId` as the `PreviousContactId` parameter. Provide the new agent's ID in the `UserInfo` parameter.

1. Let the initial agent introduce the new agent to the call and then disconnect from the call.

1. When the initial agent disconnects from the call, invoke the [StopContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_StopContact.html) API.

1. When the call ends in your external voice system (upon SIP BYE), the contact chain ends.
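The warm-transfer steps above can be sketched as a sequence of API calls. This is an illustrative sketch: the parameter names follow the CreateContact and StopContact API references, but treat the exact request shapes as an assumption to verify against those references, and the function name is hypothetical.

```python
# Illustrative sketch of the warm-transfer sequence: each entry is
# (API name, parameters for the corresponding boto3 "connect" client call).
def warm_transfer_calls(instance_id, initial_contact_id, new_agent_id):
    create_params = {
        "InstanceId": instance_id,
        "Channel": "VOICE",
        "InitiationMethod": "TRANSFER",
        "PreviousContactId": initial_contact_id,  # links the new segment to the chain
        "UserInfo": {"UserId": new_agent_id},     # the agent joining the call
    }
    stop_params = {
        "InstanceId": instance_id,
        "ContactId": initial_contact_id,          # end the initial agent's segment
    }
    # CreateContact when the new agent joins; StopContact when the
    # initial agent disconnects after the introduction.
    return [("CreateContact", create_params), ("StopContact", stop_params)]
```

For a cold transfer, the order reverses: StopContact for the initial segment first, then CreateContact for the new segment.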

## Workflow for cold transfer
Workflow for cold transfer

Cold transfers involve directly transferring the customer from one agent to another without any introduction or context shared between them.

To model a cold transfer using the contact APIs, implement the following workflow:

1. A call in your external voice system creates an initial contact segment.

1. When the initial agent disconnects from the call, invoke the [StopContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_StopContact.html) API.

1. When the new agent joins the call, invoke the [CreateContact](https://docs.aws.amazon.com/connect/latest/APIReference/API_CreateContact.html) API. Use the initial contact segment's `contactId` as the `PreviousContactId` parameter. Provide the new agent's ID in the `UserInfo` parameter.

1. When the call ends in your external voice system (upon SIP BYE), the contact chain ends.

## Contact segment limits
Contact segment limits

You can have up to two concurrent contact segments and 10 total contact segments in a chain.
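These limits can be sketched with a small tracker on the client side. This is an illustrative sketch only (the class is not part of any SDK); it mirrors the two limits stated above.

```python
# Illustrative tracker for the stated limits: at most 2 concurrent contact
# segments and at most 10 total segments in one contact chain.
class ContactChain:
    MAX_CONCURRENT = 2
    MAX_TOTAL = 10

    def __init__(self):
        self.total = 0
        self.active = set()

    def create_segment(self, contact_id):
        """Record a CreateContact call for a new segment in this chain."""
        if len(self.active) >= self.MAX_CONCURRENT:
            raise RuntimeError("Too many concurrent contact segments")
        if self.total >= self.MAX_TOTAL:
            raise RuntimeError("Contact chain segment limit reached")
        self.active.add(contact_id)
        self.total += 1

    def stop_segment(self, contact_id):
        """Record a StopContact call ending a segment."""
        self.active.discard(contact_id)
```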

# Enable Amazon Connect Contact Lens integration
Enable Contact Lens integration

After you create a Contact Lens connector, you need to enable the integration by assigning users security profile permissions so they can access it on the Amazon Connect admin website.

1. Log in to the Amazon Connect admin website at https://*instance name*.my.connect.aws/ using an Admin account.

1. On the navigation bar, choose **Security profiles**. On the **Manage security profiles** page, choose **Admin**, **Edit**. 

1. On the **Edit security profile** page, choose **Channels and Flows** - **Contact Lens connectors** - **View** and **Edit** permissions, and then choose **Save**. 
**Important**  
If you don't see the Contact Lens connectors permission under **Channels and Flows**, request service quota increases for the following quotas in your Amazon Connect account:  
Contact Lens connectors per account
Maximum active recording sessions from external voice systems per instance

1. Assign this permission to the security profiles for users who you want to access the Contact Lens connectors. 
**Note**  
You can delete the last Contact Lens connector in your Amazon Connect instance only after access to the Contact Lens connectors is removed from the users of that instance.  
If you attempt to delete the last Contact Lens connector without first removing that access, the following error message is displayed: **error - Failed to delete connector *connector-name* with error: An analytics connector permission is being used in a security profile**.

1. After you apply the permission, users who have it will be able to see the **Contact Lens connectors** option in the Amazon Connect admin website left navigation menu, as shown in the following image.  
![\[The left menu on the Amazon Connect admin website, the Contact Lens option.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contact-lens-connector-menuitem.png)

1. You're done enabling the Contact Lens connector. Continue to the next step: [associate a Contact Lens connector with a flow](associate-contactlens-integration.md).

# Associate a Contact Lens connector with a flow
Associate a Contact Lens connector with a flow

After you have [configured](configure-external-voice-system.md) your external SBC to point to the Contact Lens integration connector host, you need to configure how the audio is processed when it reaches Amazon Connect Contact Lens. To do this, define the audio processing steps in an Amazon Connect flow. The flow specifies the steps the call audio goes through, including invoking Contact Lens conversational analytics.

Complete the following steps to create a flow that enables Contact Lens, and then associate the flow with the Contact Lens connector. This flow will be invoked when the Contact Lens connector receives call audio.

1. In the Amazon Connect admin website, create a flow that uses the [Set recording and analytics behavior](set-recording-behavior.md) block. Configure the block to enable **Agent and customer voice recording**, **Contact Lens speech analytics**, and **Automated interaction call recording**. End the flow with the [End flow / Resume](end-flow-resume.md) block. This configuration is shown in the following image. 

   For a list of blocks you can use in a Contact Lens integration, see [Supported flow blocks for Contact Lens integration](contactlens-integration-supportedflowblocks.md).   
![\[The properties page of the Set recording behavior and analytics block.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-connector-setblock.png)

   For detailed instructions, see [Enable conversational analytics](enable-analytics.md).

1. On the navigation menu, choose **Channels**, **Contact Lens connectors**. Choose the Contact Lens integration connector that you want to associate with the flow. In the **Flow name** field, start typing the name of your flow to display a list, and then choose the flow.   
![\[The Connectors page, a list of available flows.\]](http://docs.aws.amazon.com/connect/latest/adminguide/images/contactlens-connector-flow.png)

# Provide call metadata for Contact Lens integration
Call metadata for Contact Lens integrations

In Amazon Connect, each interaction with a customer is an Amazon Connect contact. Each voice session that comes through the Contact Lens connector creates an Amazon Connect contact, using the fields provided in the call metadata. The call metadata includes the agent user ID and agent queue ID for the streamed call. 

You can provide the agent user ID and other call metadata to the Contact Lens connector by using supported SIPREC metadata parameters within the SIP INVITE of the audio stream session. The connector parses the following call metadata fields and adds this information to the Amazon Connect contact.


| Call State Field | SIPREC Metadata | Value | If not provided | 
| --- | --- | --- | --- | 
| Agent user ID | AmznConnectAgentUserId | Amazon Connect agent user ID | Required. If not provided, the connector session is terminated. | 
| Queue ID | AmznConnectQueueId | Amazon Connect queue ID | Optional. If not provided, the default queue of the Amazon Connect instance is used. | 
| Participant order | AmznConnectParticipantOrder | Valid values: asc, desc | Optional. If not provided, ascending order is used. Amazon Connect sorts the SIPREC streams by using labels. The first stream in label order is the agent and the second is the caller. | 
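The participant-order behavior can be sketched as follows. This is a simplification for illustration: real stream labels come from the SIPREC metadata, and the function name is hypothetical.

```python
# Simplified sketch of AmznConnectParticipantOrder handling: sort the SIPREC
# stream labels, then treat the first as the agent and the second as the
# caller (ascending, the default), or the reverse for descending.
def assign_participants(stream_labels, order="asc"):
    first, second = sorted(stream_labels)[:2]
    if order == "desc":
        first, second = second, first
    return {"agent": first, "caller": second}
```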

A contact must have an Amazon Connect agent user ID. Contact Lens starts capturing the streamed audio, and generating call recordings and call analysis, only when the agent user ID is provided. 

If the agent user ID is missing, the Amazon Connect Contact Lens connector session is terminated. If your SIPREC metadata is not parsed automatically by the Amazon Connect Contact Lens connector and the agent user ID is not set, you can create a flow Lambda function and access all the SIP and SIPREC metadata by using the following fields:


| Attribute | Description | JSONPath Reference | 
| --- | --- | --- | 
| SIPREC metadata | SIPREC metadata from the SIP event | $.Media.Sip.SiprecMetadata | 
| SIP header | SIP header from the SIP event. *SIP header name* is the name of the SIP header provided in the SIP event, for example, "To", "From", and others. | $.Media.Sip.Headers.*SIP header name* | 

For more information, see [Telephony call metadata attributes (call attributes)](connect-attrib-list.md#telephony-call-metadata-attributes).
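A flow Lambda function that reads these fields can be sketched as follows. The event shape here is an assumption inferred from the JSONPath references above (`$.Media.Sip.SiprecMetadata` and `$.Media.Sip.Headers.<SIP header name>`); verify it against the contact attributes list before relying on it.

```python
# Illustrative flow Lambda: reads the SIPREC metadata and a SIP header from
# the incoming event and returns them as key-value pairs, which a
# Set contact attributes block can then store on the contact.
def lambda_handler(event, context):
    # Assumed event shape: the Sip fields arrive under Details.ContactData.
    sip = (
        event.get("Details", {})
        .get("ContactData", {})
        .get("Media", {})
        .get("Sip", {})
    )
    return {
        "siprec_metadata": sip.get("SiprecMetadata", ""),
        "sip_from": sip.get("Headers", {}).get("From", ""),
    }
```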

## How to use event metadata
How to use event metadata

Amazon Connect publishes SIP, streaming, and contact events. These events include the metadata gathered from the SIPREC SIP INVITE of the calls. The metadata includes the SIPREC Metadata, SIP headers, fromNumber, toNumber, and others. Here are some things you can do with this event metadata:

1. You can process the metadata in these events to determine your own unique identifier for the calls and correlate the calls with your own system.

1. You can then add your unique identifier for the call to the call's contact attributes by using the [Set contact attributes](set-contact-attributes.md) block.

1. You can search by custom contact attributes in the Amazon Connect admin website to find the contact for the third-party call in your Amazon Connect instances.

For information about how to create Amazon Connect flow Lambda functions, see [Grant Amazon Connect access to your AWS Lambda functions](connect-lambda-functions.md). For a list of all the supported contact attributes that you can access in your flow Lambda, see [List of available contact attributes in Amazon Connect and their JSONPath references](connect-attrib-list.md).
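Step 1 above can be sketched as a small helper that derives a stable correlation ID from the SIP headers. This is an illustrative sketch: the choice of the Call-ID header is an assumption about what your external system provides, and the function name is hypothetical.

```python
import hashlib

# Illustrative helper: derive a short, stable correlation ID for the call
# from its SIP headers, suitable for storing via a Set contact attributes
# block and searching on later.
def correlation_id(sip_headers: dict) -> str:
    call_id = sip_headers.get("Call-ID", "")  # assumed header from your SBC
    return hashlib.sha256(call_id.encode()).hexdigest()[:16]
```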

# Supported flow blocks for Contact Lens integration
Supported flow blocks for Contact Lens integration

The following tables list the flow blocks that you can use to specify how Amazon Connect processes the audio stream sessions. 

**Set blocks**


| Flow block | Effect | Description | 
| --- | --- | --- | 
| Set Working Queue | No Effect | Sets the working queue | 
| Set Contact Attributes | Supported | Stores key-value pairs as contact attributes. You set a value that is later referenced in a flow. | 
| Get Queue Metrics | No Effect | Gets queue metrics | 
| Change routing priority/age | No Effect | Changes the routing priority of the contact | 
| Set Hold Flow | No Effect | Specifies the flow to invoke when a customer or agent is put on hold. | 
| Set Whisper Flow | No Effect | Specifies the flow to invoke when a customer or agent joins a voice or chat conversation. | 
| Set Callback Number | No Effect | Specifies the attribute that sets the callback number. | 
| Set Voice | No Effect | Sets the text-to-speech (TTS) language and voice to use for the contact flow. | 
| Set Customer Queue | No Effect | Sets the customer queue for the customer queue flow | 
| Set Disconnect Flow | No Effect | Sets the disconnect flow for the disconnect queue flow | 
| Set event flow | No Effect | Specifies which flow to run during a contact event. | 
| Set routing criteria | No Effect | Sets the routing criteria for the contact. | 

**Analyze blocks**


| Flow block | Effect | Description | 
| --- | --- | --- | 
| Set Recording and Analytics behavior | Supported | Sets options for recording and enables features in Contact Lens. | 
| Set logging behavior | Supported | Enables or disables flow logs | 

**Logic blocks**


| Flow block | Effect | Description | 
| --- | --- | --- | 
| Distribute by percentage | Supported | Routes contacts randomly based on a percentage | 
| Loop | Supported | Executes the looping branch a specified number of times | 

**Branch blocks**


| Flow block | Effect | Description | 
| --- | --- | --- | 
| Check Queue Status | No Effect | Checks Queue Status | 
| Check Staffing | No Effect | Checks staffing in queues | 
| Check hours of operation | Supported | Branches based on specified hours of operation. | 
| Check Contact Attributes | Supported | Branches based on a comparison to the value of a contact attribute. | 

**Integrate blocks**


| Flow block | Effect | Description | 
| --- | --- | --- | 
| Create Task | Supported | Creates a new task manually or by leveraging a task template. | 
| Customer profiles | Supported | Enables you to retrieve, create, and update a customer profile. | 
| Invoke AWS Lambda | Supported | Calls AWS Lambda, and optionally returns key-value pairs. | 
| Invoke module | Supported | Calls a published module, which enables you to create reusable sections of a contact flow. | 

**Terminate/Transfer blocks**


| Flow block | Effect | Description | 
| --- | --- | --- | 
| Disconnect/Hangup | Supported | Disconnects the contact and ends the audio stream session. | 
| End Flow | Supported | Ends the current flow without disconnecting the contact. | 

# Set up multi-region redundancy for Contact Lens integration
Set up multi-region redundancy for Contact Lens integration

Multi-region redundancy enables you to scale your external voice system for the highest reliability, performance, and efficiency. You can support multi-region redundancy by using an Amazon Connect replica instance. 

## Active/Passive redundancy configuration


You can create one Amazon Connect instance in one Region (for example, US East (N. Virginia)) and a replica instance in another Region (for example, US West (Oregon)). You can then configure your external voice system to send the SIPREC SIP INVITE to the primary Region. If the Amazon Connect instance in the primary Region fails, you can update your external voice system to fail over to the replica Amazon Connect instance in the passive Region.

## Active/Active redundancy configuration


You can implement the active-active strategy by concurrently streaming audio to both Amazon Connect instances. To implement this strategy, configure your external voice system to concurrently stream audio to the two separate Regions. In each Region, Contact Lens integration will do the following:

1. Create its own Amazon Connect contact.

1. Capture the audio stream to create call recordings.

1. Perform Contact Lens analysis

This approach requires you to replicate all the Amazon Connect contact center configurations manually. Alternatively, you can use Amazon Connect Global Resiliency, which replicates all the Amazon Connect instance settings across the Regions automatically. For more information, see [Set up Amazon Connect Global Resiliency](setup-connect-global-resiliency.md). 