# Troubleshoot Connect AI agent issues
<a name="ts-ai-agents-self-service"></a>

Use this topic to help diagnose and resolve common issues with Connect AI agents.

**Topics**
+ [Logging and tracing for Connect AI agents](viewing-logs-for-connect-ai-agents-self-service.md)
+ [Troubleshoot agentic self-service issues](ts-agentic-self-service.md)
+ [Common issues](ts-common-self-service-issues.md)
+ [(Legacy) Self-service issues](ts-non-agentic-self-service.md)

# Logging and tracing for Connect AI agents
<a name="viewing-logs-for-connect-ai-agents-self-service"></a>

To troubleshoot Connect AI agent issues effectively, use the following logging and tracing options.
+ **ListSpans API (recommended for orchestrator AI agents)**: Use the [ListSpans](https://docs.aws.amazon.com/connect/latest/APIReference/API_amazon-q-connect_ListSpans.html) API to retrieve AI agent execution traces for a session. This is the recommended starting point for debugging orchestrator AI agent interactions: it provides granular visibility into agent orchestration flows, LLM interactions, and tool invocations, so you can trace how the AI agent reasoned through a request and which tools it selected and executed.
+ **CloudWatch Logs**: Enable CloudWatch Logging for your Connect AI agents by following the steps in [Monitor Connect AI agents](monitor-ai-agents.md).

  Legacy self-service interactions generate log entries with the event type `TRANSCRIPT_SELF_SERVICE_MESSAGE` in the following format:

  ```
  {
      "assistant_id": "{UUID}",
      "event_timestamp": 1751414298692,
      "event_type": "TRANSCRIPT_SELF_SERVICE_MESSAGE",
      "session_id": "{UUID}",
      "utterance": "[CUSTOMER]...",
      "prompt": "{prompt used}",
      "prompt_type": "SELF_SERVICE_PRE_PROCESS|SELF_SERVICE_ANSWER_GENERATION",
      "completion": "{Response from model}",
      "model_id": "{model id e.g.: us.amazon.nova-pro-v1:0}",
      "session_message_id": "{UUID}",
      "parsed_response": "{model response}"
  }
  ```

  Agentic self-service interactions generate log entries with the event type `TRANSCRIPT_LARGE_LANGUAGE_MODEL_INVOCATION`. These entries include the full orchestration context such as the prompt with tool configurations, conversation history with tool calls and results, the model completion, and the AI agent configuration. The following example shows the key fields:

  ```
  {
      "assistant_id": "{UUID}",
      "event_timestamp": 1772748470993,
      "event_type": "TRANSCRIPT_LARGE_LANGUAGE_MODEL_INVOCATION",
      "session_id": "{UUID}",
      "prompt": "{full prompt including system instructions, tool configs, and conversation history}",
      "prompt_type": "ORCHESTRATION",
      "completion": "{model response with message and tool use}",
      "model_id": "{model id e.g.: us.anthropic.claude-haiku-4-5-20251001-v1:0}",
      "parsed_response": "{parsed customer-facing message}",
      "generation_id": "{UUID}",
      "ai_agent_id": "{UUID}"
  }
  ```
+ **Amazon Lex logging (self-service only)**: Enable Amazon Lex logging by following the steps in [Logging errors with error logs in Amazon Lex V2](https://docs.aws.amazon.com/lexv2/latest/dg/error-logs.html). 
+ **Amazon Connect logging**: Enable Amazon Connect logging by adding a [Set logging behavior](set-logging-behavior.md) flow block in your Amazon Connect flow.
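
When reviewing exported CloudWatch log events, it can help to route entries by event type before inspecting them. The following is a minimal sketch; the field names come from the log formats shown above, but the sample entry itself is illustrative:

```python
import json

def summarize_log_entry(raw_event: str) -> dict:
    """Extract the fields most useful for debugging from a Connect AI agent log entry."""
    entry = json.loads(raw_event)
    event_type = entry.get("event_type", "")
    summary = {
        "event_type": event_type,
        "session_id": entry.get("session_id"),
        "model_id": entry.get("model_id"),
        "prompt_type": entry.get("prompt_type"),
    }
    if event_type == "TRANSCRIPT_LARGE_LANGUAGE_MODEL_INVOCATION":
        # Agentic entries reference the AI agent configuration.
        summary["ai_agent_id"] = entry.get("ai_agent_id")
    elif event_type == "TRANSCRIPT_SELF_SERVICE_MESSAGE":
        # Legacy entries carry the customer utterance.
        summary["utterance"] = entry.get("utterance")
    return summary

# Illustrative legacy entry (IDs and text are placeholders).
sample = json.dumps({
    "event_type": "TRANSCRIPT_SELF_SERVICE_MESSAGE",
    "session_id": "1111-2222",
    "model_id": "us.amazon.nova-pro-v1:0",
    "prompt_type": "SELF_SERVICE_PRE_PROCESS",
    "utterance": "[CUSTOMER] Where is my order?",
})
print(summarize_log_entry(sample)["prompt_type"])  # SELF_SERVICE_PRE_PROCESS
```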

# Troubleshoot agentic self-service issues
<a name="ts-agentic-self-service"></a>

The following issues are specific to [agentic self-service](agentic-self-service.md).

## AI agent is not responding to customers
<a name="ts-ai-agent-not-responding"></a>

If your AI agent is processing requests but customers are not seeing any responses, the orchestration prompt may be missing the required message formatting instructions.

Orchestrator AI agents only display messages to customers when the model's response is wrapped in `<message>` tags. If your prompt does not instruct the model to use these tags, responses will not be rendered to the customer.

**Solution**: Ensure your orchestration prompt includes formatting instructions that require the model to wrap responses in `<message>` tags. For more information, see [Message parsing](use-orchestration-ai-agent.md#message-parsing).

## MCP tool invocation failures
<a name="ts-mcp-tool-failures"></a>

If your AI agent fails to invoke MCP tools during a conversation, check the following:
+ **Security profile permissions** – Verify that the AI agent's security profile grants access to the specific MCP tools it needs. The AI agent can only invoke tools it has explicit permission to access.
+ **Gateway connectivity** – Confirm that the Amazon Bedrock AgentCore Gateway is correctly configured and that the discovery URL is valid. Verify that the inbound authentication audiences are set to the gateway ID. Check the gateway status in the AgentCore console.
+ **API endpoint health** – Verify that the backend API or Lambda function behind the MCP tool is running and responding correctly. Check CloudWatch Logs for errors in the target service.

## IAM permissions for MCP tools
<a name="ts-mcp-iam-permissions"></a>

If MCP tool calls return access denied errors, verify that the IAM roles have the required permissions:
+ **Amazon Bedrock AgentCore Gateway role** – The gateway's execution role must have permission to invoke the backend APIs or Lambda functions that your MCP tools connect to.
+ **Amazon Connect service-linked role** – The Amazon Connect service-linked role must have permission to invoke the Amazon Bedrock AgentCore Gateway.
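
For example, if your MCP tools target Lambda functions, the gateway's execution role needs a statement like the following. This is an illustrative sketch; the function ARN is a placeholder:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:my-mcp-tool-backend"
        }
    ]
}
```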

# Common issues
<a name="ts-common-self-service-issues"></a>

## Bundle the latest AWS SDK with your Lambda functions
<a name="ts-lambda-sdk-bundling"></a>

If you are calling Connect AI agents APIs directly from Lambda functions, you must package and bundle the latest version of the AWS SDK along with your function code. The Lambda runtime environment may include an older version of the SDK that does not support the latest Connect AI agents API models and features.

**Symptoms**: With an outdated SDK version, you may see parameter validation exceptions, or request input parameters may be silently ignored.

To avoid API model drift, include the latest AWS SDK as a dependency in your deployment package or as a Lambda layer rather than relying on the SDK provided by the Lambda runtime. The steps to bundle the SDK vary by language. For example, for Node.js, see [Creating a deployment package with dependencies](https://docs.aws.amazon.com/lambda/latest/dg/nodejs-package.html#nodejs-package-create-dependencies). For other languages, refer to the corresponding Lambda deployment packaging documentation. To share the SDK across multiple functions, see [Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html).
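
For example, a Node.js function that calls the Amazon Q in Connect APIs can declare the SDK client as an explicit dependency in its `package.json`, so the bundled version is used instead of the runtime's. This sketch assumes you are using the modular AWS SDK for JavaScript v3 client package:

```
{
  "dependencies": {
    "@aws-sdk/client-qconnect": "^3.0.0"
  }
}
```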

# (Legacy) Self-service issues
<a name="ts-non-agentic-self-service"></a>

The following issues are specific to [legacy self-service](generative-ai-powered-self-service.md).

## Customers are unexpectedly receiving "Escalating to agent..."
<a name="customers-unexpectedly-receiving-escalating-to-agent"></a>

Unexpected agent escalation occurs when there's an error during the self-service bot interaction or when the model doesn't produce a valid `tool_use` response for `SELF_SERVICE_PRE_PROCESS`.

### Troubleshooting steps
<a name="escalation-ts-steps"></a>

1. **Check the Connect AI agent logs**: Examine the `completion` attribute in the associated log entry.

1. **Validate the stop reason**: Confirm that the `stop_reason` is `tool_use`.

1. **Verify parsed response**: Check if the `parsed_response` field is populated, as this represents the response you'll receive from the model.
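
If you have CloudWatch logging enabled, a CloudWatch Logs Insights query such as the following can surface the relevant entries. The field names match the log format shown earlier; adjust the filter values as needed:

```
fields @timestamp, prompt_type, completion, parsed_response
| filter event_type = "TRANSCRIPT_SELF_SERVICE_MESSAGE"
    and prompt_type = "SELF_SERVICE_PRE_PROCESS"
| sort @timestamp desc
| limit 20
```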

### Known issue with Claude 3 Haiku
<a name="known-issue-with-claude-3-haiku"></a>

If you're using Claude 3 Haiku for self-service pre-processing, there's a known issue where it generates the `tool_use` JSON as text, resulting in a `stop_reason` of `end_turn` instead of `tool_use`.

**Solution**: Update your custom prompt to wrap the `tool_use` JSON string inside `<tool>` tags by adding this instruction:

```
You MUST enclose the tool_use JSON in the <tool> tag
```

## Self-service chat or voice call is unexpectedly terminating
<a name="self-service-unexpectedly-terminating"></a>

This issue can occur because of Amazon Lex timeouts or an incorrect Amazon Nova Pro configuration. Both causes are described below.

### Timeouts from Amazon Lex
<a name="timeouts-from-amazon-lex"></a>
+ **Symptoms**: Amazon Connect logs show "Internal Server Error" for the [Get customer input](get-customer-input.md) block
+ **Cause**: Your self-service bot timed out because it did not return results within the 10-second limit. Timeout errors won't appear in Connect AI agent logs.
+ **Solution**: Simplify your prompt by removing complex reasoning to reduce processing time.

### Amazon Nova Pro configuration
<a name="amazon-nova-pro-configuration"></a>

If you're using Amazon Nova Pro for your custom AI prompts, ensure that the `tool_use` examples follow the [Python-compatible format](create-ai-prompts.md#nova-pro-aiprompt).