

# Connect to your data with integrations and datasets
<a name="connecting-to-data-examples"></a>

You can connect Amazon Quick Sight to different types of data sources. This includes data residing in Software-as-a-Service (SaaS) applications, flat files stored in Amazon S3 buckets, data from third-party services like Salesforce, and query results from Athena. Use the following examples to learn more about the requirements for connecting to specific data sources. 

**Topics**
+ [Creating a dataset using Amazon Athena data](create-a-data-set-athena.md)
+ [Using Amazon OpenSearch Service with Amazon Quick Sight](connecting-to-os.md)
+ [Creating a dataset using Amazon S3 files](create-a-data-set-s3.md)
+ [Creating a data source using Apache Spark](create-a-data-source-spark.md)
+ [Using Databricks in Quick Sight](quicksight-databricks.md)
+ [Creating a dataset using Google BigQuery](quicksight-google-big-query.md)
+ [Creating a dataset using a Google Sheets data source](create-a-dataset-google-sheets.md)
+ [Creating a dataset using an Apache Impala data source](create-a-dataset-impala.md)
+ [Creating a dataset using a Microsoft Excel file](create-a-data-set-excel.md)
+ [Creating a data source using Presto](create-a-data-source-presto.md)
+ [Using Snowflake with Amazon Quick Sight](connecting-to-snowflake.md)
+ [Using Starburst with Amazon Quick Sight](connecting-to-starburst.md)
+ [Creating a data source and data set from SaaS sources](connecting-to-saas-data-sources.md)
+ [Creating a dataset from Salesforce](create-a-data-set-salesforce.md)
+ [Using Trino with Amazon Quick Sight](connecting-to-trino.md)
+ [Creating a dataset using a local text file](create-a-data-set-file.md)
+ [Using Amazon Timestream data with Amazon Quick Sight](using-data-from-timestream.md)

# Creating a dataset using Amazon Athena data
<a name="create-a-data-set-athena"></a>

Use the following procedure to create a new dataset that connects to Amazon Athena data or to Athena Federated Query data.

**To connect to Amazon Athena**

1. Begin by creating a new dataset. Choose **Data** from the navigation pane at left.

1. Choose **Create**, then choose **New dataset**.

1. Do one of the following:

   1. To use an existing Athena connection profile (common), choose the card for the existing data source that you want to use. Choose **Select**. 

      Cards are labeled with the Athena data source icon and the name provided by the person who created the connection.

   1. To create a new Athena connection profile (less common), use the following steps:

      1. Choose **New data source**, then choose the **Athena** data source card.

      1. Choose **Next**.

      1. For **Data source name**, enter a descriptive name.

      1. For **Athena workgroup**, choose your workgroup.

      1. Choose **Validate connection** to test the connection.

      1. Choose **Create data source**.

      1. (Optional) Select an IAM role ARN for queries to run as. 

1. On the **Choose your table** screen, do the following:

   1. For **Catalog**, choose one of the following:
      + If you are using Athena Federated Query, choose the catalog you want to use.
      + Otherwise, choose **AwsDataCatalog**.

   1. Choose one of the following:
      + To write a SQL query, choose **Use custom SQL**. 
      + To choose a database and table, choose your catalog that contains your databases from the dropdown under **Catalog**. Then, choose a database from the dropdown under **Database** and choose a table from the **Tables** list that appears for your database.

   If you don't have the right permissions, you receive the following error message: "You don't have sufficient permissions to connect to this dataset or run this query." Contact your Quick Sight administrator for assistance. For more information, see [Authorizing connections to Amazon Athena](athena.md). 

1. Choose **Edit/preview data**. 

1. Create a dataset and analyze the data using the table by choosing **Visualize**. For more information, see [Analyses and reports: Visualizing data in Amazon Quick Sight](working-with-visuals.md). 
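If you script your setup instead of using the console, the same connection can be created with the QuickSight `CreateDataSource` API. The following Python sketch builds the request body only; the account ID, data source ID, name, and workgroup are placeholder values, and the commented boto3 call shows how the request would be sent:

```python
# Sketch: request body for the QuickSight CreateDataSource API with an
# Athena connection. All identifiers below are placeholder values.
def build_athena_data_source_request(account_id, data_source_id, name, workgroup):
    return {
        "AwsAccountId": account_id,
        "DataSourceId": data_source_id,
        "Name": name,
        "Type": "ATHENA",
        "DataSourceParameters": {
            "AthenaParameters": {"WorkGroup": workgroup},
        },
    }

request = build_athena_data_source_request(
    "111122223333", "athena-sales", "Athena Sales Data", "primary"
)
# With the AWS SDK for Python (boto3), the request would be sent as:
#   boto3.client("quicksight").create_data_source(**request)
```

The `WorkGroup` value corresponds to the **Athena workgroup** choice in the console procedure.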

# Using Amazon OpenSearch Service with Amazon Quick Sight
<a name="connecting-to-os"></a>

Following, you can find how to connect to your Amazon OpenSearch Service data using Amazon Quick Sight.

## Creating a new Quick Sight data source connection for OpenSearch Service
<a name="create-connection-to-es"></a>

Following, you can find how to connect to OpenSearch Service.

Before you can proceed, Amazon Quick Sight needs to be authorized to connect to Amazon OpenSearch Service. If connections aren't enabled, you get an error when you try to connect. A Quick Sight administrator can authorize connections to AWS resources. 

**To authorize Quick Sight to initiate a connection to OpenSearch Service**

1. Open the menu by choosing your profile icon at top right, then choose **Manage Quick Sight**. If you don't see the **Manage Quick Sight** option on your profile menu, ask your Quick Sight administrator for assistance.

1. Choose **Security & permissions**, **Add or remove**.

1. Enable the option for **OpenSearch**.

1. Choose **Update**.

After OpenSearch Service is accessible, you create a data source so people can use the specified domains.

**To connect to OpenSearch Service**

1. Begin by creating a new dataset. Choose **Data** from the navigation pane at left, then choose **Create** and **New dataset**.

1. Choose the **Amazon OpenSearch** data source card.

1. For **Data source name**, enter a descriptive name for your OpenSearch Service data source connection, for example `OpenSearch Service ML Data`. Because you can create many datasets from a connection to OpenSearch Service, it's best to keep the name simple.

1. For **Connection type**, choose the network you want to use. This can be a virtual private cloud (VPC) based on Amazon VPC or a public network. The list of VPCs contains the names of VPC connections, rather than VPC IDs. These names are defined by the Quick Sight administrator. 

1. For **Domain**, choose the OpenSearch Service domain that you want to connect to. 

1. Choose **Validate connection** to check that you can successfully connect to OpenSearch Service.

1. Choose **Create data source** to proceed.

1. For **Tables**, choose the one you want to use, then choose **Select** to continue. 

1. Do one of the following:
   + To import your data into the Quick Sight in-memory engine (called SPICE), choose **Import to SPICE for quicker analytics**. For information about how to enable importing OpenSearch data, see [Authorizing connections to Amazon OpenSearch Service](opensearch.md).
   + To allow Quick Sight to run a query against your data each time you refresh the dataset or use the analysis or dashboard, choose **Directly query your data**. 

     To enable autorefresh on a published dashboard that uses OpenSearch Service data, the OpenSearch Service dataset needs to use a direct query.

1. Choose **Edit/Preview** and then **Save** to save your dataset and close it.
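The console steps above map onto the QuickSight `CreateDataSource` API as well. In the following Python sketch, the account ID, domain name, and VPC connection ARN are placeholder values, and the `AMAZON_OPENSEARCH` type and parameter structure should be verified against the current API reference:

```python
# Sketch: request body for creating an OpenSearch Service data source.
# Account ID, domain name, and VPC connection ARN are placeholder values.
def build_opensearch_data_source_request(account_id, domain, vpc_connection_arn=None):
    request = {
        "AwsAccountId": account_id,
        "DataSourceId": "opensearch-ml-data",
        "Name": "OpenSearch Service ML Data",
        "Type": "AMAZON_OPENSEARCH",
        "DataSourceParameters": {
            "AmazonOpenSearchParameters": {"Domain": domain},
        },
    }
    # A VPC connection type corresponds to passing the VPC connection's ARN;
    # omitting it corresponds to the public network connection type.
    if vpc_connection_arn is not None:
        request["VpcConnectionProperties"] = {"VpcConnectionArn": vpc_connection_arn}
    return request

request = build_opensearch_data_source_request("111122223333", "my-opensearch-domain")
```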

## Managing permissions for OpenSearch Service data
<a name="dataset-permissions-for-es"></a>

The following procedure describes how to view, add, and revoke permissions to allow access to the same OpenSearch Service data source. The people that you add need to be active users in Quick Sight before you can add them. 

**To edit permissions on a data source**

1. Choose **Data** at left, then scroll down to find the data source card for your Amazon OpenSearch Service connection. An example might be `US Amazon OpenSearch Service Data`.

1. Choose the **Amazon OpenSearch** dataset.

1. On the dataset details page that opens, choose the **Permissions** tab.

   A list of current permissions appears.

1. To add permissions, choose **Add users & groups**, then follow these steps:

   1. Add users or groups to allow them to use the same dataset.

   1. When you're finished adding everyone that you want to add, choose the **Permissions** that you want to apply to them.

1. (Optional) To edit permissions, you can choose **Viewer** or **Owner**. 
   + Choose **Viewer** to allow read access.
   + Choose **Owner** to allow that user to edit, share, or delete this Quick Sight dataset. 

1. (Optional) To revoke permissions, choose **Revoke access**. After you revoke someone's access, they can't create new datasets from this data source. However, their existing datasets still have access to this data source.

1. When you are finished, choose **Close**.
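Permissions can also be granted programmatically with the QuickSight `UpdateDataSourcePermissions` API, which takes a list of actions per principal. The following Python sketch is illustrative only: the mapping of the **Viewer** and **Owner** choices to these action names is an assumption to verify against the API reference.

```python
# Assumed action sets for the console's Viewer and Owner permission levels.
VIEWER_ACTIONS = [
    "quicksight:DescribeDataSource",
    "quicksight:DescribeDataSourcePermissions",
    "quicksight:PassDataSource",
]
OWNER_ACTIONS = VIEWER_ACTIONS + [
    "quicksight:UpdateDataSource",
    "quicksight:DeleteDataSource",
    "quicksight:UpdateDataSourcePermissions",
]

def grant(principal_arn, owner=False):
    """Build one entry for the API's GrantPermissions list."""
    return {
        "Principal": principal_arn,
        "Actions": OWNER_ACTIONS if owner else VIEWER_ACTIONS,
    }
```

The same API also accepts a `RevokePermissions` list in the same shape, matching the **Revoke access** choice above.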

## Adding a new Quick Sight dataset for OpenSearch Service
<a name="create-dataset-using-es"></a>

After you have an existing data source connection for OpenSearch Service, you can create OpenSearch Service datasets to use for analysis. 

**To create a dataset using OpenSearch Service**

1. From the start page, choose **Data**, **Create**, **New dataset**.

1. Scroll down to the data source card for your OpenSearch Service connection. If you have many data sources, you can use the search bar at the top of the page to find your data source with a partial match on the name.

1. Choose the **Amazon OpenSearch** data source card, and then choose **Create data set**.

1. For **Tables**, choose the OpenSearch Service index that you want to use.

1. Choose **Edit/Preview**.

1. Choose **Save** to save and close the dataset. 

## Adding OpenSearch Service data to an analysis
<a name="open-analysis-add-dataset-for-es"></a>

After you have an OpenSearch Service dataset available, you can add it to a Quick Sight analysis. Before you begin, make sure that you have an existing dataset that contains the OpenSearch Service data that you want to use.

**To add OpenSearch Service data to an analysis**

1. Choose **Analyses** at left.

1. Do one of the following:
   + To create a new analysis, choose **New analysis** at right. 
   + To add to an existing analysis, open the analysis that you want to edit. 
     + Choose the pencil icon at top left.
     + Choose **Add data set**.

1. Choose the OpenSearch Service dataset that you want to add. 

   For information on using OpenSearch Service in visualizations, see [Limitations for using OpenSearch Service](#limitations-for-es). 

1. For more information, see [Working with analyses](https://docs.aws.amazon.com/quicksight/latest/user/working-with-analyses.html).

## Limitations for using OpenSearch Service
<a name="limitations-for-es"></a>

The following limitations apply to using OpenSearch Service datasets:
+ OpenSearch Service datasets support a subset of the visual types, sort options, and filter options.
+ To enable autorefresh on a published dashboard that uses OpenSearch Service data, the OpenSearch Service dataset needs to use a direct query.
+ Multiple subquery operations aren't supported. To avoid errors during visualization, don't add multiple fields to a field well, use one or two fields per visualization, and avoid using the **Color** field well.
+ Custom SQL isn't supported.
+ Cross-dataset joins and self joins aren't supported.
+ Calculated fields aren't supported. 
+ Text fields aren't supported. 
+ The "other" category isn't supported. If you use an OpenSearch Service dataset with a visualization that supports the "other" category, disable the "other" category by using the menu on the visual. 

# Creating a dataset using Amazon S3 files
<a name="create-a-data-set-s3"></a>

To create a dataset using one or more text files (.csv, .tsv, .clf, or .elf) from Amazon S3, create a manifest for Quick Sight. Quick Sight uses this manifest to identify the files that you want to use and the upload settings needed to import them. When you create a dataset using Amazon S3, the file data is automatically imported into [SPICE](spice.md).

You must grant Quick Sight access to any Amazon S3 buckets that you want to read files from. For information about granting Quick Sight access to AWS resources, see [Configuring Amazon Quick Sight access to AWS data sources](access-to-aws-resources.md).

**Topics**
+ [Supported formats for Amazon S3 manifest files](supported-manifest-file-format.md)
+ [Creating Amazon S3 datasets](create-a-data-set-s3-procedure.md)
+ [Datasets using S3 files in another AWS account](using-s3-files-in-another-aws-account.md)

# Supported formats for Amazon S3 manifest files
<a name="supported-manifest-file-format"></a>

You use JSON manifest files to specify files in Amazon S3 to import into Quick Sight. These JSON manifest files can use either the Quick Sight format described following or the Amazon Redshift format described in [Using a manifest to specify data files](https://docs.aws.amazon.com/redshift/latest/dg/loading-data-files-using-manifest.html) in the *Amazon Redshift Database Developer Guide*. You don't have to use Amazon Redshift to use the Amazon Redshift manifest file format. 

If you use a Quick Sight manifest file, it must have a .json extension, for example `my_manifest.json`. If you use an Amazon Redshift manifest file, it can have any extension. 

If you use an Amazon Redshift manifest file, Quick Sight handles the optional `mandatory` flag the same way that Amazon Redshift does. If an associated file isn't found, Quick Sight ends the import process and returns an error. 

Files that you select for import must be delimited text (for example, .csv or .tsv), log (.clf), or extended log (.elf) format, or JSON (.json). All files identified in one manifest file must use the same file format. Plus, they must have the same number and type of columns. Quick Sight supports UTF-8 file encoding, but not UTF-8 with byte-order mark (BOM). If you are importing JSON files, then for `globalUploadSettings` specify `format`, but not `delimiter`, `textqualifier`, or `containsHeader`.

Make sure that any files that you specify are in Amazon S3 buckets that you have granted Quick Sight access to. For information about granting Quick Sight access to AWS resources, see [Configuring Amazon Quick Sight access to AWS data sources](access-to-aws-resources.md).

## Manifest file format for Quick Sight
<a name="quicksight-manifest-file-format"></a>

Quick Sight manifest files use the following JSON format.

```
{
    "fileLocations": [
        {
            "URIs": [
                "uri1",
                "uri2",
                "uri3"
            ]
        },
        {
            "URIPrefixes": [
                "prefix1",
                "prefix2",
                "prefix3"
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "JSON",
        "delimiter": ",",
        "textqualifier": "'",
        "containsHeader": "true"
    }
}
```

Use the fields in the `fileLocations` element to specify the files to import, and the fields in the `globalUploadSettings` element to specify import settings for those files, such as field delimiters. 

The manifest file elements are described following:
+ **fileLocations** – Use this element to specify the files to import. You can use either or both of the `URIs` and `URIPrefixes` arrays to do this. You must specify at least one value in one of them.
  + **URIs** – Use this array to list URIs for specific files to import.

    Quick Sight can access Amazon S3 files that are in any AWS Region. However, you must use a URI format that identifies the AWS Region of the Amazon S3 bucket if it's different from the Region used by your Quick Sight account.

    URIs in the following formats are supported. For the list of supported URI formats, see [the AWS documentation website](http://docs.aws.amazon.com/quick/latest/userguide/supported-manifest-file-format.html).
  + **URIPrefixes** – Use this array to list URI prefixes for S3 buckets and folders. All files in a specified bucket or folder are imported. Quick Sight recursively retrieves files from child folders.

    Quick Sight can access Amazon S3 buckets or folders that are in any AWS Region. Make sure to use a URI prefix format that identifies the S3 bucket's AWS Region if it's different from the Region used by your Quick Sight account.

    URI prefixes in the following formats are supported. For the list of supported URI prefix formats, see [the AWS documentation website](http://docs.aws.amazon.com/quick/latest/userguide/supported-manifest-file-format.html).
+ **globalUploadSettings** – (Optional) Use this element to specify import settings for the Amazon S3 files, such as field delimiters. If this element is not specified, Quick Sight uses the default values for the fields in this section.
**Important**  
For log (.clf) and extended log (.elf) files, only the **format** field in this section is applicable, so you can skip the other fields. If you choose to include them, their values are ignored. 
  + **format** – (Optional) Specify the format of the files to be imported. Valid values are **CSV**, **TSV**, **CLF**, **ELF**, and **JSON**. The default value is **CSV**.
  + **delimiter** – (Optional) Specify the file field delimiter. This must map to the file type specified in the `format` field. Valid values are comma (**,**) for .csv files and tab (**\t**) for .tsv files. The default value is comma (**,**).
  + **textqualifier** – (Optional) Specify the file text qualifier. Valid values are single quote (**'**) and double quote (**\"**). The leading backslash is a required escape character for a double quote in JSON. The default value is double quote (**\"**). If your text doesn't need a text qualifier, don't include this property.
  + **containsHeader** – (Optional) Specify whether the file has a header row. Valid values are **true** or **false**. The default value is **true**.
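To sanity-check a manifest against the rules above before uploading it, a short script can help. The following Python sketch is illustrative, not a Quick Sight tool; it checks the field rules described in this section:

```python
import json

def validate_manifest(text):
    """Check a Quick Sight-format manifest against the documented rules.
    Returns a list of problems (empty if none were found)."""
    problems = []
    manifest = json.loads(text)
    locations = manifest.get("fileLocations", [])
    uris = [u for loc in locations for u in loc.get("URIs", [])]
    prefixes = [p for loc in locations for p in loc.get("URIPrefixes", [])]
    # At least one value is required in URIs or URIPrefixes.
    if not uris and not prefixes:
        problems.append("fileLocations must list at least one URI or URIPrefix")
    settings = manifest.get("globalUploadSettings", {})
    fmt = settings.get("format", "CSV")  # CSV is the documented default
    if fmt not in ("CSV", "TSV", "CLF", "ELF", "JSON"):
        problems.append("unsupported format: " + fmt)
    # For JSON imports, specify format but no text-file settings.
    if fmt == "JSON":
        for key in ("delimiter", "textqualifier", "containsHeader"):
            if key in settings:
                problems.append(key + " is not allowed when format is JSON")
    return problems
```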

### Manifest file examples for Quick Sight
<a name="quicksight-manifest-file-examples"></a>

The following are some examples of completed Quick Sight manifest files.

The following example shows a manifest file that identifies two specific .csv files for import. These files use double quotes for text qualifiers. The `format`, `delimiter`, and `containsHeader` fields are skipped because the default values are acceptable.

```
{
    "fileLocations": [
        {
            "URIs": [
                "https://yourBucket.s3.amazonaws.com/data-file.csv",
                "https://yourBucket.s3.amazonaws.com/data-file-2.csv"
            ]
        }
    ],
    "globalUploadSettings": {
        "textqualifier": "\""
    }
}
```

The following example shows a manifest file that identifies one specific .tsv file for import. This file also includes a bucket in another AWS Region that contains additional .tsv files for import. The `textqualifier` and `containsHeader` fields are skipped because the default values are acceptable.

```
{
    "fileLocations": [
        {
            "URIs": [
                "https://s3.amazonaws.com/amzn-s3-demo-bucket/data.tsv"
            ]
        },
        {
            "URIPrefixes": [
                "https://s3-us-east-1.amazonaws.com/amzn-s3-demo-bucket/"
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "TSV",
        "delimiter": "\t"
    }
}
```

The following example identifies two buckets that contain .clf files for import. One is in the same AWS Region as the Quick Sight account, and one is in a different AWS Region. The `delimiter`, `textqualifier`, and `containsHeader` fields are skipped because they are not applicable to log files.

```
{
    "fileLocations": [
        {
            "URIPrefixes": [
                "https://amzn-s3-demo-bucket1.your-s3-url.com",
                "s3://amzn-s3-demo-bucket2/"
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "CLF"
    }
}
```

The following example uses the Amazon Redshift format to identify a .csv file for import.

```
{
    "entries": [
        {
            "url": "https://amzn-s3-demo-bucket.your-s3-url.com/myalias-test/file-to-import.csv",
            "mandatory": true
        }
    ]
}
```

The following example uses the Quick Sight format to identify two JSON files for import.

```
{
    "fileLocations": [
        {
            "URIs": [
                "https://yourBucket.s3.amazonaws.com/data-file.json",
                "https://yourBucket.s3.amazonaws.com/data-file-2.json"
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "JSON"
    }
}
```
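If you generate manifests programmatically, a small helper can assemble the structure shown in these examples. The following Python sketch is illustrative; the function name and defaults are not part of Quick Sight:

```python
import json

def build_manifest(uris=(), prefixes=(), **settings):
    """Assemble a Quick Sight-format manifest like the examples above."""
    locations = []
    if uris:
        locations.append({"URIs": list(uris)})
    if prefixes:
        locations.append({"URIPrefixes": list(prefixes)})
    manifest = {"fileLocations": locations}
    if settings:  # e.g. format="TSV", delimiter="\t"
        manifest["globalUploadSettings"] = dict(settings)
    return json.dumps(manifest, indent=4)
```

For example, `build_manifest(uris=["s3://amzn-s3-demo-bucket/data.tsv"], format="TSV", delimiter="\t")` reproduces the .tsv example above.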

# Creating Amazon S3 datasets
<a name="create-a-data-set-s3-procedure"></a>

**To create an Amazon S3 dataset**

1. Check [Data source quotas](data-source-limits.md) to make sure that your target file set doesn't exceed data source quotas.

1. Create a manifest file to identify the text files that you want to import, using one of the formats specified in [Supported formats for Amazon S3 manifest files](supported-manifest-file-format.md).

1. Save the manifest file to a local directory, or upload it into Amazon S3.

1. On the Quick Sight start page, choose **Data**.

1. On the **Data** page, choose **Create** then **New dataset**.

1. Choose the Amazon S3 icon and then choose **Next**.

1. For **Data source name**, enter a description of the data source. This name should be something that helps you distinguish this data source from others.

1. For **Upload a manifest file**, do one of the following:
   + To use a local manifest file, choose **Upload**, and then choose **Upload a JSON manifest file**. For **Open**, choose a file, and then choose **Open**.
   + To use a manifest file from Amazon S3, choose **URL**, and enter the URL for the manifest file. To find the URL of a pre-existing manifest file in the Amazon S3 console, navigate to the appropriate file and choose it. A properties panel displays, including the link URL. You can copy the URL and paste it into Quick Sight.

1. Choose **Connect**.

1. To make sure that the connection is complete, choose **Edit/Preview data**. Otherwise, choose **Visualize** to create an analysis using the data as-is. 

   If you choose **Edit/Preview data**, you can specify a dataset name as part of preparing the data. Otherwise, the dataset name matches the name of the manifest file. 

   To learn more about data preparation, see [Preparing data in Amazon Quick Sight](preparing-data.md).

## Creating datasets based on multiple Amazon S3 files
<a name="data-sets-based-on-multiple-s3-files"></a>

You can use one of several methods to merge or combine files from Amazon S3 buckets inside Quick Sight:
+ **Combine files by using a manifest** – In this case, the files must have the same number of fields (columns). The data types must match between fields in the same position in the file. For example, the first field must have the same data type in each file. The same goes for the second field, and the third field, and so on. Quick Sight takes field names from the first file.

  The files must be listed explicitly in the manifest. However, they don't have to be inside the same Amazon S3 bucket.

  In addition, the files must follow the rules described in [Supported formats for Amazon S3 manifest files](supported-manifest-file-format.md).

  For more details about combining files using a manifest, see [Creating a dataset using Amazon S3 files](create-a-data-set-s3.md).
+ **Merge files without using a manifest** – To merge multiple files into one without having to list them individually in the manifest, you can use Athena. With this method, you can simply query your text files, like they are in a table in a database. For more information, see the post [Analyzing data in Amazon S3 using Athena](https://aws.amazon.com/blogs/big-data/analyzing-data-in-s3-using-amazon-athena/) in the Big Data blog. 
+ **Use a script to append files before importing** – You can use a script designed to combine your files before uploading. 
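The third method, appending files with a script before importing, can be sketched as follows. This example assumes .csv files that share the same header, matching the manifest rules above; the function name is illustrative:

```python
import csv

def append_csv_files(paths, out_path):
    """Concatenate .csv files that share the same header into one file."""
    header = None
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in paths:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                file_header = next(reader)
                if header is None:
                    header = file_header
                    writer.writerow(header)  # write the header once
                elif file_header != header:
                    raise ValueError(path + " has a different header")
                writer.writerows(reader)
```

You can then upload the single combined file, or list it in a manifest.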

# Datasets using S3 files in another AWS account
<a name="using-s3-files-in-another-aws-account"></a>

Use this section to learn how to set up security so you can use Quick Sight to access Amazon S3 files in another AWS account. 

For you to access files in another account, the owner of the other account must first set Amazon S3 to grant you permissions to read the file. Then, in Quick Sight, you must set up access to the buckets that were shared with you. After both of these steps are finished, you can use a manifest to create a dataset.

**Note**  
 To access files that are shared with the public, you don't need to set up any special security. However, you still need a manifest file.

**Topics**
+ [Setting up Amazon S3 to allow access from a different Quick account](#setup-S3-to-allow-access-from-a-different-quicksight-account)
+ [Setting up Quick Sight to access Amazon S3 files in another AWS account](#setup-quicksight-to-access-S3-in-a-different-account)

## Setting up Amazon S3 to allow access from a different Quick account
<a name="setup-S3-to-allow-access-from-a-different-quicksight-account"></a>

Use this section to learn how to set permissions in Amazon S3 files so they can be accessed by Quick Sight in another AWS account. 

For information on accessing another account's Amazon S3 files from your Quick Sight account, see [Setting up Quick Sight to access Amazon S3 files in another AWS account](#setup-quicksight-to-access-S3-in-a-different-account). For more information about S3 permissions, see [Managing access permissions to your Amazon S3 resources](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html) and [How do I set permissions on an object?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-object-permissions.html)

You can use the following procedure to set this access from the S3 console. Or you can grant permissions by using the AWS CLI or by writing a script. If you have a lot of files to share, you can instead create an S3 bucket policy that grants the `s3:GetObject` action. To use a bucket policy, add it to the bucket permissions, not to the file permissions. For information on bucket policies, see [Bucket policy examples](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) in the *Amazon S3 Developer Guide*.
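If you take the bucket policy route, a minimal policy might look like the following. The account ID and bucket name are placeholders; `s3:ListBucket` applies to the bucket ARN, and `s3:GetObject` applies to the objects in it.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```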

**To set access from a different Quick account from the S3 console**

1. Get the email address of the AWS account that you want to share with. Or you can get and use the canonical user ID. For more information on canonical user IDs, see [AWS account identifiers](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html) in the *AWS General Reference*.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Find the Amazon S3 bucket that you want to share with Quick Sight. Choose **Permissions**.

1. Choose **Add Account**, and then enter an email address, or paste in a canonical user ID, for the AWS account that you want to share with. This email address should be the primary one for the AWS account. 

1. Choose **Yes** for both **Read bucket permissions** and **List objects**.

   Choose **Save** to confirm.

1. Find the file that you want to share, and open the file's permission settings. 

1. Enter an email address or the canonical user ID for the AWS account that you want to share with. This email address should be the primary one for the AWS account. 

1. Enable **Read object** permissions for each file that Quick Sight needs access to. 

1. Notify the Quick Sight user that the files are now available for use.

## Setting up Quick Sight to access Amazon S3 files in another AWS account
<a name="setup-quicksight-to-access-S3-in-a-different-account"></a>

Use this section to learn how to set up Quick Sight so you can access Amazon S3 files in another AWS account. For information on allowing someone else to access your Amazon S3 files from their Quick account, see [Setting up Amazon S3 to allow access from a different Quick account](#setup-S3-to-allow-access-from-a-different-quicksight-account).

Use the following procedure to access another account's Amazon S3 files from Quick Sight. Before you can use this procedure, the users in the other AWS account must share the files in their Amazon S3 bucket with you.

**To access another account's Amazon S3 files from Quick Sight**

1. Verify that the user or users in the other AWS account gave your account read and write permission to the S3 bucket in question. 

1. Choose your profile icon, and then choose **Manage Quick Sight**.

1. Choose **Security & permissions**.

1. Under **Quick Sight access to AWS services**, choose **Manage**.

1. Choose **Select S3 buckets**.

1. On the **Select Amazon S3 buckets** screen, choose the **S3 buckets you can access across AWS** tab.

   The default tab is named **S3 buckets linked to Quick Sight account**. It shows all the buckets that your Quick Sight account has access to. 

1. Do one of the following:
   + To add all the buckets that you have permission to use, choose **Choose accessible buckets from other AWS accounts**. 
   + If you have one or more Amazon S3 buckets that you want to add, enter their names. Each must exactly match the unique name of the Amazon S3 bucket.

     If you don't have the appropriate permissions, you see the error message "We can't connect to this S3 bucket. Make sure that any S3 buckets you specify are associated with the AWS account used to create this Quick account." This error message appears if you don't have either account permissions or Quick Sight permissions.
**Note**  
To use Amazon Athena, Quick Sight needs to access the Amazon S3 buckets that Athena uses.   
You can add them here one by one, or use the **Choose accessible buckets from other AWS accounts** option.

1. Choose **Select buckets** to confirm your selection. 

1. Create a new dataset based on Amazon S3, and upload your manifest file. For more information about Amazon S3 datasets, see [Creating a dataset using Amazon S3 files](create-a-data-set-s3.md).

# Creating a data source using Apache Spark
<a name="create-a-data-source-spark"></a>

You can connect directly to Apache Spark using Quick Sight, or you can connect to Spark through Spark SQL. Using the results of queries, or direct links to tables or views, you create data sources in Quick Sight. You can either directly query your data through Spark, or you can import the results of your query into [SPICE](spice.md).

Before you use Quick Sight with Spark products, you must configure Spark for Quick Sight. 

Quick Sight requires your Spark server to be secured and authenticated using LDAP, which is available in Spark version 2.0 or later. If Spark is configured to allow unauthenticated access, Quick Sight refuses the connection to the server. To use Quick Sight as a Spark client, you must configure LDAP authentication to work with Spark. 

The Spark documentation contains information on how to set this up. To start, configure Spark to enable front-end LDAP authentication over HTTPS. For general information on Spark, see [the Apache Spark website](http://spark.apache.org/). For information specifically on Spark and security, see the [Spark security documentation](http://spark.apache.org/docs/latest/security.html). 

To make sure that you have configured your server for Quick Sight access, follow the instructions in [Network and database configuration requirements](configure-access.md).

# Using Databricks in Quick Sight
<a name="quicksight-databricks"></a>

Use this section to learn how to connect from Quick Sight to Databricks. 

**To connect to Databricks**

1. Begin by creating a new dataset. Choose **Data** from the navigation pane at left.

1. Choose **Create**, then choose **New dataset**.

1. Choose the **Databricks** data source card.

1. For **Data source name**, enter a descriptive name for your Databricks data source connection, for example `Databricks CS`. Because you can create many datasets from a connection to Databricks, it's best to keep the name simple.

1. For **Connection type**, select the type of network you're using. 
   + **Public network** – if your data is shared publicly.
   + **VPC** – if your data is inside a VPC. 
**Note**  
If you're using VPC, and you don't see it listed, check with your administrator. 

1.  For **Database server**, enter the **Hostname of workspace** specified in your Databricks connection details.

1.  For **HTTP Path**, enter the **Partial URL for the spark instance** specified in your Databricks connection details.

1.  For **Port**, enter the **port** specified in your Databricks connection details.

1.  For **Username** and **Password**, enter your connection credentials.

1.  To verify the connection is working, choose **Validate connection**.

1.  To finish and create the data source, choose **Create data source**.
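If you prefer to script this setup, the same connection can be created with the `CreateDataSource` API. The following is a minimal sketch that builds the request body for the AWS SDK for Python (boto3); the account ID, hostname, HTTP path, and credentials are placeholder assumptions, and the actual API call is shown commented out:

```python
def databricks_data_source_params(account_id, host, port, http_path, username, password):
    """Build a CreateDataSource request body for a Databricks connection."""
    return {
        "AwsAccountId": account_id,
        "DataSourceId": "databricks-cs",      # any unique ID
        "Name": "Databricks CS",
        "Type": "DATABRICKS",
        "DataSourceParameters": {
            "DatabricksParameters": {
                "Host": host,                 # Hostname of workspace
                "Port": port,                 # port from your connection details
                "SqlEndpointPath": http_path, # partial URL for the Spark instance
            }
        },
        "Credentials": {
            "CredentialPair": {"Username": username, "Password": password}
        },
    }

params = databricks_data_source_params(
    "111122223333", "dbc-example.cloud.databricks.com", 443,
    "/sql/1.0/warehouses/abc123", "user", "password")
# import boto3
# quicksight = boto3.client("quicksight")
# quicksight.create_data_source(**params)
```

After the data source is created, you can validate it from the console as described above.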

## Adding a new Quick Sight dataset for Databricks
<a name="quicksight-databricks-create-dataset"></a>

After you have an existing data source connection for Databricks data, you can create Databricks datasets to use for analysis. 

**To create a dataset using Databricks**

1. Choose **Data** at left, then scroll down to find the data source card for your Databricks connection. If you have many data sources, you can use the search bar at the top of the page to find your data source with a partial match on the name.

1. Choose the **Databricks** data source card, and then choose **Create data set**.

1. To specify the table you want to connect to, first select the Catalog and Schema you want to use. Then, for **Tables**, select the table that you want to use. If you prefer to use your own SQL statement, select **Use custom SQL**. 

1. Choose **Edit/Preview**.

1. (Optional) To add more data, use the following steps: 

   1. Choose **Add data** at top right.

   1. To connect to different data, choose **Switch data source**, and choose a different dataset. 

   1. Follow the UI prompts to finish adding data. 

   1. After adding new data to the same dataset, choose **Configure this join** (the two red dots). Set up a join for each additional table. 

   1. If you want to add calculated fields, choose **Add calculated field**. 

   1. To add a model from SageMaker AI, choose **Augment with SageMaker**. This option is only available in Quick Sight Enterprise edition.

   1. Clear the check box for any fields that you want to omit.

   1. Update any data types that you want to change.

1. When you are done, choose **Save** to save and close the dataset. 

## Quick Sight Administrator's guide to connecting Databricks
<a name="quicksight-databricks-administration-setup"></a>

You can use Amazon Quick Sight to connect to Databricks on AWS. You can connect to Databricks on AWS whether you signed up through AWS Marketplace or through the Databricks website. 

Before you can connect to Databricks, create or identify the existing resources that the connection requires. Use this section to help you gather the resources you need to connect from Quick Sight to Databricks.
+ To learn how to obtain your Databricks connection details, see [Databricks ODBC and JDBC connections](https://docs.databricks.com/integrations/jdbc-odbc-bi.html#get-server-hostname-port-http-path-and-jdbc-url). 
+ To learn how to obtain your Databricks credentials—personal access token or user name and password—for authentication, see [Authentication requirements](https://docs.databricks.com/integrations/bi/jdbc-odbc-bi.html#authentication-requirements) in the [Databricks documentation](https://docs.databricks.com/index.html). 

  To connect to a Databricks cluster, you need `Can Attach To` and `Can Restart` permissions. These permissions are managed in Databricks. For more information, see [Permission Requirements](https://docs.databricks.com/integrations/jdbc-odbc-bi.html#permission-requirements) in the [Databricks documentation](https://docs.databricks.com/index.html).
+ If you are setting up a private connection for Databricks, see [Connecting to a VPC with Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/working-with-aws-vpc.html) in the Quick Sight documentation to learn how to configure a VPC for use with Quick Sight. If the connection isn't visible, verify with a system administrator that the network has open [inbound endpoints for Amazon Route 53](https://docs.aws.amazon.com/quicksight/latest/user/vpc-route-53.html). Because the hostname of a Databricks workspace uses a public IP, the Route 53 security group needs DNS (TCP) and DNS (UDP) inbound and outbound rules to allow traffic on DNS port 53. An administrator needs to create a security group with two inbound rules: one for DNS (TCP) on port 53 to the VPC CIDR, and one for DNS (UDP) on port 53 to the VPC CIDR. 

  For Databricks-related details if you are using PrivateLink instead of a public connection, see [Enable AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) in the [Databricks documentation](https://docs.databricks.com/index.html). 
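The two inbound DNS rules described above can also be created programmatically. The following is a minimal sketch using the AWS SDK for Python (boto3); the security group ID and VPC CIDR are placeholder assumptions, and the actual API call is shown commented out:

```python
def dns_ingress_permissions(vpc_cidr):
    """Inbound DNS rules (TCP and UDP on port 53) scoped to the VPC CIDR."""
    return [
        {"IpProtocol": proto, "FromPort": 53, "ToPort": 53,
         "IpRanges": [{"CidrIp": vpc_cidr}]}
        for proto in ("tcp", "udp")
    ]

permissions = dns_ingress_permissions("10.0.0.0/16")
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=permissions)
```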

# Creating a dataset using Google BigQuery
<a name="quicksight-google-big-query"></a>

**Note**  
When Quick Sight uses and transfers information that is received from Google APIs, it adheres to the [Google API Services User Data Policy](https://developers.google.com/terms/api-services-user-data-policy).

Google BigQuery is a fully managed serverless data warehouse that customers use to manage and analyze their data. Google BigQuery customers use SQL to query their data without any infrastructure management.

## Creating a data source connection with Google BigQuery
<a name="quicksight-google-big-query-connect"></a>

**Prerequisites**

Before you start, make sure that you have the following. These are all required to create a data source connection with Google BigQuery:
+ **Project ID** – The project ID that is associated with your Google account. To find this, navigate to the Google Cloud console and choose the name of the project that you want to connect to Quick Sight. Copy the project ID that appears in the new window and record it for later use.
+ **Dataset Region** – The Google region that the Google BigQuery project exists in. To find the dataset region, navigate to the Google BigQuery console and choose **Explorer**. Locate and expand the project that you want to connect to, then choose the dataset that you want to use. The dataset region appears in the pop-up that opens.
+ **Google account login credentials** – The login credentials for your Google account. If you don't have this information, contact your Google account administrator.
+ **Google BigQuery Permissions** – To connect your Google account with Quick Sight, make sure that your Google account has the following permissions:
  + `BigQuery Job User` at the `Project` level.
  + `BigQuery Data Viewer` at the `Dataset` or `Table` level.
  + `BigQuery Metadata Viewer` at the `Project` level.

For information about how to retrieve the previous prerequisite information, see [Unlock the power of unified business intelligence with Google Cloud BigQuery and Quick Sight](https://aws.amazon.com/blogs/business-intelligence/unlock-the-power-of-unified-business-intelligence-with-google-cloud-bigquery-and-amazon-quicksight/).

Use the following procedure to connect your Quick account with your Google BigQuery data source.

**To create a new connection to a Google BigQuery data source from Quick Sight**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the left navigation pane, choose **Data**.

1. Choose **Create**, then choose **New Dataset**.

1. Choose the **Google BigQuery** tile.

1. Add the data source details that you recorded in the prerequisites section earlier:
   + **Data source name** – A name for the data source.
   + **Project ID** – A Google Platform project ID. This field is case sensitive.
   + **Dataset Region** – The Google cloud platform dataset region of the project that you want to connect to.

1. Choose **Sign in**.

1. In the new window that opens, enter the login credentials for the Google account that you want to connect to.

1. Choose **Continue** to grant Quick Sight access to Google BigQuery.

1. After you create the new data source connection, continue to [Step 4](#gbq-step-4) in the following procedure.
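For scripted setups, the connection properties above map to `BigQueryParameters` in the `CreateDataSource` API. The following is a minimal sketch of the request body (the account ID, project ID, and region are placeholder assumptions; authorization with your Google account is configured separately):

```python
def bigquery_data_source_params(account_id, project_id, dataset_region):
    """Build a CreateDataSource request body for Google BigQuery."""
    return {
        "AwsAccountId": account_id,
        "DataSourceId": "bigquery-example",  # any unique ID
        "Name": "BigQuery example",
        "Type": "BIGQUERY",
        "DataSourceParameters": {
            "BigQueryParameters": {
                "ProjectId": project_id,        # case sensitive
                "DataSetRegion": dataset_region,
            }
        },
    }

params = bigquery_data_source_params("111122223333", "my-gcp-project", "us-east1")
# import boto3
# boto3.client("quicksight").create_data_source(**params)
```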

## Adding a new Quick Sight dataset for Google BigQuery
<a name="quicksight-google-big-query-create"></a>

After you create a data source connection with Google BigQuery, you can create Google BigQuery datasets for analysis. Datasets that use Google BigQuery can only be stored in SPICE.

**To create a dataset using Google BigQuery**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the start page, choose **Data**.

1. Choose **Create**, then **New Dataset**.

1. Choose the **Google BigQuery** tile, and then choose **Create dataset**.

1. <a name="gbq-step-4"></a>For **Tables**, do one of the following:
   + Choose the table that you want to use.
   + Choose **Use custom SQL** to use your own personal SQL statement. For more information about using custom SQL in Quick Sight, see [Using SQL to customize data](adding-a-SQL-query.md).

1. Choose **Edit/Preview**.

1. (Optional) In the **Data prep** page that opens, you can add customizations to your data with calculated fields, filters, and joins.

1. When you are finished making changes, choose **Save** to save and close the dataset.

# Creating a dataset using a Google Sheets data source
<a name="create-a-dataset-google-sheets"></a>

Google Sheets is a web-based spreadsheet application that enables users to create, edit, and collaborate on data in real time. With its comprehensive set of functions and formulas, it serves as a powerful data source for business intelligence and analytics. Users can organize, analyze, and share insights efficiently, while its seamless collaboration features make it an ideal platform for teams working on data-driven projects.

## Admin configuration in Amazon Quick
<a name="google-sheets-admin-config"></a>

Amazon Quick administrators need to perform a one-time setup to enable Google Sheets as a data source. For detailed instructions and important considerations, see the blog post [Transform your Google Sheets data into powerful analytics with Amazon Quick Sight](https://aws.amazon.com//blogs/business-intelligence/transform-your-google-sheets-data-into-powerful-analytics-with-amazon-quicksight/).

## Creating a dataset using a Google Sheets data source
<a name="google-sheets-create-dataset"></a>

Use the following procedure to create a dataset using a Google Sheets data source.

**To create a dataset using a Google Sheets data source**

1. From the Quick start page, choose **Datasets**.

1. On the **Datasets** page, choose **New Dataset**.

1. Choose **Google Sheets**.

1. Enter a name for the data source, and then choose **Connect**.

1. When redirected to Google's sign-in page, do the following:

   1. Enter your Google account credentials, and then choose **Next**.

   1. Review the permissions to authorize your AWS account to connect with Google Sheets, and then choose **Continue**.

1. In the **Choose your table** menu, locate your data. The menu displays all folders, subfolders, sheets, and tabs from your Google account. To display the tabs, select a sheet from the displayed list.

1. Select the tab you want to work with.

1. Choose **Edit/Preview data** to navigate to the Data preparation page. Choose **Add data** to include any additional tabs.

1. Configure the join, and then select **Publish & visualize** to analyze your Google Sheets data with Quick Sight.

**Note**  
This connector supports only SPICE functionality.
If your OAuth token expires (visible in the ingestion error report or when creating a new dataset), reauthorize by choosing **Edit** on the data source and updating it.

# Creating a dataset using an Apache Impala data source
<a name="create-a-dataset-impala"></a>

Apache Impala is a high-performance massively parallel processing (MPP) SQL query engine designed to run natively on Apache Hadoop. Use the procedure below to establish a secure connection between Quick Sight and Apache Impala.

All traffic between Quick Sight and Apache Impala is encrypted using SSL. Quick Sight supports standard username and password authentication for Impala connections.

To establish a connection, you'll need to configure SSL settings in your Impala instance, prepare your authentication credentials, set up the connection in Quick Sight using your Impala server details, and validate the connection to ensure secure data access.

**To create a dataset using an Apache Impala data source**

1. On the Quick start page, choose **Data**.

1. On the **Data** page, choose **Create**.

1. Choose **Data source**.

1. Choose **Impala**, then choose **Next**.

1. Enter a name for the data source.

1. For public connections:

   1. Enter connection details for **Database server**, **HTTP Path**, **Port**, **Username**, and **Password**.

   1. Once the validation is successful, choose **Create data source**.

1. For private connections:

   1. Coordinate with your administrator to set up a VPC connection before entering connection details.

     You or your administrator can [configure the VPC connection in Quick](vpc-creating-a-connection-in-quicksight.md). SSL is enabled by default to ensure secure data transmission. If you encounter connection validation errors, verify your connection and VPC details.

     If issues persist, consult your administrator to confirm that your Certificate Authority is included in Quick Sight's [approved list of certificates](configure-access.md#ca-certificates).

1. In the **Choose your table** menu, you can either:

   1. Choose a specific schema or table, then choose **Select**.

   1. Choose **Use custom SQL** to write your own SQL query.

1. After completing your selection, you will be redirected to the data preparation page. Make any adjustments to your data, then choose **Publish & visualize** to analyze your Impala data in Quick Sight.

**Note**  
This connector supports:  
+ Username and password authentication
+ Public and private connections
+ Table discovery and custom SQL queries
+ Full data refresh during ingestion
+ SPICE storage only

# Creating a dataset using a Microsoft Excel file
<a name="create-a-data-set-excel"></a>

To create a dataset using a Microsoft Excel file data source, upload an .xlsx file from a local or networked drive. The data is imported into [SPICE](spice.md).

 For more information about creating new Amazon S3 datasets using Amazon S3 data sources, see [Creating a dataset using an existing Amazon S3 data source](create-a-data-set-existing.md#create-a-data-set-existing-s3) or [Creating a dataset using Amazon S3 files](create-a-data-set-s3.md). 

**To create a dataset based on an Excel file**

1. Check [Data source quotas](data-source-limits.md) to make sure that your target file doesn't exceed data source quotas.

1. On the Quick start page, choose **Data**.

1. On the **Data** page, choose **Create**, then **New dataset**.

1. Choose **Upload a file**.

1. In the **Open** dialog box, choose a file, and then choose **Open**.

   A file must be 1 GB or less to be uploaded to Quick Sight.

1. If the Excel file contains multiple sheets, choose the sheet to import. You can change this later by preparing the data. 

1. Choose **Select** to confirm your settings. Or you can choose **Edit/Preview data** to prepare the data immediately.
**Note**  
On the following screens, you have multiple chances to prepare the data. Each of these takes you to the **Prepare Data** screen. This is the same screen that you can access after the data import is complete. It enables you to change the upload settings even after the upload is complete.

   A preview of the data appears on the next screen. You can't make changes directly to the data preview. 

1. If the data headings and content don't look correct, choose **Edit settings and prepare data** to correct the file upload settings. 

   Otherwise, choose **Next**.

1. On the **Data Source Details** screen, you can choose **Edit/Preview data**. You can specify a dataset name in the **Prepare Data** screen. 

   If you don't need to prepare the data, you can choose to create an analysis using the data as-is. Choose **Visualize**. Doing this names the dataset the same as the source file, and takes you to the **Analysis** screen. To learn more about data preparation and Excel upload settings, see [Preparing data in Amazon Quick Sight](preparing-data.md).

**Note**  
If at any time you want to make changes to the file, such as adding a new field, you must make the change in Microsoft Excel and create a new dataset using the updated version in Quick Sight. For more information about possible implications of changing datasets, see [Things to consider when editing datasets](edit-a-data-set.md#change-a-data-set).
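Before uploading, you can pre-check a workbook against the limits described in this section. The following is a small sketch; the 1 GB limit and the .xlsx requirement come from this procedure, and the function takes the file size as an argument (for a file on disk, you could pass `os.path.getsize(path)`):

```python
MAX_UPLOAD_BYTES = 1 * 1024**3  # files must be 1 GB or less to upload

def check_excel_upload(filename, size_bytes):
    """Return a list of problems that would block the upload."""
    problems = []
    if not filename.lower().endswith(".xlsx"):
        problems.append("not an .xlsx file")
    if size_bytes > MAX_UPLOAD_BYTES:
        problems.append("larger than 1 GB")
    return problems
```

Also check the workbook against the other [Data source quotas](data-source-limits.md) before uploading.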

# Creating a data source using Presto
<a name="create-a-data-source-presto"></a>

Presto (or PrestoDB) is an open-source, distributed SQL query engine, designed for fast analytic queries against data of any size. It supports both nonrelational and relational data sources. Supported nonrelational data sources include the Hadoop Distributed File System (HDFS), Amazon S3, Cassandra, MongoDB, and HBase. Supported relational data sources include MySQL, PostgreSQL, Amazon Redshift, Microsoft SQL Server, and Teradata. 

For more information about Presto, see the following:
+ [Introduction to Presto](https://aws.amazon.com/big-data/what-is-presto/), a description of Presto on the AWS website.
+ [Creating a Presto cluster with Amazon Elastic MapReduce (EMR)](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-presto.html) in the *Amazon EMR Release Guide*.
+ For general information on Presto, see the [Presto documentation](https://trino.io/docs/current/).

The results of the queries that you run through the Presto query engine can be turned into Quick Sight datasets. Presto processes the analytic queries on the backend databases. Then it returns results to the Quick Sight client. You can directly query your data through Presto, or you can import the results of your query into SPICE. 

Before you use Quick Sight as a Presto client to run queries, make sure that you configure data source profiles. You need a data source profile in Quick Sight for each Presto data source that you want to access. Use the following procedure to create a connection to Presto.

**To create a new connection to a Presto data source from Amazon Quick Sight (console)**

1. On the Amazon Quick Sight start page, choose **Data** at left.

1. Choose **Create**, then **New dataset**. 

1. Choose the **Presto** tile. 
**Note**  
In most browsers, you can use Ctrl-F or Cmd-F to open a search box and enter **presto** to locate it. 

1. Add the settings for the new data source:
   + **Data source name** – Enter a descriptive name for your data source connection. This name appears in the **Existing data sources** section at the bottom of the **Data sets** screen. 
   + **Connection type** – Choose the connection type that you need to use to connect to Presto. 

     To connect through the public network, choose **Public network**. 

     If you use a public network, your Presto server must be secured and authenticated using Lightweight Directory Access Protocol (LDAP). For information on configuring Presto to use LDAP, see [LDAP authentication](https://trino.io/docs/current/security/ldap.html) in the Presto documentation. 

     To connect through a virtual private connection, choose the appropriate VPC name from the **VPC connections** list. 

     If your Presto server allows unauthenticated access, AWS requires that you connect to it securely by using a private VPC connection. For information on configuring a new VPC, see [Configuring VPC connections in Amazon Quick Sight](working-with-aws-vpc.md).
   + **Database server** – The name of the database server. 
   + **Port** – The port that the server uses to accept incoming connections from Amazon Quick Sight. 
   + **Catalog** – The name of the catalog that you want to use. 
   + **Authentication required** – (Optional) This option only appears if you choose a VPC connection type. If the Presto data source that you're connecting to doesn't require authentication, choose **No**. Otherwise, keep the default setting (**Yes**). 
   + **Username** – Enter a user name to use to connect to Presto. Quick Sight applies the same user name and password to all connections that use this data source profile. If you want to monitor Quick Sight separately from other accounts, create a Presto account for each Quick Sight data source profile. 

     The Presto account that you use needs to be able to access the database and run `SELECT` statements on at least one table. 
   + **Password** – The password to use with the Presto user name. Amazon Quick Sight encrypts all credentials that you use in a data source profile. For more information, see [Data encryption in Amazon Quick](data-encryption.md). 
   + **Enable SSL** – SSL is enabled by default. 

1. Choose **Validate connection** to test your settings.

1. After you validate your settings, choose **Create data source** to complete the connection.
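The same data source profile can be sketched as a `CreateDataSource` request for the AWS SDK for Python (boto3). The IDs, hostname, catalog, and credentials below are placeholder assumptions, and the actual API call is shown commented out:

```python
def presto_data_source_params(account_id, host, port, catalog, username, password):
    """Build a CreateDataSource request body for a Presto connection."""
    return {
        "AwsAccountId": account_id,
        "DataSourceId": "presto-example",  # any unique ID
        "Name": "Presto example",
        "Type": "PRESTO",
        "DataSourceParameters": {
            "PrestoParameters": {"Host": host, "Port": port, "Catalog": catalog}
        },
        "Credentials": {
            "CredentialPair": {"Username": username, "Password": password}
        },
        "SslProperties": {"DisableSsl": False},  # SSL stays enabled by default
    }

params = presto_data_source_params(
    "111122223333", "presto.example.com", 8446, "hive", "user", "password")
# import boto3
# boto3.client("quicksight").create_data_source(**params)
```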

# Using Snowflake with Amazon Quick Sight
<a name="connecting-to-snowflake"></a>

Snowflake is an AI data cloud platform that provides data solutions from data warehousing and collaboration to data science and generative AI. Snowflake is an [AWS Partner](https://partners.amazonaws.com/partners/001E000000d8qQcIAI/Snowflake) with multiple AWS accreditations that include AWS ISV Competencies in Generative AI, Machine Learning, Data and Analytics, and Retail.

Amazon Quick Sight offers two ways to connect to Snowflake: with your Snowflake login credentials or with OAuth client credentials. Use the following sections to learn about both methods of connection.

**Topics**
+ [Creating an Quick Sight data source connection to Snowflake with login credentials](#create-connection-to-snowflake)
+ [Creating an Quick Sight data source connection to Snowflake with OAuth client credentials](#create-connection-to-snowflake-oauth-credentials)

## Creating an Quick Sight data source connection to Snowflake with login credentials
<a name="create-connection-to-snowflake"></a>

Use this section to learn how to create a connection between Quick Sight and Snowflake with your Snowflake login credentials. All traffic between Quick Sight and Snowflake is encrypted using SSL.

**To create a connection between Quick Sight and Snowflake**

1. Open the [Quick console](https://quicksight.aws.amazon.com/).

1. From the left navigation pane, choose **Data**, then choose **Create**, then choose **New Dataset**.

1. Choose the **Snowflake** data source card.

1. In the pop up that appears, enter the following information:

   1. For **Data source name**, enter a descriptive name for your Snowflake data source connection. Because you can create many datasets from a connection to Snowflake, it's best to keep the name simple.

   1. For **Connection type**, choose the type of network that you're using. Choose **Public network** if your data is shared publicly. Choose **VPC** if your data is located inside a VPC. To configure a VPC connection in Quick Sight, see [Managing VPC connection in Amazon Quick](vpc-creating-a-connection-in-quicksight.md).

   1. For **Database server**, enter the hostname specified in your Snowflake connection details.

1. For **Database name** and **Warehouse**, enter the respective Snowflake database and warehouse that you want to connect to.

1. For **Username** and **Password**, enter your Snowflake credentials.

After you have successfully created a data source connection between your Quick Sight and Snowflake accounts, you can begin [Creating datasets](creating-data-sets.md) that contain Snowflake data.
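The steps above can also be sketched as a `CreateDataSource` request for the AWS SDK for Python (boto3). The account ID, hostname, and credentials are placeholder assumptions, and the actual API call is shown commented out:

```python
def snowflake_data_source_params(account_id, host, database, warehouse, username, password):
    """Build a CreateDataSource request body for a Snowflake connection."""
    return {
        "AwsAccountId": account_id,
        "DataSourceId": "snowflake-example",  # any unique ID
        "Name": "Snowflake example",
        "Type": "SNOWFLAKE",
        "DataSourceParameters": {
            "SnowflakeParameters": {
                "Host": host,            # hostname from your connection details
                "Database": database,
                "Warehouse": warehouse,
            }
        },
        "Credentials": {
            "CredentialPair": {"Username": username, "Password": password}
        },
    }

params = snowflake_data_source_params(
    "111122223333", "example.snowflakecomputing.com",
    "ANALYTICS", "COMPUTE_WH", "user", "password")
# import boto3
# boto3.client("quicksight").create_data_source(**params)
```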

## Creating an Quick Sight data source connection to Snowflake with OAuth client credentials
<a name="create-connection-to-snowflake-oauth-credentials"></a>

You can use OAuth client credentials to connect your Quick Sight account with Snowflake through the [Quick Sight APIs](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CreateDataSource.html). *OAuth* is a standard authorization protocol that is often utilized for applications that have advanced security requirements. When you connect to Snowflake with OAuth client credentials, you can create datasets that contain Snowflake data with the Quick Sight APIs and in the Quick Sight UI. For more information about configuring OAuth in Snowflake, see [Snowflake OAuth overview](https://docs.snowflake.com/en/user-guide/oauth-snowflake-overview).

Quick Sight supports the `client credentials` OAuth grant type. The client credentials grant is used to obtain an access token for machine-to-machine communication. This method is suitable for scenarios where a client needs to access resources that are hosted on a server without the involvement of a user.

In the client credentials flow of OAuth 2.0, there are several client authentication mechanisms that can be used to authenticate the client application with the authorization server. Quick Sight supports client credentials based OAuth for Snowflake for the following two mechanisms:
+ **Token (Client secrets-based OAuth)**: The secret-based client authentication mechanism is used with the client credentials grant flow to authenticate with the authorization server. This authentication scheme requires the `client_id` and `client_secret` of the OAuth client app to be stored in Secrets Manager.
+ **X509 (Client private key JWT-based OAuth)**: The X509 certificate key-based solution adds a security layer to the OAuth mechanism by using client certificates, instead of client secrets, to authenticate. This method is primarily used by private clients that authenticate with the authorization server when there is strong trust between the two services.

Quick Sight has validated OAuth connections with the following Identity providers:
+ OKTA
+ PingFederate

### Storing OAuth credentials in Secrets Manager
<a name="create-connection-to-snowflake-oauth-store-credentials"></a>

OAuth client credentials are meant for machine-to-machine use cases and are not designed to be interactive. To create a data source connection between Quick Sight and Snowflake, create a new secret in Secrets Manager that contains your credentials for the OAuth client app. The secret ARN that is created with the new secret can be used to create datasets that contain Snowflake data in Quick Sight. For more information about using Secrets Manager keys in Quick Sight, see [Using AWS Secrets Manager secrets instead of database credentials in Quick](secrets-manager-integration.md).

The credentials that you need to store in Secrets Manager are determined by the OAuth mechanism that you use. The following key/value pairs are required for X509-based OAuth secrets:
+ `username`: The Snowflake account username to be used when connecting to Snowflake
+ `client_id`: The OAuth client ID
+ `client_private_key`: The OAuth client private key
+ `client_public_key`: The OAuth client certificate public key and its algorithm (for example, `{"alg": "RS256", "kid": "cert_kid"}`)

The following key/value pairs are required for token-based OAuth secrets:
+ `username`: The Snowflake account username to be used when connecting to Snowflake
+ `client_id`: The OAuth client ID
+ `client_secret`: The OAuth client secret
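As a sketch, a token-based secret with the key/value pairs above can be assembled and stored programmatically. The secret name and values below are placeholders, and the Secrets Manager call is shown commented out:

```python
import json

def snowflake_oauth_token_secret(username, client_id, client_secret):
    """Assemble the key/value pairs required for a token-based OAuth secret."""
    return json.dumps({
        "username": username,          # Snowflake account username
        "client_id": client_id,        # OAuth client ID
        "client_secret": client_secret # OAuth client secret
    })

secret_string = snowflake_oauth_token_secret(
    "qs_user", "my-client-id", "my-client-secret")
# import boto3
# secretsmanager = boto3.client("secretsmanager")
# secretsmanager.create_secret(
#     Name="snowflake-oauth-client", SecretString=secret_string)
```

The ARN of the resulting secret is what you pass as `SecretArn` when creating the data source.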

### Creating a Snowflake OAuth connection with the Quick Sight APIs
<a name="create-connection-to-snowflake-oauth-example"></a>

After you create a secret in Secrets Manager that contains your Snowflake OAuth credentials and have connected your Quick account to Secrets Manager, you can establish a data source connection between Quick Sight and Snowflake with the Quick Sight APIs and SDK. The following example creates a Snowflake data source connection using token-based OAuth client credentials.

```
{
    "AwsAccountId": "AWSACCOUNTID",
    "DataSourceId": "UNIQUEDATASOURCEID",
    "Name": "NAME",
    "Type": "SNOWFLAKE",
    "DataSourceParameters": {
        "SnowflakeParameters": {
            "Host": "HOSTNAME",
            "Database": "DATABASENAME",
            "Warehouse": "WAREHOUSENAME",
            "AuthenticationType": "TOKEN",
            "DatabaseAccessControlRole": "snowflake-db-access-role-name",
            "OAuthParameters": {
                "TokenProviderUrl": "oauth-access-token-endpoint",
                "OAuthScope": "oauth-scope",
                "IdentityProviderResourceUri": "resource-uri",
                "IdentityProviderVpcConnectionProperties": {
                    "VpcConnectionArn": "IdP-VPC-connection-ARN"
                }
            }
        }
    },
    "VpcConnectionProperties": {
        "VpcConnectionArn": "VPC-connection-ARN-for-Snowflake"
    },
    "Credentials": {
        "SecretArn": "oauth-client-secret-ARN"
    }
}
```

For more information about the `CreateDataSource` API operation, see [CreateDataSource](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CreateDataSource.html).

Once the connection between Quick Sight and Snowflake is established and a data source is created with the Quick Sight APIs or SDK, the new data source is displayed in Quick Sight. Quick Sight authors can use this data source to create datasets that contain Snowflake data. Tables are displayed based on the role used in the `DatabaseAccessControlRole` parameter that is passed in a `CreateDataSource` API call. If this parameter is not defined when the data source connection is created, the default Snowflake role is used.

After you have successfully created a data source connection between your Quick Sight and Snowflake accounts, you can begin [Creating datasets](creating-data-sets.md) that contain Snowflake data.

# Using Starburst with Amazon Quick Sight
<a name="connecting-to-starburst"></a>

Starburst is a full-featured data lake analytics service built on top of Trino, a massively parallel processing (MPP) query engine. Use this section to learn how to connect from Amazon Quick Sight to Starburst. All traffic between Quick Sight and Starburst is encrypted using SSL. If you're connecting to Starburst Galaxy, you can get the necessary connection details by logging in to your Starburst Galaxy account and choosing **Partner Connect**, then **Quick Sight**. There you can see information such as the hostname and port. Amazon Quick Sight supports basic username and password authentication to Starburst.

Quick Sight offers two ways to connect to Starburst: with your Starburst login credentials or with OAuth client credentials. Use the following sections to learn about both methods of connection.

**Topics**
+ [Creating an Quick Sight data source connection to Starburst with login credentials](#create-connection-to-starburst)
+ [Creating an Quick Sight data source connection to Starburst with OAuth client credentials](#create-connection-to-starburst-oauth)

## Creating an Quick Sight data source connection to Starburst with login credentials
<a name="create-connection-to-starburst"></a>

1. Begin by creating a new dataset. From the left navigation pane, choose **Data**, then choose **Create**, then choose **New Dataset**.

1. Choose the **Starburst** data source card.

1. Select the Starburst product type. Choose **Starburst Enterprise** for on-premises Starburst instances. Choose **Starburst Galaxy** for managed instances.

1. For **Data source name**, enter a descriptive name for your Starburst data source connection. Because you can create many datasets from a connection to Starburst, it's best to keep the name simple.

1. For **Connection type**, select the type of network you're using. Choose **Public network** if your data is shared publicly. Choose **VPC** if your data is inside a VPC. To configure a VPC connection in Amazon Quick Sight, see [Configuring the VPC connection in Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/vpc-creating-a-connection-in-quicksight.html). The **VPC** connection type is not available for Starburst Galaxy.

1. For **Database server**, enter the hostname specified in your Starburst connection details.

1. For **Catalog**, enter the catalog specified in your Starburst connection details.

1. For **Port**, enter the port specified in your Starburst connection details. Defaults to 443 for Starburst Galaxy.

1. For **Username** and **Password**, enter your Starburst connection credentials.

1. To verify the connection is working, choose **Validate connection**.

1. To finish and create the data source, choose **Create data source**.

**Note**  
Connectivity between Amazon Quick Sight and Starburst was validated using Starburst version 420.

After you have successfully created a data source connection between your Quick Sight and Starburst accounts, you can begin [Creating datasets](creating-data-sets.md) that contain Starburst data.
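If you prefer to automate this setup, the same connection can be created through the `CreateDataSource` API operation. The following sketch builds a request payload with placeholder values (the account ID, host, catalog, and credentials are hypothetical) and writes it to a file that you could pass to the AWS CLI:

```python
import json

# Hypothetical values -- replace with your own account and Starburst details.
params = {
    "AwsAccountId": "111122223333",
    "DataSourceId": "starburst-example",
    "Name": "My Starburst connection",
    "Type": "STARBURST",
    "DataSourceParameters": {
        "StarburstParameters": {
            "Host": "example.galaxy.starburst.io",  # Database server
            "Port": 443,                            # Starburst Galaxy default
            "Catalog": "sample",
            "ProductType": "GALAXY",                # or ENTERPRISE for on-premises
        }
    },
    "Credentials": {
        "CredentialPair": {
            "Username": "starburst-user",
            "Password": "starburst-password",
        }
    },
}

# Save the payload, then create the data source with the AWS CLI:
#   aws quicksight create-data-source --cli-input-json file://starburst-ds.json
with open("starburst-ds.json", "w") as f:
    json.dump(params, f, indent=2)
```

This mirrors the console steps above: the host, port, catalog, and product type correspond to the **Database server**, **Port**, **Catalog**, and product-type fields, and the credential pair corresponds to the **Username** and **Password** fields.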

## Creating a Quick Sight data source connection to Starburst with OAuth client credentials
<a name="create-connection-to-starburst-oauth"></a>

You can use OAuth client credentials to connect your Quick Sight account with Starburst through the [Quick Sight APIs](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CreateDataSource.html). *OAuth* is a standard authorization protocol that is often used for applications that have advanced security requirements. When you connect to Starburst with OAuth client credentials, you can create datasets that contain Starburst data with the Quick Sight APIs and in the Quick Sight UI. For more information about configuring OAuth in Starburst, see [OAuth 2.0 authentication](https://docs.starburst.io/latest/security/oauth2.html).

Quick Sight supports the `client_credentials` OAuth grant type. The client credentials grant is used to obtain an access token for machine-to-machine communication. This method is suitable for scenarios where a client needs to access resources that are hosted on a server without the involvement of a user.

In the client credentials flow of OAuth 2.0, there are several client authentication mechanisms that can be used to authenticate the client application with the authorization server. Quick Sight supports client credentials-based OAuth for Starburst with the following two mechanisms:
+ **Token (Client secret-based OAuth)**: The secret-based client authentication mechanism is used with the client credentials grant flow to authenticate with the authorization server. This authentication scheme requires the `client_id` and `client_secret` of the OAuth client app to be stored in Secrets Manager.
+ **X509 (Client private key JWT-based OAuth)**: The X509 certificate key-based mechanism adds a security layer to OAuth by using client certificates, instead of client secrets, to authenticate. This method is primarily used by private clients that require strong trust between the two services.
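To make the token mechanism concrete, the client credentials grant reduces to a single POST to the identity provider's token endpoint. The sketch below uses only the Python standard library, and every value in it (endpoint, client ID, secret, and scope) is a hypothetical placeholder:

```python
import urllib.parse

# Hypothetical OAuth client app values -- in Quick Sight, these are stored
# in Secrets Manager rather than in application code.
client_id = "my-oauth-client-id"
client_secret = "my-oauth-client-secret"
token_url = "https://idp.example.com/oauth2/token"  # provider token endpoint

# The client credentials grant: no user is involved. The client app
# authenticates as itself and receives a machine-to-machine access token.
body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": "starburst",  # hypothetical scope
})

# Sending the request would look roughly like this:
#   req = urllib.request.Request(
#       token_url, data=body.encode(),
#       headers={"Content-Type": "application/x-www-form-urlencoded"})
#   token = json.load(urllib.request.urlopen(req))["access_token"]
print(body)
```

Quick Sight performs this exchange on your behalf; you supply the endpoint through the `TokenProviderUrl` parameter and the client credentials through the secret in Secrets Manager.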

Quick Sight has validated OAuth connections with the following identity providers:
+ Okta
+ PingFederate

### Storing OAuth credentials in Secrets Manager
<a name="create-connection-to-starburst-oauth-store-credentials"></a>

OAuth client credentials are meant for machine-to-machine use cases and are not designed to be interactive. To create a data source connection between Quick Sight and Starburst, create a new secret in Secrets Manager that contains your credentials for the OAuth client app. The ARN of the new secret can then be used to create datasets that contain Starburst data in Quick Sight. For more information about using Secrets Manager secrets in Quick Sight, see [Using AWS Secrets Manager secrets instead of database credentials in Quick Sight](secrets-manager-integration.md).

The credentials that you need to store in Secrets Manager are determined by the OAuth mechanism that you use. The following key/value pairs are required for X509-based OAuth secrets:
+ `username`: The Starburst account username to be used when connecting to Starburst
+ `client_id`: The OAuth client ID
+ `client_private_key`: The OAuth client private key
+ `client_public_key`: The OAuth client certificate public key and its encryption algorithm (for example, `{"alg": "RS256", "kid": "cert_kid"}`)

The following key/value pairs are required for token-based OAuth secrets:
+ `username`: The Starburst account username to be used when connecting to Starburst
+ `client_id`: The OAuth client ID
+ `client_secret`: The OAuth client secret
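As an illustration, the following sketch assembles the token-based secret value and shows (in comments) how it might be stored with the AWS CLI. The username, client ID, and client secret shown are hypothetical placeholders:

```python
import json

# Hypothetical credentials for a token-based (client secret) OAuth secret.
secret_value = {
    "username": "starburst-user",               # Starburst account username
    "client_id": "my-oauth-client-id",          # OAuth client ID
    "client_secret": "my-oauth-client-secret",  # OAuth client secret
}

secret_string = json.dumps(secret_value)

# One way to store it, using the AWS CLI:
#   aws secretsmanager create-secret \
#       --name starburst-oauth-credentials \
#       --secret-string "$SECRET_STRING"
# The ARN returned by create-secret is the value you later pass as
# Credentials.SecretArn when creating the data source.
print(secret_string)
```

For an X509-based secret, you would store the four X509 keys listed above instead of `client_secret`.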

### Creating a Starburst OAuth connection with the Quick Sight APIs
<a name="create-connection-to-starburst-oauth-example"></a>

After you create a secret in Secrets Manager that contains your Starburst OAuth credentials and have connected your Quick Sight account to Secrets Manager, you can establish a data source connection between Quick Sight and Starburst with the Quick Sight APIs and SDK. The following example creates a Starburst data source connection using token-based OAuth client credentials.

```
{
    "AwsAccountId": "AWSACCOUNTID",
    "DataSourceId": "DATASOURCEID",
    "Name": "NAME",
    "Type": "STARBURST",
    "DataSourceParameters": {
        "StarburstParameters": {
            "Host": "STARBURST_HOST_NAME",
            "Port": "STARBURST_PORT",
            "Catalog": "STARBURST_CATALOG",
            "ProductType": "STARBURST_PRODUCT_TYPE",
            "AuthenticationType": "TOKEN",
            "DatabaseAccessControlRole": "starburst-db-access-role-name",
            "OAuthParameters": {
                "TokenProviderUrl": "oauth-access-token-endpoint",
                "OAuthScope": "oauth-scope",
                "IdentityProviderResourceUri": "resource-uri",
                "IdentityProviderVpcConnectionProperties": {
                    "VpcConnectionArn": "IdP-VPC-connection-ARN"
                }
            }
        }
    },
    "VpcConnectionProperties": {
        "VpcConnectionArn": "VPC-connection-ARN-for-Starburst"
    },
    "Credentials": {
        "SecretArn": "oauth-client-secret-ARN"
    }
}
```

For more information about the `CreateDataSource` API operation, see [CreateDataSource](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CreateDataSource.html).

Once the connection between Quick Sight and Starburst is established and a data source is created with the Quick Sight APIs or SDK, the new data source is displayed in Quick Sight. Quick Sight authors can use this data source to create datasets that contain Starburst data. Tables are displayed based on the role used in the `DatabaseAccessControlRole` parameter that is passed in a `CreateDataSource` API call. If this parameter is not defined when the data source connection is created, the default Starburst role is used.

After you have successfully created a data source connection between your Quick Sight and Starburst accounts, you can begin [Creating datasets](creating-data-sets.md) that contain Starburst data.

# Creating a data source and data set from SaaS sources
<a name="connecting-to-saas-data-sources"></a>

To analyze and report on data from software as a service (SaaS) applications, you can use SaaS connectors to access your data directly from Quick Sight. The SaaS connectors simplify accessing third-party application sources using OAuth, without any need to export the data to an intermediate data store.

You can use either a cloud-based or server-based instance of a SaaS application. To connect to a SaaS application that is running on your corporate network, make sure that Quick Sight can access the application's Domain Name System (DNS) name over the network. If Quick Sight can't access the SaaS application, it generates an unknown host error.

Here are examples of some ways that you can use SaaS data:
+ Engineering teams who use Jira to track issues and bugs can report on developer efficiency and bug burndown. 
+ Marketing organizations can integrate Quick Sight with Adobe Analytics to build consolidated dashboards to visualize their online and web marketing data.

Use the following procedure to create a data source and dataset by connecting to sources available through Software as a Service (SaaS). In this procedure, we use a connection to GitHub as an example. Other SaaS data sources follow the same process, although the screens—especially the SaaS screens—might look different.

**To create a data source and dataset by connecting to sources through SaaS**

1. On the Quick start page, choose **Data**.

1. On the **Data** page, choose **Create**, then choose **New dataset**.

1. Choose the icon that represents the SaaS source that you want to use. For example, you might choose Adobe Analytics or GitHub.

   For sources using OAuth, the connector takes you to the SaaS site to authorize the connection before you can create the data source. 

1. Enter a name for the data source. If there are more screen prompts, enter the appropriate information. Then choose **Create data source**.

1. If you are prompted to do so, enter your credentials on the SaaS login page.

1. When prompted, authorize the connection between your SaaS data source and Quick Sight.

   For example, GitHub asks you to authorize Quick Sight to access your GitHub account.
**Note**  
Quick Sight documentation is now available on GitHub. If you want to make changes to this user guide, you can use GitHub to edit it directly.

   (Optional) If your SaaS account is part of an organizational account, you might be asked to request organization access as part of authorizing Quick Sight. If you want to do this, follow the prompts on your SaaS screen, then choose to authorize Quick Sight.

1. After authorization is complete, choose a table or object to connect to. Then choose **Select**.

1. On the **Finish data set creation** screen, choose one of these options:
   + To save the data source and dataset, choose **Edit/Preview data**. Then choose **Save** from the top menu bar.
   + To create a dataset and an analysis using the data as-is, choose **Visualize**. This option automatically saves the data source and the dataset.

     You can also choose **Edit/Preview data** to prepare the data before creating an analysis. This opens the data preparation screen. For more information about data preparation, see [Preparing dataset examples](preparing-data-sets.md).

The following constraints apply:
+ The SaaS source must support REST API operations for Quick Sight to connect to it.
+ If you are connecting to Jira, the URL must be a public address.
+ If you don't have enough [SPICE](spice.md) capacity, choose **Edit/Preview data**. In the data preparation screen, you can remove fields from the dataset to decrease its size or apply a filter that reduces the number of rows returned. For more information about data preparation, see [Preparing dataset examples](preparing-data-sets.md).

# Creating a dataset from Salesforce
<a name="create-a-data-set-salesforce"></a>

Use the following procedure to create a dataset by connecting to Salesforce and selecting a report or object to provide data.

**To create a dataset using Salesforce from a report or object**

1. Check [Data source quotas](data-source-limits.md) to make sure that your target report or object doesn't exceed data source quotas.

1. On the Quick start page, choose **Data**.

1. On the **Data** page, choose **Create** then **New dataset**.

1. Choose the **Salesforce** icon.

1. Enter a name for the data source and then choose **Create data source**.

1. On the Salesforce login page, enter your Salesforce credentials.

1. For **Data elements: contain your data**, choose **Select** and then choose either **REPORT** or **OBJECT**.
**Note**  
Joined reports aren't supported as Quick Sight data sources.

1. Choose one of the following options:
   + To prepare the data before creating an analysis, choose **Edit/Preview data** to open data preparation. For more information about data preparation, see [Preparing dataset examples](preparing-data-sets.md).
   + Otherwise, choose a report or object and then choose **Select**.

1. Choose one of the following options:
   + To create a dataset and an analysis using the data as-is, choose **Visualize**.
**Note**  
If you don't have enough [SPICE](spice.md) capacity, choose **Edit/Preview data**. In data preparation, you can remove fields from the dataset to decrease its size or apply a filter that reduces the number of rows returned. For more information about data preparation, see [Preparing dataset examples](preparing-data-sets.md).
   + To prepare the data before creating an analysis, choose **Edit/Preview data** to open data preparation for the selected report or object. For more information about data preparation, see [Preparing dataset examples](preparing-data-sets.md).

**Note**  
The Salesforce connector is not supported in embedded console deployments where users authenticate through namespace isolation. The OAuth authentication flow requires direct Amazon Quick Sight console access to complete the sign-in process.

# Using Trino with Amazon Quick Sight
<a name="connecting-to-trino"></a>

Trino is a massively parallel processing (MPP) query engine built to quickly query data lakes that hold petabytes of data. Use this section to learn how to connect from Amazon Quick Sight to Trino. All traffic between Amazon Quick Sight and Trino is secured with SSL. Amazon Quick Sight supports basic username and password authentication to Trino.

## Creating a data source connection for Trino
<a name="create-connection-to-trino"></a>

1. Begin by creating a new dataset. From the left navigation pane, choose **Data**. Choose **Create** then **New Dataset**.

1. Choose the **Trino** data source card.

1. For **Data source name**, enter a descriptive name for your Trino data source connection. Because you can create many datasets from a connection to Trino, it's best to keep the name simple.

1. For **Connection type**, select the type of network you're using. Choose **Public network** if your data is shared publicly. Choose **VPC** if your data is inside a VPC. To configure a VPC connection in Amazon Quick Sight, see [Configuring the VPC connection in Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/vpc-creating-a-connection-in-quicksight.html).

1. For **Database server**, enter the hostname specified in your Trino connection details.

1. For **Catalog**, enter the catalog specified in your Trino connection details.

1. For **Port**, enter the port specified in your Trino connection details.

1. For **Username** and **Password**, enter your Trino connection credentials.

1. To verify the connection is working, choose **Validate connection**.

1. To finish and create the data source, choose **Create data source**.
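As with Starburst, this connection can also be scripted through the `CreateDataSource` API operation. The following sketch builds a request payload for Trino with hypothetical host, port, catalog, and credential values:

```python
import json

# Hypothetical values -- replace with your account and Trino connection details.
params = {
    "AwsAccountId": "111122223333",
    "DataSourceId": "trino-example",
    "Name": "My Trino connection",
    "Type": "TRINO",
    "DataSourceParameters": {
        "TrinoParameters": {
            "Host": "trino.example.com",  # Database server
            "Port": 8443,
            "Catalog": "hive",
        }
    },
    "Credentials": {
        "CredentialPair": {"Username": "trino-user", "Password": "trino-password"}
    },
}

# Save as trino-ds.json and run:
#   aws quicksight create-data-source --cli-input-json file://trino-ds.json
print(json.dumps(params, indent=2))
```

The host, port, and catalog correspond to the **Database server**, **Port**, and **Catalog** fields in the console procedure above.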

## Adding a new Amazon Quick Sight dataset for Trino
<a name="create-dataset-using-trino"></a>

After you complete the [data source creation process](#create-connection-to-trino) for Trino, you can create Trino datasets to use for analysis. You can create new datasets from a new or an existing Trino data source. When you create a new data source, Amazon Quick Sight immediately takes you to dataset creation, which is step 3 below. If you're using an existing data source to create a new dataset, start from step 1 below.

To create a dataset using a Trino data source, see the following steps.

1. From the start page, choose **Data**. Choose **Create** then **New dataset**.

1. Choose the Trino data source you created.

1. Choose **Create data set**.

1. To specify the table that you want to connect to, first select the **Schema** that you want to use. Then, for **Tables**, choose the table that you want to use. If you prefer to use your own SQL statement, select **Use custom SQL**.

1. Choose **Edit/Preview**.

1. (Optional) To add more data, use the following steps:

   1. Choose **Add data** at top right.

   1. To connect to different data, choose **Switch data source**, and choose a different dataset.

   1. Follow the prompts to finish adding data.

   1. After adding new data to the same dataset, choose **Configure this join** (the two red dots). Set up a join for each additional table.

   1. If you want to add calculated fields, choose **Add calculated field**.

   1. Clear the check box for any fields that you want to omit.

   1. Update any data types that you want to change.

1. When you are done, choose **Save** to save and close the dataset.

**Note**  
Connectivity between Quick Sight and Trino was validated using Trino version 410.

# Creating a dataset using a local text file
<a name="create-a-data-set-file"></a>

To create a dataset using a local text file data source, identify the location of the file, and then upload it. The file data is automatically imported into [SPICE](spice.md) as part of creating a dataset. 

**To create a dataset based on a local text file**

1. Check [Data source quotas](data-source-limits.md) to make sure that your target file doesn't exceed data source quotas.

   Supported file types include .csv, .tsv, .json, .clf, and .elf files.

1. On the Quick start page, choose **Data**.

1. Choose **Create**, then **New dataset**.

1. Choose **Upload a file**.

1. In the **Open** dialog box, browse to a file, select it, and then choose **Open**.

   A file must be 1 GB or less to be uploaded to Quick Sight.

1. To prepare the data before creating the dataset, choose **Edit/Preview data**. Otherwise, choose **Visualize** to create an analysis using the data as-is. 

   If you choose the former, you can specify a dataset name as part of preparing the data. If you choose the latter, a dataset with the same name as the source file is created. To learn more about data preparation, see [Preparing data in Amazon Quick Sight](preparing-data.md).

# Using Amazon Timestream data with Amazon Quick Sight
<a name="using-data-from-timestream"></a>

Following, you can find how to connect to your Amazon Timestream data using Amazon Quick Sight. For a brief overview, see the [Getting started with Amazon Timestream and Amazon QuickSight](https://youtu.be/TzW4HWl-L8s) video tutorial on YouTube. 

## Creating a new Amazon Quick Sight data source connection for a Timestream database
<a name="create-connection-to-timestream"></a>

Following, you can find how to connect to Amazon Timestream from Amazon Quick Sight.

Before you proceed, Amazon Quick Sight must be authorized to connect to Amazon Timestream. If connections aren't enabled, you get an error when you try to connect. A Quick Sight administrator can authorize connections to AWS resources. To authorize a connection, open the menu by choosing your profile icon at top right. Choose **Manage QuickSight**, **Security & permissions**, **Add or remove**. Then select the check box for Amazon Timestream and choose **Update** to confirm. For more information, see [Configuring Amazon Quick Sight access to AWS data sources](access-to-aws-resources.md).

**To connect to Amazon Timestream**

1. Begin by creating a new dataset. Choose **Data** from the navigation pane at left. 

1. Choose **Create** then **New Dataset**.

1. Choose the Timestream data source card.

1. For **Data source name**, enter a descriptive name for your Timestream data source connection, for example `US Timestream Data`. Because you can create many datasets from a connection to Timestream, it's best to keep the name simple.

1. Choose **Validate connection** to check that you can successfully connect to Timestream.

1. Choose **Create data source** to proceed.

1. For **Database**, choose **Select** to view the list of available options. 

1. Choose the one you want to use, then choose **Select** to continue. 

1. Do one of the following:
   + To import your data into Quick Sight's in-memory engine (called SPICE), choose **Import to SPICE for quicker analytics**. 
   + To allow Quick Sight to run a query against your data each time you refresh the dataset or use the analysis or dashboard, choose **Directly query your data**. 

   If you want to enable autorefresh on a published dashboard that uses Timestream data, the Timestream dataset needs to use a direct query.

1. Choose **Edit/Preview** and then **Save** to save your dataset and close it.

1. Repeat these steps for the number of concurrent direct connections to Timestream that you want to open in a dataset. For example, let's say you want to use four tables in a Quick Sight dataset. Currently, Quick Sight datasets connect to only one table at a time from a Timestream data source. To use four tables in the same dataset, you need to add four data source connections in Quick Sight. 

## Managing permissions for Timestream data
<a name="dataset-permissions-for-timestream"></a>

The following procedure describes how to view, add, and revoke permissions to allow access to the same Timestream data source. The people that you add need to be active users in Quick Sight before you can add them. 

**To edit permissions on a dataset**

1. Choose **Data** at left, then scroll down to find the dataset for your Timestream connection. An example might be `US Timestream Data`.

1. Choose the **Timestream** dataset to open it.

1. On the dataset details page that opens, choose the **Permissions** tab.

   A list of current permissions appears.

1. To add permissions, choose **Add users & groups**, then follow these steps:

   1. Add users or groups to allow them to use the same dataset.

   1. When you're finished adding everyone that you want to add, choose the **Permissions** that you want to apply to them.

1. (Optional) To edit permissions, you can choose **Viewer** or **Owner**. 
   + Choose **Viewer** to allow read access.
   + Choose **Owner** to allow that user to edit, share, or delete this Quick Sight data source. 

1. (Optional) To revoke permissions, choose **Revoke access**. After you revoke someone's access, they can't create, edit, share, or delete the dataset.

1. When you are finished, choose **Close**.
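The same permission changes can be scripted with the `UpdateDataSetPermissions` API operation. The following sketch builds a request that grants read-only access; the account ID, dataset ID, user ARN, and the exact action list are hypothetical placeholders, so check the API reference for the actions your scenario needs:

```python
import json

# A hypothetical read-only ("Viewer") action set for a dataset.
viewer_actions = [
    "quicksight:DescribeDataSet",
    "quicksight:DescribeDataSetPermissions",
    "quicksight:PassDataSet",
    "quicksight:DescribeIngestion",
    "quicksight:ListIngestions",
]

params = {
    "AwsAccountId": "111122223333",
    "DataSetId": "us-timestream-data",
    "GrantPermissions": [
        {
            # Hypothetical Quick Sight user ARN.
            "Principal": "arn:aws:quicksight:us-east-1:111122223333:user/default/data-analyst",
            "Actions": viewer_actions,
        }
    ],
}

# Save as permissions.json and apply with:
#   aws quicksight update-data-set-permissions --cli-input-json file://permissions.json
print(json.dumps(params, indent=2))
```

To revoke access instead, you would list the same principal under `RevokePermissions` rather than `GrantPermissions`.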

## Adding a new Quick Sight dataset for Timestream
<a name="create-dataset-using-timestream"></a>

After you have an existing data source connection for Timestream data, you can create Timestream datasets to use for analysis. 

Currently, you can use a Timestream connection only for a single table in a dataset. To add data from multiple Timestream tables in a single dataset, create an additional Quick Sight data source connection for each table.

**To create a dataset using Amazon Timestream**

1. Choose **Data** at left, then scroll down to find the data source card for your Timestream connection. If you have many data sources, you can use the search bar at the top of the page to find your data source with a partial match on the name.

1. Choose the **Timestream** data source card, and then choose **Create data set**.

1. For **Database**, choose **Select** to view a list of available databases and choose the one that you want to use.

1. For **Tables**, choose the table that you want to use.

1. Choose **Edit/Preview**.

1. (Optional) To add more data, use the following steps: 

   1. Choose **Add data** at top right.

   1. To connect to different data, choose **Switch data source**, and choose a different dataset. 

   1. Follow the UI prompts to finish adding data. 

   1. After adding new data to the same dataset, choose **Configure this join** (the two red dots). Set up a join for each additional table.

   1. If you want to add calculated fields, choose **Add calculated field**. 

   1. To add a model from SageMaker AI, choose **Augment with SageMaker**. This option is only available in Amazon Quick Sight Enterprise edition.

   1. Clear the check box for any fields that you want to omit.

   1. Update any data types that you want to change.

1. When you are done, choose **Save** to save and close the dataset. 

## Adding Timestream data to an analysis
<a name="open-analysis-add-dataset-for-timestream"></a>

Following, you can find how to add an Amazon Timestream dataset to a Quick Sight analysis. Before you begin, make sure that you have an existing dataset that contains the Timestream data that you want to use.

**To add Amazon Timestream data to an analysis**

1. Choose **Analyses** at left.

1. Do one of the following:
   + To create a new analysis, choose **New analysis** at right. 
   + To add to an existing analysis, open the analysis that you want to edit. 
     + Choose the pencil icon at top left.
     + Choose **Add data set**.

1. Choose the Timestream dataset that you want to add.

For more information, see [Working with analyses](https://docs.aws.amazon.com/quicksight/latest/user/working-with-analyses.html).