

# Amazon Athena Snowflake connector
<a name="connectors-snowflake"></a>

The Amazon Athena connector for [Snowflake](https://www.snowflake.com/) enables Amazon Athena to run SQL queries on data stored in your Snowflake data warehouse using JDBC.

This connector can be registered with the AWS Glue Data Catalog as a federated catalog. It supports data access controls defined in AWS Lake Formation at the catalog, database, table, column, row, and tag levels. The connector uses Glue connections to centralize configuration properties in AWS Glue.

## Prerequisites
<a name="connectors-snowflake-prerequisites"></a>

Deploy the connector to your AWS account using the Athena console or the `CreateDataCatalog` API operation. For more information, see [Create a data source connection](connect-to-a-data-source.md).

## Limitations
<a name="connectors-snowflake-limitations"></a>
+ Write DDL operations are not supported.
+ In a multiplexer setup, the spill bucket and prefix are shared across all database instances.
+ All relevant Lambda limits apply. For more information, see [Lambda quotas](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html) in the *AWS Lambda Developer Guide*.
+ Only legacy connections support a multiplexer setup.
+ Currently, Snowflake views are supported with a single split only.
+ In Snowflake, object names are case sensitive. Athena accepts mixed case in DDL and DML queries, but by default [lower cases](https://docs.aws.amazon.com/athena/latest/ug/tables-databases-columns-names.html#table-names-and-table-column-names-in-athena-must-be-lowercase) the object names when it executes the query. The Snowflake connector supports only lower case when the Glue Data Catalog or Lake Formation is used. When the Athena catalog is used, you can control the casing behavior with the `casing_mode` Lambda environment variable, whose possible values are listed in the [Parameters](#connectors-snowflake-parameters) section (for example, `key=casing_mode, value=CASE_INSENSITIVE_SEARCH`).

## Terms
<a name="connectors-snowflake-terms"></a>

The following terms relate to the Snowflake connector.
+ **Database instance** – Any instance of a database deployed on premises, on Amazon EC2, or on Amazon RDS.
+ **Handler** – A Lambda handler that accesses your database instance. A handler can be for metadata or for data records.
+ **Metadata handler** – A Lambda handler that retrieves metadata from your database instance.
+ **Record handler** – A Lambda handler that retrieves data records from your database instance.
+ **Composite handler** – A Lambda handler that retrieves both metadata and data records from your database instance.
+ **Property or parameter** – A database property used by handlers to extract database information. You configure these properties as Lambda environment variables.
+ **Connection String** – A string of text used to establish a connection to a database instance.
+ **Catalog** – A non-AWS Glue catalog registered with Athena that is a required prefix for the `connection_string` property.
+ **Multiplexing handler** – A Lambda handler that can accept and use multiple database connections.

## Parameters
<a name="connectors-snowflake-parameters"></a>

Use the parameters in this section to configure the Snowflake connector.

### Glue connections (recommended)
<a name="snowflake-gc"></a>

We recommend that you configure the Snowflake connector by using a Glue connection object. To do this, set the `glue_connection` environment variable of the connector's Lambda function to the name of the Glue connection to use.

**Glue connections properties**

Use the following command to get the schema for a Glue connection object. This schema contains all the parameters that you can use to control your connection.

```
aws glue describe-connection-type --connection-type SNOWFLAKE
```

**Lambda environment properties**
+ **glue_connection** – Specifies the name of the Glue connection associated with the federated connector.
+ **casing_mode** – (Optional) Specifies how to handle casing for schema and table names. The `casing_mode` parameter uses the following values:
  + **NONE** – Do not change the case of the given schema and table names (run the query as is against Snowflake). This is the default value when `casing_mode` is not specified.
  + **UPPER** – Upper case all given schema and table names in the query before running it against Snowflake.
  + **LOWER** – Lower case all given schema and table names in the query before running it against Snowflake.
  + **CASE_INSENSITIVE_SEARCH** – Perform case insensitive searches against schema and table names in Snowflake. For example, you can use this mode when you have a query like `SELECT * FROM EMPLOYEE` and Snowflake contains a table called `Employee`. However, if names collide, such as a table called `EMPLOYEE` and another called `Employee` existing in Snowflake, the query fails.
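
Conceptually, the casing modes behave like the following sketch. This is an illustrative helper, not the connector's actual code; `CASE_INSENSITIVE_SEARCH` is modeled as a lookup against the names that exist in Snowflake.

```python
def resolve_identifier(name, mode, existing_names=()):
    """Illustrative sketch of casing_mode behavior (not the connector's implementation)."""
    if mode == "NONE":
        return name              # pass the identifier through unchanged
    if mode == "UPPER":
        return name.upper()      # e.g. employee -> EMPLOYEE
    if mode == "LOWER":
        return name.lower()      # e.g. EMPLOYEE -> employee
    if mode == "CASE_INSENSITIVE_SEARCH":
        matches = [n for n in existing_names if n.lower() == name.lower()]
        if len(matches) != 1:    # zero matches or a name collision fails the query
            raise LookupError(f"missing or ambiguous identifier: {name}")
        return matches[0]
    raise ValueError(f"unknown casing_mode: {mode}")
```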

**Note**  
The Snowflake connector created using Glue connections does not support the use of a multiplexing handler.
The Snowflake connector created using Glue connections only supports `ConnectionSchemaVersion` 2.

**Storing credentials**

All connectors that use Glue connections must use AWS Secrets Manager to store credentials. For more information, see [Authenticate with Snowflake](connectors-snowflake-authentication.md).

### Legacy connections
<a name="snowflake-legacy"></a>

**Note**  
Athena data source connectors created on or after December 3, 2024 use AWS Glue connections.

The parameter names and definitions listed below are for Athena data source connectors created without an associated Glue connection. Use the following parameters only when you [manually deploy](connect-data-source-serverless-app-repo.md) an earlier version of an Athena data source connector or when the `glue_connection` environment property is not specified.

**Lambda environment properties**
+ **default** – The JDBC connection string to use to connect to the Snowflake database instance. For example, `snowflake://${jdbc_connection_string}`
+ **catalog_connection_string** – Used by the multiplexing handler (not supported when using a Glue connection). A database instance connection string. Prefix the environment variable with the name of the catalog used in Athena. For example, if the catalog registered with Athena is `mysnowflakecatalog`, then the environment variable name is `mysnowflakecatalog_connection_string`.
+ **casing_mode** – (Optional) Specifies how to handle casing for schema and table names. The `casing_mode` parameter uses the following values:
  + **NONE** – Do not change the case of the given schema and table names (run the query as is against Snowflake). This is the default value when `casing_mode` is not specified.
  + **UPPER** – Upper case all given schema and table names in the query before running it against Snowflake.
  + **LOWER** – Lower case all given schema and table names in the query before running it against Snowflake.
  + **CASE_INSENSITIVE_SEARCH** – Perform case insensitive searches against schema and table names in Snowflake. For example, you can use this mode when you have a query like `SELECT * FROM EMPLOYEE` and Snowflake contains a table called `Employee`. However, if names collide, such as a table called `EMPLOYEE` and another called `Employee` existing in Snowflake, the query fails.
+ **spill_bucket** – Specifies the Amazon S3 bucket for data that exceeds Lambda function limits.
+ **spill_prefix** – (Optional) Defaults to a subfolder in the specified `spill_bucket` called `athena-federation-spill`. We recommend that you configure an Amazon S3 [storage lifecycle](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) on this location to delete spills older than a predetermined number of days or hours.
+ **spill_put_request_headers** – (Optional) A JSON encoded map of request headers and values for the Amazon S3 `putObject` request that is used for spilling (for example, `{"x-amz-server-side-encryption" : "AES256"}`). For other possible headers, see [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) in the *Amazon Simple Storage Service API Reference*.
+ **kms_key_id** – (Optional) By default, any data that is spilled to Amazon S3 is encrypted using the AES-GCM authenticated encryption mode and a randomly generated key. To have your Lambda function use stronger encryption keys generated by AWS KMS, you can specify a KMS key ID (for example, `a7e63k4b-8loc-40db-a2a1-4d0en2cd8331`).
+ **disable_spill_encryption** – (Optional) When set to `True`, disables spill encryption. Defaults to `False`, so that data that is spilled to S3 is encrypted using AES-GCM, either with a randomly generated key or with keys generated by KMS. Disabling spill encryption can improve performance, especially if your spill location uses [server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html).

#### Connection string
<a name="connectors-snowflake-connection-string"></a>

Use a JDBC connection string in the following format to connect to a database instance.

```
snowflake://${jdbc_connection_string}
```

#### Using a multiplexing handler
<a name="connectors-snowflake-using-a-multiplexing-handler"></a>

You can use a multiplexer to connect to multiple database instances with a single Lambda function. Requests are routed by catalog name. Use the following classes in Lambda.



| Handler | Class | 
| --- | --- | 
| Composite handler | SnowflakeMuxCompositeHandler | 
| Metadata handler | SnowflakeMuxMetadataHandler | 
| Record handler | SnowflakeMuxRecordHandler | 

##### Multiplexing handler parameters
<a name="connectors-snowflake-multiplexing-handler-parameters"></a>



| Parameter | Description | 
| --- | --- | 
| catalog_connection_string | Required. A database instance connection string. Prefix the environment variable with the name of the catalog used in Athena. For example, if the catalog registered with Athena is mysnowflakecatalog, then the environment variable name is mysnowflakecatalog_connection_string. | 
| default | Required. The default connection string. This string is used when the catalog is lambda:${AWS_LAMBDA_FUNCTION_NAME}. | 

The following example properties are for a Snowflake MUX Lambda function that supports two database instances: `snowflake1` (the default), and `snowflake2`.



| Property | Value | 
| --- | --- | 
| default | snowflake://jdbc:snowflake://snowflake1.host:port/?warehouse=warehousename&db=db1&schema=schema1&${Test/RDS/Snowflake1} | 
| snowflake_catalog1_connection_string | snowflake://jdbc:snowflake://snowflake1.host:port/?warehouse=warehousename&db=db1&schema=schema1${Test/RDS/Snowflake1} | 
| snowflake_catalog2_connection_string | snowflake://jdbc:snowflake://snowflake2.host:port/?warehouse=warehousename&db=db1&schema=schema1&user=sample2&password=sample2 | 
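
Conceptually, the multiplexer resolves a connection string from the Lambda environment by catalog name and falls back to `default`. The following is an illustrative sketch, not the connector's actual routing code.

```python
import os

def lookup_connection_string(catalog, env=os.environ):
    """Illustrative sketch: resolve the connection string for an Athena catalog name."""
    key = f"{catalog}_connection_string"
    if key in env:
        return env[key]
    # The default string is used when the catalog is lambda:${AWS_LAMBDA_FUNCTION_NAME}.
    return env["default"]
```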

##### Providing credentials
<a name="connectors-snowflake-providing-credentials"></a>

To provide a user name and password for your database in your JDBC connection string, you can use connection string properties or AWS Secrets Manager.
+ **Connection String** – A user name and password can be specified as properties in the JDBC connection string.
**Important**  
As a security best practice, don't use hardcoded credentials in your environment variables or connection strings. For information about moving your hardcoded secrets to AWS Secrets Manager, see [Move hardcoded secrets to AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/hardcoded.html) in the *AWS Secrets Manager User Guide*.
+ **AWS Secrets Manager** – To use the Athena Federated Query feature with AWS Secrets Manager, the VPC connected to your Lambda function should have [internet access](https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/) or a [VPC endpoint](https://docs.aws.amazon.com/secretsmanager/latest/userguide/vpc-endpoint-overview.html) to connect to Secrets Manager.

  You can put the name of a secret in AWS Secrets Manager in your JDBC connection string. The connector replaces the secret name with the `username` and `password` values from Secrets Manager.

  For Amazon RDS database instances, this support is tightly integrated. If you use Amazon RDS, we highly recommend using AWS Secrets Manager and credential rotation. If your database does not use Amazon RDS, store the credentials as JSON in the following format:

  ```
  {"username": "${username}", "password": "${password}"}
  ```

**Example connection string with secret name**  
The following string has the secret name `${Test/RDS/Snowflake1}`.

```
snowflake://jdbc:snowflake://snowflake1.host:port/?warehouse=warehousename&db=db1&schema=schema1${Test/RDS/Snowflake1}&... 
```

The connector uses the secret name to retrieve secrets and provide the user name and password, as in the following example.

```
snowflake://jdbc:snowflake://snowflake1.host:port/?warehouse=warehousename&db=db1&schema=schema1&user=sample2&password=sample2&... 
```

Currently, Snowflake recognizes the `user` and `password` JDBC properties. It also accepts the user name and password in the format *username*`/`*password* without the keys `user` or `password`.
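
The substitution shown above can be sketched as a simple string replacement. This is illustrative only; the connector performs the equivalent step internally after fetching the secret from Secrets Manager.

```python
import json
import re

def substitute_secret(connection_string, secret_json):
    """Illustrative sketch: replace a ${secretName} placeholder with user/password
    values from a Secrets Manager payload of the form
    {"username": "...", "password": "..."}."""
    creds = json.loads(secret_json)
    replacement = f"user={creds['username']}&password={creds['password']}"
    return re.sub(r"\$\{[^}]+\}", replacement, connection_string)
```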

#### Using a single connection handler
<a name="connectors-snowflake-using-a-single-connection-handler"></a>

You can use the following single connection metadata and record handlers to connect to a single Snowflake instance.



| Handler type | Class | 
| --- | --- | 
| Composite handler | SnowflakeCompositeHandler | 
| Metadata handler | SnowflakeMetadataHandler | 
| Record handler | SnowflakeRecordHandler | 

##### Single connection handler parameters
<a name="connectors-snowflake-single-connection-handler-parameters"></a>



| Parameter | Description | 
| --- | --- | 
| default | Required. The default connection string. | 

The single connection handlers support one database instance and must provide a `default` connection string parameter. All other connection strings are ignored.

The following example property is for a single Snowflake instance supported by a Lambda function.



| Property | Value | 
| --- | --- | 
| default | snowflake://jdbc:snowflake://snowflake1.host:port/?secret=Test/RDS/Snowflake1 | 

#### Spill parameters
<a name="connectors-snowflake-spill-parameters"></a>

The Lambda SDK can spill data to Amazon S3. All database instances accessed by the same Lambda function spill to the same location.



| Parameter | Description | 
| --- | --- | 
| spill_bucket | Required. Spill bucket name. | 
| spill_prefix | Required. Spill bucket key prefix. | 
| spill_put_request_headers | (Optional) A JSON encoded map of request headers and values for the Amazon S3 putObject request that is used for spilling (for example, `{"x-amz-server-side-encryption" : "AES256"}`). For other possible headers, see [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) in the *Amazon Simple Storage Service API Reference*. | 

## Data type support
<a name="connectors-snowflake-data-type-support"></a>

The following table shows the corresponding data types for JDBC and Apache Arrow.



| JDBC | Arrow | 
| --- | --- | 
| Boolean | Bit | 
| Integer | Tiny | 
| Short | Smallint | 
| Integer | Int | 
| Long | Bigint | 
| float | Float4 | 
| Double | Float8 | 
| Date | DateDay | 
| Timestamp | DateMilli | 
| String | Varchar | 
| Bytes | Varbinary | 
| BigDecimal | Decimal | 
| ARRAY | List | 

## Data type conversions
<a name="connectors-snowflake-data-type-conversions"></a>

In addition to the JDBC to Arrow conversions, the connector performs certain other conversions to make the Snowflake source and Athena data types compatible. These conversions help ensure that queries get executed successfully. The following table shows these conversions.



| Source data type (Snowflake) | Converted data type (Athena) | 
| --- | --- | 
| TIMESTAMP | TIMESTAMPMILLI | 
| DATE | TIMESTAMPMILLI | 
| INTEGER | INT | 
| DECIMAL | BIGINT | 
| TIMESTAMP_NTZ | TIMESTAMPMILLI | 

All other unsupported data types are converted to `VARCHAR`.

## Partitions and splits
<a name="connectors-snowflake-partitions-and-splits"></a>

Partitions are used to determine how to generate splits for the connector. Athena constructs a synthetic column of type `varchar` that represents the partitioning scheme for the table to help the connector generate splits. The connector does not modify the actual table definition.

To create this synthetic column and the partitions, Athena requires a primary key to be defined. However, because Snowflake does not enforce primary key constraints, you must enforce uniqueness yourself. Failure to do so causes Athena to default to a single split.

## Performance
<a name="connectors-snowflake-performance"></a>

For optimal performance, use filters in queries whenever possible. In addition, we highly recommend native partitioning to retrieve huge datasets that have uniform partition distribution. Selecting a subset of columns significantly speeds up query runtime and reduces data scanned. The Snowflake connector is resilient to throttling due to concurrency.

The Athena Snowflake connector performs predicate pushdown to decrease the data scanned by the query. `LIMIT` clauses, simple predicates, and complex expressions are pushed down to the connector to reduce the amount of data scanned and decrease query execution run time.

### LIMIT clauses
<a name="connectors-snowflake-performance-limit-clauses"></a>

A `LIMIT N` statement reduces the data scanned by the query. With `LIMIT N` pushdown, the connector returns only `N` rows to Athena.

### Predicates
<a name="connectors-snowflake-performance-predicates"></a>

A predicate is an expression in the `WHERE` clause of a SQL query that evaluates to a Boolean value and filters rows based on multiple conditions. The Athena Snowflake connector can combine these expressions and push them directly to Snowflake for enhanced functionality and to reduce the amount of data scanned.

The following Athena Snowflake connector operators support predicate pushdown:
+ **Boolean:** AND, OR, NOT
+ **Equality:** EQUAL, NOT_EQUAL, LESS_THAN, LESS_THAN_OR_EQUAL, GREATER_THAN, GREATER_THAN_OR_EQUAL, IS_DISTINCT_FROM, NULL_IF, IS_NULL
+ **Arithmetic:** ADD, SUBTRACT, MULTIPLY, DIVIDE, MODULUS, NEGATE
+ **Other:** LIKE_PATTERN, IN

### Combined pushdown example
<a name="connectors-snowflake-performance-pushdown-example"></a>

For enhanced querying capabilities, combine the pushdown types, as in the following example:

```
SELECT * 
FROM my_table 
WHERE col_a > 10 
    AND ((col_a + col_b) > (col_c % col_d))
    AND (col_e IN ('val1', 'val2', 'val3') OR col_f LIKE '%pattern%') 
LIMIT 10;
```

# Authenticate with Snowflake
<a name="connectors-snowflake-authentication"></a>

You can configure the Amazon Athena Snowflake connector to use either the key-pair authentication or OAuth authentication method to connect to your Snowflake data warehouse. Both methods provide secure access to Snowflake and eliminate the need to store passwords in connection strings.
+ **Key-pair authentication** – This method uses an RSA public/private key pair to authenticate with Snowflake. The private key digitally signs authentication requests, while the corresponding public key is registered in Snowflake for verification. This method eliminates password storage.
+ **OAuth authentication** – This method uses OAuth access and refresh tokens to authenticate with Snowflake. It supports automatic token refresh, making it suitable for long-running applications.

For more information, see [Key-pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth) and [OAuth authentication](https://docs.snowflake.com/en/user-guide/oauth-custom) in the Snowflake documentation.

## Prerequisites
<a name="connectors-snowflake-authentication-prerequisites"></a>

Before you begin, complete the following prerequisites:
+ Snowflake account access with administrative privileges.
+ Snowflake user account dedicated for the Athena connector.
+ OpenSSL or equivalent key generation tools for key-pair authentication.
+ AWS Secrets Manager access to create and manage secrets.
+ Web browser to complete the OAuth flow for the OAuth authentication.

## Configure key-pair authentication
<a name="connectors-snowflake-keypair-authentication"></a>

This process involves generating an RSA key-pair, configuring your Snowflake account with the public key, and securely storing the private key in AWS Secrets Manager. The following steps will guide you through creating the cryptographic keys, setting up the necessary Snowflake permissions, and configuring AWS credentials for seamless authentication. 

1. **Generate RSA key-pair**

   Generate a private and public key pair using OpenSSL.
   + To generate an unencrypted version, use the following command in your local command line application.

     ```
     openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
     ```
   + To generate an encrypted version, use the following command, which encrypts the key (`-v2 des3`) instead of using `-nocrypt`.

     ```
     openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8
     ```
   + To generate a public key from the private key, use the following command.

     ```
     openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
     # Set appropriate permissions (Unix/Linux)
     chmod 600 rsa_key.p8
     chmod 644 rsa_key.pub
     ```
**Note**  
Do not share your private key. The private key should only be accessible to the application that needs to authenticate with Snowflake.

1. **Extract public key content without delimiters for Snowflake**

   ```
   # Extract public key content (remove BEGIN/END lines and newlines)
   cat rsa_key.pub | grep -v "BEGIN\|END" | tr -d '\n'
   ```

   Save this output; you will need it in the next step.

1. **Configure Snowflake user**

   Follow these steps to configure a Snowflake user.

   1. Create a dedicated user for the Athena connector if it doesn't already exist.

      ```
      -- Create user for Athena connector
      CREATE USER athena_connector_user;
      
      -- Create a role for the connector and assign it to the user
      CREATE ROLE athena_connector_role;
      GRANT ROLE athena_connector_role TO USER athena_connector_user;
      
      -- Grant necessary privileges
      GRANT USAGE ON WAREHOUSE your_warehouse TO ROLE athena_connector_role;
      GRANT USAGE ON DATABASE your_database TO ROLE athena_connector_role;
      GRANT SELECT ON ALL TABLES IN DATABASE your_database TO ROLE athena_connector_role;
      ```

   1. Grant authentication privileges. To assign a public key to a user, you must have one of the following roles or privileges.
      + The `MODIFY PROGRAMMATIC AUTHENTICATION METHODS` or `OWNERSHIP` privilege on the user.
      + The `SECURITYADMIN` role or higher.

      Grant the necessary privileges to assign public keys with the following command.

      ```
      GRANT MODIFY PROGRAMMATIC AUTHENTICATION METHODS ON USER athena_connector_user TO ROLE your_admin_role;
      ```

   1. Assign the public key to the Snowflake user with the following command, replacing *RSAkey* with the public key content that you extracted earlier.

      ```
      ALTER USER athena_connector_user SET RSA_PUBLIC_KEY='RSAkey';
      ```

      Verify that the public key is successfully assigned to the user with the following command.

      ```
      DESC USER athena_connector_user;
      ```

1. **Store private key in AWS Secrets Manager**

   1. Convert your private key to the format required by the connector.

      ```
      # Read private key content
      cat rsa_key.p8
      ```

   1. Create a secret in AWS Secrets Manager with the following structure.

      ```
      {
        "sfUser": "your_snowflake_user",
        "pem_private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----",
        "pem_private_key_passphrase": "passphrase_in_case_of_encrypted_private_key(optional)"
      }
      ```
**Note**  
The `-----BEGIN PRIVATE KEY-----` header and `-----END PRIVATE KEY-----` footer are optional.
Line breaks in the private key must be represented as `\n`.
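
To produce the `pem_private_key` value with escaped line breaks, a small standard-library script can convert the PEM file. This is an illustrative sketch; the file name `rsa_key.p8` is the one generated above.

```python
import json

def pem_to_secret_value(pem_text):
    """Collapse real line breaks into literal \\n sequences, as the secret format requires."""
    return pem_text.strip().replace("\n", "\\n")

# Example usage: build the full secret payload from rsa_key.p8.
# with open("rsa_key.p8") as f:
#     secret = json.dumps({
#         "sfUser": "your_snowflake_user",
#         "pem_private_key": pem_to_secret_value(f.read()),
#     })
```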

## Configure OAuth authentication
<a name="connectors-snowflake-oauth-authentication"></a>

This authentication method enables secure, token-based access to Snowflake with automatic credential refresh capabilities. The configuration process involves creating a security integration in Snowflake, retrieving OAuth client credentials, completing the authorization flow to obtain an access code, and storing the OAuth credentials in AWS Secrets Manager for the connector to use. 

1. **Create a security integration in Snowflake**

   Execute the following SQL command in Snowflake to create a Snowflake OAuth security integration.

   ```
   CREATE SECURITY INTEGRATION my_snowflake_oauth_integration_a
     TYPE = OAUTH
     ENABLED = TRUE
     OAUTH_CLIENT = CUSTOM
     OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
     OAUTH_REDIRECT_URI = 'https://localhost:8080/oauth/callback'
     OAUTH_ISSUE_REFRESH_TOKENS = TRUE
     OAUTH_REFRESH_TOKEN_VALIDITY = 7776000;
   ```

   **Configuration parameters**
   + `TYPE = OAUTH` – Specifies OAuth authentication type.
   + `ENABLED = TRUE` – Enables the security integration.
   + `OAUTH_CLIENT = CUSTOM` – Uses custom OAuth client configuration.
   + `OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'` – Sets client type for secure applications.
   + `OAUTH_REDIRECT_URI` – The callback URL for OAuth flow. It can be localhost for testing.
   + `OAUTH_ISSUE_REFRESH_TOKENS = TRUE` – Enables refresh token generation.
   + `OAUTH_REFRESH_TOKEN_VALIDITY = 7776000` – Sets refresh token validity (90 days in seconds).

1. **Retrieve OAuth client secrets**

   1. Run the following SQL command to get the client credentials.

      ```
      DESC SECURITY INTEGRATION MY_SNOWFLAKE_OAUTH_INTEGRATION_A;
      ```

   1. Retrieve the OAuth client secrets.

      ```
      SELECT SYSTEM$SHOW_OAUTH_CLIENT_SECRETS('MY_SNOWFLAKE_OAUTH_INTEGRATION_A');
      ```

      **Example response**

      ```
      {
        "OAUTH_CLIENT_SECRET_2": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "OAUTH_CLIENT_SECRET": "je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY",
        "OAUTH_CLIENT_ID": "AIDACKCEVSQ6C2EXAMPLE"
      }
      ```
**Note**  
Keep these credentials secure and do not share them. These will be used to configure the OAuth client.

1. **Authorize user and retrieve authorization code**

   1. Open the following URL in a browser.

      ```
      https://<your_account>.snowflakecomputing.com/oauth/authorize?client_id=<OAUTH_CLIENT_ID>&response_type=code&redirect_uri=https://localhost:8080/oauth/callback
      ```

   1. Complete the authorization flow.

      1. Sign in using your Snowflake credentials.

      1. Grant the requested permissions. You will be redirected to the callback URI with an authorization code.

   1. Extract the authorization code by copying the code parameter from the redirect URL.

      ```
      https://localhost:8080/oauth/callback?code=<authorizationcode>
      ```
**Note**  
The authorization code is valid for a limited time and can only be used once.
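
Extracting the `code` parameter from the redirect URL can be done with the Python standard library, as in this illustrative sketch:

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(redirect_url):
    """Parse the query string of the callback URL and return the one-time code."""
    params = parse_qs(urlparse(redirect_url).query)
    return params["code"][0]
```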

1. **Store OAuth credentials in AWS Secrets Manager**

   Create a secret in AWS Secrets Manager with the following structure.

   ```
   {
     "redirect_uri": "https://localhost:8080/oauth/callback",
     "client_secret": "je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY",
     "token_url": "https://<your_account>.snowflakecomputing.com/oauth/token-request",
     "client_id": "AIDACKCEVSQ6C2EXAMPLE",
     "username": "your_snowflake_username",
     "auth_code": "authorizationcode"
   }
   ```

   **Required fields**
   + `redirect_uri` – OAuth redirect URI that you obtained from Step 1.
   + `client_secret` – OAuth client secret that you obtained from Step 2.
   + `token_url` – The Snowflake OAuth token endpoint.
   + `client_id` – The OAuth client ID from Step 2.
   + `username` – The Snowflake username for the connector.
   + `auth_code` – The authorization code that you obtained from Step 3.
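
The stored `auth_code` is exchanged for tokens using the standard OAuth 2.0 authorization-code grant (RFC 6749, section 4.1.3). The following sketch builds the form-encoded token request body from the secret fields above; it is illustrative only, not the connector's actual implementation.

```python
from urllib.parse import urlencode

def build_token_request_body(secret):
    """Illustrative sketch: standard authorization-code grant parameters (RFC 6749)."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": secret["auth_code"],
        "redirect_uri": secret["redirect_uri"],
        "client_id": secret["client_id"],
    })
```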

After you create a secret, you get a secret ARN that you can use in your Glue connection when you [create a data source connection](connect-to-a-data-source.md). 

## Passthrough queries
<a name="connectors-snowflake-passthrough-queries"></a>

The Snowflake connector supports [passthrough queries](federated-query-passthrough.md). Passthrough queries use a table function to push your full query down to the data source for execution.

To use passthrough queries with Snowflake, use the following syntax:

```
SELECT * FROM TABLE(
        system.query(
            query => 'query string'
        ))
```

The following example query pushes down a query to a data source in Snowflake. The query selects all columns in the `customer` table, limiting the results to 10.

```
SELECT * FROM TABLE(
        system.query(
            query => 'SELECT * FROM customer LIMIT 10'
        ))
```

## License information
<a name="connectors-snowflake-license-information"></a>

By using this connector, you acknowledge the inclusion of third party components, a list of which can be found in the [pom.xml](https://github.com/awslabs/aws-athena-query-federation/blob/master/athena-snowflake/pom.xml) file for this connector, and agree to the terms in the respective third party licenses provided in the [LICENSE.txt](https://github.com/awslabs/aws-athena-query-federation/blob/master/athena-snowflake/LICENSE.txt) file on GitHub.com.

## Additional resources
<a name="connectors-snowflake-additional-resources"></a>

For the latest JDBC driver version information, see the [pom.xml](https://github.com/awslabs/aws-athena-query-federation/blob/master/athena-snowflake/pom.xml) file for the Snowflake connector on GitHub.com.

For additional information about this connector, visit [the corresponding site](https://github.com/awslabs/aws-athena-query-federation/tree/master/athena-snowflake) on GitHub.com.