

# Configuring an SFTP, FTPS, or FTP server endpoint
<a name="sftp-for-transfer-family"></a>

This topic provides details for creating and using AWS Transfer Family server endpoints that use one or more of the SFTP, FTPS, and FTP protocols.

**Topics**
+ [Identity provider options](#identity-provider-details)
+ [AWS Transfer Family endpoint type matrix](#endpoint-matrix)
+ [Configuring an SFTP, FTPS, or FTP server endpoint](tf-server-endpoint.md)
+ [FTP and FTPS Network Load Balancer considerations](#ftp-ftps-nlb-considerations)
+ [Transferring files over a server endpoint using a client](transfer-file.md)
+ [Managing users for server endpoints](create-user.md)
+ [Using logical directories to simplify your Transfer Family directory structures](logical-dir-mappings.md)
+ [Access your FSx for NetApp ONTAP file systems with Transfer Family](fsx-s3-access-points.md)

## Identity provider options
<a name="identity-provider-details"></a>

AWS Transfer Family provides several methods for authenticating and managing users. The following table compares the available identity providers that you can use with Transfer Family.


| Capability | AWS Transfer Family service managed | AWS Managed Microsoft AD | Amazon API Gateway | AWS Lambda | 
| --- | --- | --- | --- | --- | 
| Supported protocols | SFTP | SFTP, FTPS, FTP | SFTP, FTPS, FTP | SFTP, FTPS, FTP | 
|  Key-based authentication  |  Yes  |  No  |  Yes  |  Yes  | 
|  Password authentication  |  No  |  Yes  |  Yes  |  Yes  | 
|  AWS Identity and Access Management (IAM) and POSIX  |  Yes  |  Yes  |  Yes  |  Yes  | 
|  Logical home directory  |  Yes  |  Yes  |  Yes  |  Yes  | 
| Parameterized access (username-based) | Yes | Yes | Yes | Yes | 
|  Ad hoc access structure  |  Yes  |  No  |  Yes  |  Yes  | 
|  AWS WAF  |  No  |  No  |  Yes  |  No  | 

Notes:
+ IAM is used to control access for Amazon S3 backing storage, and POSIX is used for Amazon EFS.
+ *Ad hoc* refers to the ability to send the user profile at runtime. For example, you can land users in their home directories by passing the username as a variable.
+ For details about AWS WAF, see [Add a web application firewall](web-application-firewall.md).
+ A blog post describes using a Lambda function integrated with Microsoft Entra ID (formerly Azure AD) as your Transfer Family identity provider. For details, see [Authenticating to AWS Transfer Family with Azure Active Directory and AWS Lambda](https://aws.amazon.com/blogs/storage/authenticating-to-aws-transfer-family-with-azure-active-directory-and-aws-lambda/).
+ We provide several CloudFormation templates to help you quickly deploy a Transfer Family server that uses a custom identity provider. For details, see [Lambda function templates](custom-lambda-idp.md#lambda-idp-templates).
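To make the *ad hoc* and *parameterized access* rows concrete, the following sketch shows the kind of response a custom identity provider Lambda function can return. The `${transfer:UserName}` variable is resolved by Transfer Family at session time, so one logical directory mapping lands every user in their own folder. The bucket name and role ARN here are placeholders, not values from this guide.

```python
import json

# Placeholder values -- substitute your own bucket and IAM role ARN.
BUCKET = "amzn-s3-demo-bucket"
ROLE_ARN = "arn:aws:iam::111122223333:role/transfer-access-role"

def build_auth_response(public_keys):
    """Sketch of the response a custom identity provider Lambda can return.

    HomeDirectoryDetails is a JSON-encoded string; the ${transfer:UserName}
    variable maps each authenticated user to their own home directory.
    """
    return {
        "Role": ROLE_ARN,
        "PublicKeys": public_keys,          # key-based authentication
        "HomeDirectoryType": "LOGICAL",
        "HomeDirectoryDetails": json.dumps(
            [{"Entry": "/", "Target": f"/{BUCKET}/${{transfer:UserName}}"}]
        ),
    }

response = build_auth_response(["ssh-ed25519 AAAA...example"])
print(response["HomeDirectoryDetails"])
```

Returning an empty `PublicKeys` list and relying on the password that Transfer Family passes to the function instead is how the same provider supports password authentication.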

In the following procedures, you can create an SFTP-enabled server, FTPS-enabled server, FTP-enabled server, or AS2-enabled server.

**Next steps**
+ [Create an SFTP-enabled server](create-server-sftp.md)
+ [Create an FTPS-enabled server](create-server-ftps.md)
+ [Create an FTP-enabled server](create-server-ftp.md)
+ [Configuring AS2](create-b2b-server.md)

## AWS Transfer Family endpoint type matrix
<a name="endpoint-matrix"></a>

When you create a Transfer Family server, you choose the type of endpoint to use. The following table describes characteristics for each type of endpoint.


**Endpoint type matrix**  

| Characteristic | Public | VPC - Internet | VPC - Internal | VPC_ENDPOINT (deprecated) | 
| --- | --- | --- | --- | --- | 
| Supported protocols | SFTP | SFTP, FTPS, AS2 | SFTP, FTP, FTPS, AS2 | SFTP | 
| Access | Over the internet. This endpoint type doesn't require any special configuration in your VPC. | Over the internet and from within VPC and VPC-connected environments, such as an on-premises data center over Direct Connect or VPN. | From within VPC and VPC-connected environments, such as an on-premises data center over Direct Connect or VPN. | From within VPC and VPC-connected environments, such as an on-premises data center over Direct Connect or VPN. | 
| Static IP address | You can’t attach a static IP address. AWS provides IP addresses that are subject to change. |  You can attach Elastic IP addresses to the endpoint. These can be AWS-owned IP addresses or your own IP addresses ([Bring your own IP addresses](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html)). Elastic IP addresses attached to the endpoint don't change. Private IP addresses attached to the server also don't change.  | Private IP addresses attached to the endpoint don't change. | Private IP addresses attached to the endpoint don't change. | 
| Source IP allow list |  This endpoint type does not support allow lists by source IP addresses. The endpoint is publicly accessible and listens for traffic over port 22.  For VPC-hosted endpoints, SFTP Transfer Family servers can operate over port 22 (the default), 2222, 2223, or 22000.   |  To allow access by source IP address, you can use security groups attached to the server endpoints and network ACLs attached to the subnet that the endpoint is in.  |  To allow access by source IP address, you can use security groups attached to the server endpoints and network access control lists (network ACLs) attached to the subnet that the endpoint is in.  |  To allow access by source IP address, you can use security groups attached to the server endpoints and network ACLs attached to the subnet that the endpoint is in.  | 
| Client firewall allow list |  You must allow the DNS name of the server. Because IP addresses are subject to change, avoid using IP addresses for your client firewall allow list.  |  You can allow the DNS name of the server or the Elastic IP addresses attached to the server.  |  You can allow the private IP addresses or the DNS name of the endpoints.  |  You can allow the private IP addresses or the DNS name of the endpoints.  | 
| IP address type | IPv4 (default) or dual-stack (IPv4 and IPv6) | IPv4 only (dual-stack not supported) | IPv4 (default) or dual-stack (IPv4 and IPv6) | IPv4 only (dual-stack not supported) | 

**Note**  
The `VPC_ENDPOINT` endpoint type is now deprecated and cannot be used to create new servers. Instead of using `EndpointType=VPC_ENDPOINT`, use the VPC endpoint type (`EndpointType=VPC`), which you can use as either **Internal** or **Internet Facing**, as described in the preceding table.  
For details about the deprecation, see [Discontinuing the use of VPC_ENDPOINT](create-server-in-vpc.md#deprecate-vpc-endpoint). You can change the endpoint type for your server by using the Transfer Family console, AWS CLI, API, SDKs, or CloudFormation. To change your server's endpoint type, see [Updating the AWS Transfer Family server endpoint type from VPC_ENDPOINT to VPC](update-endpoint-type-vpc.md).  
For information about managing VPC endpoint permissions, see [Limiting VPC endpoint access for Transfer Family servers](create-server-in-vpc.md#limit-vpc-endpoint-access).

Consider the following options to increase the security posture of your AWS Transfer Family server:
+ Use a VPC endpoint with internal access, so that the server is accessible only to clients within your VPC or VPC-connected environments such as an on-premises data center over Direct Connect or VPN.
+ To allow clients to access the endpoint over the internet and protect your server, use a VPC endpoint with internet-facing access. Then, modify the VPC's security groups to allow traffic only from certain IP addresses that host your users' clients.
+ If you require password-based authentication and you use a custom identity provider with your server, it's a best practice that your password policy prevents users from creating weak passwords and limits the number of failed login attempts.
+ AWS Transfer Family is a managed service, so it doesn't provide shell access. You can't directly access the underlying SFTP server to run OS-native commands on Transfer Family servers.
+ Use a Network Load Balancer in front of a VPC endpoint with internal access. Change the listener port on the load balancer from port 22 to a different port. This can reduce, but not eliminate, the risk of port scanners and bots probing your server, because port 22 is most commonly used for scanning. For details, see the blog post [Network Load Balancers now support Security groups](https://aws.amazon.com/blogs/containers/network-load-balancers-now-support-security-groups/).
**Note**  
If you use a Network Load Balancer, the AWS Transfer Family CloudWatch logs show the IP address for the NLB, rather than the actual client IP address.
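For the security-group option above, you restrict the server endpoint to known client addresses by adding ingress rules. The following boto3-style sketch builds the rule parameters; the security group ID and CIDR ranges are hypothetical placeholders, and the API call is commented out so the sketch stays side-effect free.

```python
# Hypothetical values -- replace with your security group ID and the CIDR
# ranges that host your users' clients.
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
ALLOWED_CIDRS = ["203.0.113.0/24", "198.51.100.10/32"]

ingress_params = {
    "GroupId": SECURITY_GROUP_ID,
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 22,  # SFTP default; FTPS/FTP also need 21 and 8192-8200
            "ToPort": 22,
            "IpRanges": [
                {"CidrIp": cidr, "Description": "Transfer Family client"}
                for cidr in ALLOWED_CIDRS
            ],
        }
    ],
}

# import boto3
# boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
print(len(ingress_params["IpPermissions"][0]["IpRanges"]))
```

Because the rules deny everything not explicitly listed, remember to add a rule set per protocol port range when the server has multiple protocols enabled.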

# Configuring an SFTP, FTPS, or FTP server endpoint
<a name="tf-server-endpoint"></a>

You can create a file transfer server by using the AWS Transfer Family service. The following file transfer protocols are available:
+ Secure Shell (SSH) File Transfer Protocol (SFTP) – File transfer over SSH. For details, see [Create an SFTP-enabled server](create-server-sftp.md).
**Note**  
We provide an AWS CDK example for creating an SFTP Transfer Family server. The example uses TypeScript and is available in the [aws-cdk-examples repository on GitHub](https://github.com/aws-samples/aws-cdk-examples/tree/master/typescript/aws-transfer-sftp-server).
+ File Transfer Protocol Secure (FTPS) – File transfer with TLS encryption. For details, see [Create an FTPS-enabled server](create-server-ftps.md).
+ File Transfer Protocol (FTP) – Unencrypted file transfer. For details, see [Create an FTP-enabled server](create-server-ftp.md).
+ Applicability Statement 2 (AS2) – File transfer for transporting structured business-to-business data. For details, see [Configuring AS2](create-b2b-server.md). For AS2, you can quickly create a CloudFormation stack for demonstration purposes. This procedure is described in [Use a template to create a demo Transfer Family AS2 stack](create-as2-transfer-server.md#as2-cfn-demo-template).

You can create a server with multiple protocols.

**Note**  
If you have multiple protocols enabled for the same server endpoint and you want to provide access by using the same username over multiple protocols, you can do so as long as the credentials specific to the protocol have been set up in your identity provider. For FTP, we recommend maintaining separate credentials from SFTP and FTPS. This is because, unlike SFTP and FTPS, FTP transmits credentials in clear text. By isolating FTP credentials from SFTP or FTPS, if FTP credentials are shared or exposed, your workloads using SFTP or FTPS remain secure.

When you create a server, you choose a specific AWS Region to perform the file operation requests of users who are assigned to that server. Along with assigning the server one or more protocols, you also assign one of the following identity provider types:
+ **Service managed by using SSH keys**. For details, see [Working with service-managed users](service-managed-users.md).
+ **AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD)**. This method allows you to integrate your Microsoft Active Directory groups to provide access to your Transfer Family servers. For details, see [Using AWS Directory Service for Microsoft Active Directory](directory-services-users.md).
+ **A custom identity provider**. Transfer Family offers several options for using a custom identity provider, as described in the [Working with custom identity providers](custom-idp-intro.md) topic.

You also assign the server an endpoint type (publicly accessible or VPC hosted) and a hostname by using the default server endpoint, or a custom hostname by using the Amazon Route 53 service or by using a Domain Name System (DNS) service of your choice. A server hostname must be unique in the AWS Region where it's created.

Additionally, you can assign an Amazon CloudWatch logging role to push events to your CloudWatch logs, choose a security policy that contains the cryptographic algorithms that are enabled for use by your server, and add metadata to the server in the form of tags that are key-value pairs.

**Important**  
You incur costs for instantiated servers and for data transfer. For information about pricing and to use AWS Pricing Calculator to get an estimate of the cost to use Transfer Family, see [AWS Transfer Family pricing](https://aws.amazon.com/aws-transfer-family/pricing/).
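The server properties described in this section (protocols, identity provider type, endpoint type, logging role, security policy, and tags) correspond to parameters of the Transfer Family `CreateServer` API. A minimal boto3-style sketch follows; the role ARN and tag values are illustrative placeholders, and the client call is commented out so the sketch has no side effects.

```python
# Placeholder ARN -- replace with a role from your account.
LOGGING_ROLE_ARN = "arn:aws:iam::111122223333:role/transfer-logging-role"

create_server_params = {
    "Protocols": ["SFTP"],                      # any of SFTP, FTPS, FTP, AS2
    "IdentityProviderType": "SERVICE_MANAGED",  # or AWS_DIRECTORY_SERVICE,
                                                # AWS_LAMBDA, API_GATEWAY
    "EndpointType": "PUBLIC",                   # or VPC
    "LoggingRole": LOGGING_ROLE_ARN,            # pushes events to CloudWatch
    "SecurityPolicyName": "TransferSecurityPolicy-2024-01",
    "Tags": [{"Key": "Environment", "Value": "test"}],
}

# import boto3
# server_id = boto3.client("transfer").create_server(
#     **create_server_params)["ServerId"]
print(sorted(create_server_params))
```

The console procedures that follow set the same properties step by step.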

# Create an SFTP-enabled server
<a name="create-server-sftp"></a>

Secure Shell (SSH) File Transfer Protocol (SFTP) is a network protocol used for secure transfer of data over the internet. The protocol supports the full security and authentication functionality of SSH. It's widely used to exchange data, including sensitive information, between business partners in a variety of industries, such as financial services, healthcare, retail, and advertising.

**Note the following**
+ SFTP servers for Transfer Family operate over port 22. For VPC-hosted endpoints, SFTP Transfer Family servers can also operate over port 2222, 2223, or 22000. For details, see [Create a server in a virtual private cloud](create-server-in-vpc.md).
+ Public endpoints can't restrict traffic by using security groups. To use security groups with your Transfer Family server, you must host your server's endpoint inside a virtual private cloud (VPC) as described in [Create a server in a virtual private cloud](create-server-in-vpc.md).

**See also**
+ We provide an AWS CDK example for creating an SFTP Transfer Family server. The example uses TypeScript, and is available on GitHub [here](https://github.com/aws-samples/aws-cdk-examples/tree/master/typescript/aws-transfer-sftp-server).
+ For a walkthrough of how to deploy a Transfer Family server inside of a VPC, see [Use IP allow list to secure your AWS Transfer Family servers](https://aws.amazon.com/blogs/storage/use-ip-allow-list-to-secure-your-aws-transfer-for-sftp-servers/).

**To create an SFTP-enabled server**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/), choose **Servers** from the navigation pane, and then choose **Create server**.

1. In **Choose protocols**, select **SFTP**, and then choose **Next**.

1. In **Choose an identity provider**, choose the identity provider that you want to use to manage user access. You have the following options:
   + **Service managed** – You store user identities and keys in AWS Transfer Family. 
   + **AWS Directory Service for Microsoft Active Directory** – You provide a Directory Service directory to access the endpoint. By doing so, you can use credentials stored in your Active Directory to authenticate your users. To learn more about working with AWS Managed Microsoft AD identity providers, see [Using AWS Directory Service for Microsoft Active Directory](directory-services-users.md).
**Note**  
 Cross-Account and Shared directories are not supported for AWS Managed Microsoft AD. 
To set up a server with Directory Service as your identity provider, you need to add some Directory Service permissions. For details, see [Before you start using AWS Directory Service for Microsoft Active Directory](directory-services-users.md#managed-ad-prereq).
   + **Custom identity provider** – Choose either of the following options:
     + **Use AWS Lambda to connect your identity provider** – You can use an existing identity provider, backed by a Lambda function. You provide the name of the Lambda function. For more information, see [Using AWS Lambda to integrate your identity provider](custom-lambda-idp.md).
     + **Use Amazon API Gateway to connect your identity provider** – You can create an API Gateway method backed by a Lambda function for use as an identity provider. You provide an Amazon API Gateway URL and an invocation role. For more information, see [Using Amazon API Gateway to integrate your identity provider](authentication-api-gateway.md).  
![\[The Choose an identity provider console section with Custom identity provider selected. Also has the default value selected, which is that users can authenticate using either their password or key.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/custom-lambda-console.png)

1. Choose **Next**.

1. In **Choose an endpoint**, do the following:

   1. For **Endpoint type**, choose the **Publicly accessible** endpoint type. For a **VPC hosted** endpoint, see [Create a server in a virtual private cloud](create-server-in-vpc.md).

   1.  For **IP address type**, choose **IPv4** (default) for backwards compatibility or **Dual-stack** to enable both IPv4 and IPv6 connections to your endpoint.
**Note**  
Dual-stack mode allows your Transfer Family endpoint to communicate with both IPv4-enabled and IPv6-enabled clients. This enables you to gradually transition from IPv4-based to IPv6-based systems without needing to switch all at once.

   1. (Optional) For **Custom hostname**, choose **None**.

      You get a server hostname provided by AWS Transfer Family. The server hostname takes the form `serverId.server.transfer.regionId.amazonaws.com`.

      For a custom hostname, you specify a custom alias for your server endpoint. To learn more about working with custom hostnames, see [Working with custom hostnames](requirements-dns.md).

   1. (Optional) For **FIPS Enabled**, select the **FIPS Enabled endpoint** check box to ensure that the endpoint complies with Federal Information Processing Standards (FIPS).
**Note**  
FIPS-enabled endpoints are only available in North American AWS Regions. For available Regions, see [AWS Transfer Family endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/transfer-service.html) in the *AWS General Reference*. For more information about FIPS, see [Federal Information Processing Standard (FIPS) 140-2](https://aws.amazon.com/compliance/fips/).

   1. Choose **Next**.

1. On the **Choose domain** page, choose the AWS storage service that you want to use to store and access your data over the selected protocol:
   + Choose **Amazon S3** to store and access your files as objects over the selected protocol.
   + Choose **Amazon EFS** to store and access your files in your Amazon EFS file system over the selected protocol.

   Choose **Next**.

1. In **Configure additional details**, do the following:

   1. For logging, specify an existing log group or create a new one (the default option). If you choose an existing log group, you must select one that is associated with your AWS account.  
![\[Logging pane for Configure additional details in the Create server wizard. Choose an existing log group is selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-choose-existing-group.png)

      If you choose **Create log group**, the CloudWatch console ([https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)) opens to the **Create log group** page. For details, see [ Create a log group in CloudWatch Logs](https://docs.aws.amazon.com//AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#Create-Log-Group). 

   1.  (Optional) For **Managed workflows**, choose workflow IDs (and a corresponding role) that Transfer Family should assume when executing the workflow. You can choose one workflow to execute upon a complete upload, and another to execute upon a partial upload. To learn more about processing your files by using managed workflows, see [AWS Transfer Family managed workflows](transfer-workflows.md).  
![\[The Managed workflows console section.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-addtoserver.png)

   1. For **Cryptographic algorithm options**, choose a security policy that contains the cryptographic algorithms enabled for use by your server. The latest security policy is the default. For details, see [Security policies for AWS Transfer Family servers](security-policies.md).

   1. (Optional) For **Server Host Key**, enter an RSA, ED25519, or ECDSA private key that will be used to identify your server when clients connect to it over SFTP. You can also add a description to differentiate among multiple host keys. 

      After you create your server, you can add additional host keys. Having multiple host keys is useful if you want to rotate keys or if you want to have different types of keys, such as an RSA key and also an ECDSA key.
**Note**  
The **Server Host Key** section is used only for migrating users from an existing SFTP-enabled server.

   1. (Optional) For **Tags**, for **Key** and **Value**, enter one or more tags as key-value pairs, and then choose **Add tag**.

   1. Choose **Next**.

   1. You can optimize performance for your Amazon S3 directories. For example, suppose that you go into your home directory, and you have 10,000 subdirectories. In other words, your Amazon S3 bucket has 10,000 folders. In this scenario, if you run the `ls` (list) command, the list operation takes between six and eight minutes. However, if you optimize your directories, this operation takes only a few seconds.

      When you create your server using the console, optimized directories is enabled by default. If you create your server using the API, this behavior is not enabled by default.  
![\[The Optimized directories console section.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/optimized-directories.png)

   1. (Optional) Configure AWS Transfer Family servers to display customized messages, such as organizational policies or terms and conditions, to your end users. For **Display banner**, in the **Pre-authentication display banner** text box, enter the text message that you want to display to your users before they authenticate.

   1. (Optional) You can configure the following additional options.
      + **SetStat option**: enable this option to ignore the error that is generated when a client attempts to use `SETSTAT` on a file that you are uploading to an Amazon S3 bucket. For additional details, see the `SetStatOption` documentation in the [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html) topic.
      + **TLS session resumption**: this option is only available if you have enabled FTPS as one of the protocols for this server.
      + **Passive IP**: this option is only available if you have enabled FTPS or FTP as one of the protocols for this server.  
![\[Additional options screen for Server details page.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-server-configure-additional-items-sftp.png)

1. In **Review and create**, review your choices.
   + If you want to edit any of them, choose **Edit** next to the step.
**Note**  
You must review each step after the step that you chose to edit.
   + If you have no changes, choose **Create server** to create your server. You are taken to the **Servers** page, shown following, where your new server is listed.

It can take a couple of minutes before the status for your new server changes to **Online**. At that point, your server can perform file operations, but you'll need to create a user first. For details on creating users, see [Managing users for server endpoints](create-user.md).

# Create an FTPS-enabled server
<a name="create-server-ftps"></a>

File Transfer Protocol over SSL (FTPS) is an extension to FTP. It uses Transport Layer Security (TLS) and Secure Sockets Layer (SSL) cryptographic protocols to encrypt traffic. FTPS allows encryption of both the control and data channel connections either concurrently or independently.

**Note**  
For important considerations about Network Load Balancers, see [Avoid placing NLBs and NATs in front of AWS Transfer Family servers](infrastructure-security.md#nlb-considerations).

**To create an FTPS-enabled server**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/), choose **Servers** from the navigation pane, and then choose **Create server**.

1. In **Choose protocols**, select **FTPS**.

   For **Server certificate**, choose a certificate stored in AWS Certificate Manager (ACM) that is used to identify your server when clients connect to it over FTPS, and then choose **Next**.

   To request a new public certificate, see [Request a public certificate](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html) in the *AWS Certificate Manager User Guide*.

   To import an existing certificate into ACM, see [Importing certificates into ACM](https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html) in the *AWS Certificate Manager User Guide*.

   To request a private certificate to use FTPS through private IP addresses, see [Requesting a Private Certificate](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-private.html) in the *AWS Certificate Manager User Guide*.

   Certificates with the following cryptographic algorithms and key sizes are supported:
   + 2048-bit RSA (RSA_2048)
   + 4096-bit RSA (RSA_4096)
   + Elliptic Prime Curve 256 bit (EC_prime256v1)
   + Elliptic Prime Curve 384 bit (EC_secp384r1)
   + Elliptic Prime Curve 521 bit (EC_secp521r1)
**Note**  
The certificate must be a valid SSL/TLS X.509 version 3 certificate with a fully qualified domain name (FQDN) or IP address specified, and it must contain information about the issuer.

1. In **Choose an identity provider**, choose the identity provider that you want to use to manage user access. You have the following options:
   + **AWS Directory Service for Microsoft Active Directory** – You provide a Directory Service directory to access the endpoint. By doing so, you can use credentials stored in your Active Directory to authenticate your users. To learn more about working with AWS Managed Microsoft AD identity providers, see [Using AWS Directory Service for Microsoft Active Directory](directory-services-users.md).
**Note**  
 Cross-Account and Shared directories are not supported for AWS Managed Microsoft AD. 
To set up a server with Directory Service as your identity provider, you need to add some Directory Service permissions. For details, see [Before you start using AWS Directory Service for Microsoft Active Directory](directory-services-users.md#managed-ad-prereq).
   + **Custom identity provider** – Choose either of the following options:
     + **Use AWS Lambda to connect your identity provider** – You can use an existing identity provider, backed by a Lambda function. You provide the name of the Lambda function. For more information, see [Using AWS Lambda to integrate your identity provider](custom-lambda-idp.md).
     + **Use Amazon API Gateway to connect your identity provider** – You can create an API Gateway method backed by a Lambda function for use as an identity provider. You provide an Amazon API Gateway URL and an invocation role. For more information, see [Using Amazon API Gateway to integrate your identity provider](authentication-api-gateway.md).  
![\[The Choose an identity provider console section with Custom identity provider selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/custom-lambda-console-no-sftp.png)

1. Choose **Next**.

1. In **Choose an endpoint**, do the following:
**Note**  
 FTPS servers for Transfer Family operate over port 21 (control channel) and port range 8192–8200 (data channel).

   1. For **Endpoint type**, choose the **VPC hosted** endpoint type to host your server's endpoint. For information about setting up your VPC hosted endpoint, see [Create a server in a virtual private cloud](create-server-in-vpc.md).
**Note**  
Publicly accessible endpoints are not supported.

   1. (Optional) For **FIPS Enabled**, select the **FIPS Enabled endpoint** check box to ensure that the endpoint complies with Federal Information Processing Standards (FIPS).
**Note**  
FIPS-enabled endpoints are only available in North American AWS Regions. For available Regions, see [AWS Transfer Family endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/transfer-service.html) in the *AWS General Reference*. For more information about FIPS, see [Federal Information Processing Standard (FIPS) 140-2](https://aws.amazon.com/compliance/fips/).

   1. Choose **Next**.  
![\[The Choose an endpoint console section with VPC hosted selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-server-choose-endpoint-vpc-internal.png)

1. On the **Choose domain** page, choose the AWS storage service that you want to use to store and access your data over the selected protocol:
   + Choose **Amazon S3** to store and access your files as objects over the selected protocol.
   + Choose **Amazon EFS** to store and access your files in your Amazon EFS file system over the selected protocol.

   Choose **Next**.

1. In **Configure additional details**, do the following:

   1. For logging, specify an existing log group or create a new one (the default option).  
![\[Logging pane for Configure additional details in the Create server wizard. Choose an existing log group is selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-choose-existing-group.png)

      If you choose **Create log group**, the CloudWatch console ([https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)) opens to the **Create log group** page. For details, see [ Create a log group in CloudWatch Logs](https://docs.aws.amazon.com//AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#Create-Log-Group). 

   1.  (Optional) For **Managed workflows**, choose workflow IDs (and a corresponding role) that Transfer Family should assume when executing the workflow. You can choose one workflow to execute upon a complete upload, and another to execute upon a partial upload. To learn more about processing your files by using managed workflows, see [AWS Transfer Family managed workflows](transfer-workflows.md).  
![\[The Managed workflows console section.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-addtoserver.png)

   1. For **Cryptographic algorithm options**, choose a security policy that contains the cryptographic algorithms enabled for use by your server. The latest security policy is the default. For details, see [Security policies for AWS Transfer Family servers](security-policies.md).

   1. For **Server Host Key**, leave it blank.

   1. (Optional) For **Tags**, for **Key** and **Value**, enter one or more tags as key-value pairs, and then choose **Add tag**.

   1. You can optimize performance for your Amazon S3 directories. For example, suppose that you go into your home directory, and you have 10,000 subdirectories. In other words, your Amazon S3 bucket has 10,000 folders. In this scenario, if you run the `ls` (list) command, the list operation takes between six and eight minutes. However, if you optimize your directories, this operation takes only a few seconds.

      When you create your server using the console, optimized directories is enabled by default. If you create your server using the API, this behavior is not enabled by default.  
![\[The Optimized directories console section.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/optimized-directories.png)

   1. Choose **Next**.

   1. (Optional) You can configure AWS Transfer Family servers to display customized messages, such as organizational policies or terms and conditions, to your end users. You can also display a customized message of the day (MOTD) to users who have successfully authenticated.

      For **Display banner**, in the **Pre-authentication display banner** text box, enter the message to display to your users before they authenticate. In the **Post-authentication display banner** text box, enter the message to display after they successfully authenticate.

   1. (Optional) You can configure the following additional options.
      + **SetStat option**: Enable this option to ignore the error that is generated when a client attempts to use `SETSTAT` on a file that you are uploading to an Amazon S3 bucket. For additional details, see the `SetStatOption` documentation in the [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html) topic.
      + **TLS session resumption**: Provides a mechanism to resume or share a negotiated secret key between the control and data connections for an FTPS session. For additional details, see the `TlsSessionResumptionMode` documentation in the [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html) topic.
      + **Passive IP**: Indicates passive mode, for the FTP and FTPS protocols. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer. For additional details, see the `PassiveIp` documentation in the [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html) topic.  
![\[The Additional configuration screen showing the SetStat, TLS session resumption, and Passive IP parameters.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-server-configure-additional-items-all.png)
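These three options map to fields of the `ProtocolDetails` object in the API. The following is a minimal sketch of such a payload, with a local check that **Passive IP** holds a single IPv4 address; the address shown is a documentation placeholder.

```python
def is_single_ipv4(addr: str) -> bool:
    """Return True if addr looks like one dotted-quad IPv4 address."""
    parts = addr.split(".")
    return len(parts) == 4 and all(
        p.isdigit() and 0 <= int(p) <= 255 for p in parts
    )

# Sketch: a ProtocolDetails payload such as you might pass to UpdateServer.
# Values are placeholders; see the API reference for the allowed enum values.
protocol_details = {
    "PassiveIp": "203.0.113.10",         # e.g., a firewall or load balancer public IP
    "SetStatOption": "ENABLE_NO_OP",     # ignore client SETSTAT errors on S3 uploads
    "TlsSessionResumptionMode": "ENFORCED",
}

assert is_single_ipv4(protocol_details["PassiveIp"])
```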

1. In **Review and create**, review your choices.
   + If you want to edit any of them, choose **Edit** next to the step.
**Note**  
You must review each step after the step that you chose to edit.
   + If you have no changes, choose **Create server** to create your server. You are taken to the **Servers** page, shown following, where your new server is listed.

It can take a couple of minutes before the status for your new server changes to **Online**. At that point, your server can perform file operations for your users.

**Next steps**: Continue to [Working with custom identity providers](custom-idp-intro.md) to set up users.

# Create an FTP-enabled server
<a name="create-server-ftp"></a>

File Transfer Protocol (FTP) is a network protocol used for transferring data. FTP uses separate channels for control and data transfers. The control channel remains open until it is terminated or times out from inactivity; the data channel is active for the duration of the transfer. FTP uses clear text and does not support encryption of traffic.

**Note**  
When you enable FTP, you must choose the internal access option for the VPC-hosted endpoint. If you need your server to have data traverse the public network, you must use secure protocols, such as SFTP or FTPS. 

**Note**  
For important considerations about Network Load Balancers, see [Avoid placing NLBs and NATs in front of AWS Transfer Family servers](infrastructure-security.md#nlb-considerations).

**To create an FTP-enabled server**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/) and select **Servers** from the navigation pane, then choose **Create server**.

1. In **Choose protocols**, select **FTP**, and then choose **Next**.

1. In **Choose an identity provider**, choose the identity provider that you want to use to manage user access. You have the following options:
   + **AWS Directory Service for Microsoft Active Directory** – You provide a Directory Service directory to access the endpoint. By doing so, you can use credentials stored in your Active Directory to authenticate your users. To learn more about working with AWS Managed Microsoft AD identity providers, see [Using AWS Directory Service for Microsoft Active Directory](directory-services-users.md).
**Note**  
Cross-account and shared directories are not supported for AWS Managed Microsoft AD.
To set up a server with Directory Service as your identity provider, you need to add some Directory Service permissions. For details, see [Before you start using AWS Directory Service for Microsoft Active Directory](directory-services-users.md#managed-ad-prereq).
   + **Custom identity provider** – Choose either of the following options:
     + **Use AWS Lambda to connect your identity provider** – You can use an existing identity provider, backed by a Lambda function. You provide the name of the Lambda function. For more information, see [Using AWS Lambda to integrate your identity provider](custom-lambda-idp.md).
     + **Use Amazon API Gateway to connect your identity provider** – You can create an API Gateway method backed by a Lambda function for use as an identity provider. You provide an Amazon API Gateway URL and an invocation role. For more information, see [Using Amazon API Gateway to integrate your identity provider](authentication-api-gateway.md).  
![\[The Choose an identity provider console section with Custom identity provider selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/custom-lambda-console-no-sftp.png)

1. Choose **Next**.

1. In **Choose an endpoint**, do the following:
**Note**  
FTP servers for Transfer Family operate over port 21 (control channel) and port range 8192–8200 (data channel).

   1. For **Endpoint type**, choose **VPC hosted** to host your server's endpoint. For information about setting up your VPC hosted endpoint, see [Create a server in a virtual private cloud](create-server-in-vpc.md).
**Note**  
Publicly accessible endpoints are not supported.

   1. For **FIPS Enabled**, keep the **FIPS Enabled endpoint** check box cleared.
**Note**  
FIPS-enabled endpoints are not supported for FTP servers.

   1. Choose **Next**.  
![\[The Choose an endpoint console section with VPC hosted selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-server-choose-endpoint-vpc-internal.png)

1. On the **Choose domain** page, choose the AWS storage service that you want to use to store and access your data over the selected protocol.
   + Choose **Amazon S3** to store and access your files as objects over the selected protocol.
   + Choose **Amazon EFS** to store and access your files in your Amazon EFS file system over the selected protocol.

   Choose **Next**.

1. In **Configure additional details**, do the following:

   1. For logging, specify an existing log group or create a new one (the default option).  
![\[Logging pane for Configure additional details in the Create server wizard. Choose an existing log group is selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-choose-existing-group.png)

      If you choose **Create log group**, the CloudWatch console ([https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)) opens to the **Create log group** page. For details, see [Create a log group in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#Create-Log-Group).

   1.  (Optional) For **Managed workflows**, choose workflow IDs (and a corresponding role) that Transfer Family should assume when executing the workflow. You can choose one workflow to execute upon a complete upload, and another to execute upon a partial upload. To learn more about processing your files by using managed workflows, see [AWS Transfer Family managed workflows](transfer-workflows.md).  
![\[The Managed workflows console section.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-addtoserver.png)

   1. For **Cryptographic algorithm options**, choose a security policy that contains the cryptographic algorithms enabled for use by your server.
**Note**  
Transfer Family assigns the latest security policy to your FTP server. However, because the FTP protocol doesn't use encryption, FTP servers don't use any of the security policy algorithms. Unless your server also uses the FTPS or SFTP protocol, the security policy remains unused.

   1. For **Server Host Key**, leave the field blank.

   1. (Optional) For **Tags**, for **Key** and **Value**, enter one or more tags as key-value pairs, and then choose **Add tag**.

   1. You can optimize performance for your Amazon S3 directories. For example, suppose that your home directory contains 10,000 subdirectories (that is, your Amazon S3 bucket contains 10,000 folders). In this scenario, running the `ls` (list) command takes between six and eight minutes. With optimized directories, the same operation takes only a few seconds.

      When you create your server by using the console, the optimized directories setting is enabled by default. If you create your server by using the API, this behavior is not enabled by default.  
![\[The Optimized directories console section.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/optimized-directories.png)

   1. Choose **Next**.

   1. (Optional) You can configure AWS Transfer Family servers to display customized messages, such as organizational policies or terms and conditions, to your end users. You can also display a customized Message of the Day (MOTD) to users who have successfully authenticated.

      For **Display banner**, in the **Pre-authentication display banner** text box, enter the message to display to your users before they authenticate. In the **Post-authentication display banner** text box, enter the message to display after they successfully authenticate.

   1. (Optional) You can configure the following additional options.
      + **SetStat option**: Enable this option to ignore the error that is generated when a client attempts to use `SETSTAT` on a file that you are uploading to an Amazon S3 bucket. For additional details, see the `SetStatOption` documentation in the [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html) topic.
      + **TLS session resumption**: Provides a mechanism to resume or share a negotiated secret key between the control and data connections for an FTPS session. For additional details, see the `TlsSessionResumptionMode` documentation in the [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html) topic.
      + **Passive IP**: Indicates passive mode, for the FTP and FTPS protocols. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer. For additional details, see the `PassiveIp` documentation in the [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html) topic.  
![\[The Additional configuration screen showing the SetStat, TLS session resumption, and Passive IP parameters.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-server-configure-additional-items-all.png)

1. In **Review and create**, review your choices.
   + If you want to edit any of them, choose **Edit** next to the step.
**Note**  
You must review each step after the step that you chose to edit.
   + If you have no changes, choose **Create server** to create your server. You are taken to the **Servers** page, shown following, where your new server is listed.

It can take a couple of minutes before the status for your new server changes to **Online**. At that point, your server can perform file operations for your users.

**Next steps**: Continue to [Working with custom identity providers](custom-idp-intro.md) to set up users.

# Create a server in a virtual private cloud
<a name="create-server-in-vpc"></a>

You can host your server's endpoint inside a virtual private cloud (VPC) to use for transferring data to and from an Amazon S3 bucket or Amazon EFS file system without going over the public internet.

**Note**  
After May 19, 2021, you won't be able to create a server using `EndpointType=VPC_ENDPOINT` in your AWS account if your account hasn't already done so before May 19, 2021. If you have already created servers with `EndpointType=VPC_ENDPOINT` in your AWS account on or before February 21, 2021, you will not be affected. After this date, use `EndpointType=VPC`. For more information, see [Discontinuing the use of VPC_ENDPOINT](#deprecate-vpc-endpoint).

If you use Amazon Virtual Private Cloud (Amazon VPC) to host your AWS resources, you can establish a private connection between your VPC and a server. You can then use this server to transfer data over your client to and from your Amazon S3 bucket without using public IP addressing or requiring an internet gateway.

Using Amazon VPC, you can launch AWS resources in a custom virtual network. You can use a VPC to control your network settings, such as the IP address range, subnets, route tables, and network gateways. For more information about VPCs, see [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) in the *Amazon VPC User Guide*.

The following sections describe how to create and connect your VPC to a server. As an overview, the process is as follows:

1. Set up a server using a VPC endpoint.

1. Connect to your server through the VPC endpoint, using a client that is inside your VPC. Doing this enables you to use AWS Transfer Family to transfer data that is stored in your Amazon S3 bucket over your client. You can perform this transfer even though the network is disconnected from the public internet.

1. In addition, if you choose to make your server's endpoint internet-facing, you can associate Elastic IP addresses with your endpoint. Doing this lets clients outside of your VPC connect to your server. You can use VPC security groups to limit access to authenticated users whose requests originate only from allowed addresses.

**Note**  
AWS Transfer Family supports dual-stack endpoints, allowing your server to communicate over both IPv4 and IPv6. To enable dual-stack support, select the **Enable DNS dual-stack endpoint** option when creating your VPC endpoint. Note that both your VPC and subnets must be configured to support IPv6 before you can use this feature. Dual-stack support is particularly useful when you have clients that need to connect using either protocol.  
For information about dual-stack (IPv4 and IPv6) server endpoints, see [IPv6 support for Transfer Family servers](ipv6-support.md).

**Topics**
+ [Create a server endpoint that can be accessed only within your VPC](#create-server-endpoint-in-vpc)
+ [Create an internet-facing endpoint for your server](#create-internet-facing-endpoint)
+ [Change the endpoint type for your server](#change-server-endpoint-type)
+ [Discontinuing the use of VPC_ENDPOINT](#deprecate-vpc-endpoint)
+ [Limiting VPC endpoint access for Transfer Family servers](#limit-vpc-endpoint-access)
+ [Additional networking features](#additional-networking-features)
+ [Updating the AWS Transfer Family server endpoint type from VPC_ENDPOINT to VPC](update-endpoint-type-vpc.md)

## Create a server endpoint that can be accessed only within your VPC
<a name="create-server-endpoint-in-vpc"></a>

In the following procedure, you create a server endpoint that is accessible only to resources within your VPC.

**To create a server endpoint inside a VPC**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. From the navigation pane, select **Servers**, then choose **Create server**.

1. In **Choose protocols**, select one or more protocols, and then choose **Next**. For more information about protocols, see [Step 2: Create an SFTP-enabled server](getting-started.md#getting-started-server).

1. In **Choose an identity provider**, choose **Service managed** to store user identities and keys in AWS Transfer Family, and then choose **Next**.

   This procedure uses the service-managed option. If you choose **Custom**, you provide an Amazon API Gateway endpoint and an AWS Identity and Access Management (IAM) role to access the endpoint. By doing so, you can integrate your directory service to authenticate and authorize your users. To learn more about working with custom identity providers, see [Working with custom identity providers](custom-idp-intro.md).

1. In **Choose an endpoint**, do the following:

   1. For **Endpoint type**, choose the **VPC hosted** endpoint type to host your server's endpoint.

   1. For **Access**, choose **Internal** to make your endpoint only accessible to clients using the endpoint's private IP addresses.

      For details on the **Internet Facing** option, see [Create an internet-facing endpoint for your server](#create-internet-facing-endpoint). A server that is created in a VPC for internal access only doesn't support custom hostnames.

   1. For **VPC**, choose an existing VPC ID or choose **Create a VPC** to create a new VPC.

   1. In the **Availability Zones** section, choose up to three Availability Zones and associated subnets.

   1. In the **Security Groups** section, choose an existing security group ID or IDs, or choose **Create a security group** to create a new security group. For more information about security groups, see [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. To create a security group, see [Creating a security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#CreatingSecurityGroups) in the *Amazon Virtual Private Cloud User Guide*.
**Note**  
Your VPC automatically comes with a default security group. If you don't specify a different security group or groups when you launch the server, we associate the default security group with your server.
      + For the inbound rules for the security group, you can configure SSH traffic to use port 22, 2222, 22000, or any combination. Port 22 is configured by default. To use port 2222 or port 22000, you add an inbound rule to your security group. For the type, choose **Custom TCP**, then enter either **2222** or **22000** for **Port range**, and for the source, enter the same CIDR range that you have for your SSH port 22 rule.
      + For the inbound rules for the security group, configure FTP and FTPS traffic to use **Port range** **21** for the control channel and **8192-8200** for the data channel.
**Note**  
You can also use port 2223 for clients that require TCP "piggy-back" ACKs, that is, the ability for the final ACK of the TCP three-way handshake to also contain data.  
Some client software may be incompatible with port 2223, for example, a client that requires the server to send the SFTP identification string before the client does.  
![\[The inbound rules for a sample security group, showing a rule for SSH on port 22 and Custom TCP on port 2222.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/alternate-port-rule.png)
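The inbound rules above can be expressed programmatically. The following sketch builds rule payloads in the `IpPermissions` shape that Amazon EC2's `authorize_security_group_ingress` call accepts; the CIDR range is a placeholder that you would narrow to your clients' addresses.

```python
ALLOWED_CIDR = "198.51.100.0/24"  # placeholder; restrict to your clients

def tcp_rule(from_port: int, to_port: int, cidr: str) -> dict:
    """Build one inbound TCP rule in the EC2 IpPermissions shape."""
    return {
        "IpProtocol": "tcp",
        "FromPort": from_port,
        "ToPort": to_port,
        "IpRanges": [{"CidrIp": cidr}],
    }

ip_permissions = [
    tcp_rule(22, 22, ALLOWED_CIDR),      # SSH/SFTP, default port
    tcp_rule(2222, 2222, ALLOWED_CIDR),  # SFTP, alternative port
    tcp_rule(21, 21, ALLOWED_CIDR),      # FTP/FTPS control channel
    tcp_rule(8192, 8200, ALLOWED_CIDR),  # FTP/FTPS data channel
]
```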

   1. (Optional) For **FIPS Enabled**, select the **FIPS Enabled endpoint** check box to ensure the endpoint complies with Federal Information Processing Standards (FIPS).
**Note**  
FIPS-enabled endpoints are only available in North American AWS Regions. For available Regions, see [AWS Transfer Family endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/transfer-service.html) in the *AWS General Reference*. For more information about FIPS, see [Federal Information Processing Standard (FIPS) 140-2](https://aws.amazon.com/compliance/fips/).

   1. Choose **Next**.

1. In **Configure additional details**, do the following:

   1. For **CloudWatch logging**, choose one of the following to enable Amazon CloudWatch logging of your user activity:
      + **Create a new role** to allow Transfer Family to automatically create the IAM role, as long as you have the right permissions to create a new role. The IAM role that is created is called `AWSTransferLoggingAccess`.
      + **Choose an existing role** to choose an existing IAM role from your account. Under **Logging role**, choose the role. This IAM role should include a trust policy with **Service** set to `transfer.amazonaws.com`.

        For more information about CloudWatch logging, see [Configure CloudWatch logging role](configure-cw-logging-role.md).
**Note**  
You can't view end-user activity in CloudWatch if you don't specify a logging role.
If you don't want to set up a CloudWatch logging role, select **Choose an existing role**, but don't select a logging role.
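An existing logging role's trust policy should look like the following sketch. Only the `Service` value comes from this step; the rest is the standard IAM trust-policy shape.

```python
import json

# Sketch: the trust relationship that lets Transfer Family assume an
# existing logging role. Attach this as the role's trust policy.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "transfer.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```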

   1. For **Cryptographic algorithm options**, choose a security policy that contains the cryptographic algorithms enabled for use by your server.
**Note**  
By default, the `TransferSecurityPolicy-2024-01` security policy is attached to your server unless you choose a different one.

      For more information about security policies, see [Security policies for AWS Transfer Family servers](security-policies.md).

   1. (Optional: this section is only for migrating users from an existing SFTP-enabled server.) For **Server Host Key**, enter an RSA, ED25519, or ECDSA private key that will be used to identify your server when clients connect to it over SFTP.

   1. (Optional) For **Tags**, for **Key** and **Value**, enter one or more tags as key-value pairs, and then choose **Add tag**.

   1. Choose **Next**.

1. In **Review and create**, review your choices. If you:
   + Want to edit any of them, choose **Edit** next to the step.
**Note**  
You will need to review each step after the step that you chose to edit.
   + Have no changes, choose **Create server** to create your server. You are taken to the **Servers** page, shown following, where your new server is listed.

It can take a couple of minutes before the status for your new server changes to **Online**. At that point, your server can perform file operations, but you'll need to create a user first. For details on creating users, see [Managing users for server endpoints](create-user.md).

## Create an internet-facing endpoint for your server
<a name="create-internet-facing-endpoint"></a>

In the following procedure, you create a server endpoint. This endpoint is accessible over the internet only to clients whose source IP addresses are allowed in your VPC's default security group. Additionally, by using Elastic IP addresses to make your endpoint internet-facing, your clients can use the Elastic IP address to allow access to your endpoint in their firewalls.

**Note**  
Only SFTP and FTPS can be used on an internet-facing VPC hosted endpoint.

**To create an internet-facing endpoint**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. From the navigation pane, select **Servers**, then choose **Create server**.

1. In **Choose protocols**, select one or more protocols, and then choose **Next**. For more information about protocols, see [Step 2: Create an SFTP-enabled server](getting-started.md#getting-started-server).

1. In **Choose an identity provider**, choose **Service managed** to store user identities and keys in AWS Transfer Family, and then choose **Next**.

   This procedure uses the service-managed option. If you choose **Custom**, you provide an Amazon API Gateway endpoint and an AWS Identity and Access Management (IAM) role to access the endpoint. By doing so, you can integrate your directory service to authenticate and authorize your users. To learn more about working with custom identity providers, see [Working with custom identity providers](custom-idp-intro.md).

1. In **Choose an endpoint**, do the following:

   1. For **Endpoint type**, choose the **VPC hosted** endpoint type to host your server's endpoint.

   1. For **Access**, choose **Internet Facing** to make your endpoint accessible to clients over the internet.
**Note**  
When you choose **Internet Facing**, you can choose an existing Elastic IP address in each subnet or subnets. Or you can go to the VPC console ([https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/)) to allocate one or more new Elastic IP addresses. These addresses can be owned either by AWS or by you. You can't associate Elastic IP addresses that are already in use with your endpoint.

   1. (Optional) For **Custom hostname**, choose one of the following:
**Note**  
Customers in AWS GovCloud (US) need to connect via the Elastic IP address directly, or create a hostname record within Commercial Route 53 that points to their EIP. For more information about using Route 53 for GovCloud endpoints, see [Setting up Amazon Route 53 with your AWS GovCloud (US) resources](https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/setting-up-route53.html) in the *AWS GovCloud (US) User Guide*.
      + **Amazon Route 53 DNS alias** – if the hostname that you want to use is registered with Route 53. You can then enter the hostname.
      + **Other DNS** – if the hostname that you want to use is registered with another DNS provider. You can then enter the hostname.
      + **None** – to use the server's endpoint and not use a custom hostname. The server hostname takes the form `server-id.server.transfer.region.amazonaws.com`.
**Note**  
For customers in AWS GovCloud (US), selecting **None** does not create a hostname in this format.

      To learn more about working with custom hostnames, see [Working with custom hostnames](requirements-dns.md).

   1. For **VPC**, choose an existing VPC ID or choose **Create a VPC** to create a new VPC.

   1. In the **Availability Zones** section, choose up to three Availability Zones and associated subnets. For **IPv4 Addresses**, choose an **Elastic IP address** for each subnet. This is the IP address that your clients can use to allow access to your endpoint in their firewalls.

      **Tip:** You must use a public subnet for your Availability Zones, or first set up an internet gateway if you want to use a private subnet.

   1. In the **Security Groups** section, choose an existing security group ID or IDs, or choose **Create a security group** to create a new security group. For more information about security groups, see [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. To create a security group, see [Creating a security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#CreatingSecurityGroups) in the *Amazon Virtual Private Cloud User Guide*.
**Note**  
Your VPC automatically comes with a default security group. If you don't specify a different security group or groups when you launch the server, we associate the default security group with your server.
      + For the inbound rules for the security group, you can configure SSH traffic to use port 22, 2222, 22000, or any combination. Port 22 is configured by default. To use port 2222 or port 22000, you add an inbound rule to your security group. For the type, choose **Custom TCP**, then enter either **2222** or **22000** for **Port range**, and for the source, enter the same CIDR range that you have for your SSH port 22 rule.
      + For the inbound rules for the security group, configure FTPS traffic to use **Port range** **21** for the control channel and **8192-8200** for the data channel.
**Note**  
You can also use port 2223 for clients that require TCP "piggy-back" ACKs, that is, the ability for the final ACK of the TCP three-way handshake to also contain data.  
Some client software may be incompatible with port 2223, for example, a client that requires the server to send the SFTP identification string before the client does.  
![\[The inbound rules for a sample security group, showing a rule for SSH on port 22 and Custom TCP on port 2222.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/alternate-port-rule.png)

   1. (Optional) For **FIPS Enabled**, select the **FIPS Enabled endpoint** check box to ensure the endpoint complies with Federal Information Processing Standards (FIPS).
**Note**  
FIPS-enabled endpoints are only available in North American AWS Regions. For available Regions, see [AWS Transfer Family endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/transfer-service.html) in the *AWS General Reference*. For more information about FIPS, see [Federal Information Processing Standard (FIPS) 140-2](https://aws.amazon.com/compliance/fips/).

   1. Choose **Next**.

1. In **Configure additional details**, do the following:

   1. For **CloudWatch logging**, choose one of the following to enable Amazon CloudWatch logging of your user activity:
      + **Create a new role** to allow Transfer Family to automatically create the IAM role, as long as you have the right permissions to create a new role. The IAM role that is created is called `AWSTransferLoggingAccess`.
      + **Choose an existing role** to choose an existing IAM role from your account. Under **Logging role**, choose the role. This IAM role should include a trust policy with **Service** set to `transfer.amazonaws.com`.

        For more information about CloudWatch logging, see [Configure CloudWatch logging role](configure-cw-logging-role.md).
**Note**  
You can't view end-user activity in CloudWatch if you don't specify a logging role.
If you don't want to set up a CloudWatch logging role, select **Choose an existing role**, but don't select a logging role.

   1. For **Cryptographic algorithm options**, choose a security policy that contains the cryptographic algorithms enabled for use by your server.
**Note**  
By default, the `TransferSecurityPolicy-2024-01` security policy is attached to your server unless you choose a different one.

      For more information about security policies, see [Security policies for AWS Transfer Family servers](security-policies.md).

   1. (Optional: this section is only for migrating users from an existing SFTP-enabled server.) For **Server Host Key**, enter an RSA, ED25519, or ECDSA private key that will be used to identify your server when clients connect to it over SFTP.

   1. (Optional) For **Tags**, for **Key** and **Value**, enter one or more tags as key-value pairs, and then choose **Add tag**.

   1. Choose **Next**.

   1.  (Optional) For **Managed workflows**, choose workflow IDs (and a corresponding role) that Transfer Family should assume when executing the workflow. You can choose one workflow to execute upon a complete upload, and another to execute upon a partial upload. To learn more about processing your files by using managed workflows, see [AWS Transfer Family managed workflows](transfer-workflows.md).  
![\[The Managed workflows console section.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-addtoserver.png)

1. In **Review and create**, review your choices. If you:
   + Want to edit any of them, choose **Edit** next to the step.
**Note**  
You will need to review each step after the step that you chose to edit.
   + Have no changes, choose **Create server** to create your server. You are taken to the **Servers** page, shown following, where your new server is listed.

You can choose the server ID to see the detailed settings of the server that you just created. After the column **Public IPv4 address** has been populated, the Elastic IP addresses that you provided are successfully associated with your server's endpoint.

**Note**  
When your server in a VPC is online, only the subnets can be modified and only through the [UpdateServer](https://docs.aws.amazon.com/transfer/latest/APIReference/API_UpdateServer.html) API. You must [stop the server](edit-server-config.md#edit-online-offline) to add or change the server endpoint's Elastic IP addresses.
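The subnet change described above can also be made programmatically. The following is a minimal sketch of the request shape for the `UpdateServer` operation (called as `update_server` in boto3) that swaps an endpoint's subnets; the server ID and subnet IDs are hypothetical placeholders, and the server must already be stopped:

```python
# Sketch: build the UpdateServer request that modifies the subnets of a
# VPC-hosted server. IDs below are placeholders, not real resources.
def build_update_subnets_request(server_id, subnet_ids):
    """Return kwargs for transfer.update_server() that swap the endpoint's
    subnets. The server must be stopped (OFFLINE) before calling it."""
    return {
        "ServerId": server_id,
        "EndpointDetails": {"SubnetIds": subnet_ids},
    }

request = build_update_subnets_request(
    "s-1234567890abcdef0", ["subnet-aaaa1111", "subnet-bbbb2222"]
)
# With boto3 you would then call:
#   boto3.client("transfer").update_server(**request)
print(request)
```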

## Change the endpoint type for your server
<a name="change-server-endpoint-type"></a>

If you have an existing server that is accessible over the internet (that is, has a public endpoint type), you can change its endpoint to a VPC endpoint.

**Note**  
If you have an existing server in a VPC displayed as `VPC_ENDPOINT`, we recommend that you modify it to the new VPC endpoint type. With this new endpoint type, you no longer need to use a Network Load Balancer (NLB) to associate Elastic IP addresses with your server's endpoint. Also, you can use VPC security groups to restrict access to your server's endpoint. However, you can continue to use the `VPC_ENDPOINT` endpoint type as needed.

The following procedure assumes that you have a server that uses either the current public endpoint type or the older `VPC_ENDPOINT` type.

**To change the endpoint type for your server**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the navigation pane, choose **Servers**.

1. Select the check box of the server that you want to change the endpoint type for.
**Important**  
You must stop the server before you can change its endpoint.

1. For **Actions**, choose **Stop**.

1. In the confirmation dialog box that appears, choose **Stop** to confirm that you want to stop the server.
**Note**  
Before proceeding to the next step, in **Endpoint details**, wait for the **Status** of the server to change to **Offline**; this can take a couple of minutes. You might have to choose **Refresh** on the **Servers** page to see the status change.  
You won't be able to make any edits until the server is **Offline**.

1. In **Endpoint details**, choose **Edit**.

1. In **Edit endpoint configuration**, do the following:

   1. For **Edit endpoint type**, choose **VPC hosted**.

   1. For **Access**, choose one of the following:
      + **Internal** to make your endpoint only accessible to clients using the endpoint's private IP addresses.
      + **Internet Facing** to make your endpoint accessible to clients over the public internet.
**Note**  
When you choose **Internet Facing**, you can choose an existing Elastic IP address for each subnet, or go to the VPC console ([https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/)) to allocate one or more new Elastic IP addresses. These addresses can be owned either by AWS or by you. You can't associate Elastic IP addresses that are already in use with your endpoint.

   1. (Optional for internet facing access only) For **Custom hostname**, choose one of the following:
      + **Amazon Route 53 DNS alias** – if the hostname that you want to use is registered with Route 53. You can then enter the hostname.
      + **Other DNS** – if the hostname that you want to use is registered with another DNS provider. You can then enter the hostname.
      + **None** – to use the server's endpoint and not use a custom hostname. The server hostname takes the form `serverId.server.transfer.regionId.amazonaws.com`.

        To learn more about working with custom hostnames, see [Working with custom hostnames](requirements-dns.md).

   1. For **VPC**, choose an existing VPC ID, or choose **Create a VPC** to create a new VPC.

   1. In the **Availability Zones** section, select up to three Availability Zones and associated subnets. If **Internet Facing** is chosen, also choose an Elastic IP address for each subnet.
**Note**  
If you want the maximum of three Availability Zones, but there are not enough available, create them in the VPC console ([https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/)).  
If you modify the subnets or Elastic IP addresses, the server takes a few minutes to update. You can't save your changes until the server update is complete.

   1. Choose **Save**.

1. For **Actions**, choose **Start** and wait for the status of the server to change to **Online**; this can take a couple of minutes.
**Note**  
If you changed a public endpoint type to a VPC endpoint type, notice that **Endpoint type** for your server has changed to **VPC**.

The default security group is attached to the endpoint. To change or add security groups, see [Creating Security Groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#CreatingSecurityGroups).

## Discontinuing the use of VPC_ENDPOINT
<a name="deprecate-vpc-endpoint"></a>

AWS Transfer Family has discontinued the ability to create servers with `EndpointType=VPC_ENDPOINT` for new AWS accounts. As of May 19, 2021, AWS accounts that don't own AWS Transfer Family servers with an endpoint type of `VPC_ENDPOINT` can't create new servers with `EndpointType=VPC_ENDPOINT`. If you already own servers that use the `VPC_ENDPOINT` endpoint type, we recommend that you start using `EndpointType=VPC` as soon as possible. For details, see [Update your AWS Transfer Family server endpoint type from VPC_ENDPOINT to VPC](https://aws.amazon.com/blogs/storage/update-your-aws-transfer-family-server-endpoint-type-from-vpc_endpoint-to-vpc/).

We launched the new `VPC` endpoint type earlier in 2020. For more information, see [AWS Transfer Family for SFTP supports VPC Security Groups and Elastic IP addresses](https://aws.amazon.com/about-aws/whats-new/2020/01/aws-transfer-for-sftp-supports-vpc-security-groups-and-elastic-ip-addresses/). This new endpoint type is more feature-rich and cost-effective, and there are no AWS PrivateLink charges. For more information, see [AWS PrivateLink pricing](https://aws.amazon.com/privatelink/pricing/).

This endpoint type is functionally equivalent to the previous endpoint type (`VPC_ENDPOINT`). You can attach Elastic IP addresses directly to the endpoint to make it internet facing and use security groups for source IP filtering. For more information, see the [Use IP allow listing to secure your AWS Transfer Family for SFTP servers](https://aws.amazon.com/blogs/storage/use-ip-whitelisting-to-secure-your-aws-transfer-for-sftp-servers/) blog post.

You can also host this endpoint in a shared VPC environment. For more information, see [AWS Transfer Family now supports shared services VPC environments](https://aws.amazon.com/about-aws/whats-new/2020/11/aws-transfer-family-now-supports-shared-services-vpc-environments/). 

In addition to SFTP, you can use the VPC `EndpointType` to enable FTPS and FTP. We don't plan to add these features and FTPS/FTP support to `EndpointType=VPC_ENDPOINT`. We have also removed this endpoint type as an option from the AWS Transfer Family console. 

You can change the endpoint type for your server using the Transfer Family console, AWS CLI, API, SDKs, or CloudFormation. To change your server's endpoint type, see [Updating the AWS Transfer Family server endpoint type from VPC_ENDPOINT to VPC](update-endpoint-type-vpc.md).

If you have any questions, contact AWS Support or your AWS account team.


## Limiting VPC endpoint access for Transfer Family servers
<a name="limit-vpc-endpoint-access"></a>

When creating an AWS Transfer Family server with VPC endpoint type, your IAM users and principals need permissions to create and delete VPC endpoints. However, your organization's security policies may restrict these permissions. You can use IAM policies to allow VPC endpoint creation and deletion specifically for Transfer Family while maintaining restrictions for other services.

**Important**  
The following IAM policy allows users to create and delete VPC endpoints only for Transfer Family servers while denying these operations for other services:

```
{
    "Effect": "Deny",
    "Action": [
        "ec2:CreateVpcEndpoint",
        "ec2:DeleteVpcEndpoints"
    ],
    "Resource": ["*"],
    "Condition": {
        "ForAnyValue:StringNotLike": {
            "ec2:VpceServiceName": [
                "com.amazonaws.INPUT-YOUR-REGION.transfer.server.*"
            ]
        },
        "StringNotLike": {
            "aws:PrincipalArn": [
                "arn:aws:iam::*:role/INPUT-YOUR-ROLE"
            ]
        }
    }
}
```

Replace *INPUT-YOUR-REGION* with your AWS Region (for example, **us-east-1**) and *INPUT-YOUR-ROLE* with the IAM role you want to grant these permissions to.
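As a quick sanity check, you can substitute your values into the statement and confirm the result is still well-formed JSON before attaching the policy. The following sketch uses a simplified copy of the statement above with `REGION` and `ROLE` placeholders; the Region and role name passed in are examples, not real resources:

```python
import json

# Simplified copy of the deny statement above, with REGION and ROLE
# placeholders to be filled in before use.
POLICY_TEMPLATE = """
{
    "Effect": "Deny",
    "Action": ["ec2:CreateVpcEndpoint", "ec2:DeleteVpcEndpoints"],
    "Resource": ["*"],
    "Condition": {
        "ForAnyValue:StringNotLike": {
            "ec2:VpceServiceName": ["com.amazonaws.REGION.transfer.server.*"]
        },
        "StringNotLike": {
            "aws:PrincipalArn": ["arn:aws:iam::*:role/ROLE"]
        }
    }
}
"""

def render_statement(region, role):
    # json.loads raises ValueError if the substitution broke the JSON.
    rendered = POLICY_TEMPLATE.replace("REGION", region).replace("ROLE", role)
    return json.loads(rendered)

statement = render_statement("us-east-1", "TransferEndpointAdmin")
print(statement["Condition"]["ForAnyValue:StringNotLike"]["ec2:VpceServiceName"])
```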

## Additional networking features
<a name="additional-networking-features"></a>

AWS Transfer Family provides several advanced networking features that enhance security and flexibility when using VPC configurations:
+ **Shared VPC environment support** - You can host your Transfer Family server endpoint in a shared VPC environment. For more information, see [Using VPC hosted endpoints in shared VPCs with AWS Transfer Family](https://aws.amazon.com/blogs/storage/using-vpc-hosted-endpoints-in-shared-vpcs-with-aws-transfer-family/).
+ **Authentication and security** - You can use an AWS Web Application Firewall to protect your Amazon API Gateway endpoint. For more information, see [Securing AWS Transfer Family with AWS Web Application Firewall and Amazon API Gateway](https://aws.amazon.com/blogs/storage/securing-aws-transfer-family-with-aws-web-application-firewall-and-amazon-api-gateway/).

# Updating the AWS Transfer Family server endpoint type from VPC_ENDPOINT to VPC
<a name="update-endpoint-type-vpc"></a>

You can use the AWS Management Console, CloudFormation, or the Transfer Family API to update a server's `EndpointType` from `VPC_ENDPOINT` to `VPC`. Detailed procedures and examples for using each of these methods are provided in the following sections. If you have servers in multiple AWS Regions and in multiple AWS accounts, you can use the example script provided in the following section, with modifications, to identify servers using the `VPC_ENDPOINT` type that you need to update.

**Topics**
+ [Identifying servers using the `VPC_ENDPOINT` endpoint type](#id-servers)
+ [Updating the server endpoint type using the AWS Management Console](#update-endpoint-console)
+ [Updating the server endpoint type using CloudFormation](#update-endpoint-cloudformation)
+ [Updating the server EndpointType using the API](#update-endpoint-cli)

## Identifying servers using the `VPC_ENDPOINT` endpoint type
<a name="id-servers"></a>

You can identify which servers are using the `VPC_ENDPOINT` endpoint type by using the AWS Management Console.

**To identify servers using the `VPC_ENDPOINT` endpoint type using the console**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. Choose **Servers** in the navigation pane to display the list of servers in your account in that region.

1. Sort the list of servers by the **Endpoint type** to see all servers using `VPC_ENDPOINT`.

**To identify servers using `VPC_ENDPOINT` across multiple AWS Regions and accounts**

If you have servers in multiple AWS Regions and in multiple AWS accounts, you can use the following example script, with modifications, to identify servers using the `VPC_ENDPOINT` endpoint type. The example script uses the Amazon EC2 [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) and the Transfer Family [ListServers](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ListServers.html) API operations. If you have many AWS accounts, you can loop through your accounts using an IAM role with read-only auditor access, authenticating to each account through session profiles with your identity provider.

1. Following is a simple example.

   ```
   import boto3
   
   profile = input("Enter the name of the AWS account you'll be working in: ")
   session = boto3.Session(profile_name=profile)
   
   ec2 = session.client("ec2")
   
   regions = ec2.describe_regions()
   
   for region in regions['Regions']:
       region_name = region['RegionName']
       if region_name=='ap-northeast-3': #https://github.com/boto/boto3/issues/1943
           continue
       transfer = session.client("transfer", region_name=region_name)
       servers = transfer.list_servers()
       for server in servers['Servers']:
          if server['EndpointType']=='VPC_ENDPOINT':
              print(server['ServerId'], region_name)
   ```

1. After you have the list of the servers to update, you can use one of the methods described in the following sections to update the `EndpointType` to `VPC`.

## Updating the server endpoint type using the AWS Management Console
<a name="update-endpoint-console"></a>

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the navigation pane, choose **Servers**.

1. Select the check box of the server that you want to change the endpoint type for.
**Important**  
You must stop the server before you can change its endpoint.

1. For **Actions**, choose **Stop**.

1. In the confirmation dialog box that appears, choose **Stop** to confirm that you want to stop the server.
**Note**  
Before proceeding to the next step, wait for the **Status** of the server to change to **Offline**; this can take a couple of minutes. You might have to choose **Refresh** on the **Servers** page to see the status change.

1. After the status changes to **Offline**, choose the server to display the server details page.

1. In the **Endpoint details** section, choose **Edit**.

1. Choose **VPC hosted** for the **Endpoint type**.

1. Choose **Save**.

1. For **Actions**, choose **Start** and wait for the status of the server to change to **Online**; this can take a couple of minutes.

## Updating the server endpoint type using CloudFormation
<a name="update-endpoint-cloudformation"></a>

This section describes how to use CloudFormation to update a server's `EndpointType` to `VPC`. Use this procedure for Transfer Family servers that you have deployed using CloudFormation. In this example, the original CloudFormation template used to deploy the Transfer Family server is shown as follows:

```
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Create AWS Transfer Server with VPC_ENDPOINT endpoint type'
Parameters:
  SecurityGroupId:
    Type: AWS::EC2::SecurityGroup::Id
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  TransferServer:
    Type: AWS::Transfer::Server
    Properties:
      Domain: S3
      EndpointDetails:
        VpcEndpointId: !Ref VPCEndpoint
      EndpointType: VPC_ENDPOINT
      IdentityProviderType: SERVICE_MANAGED
      Protocols:
        - SFTP
  VPCEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      ServiceName: com.amazonaws.us-east-1.transfer.server
      SecurityGroupIds:
        - !Ref SecurityGroupId
      SubnetIds:
        - !Select [0, !Ref SubnetIds]
        - !Select [1, !Ref SubnetIds]
        - !Select [2, !Ref SubnetIds]
      VpcEndpointType: Interface
      VpcId: !Ref VpcId
```

The template is updated with the following changes:
+ The `EndpointType` was changed to `VPC`.
+ The `AWS::EC2::VPCEndpoint` resource is removed.
+ The `SecurityGroupId`, `SubnetIds`, and `VpcId` parameters were moved to the `EndpointDetails` section of the `AWS::Transfer::Server` resource.
+ The `VpcEndpointId` property of `EndpointDetails` was removed.

The updated template looks as follows:

```
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Create AWS Transfer Server with VPC endpoint type'
Parameters:
  SecurityGroupId:
    Type: AWS::EC2::SecurityGroup::Id
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  TransferServer:
    Type: AWS::Transfer::Server
    Properties:
      Domain: S3
      EndpointDetails:
        SecurityGroupIds:
          - !Ref SecurityGroupId
        SubnetIds:
          - !Select [0, !Ref SubnetIds]
          - !Select [1, !Ref SubnetIds]
          - !Select [2, !Ref SubnetIds]
        VpcId: !Ref VpcId
      EndpointType: VPC
      IdentityProviderType: SERVICE_MANAGED
      Protocols:
        - SFTP
```

**To update the endpoint type of Transfer Family servers deployed using CloudFormation**

1. Stop the server that you want to update using the following steps.

   1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

   1. In the navigation pane, choose **Servers**.

   1. Select the check box of the server that you want to change the endpoint type for.
**Important**  
You must stop the server before you can change its endpoint.

   1. For **Actions**, choose **Stop**.

   1. In the confirmation dialog box that appears, choose **Stop** to confirm that you want to stop the server.
**Note**  
Before proceeding to the next step, wait for the **Status** of the server to change to **Offline**; this can take a couple of minutes. You might have to choose **Refresh** on the **Servers** page to see the status change.

1. Update the CloudFormation stack.

   1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

   1. Choose the stack used to create the Transfer Family server.

   1. Choose **Update**.

   1. Choose **Replace current template**.

   1. Upload the new template. CloudFormation change sets help you understand how template changes will affect running resources before you implement them. In this example, the Transfer Family server resource is modified and the `VPCEndpoint` resource is removed. A server with the `VPC` endpoint type creates a VPC endpoint on your behalf, replacing the original `VPCEndpoint` resource.

      After uploading the new template, the change set will look similar to the following:  
![\[Shows Change set preview page for replacing current CloudFormation template.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/vpc-endpoint-update-cfn.png)

   1. Update the stack.

1. Once the stack update is complete, navigate to the Transfer Family management console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. Restart the server. Choose the server you updated in CloudFormation, and then choose **Start** from the **Actions** menu.

## Updating the server EndpointType using the API
<a name="update-endpoint-cli"></a>

You can use the [update-server](https://docs.aws.amazon.com/cli/latest/reference/transfer/update-server.html) AWS CLI command, or the [UpdateServer](https://docs.aws.amazon.com/transfer/latest/APIReference/API_UpdateServer.html) API operation. The following example script stops the Transfer Family server, updates the `EndpointType`, deletes the old VPC endpoint, and starts the server.

```
import boto3
import time

profile = input("Enter the name of the AWS account you'll be working in: ")
region_name = input("Enter the AWS Region you're working in: ")
server_id = input("Enter the AWS Transfer Server Id: ")

session = boto3.Session(profile_name=profile)

ec2 = session.client("ec2", region_name=region_name)
transfer = session.client("transfer", region_name=region_name)

group_ids=[]

transfer_description = transfer.describe_server(ServerId=server_id)
if transfer_description['Server']['EndpointType']=='VPC_ENDPOINT':
    transfer_vpc_endpoint = transfer_description['Server']['EndpointDetails']['VpcEndpointId']
    transfer_vpc_endpoint_descriptions = ec2.describe_vpc_endpoints(VpcEndpointIds=[transfer_vpc_endpoint])
    for transfer_vpc_endpoint_description in transfer_vpc_endpoint_descriptions['VpcEndpoints']:
        subnet_ids=transfer_vpc_endpoint_description['SubnetIds']
        group_id_list=transfer_vpc_endpoint_description['Groups']
        vpc_id=transfer_vpc_endpoint_description['VpcId']
        for group_id in group_id_list:
            group_ids.append(group_id['GroupId'])
    if transfer_description['Server']['State']=='ONLINE':
        transfer_stop = transfer.stop_server(ServerId=server_id)
        print(transfer_stop)
        time.sleep(300)  # wait for the server to fully stop before updating
        transfer_update = transfer.update_server(ServerId=server_id,EndpointType='VPC',EndpointDetails={'SecurityGroupIds':group_ids,'SubnetIds':subnet_ids,'VpcId':vpc_id})
        print(transfer_update)
        time.sleep(10) 
        transfer_start = transfer.start_server(ServerId=server_id)
        print(transfer_start)
        delete_vpc_endpoint = ec2.delete_vpc_endpoints(VpcEndpointIds=[transfer_vpc_endpoint])
```

# Working with custom hostnames
<a name="requirements-dns"></a>

Your *server host name* is the hostname that your users enter in their clients when they connect to your server. You can use a custom domain that you have registered for your server hostname when you work with AWS Transfer Family. For example, you might use a custom hostname like `mysftpserver.mysubdomain.domain.com`.

To redirect traffic from your registered custom domain to your server endpoint, you can use Amazon Route 53 or any Domain Name System (DNS) provider. Route 53 is the DNS service that AWS Transfer Family natively supports.

**Topics**
+ [Use Amazon Route 53 as your DNS provider](#requirements-use-r53)
+ [Use other DNS providers](#requirements-use-alt-dns)
+ [Custom hostnames for non-console created servers](#tag-custom-hostname-cdk)

On the console, you can choose one of these options for setting up a custom hostname:
+ **Amazon Route 53 DNS alias** – if the hostname that you want to use is registered with Route 53. You can then enter the hostname.
+ **Other DNS** – if the hostname that you want to use is registered with another DNS provider. You can then enter the hostname.
+ **None** – to use the server's endpoint and not use a custom hostname.

You set this option when you create a new server or edit the configuration of an existing server. For more information about creating a new server, see [Step 2: Create an SFTP-enabled server](getting-started.md#getting-started-server). For more information about editing the configuration of an existing server, see [Edit server details](edit-server-config.md).

For more details about using your own domain for the server hostname and how AWS Transfer Family uses Route 53, see the following sections.

## Use Amazon Route 53 as your DNS provider
<a name="requirements-use-r53"></a>

When you create a server, you can use Amazon Route 53 as your DNS provider. Before you use a domain with Route 53, you register the domain. For more information, see [How Domain registration works](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/welcome-domain-registration.html) in the *Amazon Route 53 Developer Guide*.

When you use Route 53 to provide DNS routing to your server, AWS Transfer Family uses the custom hostname that you entered to extract its hosted zone. When AWS Transfer Family extracts a hosted zone, three things can happen:

1. If you're new to Route 53 and don't have a hosted zone, AWS Transfer Family adds a new hosted zone and a `CNAME` record. The value of this `CNAME` record is the endpoint hostname for your server. A *CNAME* is an alternate domain name.

1. If you have a hosted zone in Route 53 without any `CNAME` records, AWS Transfer Family adds a `CNAME` record to the hosted zone.

1. If the service detects that a `CNAME` record already exists in the hosted zone, you see an error indicating that a `CNAME` record already exists. In this case, change the value of the `CNAME` record to the hostname of your server. 

For more information about hosted zones in Route 53, see [Hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingHostedZone.html) in the *Amazon Route 53 Developer Guide*.

## Use other DNS providers
<a name="requirements-use-alt-dns"></a>

When you create a server, you can also use DNS providers other than Amazon Route 53. If you use an alternate DNS provider, make sure that traffic from your domain is directed to your server endpoint.

To do so, set your domain to the endpoint hostname for the server.
+ For IPv4 endpoints, the hostname looks like this in the console:

   `serverid.server.transfer.region.amazonaws.com` 
+ For dual-stack endpoints, the hostname looks like this in the console:

   `serverid.transfer-server.region.on.aws` 

**Note**  
If your server has a VPC endpoint, then the format for the hostname is different from those described above. To find your VPC endpoint, select the VPC on the server's details page, then select the **VPC endpoint ID** on the VPC dashboard. The endpoint is the first DNS name of those listed.

## Custom hostnames for non-console created servers
<a name="tag-custom-hostname-cdk"></a>

When you create a server using AWS Cloud Development Kit (AWS CDK), CloudFormation, or through the CLI, you must add a tag if you want that server to have a custom hostname. When you create a Transfer Family server by using the console, the tagging is done automatically.

**Note**  
You also need to create a DNS record to redirect traffic from your domain to your server endpoint. For details, see [Working with records](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/rrsets-working-with.html) in the *Amazon Route 53 Developer Guide*.

Use the following keys for your custom hostname:
+ Add `transfer:customHostname` to display the custom hostname in the console.
+ If you are using Route 53 as your DNS provider, add `transfer:route53HostedZoneId`. This tag links the custom hostname to your Route 53 Hosted Zone ID.

To add the custom hostname, issue the following CLI command.

```
aws transfer tag-resource --arn arn:aws:transfer:region:account-id:server/server-ID --tags Key=transfer:customHostname,Value="custom-host-name"
```

For example:

```
aws transfer tag-resource --arn arn:aws:transfer:us-east-1:111122223333:server/s-1234567890abcdef0 --tags Key=transfer:customHostname,Value="abc.example.com"
```

If you are using Route 53, issue the following command to link your custom hostname to your Route 53 Hosted Zone ID.

```
aws transfer tag-resource --arn arn:aws:transfer:region:account-id:server/server-ID --tags Key=transfer:route53HostedZoneId,Value=HOSTED-ZONE-ID
```

For example:

```
aws transfer tag-resource --arn arn:aws:transfer:us-east-1:111122223333:server/s-1234567890abcdef0 --tags Key=transfer:route53HostedZoneId,Value=ABCDE1111222233334444
```

Assuming the sample values from the previous command, run the following command to view your tags:

```
aws transfer list-tags-for-resource --arn arn:aws:transfer:us-east-1:111122223333:server/s-1234567890abcdef0
```

```
"Tags": [
   {
      "Key": "transfer:route53HostedZoneId",
      "Value": "/hostedzone/ABCDE1111222233334444"
   },
   {
      "Key": "transfer:customHostname",
      "Value": "abc.example.com"
   }
 ]
```
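When you work with this response programmatically, note that the hosted zone ID comes back with a `/hostedzone/` prefix. The following sketch pulls both tag values out of a `list-tags-for-resource` response shaped like the one above:

```python
# Sketch: extract the custom hostname and hosted zone ID from a
# list-tags-for-resource response. The response below mirrors the sample
# output shown above.
response = {
    "Tags": [
        {"Key": "transfer:route53HostedZoneId",
         "Value": "/hostedzone/ABCDE1111222233334444"},
        {"Key": "transfer:customHostname", "Value": "abc.example.com"},
    ]
}

tags = {t["Key"]: t["Value"] for t in response["Tags"]}
hostname = tags.get("transfer:customHostname")
# Strip the "/hostedzone/" prefix to get the bare zone ID.
zone_id = tags.get("transfer:route53HostedZoneId", "").split("/")[-1]
print(hostname, zone_id)
```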

**Note**  
 Your public hosted zones and their IDs are available in Amazon Route 53.  
Sign in to the AWS Management Console and open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

## FTP and FTPS Network Load Balancer considerations
<a name="ftp-ftps-nlb-considerations"></a>

Although we recommend avoiding Network Load Balancers in front of AWS Transfer Family servers, if your FTP or FTPS implementation requires an NLB or NAT in the communication route from the client, follow these recommendations:
+ For an NLB, use port 21 for health checks, instead of ports 8192-8200.
+ For the AWS Transfer Family server, enable TLS session resumption by setting `TlsSessionResumptionMode = ENFORCED`.
**Note**  
This is the recommended mode, as it provides enhanced security:  
Requires clients to use TLS session resumption for subsequent connections.
Provides stronger security guarantees by ensuring consistent encryption parameters.
Helps prevent potential downgrade attacks.
Maintains compliance with security standards while optimizing performance.
+ If possible, migrate away from using an NLB to take full advantage of AWS Transfer Family performance and connection limits.
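The `TlsSessionResumptionMode` setting above is part of the server's `ProtocolDetails`, set through the `UpdateServer` operation. The following is a minimal sketch of that request; the server ID is a placeholder:

```python
# Sketch: build the UpdateServer request that enforces TLS session
# resumption on an FTPS-enabled server. The server ID is a placeholder.
def build_tls_resumption_request(server_id):
    return {
        "ServerId": server_id,
        "ProtocolDetails": {"TlsSessionResumptionMode": "ENFORCED"},
    }

request = build_tls_resumption_request("s-1234567890abcdef0")
# With boto3 you would then call:
#   boto3.client("transfer").update_server(**request)
print(request)
```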

For additional guidance on NLB alternatives, contact the AWS Transfer Family Product Management team through AWS Support. For more information about improving your security posture, see the blog post [Six tips to improve the security of your AWS Transfer Family server](https://aws.amazon.com/blogs/security/six-tips-to-improve-the-security-of-your-aws-transfer-family-server/).

 Security guidance for NLBs is provided in [Avoid placing NLBs and NATs in front of AWS Transfer Family servers](infrastructure-security.md#nlb-considerations). 

# Transferring files over a server endpoint using a client
<a name="transfer-file"></a>

You transfer files over the AWS Transfer Family service by specifying the transfer operation in a client. AWS Transfer Family supports version 3 of the SFTP protocol and the following clients:
+ OpenSSH (macOS and Linux)
**Note**  
This client works only with servers that are enabled for Secure Shell (SSH) File Transfer Protocol (SFTP).
+ WinSCP (Microsoft Windows only)
+ Cyberduck (Windows, macOS, and Linux)
+ FileZilla (Windows, macOS, and Linux)

The following limitations apply to every client:
+ The SCP protocol is not supported. However, you can use the OpenSSH `scp` command, which in recent OpenSSH releases transfers files over SFTP, as described in [Using the `scp` command](#openssh-scp).
+ The maximum number of concurrent, multiplexed, SFTP sessions per connection is 10.
+ For idle connections, the timeout value is 1800 seconds (30 minutes) for all protocols (SFTP, FTP, and FTPS). If there is no activity after this period, the client may be disconnected. For unresponsive connections:
  + SFTP has a 300-second (5-minute) timeout when a client is completely unresponsive.
  + FTPS and FTP have an approximately 10-minute unresponsive timeout that is handled by the underlying library.
+ Amazon S3 and Amazon EFS (due to the NFSv4 protocol) require filenames to be in UTF-8 encoding. Using different encoding can lead to unexpected results. For Amazon S3, see [Object key naming guidelines](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html#object-key-guidelines).
+ For File Transfer Protocol over SSL (FTPS), only Explicit mode is supported. Implicit mode is not supported.
+ For File Transfer Protocol (FTP) and FTPS, only Passive mode is supported.
+ For FTP and FTPS, only STREAM mode is supported.
+ For FTP and FTPS, only Image/Binary mode is supported.
+ For FTPS, an unprotected data connection (PROT C) is the FTP protocol default, but AWS Transfer Family does not support PROT C. You must issue the PROT P command for your data operations to be accepted.
+ If you are using Amazon S3 for your server's storage, and if your client contains an option to use multiple connections for a single transfer, make sure to disable the option. Otherwise, large file uploads can fail in unpredictable ways. Note that if you are using Amazon EFS as your storage backend, EFS *does* support multiple connections for a single transfer.

The following is a list of available commands for FTP and FTPS:


| Available commands | 
| --- | 
| ABOR | FEAT | MLST | PASS | RETR | STOR | 
| AUTH | LANG | MKD | PASV | RMD | STOU | 
| CDUP | LIST | MODE | PBSZ | RNFR | STRU | 
| CWD | MDTM | NLST | PROT | RNTO | SYST | 
| DELE | MFMT | NOOP | PWD | SIZE | TYPE | 
| EPSV | MLSD | OPTS | QUIT | STAT | USER | 

**Note**  
APPE is not supported.

For SFTP, the following operations are currently not supported for users that are using the logical home directory on servers that are using Amazon Elastic File System (Amazon EFS).


| Unsupported SFTP commands | 
| --- | 
| SSH_FXP_READLINK | 
| SSH_FXP_SYMLINK | 
| SSH_FXP_STAT when the requested file is a symlink | 
| SSH_FXP_REALPATH when the requested path contains any symlink components | 

**Generate public-private key pair**  
 Before you can transfer a file, you must have a public-private key pair available. If you have not previously generated a key pair, see [Generate SSH keys for service-managed users](sshkeygen.md). 

**Topics**
+ [Available SFTP/FTPS/FTP Commands](#transfer-sftp-commands)
+ [Find your Amazon VPC endpoint](#find-vpc-endpoint)
+ [Avoid `setstat` errors](#avoid-set-stat)
+ [Use OpenSSH](#openssh)
+ [Use WinSCP](#winscp)
+ [Use Cyberduck](#cyberduck)
+ [Use FileZilla](#filezilla)
+ [Use a Perl client](#using-clients-with-perl-modules)
+ [Use LFTP](#using-client-lftp)
+ [Post-upload processing](#post-processing-upload)
+ [SFTP messages](#sftp-transfer-activity-types)

## Available SFTP/FTPS/FTP Commands
<a name="transfer-sftp-commands"></a>

The following table describes the available commands for AWS Transfer Family, for the SFTP, FTPS, and FTP protocols. 

**Note**  
The table mentions *files* and *directories* for Amazon S3, which supports only buckets and objects: there is no hierarchy. However, you can use prefixes in object key names to imply a hierarchy and organize your data in a way similar to folders. This behavior is described in [Working with object metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) in the *Amazon Simple Storage Service User Guide*.


**SFTP/FTPS/FTP Commands**  

| Command | Amazon S3 | Amazon EFS | 
| --- | --- | --- | 
| cd | Supported | Supported | 
| chgrp | Not supported  | Supported (root or owner only) | 
| chmod | Not supported | Supported (root only) | 
| chmtime | Not supported | Supported | 
| chown | Not supported | Supported (root only) | 
| get | Supported | Supported (including resolving symbolic links) | 
| ln -s | Not supported  | Supported | 
| ls/dir | Supported | Supported | 
| mkdir | Supported | Supported | 
| put | Supported | Supported | 
| pwd | Supported | Supported | 
| rename |  Supported for files only  Renaming that would overwrite an existing file is not supported.   | Supported  Renaming that would overwrite an existing file or directory is not supported.  | 
| rm | Supported | Supported | 
| rmdir | Supported (empty directories only) | Supported | 
| version | Supported | Supported | 

## Find your Amazon VPC endpoint
<a name="find-vpc-endpoint"></a>

If the endpoint type for your Transfer Family server is VPC, identifying the endpoint to use for transferring files is not straightforward. In this case, use the following procedure to find your Amazon VPC endpoint. 

**To find your Amazon VPC endpoint**

1. Navigate to your server's details page.

1. In the **Endpoint details** pane, select the **VPC**.  
![\[\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/server-details-endpoint-vpc.png)

1. In the Amazon VPC dashboard, select the **VPC endpoint ID**.

1. In the list of **DNS names**, your server endpoint is the first one listed.  
![\[\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/server-details-endpoint-vpc-2.png)

## Avoid `setstat` errors
<a name="avoid-set-stat"></a>

Some SFTP file transfer clients attempt to change the attributes of remote files, including timestamps and permissions, using commands such as SETSTAT when uploading a file. However, these commands are not compatible with object storage systems such as Amazon S3. Due to this incompatibility, file uploads from these clients can result in errors even when the file is otherwise successfully uploaded.
+ When you call the `CreateServer` or `UpdateServer` API, use the `ProtocolDetails` option `SetStatOption` to ignore the error that is generated when the client attempts to use SETSTAT on a file you are uploading to an S3 bucket.
+ Set the value to `ENABLE_NO_OP` to have the Transfer Family server ignore the SETSTAT command, and upload files without needing to make any changes to your SFTP client.
+ Note that while the `SetStatOption` `ENABLE_NO_OP` setting ignores the error, it *does* generate a log entry in CloudWatch Logs, so you can determine when the client is making a SETSTAT call.

 For the API details for this option, see [ProtocolDetails](https://docs.aws.amazon.com/transfer/latest/APIReference/API_ProtocolDetails.html).
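As an illustrative sketch (not a definitive implementation), the following shows the request parameters you might pass to the boto3 `update_server` call to enable this behavior on an existing server. The server ID is a placeholder:

```python
# Sketch: request parameters for enabling SetStatOption on a
# Transfer Family server through the UpdateServer API (boto3 naming).
update_params = {
    "ServerId": "s-1234567890abcdef0",  # placeholder: your server ID
    "ProtocolDetails": {"SetStatOption": "ENABLE_NO_OP"},
}

# To apply the change (requires AWS credentials):
# import boto3
# boto3.client("transfer").update_server(**update_params)
```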

## Use OpenSSH
<a name="openssh"></a>

This section contains instructions to transfer files from the command line using OpenSSH.

**Note**  
This client works only with an SFTP-enabled server.

**Topics**
+ [Using OpenSSH](#openssh-use)
+ [Using the `scp` command](#openssh-scp)

### Using OpenSSH
<a name="openssh-use"></a>

**To transfer files over AWS Transfer Family using the OpenSSH command line utility**

1. On Linux, macOS, or Windows, open a command terminal.

1. At the prompt, enter the following command: 

   `sftp -i transfer-key sftp_user@service_endpoint`

   In the preceding command, `sftp_user` is the username, `transfer-key` is the SSH private key, and `service_endpoint` is the server's endpoint as shown in the AWS Transfer Family console for the selected server.
**Note**  
This command uses settings that are in the default `ssh_config` file. Unless you have previously edited this file, SFTP uses port 22. You can specify a different port (for example 2222) by adding a **-P** flag to the command, as follows.  

   ```
   sftp -P 2222 -i transfer-key sftp_user@service_endpoint
   ```
Alternatively, if you always want to use port 2222 or port 22000, you can update your default port in your `ssh_config` file.

   An `sftp` prompt should appear.

1.  (Optional) To view the user's home directory, enter the following command at the `sftp` prompt: 

   `pwd` 

1. To upload a file from your file system to the Transfer Family server, use the `put` command. For example, to upload `hello.txt` (assuming that file is in your current directory on your file system), run the following command at the `sftp` prompt: 

   `put hello.txt` 

   A message similar to the following appears, indicating that the file transfer is in progress, or complete.

   `Uploading hello.txt to /amzn-s3-demo-bucket/home/sftp_user/hello.txt`

   `hello.txt 100% 127 0.1KB/s 00:00`

**Note**  
After your server is created, it can take a few minutes for the server endpoint hostname to be resolvable by the DNS service in your environment.

### Using the `scp` command
<a name="openssh-scp"></a>

Transfer Family doesn't support the SCP protocol. However, you can use the OpenSSH `scp` command if you need this functionality.

If you want to use the `scp` command, we recommend OpenSSH version 9.0 or later. In OpenSSH version 9 and later, the `scp` command defaults to using the SFTP protocol for file transfers instead of the legacy SCP protocol.

**Important**  
Ensure that your Transfer Family server has been configured to use S3 optimized directory access.

## Use WinSCP
<a name="winscp"></a>

Use the instructions that follow to transfer files using WinSCP.

**Note**  
If you are using WinSCP 5.19, you can directly connect to Amazon S3 using your AWS credentials and upload/download files. For more details, see [Connecting to Amazon S3 service](https://winscp.net/eng/docs/guide_amazon_s3).

**To transfer files over AWS Transfer Family using WinSCP**

1. Open the WinSCP client.

1. In the **Login** dialog box, for **File protocol**, choose a protocol: **SFTP** or **FTP**.

   If you chose FTP, for **Encryption**, choose one of the following:
   + **No encryption** for FTP
   + **TLS/SSL Explicit encryption** for FTPS

1. For **Host name**, enter your server endpoint. The server endpoint is located on the **Server details** page. For more information, see [View SFTP, FTPS, and FTP server details](configuring-servers-view-info.md).

   If your server uses a VPC endpoint, see [Find your Amazon VPC endpoint](#find-vpc-endpoint).

1. For **Port number**, enter the following:
   + **22** for SFTP
   + **21** for FTP/FTPS

1. For **User name**, enter the name for the user that you created for your specific identity provider.

   **Tip:** The username should be one of the users you created or configured for your identity provider. AWS Transfer Family provides the following identity providers:
   + [Working with service-managed users](service-managed-users.md)
   + [Using AWS Directory Service for Microsoft Active Directory](directory-services-users.md)
   + [Working with custom identity providers](custom-idp-intro.md)

1. Choose **Advanced** to open the **Advanced Site Settings** dialog box. In the **SSH** section, choose **Authentication**.

1. For **Private key file**, browse for and choose the SSH private key file from your file system.

   If WinSCP offers to convert your SSH private key to the PPK format, choose **OK**.

1. Choose **OK** to return to the **Login** dialog box, and then choose **Save**.

1. In the **Save session as site** dialog box, choose **OK** to complete your connection setup.

1. In the **Login** dialog box, choose **Tools**, and then choose **Preferences**.

1. In the **Preferences** dialog box, for **Transfer**, choose **Endurance**.

   For the **Enable transfer resume/transfer to temporary filename for** option, choose **Disable**.
**Important**  
If you leave this option enabled, it increases upload costs and substantially decreases upload performance. It can also lead to failures of large file uploads.

1. For **Transfer**, choose **Background**, and clear the **Use multiple connections for single transfer** check box.

   **Tip:** If you leave this option selected, large file uploads can fail in unpredictable ways. For example, orphaned multipart uploads that incur Amazon S3 charges can be created. Silent data corruption can also occur.

1. Perform your file transfer.

   You can use drag-and-drop methods to copy files between the target and source windows. You can use toolbar icons to upload, download, delete, edit, or modify the properties of files in WinSCP.

**Note**  
This note does not apply if you are using Amazon EFS for storage.  
Commands that attempt to change attributes of remote files, including timestamps, are not compatible with object storage systems such as Amazon S3. Therefore, if you are using Amazon S3 for storage, be sure to disable WinSCP timestamp settings (or use the `SetStatOption` as described in [Avoid `setstat` errors](#avoid-set-stat)) before you perform file transfers. To do so, in the **WinSCP Transfer settings** dialog box, disable the **Set permissions** upload option and the **Preserve timestamp** common option.

## Use Cyberduck
<a name="cyberduck"></a>

Use the instructions that follow to transfer files using Cyberduck.

**To transfer files over AWS Transfer Family using Cyberduck**

1. Open the [Cyberduck](https://cyberduck.io/download/) client.

1. Choose **Open Connection**.

1. In the **Open Connection** dialog box, choose a protocol: **SFTP (SSH File Transfer Protocol)**, **FTP-SSL (Explicit AUTH TLS)**, or **FTP (File Transfer Protocol)**.

1. For **Server**, enter your server endpoint. The server endpoint is located on the **Server details** page. For more information, see [View SFTP, FTPS, and FTP server details](configuring-servers-view-info.md).

   If your server uses a VPC endpoint, see [Find your Amazon VPC endpoint](#find-vpc-endpoint).

1. For **Port number**, enter the following:
   + **22** for SFTP
   + **21** for FTP/FTPS

1. For **Username**, enter the name for the user that you created in [Managing users for server endpoints](create-user.md).

1. If SFTP is selected, for **SSH Private Key**, choose or enter the SSH private key.

1. Choose **Connect**.

1. Perform your file transfer.

   Depending on where your files are, do one of the following:
   + In your local directory (the source), choose the files that you want to transfer, and drag and drop them into the Amazon S3 directory (the target).
   + In the Amazon S3 directory (the source), choose the files that you want to transfer, and drag and drop them into your local directory (the target).

## Use FileZilla
<a name="filezilla"></a>

Use the instructions that follow to transfer files using FileZilla.

**To set up FileZilla for a file transfer**

1. Open the FileZilla client.

1. Choose **File**, and then choose **Site Manager**.

1. In the **Site Manager** dialog box, choose **New site**.

1. On the **General** tab, for **Protocol**, choose a protocol: **SFTP** or **FTP**.

   If you chose FTP, for **Encryption**, choose one of the following:
   + **Only use plain FTP (insecure)** – for FTP
   + **Use explicit FTP over TLS if available** – for FTPS

1. For **Host name**, enter the protocol that you are using, followed by your server endpoint. The server endpoint is located on the **Server details** page. For more information, see [View SFTP, FTPS, and FTP server details](configuring-servers-view-info.md).
   + If you are using SFTP, enter: `sftp://hostname`
   +  If you are using FTPS, enter: `ftps://hostname` 

   Make sure to replace *hostname* with your actual server endpoint.

   If your server uses a VPC endpoint, see [Find your Amazon VPC endpoint](#find-vpc-endpoint).

1. For **Port number**, enter the following:
   + **22** for SFTP
   + **21** for FTP/FTPS

1. If SFTP is selected, for **Logon Type**, choose **Key file**.

   For **Key file**, choose or enter the SSH private key.

1. For **User**, enter the name for the user that you created in [Managing users for server endpoints](create-user.md).

1. Choose **Connect**.

1. Perform your file transfer.
**Note**  
If you interrupt a file transfer in progress, AWS Transfer Family might write a partial object in your Amazon S3 bucket. If you interrupt an upload, check that the file size in the Amazon S3 bucket matches the file size of the source object before continuing.

## Use a Perl client
<a name="using-clients-with-perl-modules"></a>

If you use the `Net::SFTP::Foreign` Perl client, you must set `queue_size` to `1`. For example:

`my $sftp = Net::SFTP::Foreign->new('user@s-12345.server.transfer.us-east-2.amazonaws.com', queue_size => 1);`

**Note**  
 This workaround is needed for revisions of `Net::SFTP::Foreign` prior to [1.92.02](https://metacpan.org/changes/release/SALVA/Net-SFTP-Foreign-1.93#L12). 

## Use LFTP
<a name="using-client-lftp"></a>

LFTP is a free FTP client that you can use to perform file transfers from the command line on most Linux machines.

For large file downloads, LFTP has a known issue with out-of-order packets that causes the file transfer to fail.

## Post-upload processing
<a name="post-processing-upload"></a>

You can view post-upload processing information, including Amazon S3 object metadata and event notifications.

**Topics**
+ [Amazon S3 object metadata](#post-processing-S3-object-metadata)
+ [Amazon S3 event notifications](#post-processing-S3-event-notifications)

### Amazon S3 object metadata
<a name="post-processing-S3-object-metadata"></a>

As part of your object's metadata, you see a key called `x-amz-meta-user-agent`, whose value is `AWSTransfer`, and a key called `x-amz-meta-user-agent-id`, whose value is `username@server-id`. Here, `username` is the Transfer Family user who uploaded the file, and `server-id` is the server used for the upload. You can access this information by using the [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) operation on the S3 object inside your Lambda function.

![\[The Metadata screen displaying information about Amazon S3 object metadata for AWS Transfer Family.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/s3-object-metadata.png)
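Because the metadata value packs both names into one string, you need to split it when processing uploads. The following is a minimal sketch, assuming the boto3 convention that `head_object` returns metadata keys with the `x-amz-meta-` prefix stripped:

```python
def parse_transfer_upload(metadata):
    """Return (username, server_id) for an object uploaded through
    Transfer Family, or None for objects uploaded some other way.

    `metadata` is the Metadata dict from an S3 HeadObject response,
    where boto3 strips the "x-amz-meta-" prefix from each key.
    """
    if metadata.get("user-agent") != "AWSTransfer":
        return None
    # The server ID follows the last "@"; usernames may contain "@".
    username, _, server_id = metadata["user-agent-id"].rpartition("@")
    return username, server_id

# Example with a hypothetical metadata dict:
meta = {"user-agent": "AWSTransfer",
        "user-agent-id": "sftp_user@s-1234567890abcdef0"}
print(parse_transfer_upload(meta))  # ('sftp_user', 's-1234567890abcdef0')
```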


### Amazon S3 event notifications
<a name="post-processing-S3-event-notifications"></a>

When an object is uploaded to your S3 bucket using Transfer Family, `RoleSessionName` is contained in the Requester field in the [S3 event notification structure](https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html) as `[AWS:Role Unique Identifier]/username.sessionid@server-id`. For example, the following are the contents for a sample Requester field from an S3 access log for a file that was copied to the S3 bucket.

`arn:aws:sts::AWS-Account-ID:assumed-role/IamRoleName/username.sessionid@server-id`

The Requester field in this example shows the IAM role named `IamRoleName`. For more information about configuring S3 event notifications, see [Configuring Amazon S3 event notifications](https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) in the *Amazon Simple Storage Service Developer Guide*. For more information about AWS Identity and Access Management (IAM) role unique identifiers, see [Unique identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) in the *AWS Identity and Access Management User Guide*.
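As an illustrative sketch (the field names in the returned dict are my own, not part of the event structure), the following splits a Requester value of this form into its components:

```python
def parse_requester(requester):
    """Split an S3 event Requester value of the form
    arn:aws:sts::account-id:assumed-role/role-name/username.sessionid@server-id
    into its components."""
    arn_prefix, role_name, session_name = requester.split("/", 2)
    account_id = arn_prefix.split(":")[4]
    # The server ID follows the last "@"; the session ID follows the
    # last "." (usernames may themselves contain "." or "@").
    user_session, _, server_id = session_name.rpartition("@")
    username, _, session_id = user_session.rpartition(".")
    return {
        "account_id": account_id,
        "role_name": role_name,
        "username": username,
        "session_id": session_id,
        "server_id": server_id,
    }

parts = parse_requester(
    "arn:aws:sts::111122223333:assumed-role/IamRoleName/"
    "sftp_user.0123abcd@s-1234567890abcdef0"
)
```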

## SFTP messages
<a name="sftp-transfer-activity-types"></a>

This section describes client side messages that you may receive during or after your SFTP file transfers when using a Transfer Family server. For more information on any SFTP event, check your SFTP client logs. You can use that information to troubleshoot any errors, or forward that information to your network team for their help in identifying the issue.


**SFTP client-side messages**  

| Activity | Description | 
| --- | --- | 
| AUTH_FAILURE | The user failed authentication. This can be any kind of failure from a custom identity provider or service-managed user. The details in the event help clarify the root cause of the failure. | 
| CLOSE | Indicates that an opened file or directory is closed successfully. | 
| CONNECTED/DISCONNECTED | Indicates normal connection successes and disconnections. | 
| CREATE_SYMLINK | A symbolic link was created (successfully or unsuccessfully). | 
| DELETE | A file was deleted (successfully or unsuccessfully). | 
| ERROR | A general, unexpected error. The associated description contains information that can help you or your network administrators identify the specific issue. | 
| EXIT_REASON | Emitted when an unexpected error caused termination of your SFTP session. The message associated with the event describes the cause. | 
| MKDIR | A directory was created (successfully or unsuccessfully). | 
| OPEN | A file was opened for read or write (successfully or unsuccessfully). | 
| PARTIAL_CLOSE | The client disconnected from the server while a file was still open and no CLOSE message was received. Transfer Family stores the received portion of the file (which could in fact be the complete file) and emits the PARTIAL_CLOSE event to alert you to the issue. Workflows integration also receives an onPartialClose event to handle the file appropriately. | 
| RENAME | A file was renamed (successfully or unsuccessfully). | 
| RMDIR | A directory was deleted (successfully or unsuccessfully). | 
| SETSTAT |  The attributes of a file were changed (successfully or unsuccessfully).  Transfer Family doesn't support SETSTAT if you are using Amazon S3 for storage. The [Avoid `setstat` errors](#avoid-set-stat) section describes how to avoid `SetStat` errors by turning off the setting. Instead of a `fail unsupported` error, you then receive a `success but do nothing` message.   | 
| TLS_RESUME_FAILURE | The server is configured to enforce TLS session resumption and the client does not support it. | 

# Managing users for server endpoints
<a name="create-user"></a>

In the following sections, you can find information about how to add users using AWS Transfer Family, AWS Directory Service for Microsoft Active Directory, or a custom identity provider.

As part of each user's properties, you also store that user's Secure Shell (SSH) public key. Doing so is required for key-based authentication. The private key is stored locally on your user's computer. When your user sends an authentication request to your server by using a client, your server first confirms that the user has access to the associated SSH private key. The server then successfully authenticates the user.

**Note**  
For automated deployment and management of users with multiple SSH keys, see [Transfer Family Terraform modules](terraform.md).

In addition, you specify a user's home directory, or landing directory, and assign an AWS Identity and Access Management (IAM) role to the user. Optionally, you can provide a session policy to limit user access only to the home directory of your Amazon S3 bucket.

**Important**  
AWS Transfer Family blocks usernames that are 1 or 2 characters long from authenticating to SFTP servers. We also block the `root` username.  
This is due to the large volume of malicious login attempts by password scanners.
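As an illustration only (Transfer Family enforces these restrictions itself), the username rules described in this topic, including the 3-character minimum, the allowed character set used when creating users, and the blocked `root` name, can be sketched as a simple validator:

```python
import re

# Allowed characters per the username rules in this topic: a-z, A-Z,
# 0-9, underscore, hyphen, period, and at sign; 3-100 characters;
# must not start with a hyphen, period, or at sign.
USERNAME_RE = re.compile(r"[A-Za-z0-9_][A-Za-z0-9_.@-]{2,99}")

def is_allowed_username(name):
    """Return True if a username passes the documented restrictions."""
    return name != "root" and USERNAME_RE.fullmatch(name) is not None
```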

## Amazon EFS vs. Amazon S3
<a name="efs-vs-s3-users"></a>

Characteristics of each storage option:
+ To limit access: Amazon S3 supports session policies; Amazon EFS supports POSIX user, group, and secondary group IDs
+  Both support public/private keys 
+  Both support home directories 
+  Both support logical directories 
**Note**  
 For Amazon S3, most of the support for logical directories is via API/CLI. You can use the **Restricted** check box in the console to lock down a user to their home directory, but you cannot specify a virtual directory structure. 

## Logical directories
<a name="logical-dir-users"></a>

If you are specifying logical directory values for your user, the parameter you use depends on the type of user.
+ For service-managed users, provide logical directory values in `HomeDirectoryMappings`.
+ For custom identity provider users, provide logical directory values in `HomeDirectoryDetails`.
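For example, here is a hedged sketch of `CreateUser` request parameters (boto3 naming) for a service-managed user with a logical directory mapping. The server ID, role ARN, bucket, and key are placeholders, and it assumes the `${transfer:UserName}` substitution variable to land each user in their own folder:

```python
# Sketch: CreateUser parameters for a service-managed user whose
# logical directory mapping lands them in a per-user folder.
create_user_params = {
    "ServerId": "s-1234567890abcdef0",        # placeholder server ID
    "UserName": "sftp_user",
    "Role": "arn:aws:iam::111122223333:role/transfer-user-role",
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryMappings": [
        {"Entry": "/",
         "Target": "/amzn-s3-demo-bucket/home/${transfer:UserName}"},
    ],
    "SshPublicKeyBody": "ssh-ed25519 AAAA... placeholder",
}

# To create the user (requires AWS credentials):
# import boto3
# boto3.client("transfer").create_user(**create_user_params)
```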

AWS Transfer Family supports specifying a `HomeDirectory` value when using the `LOGICAL` `HomeDirectoryType`. This applies to service-managed users, Active Directory access, and custom identity provider implementations where `HomeDirectoryDetails` is provided in the response.

**Important**  
When specifying a `HomeDirectory` with the `LOGICAL` `HomeDirectoryType`, the value must map to one of your logical directory mappings. The service validates this during both user creation and updates to prevent configurations that would not work.

### Default behavior
<a name="logical-dir-default"></a>

By default, if left unspecified, the `HomeDirectory` is set to `/` for `LOGICAL` mode. This behavior is unchanged and remains compatible with existing user definitions.
+ Make sure to map your HomeDirectory to an *Entry* and not a *Target*. For more details, see [Rules for using logical directories](logical-dir-mappings.md#logical-dir-rules).
+ For details on how a virtual directory is structured see [Virtual directory structure](implement-log-dirs.md#virtual-dirs).

### Custom Identity Provider considerations
<a name="logical-dir-custom-idp"></a>

When using a custom identity provider, you can specify a `HomeDirectory` in the response while using the `LOGICAL` `HomeDirectoryType`. The `TestIdentityProvider` API call produces correct results when the custom identity provider specifies a `HomeDirectory` in `LOGICAL` mode.

Example Custom IDP response with HomeDirectory and LOGICAL HomeDirectoryType:

```
{
  "Role": "arn:aws:iam::123456789012:role/transfer-user-role",
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectory": "/marketing",
  "HomeDirectoryDetails": "[{\"Entry\":\"/\",\"Target\":\"/bucket/home\"},{\"Entry\":\"/marketing\",\"Target\":\"/marketing-bucket/campaigns\"}]"
}
```
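Because the service rejects a `HomeDirectory` that doesn't match one of your `Entry` values, it can be useful to check a response before returning it from your identity provider Lambda function. The following is a minimal sketch under that assumption, using the example response from this section:

```python
import json

def logical_home_is_valid(response):
    """Return True if a custom identity provider response either doesn't
    use the LOGICAL HomeDirectoryType, or its HomeDirectory matches one
    of the Entry values in HomeDirectoryDetails."""
    if response.get("HomeDirectoryType") != "LOGICAL":
        return True
    home = response.get("HomeDirectory", "/")  # service default is "/"
    mappings = json.loads(response.get("HomeDirectoryDetails", "[]"))
    return home in (m["Entry"] for m in mappings)

response = {
    "Role": "arn:aws:iam::123456789012:role/transfer-user-role",
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectory": "/marketing",
    "HomeDirectoryDetails": json.dumps(
        [{"Entry": "/", "Target": "/bucket/home"},
         {"Entry": "/marketing", "Target": "/marketing-bucket/campaigns"}]
    ),
}
```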

## Active Directory group quotas
<a name="ad-group-quotas"></a>

AWS Transfer Family has a default limit of 100 Active Directory groups per server. If your use case requires more than 100 groups, consider using a custom identity provider solution as described in [Simplify Active Directory authentication with a custom identity provider for AWS Transfer Family](https://aws.amazon.com/blogs/storage/simplify-active-directory-authentication-with-a-custom-identity-provider-for-aws-transfer-family/).

This limit applies to servers using the following identity providers:
+ AWS Directory Service for Microsoft Active Directory
+ AWS Directory Service for Entra ID Domain Services

If you need to request a service limit increase, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *AWS General Reference*.

For troubleshooting information related to Active Directory group limits, see [Active Directory group limits exceeded](auth-issues.md#managed-ad-group-limits).

**Topics**
+ [Amazon EFS vs. Amazon S3](#efs-vs-s3-users)
+ [Logical directories](#logical-dir-users)
+ [Active Directory group quotas](#ad-group-quotas)
+ [Working with service-managed users](service-managed-users.md)
+ [Working with custom identity providers](custom-idp-intro.md)
+ [Using AWS Directory Service for Microsoft Active Directory](directory-services-users.md)
+ [Using AWS Directory Service for Entra ID Domain Services](azure-sftp.md)

# Working with service-managed users
<a name="service-managed-users"></a>

You can add either Amazon S3 or Amazon EFS service-managed users to your server, depending on the server's **Domain** setting. For more information, see [Configuring an SFTP, FTPS, or FTP server endpoint](tf-server-endpoint.md).

If you use a service-managed identity type, you add users to your file transfer protocol enabled server. When you do so, each username must be unique on your server.

To add a service-managed user programmatically, see the [example](https://docs.aws.amazon.com/transfer/latest/APIReference/API_CreateUser.html#API_CreateUser_Examples) for the [CreateUser](https://docs.aws.amazon.com/transfer/latest/APIReference/API_CreateUser.html) API.

**Note**  
For service-managed users, there is a limit of 2,000 logical directory entries. For information about using logical directories, see [Using logical directories to simplify your Transfer Family directory structures](logical-dir-mappings.md).

**Topics**
+ [Adding Amazon S3 service-managed users](#add-s3-user)
+ [Adding Amazon EFS service-managed users](#add-efs-user)
+ [Managing service-managed users](#managing-service-managed-users)

## Adding Amazon S3 service-managed users
<a name="add-s3-user"></a>

**Note**  
 If you want to configure a cross account Amazon S3 bucket, follow the steps mentioned in this Knowledge Center article: [ How do I configure my AWS Transfer Family server to use an Amazon Simple Storage Service bucket that's in another AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/sftp-cross-account-s3-bucket/).

**To add an Amazon S3 service-managed user to your server**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/), then select **Servers** from the navigation pane.

1. On the **Servers** page, select the check box of the server that you want to add a user to.

1. Choose **Add user**.

1. In the **User configuration** section, for **Username**, enter the username. This username must be a minimum of 3 and a maximum of 100 characters. You can use the following characters in the username: a–z, A–Z, 0–9, underscore (_), hyphen (-), period (.), and at sign (@). The username can't start with a hyphen (-), period (.), or at sign (@).

1. For **Access**, choose the IAM role that you previously created that provides access to your Amazon S3 bucket.

   You created this IAM role using the procedure in [Create an IAM role and policy](requirements-roles.md). That IAM role includes an IAM policy that provides access to your Amazon S3 bucket. It also includes a trust relationship with the AWS Transfer Family service, defined in another IAM policy. If you need fine-grained access control for your users, refer to the [Enhance data access control with AWS Transfer Family and Amazon S3](https://aws.amazon.com/blogs/storage/enhance-data-access-control-with-aws-transfer-family-and-amazon-s3-access-points/) blog post.

1. (Optional) For **Policy**, select one of the following:
   + **None**
   + **Existing policy**
   + **Select a policy from IAM**: allows you to choose an existing session policy. Choose **View** to see a JSON object containing the details of the policy.
   + **Auto-generate policy based on home folder**: generates a session policy for you. Choose **View** to see a JSON object containing the details of the policy.
**Note**  
If you choose **Auto-generate policy based on home folder**, do not select **Restricted** for this user.

   To learn more about session policies, see [Create an IAM role and policy](requirements-roles.md), [Creating a session policy for an Amazon S3 bucket](users-policies-session.md), or [Dynamic permission management approaches](dynamic-permission-management.md).

1. For **Home directory**, choose the Amazon S3 bucket to store the data to transfer using AWS Transfer Family. Enter the path to the `home` directory where your user lands when they log in using their client.

   If you keep this parameter blank, the `root` directory of your Amazon S3 bucket is used. In this case, make sure that your IAM role provides access to this `root` directory.
**Note**  
We recommend that you choose a directory path that contains the user name of the user, which enables you to effectively use a session policy. The session policy limits user access in the Amazon S3 bucket to that user's `home` directory.

1. (Optional) For **Restricted**, select the check box so that your users can't access anything outside of that folder and can't see the Amazon S3 bucket or folder name.
**Note**  
Assigning the user a home directory and restricting the user to that home directory should be sufficient to lock down the user's access to the designated folder. If you need to apply further controls, use a session policy.   
If you select **Restricted** for this user, you cannot select **Auto-generate policy based on home folder**, because home folder is not a defined value for Restricted users.

1. For **SSH public key**, enter the public SSH key portion of the SSH key pair.

   Your key is validated by the service before you can add your new user.
**Note**  
For instructions on how to generate an SSH key pair, see [Generate SSH keys for service-managed users](sshkeygen.md).

1. (Optional) For **Key** and **Value**, enter one or more tags as key-value pairs, and choose **Add tag**.

1. Choose **Add** to add your new user to the server that you chose.

   The new user appears in the **Users** section of the **Server details** page.

**Next steps** – For the next step, continue on to [Transferring files over a server endpoint using a client](transfer-file.md).

## Adding Amazon EFS service-managed users
<a name="add-efs-user"></a>

Amazon EFS uses the Portable Operating System Interface (POSIX) file permission model to represent file ownership.
+  For more details on Amazon EFS file ownership, see [Amazon EFS file ownership](configure-storage.md#efs-file-ownership). 
+ For more details on setting up directories for your EFS users, see [Set up Amazon EFS users for Transfer Family](configure-storage.md#configure-efs-users-permissions). 

**To add an Amazon EFS service-managed user to your server**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/), then select **Servers** from the navigation pane.

1. On the **Servers** page, select the Amazon EFS server that you want to add a user to.

1. Choose **Add user** to display the **Add user** page.

1. In the **User configuration** section, use the following settings.

   1. The **Username** must be a minimum of 3 and a maximum of 100 characters. You can use the following characters in the username: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The username can't start with a hyphen '-', period '.', or at sign '@'.

   1.  For **User ID** and **Group ID**, note the following: 
      + For the first user that you create, we recommend that you enter a value of **0** for both **Group ID** and **User ID**. This grants the user administrator privileges for Amazon EFS. 
      + For additional users, enter the user's POSIX user ID and group ID. These IDs are used for all Amazon Elastic File System operations performed by the user. 
      + For **User ID** and **Group ID**, do not use any leading zeros. For example, **12345** is acceptable; **012345** is not. 

   1. (Optional) For **Secondary Group IDs**, enter one or more additional POSIX group IDs for each user, separated by commas.

   1. For **Access**, choose the IAM role that:
      + Gives the user access to only the Amazon EFS resources (file systems) that you want them to access.
      + Defines which file system operations that the user can and cannot perform.

      We recommend that you use an IAM role that allows Amazon EFS file system selection, with mount access and read/write permissions. For example, the combination of the following two AWS managed policies, while quite permissive, grants the necessary permissions for your user: 
      +  AmazonElasticFileSystemClientFullAccess 
      +  AWSTransferConsoleFullAccess 

      For more information, see the blog post [AWS Transfer Family support for Amazon Elastic File System](https://aws.amazon.com/blogs/aws/new-aws-transfer-family-support-for-amazon-elastic-file-system/).

   1. For **Home directory**, do the following:
      + Choose the Amazon EFS file system that you want to use for storing the data to transfer using AWS Transfer Family.
      + Decide whether to set the home directory to **Restricted**. Setting the home directory to **Restricted** has the following effects:
        + Amazon EFS users can't access any files or directories outside of that folder.
        + Amazon EFS users can't see the Amazon EFS file system name (**fs-xxxxxxx**).
**Note**  
When you select the **Restricted** option, symlinks don't resolve for Amazon EFS users.
      + (Optional) Enter the path to the home directory that you want users to be in when they log in using their client.

        If you don't specify a home directory, the root directory of your Amazon EFS file system is used. In this case, make sure that your IAM role provides access to this root directory.

1. For **SSH public key**, enter the public SSH key portion of the SSH key pair.

   Your key is validated by the service before you can add your new user.
**Note**  
For instructions on how to generate an SSH key pair, see [Generate SSH keys for service-managed users](sshkeygen.md).

1. (Optional) Enter any tags for the user. For **Key** and **Value**, enter one or more tags as key-value pairs, and choose **Add tag**.

1. Choose **Add** to add your new user to the server that you chose.

   The new user appears in the **Users** section of the **Server details** page.

You might encounter the following issues the first time you connect to your Transfer Family server using SFTP: 
+  If you run the `sftp` command and the prompt doesn't appear, you might encounter the following message: 

   `Couldn't canonicalize: Permission denied` 

   `Need cwd` 

   In this case, you must increase the policy permissions for your user's role. You can add an AWS managed policy, such as `AmazonElasticFileSystemClientFullAccess`. 
+ If you enter `pwd` at the `sftp` prompt to view the user's home directory, you might see the following message, where *USER-HOME-DIRECTORY* is the home directory for the SFTP user:

   `remote readdir("/USER-HOME-DIRECTORY"): No such file or directory` 

   In this case, you should be able to navigate to the parent directory (`cd ..`) and create the user's home directory (`mkdir username`).

**Next steps** – For the next step, continue on to [Transferring files over a server endpoint using a client](transfer-file.md).

## Managing service-managed users
<a name="managing-service-managed-users"></a>

 In this section, you can find information about how to view a list of users, how to edit user details, and how to add an SSH public key. 
+ [View a list of users](#list-users)
+ [View or edit user details](#view-user-details)
+ [Delete a user](#delete-user)
+ [Add SSH public key](#add-user-ssh-key)
+ [Delete SSH public key](#delete-user-ssh-key)<a name="list-users"></a>

**To find a list of your users**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. Select **Servers** from the navigation pane to display the **Servers** page.

1. Choose the identifier in the **Server ID** column to see the **Server details** page.

1. Under **Users**, view a list of users.<a name="view-user-details"></a>

**To view or edit user details**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. Select **Servers** from the navigation pane to display the **Servers** page.

1. Choose the identifier in the **Server ID** column to see the **Server details** page.

1. Under **Users**, choose a username to see the **User details** page.

   You can change the user's properties on this page by choosing **Edit**.

1. On the **User details** page, choose **Edit** next to **User configuration**.  
![\[Image showing the screen for editing a user's configuration\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/edit-user-details-page-user-config.png)

1. On the **Edit configuration** page, for **Access**, choose the IAM role that you previously created that provides access to your Amazon S3 bucket.

   You created this IAM role using the procedure in [Create an IAM role and policy](requirements-roles.md). That IAM role includes an IAM policy that provides access to your Amazon S3 bucket. It also includes a trust relationship with the AWS Transfer Family service, defined in another IAM policy.

1. (Optional) For **Policy**, choose one of the following:
   + **None**
   + **Existing policy**
   + **Select a policy from IAM** to choose an existing policy. Choose **View** to see a JSON object containing the details of the policy.

   To learn more about session policies, see [Create an IAM role and policy](requirements-roles.md). To learn more about creating a session policy, see [Creating a session policy for an Amazon S3 bucket](users-policies-session.md).

1. For **Home directory**, choose the Amazon S3 bucket to store the data to transfer using AWS Transfer Family. Enter the path to the `home` directory where your user lands when they log in using their client.

   If you leave this parameter blank, the `root` directory of your Amazon S3 bucket is used. In this case, make sure that your IAM role provides access to this `root` directory.
**Note**  
We recommend that you choose a directory path that contains the user name of the user, which enables you to effectively use a session policy. The session policy limits user access in the Amazon S3 bucket to that user's `home` directory.

1. (Optional) For **Restricted**, select the check box so that your users can't access anything outside of that folder and can't see the Amazon S3 bucket or folder name.
**Note**  
Assigning the user a home directory and restricting the user to that home directory should be sufficient to lock down the user's access to the designated folder. If you need to apply further controls, use a session policy.

1. Choose **Save** to save your changes.<a name="delete-user"></a>

**To delete a user**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. Select **Servers** from the navigation pane to display the **Servers** page.

1. Choose the identifier in the **Server ID** column to see the **Server details** page.

1. Under **Users**, choose a username to see the **User details** page. 

1. On the **User details** page, choose **Delete** to the right of the username.

1. In the confirmation dialog box that appears, enter the word **delete**, and then choose **Delete** to confirm that you want to delete the user.

 The user is deleted from the **Users** list.<a name="add-user-ssh-key"></a>

**To add an SSH public key for a user**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the navigation pane, choose **Servers**.

1. Choose the identifier in the **Server ID** column to see the **Server details** page.

1. Under **Users**, choose a username to see the **User details** page.

1. Choose **Add SSH public key** to add a new SSH public key to a user.
**Note**  
SSH keys are used only by servers that are enabled for Secure Shell (SSH) File Transfer Protocol (SFTP). For information about how to generate an SSH key pair, see [Generate SSH keys for service-managed users](sshkeygen.md).

1. For **SSH public key**, enter the SSH public key portion of the SSH key pair.

   Your key is validated by the service before you can add your new user. The format of the SSH key is `ssh-rsa string`. To generate an SSH key pair, see [Generate SSH keys for service-managed users](sshkeygen.md).

1. Choose **Add key**.<a name="delete-user-ssh-key"></a>

**To delete an SSH public key for a user**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the navigation pane, choose **Servers**.

1. Choose the identifier in the **Server ID** column to see the **Server details** page.

1. Under **Users**, choose a username to see the **User details** page.

1. To delete a public key, select its SSH key check box and choose **Delete**.

# Working with custom identity providers
<a name="custom-idp-intro"></a>

AWS Transfer Family offers several options for custom identity providers to authenticate and authorize users for secure file transfers. Here are the main approaches:
+ [Custom identity provider solution](custom-idp-toolkit.md)—This topic describes the Transfer Family custom identity provider solution, using a toolkit hosted in GitHub.
**Note**  
For most use cases, this is the recommended option. Specifically, if you need to support more than 100 Active Directory groups, the custom identity provider solution offers a scalable alternative without group limitations. This solution is described in the blog post, [Simplify Active Directory authentication with a custom identity provider for AWS Transfer Family](https://aws.amazon.com/blogs/storage/simplify-active-directory-authentication-with-a-custom-identity-provider-for-aws-transfer-family/).
+ [Using Amazon API Gateway to integrate your identity provider](authentication-api-gateway.md)—This topic describes how to use an AWS Lambda function to back an Amazon API Gateway method.

  You can provide a RESTful interface with a single Amazon API Gateway method. Transfer Family calls this method to connect to your identity provider, which authenticates and authorizes your users for access to Amazon S3 or Amazon EFS. Use this option if you need a RESTful API to integrate your identity provider or if you want to use AWS WAF to leverage its capabilities for geo-blocking or rate-limiting requests. For details, see [Using Amazon API Gateway to integrate your identity provider](authentication-api-gateway.md).
+ [Dynamic permission management approaches](dynamic-permission-management.md)—This topic describes approaches for managing user permissions dynamically using session policies.

  To authenticate your users, you can use your existing identity provider with AWS Transfer Family. You integrate your identity provider using an AWS Lambda function, which authenticates and authorizes your users for access to Amazon S3 or Amazon Elastic File System (Amazon EFS). For details, see [Using AWS Lambda to integrate your identity provider](custom-lambda-idp.md). You can also access CloudWatch graphs for metrics, such as the number of files and bytes transferred, in the AWS Transfer Family Management Console, giving you a centralized dashboard for monitoring file transfers.
+ Transfer Family provides a blog post and a workshop that walk you through building a file transfer solution. This solution leverages AWS Transfer Family for managed SFTP/FTPS endpoints and Amazon Cognito and DynamoDB for user management. 

  The blog post is available at [Using Amazon Cognito as an identity provider with AWS Transfer Family and Amazon S3](https://aws.amazon.com/blogs/storage/using-amazon-cognito-as-an-identity-provider-with-aws-transfer-family-and-amazon-s3/). You can view the details for the workshop [here](https://catalog.workshops.aws/transfer-family-sftp/en-US). 

**Note**  
For custom identity providers, the username must be a minimum of 3 and a maximum of 100 characters. You can use the following characters in the username: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The username can't start with a hyphen '-', period '.', or at sign '@'.
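When you implement a custom identity provider, you can enforce these username rules in code before calling out to your backing store. The following is a minimal sketch; the function name is illustrative, and the regular expression is derived from the rules above rather than from any Transfer Family API:

```javascript
// Validate a Transfer Family username per the rules above:
// 3-100 characters drawn from a-z, A-Z, 0-9, '_', '-', '.', '@',
// where the first character must not be '-', '.', or '@'.
function isValidTransferUsername(name) {
  return /^[a-zA-Z0-9_][a-zA-Z0-9_.@-]{2,99}$/.test(name);
}
```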

When implementing a custom identity provider, consider the following best practices:
+ Deploy the solution in the same AWS account and AWS Region as your Transfer Family servers.
+ Implement the principle of least privilege when configuring IAM roles and policies.
+ Use features like IP allow-listing and standardized logging for enhanced security.
+ Test your custom identity provider thoroughly in a non-production environment before deployment.

**Topics**
+ [Custom identity provider solution](custom-idp-toolkit.md)
+ [Using AWS Lambda to integrate your identity provider](custom-lambda-idp.md)
+ [Using Amazon API Gateway to integrate your identity provider](authentication-api-gateway.md)
+ [Using multiple authentication methods](custom-idp-mfa.md)
+ [IPv6 support for custom identity providers](custom-idp-ipv6.md)

# Custom identity provider solution
<a name="custom-idp-toolkit"></a>

The AWS Transfer Family custom identity provider solution addresses many of the common authentication and authorization use cases that enterprises encounter when implementing the service. This modular solution offers:
+ A reusable foundation for implementing custom identity providers 
+ Granular per-user session configuration 
+ Separated authentication and authorization logic 

## Implementation details for the custom identity toolkit
<a name="idp-toolkit-implementation-details"></a>

The solution provides a flexible and maintainable base for various use cases. To get started, review the toolkit at [https://github.com/aws-samples/toolkit-for-aws-transfer-family](https://github.com/aws-samples/toolkit-for-aws-transfer-family), then follow the deployment instructions in the [Getting started](https://github.com/aws-samples/toolkit-for-aws-transfer-family/tree/main/solutions/custom-idp#getting-started) section.

![\[Architecture diagram for the custom identity provider toolkit available in GitHub.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/custom-idp-solution-high-level-architecture.png)


**Note**  
If you have previously used custom identity provider templates and examples, consider adopting this solution instead. Moving forward, provider-specific modules will standardize on this solution. Ongoing maintenance and feature enhancements will be applied to this solution.

This solution contains standard patterns for implementing a custom provider, covering details such as logging and where to store the additional session metadata that AWS Transfer Family needs, such as the `HomeDirectoryDetails` parameter. It decouples the identity provider authentication logic from the reusable logic that builds the configuration returned to Transfer Family to complete authentication and establish settings for the session. 

The code and supporting resources for this solution are available at [https://github.com/aws-samples/toolkit-for-aws-transfer-family](https://github.com/aws-samples/toolkit-for-aws-transfer-family).

The toolkit contains the following features:
+ An [AWS Serverless Application Model](https://aws.amazon.com/serverless/sam) template that provisions the required resources. Optionally, deploy and configure Amazon API Gateway to incorporate AWS WAF, as described in the blog post [ Securing AWS Transfer Family with AWS Web Application Firewall and Amazon API Gateway](https://aws.amazon.com/blogs/storage/securing-aws-transfer-family-with-aws-web-application-firewall-and-amazon-api-gateway/).
+ An [Amazon DynamoDB](https://aws.amazon.com/dynamodb) schema to store configuration metadata about identity providers, including user session settings such as `HomeDirectoryDetails`, `Role`, and `Policy`.
+ A modular approach that enables you to add new identity providers to the solution in the future, as modules.
+ Attribute retrieval: Optionally retrieve IAM role and POSIX Profile (UID and GID) attributes from supported identity providers, including AD, LDAP, and Okta.
+ Support for multiple identity providers connected to a single Transfer Family server and multiple Transfer Family servers using the same deployment of the solution.
+ Built-in IP allow-list checking, which can optionally be configured on a per-user or per-identity-provider basis.
+ Detailed logging with configurable log-level and tracing support to aid in troubleshooting.
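To illustrate the kind of per-user IP allow-list check the toolkit performs, the following sketch matches a source IP against a list of CIDR ranges. The function names are hypothetical, not toolkit APIs:

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

// Return true if the address falls inside the CIDR block (e.g. "192.168.0.0/24").
function inCidr(ip, cidr) {
  const [base, bitsStr] = cidr.split('/');
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

// Return true if the event's sourceIp matches any entry in the allow list.
function isIpAllowed(sourceIp, allowList) {
  return allowList.some((cidr) => inCidr(sourceIp, cidr));
}
```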

Before you begin to deploy the custom identity provider solution, you need to have the following AWS resources.
+ An Amazon Virtual Private Cloud (VPC) with private subnets, with internet connectivity through either a NAT gateway or a DynamoDB gateway endpoint.
+ Appropriate IAM permissions to perform the following tasks:
  + Deploy the `custom-idp.yaml` CloudFormation template
  + Create AWS CodePipeline projects
  + Create AWS CodeBuild projects
  + Create IAM roles and policies

**Important**  
You must deploy the solution to the same AWS account and AWS Region that contains your target Transfer Family servers.

## Supported identity providers
<a name="custom-supported-idp"></a>

The following table describes the identity providers that are supported by the custom identity provider solution.


| Provider | Password flows | Public key flows | Multi-factor | Attribute retrieval | Details | 
| --- | --- | --- | --- | --- | --- | 
| Active Directory and LDAP | Yes | Yes | No | Yes | User verification can be performed as part of public key authentication flow. | 
| Argon2 (local hash) | Yes | No | No | No | Argon2 hashes are stored in the user record for 'local' password-based authentication use cases. | 
| Amazon Cognito | Yes | No | Yes (TOTP only) | No | Time-based One-Time Password (TOTP)-based multi-factor authentication only. SMS-based MFA is not supported. | 
| Entra ID (formerly Azure AD) | Yes | No | No | No |  | 
| Okta | Yes | Yes | Yes (TOTP only) | Yes | TOTP-based MFA only. | 
| Public key | No | Yes | No | No | Public keys are stored in the user record in DynamoDB. | 
| Secrets Manager | Yes | Yes | No | No |  | 

# Using AWS Lambda to integrate your identity provider
<a name="custom-lambda-idp"></a>

This topic describes how to create an AWS Lambda function that connects to your custom identity provider. You can use any custom identity provider, such as Okta, Secrets Manager, OneLogin, or a custom data store that includes authorization and authentication logic.

For most use cases, the recommended way to configure a custom identity provider is to use the [Custom identity provider solution](custom-idp-toolkit.md).

**Note**  
Before you create a Transfer Family server that uses Lambda as the identity provider, you must create the function. For an example Lambda function, see [Example Lambda functions](#lambda-auth-examples). Or, you can deploy a CloudFormation stack that uses one of the [Lambda function templates](#lambda-idp-templates). Also, make sure your Lambda function uses a resource-based policy that trusts Transfer Family. For an example policy, see [Lambda resource-based policy](#lambda-resource-policy).

1. Open the [AWS Transfer Family console](https://console.aws.amazon.com/transfer/).

1. Choose **Create server** to open the **Create server** page. For **Choose an identity provider**, choose **Custom Identity Provider**, as shown in the following screenshot.  
![\[The Choose an identity provider console section with Custom identity provider selected. Also has the default value selected, which is that users can authenticate using either their password or key.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/custom-lambda-console.png)
**Note**  
The choice of authentication methods is only available if you enable SFTP as one of the protocols for your Transfer Family server.

1. Make sure the default value, **Use AWS Lambda to connect your identity provider**, is selected.

1. For **AWS Lambda function**, choose the name of your Lambda function.

1. Fill in the remaining boxes, and then choose **Create server**. For details on the remaining steps for creating a server, see [Configuring an SFTP, FTPS, or FTP server endpoint](tf-server-endpoint.md).

## Lambda resource-based policy
<a name="lambda-resource-policy"></a>

You must have a policy that references the Transfer Family server and Lambda ARNs. For example, you could use the following policy with the Lambda function that connects to your identity provider. The policy is shown as an escaped JSON string.


```
"Policy":
"{\"Version\":\"2012-10-17\",
\"Id\":\"default\",
\"Statement\":[
  {\"Sid\":\"AllowTransferInvocation\",
  \"Effect\":\"Allow\",
  \"Principal\":{\"Service\":\"transfer.amazonaws.com\"},
  \"Action\":\"lambda:InvokeFunction\",
  \"Resource\":\"arn:aws:lambda:region:123456789012:function:my-lambda-auth-function\",
  \"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:transfer:region:123456789012:server/server-id\"}}}
]}"
```

**Note**  
In the example policy above, replace each *user input placeholder* with your own information.
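Because the escaped-string form is hard to read and edit by hand, it can be convenient to build the policy as a plain object and serialize it. The following sketch reproduces the policy above (the account ID, Region, function name, and server ID are placeholders, as in the original):

```javascript
// Build the Lambda resource-based policy as an object, then serialize it.
// JSON.stringify produces the escaped-string form shown above.
const policy = {
  Version: '2012-10-17',
  Id: 'default',
  Statement: [
    {
      Sid: 'AllowTransferInvocation',
      Effect: 'Allow',
      Principal: { Service: 'transfer.amazonaws.com' },
      Action: 'lambda:InvokeFunction',
      Resource: 'arn:aws:lambda:region:123456789012:function:my-lambda-auth-function',
      Condition: {
        ArnLike: {
          'AWS:SourceArn': 'arn:aws:transfer:region:123456789012:server/server-id',
        },
      },
    },
  ],
};

const policyString = JSON.stringify(policy);
```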

## Event message structure
<a name="event-message-structure"></a>

The event message that the SFTP server sends to the authorizer Lambda function for a custom identity provider has the following structure.

```
{
    "username": "value",
    "password": "value",
    "protocol": "SFTP",
    "serverId": "s-abcd123456",
    "sourceIp": "192.168.0.100"
}
```

Where `username` and `password` are the values for the sign-in credentials that are sent to the server.

For example, you enter the following command to connect:

```
sftp bobusa@server_hostname
```

You are then prompted to enter your password:

```
Enter password:
    mysecretpassword
```

You can check this from your Lambda function by printing the passed event from within the Lambda function. It should look similar to the following text block.

```
{
    "username": "bobusa",
    "password": "mysecretpassword",
    "protocol": "SFTP",
    "serverId": "s-abcd123456",
    "sourceIp": "192.168.0.100"
}
```

The event structure is similar for FTP and FTPS; the only difference is that the `protocol` parameter contains `FTP` or `FTPS` instead of `SFTP`.
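If your server enables multiple protocols, your Lambda function can branch on `event.protocol`. The following minimal sketch rejects plain FTP as an example policy; this restriction is illustrative, not a service requirement:

```javascript
// Example guard inside a custom identity provider Lambda handler.
// Returning false here would correspond to returning an empty response
// (no Role) to Transfer Family, which indicates authentication failure.
function checkProtocol(event) {
  const allowed = ['SFTP', 'FTPS']; // example policy: no unencrypted FTP
  return allowed.includes(event.protocol);
}
```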

## Lambda functions for authentication
<a name="authentication-lambda-examples"></a>

To implement different authentication strategies, edit the Lambda function. To help you meet your application's needs, you can deploy a CloudFormation stack. For more information about Lambda, see the [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) or [ Building Lambda functions with Node.js](https://docs.aws.amazon.com/lambda/latest/dg/lambda-nodejs.html).

**Topics**
+ [Valid Lambda values](#lambda-valid-values)
+ [Example Lambda functions](#lambda-auth-examples)
+ [Testing your configuration](#authentication-test-configuration)
+ [Lambda function templates](#lambda-idp-templates)

### Valid Lambda values
<a name="lambda-valid-values"></a>

The following table describes details for the values that Transfer Family accepts for Lambda functions that are used for custom identity providers.


|  Value  |  Description  |  Required  | 
| --- | --- | --- | 
|  `Role`  |  Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests. For details on establishing a trust relationship, see [To establish a trust relationship](requirements-roles.md#establish-trust-transfer).  |  Required  | 
|  `PosixProfile`  |  The full POSIX identity, including user ID (`Uid`), group ID (`Gid`), and any secondary group IDs (`SecondaryGids`), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.  |  Required for Amazon EFS backing storage  | 
|  `PublicKeys`  |  A list of SSH public key values that are valid for this user. An empty list implies that this is not a valid login. Must not be returned during password authentication.  |  Optional  | 
|  `Policy`  |  A session policy for your user so that you can use the same IAM role across multiple users. This policy scopes down user access to portions of their Amazon S3 bucket. For more information about using session policies with custom identity providers, see the session policy examples in this topic.  |  Optional  | 
|  `HomeDirectoryType`  |  The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/transfer/latest/userguide/custom-lambda-idp.html)  |  Optional  | 
|  `HomeDirectoryDetails`  |  Logical directory mappings that specify which Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the `Entry` and `Target` pair, where `Entry` shows how the path is made visible and `Target` is the actual Amazon S3 or Amazon EFS path.  |  Required if `HomeDirectoryType` has a value of `LOGICAL`  | 
|  `HomeDirectory`  |  The landing directory for a user when they log in to the server using the client. The format depends on your storage backend: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/transfer/latest/userguide/custom-lambda-idp.html)  The bucket name or Amazon EFS file system ID must be included in the path. Omitting this information will result in "File not found" errors during file transfers.   |  Optional  | 

**Note**  
`HomeDirectoryDetails` is a string representation of a JSON map. This is in contrast to `PosixProfile`, which is an actual JSON map object, and `PublicKeys` which is a JSON array of strings. See the code examples for the language-specific details.
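The type differences called out in this note can be seen in a fragment like the following (the role ARN, file system ID, and key value are placeholders):

```javascript
// HomeDirectoryDetails must be a JSON *string*, whereas PosixProfile is a
// JSON map object and PublicKeys is a JSON array of strings.
const response = {
  Role: 'arn:aws:iam::123456789012:role/transfer-access-role',
  HomeDirectoryType: 'LOGICAL',
  HomeDirectoryDetails: JSON.stringify([{ Entry: '/', Target: '/fs-faa1a123' }]),
  PosixProfile: { Uid: 65534, Gid: 65534 },
  PublicKeys: ['ssh-rsa abcdef0123456789'],
};
```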

**HomeDirectory format requirements**  
When you use the `HomeDirectory` parameter, make sure that you include the complete path format:
+ **For Amazon S3 storage:** Always include the bucket name, in the format `/bucket-name/path`.
+ **For Amazon EFS storage:** Always include the file system ID, in the format `/fs-12345/path`.

A common cause of "File not found" errors is omitting the bucket name or EFS file system ID from the `HomeDirectory` path. Setting `HomeDirectory` to just `/` without the storage identifier causes authentication to succeed but file operations to fail.

### Example Lambda functions
<a name="lambda-auth-examples"></a>

This section presents some example Lambda functions, in both NodeJS and Python.

**Note**  
In these examples, the user, role, POSIX profile, password, and home directory details are all examples, and must be replaced with your actual values.

------
#### [ Logical home directory, NodeJS ]

The following NodeJS example function provides the details for a user that has a [logical home directory](https://docs.aws.amazon.com/transfer/latest/userguide/logical-dir-mappings.html). 

```
// GetUserConfig Lambda

exports.handler = (event, context, callback) => {
  console.log("Username:", event.username, "ServerId: ", event.serverId);

  var response;
  // Check if the username presented for authentication is correct. This doesn't check the value of the server ID, only that it is provided.
  if (event.serverId !== "" && event.username == 'example-user') {
    var homeDirectoryDetails = [
      {
        Entry: "/",
        Target: "/fs-faa1a123"
      }
    ];
    response = {
      Role: 'arn:aws:iam::123456789012:role/transfer-access-role', // The user is authenticated if and only if the Role field is not blank
      PosixProfile: {"Gid": 65534, "Uid": 65534}, // Required for EFS access, but not needed for S3
      HomeDirectoryDetails: JSON.stringify(homeDirectoryDetails),
      HomeDirectoryType: "LOGICAL",
    };

    // Check if password is provided
    if (!event.password) {
      // If no password provided, return the user's SSH public key
      response['PublicKeys'] = [ "ssh-rsa abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789" ];
    // Check if password is correct
    } else if (event.password !== 'Password1234') {
      // Return HTTP status 200 but with no role in the response to indicate authentication failure
      response = {};
    }
  } else {
    // Return HTTP status 200 but with no role in the response to indicate authentication failure
    response = {};
  }
  callback(null, response);
};
```

------
#### [ Path-based home directory, NodeJS ]

The following NodeJS example function provides the details for a user that has a path-based home directory. 

```
// GetUserConfig Lambda

exports.handler = (event, context, callback) => {
  console.log("Username:", event.username, "ServerId: ", event.serverId);

  var response;
  // Check if the username presented for authentication is correct. This doesn't check the value of the server ID, only that it is provided.
  // There is also event.protocol (one of "FTP", "FTPS", "SFTP") and event.sourceIp (e.g., "127.0.0.1") to further restrict logins.
  if (event.serverId !== "" && event.username == 'example-user') {
    response = {
      Role: 'arn:aws:iam::123456789012:role/transfer-access-role', // The user is authenticated if and only if the Role field is not blank
      Policy: '', // Optional, JSON stringified blob to further restrict this user's permissions
      // HomeDirectory format depends on your storage backend:
      // For S3: '/bucket-name/user-home-directory' (e.g., '/my-transfer-bucket/users/john')
      // For EFS: '/fs-12345/user-home-directory' (e.g., '/fs-faa1a123/users/john')
      HomeDirectory: '/my-transfer-bucket/users/example-user' // S3 example - replace with your bucket name
      // HomeDirectory: '/fs-faa1a123/users/example-user' // EFS example - uncomment for EFS
    };
    
    // Check if password is provided
    if (!event.password) {
      // If no password provided, return the user's SSH public key
      response['PublicKeys'] = [ "ssh-rsa abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789" ];
    // Check if password is correct
    } else if (event.password !== 'Password1234') {
      // Return HTTP status 200 but with no role in the response to indicate authentication failure
      response = {};
    } 
  } else {
    // Return HTTP status 200 but with no role in the response to indicate authentication failure
    response = {};
  }
  callback(null, response);
};
```

------
#### [ Logical home directory, Python ]

The following Python example function provides the details for a user that has a [logical home directory](https://docs.aws.amazon.com/transfer/latest/userguide/logical-dir-mappings.html). 

```
# GetUserConfig Python Lambda with LOGICAL HomeDirectoryDetails
import json

def lambda_handler(event, context):
  print("Username: {}, ServerId: {}".format(event['username'], event['serverId']))

  response = {}

  # Check if the username presented for authentication is correct. This doesn't check the value of the server ID, only that it is provided.
  if event['serverId'] != '' and event['username'] == 'example-user':
    homeDirectoryDetails = [
      {
        'Entry': '/',
        'Target': '/fs-faa1a123'
      }
    ]
    response = {
      'Role': 'arn:aws:iam::123456789012:role/transfer-access-role', # The user will be authenticated if and only if the Role field is not blank
      'PosixProfile': {"Gid": 65534, "Uid": 65534}, # Required for EFS access, but not needed for S3
      'HomeDirectoryDetails': json.dumps(homeDirectoryDetails),
      'HomeDirectoryType': "LOGICAL"
    }

    # Check if password is provided
    if event.get('password', '') == '':
      # If no password provided, return the user's SSH public key
      response['PublicKeys'] = [ "ssh-rsa abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789" ]
    # Check if password is correct
    elif event['password'] != 'Password1234':
      # Return HTTP status 200 but with no role in the response to indicate authentication failure
      response = {}
  else:
    # Return HTTP status 200 but with no role in the response to indicate authentication failure
    response = {}

  return response
```

------
#### [ Path-based home directory, Python ]

The following Python example function provides the details for a user that has a path-based home directory. 

```
# GetUserConfig Python Lambda with PATH HomeDirectory

def lambda_handler(event, context):
  print("Username: {}, ServerId: {}".format(event['username'], event['serverId']))

  response = {}

  # Check if the username presented for authentication is correct. This doesn't check the value of the server ID, only that it is provided.
  # There is also event.protocol (one of "FTP", "FTPS", "SFTP") and event.sourceIp (e.g., "127.0.0.1") to further restrict logins.
  if event['serverId'] != '' and event['username'] == 'example-user':
    response = {
      'Role': 'arn:aws:iam::123456789012:role/transfer-access-role', # The user will be authenticated if and only if the Role field is not blank
      'Policy': '', #  Optional, JSON stringified blob to further restrict this user's permissions
      # HomeDirectory format depends on your storage backend:
      # For S3: '/bucket-name/user-home-directory' (e.g., '/my-transfer-bucket/users/john')
      # For EFS: '/fs-12345/user-home-directory' (e.g., '/fs-faa1a123/users/john')
      'HomeDirectory': '/my-transfer-bucket/users/example-user', # S3 example - replace with your bucket name
      # 'HomeDirectory': '/fs-faa1a123/users/example-user', # EFS example - uncomment for EFS
      'HomeDirectoryType': "PATH" # Not strictly required, defaults to PATH
    }
    
    # Check if password is provided
    if event.get('password', '') == '':
      # If no password provided, return the user's SSH public key
      response['PublicKeys'] = [ "ssh-rsa abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789" ]
    # Check if password is correct
    elif event['password'] != 'Password1234':
      # Return HTTP status 200 but with no role in the response to indicate authentication failure
      response = {}
  else:
    # Return HTTP status 200 but with no role in the response to indicate authentication failure
    response = {}

  return response
```

------

### Testing your configuration
<a name="authentication-test-configuration"></a>

After you create your custom identity provider, you should test your configuration.

------
#### [ Console ]

**To test your configuration by using the AWS Transfer Family console**

1. Open the [AWS Transfer Family console](https://console.aws.amazon.com/transfer/). 

1. On the **Servers** page, choose your new server, choose **Actions**, and then choose **Test**.

1. Enter the text for **Username** and **Password** that you set when you deployed the CloudFormation stack. If you kept the default options, the username is `myuser` and the password is `MySuperSecretPassword`.

1. Choose the **Server protocol** and enter the IP address for **Source IP**, if you set them when you deployed the CloudFormation stack.

------
#### [ CLI ]

**To test your configuration by using the AWS CLI**

1. Run the [test-identity-provider](https://docs.aws.amazon.com/cli/latest/reference/transfer/test-identity-provider.html) command. Replace each `user input placeholder` with your own information, as described in the subsequent steps.

   ```
   aws transfer test-identity-provider --server-id s-1234abcd5678efgh --user-name myuser --user-password MySuperSecretPassword --server-protocol FTP --source-ip 127.0.0.1
   ```

1. Enter the server ID.

1. Enter the username and password that you set when you deployed the CloudFormation stack. If you kept the default options, the username is `myuser` and the password is `MySuperSecretPassword`.

1. Enter the server protocol and source IP address, if you set them when you deployed the CloudFormation stack.

------

If user authentication succeeds, the test returns an HTTP `StatusCode: 200` response, an empty `Message` field (`Message: ""`; on failure, this field contains the reason), and a `Response` field.

**Note**  
 In the response example below, the `Response` field is a JSON object that has been "stringified" (converted into a flat JSON string that can be used inside a program), and contains the details of the user's roles and permissions.

```
{
    "Response":"{\"Policy\":\"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Sid\\\":\\\"ReadAndListAllBuckets\\\",\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":[\\\"s3:ListAllMyBuckets\\\",\\\"s3:GetBucketLocation\\\",\\\"s3:ListBucket\\\",\\\"s3:GetObjectVersion\\\"],\\\"Resource\\\":\\\"*\\\"}]}\",\"Role\":\"arn:aws:iam::000000000000:role/MyUserS3AccessRole\",\"HomeDirectory\":\"/\"}",
    "StatusCode": 200,
    "Message": ""
}
```
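Because both layers are stringified, a program consuming this output calls `json.loads` twice, once for `Response` and once for the embedded `Policy`. The following sketch reconstructs a simplified version of the example output above and unwraps it:

```python
import json

# Policy is stringified once, and the whole user configuration is
# stringified again into the Response field.
policy = {"Version": "2012-10-17", "Statement": []}
user_config = {
    "Policy": json.dumps(policy),  # inner layer
    "Role": "arn:aws:iam::000000000000:role/MyUserS3AccessRole",
    "HomeDirectory": "/",
}
cli_output = {"Response": json.dumps(user_config), "StatusCode": 200, "Message": ""}

# Unwrap both layers:
config = json.loads(cli_output["Response"])
inner_policy = json.loads(config["Policy"])
```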

### Lambda function templates
<a name="lambda-idp-templates"></a>

You can deploy a CloudFormation stack that uses a Lambda function for authentication. We provide several templates that authenticate and authorize your users by using sign-in credentials. You can modify these templates or the AWS Lambda code to further customize user access.

**Note**  
You can create a FIPS-enabled AWS Transfer Family server through CloudFormation by specifying a FIPS-enabled security policy in your template. Available security policies are described in [Security policies for AWS Transfer Family servers](security-policies.md).

**To create a CloudFormation stack to use for authentication**

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Follow the instructions for deploying a CloudFormation stack from an existing template in [Selecting a stack template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-console-create-stack-template.html) in the *AWS CloudFormation User Guide*.

1. Use one of the following templates to create a Lambda function to use for authentication in Transfer Family. 
   + [Classic (Amazon Cognito) stack template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-basic-lambda-cognito-s3.template.yml)

     A basic template for creating an AWS Lambda function for use as a custom identity provider in AWS Transfer Family. It authenticates against Amazon Cognito for password-based authentication; for public key-based authentication, public keys are returned from an Amazon S3 bucket. After deployment, you can modify the Lambda function code to do something different.
   + [AWS Secrets Manager stack template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-secrets-manager-lambda.template.yml)

     A basic template that uses AWS Lambda with an AWS Transfer Family server to integrate Secrets Manager as an identity provider. It authenticates against an entry in AWS Secrets Manager of the format `aws/transfer/server-id/username`. Additionally, the secret must hold the key-value pairs for all user properties returned to Transfer Family. After deployment, you can modify the Lambda function code to do something different.
   + [Okta stack template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-okta-lambda.template.yml): A basic template that uses AWS Lambda with an AWS Transfer Family server to integrate Okta as a custom identity provider.
   + [Okta-mfa stack template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-okta-mfa-lambda.template.yml): A basic template that uses AWS Lambda with an AWS Transfer Family server to integrate Okta, with Multi-Factor Authentication, as a custom identity provider.
   + [Azure Active Directory template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-basic-lambda-azure-ad.template.yml): details for this stack are described in the blog post [Authenticating to AWS Transfer Family with Azure Active Directory and AWS Lambda](https://aws.amazon.com/blogs/storage/authenticating-to-aws-transfer-family-with-azure-active-directory-and-aws-lambda/).

   After the stack has been deployed, you can view details about it on the **Outputs** tab in the CloudFormation console.

   Deploying one of these stacks is the easiest way to integrate a custom identity provider into the Transfer Family workflow.

# Using Amazon API Gateway to integrate your identity provider
<a name="authentication-api-gateway"></a>

This topic describes how to use an AWS Lambda function to back an API Gateway method. Use this option if you need a RESTful API to integrate your identity provider or if you want to use AWS WAF to leverage its capabilities for geo-blocking or rate-limiting requests.

For most use cases, the recommended way to configure a custom identity provider is to use the [Custom identity provider solution](custom-idp-toolkit.md).

**Limitations if using an API Gateway to integrate your identity provider**
+ This configuration does not support custom domains.
+ This configuration does not support a private API Gateway URL.

If you need either of these, you can use Lambda as an identity provider, without API Gateway. For details, see [Using AWS Lambda to integrate your identity provider](custom-lambda-idp.md).

## Authenticating using an API Gateway method
<a name="authentication-custom-ip"></a>

You can create an API Gateway method for use as an identity provider for Transfer Family. This approach provides a highly secure way for you to create and provide APIs. With API Gateway, you can create an HTTPS endpoint so that all incoming API operations are transmitted with greater security. For more details about the API Gateway service, see the [API Gateway Developer Guide](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html).

API Gateway offers an authorization method named `AWS_IAM`, which gives you the same authentication based on AWS Identity and Access Management (IAM) that AWS uses internally. If you enable authentication with `AWS_IAM`, only callers with explicit permissions to call an API can reach that API's API Gateway method.

To use your API Gateway method as a custom identity provider for Transfer Family, enable IAM for your API Gateway method. As part of this process, you provide an IAM role with permissions for Transfer Family to use your gateway.

**Note**  
To improve security, you can configure a web application firewall. AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway. For details, see [Add a web application firewall](web-application-firewall.md).

**Do not enable API Gateway caching**  
Do not enable caching for your API Gateway method when using it as a custom identity provider for Transfer Family. Caching is inappropriate and invalid for authentication requests for the following reasons:  
+ Each authentication request is unique and requires a live response, not a cached response.
+ Caching provides no benefit, because Transfer Family never sends duplicate or repeated requests to the API Gateway.
+ Enabling caching causes the API Gateway to respond with mismatched data, resulting in invalid responses to authentication requests.

**To use your API Gateway method for custom authentication with Transfer Family**

1. Create a CloudFormation stack. To do this:
**Note**  
The stack templates have been updated to use BASE64-encoded passwords: for details, see [Improvements to the CloudFormation templates](#base64-templates).

   1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

   1. Follow the instructions for deploying a CloudFormation stack from an existing template in [Selecting a stack template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-console-create-stack-template.html) in the *AWS CloudFormation User Guide*.

   1. Use one of the following basic templates to create an AWS Lambda-backed API Gateway method for use as a custom identity provider in Transfer Family.
      + [Basic stack template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-basic-apig.template.yml)

        By default, your API Gateway method is used as a custom identity provider to authenticate a single user in a single server using a hard-coded SSH (Secure Shell) key or password. After deployment, you can modify the Lambda function code to do something different.
      + [AWS Secrets Manager stack template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-secrets-manager-apig.template.yml)

        By default, your API Gateway method authenticates against an entry in Secrets Manager of the format `aws/transfer/server-id/username`. Additionally, the secret must hold the key-value pairs for all user properties returned to Transfer Family. After deployment, you can modify the Lambda function code to do something different. For more information, see the blog post [Enable password authentication for AWS Transfer Family using AWS Secrets Manager](https://aws.amazon.com/blogs/storage/enable-password-authentication-for-aws-transfer-family-using-aws-secrets-manager-updated/).
      + [Okta stack template](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-okta-apig.template.yml)

        Your API Gateway method integrates with Okta as a custom identity provider in Transfer Family. For more information, see the blog post [Using Okta as an identity provider with AWS Transfer Family](https://aws.amazon.com/blogs/storage/using-okta-as-an-identity-provider-with-aws-transfer-for-sftp/).

   Deploying one of these stacks is the easiest way to integrate a custom identity provider into the Transfer Family workflow. Each stack uses the Lambda function to support your API method based on API Gateway. You can then use your API method as a custom identity provider in Transfer Family. By default, the Lambda function authenticates a single user called `myuser` with a password of `MySuperSecretPassword`. After deployment, you can edit these credentials or update the Lambda function code to do something different.
**Important**  
We recommend that you edit the default user and password credentials.

   After the stack has been deployed, you can view details about it on the **Outputs** tab in the CloudFormation console. These details include the stack's Amazon Resource Name (ARN), the ARN of the IAM role that the stack created, and the URL for your new gateway.
**Note**  
If you are using the custom identity provider option to enable password–based authentication for your users, and you enable the request and response logging provided by API Gateway, API Gateway logs your users' passwords to your Amazon CloudWatch Logs. We don't recommend using this log in your production environment. For more information, see [Set up CloudWatch API logging in API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html) in the *API Gateway Developer Guide*.

1. Check the API Gateway method configuration for your server. To do this:

   1. Open the API Gateway console at [https://console.aws.amazon.com/apigateway/](https://console.aws.amazon.com/apigateway/). 

   1. Choose the **Transfer Custom Identity Provider basic template API** that the CloudFormation template generated. You might need to select your region to see your gateways.

   1. In the **Resources** pane, choose **GET**. The following screenshot shows the correct method configuration.  
![\[API configuration details, showing the method configuration parameters for the Request Paths and the for the URL Query String.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/apig-config-method-fields.png)

   At this point, your API gateway is ready to be deployed.

1. For **Actions**, choose **Deploy API**. For **Deployment stage**, choose **prod**, and then choose **Deploy**.

   After the API Gateway method is successfully deployed, view its performance in **Stages** > **Stage details**, as shown in the following screenshot.
**Note**  
Copy the **Invoke URL** address that appears at the top of the screen. You might need it for the next step.  
![\[Stage details with the Invoke URL highlighted.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/apig-config-method-invoke.png)

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. A Transfer Family server should have been created for you when you created the stack. If not, configure your server by using the following steps.

   1. Choose **Create server** to open the **Create server** page. For **Choose an identity provider**, choose **Custom**, then select **Use Amazon API Gateway to connect to your identity provider**, as shown in the following screenshot.  
![\[The identity provider screen with Custom Identity Provider selected, and with the API Gateway chosen for connecting to your identity provider.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-server-choose-idp-custom.png)

   1. In the **Provide an Amazon API Gateway URL** text box, paste the **Invoke URL** address of the API Gateway endpoint that you created in step 3 of this procedure.

   1. For **Role**, choose the IAM role that was created by the CloudFormation template. This role allows Transfer Family to invoke your API gateway method.

      The invocation role contains the CloudFormation stack name that you selected for the stack that you created in step 1. It has the following format: `CloudFormation-stack-name-TransferIdentityProviderRole-ABC123DEF456GHI`.

   1. Fill in the remaining boxes, and then choose **Create server**. For details on the remaining steps for creating a server, see [Configuring an SFTP, FTPS, or FTP server endpoint](tf-server-endpoint.md).

## Implementing your API Gateway method
<a name="authentication-api-method"></a>

To create a custom identity provider for Transfer Family, your API must implement a single method that has a resource path of `/servers/serverId/users/username/config`. The `serverId` and `username` values come from the RESTful resource path. Also, add `sourceIp` and `protocol` as **URL Query String Parameters** in the **Method Request**, as shown in the following image.

![\[The Resources screen of the API Gateway showing the GET method details.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/apig-config-method-request.png)


**Note**  
The username must be a minimum of 3 and a maximum of 100 characters. You can use the following characters in the username: a–z, A–Z, 0–9, underscore '_', hyphen '-', period '.', and at sign '@'. The username can't start with a hyphen '-', period '.', or at sign '@'.
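As a sketch, these username rules can be expressed as a regular expression (an illustrative check, not the service's own validation code):

```python
import re

# First character: letter, digit, or underscore (not hyphen, period, or at sign).
# Remaining 2 to 99 characters: the full allowed set, for 3-100 characters total.
USERNAME_RE = re.compile(r"[a-zA-Z0-9_][a-zA-Z0-9_.@-]{2,99}")

def is_valid_username(name: str) -> bool:
    return USERNAME_RE.fullmatch(name) is not None
```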

If Transfer Family attempts password authentication for your user, the service supplies a `Password:` header field. In the absence of a `Password:` header, Transfer Family attempts public key authentication to authenticate your user.

When you are using an identity provider to authenticate and authorize end users, in addition to validating their credentials, you can allow or deny access requests based on the IP addresses of the clients used by your end users. You can use this feature to ensure that data stored in your S3 buckets or your Amazon EFS file system can be accessed over the supported protocols only from IP addresses that you have specified as trusted. To enable this feature, you must include `sourceIp` in the Query string.
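For illustration, an allow-list check on the `sourceIp` value of the kind described above might look like the following sketch (the CIDR ranges are placeholders):

```python
import ipaddress

# Illustrative trusted networks; replace with your own CIDR ranges.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_trusted(source_ip: str) -> bool:
    # source_ip is the sourceIp query string value supplied by Transfer Family.
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)
```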

If you have multiple protocols enabled for your server and want to provide access using the same username over multiple protocols, you can do so as long as the credentials specific to each protocol have been set up in your identity provider. To enable this feature, you must include the `protocol` value in the RESTful resource path.
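Putting the resource path and query string together, a request URL for this method could be built as in the following sketch (the invoke URL and all values are placeholders; the password, when used, travels in a header rather than the URL):

```python
from urllib.parse import quote, urlencode

def build_config_request(invoke_url: str, server_id: str, username: str,
                         protocol: str, source_ip: str) -> str:
    # Resource path: /servers/{serverId}/users/{username}/config
    path = f"/servers/{server_id}/users/{quote(username, safe='')}/config"
    # sourceIp and protocol travel as query string parameters.
    query = urlencode({"protocol": protocol, "sourceIp": source_ip})
    return f"{invoke_url.rstrip('/')}{path}?{query}"

url = build_config_request(
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod",  # placeholder invoke URL
    "s-1234abcd5678efgh", "example-user", "SFTP", "203.0.113.5")
```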

Your API Gateway method should always return HTTP status code `200`. Any other HTTP status code means that there was an error accessing the API.

**Amazon S3 example response**  
The example response body is a JSON document of the following form for Amazon S3.

```
{
 "Role": "IAM role with configured S3 permissions",
 "PublicKeys": [
     "ssh-rsa public-key1",
     "ssh-rsa public-key2"
  ],
 "Policy": "STS Assume role session policy",
 "HomeDirectory": "/amzn-s3-demo-bucket/path/to/home/directory"
}
```

**Note**  
 The policy is escaped JSON as a string. For example:   


```
"Policy":
"{
  \"Version\": \"2012-10-17\",
  \"Statement\":
     [
     {\"Condition\":
        {\"StringLike\":
            {\"s3:prefix\":
               [\"user/*\", \"user/\"]}},
     \"Resource\": \"arn:aws:s3:::amzn-s3-demo-bucket\",
     \"Action\": \"s3:ListBucket\",
     \"Effect\": \"Allow\",
     \"Sid\": \"ListHomeDir\"},
     {\"Resource\": \"arn:aws:s3:::*\",
        \"Action\": [\"s3:PutObject\",
        \"s3:GetObject\",
        \"s3:DeleteObjectVersion\",
        \"s3:DeleteObject\",
        \"s3:GetObjectVersion\",
        \"s3:GetObjectACL\",
        \"s3:PutObjectACL\"],
     \"Effect\": \"Allow\",
     \"Sid\": \"HomeDirObjectAccess\"}]
}"
```

The following example response shows that a user has a logical home directory type.

```
{
   "Role": "arn:aws:iam::123456789012:role/transfer-access-role-s3",
   "HomeDirectoryType":"LOGICAL",
   "HomeDirectoryDetails":"[{\"Entry\":\"/\",\"Target\":\"/amzn-s3-demo-bucket1\"}]",
   "PublicKeys":[""]
}
```

**Amazon EFS example response**  
The example response body is a JSON document of the following form for Amazon EFS.

```
{
 "Role": "IAM role with configured EFS permissions",
 "PublicKeys": [
     "ssh-rsa public-key1",
     "ssh-rsa public-key2"
  ],
 "PosixProfile": {
   "Uid": "POSIX user ID",
   "Gid": "POSIX group ID",
   "SecondaryGids": [Optional list of secondary group IDs]
 },
 "HomeDirectory": "/fs-id/path/to/home/directory"
}
```

The `Role` field shows that successful authentication occurred. When doing password authentication (when you supply a `Password:` header), you don't need to provide SSH public keys. If a user can't be authenticated, for example, if the password is incorrect, your method should return a response without `Role` set. An example of such a response is an empty JSON object.

 The following example response shows a user that has a logical home directory type. 

```
{
    "Role": "arn:aws:iam::123456789012:role/transfer-access-role-efs",
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryDetails":"[{\"Entry\":\"/\",\"Target\":\"/faa1a123\"}]",
    "PublicKeys":[""],
    "PosixProfile":{"Uid":65534,"Gid":65534}
}
```

You can include user policies in the Lambda function in JSON format. For more information about configuring user policies in Transfer Family, see [Managing access controls](users-policies.md).

## Default Lambda function
<a name="authentication-lambda-examples-default"></a>

To implement different authentication strategies, edit the Lambda function that your gateway uses. To help you meet your application's needs, you can use the following example Lambda functions in Node.js. For more information about Lambda, see the [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) or [ Building Lambda functions with Node.js](https://docs.aws.amazon.com/lambda/latest/dg/lambda-nodejs.html).

The following example Lambda function takes your username, password (if you're performing password authentication), server ID, protocol, and client IP address. You can use a combination of these inputs to look up your identity provider and determine if the login should be accepted.

**Note**  
If you have multiple protocols enabled for your server and want to provide access using the same username over multiple protocols, you can do so as long as the credentials specific to the protocol have been set up in your identity provider.  
For File Transfer Protocol (FTP), we recommend maintaining separate credentials from Secure Shell (SSH) File Transfer Protocol (SFTP) and File Transfer Protocol over SSL (FTPS). We recommend maintaining separate credentials for FTP because, unlike SFTP and FTPS, FTP transmits credentials in clear text. By isolating FTP credentials from SFTP or FTPS, if FTP credentials are shared or exposed, your workloads using SFTP or FTPS remain secure.

This example function returns the role and logical home directory details, along with the public keys (if it performs public key authentication).

When you create service-managed users, you set their home directory, either logical or physical. Similarly, the Lambda function's results must convey the desired physical or logical directory structure for the user. The parameters that you set depend on the value of the [`HomeDirectoryType`](https://docs.aws.amazon.com//transfer/latest/APIReference/API_CreateUser.html#TransferFamily-CreateUser-request-HomeDirectoryType) field.
+ `HomeDirectoryType` set to `PATH` – The `HomeDirectory` field must then be an absolute Amazon S3 bucket prefix or an absolute Amazon EFS path that is visible to your users.
+ `HomeDirectoryType` set to `LOGICAL` – Do *not* set a `HomeDirectory` field. Instead, set a `HomeDirectoryDetails` field that provides the desired Entry/Target mappings, similar to the values described in the [`HomeDirectoryMappings`](https://docs.aws.amazon.com//transfer/latest/APIReference/API_CreateUser.html#TransferFamily-CreateUser-request-HomeDirectoryMappings) parameter for service-managed users.
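The two cases above can be sketched as response bodies (all values are placeholders):

```python
import json

# HomeDirectoryType PATH: HomeDirectory is an absolute path that includes the
# bucket name (S3) or file system ID (EFS).
path_response = {
    "Role": "arn:aws:iam::123456789012:role/transfer-access-role",
    "HomeDirectoryType": "PATH",
    "HomeDirectory": "/my-transfer-bucket/users/example-user",
}

# HomeDirectoryType LOGICAL: no HomeDirectory field; HomeDirectoryDetails
# carries the Entry/Target mappings as a stringified JSON array.
logical_response = {
    "Role": "arn:aws:iam::123456789012:role/transfer-access-role",
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryDetails": json.dumps(
        [{"Entry": "/", "Target": "/my-transfer-bucket/users/example-user"}]),
}
```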

The example functions are listed in [Example Lambda functions](custom-lambda-idp.md#lambda-auth-examples).

## Lambda function for use with AWS Secrets Manager
<a name="authentication-lambda-examples-secrets-mgr"></a>

To use AWS Secrets Manager as your identity provider, you can work with the Lambda function in the sample CloudFormation template. The Lambda function queries the Secrets Manager service with your credentials and, if successful, returns a designated secret. For more information about Secrets Manager, see the [AWS Secrets Manager User Guide](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html).
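The secret layout that the sample templates expect, `aws/transfer/server-id/username` holding key-value pairs for the user properties, can be sketched as follows. The property keys shown here are illustrative; the exact keys depend on the Lambda code in the deployed template:

```python
import json

server_id = "s-1234abcd5678efgh"  # placeholder server ID
username = "example-user"         # placeholder username

# Secret name follows the aws/transfer/server-id/username convention.
secret_name = f"aws/transfer/{server_id}/{username}"

# Key-value pairs for the user properties returned to Transfer Family
# (illustrative keys and values).
secret_value = json.dumps({
    "Password": "MySuperSecretPassword",
    "Role": "arn:aws:iam::123456789012:role/transfer-access-role",
    "HomeDirectory": "/my-transfer-bucket/users/example-user",
})

# With boto3, this could then be stored as, for example:
# boto3.client("secretsmanager").create_secret(Name=secret_name, SecretString=secret_value)
```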

To download a sample CloudFormation template that uses this Lambda function, go to the [Amazon S3 bucket provided by AWS Transfer Family](https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-secrets-manager-apig.template.yml).

## Improvements to the CloudFormation templates
<a name="base64-templates"></a>

The published CloudFormation templates have been updated to improve the API Gateway integration. The templates now use BASE64-encoded passwords with API Gateway. Your existing deployments continue to work without this enhancement, but they don't allow passwords that contain characters outside the basic US-ASCII character set.

The changes in the template that enable this capability are as follows:
+ The `GetUserConfigRequest AWS::ApiGateway::Method` resource must have the following `RequestTemplates` code (the `password` line is the updated line):

  ```
  RequestTemplates:
    application/json: |
      {
        "username": "$util.urlDecode($input.params('username'))",
        "password": "$util.escapeJavaScript($util.base64Decode($input.params('PasswordBase64'))).replaceAll("\\'","'")",
        "protocol": "$input.params('protocol')",
        "serverId": "$input.params('serverId')",
        "sourceIp": "$input.params('sourceIp')"
      }
  ```
+ The `RequestParameters` for the `GetUserConfig` resource must change to use the `PasswordBase64` header (the `PasswordBase64` line is the updated line):

  ```
  RequestParameters:
     method.request.header.PasswordBase64: false
     method.request.querystring.protocol: false
     method.request.querystring.sourceIp: false
  ```
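
As a quick illustration of why the Base64 change matters, the following sketch shows how a caller might populate the `PasswordBase64` header value. Base64 encoding lets the password travel as US-ASCII text even when it contains non-ASCII characters:

```
import base64

password = "pässwörd!"  # contains non-ASCII characters

# Encode the UTF-8 bytes of the password as Base64 ASCII text
encoded = base64.b64encode(password.encode("utf-8")).decode("ascii")

# The API Gateway template decodes it with $util.base64Decode(...)
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == password
```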

**To check if the template for your stack is the latest**

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. From the list of stacks, choose your stack.

1. From the details panel, choose the **Template** tab.

1. Look for the following:
   + Search for `RequestTemplates`, and make sure you have this line:

     ```
     "password": "$util.escapeJavaScript($util.base64Decode($input.params('PasswordBase64'))).replaceAll("\\'","'")",
     ```
   + Search for `RequestParameters`, and make sure you have this line:

     ```
     method.request.header.PasswordBase64: false
     ```

If you don't see the updated lines, edit your stack. For details on how to update your CloudFormation stack, see [Modifying a stack template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-get-template.html) in the *AWS CloudFormation User Guide*.

# Using multiple authentication methods
<a name="custom-idp-mfa"></a>

The Transfer Family server controls the AND logic when you use multiple authentication methods. Transfer Family treats this as two separate requests to your custom identity provider; however, their effects are combined.

Both requests must return successfully with the correct response for authentication to complete. Transfer Family requires both responses to be complete, meaning that they contain all of the required elements (role, home directory, policy, and the POSIX profile if you're using Amazon EFS for storage). Transfer Family also requires that the password response not include public keys.

The public key request must have a separate response from the identity provider. That behavior is unchanged when using **Password OR Key** or **Password AND Key**.

The SSH/SFTP protocol challenges the client first with public key authentication, and then requests password authentication. Both must succeed before the user is allowed to complete authentication.

For custom identity provider options, you can specify any of the following options for how to authenticate.
+ **Password OR Key** – users can authenticate with either their password or their key. This is the default value.
+ **Password ONLY** – users must provide their password to connect.
+ **Key ONLY** – users must provide their private key to connect.
+ **Password AND Key** – users must provide both their private key and their password to connect. The server checks the key first, and then if the key is valid, the system prompts for a password. If the private key provided does not match the public key that is stored, authentication fails.
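
To tie these rules together, the following is a minimal sketch of a custom identity provider Lambda function that returns public keys only for key-based requests and omits them from password responses. The request field names follow the format shown in the template earlier in this topic; the in-memory user store and its values are hypothetical, for illustration only:

```
# Hypothetical user store, for illustration only
USERS = {
    "alice": {
        "password": "correct horse battery staple",
        "public_keys": ["ssh-rsa AAAAB3Nza-example-public-key"],
        "role": "arn:aws:iam::123456789012:role/transfer-access-role",
        "home": "/amzn-s3-demo-bucket/alice",
    }
}

def lambda_handler(event, context):
    user = USERS.get(event.get("username", ""))
    if user is None:
        return {}  # an empty response denies access

    config = {"Role": user["role"], "HomeDirectory": user["home"]}

    if event.get("password"):
        # Password request: the response must not include public keys
        if event["password"] != user["password"]:
            return {}
        return config

    # Key-based request: return the stored public keys so that the
    # server can check the client's key against them
    config["PublicKeys"] = user["public_keys"]
    return config
```

In a production implementation, you would look up credentials in a secure store and use a constant-time comparison rather than `!=` for password checks.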

# IPv6 support for custom identity providers
<a name="custom-idp-ipv6"></a>

AWS Transfer Family custom identity providers fully support IPv6 connections. When implementing a custom identity provider, your Lambda function can receive and process authentication requests from both IPv4 and IPv6 clients without any additional configuration. The Lambda function receives the client's IP address in the `sourceIp` field of the request, which can be either an IPv4 address (for example, `203.0.113.42`) or an IPv6 address (for example, `2001:db8:85a3:8d3:1319:8a2e:370:7348`). Your custom identity provider implementation should handle both address formats appropriately.

**Important**  
If your custom identity provider performs IP-based validation or logging, ensure your implementation properly handles IPv6 address formats. IPv6 addresses are longer than IPv4 addresses and use a different notation format.

**Note**  
When handling IPv6 addresses in your custom identity provider, ensure you're using proper IPv6 address parsing functions rather than simple string comparisons. IPv6 addresses can be represented in various canonical formats (for example `fd00:b600::ec2` or `fd00:b600:0:0:0:0:0:ec2`). Use appropriate IPv6 address libraries or functions in your implementation language to correctly validate and compare IPv6 addresses.
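
For example, Python's standard `ipaddress` module treats the two notations above as the same address, while a plain string comparison does not:

```
import ipaddress

a = ipaddress.ip_address("fd00:b600::ec2")
b = ipaddress.ip_address("fd00:b600:0:0:0:0:0:ec2")

# A naive string comparison fails even though these are the same address
assert "fd00:b600::ec2" != "fd00:b600:0:0:0:0:0:ec2"

# Proper address parsing compares equal
assert a == b
```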

**Example Handling both IPv4 and IPv6 addresses in a custom identity provider**  

```
import ipaddress

# Example allowed source ranges -- replace with your own networks
ALLOWED_NETWORKS = [
    ipaddress.ip_network('203.0.113.0/24'),  # IPv4 range
    ipaddress.ip_network('2001:db8::/32'),   # IPv6 range
]

def is_ip_allowed(source_ip):
    # ip_address parses both IPv4 and IPv6 notation
    try:
        address = ipaddress.ip_address(source_ip)
    except ValueError:
        return False
    # A version mismatch simply fails the membership test
    return any(address in network for network in ALLOWED_NETWORKS)

def lambda_handler(event, context):
    # Extract the source IP address from the request
    source_ip = event.get('sourceIp', '')

    # Log the client IP address (works for both IPv4 and IPv6)
    print(f"Authentication request from: {source_ip}")

    # IP-based validation that works with both IPv4 and IPv6
    if not is_ip_allowed(source_ip):
        # Reject the authentication request
        return {
            "Role": "",
            "HomeDirectory": "",
            "Status": "DENIED"
        }

    # Continue with authentication
    # ...
```

For more information about implementing custom identity providers, see [Using AWS Lambda to integrate your identity provider](custom-lambda-idp.md).

# Using AWS Directory Service for Microsoft Active Directory
<a name="directory-services-users"></a>

You can use AWS Transfer Family to authenticate your file transfer end users using AWS Directory Service for Microsoft Active Directory. It enables seamless migration of file transfer workflows that rely on Active Directory authentication without changing end users’ credentials or needing a custom authorizer. 

With AWS Managed Microsoft AD, you can securely provide Directory Service users and groups access over SFTP, FTPS, and FTP for data stored in Amazon Simple Storage Service (Amazon S3) or Amazon Elastic File System (Amazon EFS). If you use Active Directory to store your users’ credentials, you now have an easier way to enable file transfers for these users. 

You can provide access to Active Directory groups in AWS Managed Microsoft AD in your on-premises environment or in the AWS Cloud using Active Directory connectors. You can give users that are already configured in your Microsoft Windows environment, either in the AWS Cloud or in their on-premises network, access to an AWS Transfer Family server that uses AWS Managed Microsoft AD for identity. The AWS storage blog contains a post that details a solution for using Active Directory with Transfer Family: [Simplify Active Directory authentication with a custom identity provider for AWS Transfer Family](https://aws.amazon.com/blogs/storage/simplify-active-directory-authentication-with-a-custom-identity-provider-for-aws-transfer-family/).

**Note**  
AWS Transfer Family does not support Simple AD.  
Transfer Family does not support cross-Region Active Directory configurations; we support only Active Directory integrations that are in the same AWS Region as the Transfer Family server.  
Transfer Family does not support using either AWS Managed Microsoft AD or AD Connector to enable multi-factor authentication (MFA) for your existing RADIUS-based MFA infrastructure.  
AWS Transfer Family does not support replicated Regions of AWS Managed Microsoft AD.

To use AWS Managed Microsoft AD, you must perform the following steps:

1. Create one or more AWS Managed Microsoft AD directories using the Directory Service console.

1. Use the Transfer Family console to create a server that uses AWS Managed Microsoft AD as its identity provider. 

1. Set up AWS Directory Service using an AD Connector.

1. Add access from one or more of your Directory Service groups. 

1. Although not required, we recommend that you test and verify user access.

**Topics**
+ [Before you start using AWS Directory Service for Microsoft Active Directory](#managed-ad-prereq)
+ [Working with Active Directory realms](#managed-ad-realms)
+ [Choosing AWS Managed Microsoft AD as your identity provider](#managed-ad-identity-provider)
+ [Connecting to on-prem Microsoft Active Directory](#on-prem-ad)
+ [Granting access to groups](#directory-services-grant-access)
+ [Testing users](#directory-services-test-user)
+ [Deleting server access for a group](#directory-services-misc)
+ [Connecting to the server using SSH (Secure Shell)](#directory-services-ssh-procedure)
+ [Connecting AWS Transfer Family to a self-managed Active Directory using forests and trusts](#directory-services-ad-trust)

## Before you start using AWS Directory Service for Microsoft Active Directory
<a name="managed-ad-prereq"></a>

**Note**  
AWS Transfer Family has a default limit of 100 Active Directory groups per server. If your use case requires more than 100 groups, consider using a custom identity provider solution as described in [Simplify Active Directory authentication with a custom identity provider for AWS Transfer Family](https://aws.amazon.com/blogs/storage/simplify-active-directory-authentication-with-a-custom-identity-provider-for-aws-transfer-family/).

### Provide a unique identifier for your AD groups
<a name="add-identifier-adgroups"></a>

Before you can use AWS Managed Microsoft AD, you must provide a unique identifier for each group in your Microsoft AD directory. You can use the security identifier (SID) for each group to do this. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using AWS Transfer Family. 

Use the following Windows PowerShell command to retrieve the SID for a group, replacing *YourGroupName* with the name of the group. 

```
Get-ADGroup -Filter {samAccountName -like "YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
```

**Note**  
If you are using AWS Directory Service as your identity provider, and if `userPrincipalName` and `SamAccountName` have different values, AWS Transfer Family accepts the value in `SamAccountName`. Transfer Family does not accept the value specified in `userPrincipalName`.

### Add Directory Service permissions to your role
<a name="add-active-directory-permissions"></a>

You also need Directory Service API permissions to use AWS Directory Service as your identity provider. The following permissions are required or suggested:
+ `ds:DescribeDirectories` is required for Transfer Family to look up the directory
+ `ds:AuthorizeApplication` is required to add authorization for Transfer Family
+ `ds:UnauthorizeApplication` is suggested to remove any resources that are provisionally created, in case something goes wrong during the server creation process

Add these permissions to the role you are using for creating your Transfer Family servers. For more details on these permissions, see [Directory Service API permissions: Actions, resources, and conditions reference](https://docs.aws.amazon.com//directoryservice/latest/admin-guide/UsingWithDS_IAM_ResourcePermissions.html).
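
As a sketch, the corresponding IAM policy statement might look like the following. Where possible, scope `Resource` to your own directory ARN rather than `*`:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ds:DescribeDirectories",
        "ds:AuthorizeApplication",
        "ds:UnauthorizeApplication"
      ],
      "Resource": "*"
    }
  ]
}
```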

## Working with Active Directory realms
<a name="managed-ad-realms"></a>

 When you are considering how to have your Active Directory users access AWS Transfer Family servers, keep in mind the user's realm, and their group's realm. Ideally, the user's realm and their group's realm should match. That is, both the user and the group are in the default realm, or both are in the trusted realm. If this is not the case, the user cannot be authenticated by Transfer Family.

You can test the user to ensure that the configuration is correct. For details, see [Testing users](#directory-services-test-user). If there is a problem with the user/group realm, you receive the error `No associated access found for user's groups`.

## Choosing AWS Managed Microsoft AD as your identity provider
<a name="managed-ad-identity-provider"></a>

This section describes how to use AWS Directory Service for Microsoft Active Directory with a server.

**To use AWS Managed Microsoft AD with Transfer Family**

1. Sign in to the AWS Management Console and open the Directory Service console at [https://console.aws.amazon.com/directoryservicev2/](https://console.aws.amazon.com/directoryservicev2/).

   Use the Directory Service console to configure one or more managed directories. For more information, see [AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) in the *AWS Directory Service Administration Guide*.  
![\[The Directory Service console showing a list of directories and their details.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/directory-services-AD-list.png)

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/), and choose **Create server**.

1. On the **Choose protocols** page, choose one or more protocols from the list.
**Note**  
If you select **FTPS**, you must provide the AWS Certificate Manager certificate. 

1. For **Choose an identity provider**, choose **AWS Directory Service**.  
![\[Console screenshot showing Choose identity provider section with Directory Service selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-server-choose-idp-directory-services.png)

1. The **Directory** list contains all the managed directories that you have configured. Choose a directory from the list, and choose **Next**.
**Note**  
 Cross-Account and Shared directories are not supported for AWS Managed Microsoft AD. 
To set up a server with Directory Service as your identity provider, you need to add some Directory Service permissions. For details, see [Before you start using AWS Directory Service for Microsoft Active Directory](#managed-ad-prereq).

1. To finish creating the server, use one of the following procedures:
   + [Create an SFTP-enabled server](create-server-sftp.md)
   + [Create an FTPS-enabled server](create-server-ftps.md)
   + [Create an FTP-enabled server](create-server-ftp.md)

   In those procedures, continue with the step that follows choosing an identity provider.

**Important**  
 You can't delete a Microsoft AD directory in Directory Service if you used it in a Transfer Family server. You must delete the server first, and then you can delete the directory. 

## Connecting to on-prem Microsoft Active Directory
<a name="on-prem-ad"></a>

This section describes how to set up a directory in AWS Directory Service using an AD Connector.

**To set up your AWS Directory using AD Connector**

1. Open the [Directory Service](https://console.aws.amazon.com/directoryservicev2/) console and select **Directories**.

1. Select **Set up directory**.

1. For directory type, choose **AD Connector**.

1. Select a directory size, select **Next**, then select your VPC and Subnets.

1. Select **Next**, then fill in the fields as follows:
   + **Directory DNS name**: enter the domain name you are using for your Microsoft Active Directory.
   + **DNS IP addresses**: enter your Microsoft Active Directory IP addresses.
   + **Service account username** and **password**: enter the details for the service account to use.

1. Complete the screens to create the directory service.

The next step is to create a Transfer Family server with the SFTP protocol and an identity provider type of **AWS Directory Service**. From the **Directory** dropdown list, select the directory that you added in the previous procedure.

## Granting access to groups
<a name="directory-services-grant-access"></a>

 After you create the server, you must choose which groups in the directory should have access to upload and download files over the enabled protocols using AWS Transfer Family. You do this by creating an *access*.

**Note**  
AWS Transfer Family has a default limit of 100 Active Directory groups per server. If your use case requires more than 100 groups, consider using a custom identity provider solution as described in [Simplify Active Directory authentication with a custom identity provider for AWS Transfer Family](https://aws.amazon.com/blogs/storage/simplify-active-directory-authentication-with-a-custom-identity-provider-for-aws-transfer-family/).

**Note**  
Users must belong *directly* to the group to which you are granting access. For example, assume that Bob is a user and belongs to groupA, and groupA itself is included in groupB.  
If you grant access to groupA, Bob is granted access.
 If you grant access to groupB (and not to groupA), Bob does not have access.

**To grant access to a group**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. Navigate to your server details page.

1.  In the **Accesses** section, choose **Add access**. 

1.  Enter the SID for the AWS Managed Microsoft AD directory that you want to have access to this server.
**Note**  
For information about how to find the SID for your group, see [Before you start using AWS Directory Service for Microsoft Active Directory](#managed-ad-prereq).

1. For **Access**, choose an AWS Identity and Access Management (IAM) role for the group.

1.  In the **Policy** section, choose a policy. The default setting is **None**. 

1. For **Home directory**, choose an Amazon S3 bucket that corresponds to the group's home directory.
**Note**  
You can limit the portions of the bucket that users see by creating a session policy. For example, to limit users to their own folder under the `/filetest` directory, enter the following text in the box.  

   ```
   /filetest/${transfer:UserName}
   ```
 To learn more about creating a session policy, see [Creating a session policy for an Amazon S3 bucket](users-policies-session.md). 

1.  Choose **Add** to create the association. 


 In the **Accesses** section, the accesses for the server are listed. 

![\[Console showing the Accesses section with the server accesses listed.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/accesses-list.png)
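
The session policy mentioned in the home directory step might look similar to the following sketch. The bucket name `amzn-s3-demo-bucket` is a placeholder; adjust the actions to match what your users need:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "filetest/${transfer:UserName}/*",
            "filetest/${transfer:UserName}"
          ]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/filetest/${transfer:UserName}/*"
    }
  ]
}
```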


## Testing users
<a name="directory-services-test-user"></a>

You can test whether a user has access to the AWS Managed Microsoft AD directory for your server.

**Note**  
A user must be in exactly one group (an external ID) that is listed in the **Access** section of the **Endpoint configuration** page. If the user is in no groups, or is in more than a single group, that user is not granted access.

**To test whether a specific user has access**

1. On the server details page, choose **Actions**, and then choose **Test**.

1. For **Identity provider testing**, enter the sign-in credentials for a user that is in one of the groups that has access. 

1.  Choose **Test**. 

You see a successful identity provider test, showing that the selected user has been granted access to the server.

![\[Console screenshot of the successful identity provider testing response.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/identity-provider-test-success.png)


If the user belongs to more than one group that has access, you receive the following response.

```
{
  "Response": "",
  "StatusCode": 200,
  "Message": "More than one associated access found for user's groups."
}
```

## Deleting server access for a group
<a name="directory-services-misc"></a>

**To delete server access for a group**

1. On the server details page, choose **Actions**, and then choose **Delete Access**.

1. In the dialog box, confirm that you want to remove access for this group.

 When you return to the server details page, you see that the access for this group is no longer listed. 

## Connecting to the server using SSH (Secure Shell)
<a name="directory-services-ssh-procedure"></a>

After you configure your server and users, you can connect to the server using SSH and use the fully qualified username for a user that has access. 

```
sftp user@active-directory-domain@vpc-endpoint
```

For example: `transferuserexample@mycompany.com@vpce-0123456abcdef-789xyz.vpc-svc-987654zyxabc.us-east-1.vpce.amazonaws.com`.

This format scopes the search to the specified domain, limiting the search within a potentially large Active Directory federation. 

**Note**  
You can specify the simple username. However, in this case, the Active Directory code has to search all of the directories in the federation. This search might be limited, and authentication might fail even if the user should have access. 

After authenticating, the user is located in the home directory that you specified when you configured the user.

## Connecting AWS Transfer Family to a self-managed Active Directory using forests and trusts
<a name="directory-services-ad-trust"></a>

Directory Service has the following options available to connect to a self-managed Active Directory:
+ One-way forest trust (outgoing from AWS Managed Microsoft AD and incoming for on-premises Active Directory) works only for the root domain.
+ For child domains, you can use either of the following:
  + Use two-way trust between AWS Managed Microsoft AD and on-premises Active Directory
  + Use one-way external trust to each child domain.

When connecting to the server using a trusted domain, the user needs to specify the trusted domain, for example `transferuserexample@mycompany.com`.

# Using AWS Directory Service for Entra ID Domain Services
<a name="azure-sftp"></a>

 For customers who need SFTP transfers only and don't want to manage a domain, there is Simple Active Directory. Alternatively, customers who want the benefits of Active Directory and high availability in a fully managed service can use AWS Managed Microsoft AD. Finally, customers who want to take advantage of their existing Active Directory forest for SFTP transfers can use Active Directory Connector. 

Note the following:
+ To take advantage of your existing Active Directory forest for your SFTP Transfer needs, you can use [Active Directory Connector](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html).
+ If you want the benefits of Active Directory and high availability in a fully managed service, you can use AWS Directory Service for Microsoft Active Directory. For details, see [Using AWS Directory Service for Microsoft Active Directory](directory-services-users.md).

This topic describes how to use an Active Directory Connector and [Entra ID (formerly Azure AD) Domain Services](https://azure.microsoft.com/en-us/services/active-directory-ds/) to authenticate SFTP Transfer users with Entra ID.

**Topics**
+ [Before you start using AWS Directory Service for Entra ID Domain Services](#azure-prereq)
+ [Step 1: Adding Entra ID Domain Services](#azure-add-adds)
+ [Step 2: Creating a service account](#azure-create-service-acct)
+ [Step 3: Setting up AWS Directory using AD Connector](#azure-setup-directory)
+ [Step 4: Setting up AWS Transfer Family server](#azure-setup-transfer-server)
+ [Step 5: Granting access to groups](#azure-grant-access)
+ [Step 6: Testing users](#azure-test)

## Before you start using AWS Directory Service for Entra ID Domain Services
<a name="azure-prereq"></a>

**Note**  
AWS Transfer Family has a default limit of 100 Active Directory groups per server. If your use case requires more than 100 groups, consider using a custom identity provider solution as described in [Simplify Active Directory authentication with a custom identity provider for AWS Transfer Family](https://aws.amazon.com/blogs/storage/simplify-active-directory-authentication-with-a-custom-identity-provider-for-aws-transfer-family/).

For AWS, you need the following:
+ A virtual private cloud (VPC) in an AWS Region where you are using your Transfer Family servers
+ At least two private subnets in your VPC
+ Internet connectivity for your VPC
+ A customer gateway and a virtual private gateway for the site-to-site VPN connection with Microsoft Entra

For Microsoft Entra, you need the following:
+ An Entra ID and Active directory domain service
+ An Entra resource group
+ An Entra virtual network
+ VPN connectivity between your Amazon VPC and your Entra resource group
**Note**  
This can be through native IPSEC tunnels or using VPN appliances. In this topic, we use IPSEC tunnels between an Entra Virtual network gateway and local network gateway. The tunnels must be configured to allow traffic between your Entra Domain Service endpoints and the subnets that house your AWS VPC.

The following diagram shows the configuration needed before you begin.

![\[Entra/Azure AD and AWS Transfer Family architecture diagram. An AWS VPC connecting to an Entra virtual network over the internet, using an AWS Directory Service connector to the Entra Domain Service.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-architecture.png)


## Step 1: Adding Entra ID Domain Services
<a name="azure-add-adds"></a>

Entra ID does not support domain-joining instances by default. To perform actions like domain join, and to use tools such as Group Policy, administrators must enable Entra ID Domain Services. If you have not already added Entra DS, or if your existing implementation is not associated with the domain that you want your SFTP Transfer server to use, you must add a new instance.

For information about enabling Entra ID Domain Services, see [Tutorial: Create and configure a Microsoft Entra Domain Services managed domain](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-getting-started).

**Note**  
When you enable Entra DS, make sure it is configured for the resource group and the Entra domain to which you are connecting your SFTP Transfer server.

![\[Entra domain services screen showing the resource group bob.us running.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-ad-add-instance.png)


## Step 2: Creating a service account
<a name="azure-create-service-acct"></a>

Entra must have one service account that is part of an Admin group in Entra DS. This account is used with the AWS AD Connector. Make sure that this account is in sync with Entra DS. 

![\[Entra screen showing a profile for a user.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-service-acct.png)


**Tip**  
Multi-factor authentication for Entra ID is not supported for Transfer Family servers that use the SFTP protocol. The Transfer Family server cannot provide the MFA token after a user authenticates to SFTP. Make sure to disable MFA before you attempt to connect.  

![\[Entra multi-factor authentication details, showing the MFA status as disabled for two users.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-ad-mfa-disable.png)


## Step 3: Setting up AWS Directory using AD Connector
<a name="azure-setup-directory"></a>

After you have configured Entra DS, created a service account, and established IPSEC VPN tunnels between your AWS VPC and your Entra virtual network, you can test connectivity by pinging the Entra DS DNS IP address from an Amazon EC2 instance in your VPC.

After you verify that the connection is active, you can continue with the following procedure.

**To set up your AWS Directory using AD Connector**

1. Open the [Directory Service](https://console.aws.amazon.com/directoryservicev2/) console and select **Directories**.

1. Select **Set up directory**.

1. For directory type, choose **AD Connector**.

1. Select a directory size, select **Next**, then select your VPC and Subnets.

1. Select **Next**, then fill in the fields as follows:
   + **Directory DNS name**: enter the domain name that you are using for your Entra DS.
   + **DNS IP addresses**: enter your Entra DS IP addresses.
   + **Service account username** and **password**: enter the details for the service account that you created in *Step 2: Creating a service account*.

1. Complete the screens to create the directory service.

Now the directory status should be **Active**, and it is ready to be used with an SFTP Transfer server.

![\[The Directory Services screen showing one directory with a status of Active, as required.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-connector-ready.png)


## Step 4: Setting up AWS Transfer Family server
<a name="azure-setup-transfer-server"></a>

Create a Transfer Family server with the SFTP protocol and an identity provider type of **AWS Directory Service**. From the **Directory** dropdown list, select the directory that you added in *Step 3: Setting up AWS Directory using AD Connector*.

**Note**  
You can't delete a Microsoft AD directory in AWS Directory Service if you used it in a Transfer Family server. You must delete the server first, and then you can delete the directory. 

## Step 5: Granting access to groups
<a name="azure-grant-access"></a>

 After you create the server, you must choose which groups in the directory should have access to upload and download files over the enabled protocols using AWS Transfer Family. You do this by creating an *access*.

**Note**  
Users must belong *directly* to the group to which you are granting access. For example, assume that Bob is a user and belongs to groupA, and groupA itself is included in groupB.  
If you grant access to groupA, Bob is granted access.
 If you grant access to groupB (and not to groupA), Bob does not have access.

 To grant access, you need to retrieve the SID for the group.

Use the following Windows PowerShell command to retrieve the SID for a group, replacing *YourGroupName* with the name of the group. 

```
Get-ADGroup -Filter {samAccountName -like "YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
```

![\[Windows PowerShell showing an Object SID being retrieved.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-grant-access.png)


**Grant access to groups**

1. Open [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. Navigate to your server details page and in the **Accesses** section, choose **Add access**. 

1. Enter the SID you received from the output of the previous procedure.

1. For **Access**, choose an AWS Identity and Access Management role for the group.

1. In the **Policy** section, choose a policy. The default value is **None**.

1. For **Home directory**, choose an Amazon S3 bucket that corresponds to the group's home directory.

1. Choose **Add** to create the association.

The details from your Transfer server should look similar to the following:

![\[A portion of the Transfer Family server details screen, showing an example Directory ID for the Identity provider.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-assoc-1.png)


![\[A portion of the Transfer Family server details screen, showing the External ID of the active directory in the Accesses portion of the screen.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/azure-assoc-2.png)


## Step 6: Testing users
<a name="azure-test"></a>

You can test whether a user has access to the AWS Managed Microsoft AD directory for your server (see [Testing users](directory-services-users.md#directory-services-test-user)). A user must belong to exactly one group (an external ID) that is listed in the **Access** section of the **Endpoint configuration** page. If the user belongs to no groups, or to more than one group, that user is not granted access.

# Using logical directories to simplify your Transfer Family directory structures
<a name="logical-dir-mappings"></a>

Logical directories simplify your AWS Transfer Family server directory structure. With logical directories, you can create a virtual directory structure with user-friendly names that users navigate when connecting to your Amazon S3 bucket or Amazon EFS file system. This prevents users from seeing the actual directory paths, bucket names, and file system names.

**Note**  
You should use session policies so that your end users can only perform operations that you allow them to perform.  
You should use logical directories to create a user-friendly, virtual directory for your end users and abstract away bucket names. Logical directory mappings only allow users to access their designated logical paths and subdirectories, and forbid relative paths that traverse the logical roots.  
Transfer Family validates every path that might include relative elements and actively blocks these paths from resolving before we pass these paths to Amazon S3; this prevents your users from moving beyond their logical mappings.  
Even though Transfer Family prevents your end users from accessing directories outside of their logical directory, we recommend you also use unique roles or session policies to enforce least privilege at the storage level.

## Understanding chroot and directory structure
<a name="chroot-dir-structure"></a>

A **chroot** operation lets you set a user's root directory to any location in your storage hierarchy. This restricts users to their configured home or root directory, preventing access to higher-level directories.

Consider a case where an Amazon S3 user is limited to `/amzn-s3-demo-bucket/home/${transfer:UserName}`. Without **chroot**, some clients might allow users to move up to `/amzn-s3-demo-bucket/home`, requiring a logout and login to return to their proper directory. Performing a **chroot** operation prevents this issue.

You can create custom directory structures across multiple buckets and prefixes. This is useful if your workflow requires a specific directory layout that bucket prefixes alone can't provide. You can also link to multiple non-contiguous locations within Amazon S3, similar to creating a symbolic link in a Linux file system where your directory path references a different location in the file system.

## Rules for using logical directories
<a name="logical-dir-rules"></a>

This section describes some rules and other considerations for using logical directories.

### Mapping limits
<a name="key-mapping-rules"></a>
+ Only one mapping is allowed when `Entry` is `"/"` (no overlapping paths are allowed).
+ Logical directories support up to 2.1 MB of mappings for custom identity provider and Microsoft AD users, and up to 2,000 entries for service-managed users. You can estimate your mappings size as follows:

  1. Write out a typical mapping in the format `{"Entry":"/entry-path","Target":"/target-path"}`, where `entry-path` and `target-path` are the actual values that you will use.

  1. Count the characters in that string, then add one (1).

  1. Multiply that number by the approximate number of mappings that you have for your server.

  If the number that you estimated in step 3 is less than 2.1 MB, then your mappings are within the acceptable limit.
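
  The estimate in the preceding steps can be automated with a short helper (a sketch only; the 2.1 MB limit is the one stated above, and the sample paths are placeholders):

  ```python
  def estimate_mappings_size(entry_path, target_path, mapping_count):
      """Estimate the total size of a server's logical directory mappings.

      Writes out one typical mapping in the documented format, counts its
      characters plus one, then multiplies by the approximate mapping count.
      """
      sample = '{"Entry":"/%s","Target":"/%s"}' % (entry_path, target_path)
      return (len(sample) + 1) * mapping_count

  # About 2,000 mappings of this shape stay well under the 2.1 MB limit.
  size = estimate_mappings_size("pics", "amzn-s3-demo-bucket1/pics", 2000)
  print(size, size < 2.1 * 1024 * 1024)  # 112000 True
  ```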

### Target path requirements
<a name="target-path"></a>
+ Use the `${transfer:UserName}` variable if the bucket or file system path is parameterized based on the username.
+ Targets can be configured to point to different Amazon S3 buckets or file systems, as long as the associated IAM role has the necessary permissions to access those storage locations.
+ All targets must begin with a forward slash (`/`) but can't end with one. For example, `/amzn-s3-demo-bucket/images` is correct, while `amzn-s3-demo-bucket/images` and `/amzn-s3-demo-bucket/images/` are not.
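
These target path rules can be checked with a small validator (illustrative only; Transfer Family performs its own validation when you save the mapping):

```python
def is_valid_target(path: str) -> bool:
    """Target paths must begin with '/' and must not end with '/'."""
    return path.startswith("/") and not path.endswith("/") and len(path) > 1

print(is_valid_target("/amzn-s3-demo-bucket/images"))   # True
print(is_valid_target("amzn-s3-demo-bucket/images"))    # False: no leading slash
print(is_valid_target("/amzn-s3-demo-bucket/images/"))  # False: trailing slash
```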

### Storage considerations
<a name="storage-considerations"></a>
+ Amazon S3 is an object store where folders exist only as a virtual concept. When using Amazon S3 storage, Transfer Family reports prefixes as directories in STAT operations, even if there is no zero-byte object with a trailing slash. A proper zero-byte object with a trailing slash is also reported as a directory in STAT operations. This behavior is described in [Organizing objects in the Amazon S3 console using folders](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) in the *Amazon Simple Storage Service User Guide*.
+ For applications that need to distinguish between files and folders, use Amazon Elastic File System (Amazon EFS) as your Transfer Family storage option.

### User directory values
<a name="user-dir-values"></a>
+ The parameter for specifying logical directory values depends on your user type:
  + For service-managed users, provide logical directory values in `HomeDirectoryMappings`.
  + For custom identity provider users, provide logical directory values in `HomeDirectoryDetails`.
+ When using a `LOGICAL` `HomeDirectoryType`, you can specify a `HomeDirectory` value for service-managed users, Active Directory access, and custom identity provider implementations where the `HomeDirectoryDetails` are provided in the response. If not specified, `HomeDirectory` defaults to `/`.

For details on how to implement logical directories, see [Implementing logical directories](implement-log-dirs.md).

# Implementing logical directories
<a name="implement-log-dirs"></a>

**Important**  
**Root directory requirements**  
If you are not using Amazon S3 performance optimization settings, your root directory must exist on startup. For Amazon S3, this means creating a zero-byte object whose key ends with a forward slash (`/`). To avoid this requirement, consider enabling Amazon S3 performance optimization when you create or update your server.  
**Logical home directory configuration**  
When using `LOGICAL` as your `HomeDirectoryType`, the `HomeDirectory` value must correspond to one of your existing logical directory mappings. The service validates this during both user creation and updates, which prevents configurations that would cause access issues.
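
That validation can be mimicked locally with a small helper (a sketch only; it assumes `/` is acceptable as the default, and the service performs the authoritative check):

```python
def validate_home_directory(home_directory, mappings):
    """With a LOGICAL HomeDirectoryType, HomeDirectory should equal one of
    the mapping Entry values (or '/', the documented default)."""
    entries = {m["Entry"] for m in mappings}
    return home_directory == "/" or home_directory in entries

mappings = [{"Entry": "/pics", "Target": "/amzn-s3-demo-bucket/pics"}]
print(validate_home_directory("/pics", mappings))  # True
print(validate_home_directory("/docs", mappings))  # False: no such mapping
```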

## Enable logical directories
<a name="enable-log-dirs-small"></a>

To use logical directories for a user, set the `HomeDirectoryType` parameter to `LOGICAL`. Do this when you create a new user or update an existing user. 

```
"HomeDirectoryType": "LOGICAL"
```

## Enable `chroot` for users
<a name="chroot"></a>

For **chroot**, create a directory structure that consists of a single `Entry` and `Target` pairing for each user. The `Entry` value `/` represents the root folder, and the `Target` value specifies the actual location in your bucket or file system.

------
#### [ Example for Amazon S3 ]

```
[{"Entry": "/", "Target": "/amzn-s3-demo-bucket/jane"}]
```

------
#### [ Example for Amazon EFS ]

```
[{"Entry": "/", "Target": "/fs-faa1a123/jane"}]
```

------

You can use an absolute path as in the previous example, or you can use a dynamic substitution for the username with `${transfer:UserName}`, as in the following example.

```
[{"Entry": "/", "Target":
"/amzn-s3-demo-bucket/${transfer:UserName}"}]
```

In the preceding example, the user is locked to their root directory and cannot traverse up higher in the hierarchy.

## Virtual directory structure
<a name="virtual-dirs"></a>

For a virtual directory structure, you can create multiple `Entry` and `Target` pairings, with targets anywhere in your S3 buckets or EFS file systems, including across multiple buckets or file systems, as long as the user's IAM role mapping has permissions to access them.

In the following virtual structure example, when the user connects to the server over SFTP, they are in the root directory with subdirectories of `/pics`, `/doc`, `/reporting`, and `/anotherpath/subpath/financials`.

**Note**  
Unless you choose to optimize performance for your Amazon S3 directories (when you create or update a server), either the user or an administrator needs to create the directories if they don't already exist. Avoiding this issue is a reason to consider optimizing Amazon S3 performance.  
For Amazon EFS, you still need the administrator to create the logical mappings or the `/` directory.

```
[
    {"Entry": "/pics", "Target": "/amzn-s3-demo-bucket1/pics"},
    {"Entry": "/doc", "Target": "/amzn-s3-demo-bucket1/anotherpath/docs"},
    {"Entry": "/reporting", "Target": "/amzn-s3-demo-bucket2/Q1"},
    {"Entry": "/anotherpath/subpath/financials", "Target": "/amzn-s3-demo-bucket2/financials"}
]
```



**Note**  
You can only upload files to the specific folders that you map. This means that in the previous example, you cannot upload to the `/anotherpath` or `/anotherpath/subpath` directories; only to `/anotherpath/subpath/financials`. You also cannot map to those paths directly, because overlapping paths are not allowed.  
 For example, assume that you create the following mappings:   

```
[
    {
        "Entry": "/pics",
        "Target": "/amzn-s3-demo-bucket/pics"
    },
    {
        "Entry": "/doc",
        "Target": "/amzn-s3-demo-bucket/mydocs"
    },
    {
        "Entry": "/temp",
        "Target": "/amzn-s3-demo-bucket2/temporary"
    }
]
```
You can only upload files to the mapped paths. When you first connect through `sftp`, you are placed in the root directory, `/`. If you attempt to upload a file to that directory, the upload fails. The following commands show an example sequence:  

```
sftp> pwd
Remote working directory: /
sftp> put file
Uploading file to /file
remote open("/file"): No such file or directory
```
To upload to any `directory/sub-directory`, you must explicitly map the path to the `sub-directory`.
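
The path resolution that this note describes can be sketched as follows (an illustrative helper, not the service's actual implementation, which also blocks relative-path traversal before passing paths to Amazon S3):

```python
def resolve_logical_path(path, mappings):
    """Map a logical path to its backing target via the longest matching
    Entry, or return None if the path isn't under any mapped Entry
    (uploads there would fail)."""
    best = None
    for m in mappings:
        entry = m["Entry"]
        if path == entry or path.startswith(entry + "/"):
            if best is None or len(entry) > len(best["Entry"]):
                best = m
    if best is None:
        return None
    return best["Target"] + path[len(best["Entry"]):]

mappings = [
    {"Entry": "/pics", "Target": "/amzn-s3-demo-bucket1/pics"},
    {"Entry": "/anotherpath/subpath/financials",
     "Target": "/amzn-s3-demo-bucket2/financials"},
]
print(resolve_logical_path("/pics/cat.jpg", mappings))
# -> /amzn-s3-demo-bucket1/pics/cat.jpg
print(resolve_logical_path("/anotherpath/report.txt", mappings))
# -> None: only the mapped subdirectory is accessible
```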

For more information about configuring logical directories and **chroot** for your users, including an AWS CloudFormation template that you can download and use, see [Simplify your AWS SFTP Structure with chroot and logical directories](https://aws.amazon.com/blogs/storage/simplify-your-aws-sftp-structure-with-chroot-and-logical-directories/) in the AWS Storage Blog.

# Configure logical directories examples
<a name="logical-dir-example"></a>

In this example, we create a user and assign two logical directories. The following command creates a new user (for an existing Transfer Family server) with logical directories `pics` and `doc`. 

```
aws transfer create-user \
    --user-name marymajor \
    --server-id s-11112222333344445 \
    --role arn:aws:iam::111122223333:role/marymajor-role \
    --home-directory-type LOGICAL \
    --home-directory-mappings "[{\"Entry\":\"/pics\", \"Target\":\"/amzn-s3-demo-bucket1/pics\"}, {\"Entry\":\"/doc\", \"Target\":\"/amzn-s3-demo-bucket2/test/mydocs\"}]" \
    --ssh-public-key-body file://~/.ssh/id_rsa.pub
```

If **marymajor** is an existing user and her home directory type is `PATH`, you can change it to `LOGICAL` with a command similar to the previous one.

```
aws transfer update-user \
    --user-name marymajor \
    --server-id s-11112222333344445 \
    --role arn:aws:iam::111122223333:role/marymajor-role \
    --home-directory-type LOGICAL \
    --home-directory-mappings "[{\"Entry\":\"/pics\", \"Target\":\"/amzn-s3-demo-bucket1/pics\"}, {\"Entry\":\"/doc\", \"Target\":\"/amzn-s3-demo-bucket2/test/mydocs\"}]"
```

Note the following:
+ If the directories `/amzn-s3-demo-bucket1/pics` and `/amzn-s3-demo-bucket2/test/mydocs` don't already exist, the user (or an administrator) needs to create them.
**Note**  
These directories are created automatically by the Transfer Family server if you have configured optimized directories.
+ When **marymajor** connects to the server, and runs the `ls -l` command, Mary sees the following:

  ```
  drwxr--r--   1        -        -        0 Mar 17 15:42 doc
  drwxr--r--   1        -        -        0 Mar 17 16:04 pics
  ```
+ **marymajor** cannot create any files or directories at this level. However, within `pics` and `doc`, she can add sub-directories.
+ Files that Mary adds to `pics` and `doc` are added to Amazon S3 paths `/amzn-s3-demo-bucket1/pics` and `/amzn-s3-demo-bucket2/test/mydocs` respectively.
+ In this example, we specify two different buckets to illustrate that possibility. However, you can use the same bucket for several or all of the logical directories that you specify for the user.

This example provides an alternate configuration for a logical home path.

```
aws transfer create-user \
    --user-name marymajor \
    --server-id s-11112222333344445 \
    --role arn:aws:iam::111122223333:role/marymajor-role \
    --home-directory-type LOGICAL \
    --home-directory /home/marymajor \
    --home-directory-mappings "[{\"Entry\":\"/home/marymajor/pics\", \"Target\":\"/amzn-s3-demo-bucket1/pics\"}, {\"Entry\":\"/home/marymajor/doc\", \"Target\":\"/amzn-s3-demo-bucket2/test/mydocs\"}]" \
    --ssh-public-key-body file://~/.ssh/id_rsa.pub
```

Note the following:
+ The mappings provide for a common path, `/home/marymajor`, which is the first part of the two logical paths. Files then can be added to the `pics` and `doc` folders.
+ As in the previous example, the home directory, `/home/marymajor`, is read-only.

## Configure logical directories for Amazon EFS
<a name="logical-dir-efs"></a>

If your Transfer Family server uses Amazon EFS, the home directory for the user must be created with read and write access before the user can work in their logical home directory. The user cannot create this directory themselves, because they lack permission to run `mkdir` on their logical home directory.

If the user's home directory does not exist, and they run an `ls` command, the system responds as follows:

```
sftp> ls
remote readdir("/"): No such file or directory
```

A user with administrative access to the parent directory needs to create the user's logical home directory.

## Custom AWS Lambda response
<a name="auth-lambda-response"></a>

You can use logical directories with a Lambda function that connects to your custom identity provider. To do so, in your Lambda function, you specify the `HomeDirectoryType` as **LOGICAL**, and add `Entry` and `Target` values for the `HomeDirectoryDetails` parameter. For example:

```
HomeDirectoryType: "LOGICAL"
HomeDirectoryDetails: "[{\"Entry\": \"/\", \"Target\": \"/amzn-s3-demo-bucket/theRealFolder\"}]"
```

The following example shows a successful response from a custom Lambda identity provider, returned by the `test-identity-provider` command.

```
aws transfer test-identity-provider \
    --server-id s-1234567890abcdef0 \
    --user-name myuser
{
    "Url": "https://a1b2c3d4e5.execute-api.us-east-2.amazonaws.com/prod/servers/s-1234567890abcdef0/users/myuser/config",
    "Message": "", 
    "Response": "{\"Role\": \"arn:aws:iam::123456789012:role/bob-usa-role\",
                  \"HomeDirectoryType\": \"LOGICAL\",
                  \"HomeDirectoryDetails\": \"[{\\\"Entry\\\":\\\"/myhome\\\",\\\"Target\\\":\\\"/amzn-s3-demo-bucket/theRealFolder\\\"}]\",
                  \"PublicKeys\": \"[ssh-rsa myrsapubkey]\"}", 
    "StatusCode": 200
}
```

**Note**  
The `"Url":` line is returned only if you are using an API Gateway method as your custom identity provider.
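
A minimal Lambda handler that returns such a response might look like the following (a sketch only; the credential check is a placeholder, and a real implementation must verify the user against your identity store):

```python
import json

def lambda_handler(event, context):
    """Return a LOGICAL home directory response for a Transfer Family
    custom identity provider. Returning an empty dict denies access."""
    username = event.get("username", "")
    # Placeholder check -- replace with a real lookup against your user store.
    if not username:
        return {}

    return {
        "Role": "arn:aws:iam::123456789012:role/transfer-user-role",
        "HomeDirectoryType": "LOGICAL",
        "HomeDirectoryDetails": json.dumps(
            [{"Entry": "/myhome", "Target": "/amzn-s3-demo-bucket/theRealFolder"}]
        ),
        "PublicKeys": ["ssh-rsa myrsapubkey"],
    }

resp = lambda_handler({"username": "myuser", "password": "x"}, None)
print(resp["HomeDirectoryType"])  # LOGICAL
```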

# Access your FSx for NetApp ONTAP file systems with Transfer Family
<a name="fsx-s3-access-points"></a>



**Contents**
+ [Overview](#fsx-overview)
+ [Prerequisites](#fsx-prerequisites)
  + [FSx for NetApp ONTAP requirements](#fsx-ontap-requirements)
  + [Required IAM permissions](#required-iam-permissions)
+ [How FSx storage works with Transfer Family](#how-fsx-storage-works)
  + [File system user identity](#file-system-user-identity)
+ [Creating an S3 access point for FSx](#creating-s3-access-point)
  + [Access point naming](#access-point-naming)
  + [Creating an access point for FSx for NetApp ONTAP](#creating-access-point-ontap)
  + [Configuring file system permissions](#configuring-file-system-permissions)
+ [Using S3 access point aliases with FSx](#using-s3-access-point-aliases)
  + [About access point aliases](#about-access-point-aliases)
  + [Finding your access point alias](#finding-access-point-alias)
+ [Configuring Transfer Family for FSx storage](#configuring-transfer-family-fsx)
  + [Creating an IAM role](#creating-iam-role-fsx)
+ [Managing users for FSx storage](#managing-users-fsx)
  + [Creating a user](#creating-user-fsx)
  + [Configuring multiple directory mappings](#multiple-directory-mappings)
+ [Configuring file transfer clients](#configuring-file-transfer-clients)
  + [WinSCP configuration](#winscp-configuration)
  + [Other SFTP clients](#other-sftp-clients)
+ [Troubleshooting FSx storage](#troubleshooting-fsx-storage)
  + [File operation issues](#file-operation-issues)

## Overview
<a name="fsx-overview"></a>

Transfer Family supports Amazon FSx for NetApp ONTAP through S3 access points. Amazon FSx for NetApp ONTAP is a fully managed service that provides highly reliable, scalable, high-performing, and feature-rich file storage built on NetApp's popular ONTAP file system. When you configure Transfer Family with an FSx file system, your users connect to Transfer Family endpoints using standard file transfer clients. Transfer Family routes file operations through an S3 access point attached to your FSx volume, while your data remains on the FSx file system. To learn more about FSx for NetApp ONTAP, see [What is Amazon FSx for NetApp ONTAP?](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/what-is-fsx-ontap.html)

This integration enables you to:
+ Transfer files using SFTP, FTPS, or FTP protocols to enterprise-grade file storage
+ Access the same data through multiple protocols (SFTP, NFS, SMB)
+ Use FSx features such as snapshots, backups, and data tiering

**Important**  
Some file operations are not supported when using FSx file systems with Transfer Family, including rename and append operations. For upload operations, file sizes are limited to 5 GB. For a complete list of limitations, see [Access point compatibility](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/access-points-for-fsxn-object-api-support.html).

## Prerequisites
<a name="fsx-prerequisites"></a>

Before you configure Transfer Family with Amazon FSx, you must meet the following requirements.

### FSx for NetApp ONTAP requirements
<a name="fsx-ontap-requirements"></a>

To use FSx for NetApp ONTAP with Transfer Family, you need:
+ An FSx for NetApp ONTAP file system running ONTAP version 9.17.1 or later
+ The file system and S3 access point in the same AWS Region
+ The file system and the access point owned by the same AWS account

To learn more, see [Getting started with Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started.html).

### Required IAM permissions
<a name="required-iam-permissions"></a>

You can configure each S3 access point with distinct permissions and network controls that S3 applies to any request made through that access point. S3 access points support IAM resource policies that you can use to control use of the access point by resource, user, or other conditions. For an application or user to access files through an access point, both the access point and the underlying volume must permit the request. For more information, see [IAM access point policies](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/s3-ap-manage-access-fsxn.html).

Amazon S3 access points for FSx use a dual-layer authorization model that combines IAM permissions with file system-level permissions. This approach ensures that data access requests are properly authorized at both the AWS service level and the underlying file system level.

For an application or user to successfully access data through an access point, both the S3 access point policy and the underlying FSx volume must permit the request.

To create and configure this integration, you need the following permissions:
+ `fsx:CreateAndAttachS3AccessPoint`
+ `s3:CreateAccessPoint`
+ `s3:GetAccessPoint`
+ `s3:PutAccessPointPolicy` (if creating an optional access point policy)

## How FSx storage works with Transfer Family
<a name="how-fsx-storage-works"></a>

When you configure Transfer Family with an FSx file system, the following components work together to enable file transfers:

1. A user connects to the Transfer Family server using an SFTP, FTPS, or FTP client.

1. Transfer Family authenticates the user using service-managed identities, a custom identity provider, or AWS Directory Service for Microsoft Active Directory. Once authenticated, Transfer Family assumes the IAM role associated with the user.

1. For each file operation, Transfer Family acts as a standard S3 API client. It makes requests to the S3 access point using the user's assumed IAM role and verifies permissions against the S3 access point policy.

1. The FSx file system verifies that the file system user associated with the access point has permission to perform the requested operation. The file operation is then performed on the FSx volume.

For a file operation to succeed, both authorization layers must permit the request.

**Note**  
Attaching an S3 access point to an FSx volume does not change how the volume behaves when accessed directly through NFS or SMB. Existing file protocol access continues to work unchanged.

### File system user identity
<a name="file-system-user-identity"></a>

Each access point uses a file system user identity that you specify when creating the access point. This identity authorizes all file access requests made through that access point. The file system user is a user account on the underlying Amazon FSx file system. If the file system user has read-only access, then only read requests made using the access point are authorized, and write requests are blocked. If the file system user has read-write access, then both read and write requests to the attached volume made using the access point are authorized.

## Creating an S3 access point for FSx
<a name="creating-s3-access-point"></a>

Before you configure Transfer Family, you must create an S3 access point attached to your FSx volume. S3 access points are named network endpoints that are attached to a data source, such as an S3 bucket or an Amazon FSx for NetApp ONTAP volume. You can create and attach an access point to an FSx for NetApp ONTAP volume by using the Amazon FSx console, AWS CLI, or API. After the access point is attached, you can use the S3 object APIs to access your file data. Your data continues to reside on the Amazon FSx file system and remains directly accessible to your existing workloads. You also continue to manage your storage by using all of the FSx for NetApp ONTAP storage management capabilities, including backups, snapshots, user and group quotas, and compression.

For more information, see [Creating access points](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/create-access-points.html).

### Access point naming
<a name="access-point-naming"></a>

When you name your access point, follow these guidelines:
+ Access point names must be unique within your AWS account and Region.
+ Names cannot end with `-ext-s3alias` (reserved for aliases).
+ Avoid including sensitive information in names because they are published in DNS.

For a full list of naming rules, see [Access points naming rules, restrictions, and limitations](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/access-point-for-fsxn-restrictions-limitations-naming-rules.html).
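
These guidelines can be checked locally with a sketch like the following (the length and character-set pattern here is an assumption for illustration; see the linked naming rules for the authoritative list):

```python
import re

def is_valid_access_point_name(name: str) -> bool:
    """Reject names ending with the reserved '-ext-s3alias' suffix, and
    apply an assumed lowercase/number/hyphen shape of 3-50 characters."""
    if name.endswith("-ext-s3alias"):
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9-]{1,48}[a-z0-9]", name) is not None

print(is_valid_access_point_name("transfer-family-ap"))  # True
print(is_valid_access_point_name("my-ap-ext-s3alias"))   # False: reserved suffix
```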

### Creating an access point for FSx for NetApp ONTAP
<a name="creating-access-point-ontap"></a>

Use the following procedure to create an S3 access point for an FSx for NetApp ONTAP volume.

**To create an access point (console)**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the navigation pane, choose **File systems**.

1. Choose your FSx for NetApp ONTAP file system.

1. Choose the **Volumes** tab.

1. Select the volume that you want to attach.

1. For **Actions**, choose **Create S3 access point**.

1. For **Access point name**, enter a descriptive name (for example, `transfer-family-ap`).

1. For **File system user identity type**, choose one of the following:
   + **UNIX identity** - For volumes with UNIX security style
   + **Windows identity** - For volumes with NTFS security style

1. (Optional) For **Access point policy**, enter an IAM policy that defines which IAM principals can perform which operations on objects accessed through this access point. For more information, see [Managing access point access](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/s3-ap-manage-access-fsxn.html).

1. Choose **Create**.

1. After creation, note the access point alias for use in Transfer Family configuration.

**Note**  
When AWS Transfer Family accesses S3 resources on behalf of your connected SFTP or FTPS users, requests originate from AWS Transfer Family infrastructure, not from your VPC. Because of this, S3 access points configured with a VPC network origin deny these requests. However, even if you use an access point configured with an internet network origin, all traffic between Transfer Family and the access point remains private and travels over the AWS backbone network; it does not traverse the public internet.

### Configuring file system permissions
<a name="configuring-file-system-permissions"></a>

The file system user that you specify determines what operations Transfer Family users can perform. You must configure appropriate permissions on your FSx volume.

**UNIX example:**

```
# Create a directory for Transfer Family users
mkdir -p /vol1/transfer-users

# Set ownership to match the access point user
chown 1001:1001 /vol1/transfer-users

# Set permissions
chmod 755 /vol1/transfer-users
```

**Windows example:**

```
# Create a directory for Transfer Family users
New-Item -Path "D:\vol1\transfer-users" -ItemType Directory

# Set permissions for the file system user associated with the access point
# Replace DOMAIN\TransferUser with your Windows user identity
icacls "D:\vol1\transfer-users" /grant "DOMAIN\TransferUser:(OI)(CI)M" /T

# Verify permissions
icacls "D:\vol1\transfer-users"
```

## Using S3 access point aliases with FSx
<a name="using-s3-access-point-aliases"></a>

When you use FSx file systems with Transfer Family, you must use S3 access point aliases. Transfer Family does not support using access point ARNs or other reference methods for FSx storage.

**Important**  
AWS Transfer Family only supports S3 access point aliases when using FSx file systems. You cannot use access point ARNs or virtual-hosted-style URIs.

**Important**  
The access point must be in the same Region as the volume.

### About access point aliases
<a name="about-access-point-aliases"></a>

When you create an S3 access point attached to an FSx volume, Amazon S3 automatically generates an access point alias. This alias is a unique identifier that you can use anywhere you use an S3 bucket name.

For access points attached to FSx volumes, the alias uses the following format:

```
access-point-name-metadata-ext-s3alias
```

**Example alias:**

```
my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias
```

**Note**  
The `-ext-s3alias` suffix is reserved for FSx access point aliases. You cannot use this suffix in access point names.

### Finding your access point alias
<a name="finding-access-point-alias"></a>

You can find the access point alias after creating the access point.

**To find the access point alias (console)**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the navigation pane, choose **File systems**.

1. Choose your file system.

1. Choose the **Volumes** tab and select the volume you created the access point for.

1. Go to the **S3 access point details** column.

1. The alias is displayed in the **Alias** column.

**To find the access point alias (CLI)**

Use the `describe-s3-access-point-attachments` command.

```
aws fsx describe-s3-access-point-attachments \
    --filters Name=file-system-id,Values=fs-0123456789abcdef0
```

The response includes the alias:

```
{
    "S3AccessPointAttachments": [
        {
            "S3AccessPoint": {
                "ResourceARN": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-fsx-ap",
                "Alias": "my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias"
            }
        }
    ]
}
```

When you configure Transfer Family users, use the access point alias in home directory mappings.

**Home directory format:**

```
/access-point-alias/path/to/directory
```

**Example:**

```
/my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias/users/jsmith
```
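
Composing the home directory value from the alias can be as simple as the following (a hypothetical helper; `users/jsmith` is an example per-user path):

```python
def fsx_home_directory(access_point_alias: str, user_path: str) -> str:
    """Build a Transfer Family home directory target from an FSx
    access point alias and a per-user path."""
    return "/" + access_point_alias + "/" + user_path.strip("/")

alias = "my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias"
print(fsx_home_directory(alias, "users/jsmith"))
# -> /my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias/users/jsmith
```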

## Configuring Transfer Family for FSx storage
<a name="configuring-transfer-family-fsx"></a>

After you create the S3 access point, configure a Transfer Family server to use it.

### Creating an IAM role
<a name="creating-iam-role-fsx"></a>

You must create an IAM role that grants Transfer Family access to the S3 access point.

**Important**  
IAM policies require the Access Point ARN format, not the alias. Use the format `arn:aws:s3:region:account-id:accesspoint/access-point-name` in your IAM policy Resource statements. The access point alias (ending in `-ext-s3alias`) is only used for home directory mappings.

**To create the IAM role**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**, then choose **Create role**.

1. For **Trusted entity type**, choose **AWS service**.

1. For **Use case**, choose **Transfer**.

1. Choose **Next**.

1. Choose **Create policy** and enter your policy (see the following example policy).

1. Attach the policy to the role and choose **Create role**.

**Example IAM policy:**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowFileOperations",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:GetObjectTagging",
                "s3:PutObjectTagging"
            ],
            "Resource": "arn:aws:s3:us-east-2:111122223333:accesspoint/my-fsx-ap/object/*"
        },
        {
            "Sid": "AllowDirectoryOperations",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:us-east-2:111122223333:accesspoint/my-fsx-ap"
        }
    ]
}
```

## Managing users for FSx storage
<a name="managing-users-fsx"></a>

Create Transfer Family users with home directory mappings that use the S3 access point alias.

### Creating a user
<a name="creating-user-fsx"></a>

When you create a user for FSx storage, use the access point alias in home directory mappings.

**To create a service-managed user (console)**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the navigation pane, choose **Servers**.

1. Choose your server.

1. In the **Users** section, choose **Add user**.

1. For **Username**, enter a username.

1. For **Role**, choose the IAM role that you created.

1. For **Home directory**, choose **Restricted**.

1. For **Home directory mappings**, add a mapping using the access point alias:

   ```
   [{"Entry": "/", "Target": "/my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias/users/jsmith"}]
   ```

**To create a user (CLI)**

Use the `create-user` command. Replace the sample access point alias with your own alias, and replace the server ID, role ARN, and username with your values.

```
aws transfer create-user \
    --server-id s-0123456789abcdef0 \
    --user-name jsmith \
    --role arn:aws:iam::111122223333:role/TransferFamilyFSxRole \
    --home-directory-type LOGICAL \
    --home-directory-mappings '[
        {
            "Entry": "/",
            "Target": "/my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias/users/jsmith"
        }
    ]'
```
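
After creating the user, you can confirm that the logical home directory mapping was stored as intended. The following sketch uses the sample server ID and username from the preceding example.

```shell
# Show the user's home directory type and mappings.
aws transfer describe-user \
    --server-id s-0123456789abcdef0 \
    --user-name jsmith \
    --query 'User.{Type: HomeDirectoryType, Mappings: HomeDirectoryMappings}'
```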

### Configuring multiple directory mappings
<a name="multiple-directory-mappings"></a>

You can map multiple virtual directories to different paths on the FSx volume.

**Example: Separate upload and download directories**

```
aws transfer create-user \
    --server-id s-0123456789abcdef0 \
    --user-name jsmith \
    --role arn:aws:iam::111122223333:role/TransferFamilyFSxRole \
    --home-directory-type LOGICAL \
    --home-directory-mappings '[
        {
            "Entry": "/inbox",
            "Target": "/my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias/users/jsmith/inbox"
        },
        {
            "Entry": "/outbox",
            "Target": "/my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias/users/jsmith/outbox"
        }
    ]'
```
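
To change the mappings for an existing user, use the `update-user` command. Note that `update-user` replaces the entire mapping list, so include every mapping that you want to keep. In this sketch, the `/archive` entry is a hypothetical addition.

```shell
# Replace the user's mappings; omitted entries are removed.
aws transfer update-user \
    --server-id s-0123456789abcdef0 \
    --user-name jsmith \
    --home-directory-type LOGICAL \
    --home-directory-mappings '[
        {
            "Entry": "/archive",
            "Target": "/my-fsx-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias/users/jsmith/archive"
        }
    ]'
```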

## Configuring file transfer clients
<a name="configuring-file-transfer-clients"></a>

When using FSx file systems with Transfer Family, you must configure your file transfer clients to disable features that are not supported.

### WinSCP configuration
<a name="winscp-configuration"></a>

By default, WinSCP uploads each file to a temporary file name and then renames it. This temporary rename feature is not supported with S3 access points for FSx.

**Warning**  
If you do not disable the temporary rename feature in WinSCP, file uploads will fail.

**To disable temporary rename in WinSCP**

1. Open WinSCP.

1. On the Login dialog, choose **Edit** to modify your session settings.

1. Choose **Advanced**.

1. In the left navigation, under **Transfer**, choose **Endurance**.

1. For **Enable transfer resume/transfer to temporary filename**, choose **Disable**.

1. Choose **OK** to save the settings.

Alternatively, you can disable this setting for an existing session:

1. Connect to your Transfer Family server.

1. Choose **Options**, then **Preferences**.

1. Choose **Transfer**, then **Endurance**.

1. For **Enable transfer resume/transfer to temporary filename**, choose **Disable**.

1. Choose **OK**.

### Other SFTP clients
<a name="other-sftp-clients"></a>

For other SFTP clients, disable the following features if available:
+ Temporary file uploads (upload to temp file, then rename)
+ Resume transfers using temporary files
+ Atomic uploads using rename operations
+ Append mode for uploads

Consult your client documentation for specific configuration steps.
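
To rule out client-side configuration as the cause of an upload failure, you can test with the OpenSSH `sftp` client, which writes directly to the target file name without a temporary rename. The endpoint hostname, username, and `/inbox` path in this sketch are placeholders based on the earlier examples.

```shell
# Create a small test file and upload it in a single step.
echo "transfer test" > test-upload.txt

# OpenSSH sftp does not rename through a temporary file, so a
# successful put indicates the server-side setup is working.
sftp -b - jsmith@your-server-endpoint <<'EOF'
put test-upload.txt /inbox/test-upload.txt
ls /inbox
EOF
```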

## Troubleshooting FSx storage
<a name="troubleshooting-fsx-storage"></a>

This section describes how to identify and resolve common issues when using Transfer Family with FSx file systems.

### File operation issues
<a name="file-operation-issues"></a>

**Permission denied**

If you receive permission denied errors:

1. Verify that the IAM role has the correct permissions for the access point ARN (not the alias). You can test the role's permissions directly with the S3 APIs.

1. Check that the access point policy allows the IAM role.

1. Verify the file system user has permissions on the target path.

1. Confirm the home directory mapping uses the correct access point alias.
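
To test the role's permissions directly, call the S3 APIs against the access point ARN (the `--bucket` parameter of `s3api` commands accepts an access point ARN). Run these commands under credentials obtained by assuming the Transfer Family role, for example with `aws sts assume-role`. The ARN and key prefix below are the sample values from earlier in this topic.

```shell
# S3 API calls identify the access point by its ARN, not its alias.
AP_ARN=arn:aws:s3:us-east-2:111122223333:accesspoint/my-fsx-ap

# Try the same operations that the Transfer Family server performs.
aws s3api put-object \
    --bucket "$AP_ARN" \
    --key users/jsmith/permission-test.txt \
    --body permission-test.txt

aws s3api list-objects-v2 \
    --bucket "$AP_ARN" \
    --prefix users/jsmith/
```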

**Upload fails with WinSCP**

If file uploads fail with WinSCP, disable temporary rename:

1. In WinSCP, choose **Options**, then **Preferences**.

1. Choose **Transfer**, then **Endurance**.

1. For **Enable transfer resume/transfer to temporary filename**, choose **Disable**.

For more information, see [Configuring file transfer clients](#configuring-file-transfer-clients).

**File upload fails**

If file uploads fail:

1. Verify that the file size is under 5 GB.

1. Check that the FSx volume has sufficient available storage.

1. Monitor Amazon CloudWatch metrics for throttling.
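
To check whether the FSx volume is running out of space, you can inspect its configured size with the AWS CLI. The volume ID in this sketch is a placeholder; use the ID of the volume that backs your access point.

```shell
# Show the configured size (in megabytes) of the backing ONTAP volume.
aws fsx describe-volumes \
    --volume-ids fsvol-0123456789abcdef0 \
    --query 'Volumes[].OntapConfiguration.SizeInMegabytes'
```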