CfnAccessPointProps
- class aws_cdk.aws_s3files.CfnAccessPointProps(*, file_system_id, client_token=None, posix_user=None, root_directory=None, tags=None)
- Bases:
  object

  Properties for defining a CfnAccessPoint.

- Parameters:
  - file_system_id (str) – The ID of the S3 Files file system that the access point provides access to.
  - client_token (Optional[str]) – (optional) A string of up to 64 ASCII characters that Amazon EFS uses to ensure idempotent creation.
  - posix_user (Union[IResolvable, PosixUserProperty, Dict[str, Any], None])
  - root_directory (Union[IResolvable, RootDirectoryProperty, Dict[str, Any], None])
  - tags (Optional[Sequence[Union[AccessPointTagProperty, Dict[str, Any]]]])
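client_token follows the common AWS idempotency-token pattern: retrying a create call with the same token must not produce a second access point. A minimal plain-Python sketch of that pattern (illustrative only; the class and its behavior here are assumptions, not the actual service implementation):

```python
class AccessPointService:
    """Toy service sketch showing how an idempotency token deduplicates creates."""

    def __init__(self):
        self._by_token = {}  # client_token -> previously created resource
        self._counter = 0

    def create_access_point(self, client_token=None, **props):
        # A repeated call with the same token returns the original resource
        # instead of creating a duplicate.
        if client_token is not None and client_token in self._by_token:
            return self._by_token[client_token]
        self._counter += 1
        access_point = {"id": f"fsap-{self._counter:08d}", **props}
        if client_token is not None:
            self._by_token[client_token] = access_point
        return access_point
```

Without a token, every call creates a fresh resource; with one, retries (for example after a network timeout) are safe.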
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3files-accesspoint.html
- ExampleMetadata:
infused
Example:
import path

import aws_cdk as cdk
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_iam as iam
import aws_cdk.aws_lambda as lambda_
import aws_cdk.aws_s3 as s3
import aws_cdk.aws_s3files as s3files

vpc = ec2.Vpc(self, "Vpc")

# Versioning is required — S3 Files relies on object versions for consistency.
bucket = s3.Bucket(self, "Bucket", versioned=True)

# S3 Files assumes this role to sync data between S3 and the file system.
role = iam.Role(self, "S3FilesRole",
    assumed_by=iam.ServicePrincipal("elasticfilesystem.amazonaws.com")
)

# S3 permissions: read/write access to the bucket and objects
role.add_to_policy(iam.PolicyStatement(
    actions=["s3:ListBucket*"],
    resources=[bucket.bucket_arn]
))
role.add_to_policy(iam.PolicyStatement(
    actions=["s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject*", "s3:List*", "s3:PutObject*"],
    resources=[bucket.arn_for_objects("*")]
))

# EventBridge permissions: S3 Files creates rules prefixed "DO-NOT-DELETE-S3-Files"
# to detect S3 object changes and trigger data synchronization.
role.add_to_policy(iam.PolicyStatement(
    actions=["events:DeleteRule", "events:DisableRule", "events:EnableRule",
        "events:PutRule", "events:PutTargets", "events:RemoveTargets"
    ],
    resources=[f"arn:{cdk.Aws.PARTITION}:events:*:*:rule/DO-NOT-DELETE-S3-Files*"],
    conditions={"StringEquals": {"events:ManagedBy": "elasticfilesystem.amazonaws.com"}}
))
role.add_to_policy(iam.PolicyStatement(
    actions=["events:DescribeRule", "events:ListRuleNamesByTarget", "events:ListRules", "events:ListTargetsByRule"],
    resources=[f"arn:{cdk.Aws.PARTITION}:events:*:*:rule/*"]
))

file_system = s3files.CfnFileSystem(self, "S3FilesFs",
    bucket=bucket.bucket_arn,
    role_arn=role.role_arn
)

sg = ec2.SecurityGroup(self, "MountTargetSG", vpc=vpc)

# Create a mount target in each private subnet so Lambda can reach the file system via NFS.
for i, subnet in enumerate(vpc.private_subnets):
    s3files.CfnMountTarget(self, f"MountTarget{i}",
        file_system_id=file_system.attr_file_system_id,
        subnet_id=subnet.subnet_id,
        security_groups=[sg.security_group_id]
    )

# The access point defines the POSIX identity and root path Lambda uses on the file system.
access_point = s3files.CfnAccessPoint(self, "AccessPoint",
    file_system_id=file_system.attr_file_system_id,
    root_directory=s3files.CfnAccessPoint.RootDirectoryProperty(
        path="/export/lambda",
        creation_permissions=s3files.CfnAccessPoint.CreationPermissionsProperty(
            owner_gid="1001",
            owner_uid="1001",
            permissions="750"
        )
    ),
    posix_user=s3files.CfnAccessPoint.PosixUserProperty(gid="1001", uid="1001")
)

fn = lambda_.Function(self, "MyFunction",
    runtime=lambda_.Runtime.NODEJS_LATEST,
    handler="index.handler",
    code=lambda_.Code.from_asset(path.join(__dirname, "lambda-handler")),
    vpc=vpc,
    filesystem=lambda_.FileSystem.from_s3_files_access_point(access_point, "/mnt/s3files")
)

Attributes
- client_token
(optional) A string of up to 64 ASCII characters that Amazon EFS uses to ensure idempotent creation.
- file_system_id
The ID of the S3 Files file system that the access point provides access to.
- posix_user
  - Type: see
- root_directory
  - Type: see
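The example's RootDirectoryProperty passes creation permissions as an octal mode string ("750"). As a quick aid for reading such values, here is a small standalone sketch (plain Python, not part of the CDK API) that expands an octal POSIX mode string into owner/group/other rwx triples:

```python
def decode_mode(permissions: str) -> str:
    """Render an octal POSIX mode string (e.g. "750") as an rwx triple string."""
    mode = int(permissions, 8)
    out = []
    for shift in (6, 3, 0):  # owner, group, others
        bits = (mode >> shift) & 0b111
        out.append(("r" if bits & 4 else "-") +
                   ("w" if bits & 2 else "-") +
                   ("x" if bits & 1 else "-"))
    return "".join(out)

# "750": owner can read/write/enter, group can read/enter, others have no access
print(decode_mode("750"))  # rwxr-x---
```

So with permissions="750", the POSIX user in the example (uid/gid 1001) owns the root directory with full access, members of group 1001 can read and traverse it, and all other users are denied.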