

# Monitoring your VPC
<a name="monitoring"></a>

You can use the following tools to monitor traffic or network access in your virtual private cloud (VPC).

**VPC Flow Logs**  
You can use VPC Flow Logs to capture detailed information about the traffic going to and from network interfaces in your VPCs.

**Amazon CloudWatch Internet Monitor**  
You can use Internet Monitor for visibility into how internet issues affect the performance and availability of connections between your applications hosted on AWS and your end users. You can also explore, in near real-time, how to improve the projected latency of your application by switching to other services or by rerouting traffic to your workload through different AWS Regions. For more information, see [Using Amazon CloudWatch Internet Monitor](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-InternetMonitor.html).

**Amazon VPC IP Address Manager (IPAM)**  
You can use IPAM to plan, track, and monitor IP addresses for your workloads. For more information, see [IP Address Manager](https://docs.aws.amazon.com/vpc/latest/ipam/).

**Traffic Mirroring**  
You can use this feature to copy network traffic from a network interface of an Amazon EC2 instance and send it to out-of-band security and monitoring appliances for deep packet inspection. You can detect network and security anomalies, gain operational insights, implement compliance and security controls, and troubleshoot issues. For more information, see [Traffic Mirroring](https://docs.aws.amazon.com/vpc/latest/mirroring/).

**Reachability Analyzer**  
You can use this tool to analyze and debug network reachability between two resources in your VPC. After you specify the source and destination resources, Reachability Analyzer produces hop-by-hop details of the virtual path between them when they are reachable, and identifies the blocking component when they are unreachable. For more information, see [Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/).

**Network Access Analyzer**  
You can use Network Access Analyzer to understand network access to your resources. This helps you identify improvements to your network security posture and demonstrate that your network meets specific compliance requirements. For more information, see [Network Access Analyzer](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/).

**CloudTrail logs**  
AWS CloudTrail logs API calls for Amazon VPC, such as:  
+ Which API calls were made (such as actions like creating or modifying VPC resources)
+ The source IP address of the call
+ Who made the call
+ When the call was made
Separate logs are created for the `CreateVpc`, `DeleteVpc`, and `CreateDefaultVpc` actions. These logs also include the default resources (such as any default internet gateway or default security group) created and associated with the VPC.  
For more information, see [Log Amazon EC2 API calls using AWS CloudTrail](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitor-with-cloudtrail.html) in the *Amazon EC2 User Guide*.
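Put together, a single CloudTrail event carries all four pieces of information listed above. As an illustration only, here is a trimmed, hypothetical `CreateVpc` event sketched as a Python dict; the field names follow the CloudTrail event structure, but all values are made up:

```python
# A trimmed, hypothetical CloudTrail event for a CreateVpc call.
# Field names follow the CloudTrail event structure; all values are made up.
event = {
    "eventName": "CreateVpc",                # which API call was made
    "sourceIPAddress": "203.0.113.12",       # the source IP address of the call
    "userIdentity": {"type": "IAMUser", "userName": "alice"},  # who made the call
    "eventTime": "2024-05-01T12:34:56Z",     # when the call was made
    "requestParameters": {"cidrBlock": "10.0.0.0/16"},
}

print(event["eventName"])  # CreateVpc
```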

# Logging IP traffic using VPC Flow Logs
<a name="flow-logs"></a>

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to the following locations: Amazon CloudWatch Logs, Amazon S3, or Amazon Data Firehose. The configured delivery path and permissions that enable network traffic logs to be sent to a destination like CloudWatch Logs or S3 are referred to as *subscriptions*. After you create a flow log, you can retrieve and view the flow log records in the log group, bucket, or delivery stream that you configured.

Flow logs can help you with a number of tasks, such as:
+ Diagnosing overly restrictive security group rules
+ Monitoring the traffic that is reaching your instance
+ Determining the direction of the traffic to and from the network interfaces

Flow log data is collected outside of the path of your network traffic, and therefore does not affect network throughput or latency. You can create or delete flow logs without any risk of impact to network performance.

**Note**  
This section covers flow logs for VPCs only. For information about flow logs for transit gateways, which use version 6 fields, see [Logging network traffic using Transit Gateway Flow Logs](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-flow-logs.html) in the *Amazon VPC Transit Gateways User Guide*.

**Topics**
+ [Flow logs basics](flow-logs-basics.md)
+ [Flow log records](flow-log-records.md)
+ [Flow log record examples](flow-logs-records-examples.md)
+ [Flow log limitations](flow-logs-limitations.md)
+ [Pricing](#flow-logs-pricing)
+ [Work with flow logs](working-with-flow-logs.md)
+ [Publish flow logs to CloudWatch Logs](flow-logs-cwl.md)
+ [Publish flow logs to Amazon S3](flow-logs-s3.md)
+ [Publish flow logs to Amazon Data Firehose](flow-logs-firehose.md)
+ [Query flow logs using Amazon Athena](flow-logs-athena.md)
+ [Troubleshoot VPC Flow Logs](flow-logs-troubleshooting.md)

# Flow logs basics
<a name="flow-logs-basics"></a>

You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored. 

Flow log data for a monitored network interface is recorded as *flow log records*, which are log events consisting of fields that describe the traffic flow. For more information, see [Flow log records](flow-log-records.md).

To create a flow log, you specify:
+ The resource for which to create the flow log
+ The type of traffic to capture (accepted traffic, rejected traffic, or all traffic)
+ The destinations to which you want to publish the flow log data
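These three choices map directly onto parameters of the EC2 `CreateFlowLogs` API. As a minimal sketch, the following shows the keyword arguments you might pass to boto3's `create_flow_logs`; the subnet ID and bucket ARN are hypothetical placeholders:

```python
# Parameters for the EC2 CreateFlowLogs API call, mirroring the three
# choices above. The subnet ID and bucket ARN are placeholders.
params = {
    "ResourceType": "Subnet",                     # the resource to monitor
    "ResourceIds": ["subnet-0123456789abcdef0"],  # which subnet (hypothetical ID)
    "TrafficType": "ACCEPT",                      # ACCEPT, REJECT, or ALL
    "LogDestinationType": "s3",                   # publish to an S3 bucket
    "LogDestination": "arn:aws:s3:::my-flow-log-bucket",  # placeholder ARN
}

# With boto3 this would be invoked as follows (requires AWS credentials,
# so it is not run here):
# import boto3
# boto3.client("ec2").create_flow_logs(**params)
```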

In the following example, you create a flow log that captures accepted traffic for the network interface for one of the EC2 instances in a private subnet and publishes the flow log records to an Amazon S3 bucket.

![\[Flow logs for an instance\]](http://docs.aws.amazon.com/vpc/latest/userguide/images/flow-logs-diagram-s3.png)


In the following example, a flow log captures all traffic for a subnet and publishes the flow log records to Amazon CloudWatch Logs. The flow log captures traffic for all network interfaces in the subnet.

![\[Flow logs for a subnet\]](http://docs.aws.amazon.com/vpc/latest/userguide/images/flow-logs-diagram-cw.png)


After you create a flow log, it can take several minutes to begin collecting and publishing data to the chosen destinations. Flow logs do not capture real-time log streams for your network interfaces. For more information, see [2. Create a flow log](working-with-flow-logs.md#create-flow-log). 

If you launch an instance into your subnet after you create a flow log for your subnet or VPC, we create a log stream (for CloudWatch Logs) or log file object (for Amazon S3) for the new network interface as soon as there is network traffic for the network interface.

You can create flow logs for network interfaces that are created by other AWS services, such as:
+ Elastic Load Balancing
+ Amazon RDS
+ Amazon ElastiCache
+ Amazon Redshift
+ Amazon WorkSpaces
+ NAT gateways
+ Transit gateways

Regardless of the type of network interface, you must use the Amazon EC2 console or the Amazon EC2 API to create the flow log.

You can apply tags to your flow logs. Each tag consists of a key and an optional value, both of which you define. Tags can help you organize your flow logs, for example by purpose or owner.

If you no longer require a flow log, you can delete it. Deleting a flow log disables the flow log service for the resource, so that no new flow log records are created or published. Deleting a flow log does not delete any existing flow log data. After you delete a flow log, you can delete the flow log data directly from the destination when you are finished with it. For more information, see [4. Delete a flow log](working-with-flow-logs.md#delete-flow-log).

# Flow log records
<a name="flow-log-records"></a>

A flow log record represents a network flow in your VPC. By default, each record captures a network internet protocol (IP) traffic flow (characterized by a 5-tuple on a per network interface basis) that occurs within an *aggregation interval*, also referred to as a *capture window*.

Each record is a string with fields separated by spaces. A record includes values for the different components of the IP flow, for example, the source, destination, and protocol.

When you create a flow log, you can use the default format for the flow log record, or you can specify a custom format.

**Topics**
+ [Aggregation interval](#flow-logs-aggregration-interval)
+ [Default format](#flow-logs-default)
+ [Custom format](#flow-logs-custom)
+ [Available fields](#flow-logs-fields)

## Aggregation interval
<a name="flow-logs-aggregration-interval"></a>

The aggregation interval is the period of time during which a particular flow is captured and aggregated into a flow log record. By default, the maximum aggregation interval is 10 minutes. When you create a flow log, you can optionally specify a maximum aggregation interval of 1 minute. Flow logs with a maximum aggregation interval of 1 minute produce a higher volume of flow log records than flow logs with a maximum aggregation interval of 10 minutes.

When a network interface is attached to a [Nitro-based instance](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html), the aggregation interval is always 1 minute or less, regardless of the specified maximum aggregation interval.

After data is captured within an aggregation interval, it takes additional time to process and publish the data to CloudWatch Logs or Amazon S3. The flow log service typically delivers logs to CloudWatch Logs in about 5 minutes and to Amazon S3 in about 10 minutes. However, log delivery is on a best effort basis, and your logs might be delayed beyond the typical delivery time.
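To illustrate how an aggregation interval groups traffic, the following sketch buckets packet timestamps (Unix seconds) into fixed 60-second windows, assuming a hypothetical 1-minute maximum aggregation interval. The service's actual windowing is internal; this only shows why a shorter interval produces more records for the same traffic:

```python
from collections import defaultdict

def bucket_by_interval(timestamps, interval=60):
    """Group packet timestamps (Unix seconds) into fixed windows of
    `interval` seconds, returning {window_start: packet_count}."""
    windows = defaultdict(int)
    for ts in timestamps:
        windows[ts - ts % interval] += 1
    return dict(windows)

# Five packets spanning two 60-second windows produce two flow log records.
counts = bucket_by_interval([120, 130, 179, 181, 230])
print(counts)  # {120: 3, 180: 2}
```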

## Default format
<a name="flow-logs-default"></a>

With the default format, the flow log records include the version 2 fields, in the order shown in the [available fields](#flow-logs-fields) table. You cannot customize or change the default format. To capture additional fields or a different subset of fields, specify a custom format instead.

## Custom format
<a name="flow-logs-custom"></a>

With a custom format, you specify which fields are included in the flow log records and in which order. This enables you to create flow logs that are specific to your needs and to omit fields that are not relevant. Using a custom format can reduce the need for separate processes to extract specific information from the published flow logs. You can specify any number of the available flow log fields, but you must specify at least one.
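A custom format is specified as a space-separated list of `${field-id}` placeholders. As a sketch, the following pairs a hypothetical three-field format string with a small parser, so that each published record becomes a field-name-to-value mapping:

```python
def parse_record(log_format, record):
    """Map a space-separated flow log record onto the field names
    declared in a ${field-id}-style custom format string."""
    fields = [f.strip("${}") for f in log_format.split()]
    return dict(zip(fields, record.split()))

# A hypothetical three-field custom format and a matching record.
fmt = "${srcaddr} ${dstaddr} ${action}"
rec = parse_record(fmt, "172.31.16.139 172.31.16.21 ACCEPT")
print(rec)  # {'srcaddr': '172.31.16.139', 'dstaddr': '172.31.16.21', 'action': 'ACCEPT'}
```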

## Available fields
<a name="flow-logs-fields"></a>

The following table describes all of the available fields for a flow log record. The **Version** column indicates the VPC Flow Logs version in which the field was introduced. The default format includes all version 2 fields, in the same order that they appear in the table.

When publishing flow log data to Amazon S3, the data type for the fields depends on the flow log format. If the format is plain text, all fields are of type STRING. If the format is Parquet, see the table for the field data types.

If a field is not applicable or could not be computed for a specific record, the record displays a '-' symbol for that entry. Metadata fields that do not come directly from the packet header are best effort approximations, and their values might be missing or inaccurate.


| Field | Description | Version | 
| --- | --- | --- | 
|  version  |  The VPC Flow Logs version. If you use the default format, the version is 2. If you use a custom format, the version is the highest version among the specified fields. For example, if you specify only fields from version 2, the version is 2. If you specify a mixture of fields from versions 2, 3, and 4, the version is 4. **Parquet data type:** INT_32  | 2 | 
|  account-id  |  The AWS account ID of the owner of the source network interface for which traffic is recorded. If the network interface is created by an AWS service, for example when creating a VPC endpoint or Network Load Balancer, the record might display unknown for this field. **Parquet data type:** STRING  | 2 | 
|  interface-id  |  The ID of the network interface for which the traffic is recorded. Returns a '-' symbol for flows associated with a regional NAT gateway. **Parquet data type:** STRING  | 2 | 
|  srcaddr  |   For incoming traffic, this is the IP address of the source of the traffic. For outgoing traffic, this is the private IPv4 address or the IPv6 address of the network interface sending the traffic. For outgoing traffic from a regional NAT gateway, this is the same packet-level source IP address as in pkt-srcaddr. See also pkt-srcaddr. **Parquet data type:** STRING  | 2 | 
|  dstaddr  |  The destination address for outgoing traffic, or the IPv4 or IPv6 address of the network interface for incoming traffic on the network interface. The IPv4 address of the network interface is always its private IPv4 address. For incoming traffic to a regional NAT gateway, this is the same packet-level destination IP address as in pkt-dstaddr. See also pkt-dstaddr. **Parquet data type:** STRING  | 2 | 
|  srcport  |  The source port of the traffic. **Parquet data type:** INT_32  | 2 | 
|  dstport  |  The destination port of the traffic. **Parquet data type:** INT_32  | 2 | 
|  protocol  |  The IANA protocol number of the traffic. For more information, see [Assigned Internet Protocol Numbers](http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). **Parquet data type:** INT_32  | 2 | 
|  packets  |  The number of packets transferred during the flow. **Parquet data type:** INT_64  | 2 | 
|  bytes  |  The number of bytes transferred during the flow. **Parquet data type:** INT_64  | 2 | 
|  start  |  The time, in Unix seconds, when the first packet of the flow was received within the aggregation interval. This might be up to 60 seconds after the packet was transmitted or received on the network interface. **Parquet data type:** INT_64  | 2 | 
|  end  |  The time, in Unix seconds, when the last packet of the flow was received within the aggregation interval. This might be up to 60 seconds after the packet was transmitted or received on the network interface. **Parquet data type:** INT_64  | 2 | 
|  action  |  The action that is associated with the traffic: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/flow-log-records.html) **Parquet data type:** STRING  | 2 | 
|  log-status  |  The logging status of the flow log: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/flow-log-records.html) **Parquet data type:** STRING  | 2 | 
|  vpc-id  |  The ID of the VPC that contains the network interface for which the traffic is recorded. **Parquet data type:** STRING  | 3 | 
|  subnet-id  |  The ID of the subnet that contains the network interface for which the traffic is recorded. Returns a '-' symbol for flows associated with regional NAT gateway. **Parquet data type:** STRING  | 3 | 
|  instance-id  |  The ID of the instance that's associated with the network interface for which the traffic is recorded, if the instance is owned by you. Returns a '-' symbol for a [requester-managed network interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/requester-managed-eni.html); for example, the network interface for a NAT gateway. **Parquet data type:** STRING  | 3 | 
|  tcp-flags  | The bitmask value for the following TCP flags: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/flow-log-records.html) If no supported flags are recorded, the TCP flag value is 0. For example, because tcp-flags does not support logging the ACK or PSH flags, records for traffic with only these unsupported flags have a tcp-flags value of 0. If an unsupported flag accompanies a supported flag, only the value of the supported flag is reported. For example, if ACK is part of SYN-ACK, the value is 18; for SYN+ECE, because SYN is supported and ECE is not, the value is 2. If the flag combination is invalid and the value cannot be calculated, the value is '-'. TCP flags can be OR-ed during the aggregation interval. For short connections, the flags might be set on the same line in the flow log record, for example, 19 for SYN-ACK and FIN, and 3 for SYN and FIN. For an example, see [TCP flag sequence](flow-logs-records-examples.md#flow-log-example-tcp-flag). For general information about TCP flags (such as the meaning of FIN, SYN, and ACK), see [TCP segment structure](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure) on Wikipedia. **Parquet data type:** INT_32 | 3 | 
|  type  |  The type of traffic. The possible values are: IPv4 \| IPv6 \| EFA. For more information, see [Elastic Fabric Adapter](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html). **Parquet data type:** STRING  | 3 | 
|  pkt-srcaddr  |  The packet-level (original) source IP address of the traffic. Use this field with the srcaddr field to distinguish between the IP address of an intermediate layer through which traffic flows, and the original source IP address of the traffic. For example, when traffic flows through [a network interface for a NAT gateway](flow-logs-records-examples.md#flow-log-example-nat), or where the IP address of a pod in Amazon EKS is different from the IP address of the network interface of the instance node on which the pod is running (for communication within a VPC). **Parquet data type:** STRING  | 3 | 
|  pkt-dstaddr  |  The packet-level (original) destination IP address for the traffic. Use this field with the dstaddr field to distinguish between the IP address of an intermediate layer through which traffic flows, and the final destination IP address of the traffic. For example, when traffic flows through [a network interface for a NAT gateway](flow-logs-records-examples.md#flow-log-example-nat), or where the IP address of a pod in Amazon EKS is different from the IP address of the network interface of the instance node on which the pod is running (for communication within a VPC). **Parquet data type:** STRING  | 3 | 
|  region  |  The Region that contains the network interface for which traffic is recorded. **Parquet data type:** STRING  |  4  | 
|  az-id  |  The ID of the Availability Zone that contains the network interface for which traffic is recorded. If the traffic is from a sublocation, the record displays a '-' symbol for this field. **Parquet data type:** STRING  |  4  | 
|  sublocation-type  |  The type of sublocation that's returned in the sublocation-id field. The possible values are: [wavelength](https://aws.amazon.com/wavelength/) \| [outpost](https://docs.aws.amazon.com/outposts/latest/userguide/) \| [localzone](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-local-zones). If the traffic is not from a sublocation, the record displays a '-' symbol for this field. **Parquet data type:** STRING  |  4  | 
|  sublocation-id  |  The ID of the sublocation that contains the network interface for which traffic is recorded. If the traffic is not from a sublocation, the record displays a '-' symbol for this field. **Parquet data type:** STRING  |  4  | 
|  pkt-src-aws-service  |  The name of the subset of [IP address ranges](aws-ip-ranges.md) for the pkt-srcaddr field, if the source IP address is for an AWS service. If the source IP address belongs to an [overlapped range](aws-ip-syntax.md#aws-ip-range-overlaps), pkt-src-aws-service shows only one of the AWS service codes. The possible values are: `AMAZON` \| `AMAZON_APPFLOW` \| `AMAZON_CONNECT` \| `API_GATEWAY` \| `AURORA_DSQL` \| `CHIME_MEETINGS` \| `CHIME_VOICECONNECTOR` \| `CLOUD9` \| `CLOUDFRONT` \| `CLOUDFRONT_ORIGIN_FACING` \| `CODEBUILD` \| `DYNAMODB` \| `EBS` \| `EC2` \| `EC2_INSTANCE_CONNECT` \| `GLOBALACCELERATOR` \| `IVS_LOW_LATENCY` \| `IVS_REALTIME` \| `KINESIS_VIDEO_STREAMS` \| `MEDIA_PACKAGE_V2` \| `ROUTE53` \| `ROUTE53_HEALTHCHECKS` \| `ROUTE53_HEALTHCHECKS_PUBLISHING` \| `ROUTE53_RESOLVER` \| `S3` \| `WORKSPACES_GATEWAYS`. **Parquet data type:** STRING  |  5  | 
|  pkt-dst-aws-service  |  The name of the subset of IP address ranges for the pkt-dstaddr field, if the destination IP address is for an AWS service. For a list of possible values, see the pkt-src-aws-service field. **Parquet data type:** STRING  |  5  | 
|  flow-direction  |  The direction of the flow with respect to the interface where traffic is captured. The possible values are: ingress \| egress. **Parquet data type:** STRING  |  5  | 
|  traffic-path  |  The path that egress traffic takes to the destination. To determine whether the traffic is egress traffic, check the flow-direction field. The possible values are as follows. If none of the values apply, the field is set to -.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/flow-log-records.html) **Parquet data type:** INT_32  |  5  | 
|  ecs-cluster-arn  | Amazon Resource Name (ARN) of the ECS cluster if the traffic is from a running ECS task. To include this field in your subscription, you need permission to call ecs:ListClusters. **Parquet data type:** STRING |  7  | 
|  ecs-cluster-name  | Name of the ECS cluster if the traffic is from a running ECS task. To include this field in your subscription, you need permission to call ecs:ListClusters. **Parquet data type:** STRING |  7  | 
|  ecs-container-instance-arn  | ARN of the ECS container instance if the traffic is from a running ECS task on an EC2 instance. If the capacity provider is AWS Fargate, this field is '-'. To include this field in your subscription, you need permission to call ecs:ListClusters and ecs:ListContainerInstances. **Parquet data type:** STRING |  7  | 
|  ecs-container-instance-id  | ID of the ECS container instance if the traffic is from a running ECS task on an EC2 instance. If the capacity provider is AWS Fargate, this field is '-'. To include this field in your subscription, you need permission to call ecs:ListClusters and ecs:ListContainerInstances. **Parquet data type:** STRING |  7  | 
|  ecs-container-id  | Docker runtime ID of the container if the traffic is from a running ECS task. If there are one or more containers in the ECS task, this is the Docker runtime ID of the first container. To include this field in your subscription, you need permission to call ecs:ListClusters. **Parquet data type:** STRING |  7  | 
|  ecs-second-container-id  | Docker runtime ID of the container if the traffic is from a running ECS task. If there is more than one container in the ECS task, this is the Docker runtime ID of the second container. To include this field in your subscription, you need permission to call ecs:ListClusters. **Parquet data type:** STRING |  7  | 
|  ecs-service-name  | Name of the ECS service if the traffic is from a running ECS task and the ECS task is started by an ECS service. If the ECS task is not started by an ECS service, this field is '-'. To include this field in your subscription, you need permission to call ecs:ListClusters and ecs:ListServices. **Parquet data type:** STRING |  7  | 
|  ecs-task-definition-arn  | ARN of the ECS task definition if the traffic is from a running ECS task. To include this field in your subscription, you need permission to call ecs:ListClusters and ecs:ListTaskDefinitions. **Parquet data type:** STRING |  7  | 
|  ecs-task-arn  | ARN of the ECS task if the traffic is from a running ECS task. To include this field in your subscription, you need permission to call ecs:ListClusters and ecs:ListTasks. **Parquet data type:** STRING |  7  | 
|  ecs-task-id  | ID of the ECS task if the traffic is from a running ECS task. To include this field in your subscription, you need permission to call ecs:ListClusters and ecs:ListTasks. **Parquet data type:** STRING |  7  | 
|  reject-reason  |  Reason why traffic was rejected. Possible values: BPA, EC. Returns a '-' for any other reject reason. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/flow-log-records.html) **Parquet data type:** STRING  |  8  | 
|  resource-id  | The ID of the regional NAT gateway that contains the network interface for which the traffic is recorded. Returns a '-' symbol for traffic flows not associated with a regional NAT gateway. For more information about regional NAT gateways, see [Regional NAT gateways for automatic multi-AZ expansion](nat-gateways-regional.md). **Parquet data type:** STRING  |  9  | 
|  encryption-status  |  Encryption status of the flow. For more information about VPC Encryption Controls, see [Enforce VPC encryption in transit](vpc-encryption-controls.md). The possible values are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/flow-log-records.html) The value is '-' if VPC Encryption Controls is not enabled, or if the flow log service cannot get the status. For interface and gateway endpoints, AWS does not inspect packet data to determine encryption status; instead, the port used is relied on to assume encryption status. For specified AWS managed endpoints, AWS determines encryption status based on the requirement for TLS in the service configuration. **Parquet data type:** INT_32  |  10  | 

# Flow log record examples
<a name="flow-logs-records-examples"></a>

The following are examples of flow log records that capture specific traffic flows.

For information about flow log record format, see [Flow log records](flow-log-records.md). For information about how to create flow logs, see [Work with flow logs](working-with-flow-logs.md).

**Topics**
+ [Accepted and rejected traffic](#flow-log-example-accepted-rejected)
+ [No data and skipped records](#flow-log-example-no-data)
+ [Security group and network ACL rules](#flow-log-example-security-groups)
+ [IPv6 traffic](#flow-log-example-ipv6)
+ [TCP flag sequence](#flow-log-example-tcp-flag)
+ [Traffic through a zonal NAT gateway](#flow-log-example-nat)
+ [Traffic through a regional NAT gateway](#flow-log-example-regional-nat)
+ [Traffic through a transit gateway](#flow-log-example-tgw)
+ [Service name, traffic path, and flow direction](#flow-log-example-traffic-path)

## Accepted and rejected traffic
<a name="flow-log-example-accepted-rejected"></a>

The following are examples of default flow log records.

In this example, SSH traffic (destination port 22, TCP protocol) from IP address 172.31.16.139 to network interface eni-1235b8ca123456789 (private IP address 172.31.16.21) in account 123456789010 was allowed.

```
2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK
```
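Because default-format records list the version 2 fields in a fixed order, a record can be split on spaces and zipped against the field list. A minimal sketch that parses the record above (field names taken from the available-fields table):

```python
# The version 2 fields, in default-format order.
V2_FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

record = ("2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
parsed = dict(zip(V2_FIELDS, record.split()))

print(parsed["dstport"], parsed["protocol"], parsed["action"])  # 22 6 ACCEPT
```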

In this example, RDP traffic (destination port 3389, TCP protocol) to network interface eni-1235b8ca123456789 in account 123456789010 was rejected.

```
2 123456789010 eni-1235b8ca123456789 172.31.9.69 172.31.9.12 49761 3389 6 20 4249 1418530010 1418530070 REJECT OK
```

## No data and skipped records
<a name="flow-log-example-no-data"></a>

The following are examples of default flow log records.

In this example, no data was recorded during the aggregation interval.

```
2 123456789010 eni-1235b8ca123456789 - - - - - - - 1431280876 1431280934 - NODATA
```

VPC Flow Logs skips records when it can't capture flow log data during an aggregation interval because it exceeds internal capacity. A single skipped record can represent multiple flows that were not captured for the network interface during the aggregation interval.

```
2 123456789010 eni-11111111aaaaaaaaa - - - - - - - 1431280876 1431280934 - SKIPDATA
```

**Note**  
Some flow log records may be skipped during the aggregation interval (see *log-status* in [Available fields](flow-log-records.md#flow-logs-fields)). This may be caused by an internal AWS capacity constraint or internal error. If you are using AWS Cost Explorer to view VPC flow log charges and some flow logs are skipped during the flow log aggregation interval, the number of flow logs reported in AWS Cost Explorer will be higher than the number of flow logs published by Amazon VPC.

## Security group and network ACL rules
<a name="flow-log-example-security-groups"></a>

If you're using flow logs to diagnose overly restrictive or permissive security group rules or network ACL rules, be aware of the statefulness of these resources. Security groups are stateful: responses to allowed traffic are also allowed, even if the rules in your security group do not permit the response. Conversely, network ACLs are stateless, so responses to allowed traffic are subject to network ACL rules.

For example, you use the **ping** command from your home computer (IP address is 203.0.113.12) to your instance (the network interface's private IP address is 172.31.16.139). Your security group's inbound rules allow ICMP traffic but the outbound rules do not allow ICMP traffic. Because security groups are stateful, the response ping from your instance is allowed. Your network ACL permits inbound ICMP traffic but does not permit outbound ICMP traffic. Because network ACLs are stateless, the response ping is dropped and does not reach your home computer. In a default flow log, this is displayed as two flow log records:
+ An ACCEPT record for the originating ping that was allowed by both the network ACL and the security group, and therefore was allowed to reach your instance.
+ A REJECT record for the response ping that the network ACL denied.

```
2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK
```

```
2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK
```

If your network ACL permits outbound ICMP traffic, the flow log displays two ACCEPT records (one for the originating ping and one for the response ping). If your security group denies inbound ICMP traffic, the flow log displays a single REJECT record, because the traffic was not permitted to reach your instance.

## IPv6 traffic
<a name="flow-log-example-ipv6"></a>

The following is an example of a default flow log record. In the example, SSH traffic (port 22) from IPv6 address 2001:db8:1234:a100:8d6e:3477:df66:f105 to network interface eni-1235b8ca123456789 in account 123456789010 was allowed.

```
2 123456789010 eni-1235b8ca123456789 2001:db8:1234:a100:8d6e:3477:df66:f105 2001:db8:1234:a102:3304:8879:34cf:4071 34892 22 6 54 8855 1477913708 1477913820 ACCEPT OK
```

## TCP flag sequence
<a name="flow-log-example-tcp-flag"></a>

This section contains examples of custom flow logs that capture the following fields in the following order.

```
version vpc-id subnet-id instance-id interface-id account-id type srcaddr dstaddr srcport dstport pkt-srcaddr pkt-dstaddr protocol bytes packets start end action tcp-flags log-status
```

The tcp-flags field in the examples in this section is represented by the second-to-last value in the flow log record. TCP flags can help you identify the direction of the traffic, for example, which server initiated the connection.

**Note**  
For more information about the tcp-flags option and an explanation of each of the TCP flags, see [Available fields](flow-log-records.md#flow-logs-fields).

In the following records (starting at 7:47:55 PM and ending at 7:48:53 PM), a client started two connections to a server running on port 5001. The server received two SYN flags (2) from the client, from two different source ports on the client (43416 and 43418). For each SYN, the server sent a SYN-ACK (18) to the client on the corresponding port.

```
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 52.213.180.42 10.0.0.62 43416 5001 52.213.180.42 10.0.0.62 6 568 8 1566848875 1566848933 ACCEPT 2 OK
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 10.0.0.62 52.213.180.42 5001 43416 10.0.0.62 52.213.180.42 6 376 7 1566848875 1566848933 ACCEPT 18 OK
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 52.213.180.42 10.0.0.62 43418 5001 52.213.180.42 10.0.0.62 6 100701 70 1566848875 1566848933 ACCEPT 2 OK
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 10.0.0.62 52.213.180.42 5001 43418 10.0.0.62 52.213.180.42 6 632 12 1566848875 1566848933 ACCEPT 18 OK
```

In the second aggregation interval, one of the connections that was established during the previous interval is now closed. The server sent a FIN flag (1) to the client for the connection on port 43418, and the client responded with a FIN on the same port.

```
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 10.0.0.62 52.213.180.42 5001 43418 10.0.0.62 52.213.180.42 6 63388 1219 1566848933 1566849113 ACCEPT 1 OK
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 52.213.180.42 10.0.0.62 43418 5001 52.213.180.42 10.0.0.62 6 23294588 15774 1566848933 1566849113 ACCEPT 1 OK
```

For short connections (for example, a few seconds) that are opened and closed within a single aggregation interval, the flags might be set on the same line in the flow log record for traffic flow in the same direction. In the following example, the connection is established and finished within the same aggregation interval. In the first line, the TCP flag value is 3, which indicates that there was a SYN and a FIN message sent from the client to the server. In the second line, the TCP flag value is 19, which indicates that there was SYN-ACK and a FIN message sent from the server to the client.

```
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 52.213.180.42 10.0.0.62 43638 5001 52.213.180.42 10.0.0.62 6 1260 17 1566933133 1566933193 ACCEPT 3 OK
3 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-01234567890123456 eni-1235b8ca123456789 123456789010 IPv4 10.0.0.62 52.213.180.42 5001 43638  10.0.0.62 52.213.180.42 6 967 14 1566933133 1566933193 ACCEPT 19 OK
```
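The tcp-flags value in these records is cumulative for the aggregation interval: FIN contributes 1, SYN contributes 2, and the ACK of a SYN-ACK contributes 16, which is why a SYN-ACK appears as 18 and SYN plus FIN as 3. A minimal decoding sketch, assuming those bit values:

```python
# Decode the cumulative tcp-flags value from a VPC flow log record.
# FIN contributes 1, SYN contributes 2, and the ACK of a SYN-ACK
# contributes 16, so values are bitwise ORs of these bits.
TCP_FLAG_BITS = {"FIN": 1, "SYN": 2, "SYN-ACK": 16}

def decode_tcp_flags(value):
    """Return the names of the flags set in a cumulative tcp-flags value."""
    return [name for name, bit in TCP_FLAG_BITS.items() if value & bit]

print(decode_tcp_flags(2))   # ['SYN']: client opened the connection
print(decode_tcp_flags(18))  # ['SYN', 'SYN-ACK']: server accepted it
print(decode_tcp_flags(3))   # ['FIN', 'SYN']: opened and closed in one interval
print(decode_tcp_flags(19))  # ['FIN', 'SYN', 'SYN-ACK']
```

For example, a record whose tcp-flags value is 19 covers the full lifetime of a short connection on the server side: SYN-ACK to accept it and FIN to close it.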

## Traffic through a zonal NAT gateway
<a name="flow-log-example-nat"></a>

In this example, an instance in a private subnet accesses the internet through a zonal NAT gateway that's in a public subnet.

![Accessing the internet through a zonal NAT gateway](http://docs.aws.amazon.com/vpc/latest/userguide/images/flow-log-nat-gateway.png)


The following custom flow log for the zonal NAT gateway network interface captures these fields, in this order.

```
instance-id interface-id srcaddr dstaddr pkt-srcaddr pkt-dstaddr
```

The flow log shows the flow of traffic from the instance IP address (10.0.1.5) through the zonal NAT gateway network interface to a host on the internet (203.0.113.5). The zonal NAT gateway network interface is a requester-managed network interface, so the flow log record displays a '-' symbol for the instance-id field. The following line shows traffic from the source instance to the zonal NAT gateway network interface. The values for the dstaddr and pkt-dstaddr fields are different: the dstaddr field displays the private IP address of the zonal NAT gateway network interface, and the pkt-dstaddr field displays the final destination IP address of the host on the internet.

```
- eni-1235b8ca123456789 10.0.1.5 10.0.0.220 10.0.1.5 203.0.113.5
```

The next two lines show the traffic from the zonal NAT gateway network interface to the target host on the internet, and the response traffic from the host to the NAT gateway network interface.

```
- eni-1235b8ca123456789 10.0.0.220 203.0.113.5 10.0.0.220 203.0.113.5
- eni-1235b8ca123456789 203.0.113.5 10.0.0.220 203.0.113.5 10.0.0.220
```

The following line shows the response traffic from the zonal NAT gateway network interface to the source instance. The values for the srcaddr and pkt-srcaddr fields are different. The srcaddr field displays the private IP address of the zonal NAT gateway network interface, and the pkt-srcaddr field displays the IP address of the host on the internet.

```
- eni-1235b8ca123456789 10.0.0.220 10.0.1.5 203.0.113.5 10.0.1.5
```

You create another custom flow log using the same set of fields as above. You create the flow log for the network interface for the instance in the private subnet. In this case, the instance-id field returns the ID of the instance that's associated with the network interface, and there is no difference between the dstaddr and pkt-dstaddr fields and the srcaddr and pkt-srcaddr fields. Unlike the network interface for the zonal NAT gateway, this network interface is not an intermediate network interface for traffic.

```
i-01234567890123456 eni-1111aaaa2222bbbb3 10.0.1.5 203.0.113.5 10.0.1.5 203.0.113.5 #Traffic from the source instance to host on the internet
i-01234567890123456 eni-1111aaaa2222bbbb3 203.0.113.5 10.0.1.5 203.0.113.5 10.0.1.5 #Response traffic from host on the internet to the source instance
```
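The pattern above, where dstaddr differs from pkt-dstaddr on the NAT gateway interface but not on the instance interface, can be checked programmatically to find records from intermediate network interfaces. A minimal sketch, assuming the six-field custom format used above; the function name is illustrative:

```python
# Fields of the custom format used in this example:
# instance-id interface-id srcaddr dstaddr pkt-srcaddr pkt-dstaddr
FIELDS = ["instance-id", "interface-id", "srcaddr", "dstaddr",
          "pkt-srcaddr", "pkt-dstaddr"]

def is_intermediate_hop(line):
    """True when the interface forwarded traffic for another address,
    i.e. an enveloped (pkt-*) address differs from the flow address."""
    r = dict(zip(FIELDS, line.split()))
    return r["dstaddr"] != r["pkt-dstaddr"] or r["srcaddr"] != r["pkt-srcaddr"]

# Instance to NAT gateway interface: dstaddr is the NAT gateway interface,
# pkt-dstaddr is the internet host, so this record is an intermediate hop.
print(is_intermediate_hop(
    "- eni-1235b8ca123456789 10.0.1.5 10.0.0.220 10.0.1.5 203.0.113.5"))  # True

# On the instance's own interface the flow and packet addresses match.
print(is_intermediate_hop(
    "i-01234567890123456 eni-1111aaaa2222bbbb3 10.0.1.5 203.0.113.5 "
    "10.0.1.5 203.0.113.5"))  # False
```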

## Traffic through a regional NAT gateway
<a name="flow-log-example-regional-nat"></a>

A regional NAT gateway can connect to multiple subnets across different Availability Zones. In this example, two instances in private subnets from two different Availability Zones access the internet through the same regional NAT gateway. The following flow logs show traffic from one of the instances to the internet through the regional NAT gateway.

![Accessing the internet through a regional NAT gateway](http://docs.aws.amazon.com/vpc/latest/userguide/images/flow-log-regional-nat-gateway.png)


The following custom flow log for the regional NAT gateway captures these fields, in this order.

```
resource-id instance-id interface-id subnet-id srcaddr dstaddr pkt-srcaddr pkt-dstaddr
```

The flow log shows the flow of traffic from the instance IP address (10.0.1.5) through the regional NAT gateway to a host on the internet (203.0.113.5). instance-id, interface-id, and subnet-id don’t apply to the regional NAT gateway. Therefore, the flow log record displays a '-' symbol for these fields. Instead, the resource-id field displays the ID of the regional NAT gateway. The dstaddr and pkt-dstaddr fields display the final destination IP address of the host on the internet.

```
nat-1234567890abcdef - - - 10.0.1.5 203.0.113.5 10.0.1.5 203.0.113.5
```

The next two lines show the traffic from the regional NAT gateway (public IP address 107.22.182.139) to the target host on the internet, and the response traffic from the host to the regional NAT gateway.

```
nat-1234567890abcdef - - - 107.22.182.139 203.0.113.5 107.22.182.139 203.0.113.5
nat-1234567890abcdef - - - 203.0.113.5 107.22.182.139 203.0.113.5 107.22.182.139
```

The following line shows the response traffic from the regional NAT gateway to the source instance. The srcaddr and pkt-srcaddr fields display the IP address of the host on the internet.

```
nat-1234567890abcdef - - - 203.0.113.5 10.0.1.5 203.0.113.5 10.0.1.5
```

You create another custom flow log using the same set of fields as above. You create the flow log for the network interface for the instance in the private subnet. In this case, the instance-id field returns the ID of the instance that's associated with the network interface, and the resource-id is '-'. There is no difference between the dstaddr and pkt-dstaddr fields and the srcaddr and pkt-srcaddr fields.

```
- i-01234567890123456 eni-1111aaaa2222bbbb3 subnet-aaaaaaaa012345678 10.0.1.5 203.0.113.5 10.0.1.5 203.0.113.5 #Traffic from the source instance to host on the internet
- i-01234567890123456 eni-1111aaaa2222bbbb3 subnet-aaaaaaaa012345678 203.0.113.5 10.0.1.5 203.0.113.5 10.0.1.5 #Response traffic from host on the internet to the source instance
```

## Traffic through a transit gateway
<a name="flow-log-example-tgw"></a>

In this example, a client in VPC A connects to a web server in VPC B through a transit gateway. The client and server are in different Availability Zones. Traffic arrives at the server in VPC B using one elastic network interface ID (in this example, let's say the ID is eni-11111111111111111) and leaves VPC B using another (for example eni-22222222222222222).

![Traffic through a transit gateway](http://docs.aws.amazon.com/vpc/latest/userguide/images/flow-log-tgw.png)


You create a custom flow log for VPC B with the following format.

```
version interface-id account-id vpc-id subnet-id instance-id srcaddr dstaddr srcport dstport protocol tcp-flags type pkt-srcaddr pkt-dstaddr action log-status
```

The following lines from the flow log records demonstrate the flow of traffic on the network interface for the web server. The first line is the request traffic from the client, and the last line is the response traffic from the web server.

```
3 eni-33333333333333333 123456789010 vpc-abcdefab012345678 subnet-22222222bbbbbbbbb i-01234567890123456 10.20.33.164 10.40.2.236 39812 80 6 3 IPv4 10.20.33.164 10.40.2.236 ACCEPT OK
...
3 eni-33333333333333333 123456789010 vpc-abcdefab012345678 subnet-22222222bbbbbbbbb i-01234567890123456 10.40.2.236 10.20.33.164 80 39812 6 19 IPv4 10.40.2.236 10.20.33.164 ACCEPT OK
```

The following line is the request traffic on eni-11111111111111111, a requester-managed network interface for the transit gateway in subnet subnet-11111111aaaaaaaaa. The flow log record therefore displays a '-' symbol for the instance-id field. The srcaddr field displays the private IP address of the transit gateway network interface, and the pkt-srcaddr field displays the source IP address of the client in VPC A.

```
3 eni-11111111111111111 123456789010 vpc-abcdefab012345678 subnet-11111111aaaaaaaaa - 10.40.1.175 10.40.2.236 39812 80 6 3 IPv4 10.20.33.164 10.40.2.236 ACCEPT OK
```

The following line is the response traffic on eni-22222222222222222, a requester-managed network interface for the transit gateway in subnet subnet-22222222bbbbbbbbb. The dstaddr field displays the private IP address of the transit gateway network interface, and the pkt-dstaddr field displays the IP address of the client in VPC A.

```
3 eni-22222222222222222 123456789010 vpc-abcdefab012345678 subnet-22222222bbbbbbbbb - 10.40.2.236 10.40.2.31 80 39812 6 19 IPv4 10.40.2.236 10.20.33.164 ACCEPT OK
```

## Service name, traffic path, and flow direction
<a name="flow-log-example-traffic-path"></a>

The following is an example of the fields for a custom flow log record.

```
version srcaddr dstaddr srcport dstport protocol start end type packets bytes account-id vpc-id subnet-id instance-id interface-id region az-id sublocation-type sublocation-id action tcp-flags pkt-srcaddr pkt-dstaddr pkt-src-aws-service pkt-dst-aws-service traffic-path flow-direction log-status
```

In the following example, the version is 5 because the records include version 5 fields. An EC2 instance calls the Amazon S3 service. Flow logs are captured on the network interface for the instance. The first record has a flow direction of ingress and the second record has a flow direction of egress. For the egress record, traffic-path is 8, indicating that the traffic goes through an internet gateway. The traffic-path field is not supported for ingress traffic. When pkt-srcaddr or pkt-dstaddr is a public IP address of an AWS service, the service name is shown in the corresponding pkt-src-aws-service or pkt-dst-aws-service field.

```
5 52.95.128.179 10.0.0.71 80 34210 6 1616729292 1616729349 IPv4 14 15044 123456789012 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-0c50d5961bcb2d47b eni-1235b8ca123456789 ap-southeast-2 apse2-az3 - - ACCEPT 19 52.95.128.179 10.0.0.71 S3 - - ingress OK
5 10.0.0.71 52.95.128.179 34210 80 6 1616729292 1616729349 IPv4 7 471 123456789012 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 i-0c50d5961bcb2d47b eni-1235b8ca123456789 ap-southeast-2 apse2-az3 - - ACCEPT 3 10.0.0.71 52.95.128.179 - S3 8 egress OK
```
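Records in this format can be mapped back to field names to pull out the service name, traffic path, and flow direction. A minimal sketch, assuming the version 5 format above; the helper name is illustrative:

```python
# Field names of the custom version 5 format shown above.
V5_FORMAT = ("version srcaddr dstaddr srcport dstport protocol start end "
             "type packets bytes account-id vpc-id subnet-id instance-id "
             "interface-id region az-id sublocation-type sublocation-id "
             "action tcp-flags pkt-srcaddr pkt-dstaddr pkt-src-aws-service "
             "pkt-dst-aws-service traffic-path flow-direction log-status").split()

def summarize(line):
    """Return (flow direction, AWS service name on the far end, traffic path)."""
    r = dict(zip(V5_FORMAT, line.split()))
    # For egress the service is the packet destination; for ingress, the source.
    service = (r["pkt-dst-aws-service"] if r["flow-direction"] == "egress"
               else r["pkt-src-aws-service"])
    return r["flow-direction"], service, r["traffic-path"]

egress = ("5 10.0.0.71 52.95.128.179 34210 80 6 1616729292 1616729349 IPv4 "
          "7 471 123456789012 vpc-abcdefab012345678 subnet-aaaaaaaa012345678 "
          "i-0c50d5961bcb2d47b eni-1235b8ca123456789 ap-southeast-2 apse2-az3 "
          "- - ACCEPT 3 10.0.0.71 52.95.128.179 - S3 8 egress OK")
print(summarize(egress))  # ('egress', 'S3', '8')
```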

# Flow log limitations
<a name="flow-logs-limitations"></a>

To use flow logs, you need to be aware of the following limitations:
+ After you create a flow log, you won't see flow log data until there is active traffic for the network interface, subnet, or VPC that you selected.
+ You can't enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account.
+ After you create a flow log, you can't change its configuration or the flow log record format. For example, you can't associate a different IAM role with the flow log, or add or remove fields in the flow log record. Instead, you can delete the flow log and create a new one with the required configuration. 
+ If your network interface has multiple IPv4 addresses and traffic is sent to a secondary private IPv4 address, the flow log displays the primary private IPv4 address in the `dstaddr` field. To capture the original destination IP address, create a flow log with the `pkt-dstaddr` field.
+ If traffic is sent to a network interface and the destination is not any of the network interface's IP addresses, the flow log displays the primary private IPv4 address in the `dstaddr` field. To capture the original destination IP address, create a flow log with the `pkt-dstaddr` field.
+ If traffic is sent from a network interface and the source is not any of the network interface's IP addresses, the flow log displays the primary private IPv4 address in the `srcaddr` field for egress flows. To capture the original source IP address, create a flow log with the `pkt-srcaddr` field. For ingress flows into the network interface, the `srcaddr` field does not show the primary private IP address of the network interface.
+ When your network interface is attached to a [Nitro-based instance](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html), the aggregation interval is always 1 minute or less, regardless of the specified maximum aggregation interval.
+ For the `pkt-srcaddr` and `pkt-dstaddr` fields, if an intermediate layer has client IP address preservation enabled, these fields may show the preserved client IP address instead of the IP address of the intermediate layer.
+ For the `traffic-path` field, the value is the same for flows through resources in the same VPC and flows going through an Outpost local gateway.
+ Some flow log records may be skipped during the aggregation interval (see *log-status* in [Available fields](flow-log-records.md#flow-logs-fields)). This may be caused by an internal AWS capacity constraint or internal error. If you are using AWS Cost Explorer to view VPC flow log charges and some flow logs are skipped during the flow log aggregation interval, the number of flow logs reported in AWS Cost Explorer will be higher than the number of flow logs published by Amazon VPC.
+ If you are using [VPC Block Public Access (BPA)](security-vpc-bpa-assess-impact-main.md#security-vpc-bpa-fl):
  + Flow logs for VPC BPA do not include [skipped records](flow-logs-records-examples.md#flow-log-example-no-data).
  + Flow logs for VPC BPA do not include [`bytes`](flow-log-records.md#flow-logs-fields) even if you include the `bytes` field in your flow log.
+ VPC Flow Logs supports a maximum of 250 subscriptions per resource per account. To create additional subscriptions on a resource that has reached this limit, you must first delete existing subscriptions.

Flow logs do not capture all IP traffic. The following types of traffic are not logged:
+ Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged. 
+ Traffic generated by a Windows instance for Amazon Windows license activation.
+ Traffic to and from `169.254.169.254` for instance metadata.
+ Traffic to and from `169.254.169.123` for the Amazon Time Sync Service.
+ DHCP traffic.
+ [Mirrored traffic](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-how-it-works.html) at a traffic mirror source. Mirrored traffic appears only in the flow logs for the traffic mirror target.
+ Traffic to the reserved IP address for the default VPC router.
+ Traffic between an endpoint network interface and a Network Load Balancer network interface.
+ Address Resolution Protocol (ARP) traffic.
+ Traffic on a short-lived regional NAT gateway, which is deleted a few minutes after creation.

Limitations specific to ECS fields available in version 7:
+ ECS fields are not computed if the underlying ECS tasks are not owned by the owner of the flow log subscription. For example, suppose you share a subnet (`SubnetA`) with another account (`AccountB`) and create a flow log subscription for `SubnetA`. If `AccountB` launches ECS tasks in the shared subnet, your subscription receives traffic logs from those tasks, but for security reasons the ECS fields for these logs are not computed.
+ If you create flow log subscriptions with ECS fields at the VPC or subnet level, traffic generated by non-ECS network interfaces is also delivered to your subscriptions, with a value of '-' for the ECS fields. For example, suppose you create a flow log subscription (`fl-00000000`) with ECS fields for a subnet (`subnet-000000`). In that subnet, you launch an EC2 instance (`i-0000000`) that is connected to the internet and actively generating IP traffic, and you also launch an ECS task (`ECS-Task-1`). Because both are generating IP traffic, `fl-00000000` delivers traffic logs for both entities. However, only `ECS-Task-1` has actual ECS metadata for the ECS fields in your log format; for traffic related to `i-0000000`, these fields have a value of '-'.
+ `ecs-container-id` and `ecs-second-container-id` are ordered as the VPC Flow Logs service receives them from the ECS event stream. They are not guaranteed to be in the same order as you see them on ECS console or in the DescribeTask API call. If a container enters a STOPPED status while the task is still running, it may continue to appear in your log.
+ The ECS metadata and IP traffic logs are from two different sources. We start computing your ECS traffic as soon as we obtain all required information from upstream dependencies. After you start a new task, we start computing your ECS fields 1) when we receive IP traffic for the underlying network interface and 2) when we receive the ECS event that contains the metadata for your ECS task to indicate the task is now running. After you stop a task, we stop computing your ECS fields 1) when we no longer receive IP traffic for the underlying network interface or we receive IP traffic that is delayed for more than one day and 2) when we receive the ECS event that contains the metadata for your ECS task to indicate your task is no longer running.
+ Only ECS tasks launched in `awsvpc` [network mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) are supported. 

Limitations specific to `encryption-status` field:
+ The encryption status may be '-' (not available) for some flows, because some network appliances cannot report the encryption status. You can ignore these flows in your analysis.
+ A flow that is shown as encrypted in monitor mode is not necessarily allowed in enforce mode, and vice versa.
  + If a flow is encrypted in monitor mode, it may not be compliant in enforce mode:
    + If the flow involves an ENI created by an AWS service, then the service needs to support Encryption Controls.
    + If the flow goes through VPC peering, the peered VPC may not enforce Encryption Controls.
  + If a flow is not encrypted in monitor mode, it may still be compliant in enforce mode, provided that the service related to the flow is added as an exclusion.

## Pricing
<a name="flow-logs-pricing"></a>

Data ingestion and archival charges for vended logs apply when you publish flow logs. For more information about pricing when publishing vended logs, open [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/), select **Logs** and find **Vended Logs**.

To track charges from publishing flow logs, you can apply cost allocation tags to your destination resource. Thereafter, your AWS cost allocation report includes usage and costs aggregated by these tags. You can apply tags that represent business categories (such as cost centers, application names, or owners) to organize your costs. For more information, see the following:
+ [Using Cost Allocation Tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) in the *AWS Billing User Guide*
+ [Tag log groups in Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#log-group-tagging) in the *Amazon CloudWatch Logs User Guide*
+ [Using cost allocation S3 bucket tags](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CostAllocTagging.html) in the *Amazon Simple Storage Service User Guide*
+ [Tagging Your Delivery Streams](https://docs.aws.amazon.com/firehose/latest/dev/firehose-tagging.html) in the *Amazon Data Firehose Developer Guide*

# Work with flow logs
<a name="working-with-flow-logs"></a>

You can work with flow logs using the Amazon EC2 and Amazon VPC consoles.

**Topics**
+ [1. Control the use of flow logs with IAM](#controlling-use-of-flow-logs)
+ [2. Create a flow log](#create-flow-log)
+ [3. Tag a flow log](#modify-tags-flow-logs)
+ [4. Delete a flow log](#delete-flow-log)
+ [Command line overview](#flow-logs-api-cli)

## 1. Control the use of flow logs with IAM
<a name="controlling-use-of-flow-logs"></a>

By default, users do not have permission to work with flow logs. You can create an IAM policy that grants users the permissions to create, describe, and delete flow logs, and attach it to the users or roles that need them.

The following is an example policy that grants users full permissions to create, describe, and delete flow logs.

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteFlowLogs",
        "ec2:CreateFlowLogs",
        "ec2:DescribeFlowLogs"
      ],
      "Resource": "*"
    }
  ]
}
```


For more information, see [How Amazon VPC works with IAM](security_iam_service-with-iam.md).

## 2. Create a flow log
<a name="create-flow-log"></a>

You can create flow logs for your VPCs, subnets, or network interfaces. When you create a flow log, you must specify a destination for the flow log. For more information, see the following:
+ [Create a flow log that publishes to CloudWatch Logs](flow-logs-cwl-create-flow-log.md)
+ [Create a flow log that publishes to Amazon S3](flow-logs-s3-create-flow-log.md)
+ [Create a flow log that publishes to Amazon Data Firehose](flow-logs-firehose-create-flow-log.md)

## 3. Tag a flow log
<a name="modify-tags-flow-logs"></a>

You can add or remove tags for a flow log at any time.

**To manage tags for a flow log**

1. Do one of the following:
   + Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/). In the navigation pane, choose **Network Interfaces**. Select the checkbox for the network interface.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Your VPCs**. Select the checkbox for the VPC.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Subnets**. Select the checkbox for the subnet.

1. Choose **Flow Logs**.

1. Choose **Actions**, **Manage tags**.

1. To add a new tag, choose **Add new tag** and enter the key and value. To remove a tag, choose **Remove**.

1. When you are finished adding or removing tags, choose **Save**.

## 4. Delete a flow log
<a name="delete-flow-log"></a>

You can delete a flow log at any time. After you delete a flow log, it can take several minutes to stop collecting data.

Deleting a flow log does not delete the log data from the destination or modify the destination resource. You must delete the existing flow log data directly from the destination, and clean up the destination resource, using the console for the destination service.

**To delete a flow log**

1. Do one of the following:
   + Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/). In the navigation pane, choose **Network Interfaces**. Select the checkbox for the network interface.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Your VPCs**. Select the checkbox for the VPC.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Subnets**. Select the checkbox for the subnet.

1. Choose **Flow Logs**.

1. Choose **Actions**, **Delete flow logs**.

1. When prompted for confirmation, type **delete** and then choose **Delete**.

## Command line overview
<a name="flow-logs-api-cli"></a>

You can perform the tasks described on this page using the command line.

**Create a flow log**
+ [create-flow-logs](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-flow-logs.html) (AWS CLI)
+ [New-EC2FlowLog](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2FlowLog.html) (AWS Tools for Windows PowerShell)

**Describe a flow log**
+ [describe-flow-logs](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-flow-logs.html) (AWS CLI)
+ [Get-EC2FlowLog](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2FlowLog.html) (AWS Tools for Windows PowerShell)

**Tag a flow log**
+ [create-tags](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-tags.html) and [delete-tags](https://docs.aws.amazon.com/cli/latest/reference/ec2/delete-tags.html) (AWS CLI)
+ [New-EC2Tag](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2Tag.html) and [Remove-EC2Tag](https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-EC2Tag.html) (AWS Tools for Windows PowerShell)

**Delete a flow log**
+ [delete-flow-logs](https://docs.aws.amazon.com/cli/latest/reference/ec2/delete-flow-logs.html) (AWS CLI)
+ [Remove-EC2FlowLog](https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-EC2FlowLog.html) (AWS Tools for Windows PowerShell)

# Publish flow logs to CloudWatch Logs
<a name="flow-logs-cwl"></a>

Flow logs can publish flow log data directly to Amazon CloudWatch. Amazon CloudWatch is a comprehensive monitoring and observability service. It collects and tracks metrics, logs, and event data from various AWS resources, as well as your own applications and services. CloudWatch provides visibility into resource utilization, application performance, and operational health, enabling you to detect and respond to system-wide performance changes and potential issues. With CloudWatch, you can set alarms, visualize logs and metrics, and automate responses to changes in your cloud resources. It is an essential tool for ensuring the reliability, availability, and performance of your cloud-based infrastructure and applications.

When publishing to CloudWatch Logs, flow log data is published to a log group, and each network interface has a unique log stream in the log group. Log streams contain flow log records. You can create multiple flow logs that publish data to the same log group. If the same network interface is present in one or more flow logs in the same log group, it has one combined log stream. If you've specified that one flow log should capture rejected traffic, and the other flow log should capture accepted traffic, then the combined log stream captures all traffic.

In CloudWatch Logs, the **timestamp** field corresponds to the start time that's captured in the flow log record. The **ingestionTime** field indicates the date and time when the flow log record was received by CloudWatch Logs. This timestamp is later than the end time that's captured in the flow log record.

For more information about CloudWatch Logs, see [Logs sent to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-CWL) in the *Amazon CloudWatch Logs User Guide*.

**Pricing**  
Data ingestion and archival charges for vended logs apply when you publish flow logs to CloudWatch Logs. For more information, open [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/), select **Logs** and find **Vended Logs**.

**Topics**
+ [IAM role for publishing flow logs to CloudWatch Logs](flow-logs-iam-role.md)
+ [Create a flow log that publishes to CloudWatch Logs](flow-logs-cwl-create-flow-log.md)
+ [View flow log records with CloudWatch Logs](view-flow-log-records-cwl.md)
+ [Search flow log records](search-flow-log-records-cwl.md)
+ [Process flow log records in CloudWatch Logs](process-records-cwl.md)

# IAM role for publishing flow logs to CloudWatch Logs
<a name="flow-logs-iam-role"></a>

The IAM role that's associated with your flow log must have sufficient permissions to publish flow logs to the specified log group in CloudWatch Logs. The IAM role must belong to your AWS account.

The IAM policy that's attached to your IAM role must include at least the following permissions.

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}
```


Ensure that your role has the following trust policy, which allows the flow logs service to assume the role.

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "vpc-flow-logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```


We recommend that you use the `aws:SourceAccount` and `aws:SourceArn` condition keys to protect yourself against [the confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html). For example, you could add the following condition block to the previous trust policy. The source account is the owner of the flow log and the source ARN is the flow log ARN. If you don't know the flow log ID, you can replace that portion of the ARN with a wildcard (`*`) and then update the policy after you create the flow log.

```
"Condition": {
    "StringEquals": {
        "aws:SourceAccount": "account_id"
    },
    "ArnLike": {
        "aws:SourceArn": "arn:aws:ec2:region:account_id:vpc-flow-log/flow-log-id"
    }
}
```
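As an illustrative sketch, the condition block merges into the `sts:AssumeRole` statement of the trust policy. The account ID, Region, and flow log ID below are placeholders:

```python
import json

# Trust policy for the flow logs role, with the confused-deputy condition
# merged into the AssumeRole statement. All identifiers are placeholders.
account_id = "123456789012"
flow_log_arn = f"arn:aws:ec2:us-east-1:{account_id}:vpc-flow-log/fl-0123abcd"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "vpc-flow-logs.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:SourceAccount": account_id},
                "ArnLike": {"aws:SourceArn": flow_log_arn},
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```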

## Create an IAM role for flow logs
<a name="create-flow-logs-role"></a>

You can update an existing role as described above. Alternatively, you can use the following procedure to create a new role for use with flow logs. You'll specify this role when you create the flow log.

**To create an IAM role for flow logs**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Create policy** page, do the following:

   1. Choose **JSON**.

   1. Replace the contents of this window with the permissions policy at the start of this section.

   1. Choose **Next**.

   1. Enter a name for your policy and an optional description and tags, and then choose **Create policy**.

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. For **Trusted entity type**, choose **Custom trust policy**. For **Custom trust policy**, replace `"Principal": {},` with the following, and then choose **Next**.

   ```
   "Principal": {
      "Service": "vpc-flow-logs.amazonaws.com"
   },
   ```

1. On the **Add permissions** page, select the checkbox for the policy that you created earlier in this procedure, and then choose **Next**.

1. Enter a name for your role and optionally provide a description.

1. Choose **Create role**.

# Create a flow log that publishes to CloudWatch Logs
<a name="flow-logs-cwl-create-flow-log"></a>

You can create flow logs for your VPCs, subnets, or network interfaces.

**Prerequisite**  
Verify that the IAM principal that you are using to make the request has permissions to call the `iam:PassRole` action.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::111122223333:role/flow-log-role-name"
        }
    ]
}
```

------

**To create a flow log using the console**

1. Do one of the following:
   + Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/). In the navigation pane, choose **Network Interfaces**. Select the checkbox for the network interface.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Your VPCs**. Select the checkbox for the VPC.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Subnets**. Select the checkbox for the subnet.

1. Choose **Actions**, **Create flow log**.

1. For **Filter**, specify the type of traffic to log. Choose **All** to log accepted and rejected traffic, **Reject** to log only rejected traffic, or **Accept** to log only accepted traffic.

1. For **Maximum aggregation interval**, choose the maximum period of time during which a flow is captured and aggregated into one flow log record.

1. For **Destination**, choose **Send to CloudWatch Logs**.

1. For **Destination log group**, choose the name of an existing log group or enter the name of a new log group. If you enter a name, we create the log group when there is traffic to log.

1. For **Service access**, choose an existing [IAM service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html) that has permissions to publish logs to CloudWatch Logs or choose to create a new service role.

1. For **Log record format**, select the format for the flow log record.
   + To use the default format, choose **AWS default format**.
   + To use a custom format, choose **Custom format** and then select fields from **Log format**.

1. For **Additional metadata**, choose whether to include metadata from Amazon ECS in the log format.

1. (Optional) Choose **Add new tag** to apply tags to the flow log.

1. Choose **Create flow log**.

**To create a flow log using the command line**

Use one of the following commands.
+ [create-flow-logs](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-flow-logs.html) (AWS CLI)
+ [New-EC2FlowLog](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2FlowLog.html) (AWS Tools for Windows PowerShell)

The following AWS CLI example creates a flow log that captures all accepted traffic for the specified subnet. The flow logs are delivered to the specified log group. The `--deliver-logs-permission-arn` parameter specifies the IAM role required to publish to CloudWatch Logs.

```
aws ec2 create-flow-logs --resource-type Subnet --resource-ids subnet-1a2b3c4d --traffic-type ACCEPT --log-group-name my-flow-logs --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs
```
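This CLI call maps to the EC2 `CreateFlowLogs` API. If you script flow log creation, the same request can be sketched as boto3 parameters; the values are the placeholders from the CLI example, and the commented-out call requires AWS credentials:

```python
# Request parameters equivalent to the create-flow-logs CLI example above.
# The names follow the boto3 EC2 client's create_flow_logs signature.
params = {
    "ResourceType": "Subnet",
    "ResourceIds": ["subnet-1a2b3c4d"],
    "TrafficType": "ACCEPT",
    "LogGroupName": "my-flow-logs",
    "DeliverLogsPermissionArn": "arn:aws:iam::123456789101:role/publishFlowLogs",
}

# Untested sketch of the call itself (requires credentials and boto3):
# import boto3
# boto3.client("ec2").create_flow_logs(**params)
print(params["ResourceIds"])
```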

# View flow log records with CloudWatch Logs
<a name="view-flow-log-records-cwl"></a>

You can view your flow log records using the CloudWatch Logs console. After you create your flow log, it might take a few minutes for it to be visible in the console.

**To view flow log records published to CloudWatch Logs using the console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Logs**, **Log groups**.

1. Select the name of the log group that contains your flow logs to open its details page.

1. Select the name of the log stream that contains the flow log records. For more information, see [Flow log records](flow-log-records.md).

**To view flow log records published to CloudWatch Logs using the command line**
+ [get-log-events](https://docs.aws.amazon.com/cli/latest/reference/logs/get-log-events.html) (AWS CLI)
+ [Get-CWLLogEvent](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-CWLLogEvent.html) (AWS Tools for Windows PowerShell)

# Search flow log records
<a name="search-flow-log-records-cwl"></a>

You can search your flow log records that are published to CloudWatch Logs using the CloudWatch Logs console. You can use [metric filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) to filter flow log records. Flow log records are space delimited.

**To search flow log records using the CloudWatch Logs console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Logs**, **Log groups**.

1. Select the log group that contains your flow log, and then select the log stream, if you know the network interface that you are searching for. Alternatively, choose **Search log group**. This might take some time if there are many network interfaces in your log group, or depending on the time range that you select.

1. Under **Filter events**, enter the string below. This assumes that the flow log record uses the [default format](flow-log-records.md#flow-logs-default).

   ```
   [version, accountid, interfaceid, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, logstatus]
   ```

1. Modify the filter as needed by specifying values for the fields. The following examples filter by specific source IP addresses.

   ```
   [version, accountid, interfaceid, srcaddr = 10.0.0.1, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, logstatus]
   [version, accountid, interfaceid, srcaddr = 10.0.2.*, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, logstatus]
   ```

   The following examples filter by destination port, the number of bytes, and whether the traffic was rejected.

   ```
   [version, accountid, interfaceid, srcaddr, dstaddr, srcport, dstport = 80 || dstport = 8080, protocol, packets, bytes, start, end, action, logstatus]
   [version, accountid, interfaceid, srcaddr, dstaddr, srcport, dstport = 80 || dstport = 8080, protocol, packets, bytes >= 400, start, end, action = REJECT, logstatus]
   ```
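For ad-hoc analysis of exported records, the positional matching that these filter patterns perform can be sketched in Python. The sample record below uses illustrative values; the `matches` function mirrors the last filter above (destination port 80 or 8080, at least 400 bytes, rejected):

```python
# A default-format flow log record has 14 space-delimited fields, in the
# same order as the filter patterns above.
FIELDS = ("version accountid interfaceid srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action logstatus").split()

def parse(line):
    """Map a space-delimited record line to a dict of field name -> value."""
    return dict(zip(FIELDS, line.split()))

record = parse("2 123456789010 eni-1235b8ca123456789 172.31.16.139 "
               "172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")

def matches(r):
    """Equivalent of: dstport = 80 || dstport = 8080, bytes >= 400, action = REJECT."""
    return (r["dstport"] in ("80", "8080")
            and int(r["bytes"]) >= 400
            and r["action"] == "REJECT")

print(record["srcaddr"], matches(record))  # the sample record does not match
```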

# Process flow log records in CloudWatch Logs
<a name="process-records-cwl"></a>

You can process flow log records as you would with any other log events collected by CloudWatch Logs. For more information about monitoring log data and metric filters, see [Creating metrics from log events using filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) in the *Amazon CloudWatch Logs User Guide*.

## Example: Create a CloudWatch metric filter and alarm for a flow log
<a name="flow-logs-cw-alarm-example"></a>

In this example, you have a flow log for `eni-1a2b3c4d`. You want to create an alarm that alerts you if there have been 10 or more rejected attempts to connect to your instance over TCP port 22 (SSH) within a 1-hour time period. First, you must create a metric filter that matches the pattern of the traffic for which to create the alarm. Then, you can create an alarm for the metric filter.

**To create a metric filter for rejected SSH traffic and create an alarm for the filter**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Logs**, **Log groups**.

1. Select the check box for the log group, and then choose **Actions**, **Create metric filter**.

1. For **Filter pattern**, enter the following string.

   ```
   [version, account, eni, source, destination, srcport, destport="22", protocol="6", packets, bytes, windowstart, windowend, action="REJECT", flowlogstatus]
   ```

1. For **Select log data to test**, select the log stream for your network interface. (Optional) To view the lines of log data that match the filter pattern, choose **Test pattern**.

1. When you're ready, choose **Next**.

1. Enter a filter name, metric namespace, and metric name. Set the metric value to 1. When you're done, choose **Next** and then choose **Create metric filter**.

1. In the navigation pane, choose **Alarms**, **All alarms**.

1. Choose **Create alarm**.

1. Select the metric name that you created and then choose **Select metric**.

1. Configure the alarm as follows, and then choose **Next**:
   + For **Statistic**, choose **Sum**. This ensures that you capture the total number of data points for the specified time period.
   + For **Period**, choose **1 hour**.
   + For **Whenever *metric-name* is...**, choose **Greater/Equal** and enter 10 for the threshold.
   + For **Additional configuration**, **Datapoints to alarm**, leave the default of 1.

1. Choose **Next**.

1. For **Notification**, select an existing SNS topic or choose **Create new topic** to create a new one. Choose **Next**.

1. Enter a name and description for the alarm and choose **Next**.

1. When you are done previewing the alarm, choose **Create alarm**.
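The filter-and-threshold logic that this procedure configures can be sketched offline. The records below are illustrative; field positions follow the default flow log record format:

```python
# Count records where dstport is 22, protocol is 6 (TCP), and the action is
# REJECT, then compare the 1-hour sum against the alarm threshold of 10.
def is_rejected_ssh(line):
    f = line.split()
    # Default-format positions: dstport is field 6, protocol 7, action 12.
    return f[6] == "22" and f[7] == "6" and f[12] == "REJECT"

# Illustrative sample: 12 rejected SSH attempts plus one accepted HTTP flow.
records = (
    ["2 111122223333 eni-1a2b3c4d 203.0.113.5 10.0.0.7 55000 22 6 3 180 0 60 REJECT OK"] * 12
    + ["2 111122223333 eni-1a2b3c4d 10.0.0.7 10.0.0.8 443 80 6 8 4000 0 60 ACCEPT OK"]
)

reject_count = sum(is_rejected_ssh(r) for r in records)
alarm = reject_count >= 10  # Statistic: Sum, Period: 1 hour, Threshold: >= 10
print(reject_count, alarm)
```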

# Publish flow logs to Amazon S3
<a name="flow-logs-s3"></a>

Flow logs can publish flow log data to Amazon S3. Amazon S3 (Simple Storage Service) is a highly scalable and durable object storage service. It is designed to store and retrieve any amount of data, from anywhere on the web. S3 offers industry-leading durability and availability, with built-in features for data versioning, encryption, and access control.

When publishing to Amazon S3, flow log data is published to an existing Amazon S3 bucket that you specify. Flow log records for all of the monitored network interfaces are published to a series of log file objects that are stored in the bucket. If the flow log captures data for a VPC, the flow log publishes flow log records for all of the network interfaces in the selected VPC.

To create an Amazon S3 bucket for use with flow logs, see [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.

For more information about how to streamline VPC flow log ingestion, flow log processing, and flow log visualization, see [Centralized Logging with OpenSearch](https://aws.amazon.com/solutions/implementations/centralized-logging-with-opensearch/) in the AWS Solutions Library.

For more information about CloudWatch Logs, see [Logs sent to Amazon S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3) in the *Amazon CloudWatch Logs User Guide*.

**Pricing**  
Data ingestion and archival charges for vended logs apply when you publish flow logs to Amazon S3. For more information, open [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/), select **Logs** and find **Vended Logs**.

**Topics**
+ [Flow log files](flow-logs-s3-path.md)
+ [Amazon S3 bucket permissions for flow logs](flow-logs-s3-permissions.md)
+ [Required key policy for use with SSE-KMS](flow-logs-s3-cmk-policy.md)
+ [Amazon S3 log file permissions](flow-logs-file-permissions.md)
+ [Create a flow log that publishes to Amazon S3](flow-logs-s3-create-flow-log.md)
+ [View flow log records with Amazon S3](view-flow-log-records-s3.md)

# Flow log files
<a name="flow-logs-s3-path"></a>

VPC Flow Logs collects data about the IP traffic going to and from your VPC into log records, aggregates those records into log files, and then publishes the log files to the Amazon S3 bucket at 5-minute intervals. Multiple files may be published, and each log file may contain some or all of the flow log records for the IP traffic recorded in the previous 5 minutes.

In Amazon S3, the **Last modified** field for the flow log file indicates the date and time at which the file was uploaded to the Amazon S3 bucket. This is later than the timestamp in the file name, and differs by the amount of time taken to upload the file to the Amazon S3 bucket.

**Log file format**

You can specify one of the following formats for the log files. Each file is compressed into a single Gzip file.
+ **Text** – Plain text. This is the default format.
+ **Parquet** – Apache Parquet is a columnar data format. Queries on data in Parquet format are 10 to 100 times faster compared to queries on data in plain text. Data in Parquet format with Gzip compression takes 20 percent less storage space than plain text with Gzip compression.

**Note**  
If data in Parquet format with Gzip compression is less than 100 KB per aggregation period, storing data in Parquet format may take up more space than plain text with Gzip compression due to Parquet file memory requirements.

**Log file options**

You can optionally specify the following options.
+ **Hive-compatible S3 prefixes** – Enable Hive-compatible prefixes instead of importing partitions into your Hive-compatible tools. Before you run queries, use the **MSCK REPAIR TABLE** command.
+ **Hourly partitions** – If you have a large volume of logs and typically target queries to a specific hour, you can get faster results and save on query costs by partitioning logs on an hourly basis.

**Log file S3 bucket structure**  
Log files are saved to the specified Amazon S3 bucket using a folder structure that is based on the flow log's ID, Region, creation date, and destination options.

By default, the files are delivered to the following location.

```
bucket-and-optional-prefix/AWSLogs/account_id/vpcflowlogs/region/year/month/day/
```

If you enable Hive-compatible S3 prefixes, the files are delivered to the following location.

```
bucket-and-optional-prefix/AWSLogs/aws-account-id=account_id/aws-service=vpcflowlogs/aws-region=region/year=year/month=month/day=day/
```

If you enable hourly partitions, the files are delivered to the following location.

```
bucket-and-optional-prefix/AWSLogs/account_id/vpcflowlogs/region/year/month/day/hour/
```

If you enable Hive-compatible partitions and partition the flow log per hour, the files are delivered to the following location.

```
bucket-and-optional-prefix/AWSLogs/aws-account-id=account_id/aws-service=vpcflowlogs/aws-region=region/year=year/month=month/day=day/hour=hour/
```

**Log file names**  
The file name of a log file is based on the flow log ID, Region, and creation date and time. File names use the following format.

```
aws_account_id_vpcflowlogs_region_flow_log_id_YYYYMMDDTHHmmZ_hash.log.gz
```

The following is an example of a log file for a flow log created by AWS account 123456789012, for a resource in the us-east-1 Region, on June 20, 2018 at 16:20 UTC. The file contains the flow log records with an end time between 16:20:00 and 16:24:59.

```
123456789012_vpcflowlogs_us-east-1_fl-1234abcd_20180620T1620Z_fe123456.log.gz
```
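Because the delivery path and file name are deterministic apart from the hash suffix, you can predict where a given interval's file will land. The following sketch rebuilds the example key above, using default prefixes without hourly partitions (the hash value is just the placeholder from the example):

```python
from datetime import datetime

def flow_log_key(prefix, account, region, flow_log_id, ts, hash_):
    """Build the S3 object key for a flow log file (default prefixes)."""
    folder = f"{prefix}/AWSLogs/{account}/vpcflowlogs/{region}/{ts:%Y/%m/%d}"
    name = f"{account}_vpcflowlogs_{region}_{flow_log_id}_{ts:%Y%m%dT%H%M}Z_{hash_}.log.gz"
    return f"{folder}/{name}"

key = flow_log_key("my-bucket", "123456789012", "us-east-1", "fl-1234abcd",
                   datetime(2018, 6, 20, 16, 20), "fe123456")
print(key)
```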

# Amazon S3 bucket permissions for flow logs
<a name="flow-logs-s3-permissions"></a>

By default, Amazon S3 buckets and the objects they contain are private. Only the bucket owner can access the bucket and the objects stored in it. However, the bucket owner can grant access to other resources and users by writing an access policy.

If the user creating the flow log owns the bucket and has `PutBucketPolicy` and `GetBucketPolicy` permissions for the bucket, we automatically attach the following policy to the bucket. This policy overwrites any existing policy attached to the bucket.

Otherwise, the bucket owner must add this policy to the bucket, specifying the AWS account ID of the flow log creator, or flow log creation fails. For more information, see [Using bucket policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html) in the *Amazon Simple Storage Service User Guide*.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "123456789012",
                    "s3:x-amz-acl": "bucket-owner-full-control"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:logs:us-east-1:123456789012:*"
                }
            }
        },
        {
            "Sid": "AWSLogDeliveryAclCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "123456789012"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:logs:us-east-1:123456789012:*"
                }
            }
        }
    ]
}
```

------

The `Resource` ARN that you specify in the bucket policy depends on whether you use Hive-compatible S3 prefixes.
+ Default prefixes

  ```
  arn:aws:s3:::bucket_name/optional_folder/AWSLogs/account_id/*
  ```
+ Hive-compatible S3 prefixes

  ```
  arn:aws:s3:::bucket_name/optional_folder/AWSLogs/aws-account-id=account_id/*
  ```

It is a best practice to grant these permissions to the log delivery service principal instead of individual AWS account ARNs. It is also a best practice to use the `aws:SourceAccount` and `aws:SourceArn` condition keys to protect against [the confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html). The source account is the owner of the flow log and the source ARN is the wildcard (`*`) ARN of the logs service.

Note that the log delivery service calls the `HeadBucket` Amazon S3 API action to verify the existence and location of the S3 bucket. You are not required to grant the log delivery service permission to call this action; it still delivers VPC flow logs even if it can't confirm the bucket's existence and location. However, the call to `HeadBucket` appears as an `AccessDenied` error in your CloudTrail logs.

# Required key policy for use with SSE-KMS
<a name="flow-logs-s3-cmk-policy"></a>

You can protect the data in your Amazon S3 bucket by enabling either Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) or Server-Side Encryption with KMS Keys (SSE-KMS) on your S3 bucket. For more information, see [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) in the *Amazon S3 User Guide*.

If you choose SSE-S3, no additional configuration is required. Amazon S3 handles the encryption key.

If you choose SSE-KMS, you must use a customer managed key ARN. If you use a key ID, you can run into a [LogDestination undeliverable](flow-logs-troubleshooting.md#flow-logs-troubleshooting-kms-id) error when creating a flow log. Also, you must update the key policy for your customer managed key so that the log delivery account can write to your S3 bucket. For more information about the required key policy for use with SSE-KMS, see [Amazon S3 bucket server-side encryption](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-SSE-KMS-S3) in the *Amazon CloudWatch Logs User Guide*.

# Amazon S3 log file permissions
<a name="flow-logs-file-permissions"></a>

In addition to the required bucket policies, Amazon S3 uses access control lists (ACLs) to manage access to the log files created by a flow log. By default, the bucket owner has `FULL_CONTROL` permissions on each log file. The log delivery owner, if different from the bucket owner, has no permissions. The log delivery account has `READ` and `WRITE` permissions. For more information, see [Access control list (ACL) overview](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html) in the *Amazon S3 User Guide*.

# Create a flow log that publishes to Amazon S3
<a name="flow-logs-s3-create-flow-log"></a>

After you have created and configured your Amazon S3 bucket, you can create flow logs for your network interfaces, subnets, and VPCs.

**Prerequisite**

The IAM principal that creates the flow log must be using an IAM role that has the following permissions, which are required to publish flow logs to the destination Amazon S3 bucket.

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogDelivery",
        "logs:DeleteLogDelivery"
      ],
      "Resource": "*"
    }
  ]
}
```

------

**To create a flow log using the console**

1. Do one of the following:
   + Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/). In the navigation pane, choose **Network Interfaces**. Select the checkbox for the network interface.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Your VPCs**. Select the checkbox for the VPC.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Subnets**. Select the checkbox for the subnet.

1. Choose **Actions**, **Create flow log**.

1. For **Filter**, specify the type of IP traffic data to log.
   + **Accept** – Log only accepted traffic.
   + **Reject** – Log only rejected traffic.
   + **All** – Log accepted and rejected traffic.

1. For **Maximum aggregation interval**, choose the maximum period of time during which a flow is captured and aggregated into one flow log record.

1. For **Destination**, choose **Send to an Amazon S3 bucket**.

1. For **S3 bucket ARN**, specify the Amazon Resource Name (ARN) of an existing Amazon S3 bucket. You can optionally include a subfolder. For example, to specify a subfolder named `my-logs` in a bucket named `my-bucket`, use the following ARN:

   `arn:aws:s3:::my-bucket/my-logs/`

   You cannot use `AWSLogs` as a subfolder name; it is a reserved term.

   If you own the bucket, we automatically create a resource policy and attach it to the bucket. For more information, see [Amazon S3 bucket permissions for flow logs](flow-logs-s3-permissions.md).

1. For **Log record format**, specify the format for the flow log record.
   + To use the default flow log record format, choose **AWS default format**.
   + To create a custom format, choose **Custom format**. For **Log format**, choose the fields to include in the flow log record.

1. For **Additional metadata**, choose whether to include metadata from Amazon ECS in the log format.

1. For **Log file format**, specify the format for the log file.
   + **Text** – Plain text. This is the default format.
   + **Parquet** – Apache Parquet is a columnar data format. Queries on data in Parquet format are 10 to 100 times faster compared to queries on data in plain text. Data in Parquet format with Gzip compression takes 20 percent less storage space than plain text with Gzip compression.

1. (Optional) To use Hive-compatible S3 prefixes, choose **Hive-compatible S3 prefix**, **Enable**.

1. (Optional) To partition your flow logs per hour, choose **Every 1 hour (60 mins)**.

1. (Optional) To add a tag to the flow log, choose **Add new tag** and specify the tag key and value.

1. Choose **Create flow log**.

**To create a flow log that publishes to Amazon S3 using the command line**

Use one of the following commands:
+ [create-flow-logs](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-flow-logs.html) (AWS CLI)
+ [New-EC2FlowLog](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2FlowLog.html) (AWS Tools for Windows PowerShell)

The following AWS CLI example creates a flow log that captures all traffic for the specified VPC and delivers the flow logs to the specified Amazon S3 bucket. The `--log-format` parameter specifies a custom format for the flow log records.

```
aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-00112233344556677 --traffic-type ALL --log-destination-type s3 --log-destination arn:aws:s3:::flow-log-bucket/custom-flow-logs/ --log-format '${version} ${vpc-id} ${subnet-id} ${instance-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${tcp-flags} ${type} ${pkt-srcaddr} ${pkt-dstaddr}'
```
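Records produced with this custom `--log-format` contain the 13 requested fields, space delimited and in the order specified. A sketch that maps a record line back to field names (the sample values are invented):

```python
# Field names from the custom --log-format in the CLI example above,
# in the order they were specified.
FIELDS = ["version", "vpc-id", "subnet-id", "instance-id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "tcp-flags", "type", "pkt-srcaddr",
          "pkt-dstaddr"]

# Illustrative record line; real values come from your flow log.
line = ("3 vpc-00112233344556677 subnet-1a2b3c4d i-0123456789abcdef0 "
        "10.0.1.5 10.0.2.8 49152 443 6 2 IPv4 10.0.1.5 10.0.2.8")

record = dict(zip(FIELDS, line.split()))
print(record["dstport"], record["tcp-flags"])
```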

# View flow log records with Amazon S3
<a name="view-flow-log-records-s3"></a>

You can view your flow log records using the Amazon S3 console. After you create your flow log, it might take a few minutes for it to be visible in the console.

The log files are compressed. If you open the log files using the Amazon S3 console, they are decompressed and the flow log records are displayed. If you download the files, you must decompress them to view the flow log records.
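The decompress-and-parse step for a downloaded file can be sketched with Python's standard library, assuming a text-format file whose first line is the field header. The gzipped bytes below stand in for a downloaded file, with illustrative values:

```python
import gzip
import io

# Stand-in for a downloaded flow log file: gzipped text with a header line
# listing the field names, followed by one default-format record.
sample = gzip.compress(
    b"version account-id interface-id srcaddr dstaddr srcport dstport "
    b"protocol packets bytes start end action log-status\n"
    b"2 123456789012 eni-1a2b3c4d 10.0.0.5 10.0.0.6 443 49152 6 10 840 "
    b"1620000000 1620000060 ACCEPT OK\n")

with gzip.open(io.BytesIO(sample), "rt") as f:
    header = f.readline().split()
    records = [dict(zip(header, line.split())) for line in f]

print(len(records), records[0]["action"])
```

To process a real file, replace `io.BytesIO(sample)` with the path of the downloaded `.log.gz` file.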

**To view flow log records published to Amazon S3**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Select the name of the bucket to open its details page.

1. Navigate to the folder with the log files. For example, *prefix*/AWSLogs/*account_id*/vpcflowlogs/*region*/*year*/*month*/*day*/.

1. Select the checkbox next to the file name, and then choose **Download**.

You can also query the flow log records in the log files using Amazon Athena. Amazon Athena is an interactive query service that makes it easier to analyze data in Amazon S3 using standard SQL. For more information, see [Querying Amazon VPC Flow Logs](https://docs.aws.amazon.com/athena/latest/ug/vpc-flow-logs.html) in the *Amazon Athena User Guide*.

# Publish flow logs to Amazon Data Firehose
<a name="flow-logs-firehose"></a>

Flow logs can publish flow log data directly to Amazon Data Firehose. Amazon Data Firehose is a fully managed service that collects, transforms, and delivers real-time data streams into various AWS data stores and analytics services. It handles the data ingestion on your behalf.

Firehose is particularly useful for VPC flow logs. Flow logs capture information about the IP traffic going to and from network interfaces in your VPC. This data can be crucial for security monitoring, performance analysis, and regulatory compliance. However, managing the storage and processing of this continuous stream of log data can be a complex and resource-intensive task.

By integrating Firehose with your VPC flow logs, you can deliver this data to your preferred destination, such as Amazon S3 or Amazon Redshift. Firehose will scale to handle the ingestion, transformation, and delivery of your VPC flow logs, relieving you of the operational burden. This allows you to focus on analyzing the logs and deriving insights, rather than worrying about the underlying infrastructure.

Additionally, Firehose offers features like data transformation, compression, and encryption, which can enhance the efficiency and security of your VPC flow log processing pipeline. Using Firehose for VPC flow logs can simplify your data management and enable you to gain insights from your network traffic data. 

When publishing to Amazon Data Firehose, flow log data is published to an Amazon Data Firehose delivery stream in plain text format.

**Pricing**  
Standard ingestion and delivery charges apply. For more information, open [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/), select **Logs** and find **Vended Logs**.

**Topics**
+ [IAM roles for cross account delivery](firehose-cross-account-delivery.md)
+ [Create a flow log that publishes to Amazon Data Firehose](flow-logs-firehose-create-flow-log.md)

# IAM roles for cross account delivery
<a name="firehose-cross-account-delivery"></a>

When you publish to Amazon Data Firehose, you can choose a delivery stream that's in the same account as the resource to monitor (the source account), or in a different account (the destination account). To enable cross account delivery of flow logs to Amazon Data Firehose, you must create an IAM role in the source account and an IAM role in the destination account.

**Topics**
+ [Source account role](#firehose-source-account-role)
+ [Destination account role](#firehose-destination-account-role)

## Source account role
<a name="firehose-source-account-role"></a>

In the source account, create a role that grants the following permissions. In this example, the name of the role is `mySourceRole`, but you can choose a different name for this role. The last statement allows the role in the destination account to assume this role. The condition statements ensure that this role is passed only to the log delivery service, and only when monitoring the specified resource. When you create your policy, specify the VPCs, network interfaces, or subnets that you're monitoring with the condition key `iam:AssociatedResourceARN`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/mySourceRole",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "delivery.logs.amazonaws.com"
                },
                "StringLike": {
                    "iam:AssociatedResourceARN": [
                        "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-00112233344556677"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogDelivery",
                "logs:DeleteLogDelivery",
                "logs:ListLogDeliveries",
                "logs:GetLogDelivery"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111122223333:role/AWSLogDeliveryFirehoseCrossAccountRole"
        }
    ]
}
```

------

Ensure that this role has the following trust policy, which allows the log delivery service to assume the role.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

------

From the source account, use the following procedure to create the role.

**To create the source account role**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Create policy** page, do the following:

   1. Choose **JSON**.

   1. Replace the contents of this window with the permissions policy at the start of this section.

   1. Choose **Next**.

   1. Enter a name for your policy and an optional description and tags, and then choose **Create policy**.

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. For **Trusted entity type**, choose **Custom trust policy**. For **Custom trust policy**, replace `"Principal": {},` with the following, which specifies the log delivery service. Choose **Next**.

   ```
   "Principal": {
      "Service": "delivery.logs.amazonaws.com"
   },
   ```

1. On the **Add permissions** page, select the checkbox for the policy that you created earlier in this procedure, and then choose **Next**.

1. Enter a name for your role and optionally provide a description.

1. Choose **Create role**.
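
If you prefer to script these steps, the console procedure maps to two IAM calls. The following is a sketch that assumes you saved the permissions policy and the trust policy shown earlier in this section as `permissions.json` and `trust-policy.json`; the policy name `mySourceRolePermissions` is an example.

```
aws iam create-role --role-name mySourceRole \
    --assume-role-policy-document file://trust-policy.json

aws iam put-role-policy --role-name mySourceRole \
    --policy-name mySourceRolePermissions \
    --policy-document file://permissions.json
```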

## Destination account role
<a name="firehose-destination-account-role"></a>

In the destination account, create a role with a name that starts with **AWSLogDeliveryFirehoseCrossAccountRole**. This role must grant the following permissions.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
          "iam:CreateServiceLinkedRole",
          "firehose:TagDeliveryStream"
      ],
      "Resource": "*"
    }
  ]
}
```

------

Ensure that this role has the following trust policy, which allows the role that you created in the source account to assume this role.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/mySourceRole"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

------

From the destination account, use the following procedure to create the role.

**To create the destination account role**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Create policy** page, do the following:

   1. Choose **JSON**.

   1. Replace the contents of this window with the permissions policy at the start of this section.

   1. Choose **Next**.

   1. Enter a name for your policy and an optional description, and then choose **Create policy**.

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. For **Trusted entity type**, choose **Custom trust policy**. For **Custom trust policy**, replace `"Principal": {},` with the following, which specifies the source account role. Choose **Next**.

   ```
   "Principal": {
      "AWS": "arn:aws:iam::source-account:role/mySourceRole"
   },
   ```

1. On the **Add permissions** page, select the checkbox for the policy that you created earlier in this procedure, and then choose **Next**.

1. Enter a name for your role that starts with **AWSLogDeliveryFirehoseCrossAccountRole**, and optionally provide a description.

1. Choose **Create role**.

# Create a flow log that publishes to Amazon Data Firehose
<a name="flow-logs-firehose-create-flow-log"></a>

You can create flow logs for your VPCs, subnets, or network interfaces.

**Prerequisites**
+ Create the destination Amazon Data Firehose delivery stream. Use **Direct Put** as the source. For more information, see [Creating an Amazon Data Firehose delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html).
+ The IAM principal that creates the flow log must have the following permissions, which are required to publish flow logs to Amazon Data Firehose.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "logs:CreateLogDelivery",
                  "logs:DeleteLogDelivery",
                  "iam:CreateServiceLinkedRole",
                  "firehose:TagDeliveryStream"
              ],
              "Resource": "*"
          }
      ]
  }
  ```

------
+ If you're publishing flow logs to a different account, create the required IAM roles, as described in [IAM roles for cross account delivery](firehose-cross-account-delivery.md).
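
If you haven't created the delivery stream yet, the following AWS CLI sketch creates a Direct PUT delivery stream that delivers to Amazon S3. The role and bucket ARNs are placeholders; the role must grant Firehose permission to write to the bucket.

```
aws firehose create-delivery-stream \
    --delivery-stream-name flowlogs_stream \
    --delivery-stream-type DirectPut \
    --extended-s3-destination-configuration \
        RoleARN=arn:aws:iam::123456789012:role/firehose-delivery-role,BucketARN=arn:aws:s3:::amzn-s3-demo-bucket
```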

**To create a flow log that publishes to Amazon Data Firehose**

1. Do one of the following:
   + Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/). In the navigation pane, choose **Network Interfaces**. Select the checkbox for the network interface.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Your VPCs**. Select the checkbox for the VPC.
   + Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/). In the navigation pane, choose **Subnets**. Select the checkbox for the subnet.

1. Choose **Actions**, **Create flow log**.

1. For **Filter**, specify the type of traffic to log.
   + **Accept** – Log only accepted traffic
   + **Reject** – Log only rejected traffic
   + **All** – Log accepted and rejected traffic

1. For **Maximum aggregation interval**, choose the maximum period of time during which a flow is captured and aggregated into one flow log record.

1. For **Destination**, choose either of the following options:
   + **Send to Amazon Data Firehose in the same account** – The delivery stream and the resource to monitor are in the same account.
   + **Send to Amazon Data Firehose in a different account** – The delivery stream and the resource to monitor are in different accounts.

1. For **Amazon Data Firehose** stream name, choose the delivery stream that you created.

1. [Cross account delivery only] For **Service access**, choose an existing [IAM service role for cross account delivery](firehose-cross-account-delivery.md) that has permissions to publish logs or choose **Set up permissions** to open the IAM console and create a service role.

1. For **Log record format**, specify the format for the flow log record.
   + To use the default flow log record format, choose **AWS default format**.
   + To create a custom format, choose **Custom format**. For **Log format**, choose the fields to include in the flow log record.

1. For **Additional metadata**, choose whether to include metadata from Amazon ECS in the log format.

1. (Optional) Choose **Add tag** to apply tags to the flow log.

1. Choose **Create flow log**.

**To create a flow log that publishes to Amazon Data Firehose using the command line**

Use one of the following commands:
+ [create-flow-logs](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-flow-logs.html) (AWS CLI)
+ [New-EC2FlowLog](https://docs.aws.amazon.com/powershell/latest/reference/items/New-EC2FlowLog.html) (AWS Tools for Windows PowerShell)

The following AWS CLI example creates a flow log that captures all traffic for the specified VPC and delivers the flow logs to the specified Amazon Data Firehose delivery stream in the same account.

```
aws ec2 create-flow-logs --traffic-type ALL \
  --resource-type VPC \
  --resource-ids vpc-00112233344556677 \
  --log-destination-type kinesis-data-firehose \
  --log-destination arn:aws:firehose:us-east-1:123456789012:deliverystream/flowlogs_stream
```

The following AWS CLI example creates a flow log that captures all traffic for the specified VPC and delivers the flow logs to the specified Amazon Data Firehose delivery stream in a different account.

```
aws ec2 create-flow-logs --traffic-type ALL \
  --resource-type VPC \
  --resource-ids vpc-00112233344556677 \
  --log-destination-type kinesis-data-firehose \
  --log-destination arn:aws:firehose:us-east-1:123456789012:deliverystream/flowlogs_stream \
  --deliver-logs-permission-arn arn:aws:iam::source-account:role/mySourceRole \
  --deliver-cross-account-role arn:aws:iam::destination-account:role/AWSLogDeliveryFirehoseCrossAccountRole
```

After you create the flow log, you can retrieve the flow log data from the destination that you configured for the delivery stream.
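
To verify that the flow log was created and check its delivery status, you can describe it. The following sketch filters by the VPC ID used in the earlier examples.

```
aws ec2 describe-flow-logs \
    --filter "Name=resource-id,Values=vpc-00112233344556677"
```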

# Query flow logs using Amazon Athena
<a name="flow-logs-athena"></a>

Amazon Athena is an interactive query service that enables you to analyze data in Amazon S3, such as your flow logs, using standard SQL. You can use Athena with VPC Flow Logs to quickly get actionable insights about the traffic flowing through your VPC. For example, you can identify which resources in your virtual private clouds (VPCs) are the top talkers or identify the IP addresses with the most rejected TCP connections.

**Options**
+ You can streamline and automate the integration of your VPC flow logs with Athena by generating a CloudFormation template that creates the required AWS resources and predefined queries that you can run to obtain insights about the traffic flowing through your VPC.
+ You can create your own queries using Athena. For more information, see [Query flow logs using Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/vpc-flow-logs.html) in the *Amazon Athena User Guide*.

**Pricing**  
You incur standard [Amazon Athena charges](https://aws.amazon.com/athena/pricing/) for running queries. You incur standard [AWS Lambda charges](https://aws.amazon.com/lambda/pricing/) for the Lambda function that loads new partitions on a recurring schedule (when you specify a partition load frequency but do not specify a start and end date).

**Topics**
+ [Generate the CloudFormation template using the console](flow-logs-generate-template-console.md)
+ [Generate the CloudFormation template using the AWS CLI](flow-logs-generate-template-cli.md)
+ [Run a predefined query](flow-logs-run-athena-query.md)

# Generate the CloudFormation template using the console
<a name="flow-logs-generate-template-console"></a>

After the first flow logs are delivered to your S3 bucket, you can integrate with Athena by generating a CloudFormation template and using the template to create a stack.

**Requirements**
+ The selected Region must support AWS Lambda and Amazon Athena.
+ The Amazon S3 buckets must be in the selected Region.
+ The log record format for the flow log must include the fields used by the specific predefined queries that you'd like to run.

**To generate the template using the console**

1. Do one of the following:
   + Open the Amazon VPC console. In the navigation pane, choose **Your VPCs** and then select your VPC.
   + Open the Amazon VPC console. In the navigation pane, choose **Subnets** and then select your subnet.
   + Open the Amazon EC2 console. In the navigation pane, choose **Network Interfaces** and then select your network interface.

1. On the **Flow logs** tab, select a flow log that publishes to Amazon S3 and then choose **Actions**, **Generate Athena integration**.

1. Specify the partition load frequency. If you choose **None**, you must specify the partition start and end date, using dates that are in the past. If you choose **Daily**, **Weekly**, or **Monthly**, the partition start and end dates are optional. If you do not specify start and end dates, the CloudFormation template creates a Lambda function that loads new partitions on a recurring schedule.

1. Select or create an S3 bucket for the generated template, and an S3 bucket for the query results.

1. Choose **Generate Athena integration**.

1. (Optional) In the success message, choose the link to navigate to the bucket that you specified for the CloudFormation template, and customize the template.

1. In the success message, choose **Create CloudFormation stack** to open the **Create Stack** wizard in the CloudFormation console. The URL for the generated CloudFormation template is specified in the **Template** section. Complete the wizard to create the resources that are specified in the template.

**Resources created by the CloudFormation template**
+ An Athena database. The database name is vpcflowlogsathenadatabase<*flow-logs-subscription-id*>.
+ An Athena workgroup. The workgroup name is <*flow-log-subscription-id*><*partition-load-frequency*><*start-date*><*end-date*>workgroup.
+ A partitioned Athena table that corresponds to your flow log records. The table name is <*flow-log-subscription-id*><*partition-load-frequency*><*start-date*><*end-date*>.
+ A set of Athena named queries. For more information, see [Predefined queries](flow-logs-run-athena-query.md#predefined-queries).
+ A Lambda function that loads new partitions to the table on the specified schedule (daily, weekly, or monthly).
+ An IAM role that grants permission to run the Lambda functions.

# Generate the CloudFormation template using the AWS CLI
<a name="flow-logs-generate-template-cli"></a>

After the first flow logs are delivered to your S3 bucket, you can generate and use a CloudFormation template to integrate with Athena.

Use the following [get-flow-logs-integration-template](https://docs.aws.amazon.com/cli/latest/reference/ec2/get-flow-logs-integration-template.html) command to generate the CloudFormation template.

```
aws ec2 get-flow-logs-integration-template --cli-input-json file://config.json
```

The following is an example of the `config.json` file.

```
{
    "FlowLogId": "fl-12345678901234567",
    "ConfigDeliveryS3DestinationArn": "arn:aws:s3:::my-flow-logs-athena-integration/templates/",
    "IntegrateServices": {
        "AthenaIntegrations": [
            {
                "IntegrationResultS3DestinationArn": "arn:aws:s3:::my-flow-logs-analysis/athena-query-results/",
                "PartitionLoadFrequency": "monthly",
                "PartitionStartDate": "2021-01-01T00:00:00",
                "PartitionEndDate": "2021-12-31T00:00:00"
            }
        ]
    }
}
```

Use the following [create-stack](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack.html) command to create a stack using the generated CloudFormation template.

```
aws cloudformation create-stack --stack-name my-vpc-flow-logs --template-body file://my-cloudformation-template.json
```

# Run a predefined query
<a name="flow-logs-run-athena-query"></a>

The generated CloudFormation template provides a set of predefined queries that you can run to quickly get meaningful insights about the traffic in your AWS network. After you create the stack and verify that all resources were created correctly, you can run one of the predefined queries.

**To run a predefined query using the console**

1. Open the Athena console.

1. In the navigation pane, choose **Query editor**. Under **Workgroup**, select the workgroup created by the CloudFormation template.

1. Select **Saved queries**, select a query, modify the parameters as needed, and run the query. For a list of available predefined queries, see [Predefined queries](#predefined-queries).

1. Under **Query results**, view the query results.
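
You can also work with the saved queries from the command line. The following sketch lists the named queries in the workgroup created by the CloudFormation template, retrieves one, and runs a query; the workgroup name, query ID, and query string are placeholders.

```
aws athena list-named-queries --work-group my-flow-logs-workgroup

aws athena get-named-query --named-query-id example-named-query-id

aws athena start-query-execution \
    --work-group my-flow-logs-workgroup \
    --query-string "SELECT * FROM example_flow_logs_table LIMIT 10"
```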

## Predefined queries
<a name="predefined-queries"></a>

The following is the complete list of Athena named queries. The predefined queries that are provided when you generate the template depend on the fields that are part of the log record format for the flow log. Therefore, the template might not contain all of these predefined queries.
+ **VpcFlowLogsAcceptedTraffic** – The TCP connections that were allowed based on your security groups and network ACLs.
+ **VpcFlowLogsAdminPortTraffic** – The top 10 IP addresses with the most traffic, as recorded by applications serving requests on administrative ports.
+ **VpcFlowLogsIPv4Traffic** – The total bytes of IPv4 traffic recorded.
+ **VpcFlowLogsIPv6Traffic** – The total bytes of IPv6 traffic recorded.
+ **VpcFlowLogsRejectedTCPTraffic** – The TCP connections that were rejected based on your security groups or network ACLs.
+ **VpcFlowLogsRejectedTraffic** – The traffic that was rejected based on your security groups or network ACLs.
+ **VpcFlowLogsSshRdpTraffic** – The SSH and RDP traffic.
+ **VpcFlowLogsTopTalkers** – The 50 IP addresses with the most traffic recorded.
+ **VpcFlowLogsTopTalkersPacketLevel** – The 50 packet-level IP addresses with the most traffic recorded.
+ **VpcFlowLogsTopTalkingInstances** – The IDs of the 50 instances with the most traffic recorded.
+ **VpcFlowLogsTopTalkingSubnets** – The IDs of the 50 subnets with the most traffic recorded.
+ **VpcFlowLogsTopTCPTraffic** – All TCP traffic recorded for a source IP address.
+ **VpcFlowLogsTotalBytesTransferred** – The 50 pairs of source and destination IP addresses with the most bytes recorded.
+ **VpcFlowLogsTotalBytesTransferredPacketLevel** – The 50 pairs of packet-level source and destination IP addresses with the most bytes recorded.
+ **VpcFlowLogsTrafficFrmSrcAddr** – The traffic recorded for a specific source IP address.
+ **VpcFlowLogsTrafficToDstAddr** – The traffic recorded for a specific destination IP address.

# Troubleshoot VPC Flow Logs
<a name="flow-logs-troubleshooting"></a>

The following are possible issues you might have when working with flow logs.

**Topics**
+ [Incomplete flow log records](#flow-logs-troubleshooting-incomplete-records)
+ [Flow log is active, but no flow log records or log group](#flow-logs-troubleshooting-no-log-group)
+ ['LogDestinationNotFoundException' or 'Access Denied for LogDestination' error](#flow-logs-troubleshooting-not-found)
+ [Exceeding the Amazon S3 bucket policy limit](#flow-logs-troubleshooting-policy-limit)
+ [LogDestination undeliverable](#flow-logs-troubleshooting-kms-id)
+ [Flow logs data size mismatch with billing data](#flow-logs-data-size-mismatch)

## Incomplete flow log records
<a name="flow-logs-troubleshooting-incomplete-records"></a>

**Problem**  
Your flow log records are incomplete or are no longer being published.

**Cause**  
There might be a problem delivering the flow logs to the CloudWatch Logs log group, or [SkipData entries might be present](flow-logs-records-examples.md#flow-log-example-no-data).

**Solution**  
Check the **Flow logs** tab for the VPC, subnet, or network interface. Note that you can't describe flow logs for a VPC or subnet that was shared with you, but you can describe flow logs for a network interface that you create in a VPC or subnet that was shared with you. If there are any errors, they appear in the **Status** column. Alternatively, use the [describe-flow-logs](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-flow-logs.html) command, and check the value that's returned in the `DeliverLogsErrorMessage` field.

The following are possible error values for the status:
+ `Rate limited`: This error can occur if CloudWatch Logs throttling has been applied, which happens when the number of flow log records for a network interface exceeds the maximum number of records that can be published within a specific timeframe. This error can also occur if you've reached the quota for the number of CloudWatch Logs log groups that you can create. For more information, see [CloudWatch service quotas](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html) in the *Amazon CloudWatch User Guide*.
+ `Access error`: This error can occur for one of the following reasons:
  + The IAM role for your flow log does not have sufficient permissions to publish flow log records to the CloudWatch log group
  + The IAM role does not have a trust relationship with the flow logs service
  + The trust relationship does not specify the flow logs service as the principal

  For more information, see [IAM role for publishing flow logs to CloudWatch Logs](flow-logs-iam-role.md).
+ `Unknown error`: An internal error has occurred in the flow logs service. 

## Flow log is active, but no flow log records or log group
<a name="flow-logs-troubleshooting-no-log-group"></a>

**Problem**  
You created a flow log, and the Amazon VPC or Amazon EC2 console displays the flow log as `Active`. However, you cannot see any log streams in CloudWatch Logs or log files in your Amazon S3 bucket.

**Possible causes**
+ The flow log is still being created. In some cases, it can take ten minutes or more after you create the flow log for the log group to be created, and for data to be displayed.
+ There has been no traffic recorded for your network interfaces yet. The log group in CloudWatch Logs is only created when traffic is recorded.

**Solution**  
Wait a few minutes for the log group to be created, or for traffic to be recorded.

## 'LogDestinationNotFoundException' or 'Access Denied for LogDestination' error
<a name="flow-logs-troubleshooting-not-found"></a>

**Problem**  
You get an `Access Denied for LogDestination` or a `LogDestinationNotFoundException` error when you create a flow log.

**Possible causes**
+ When creating a flow log that publishes data to an Amazon S3 bucket, this error indicates that the specified S3 bucket could not be found or that the bucket policy does not allow logs to be delivered to the bucket.
+ When creating a flow log that publishes data to Amazon CloudWatch Logs, this error indicates that the IAM role does not allow logs to be delivered to the log group.

**Solution**
+ When publishing to Amazon S3, ensure that you have specified the ARN for an existing S3 bucket, and that the ARN is in the correct format. If you do not own the S3 bucket, verify that the [bucket policy](flow-logs-s3-permissions.md) has the required permissions and uses the correct account ID and bucket name in the ARN.
+ When publishing to CloudWatch Logs, verify that the [IAM role](flow-logs-iam-role.md) has the required permissions.

## Exceeding the Amazon S3 bucket policy limit
<a name="flow-logs-troubleshooting-policy-limit"></a>

**Problem**  
You get the following error when you try to create a flow log: `LogDestinationPermissionIssueException`.

**Possible causes**  
Amazon S3 bucket policies are limited to 20 KB in size.

Each time that you create a flow log that publishes to an Amazon S3 bucket, we automatically add the specified bucket ARN, which includes the folder path, to the `Resource` element in the bucket's policy.

Creating multiple flow logs that publish to the same bucket could cause you to exceed the bucket policy limit.

**Solution**
+ Clean up the bucket policy by removing the flow log entries that are no longer needed.
+ Grant permissions to the entire bucket by replacing the individual flow log entries with the following.

  ```
  arn:aws:s3:::bucket_name/*
  ```

  If you grant permissions to the entire bucket, new flow log subscriptions do not add new permissions to the bucket policy.
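
For example, after consolidation, the statement that grants the log delivery service write access might look like the following, with a single wildcard resource instead of one entry per flow log. This is a sketch; keep the other elements of your existing policy statement (such as the `Condition` block) as the service originally wrote them.

```
{
    "Sid": "AWSLogDeliveryWrite",
    "Effect": "Allow",
    "Principal": {
        "Service": "delivery.logs.amazonaws.com"
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::bucket_name/*"
}
```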

## LogDestination undeliverable
<a name="flow-logs-troubleshooting-kms-id"></a>

**Problem**  
You get the following error when you try to create a flow log: `LogDestination <bucket name> is undeliverable`.

**Possible causes**  
The target Amazon S3 bucket is encrypted using server-side encryption with AWS KMS (SSE-KMS) and the default encryption of the bucket is a KMS key ID.

**Solution**  
The value must be a KMS key ARN. Change the default S3 encryption type from KMS key ID to KMS key ARN. For more information, see [Configuring default encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html) in the *Amazon Simple Storage Service User Guide*.
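
For example, the following AWS CLI sketch sets the bucket's default encryption to SSE-KMS using a key ARN; the bucket name and key ARN are placeholders.

```
aws s3api put-bucket-encryption \
    --bucket amzn-s3-demo-bucket \
    --server-side-encryption-configuration '{
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            }
        }]
    }'
```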

## Flow logs data size mismatch with billing data
<a name="flow-logs-data-size-mismatch"></a>

**Problem**  
The total data size of your flow logs does not match the size reported by billing data.

**Possible causes**  
There may be SKIPDATA entries in your flow logs. See [No data and skipped records](flow-logs-records-examples.md#flow-log-example-no-data) for an explanation of SKIPDATA entries.

**Solution**  
Confirm that SKIPDATA entries are present by querying your logs for the different values of the `log-status` field.

Sample queries to check for SKIPDATA:

CloudWatch Logs Insights:

```
fields @timestamp, @message, @logStream, @log
| filter interfaceId = 'eni-123'
| stats count(*) by interfaceId, logStatus
| sort by interfaceId, logStatus
```

Athena:

```
SELECT log_status, interface_id, count(1)
FROM vpc_flow_logs
WHERE interface_id IN ('eni-1', 'eni-2', 'eni-3')
GROUP BY log_status, interface_id
```

# CloudWatch metrics for your VPCs
<a name="vpc-cloudwatch"></a>

Amazon VPC publishes data about your VPCs to Amazon CloudWatch. You can retrieve statistics about your VPCs as an ordered set of time-series data, known as *metrics*. Think of a metric as a variable to monitor and the data as the value of that variable over time. For more information, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/).

**Topics**
+ [NAU metrics and dimensions](#nau-cloudwatch)
+ [Enable or disable NAU monitoring](#nau-monitoring-enable)
+ [NAU CloudWatch alarm example](#nau-cloudwatch-alarm-example)

## NAU metrics and dimensions
<a name="nau-cloudwatch"></a>

[Network Address Usage](network-address-usage.md) (NAU) is a metric applied to resources in your virtual network to help you plan for and monitor the size of your VPC. There is no cost to monitor NAU. Monitoring NAU is helpful because if you exhaust the NAU or peered NAU quotas for your VPC, you can't launch new EC2 instances or provision new resources, such as Network Load Balancers, VPC endpoints, Lambda functions, transit gateway attachments, and NAT gateways.

If you've enabled Network Address Usage monitoring for a VPC, Amazon VPC sends metrics related to NAU to Amazon CloudWatch. The size of a VPC is measured by the number of Network Address Usage (NAU) units that the VPC contains.

You can use these metrics to understand the rate of your VPC growth, forecast when your VPC will reach its size limit, or create alarms when size thresholds are crossed.

The `AWS/EC2` namespace includes the following metrics for monitoring NAU.


| Metric | Description | 
| --- | --- | 
|  NetworkAddressUsage  |  The NAU count per VPC. **Reporting criteria** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html) **Dimensions** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html)  | 
|  NetworkAddressUsagePeered  |  The NAU count for the VPC and all VPCs that it's peered with. **Reporting criteria** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html) **Dimensions** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html)  | 

The `AWS/Usage` namespace includes the following metrics for monitoring NAU.


| Metric | Description | 
| --- | --- | 
|  ResourceCount  |  The NAU count per VPC. **Reporting criteria** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html) **Dimensions** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html)  | 
|  ResourceCount  |  The NAU count for the VPC and all VPCs that it's peered with. **Reporting criteria** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html) **Dimensions** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html)  | 
|  ResourceCount  |  A combined view of NAU usage across VPCs. **Reporting criteria** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html) **Dimensions** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html)  | 
|  ResourceCount  |  A combined view of NAU usage across peered VPCs. **Reporting criteria** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html) **Dimensions** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/vpc/latest/userguide/vpc-cloudwatch.html)  | 

## Enable or disable NAU monitoring
<a name="nau-monitoring-enable"></a>

To view NAU metrics in CloudWatch, you must first enable monitoring on each VPC that you want to monitor.

**To enable or disable monitoring NAU**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Your VPCs**.

1. Select the check box for the VPC.

1. Select **Actions**, **Edit VPC settings**.

1. Do one of the following:
   + To enable monitoring, select **Network mapping units metrics settings**, **Enable network address usage metrics**.
   + To disable monitoring, clear **Network mapping units metrics settings**, **Enable network address usage metrics**.

**To enable or disable monitoring using the command line**
+ [modify-vpc-attribute](https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-vpc-attribute.html) (AWS CLI)
+ [Edit-EC2VpcAttribute](https://docs.aws.amazon.com/powershell/latest/reference/items/Edit-EC2VpcAttribute.html) (AWS Tools for Windows PowerShell)
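
For example, the following AWS CLI sketch enables NAU monitoring for a VPC; the VPC ID is a placeholder.

```
aws ec2 modify-vpc-attribute \
    --vpc-id vpc-0123456789abcdef0 \
    --enable-network-address-usage-metrics '{"Value":true}'
```

To disable monitoring, set `"Value"` to `false`.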

## NAU CloudWatch alarm example
<a name="nau-cloudwatch-alarm-example"></a>

You can use the following AWS CLI command and example `.json` file to create an Amazon CloudWatch alarm and SNS notification that tracks NAU utilization of the VPC, with 50,000 NAUs as the threshold. This example requires you to first create an Amazon SNS topic. For more information, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html) in the *Amazon Simple Notification Service Developer Guide*.

```
aws cloudwatch put-metric-alarm --cli-input-json file://nau-alarm.json
```

The following is an example of `nau-alarm.json`.

```
{
    "Namespace": "AWS/EC2",
    "MetricName": "NetworkAddressUsage",
    "Dimensions": [{
        "Name": "Per-VPC Metrics",
        "Value": "vpc-0123456798"
    }],
    "AlarmActions": ["arn:aws:sns:us-west-1:123456789012:my_sns_topic"],
    "ComparisonOperator": "GreaterThanThreshold",
    "Period": 86400,
    "EvaluationPeriods": 1,
    "Threshold": 50000,
    "AlarmDescription": "Tracks NAU utilization of the VPC with 50k NAUs as the threshold",
    "AlarmName": "VPC NAU Utilization",
    "Statistic": "Maximum"
}
```
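Before you choose an alarm threshold, you can check the recent NAU value for a VPC with the `get-metric-statistics` command, using the same namespace, metric, and dimension as the alarm above. The VPC ID is a placeholder, and the `date -d` syntax assumes GNU `date`:

```
# Retrieve the maximum NAU value for the last 24 hours.
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name NetworkAddressUsage \
    --dimensions Name="Per-VPC Metrics",Value=vpc-1234567890abcdef0 \
    --start-time "$(date -u -d '1 day ago' +%Y-%m-%dT%H:%M:%S)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
    --period 86400 \
    --statistics Maximum
```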

# Understand codes for Amazon VPC in billing and usage reports
<a name="vpc-billing-usage-reports"></a>

When you use Amazon VPC, we include related codes in your AWS billing and usage reports. Reviewing these codes helps you understand your Amazon VPC costs and usage patterns so that you can track, manage, and optimize your expenses.

The following tables describe the codes for Amazon VPC that appear in your billing and usage reports. For a list of the Region codes used in the billing and usage reports, see [AWS Region billing codes](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-region-billing-codes.html).

**Topics**
+ [IP address management](#ip-billing-usage-reports)
+ [VPC endpoints](#vpce-billing-usage-reports)
+ [Transit gateways](#tgw-billing-usage-reports)
+ [Network analysis](#analysis-billing-usage-reports)
+ [Traffic mirroring](#mirroring-billing-usage-reports)
+ [VPC Lattice](#lattice-billing-usage-reports)
+ [Cross-account/Region resources](#cross-billing-usage-reports)

**Related resources**
+ [Amazon VPC pricing](https://aws.amazon.com/vpc/pricing/)
+ [AWS PrivateLink pricing](https://aws.amazon.com/privatelink/pricing/)
+ [AWS Transit Gateway pricing](https://aws.amazon.com/transit-gateway/pricing/)
+ [Amazon VPC Lattice pricing](https://aws.amazon.com/vpc/lattice/pricing/)

## IP address management
<a name="ip-billing-usage-reports"></a>


| Code | Description | Units | Granularity | 
| --- | --- | --- | --- | 
| region-PublicIPv4:InUseAddress | The time that public IPv4 addresses are in use by a resource. | Hours | Per-second | 
| region-PublicIPv4:IdleAddress | The time that public IPv4 addresses are not in use by a resource. | Hours | Per-second | 
| region-PublicIPv4:ContiguousBlock | The use of public IPv4 addresses in an Amazon-provided contiguous IPv4 block. | Hours | Hourly | 
| region-IPAddressManager-IP-Hours | The time that IP addresses are managed by IPAM Advanced Tier. | Hours | Hourly | 



## VPC endpoints
<a name="vpce-billing-usage-reports"></a>


| Code | Description | Units | Granularity | 
| --- | --- | --- | --- | 
| region-VpcEndpoint-Hours | The time that interface VPC endpoints are provisioned. | Hours | Hourly | 
| region-VpcEndpoint-Bytes | The data processed by interface VPC endpoints. | GB | Hourly | 
| region-VpcEndpoint-GWLBE-Hours | The time that Gateway Load Balancer endpoints are provisioned. | Hours | Hourly | 
| region-VpcEndpoint-GWLBE-Bytes | The data processed by Gateway Load Balancer endpoints. | GB | Hourly | 



## Transit gateways
<a name="tgw-billing-usage-reports"></a>


| Code | Description | Units | Granularity | 
| --- | --- | --- | --- | 
| region-TransitGateway-Hours | The use of transit gateway attachments. | Hours | Hourly | 
| region-TransitGateway-Bytes | The data processed by transit gateways. | GB | Hourly | 
| region-TGW-Multicast-Consumer-Bytes | The data processed by multicast receiver instances. | GB | Hourly | 



## Network analysis
<a name="analysis-billing-usage-reports"></a>


| Code | Description | Units | Granularity | 
| --- | --- | --- | --- | 
| region-Analysis-Runs | The number of network paths analyzed by Reachability Analyzer. | Count | Per analysis | 
| region-NetworkInterface-Assessment | The number of network interfaces analyzed by Network Access Analyzer. | Count | Per assessment | 



## Traffic mirroring
<a name="mirroring-billing-usage-reports"></a>


| Code | Description | Units | Granularity | 
| --- | --- | --- | --- | 
| region-ENI-Mirror | The time that a network interface is configured for traffic mirroring. | Hours | Hourly | 



## VPC Lattice
<a name="lattice-billing-usage-reports"></a>


| Code | Description | Units | Granularity | 
| --- | --- | --- | --- | 
| region-VPCLattice-Service-Hourly | The running time for VPC Lattice services. | Hours | Hourly | 
| region-VPCLattice-DataProcessing-Bytes | The data processed by VPC Lattice services. | GB | Hourly | 
| region-VPCLattice-RequestCount-Free | The free HTTP requests and TCP connections. | Count | Hourly | 
| region-VpcLattice-Service-Network-Resource-Hours | The running time for VPC Lattice service networks. | Hours | Hourly | 



## Cross-account/Region resources
<a name="cross-billing-usage-reports"></a>


| Code | Description | Units | Granularity | 
| --- | --- | --- | --- | 
| region-VpcResource-Provider-Bytes | The data transferred from provider resources across accounts or Regions. | GB | Hourly | 
| region-VpcResource-Consumer-Bytes | The data transferred by consumer resources across accounts or Regions. | GB | Hourly | 



# Describe your VPC network architecture
<a name="vpc-network-inventory"></a>

Amazon VPC enables you to define a logically isolated virtual network in the AWS Cloud, known as a virtual private cloud (VPC). Create separate VPCs to isolate infrastructure by workload or organizational entity. You can configure your VPCs by selecting IP address ranges, configuring routing, and adding network gateways to connect your VPCs to each other, the internet, or to your own corporate network. You launch AWS resources, such as EC2 instances or RDS instances, in your VPCs.

The following table describes the key characteristics of a VPC network. A network administrator can use this information to describe the architecture and configuration of your VPC network, and then configure a functionally equivalent network on premises or with another cloud provider.


| Characteristic | Description | 
| --- | --- | 
| [Geographic location](#vpc-network-geographic-location) | Amazon VPC is hosted in all AWS Regions world-wide. You can select the Regions for your VPC network that put your AWS resources closest to your customers. | 
| [Subnets](#vpc-network-subnets) | The subnets that you define for your VPCs define network boundaries and determine the IP addresses for your AWS resources. You can add subnets in multiple Availability Zones to increase the availability of your resources. | 
| [Network connectivity](#vpc-network-connectivity) | The gateways that you attach to your VPCs or subnets to provide connectivity between your VPC network and other networks, such as other VPCs or subnets, the internet, or your on-premises networks. | 
| [Security controls](#vpc-network-security-controls) | The security groups that you create for your VPCs control traffic to and from the associated resources, such as compute resources, database resources, and load balancers. Each subnet has a network ACL that controls traffic entering and leaving the subnet. | 
| [Traffic management](#vpc-network-traffic-management) | Routing rules control the traffic flow between subnets, VPCs, and external locations. The load balancers provided by Elastic Load Balancing distribute incoming traffic across multiple targets, such as EC2 instances, containers, and Lambda functions. | 

## Geographic location
<a name="vpc-network-geographic-location"></a>

Amazon VPC is available in every AWS Region world-wide. Each Region is a separate geographic area. You can lower network latency when you create VPCs for your resources in Regions that are close to the majority of your users.

You can use Amazon EC2 Global View to list your VPCs across all Regions using a graphical user interface (there is no equivalent programmatic interface). With the Amazon VPC console, AWS API, and AWS command line interfaces, you must list the VPCs and VPC resources for each Region individually.

**Why this matters**  
After you determine where your VPCs are located, you can decide whether to configure a functionally equivalent network in the same locations or different locations, depending on your needs.

**To get a summary of your VPCs across all Regions**

1. Open the Amazon EC2 Global View console at [https://console.aws.amazon.com/ec2globalview/home](https://console.aws.amazon.com/ec2globalview/home).

1. On the **Region explorer** tab, under **Summary**, check the resource count for **VPCs**, which includes the number of VPCs and the number of Regions. This count includes both the default VPCs that AWS creates on your behalf and the nondefault VPCs that you create. Choose the underlined text to see how the VPC count is spread across Regions. If a Region has only one VPC, it is most likely the default VPC for the Region.

1. On the **Global search** tab, select the client filter **Resource type = Vpc**. You can filter the results further by specifying a Region or a tag.

**To get the VPCs in a Region using the AWS CLI**  
Use the following [describe-vpcs](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-vpcs.html) command. You must run this command in each Region where you have VPCs. The `--query` parameter includes only the VPC IDs in the output. You can include additional fields as needed.

```
aws ec2 describe-vpcs \
    --region us-east-2 \
    --query "Vpcs[*].VpcId"
```

Each Region comes with a default VPC. If you aren't using the default VPCs, you can exclude them from the results by adding the following filter.

```
--filters Name=is-default,Values=false
```
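Because the command is per-Region, one way to script the full inventory is to loop over the Regions enabled for your account. This is a sketch; it assumes your credentials allow `describe-regions` and `describe-vpcs` in every Region:

```
# List VPC IDs in each Region enabled for the account.
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
    echo "--- ${region}"
    aws ec2 describe-vpcs --region "${region}" --query "Vpcs[].VpcId" --output text
done
```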

## Subnets
<a name="vpc-network-subnets"></a>

A subnet is a logical network boundary in a VPC. When you create a subnet, you assign it a block of IP addresses. Resources that you launch into the subnet receive IP addresses from this block. IP addresses allow resources to communicate with each other over a local network or the internet.

The resource map in the Amazon VPC console provides a visual representation of the subnets for your VPC.

**Why this matters**  
Subnets enable network administrators to implement security boundaries and control traffic between application tiers. By noting the IP address ranges of your subnets, you can help ensure that resources in a functionally equivalent network can communicate with the same clients or applications that they can in your VPC network.

**To view the subnets for a VPC using the resource map**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **VPCs**.

1. Select the checkbox for the VPC.

1. Choose the **Resource map** tab.

1. In the VPC pane, choose **Show details**. The **Subnets** pane lists all subnets in the VPC and shows their IP address ranges. Hover over a subnet to highlight its associated route table and network connections. For more detail, choose the link to open the subnet detail page.

**To describe the subnets for a VPC using the AWS CLI**  
Use the following [describe-subnets](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-subnets.html) command. The `--filters` parameter scopes the search to describe the subnets for the specified VPC. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

```
aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=vpc-1234567890abcdef0 \
    --query Subnets[*].[SubnetId,AvailabilityZoneId,CidrBlock,Ipv6CidrBlockAssociationSet[0].Ipv6CidrBlock] \
    --output table
```

The following is example output. The columns are subnet ID, AZ ID, IPv4 address range, and the first IPv6 address range (if any).

```
---------------------------------------------------------------------------------------
|                                   DescribeSubnets                                   |
+---------------------------+-----------+----------------+----------------------------+
|  subnet-0d2d1b81e0bc9c6d4 |  usw2-az1 |  10.0.144.0/20 |  2600:1f14:1e6e:a003::/64  |
|  subnet-0e01d500780bb7468 |  usw2-az1 |  10.0.16.0/20  |  2600:1f14:1e6e:a001::/64  |
|  subnet-0eb17d85f5dfd33b1 |  usw2-az2 |  10.0.128.0/20 |  2600:1f14:1e6e:a002::/64  |
|  subnet-0e990c67809773b19 |  usw2-az2 |  10.0.0.0/20   |  2600:1f14:1e6e:a000::/64  |
+---------------------------+-----------+----------------+----------------------------+
```
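When sizing subnets for an equivalent network, it can also help to know how many addresses remain free in each subnet. The following variation queries the `AvailableIpAddressCount` field; the VPC ID is a placeholder:

```
aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=vpc-1234567890abcdef0 \
    --query "Subnets[*].[SubnetId,CidrBlock,AvailableIpAddressCount]" \
    --output table
```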

## Network connectivity
<a name="vpc-network-connectivity"></a>

The connectivity options provided by Amazon VPC enable you to create a network that spans VPCs in multiple accounts and Regions, as well as remote networks.

You can use the resource map in the Amazon VPC console to discover whether your VPCs use internet gateways, egress-only internet gateways, NAT gateways, or gateway VPC endpoints. The resource map does not show transit gateways, peering connections, virtual private gateways, or other types of VPC endpoints that are in use. To get the complete list of gateways and peering connections for a VPC, describe each resource type individually using the console, the API, or a command line tool.

**Why this matters**  
After you understand the connectivity provided by your VPC network, you can ensure that resources in a functionally equivalent network can communicate with the same local and remote resources.

**To view the network connections for a VPC using the resource map**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **VPCs**.

1. Select the checkbox for the VPC.

1. Choose the **Resource map** tab.

1. In the VPC pane, choose **Show details**. The **Network connections** pane lists any internet gateways, egress-only internet gateways, NAT gateways, and gateway VPC endpoints. If the resource type isn't clear, hover over the link icon for the network connection and examine the resulting URL. This URL is a link to the resource in the console, and it contains the resource type and resource ID (for example, internetGatewayId=igw-0123456780abcdef).

**To get the network connections for your VPCs using the AWS CLI**

1. Use the following [describe-internet-gateways](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-internet-gateways.html) command to get the internet gateways for the specified Region. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

   ```
   aws ec2 describe-internet-gateways \
       --region us-east-2 \
       --query InternetGateways[*].[Attachments[0].VpcId,InternetGatewayId] \
       --output table
   ```

   The following is example output. The columns show the VPC IDs and internet gateway IDs.

   ```
   ----------------------------------------------------
   |             DescribeInternetGateways             |
   +------------------------+-------------------------+
   |  None                  |  igw-04c61dba10EXAMPLE  |
   |  vpc-0bf4c2739bEXAMPLE |  igw-09737a4029EXAMPLE  |
   |  vpc-060415a18fEXAMPLE |  igw-0c562bd22aEXAMPLE  |
   |  vpc-0ea9d41094EXAMPLE |  igw-0e06f7033dEXAMPLE  |
   |  vpc-03b86de356EXAMPLE |  igw-0a9ff72d05EXAMPLE  |
   +------------------------+-------------------------+
   ```

1. Use the following [describe-egress-only-internet-gateways](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-egress-only-internet-gateways.html) command to get the egress-only internet gateways for the specified Region. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

   ```
   aws ec2 describe-egress-only-internet-gateways \
       --region us-east-2 \
       --query EgressOnlyInternetGateways[*].[Attachments[0].VpcId,EgressOnlyInternetGatewayId] \
       --output table
   ```

   The following is example output. The columns show the VPC IDs and the egress-only internet gateway IDs.

   ```
   -----------------------------------------------------
   |        DescribeEgressOnlyInternetGateways         |
   +------------------------+--------------------------+
   |  vpc-060415a18fEXAMPLE |  eigw-0b8ca558acEXAMPLE  |
   +------------------------+--------------------------+
   ```

1. Use the following [describe-nat-gateways](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-nat-gateways.html) command to get the NAT gateways for the specified Region. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

   ```
   aws ec2 describe-nat-gateways \
       --region us-east-2 \
       --query NatGateways[*].[VpcId,NatGatewayId,SubnetId] \
       --output table
   ```

   The following is example output. The columns show the VPC IDs, NAT gateway IDs, and subnet IDs.

   ```
   ---------------------------------------------------------------------------------
   |                              DescribeNatGateways                              |
   +------------------------+-------------------------+----------------------------+
   |  vpc-060415a18fEXAMPLE |  nat-026316334aEXAMPLE  |  subnet-0eb17d85f5EXAMPLE  |
   |  vpc-060415a18fEXAMPLE |  nat-0f08bc5f52EXAMPLE  |  subnet-0d2d1b81e0EXAMPLE  |
   +------------------------+-------------------------+----------------------------+
   ```

1. Use the following [describe-transit-gateway-vpc-attachments](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-transit-gateway-vpc-attachments.html) command to get the transit gateway VPC attachments for the specified Region. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

   ```
   aws ec2 describe-transit-gateway-vpc-attachments \
       --region us-east-2 \
       --query TransitGatewayVpcAttachments[*].[VpcId,TransitGatewayId,length(SubnetIds[])] \
       --output table
   ```

   The following is example output. The columns show the VPC IDs, transit gateway IDs, and the count of subnets.

   ```
   ---------------------------------------------------------
   |         DescribeTransitGatewayVpcAttachments          |
   +------------------------+-------------------------+----+
   |  vpc-0bf4c2739bEXAMPLE |  tgw-055dc4e47bEXAMPLE  |  4 |
   |  vpc-0ea9d41094EXAMPLE |  tgw-055dc4e47bEXAMPLE  |  2 |
   +------------------------+-------------------------+----+
   ```

1. Use the following [describe-vpc-peering-connections](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-vpc-peering-connections.html) command to get the peering connections for the VPCs in the specified Region. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

   ```
   aws ec2 describe-vpc-peering-connections \
       --region us-east-2 \
       --query VpcPeeringConnections[*].[AccepterVpcInfo.VpcId,AccepterVpcInfo.OwnerId,RequesterVpcInfo.VpcId,RequesterVpcInfo.OwnerId] \
       --output table
   ```

   The following is example output. The columns show the accepter VPC IDs, accepter VPC owners, requester VPC IDs, and requester VPC owners.

   ```
   ------------------------------------------------------------------------------------
   |                          DescribeVpcPeeringConnections                           |
   +------------------------+---------------+------------------------+----------------+
   |  vpc-0ea9d41094EXAMPLE |  123456789012 |  vpc-03b86de356EXAMPLE |  123456789012  |
   +------------------------+---------------+------------------------+----------------+
   ```

1. Use the following [describe-vpn-gateways](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-vpn-gateways.html) command to get the virtual private gateways for the specified Region. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

   ```
   aws ec2 describe-vpn-gateways \
       --region us-east-2 \
       --query VpnGateways[*].[VpcAttachments[0].VpcId,VpnGatewayId] \
       --output table
   ```

   The following is example output. The columns show the VPC IDs and virtual private gateway IDs.

   ```
   ----------------------------------------------------
   |                DescribeVpnGateways               |
   +------------------------+-------------------------+
   |  vpc-0bf4c2739bEXAMPLE |  vgw-0cb3226c4aEXAMPLE  |
   +------------------------+-------------------------+
   ```

1. Use the following [describe-vpc-endpoints](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-vpc-endpoints.html) command to get the VPC endpoints for the specified Region. The `--query` parameter includes only the specified fields in the output. You can include additional fields as needed.

   ```
   aws ec2 describe-vpc-endpoints \
       --region us-east-2 \
       --query 'VpcEndpoints[*].[VpcId,VpcEndpointType,ServiceName||ServiceNetworkArn||ResourceConfigurationArn]' \
       --output table
   ```

   The following is example output. The first column shows the VPC ID and the second column shows the VPC endpoint type. The third column depends on the endpoint type and shows the service name, the service network ARN, or the resource configuration ARN.

   ```
   ----------------------------------------------------------------------------------------------------------------------------------------
   |                                                         DescribeVpcEndpoints                                                         |
   +------------------------+-----------------+-------------------------------------------------------------------------------------------+
   |  vpc-060415a18fcc9afde |  Interface      |  com.amazonaws.vpce.us-west-2.vpce-svc-007832a03d60fc387                                  |
   |  vpc-060415a18fcc9afde |  Interface      |  com.amazonaws.vpce.us-west-2.vpce-svc-007832a03d60fc387                                  |
   |  vpc-0bf4c2739bc05a694 |  Gateway        |  com.amazonaws.us-west-2.s3                                                               |
   |  vpc-0ea9d410947d27b7d |  Interface      |  com.amazonaws.us-west-2.logs                                                             |
   |  vpc-0bf4c2739bc05a694 |  Resource       |  arn:aws:vpc-lattice:us-east-2:123456789012:resourceconfiguration/rcfg-07129f3acded87625  |
   |  vpc-0bf4c2739bc05a694 |  ServiceNetwork |  arn:aws:vpc-lattice:us-east-2:123456789012:servicenetwork/sn-0808d1748faee0c1e           |
   |  vpc-0bf4c2739bc05a694 |  ServiceNetwork |  arn:aws:vpc-lattice:us-east-2:123456789012:servicenetwork/sn-0808d1748faee0c1e           |
   +------------------------+-----------------+-------------------------------------------------------------------------------------------+
   ```

## Security controls
<a name="vpc-network-security-controls"></a>

The security controls provided by Amazon VPC determine network access to your VPCs and the resources deployed in your VPCs.

**Why this matters**  
After you determine the inbound traffic allowed to reach your subnets and resources and the outbound traffic allowed to leave them, you can plan the firewall rules needed for a functionally equivalent network.

**Topics**
+ [Security groups](#vpc-network-security-groups)
+ [Network ACLs](#vpc-network-acls-subnet)

### Security groups
<a name="vpc-network-security-groups"></a>

A security group allows specific inbound and outbound traffic at the resource level. Security groups are the primary mechanism to control access to resources in your VPCs.

**To get the security groups for your VPCs**  
Use the following [describe-security-groups](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-security-groups.html) command to display the security groups for the specified VPC.

```
aws ec2 describe-security-groups \
    --filters Name=vpc-id,Values=vpc-1234567890abcdef0 \
    --query SecurityGroups[*].GroupId
```

**To get the inbound rules for a security group**  
Use the following [describe-security-group-rules](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-security-group-rules.html) command to display the rules for the specified security group where `IsEgress` is `false`.

```
aws ec2 describe-security-group-rules \
    --filters Name=group-id,Values=sg-0abcdef1234567890 \
    --query 'SecurityGroupRules[?IsEgress==`false`]'
```

**To get the outbound rules for a security group**  
Use the following [describe-security-group-rules](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-security-group-rules.html) command to display the rules for the specified security group where `IsEgress` is `true`.

```
aws ec2 describe-security-group-rules \
    --filters Name=group-id,Values=sg-0abcdef1234567890 \
    --query 'SecurityGroupRules[?IsEgress==`true`]'
```
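To summarize the inbound rules as a compact table rather than full JSON, you can select just the fields that map to typical firewall rules. The security group ID is a placeholder; for rules that reference another security group instead of a CIDR block, the `ReferencedGroupInfo.GroupId` column is populated instead of `CidrIpv4`:

```
aws ec2 describe-security-group-rules \
    --filters Name=group-id,Values=sg-0abcdef1234567890 \
    --query 'SecurityGroupRules[?IsEgress==`false`].[IpProtocol,FromPort,ToPort,CidrIpv4,ReferencedGroupInfo.GroupId]' \
    --output table
```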

### Network ACLs
<a name="vpc-network-acls-subnet"></a>

A network access control list (ACL) allows or denies specific inbound and outbound traffic at the subnet level. You can use network ACLs as defense-in-depth in case a resource is deployed without the correct security group.

**To get the network ACLs for your subnets**  
Use the following [describe-network-acls](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-network-acls.html) command to display the network ACLs for the specified VPC and their subnet associations.

```
aws ec2 describe-network-acls \
    --filters Name=vpc-id,Values=vpc-1234567890abcdef0 \
    --query "NetworkAcls[*].{ID:NetworkAclId,Subnets:Associations[].SubnetId}"
```

**To get the inbound rules for a network ACL**  
Use the following [describe-network-acls](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-network-acls.html) command to display the rules for the specified network ACL where `Egress` is `false`.

```
aws ec2 describe-network-acls \
    --network-acl-ids acl-0abcdef1234567890 \
    --query 'NetworkAcls[*].Entries[?Egress==`false`]'
```

**To get the outbound rules for a network ACL**  
Use the following [describe-network-acls](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-network-acls.html) command to display the rules for the specified network ACL where `Egress` is `true`.

```
aws ec2 describe-network-acls \
    --network-acl-ids acl-0abcdef1234567890 \
    --query 'NetworkAcls[*].Entries[?Egress==`true`]'
```
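Network ACL rules are evaluated in ascending order by rule number, and the first matching rule applies, so the rule numbers matter when you recreate the rules elsewhere. The following variation lists the inbound entries in a compact form (a protocol of `-1` means all protocols; the network ACL ID is a placeholder):

```
aws ec2 describe-network-acls \
    --network-acl-ids acl-0abcdef1234567890 \
    --query 'NetworkAcls[0].Entries[?Egress==`false`].[RuleNumber,Protocol,RuleAction,CidrBlock]' \
    --output table
```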

## Traffic management
<a name="vpc-network-traffic-management"></a>

Effective traffic management combines the network-level routing decisions provided by route tables with the application-level distribution strategies provided by load balancing.

**Why this matters**  
Network administrators must design subnets, routing, DNS resolution, and load balancing to optimize traffic flow while maintaining security boundaries and performance requirements. By noting the configuration of these components in your VPC network, you can help to ensure that resources in a functionally equivalent network can communicate with the same clients or devices that they can in your VPC network.

**Topics**
+ [Route tables](#vpc-network-traffic-routing)
+ [DHCP option set](#vpc-network-dhcp-options)
+ [Load balancers](#vpc-network-traffic-elb)

### Route tables
<a name="vpc-network-traffic-routing"></a>

Route tables determine how network traffic flows across network boundaries such as subnets, VPCs, on-premises networks, and the internet.

The resource map in the Amazon VPC console provides a visual representation of the route tables for your VPC.

**To view the route tables for a VPC using the resource map**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **VPCs**.

1. Select the checkbox for the VPC.

1. Choose the **Resource map** tab.

1. The **Route tables** pane lists all route tables for the VPC. Hover over a route table to highlight its associated subnets and network connections. For more detail, choose the link to open the route table detail page.

**To describe your route tables**  
Use the [describe-route-tables](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-route-tables.html) command to describe the route tables for the specified VPC and their subnet associations.

```
aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=vpc-1234567890abcdef0 \
    --query "RouteTables[*].{ID:RouteTableId,Subnets:Associations[].SubnetId}"
```

**To get the routes for a route table**  
Use the [describe-route-tables](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-route-tables.html) command to describe the routes for the specified route table.

```
aws ec2 describe-route-tables \
    --route-table-ids rtb-02ec01715bEXAMPLE \
    --query RouteTables[*].Routes
```
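The target of each route is returned in a type-specific field (for example, `GatewayId` for internet gateways and `NatGatewayId` for NAT gateways), so a compact listing must select the fields for the route types that you use. The following is a sketch for those two target types:

```
aws ec2 describe-route-tables \
    --route-table-ids rtb-02ec01715bEXAMPLE \
    --query "RouteTables[0].Routes[*].[DestinationCidrBlock,GatewayId,NatGatewayId,State]" \
    --output table
```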

### DHCP option set
<a name="vpc-network-dhcp-options"></a>

Your VPC has a DHCP option set that you can use to configure various network settings. For example, you can configure custom DNS servers so that your EC2 instances can resolve internal host names using your existing DNS infrastructure. For more information, see [DHCP option set concepts](DHCPOptionSetConcepts.md).

**To describe the DHCP options for your VPC**  
Use the [describe-dhcp-options](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-dhcp-options.html) command to describe the specified DHCP options. The example also gets the ID of the DHCP option set for the specified VPC using the [describe-vpcs](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-vpcs.html) command.


```
aws ec2 describe-dhcp-options \
    --dhcp-options-ids "$(aws ec2 describe-vpcs \
        --vpc-ids vpc-1234567890abcdef0 \
        --query Vpcs[].DhcpOptionsId --output text)"
```


The following is example output for a VPC that uses the default DHCP options.

```
{
    "DhcpOptions": [
        {
            "OwnerId": "415546850671",
            "Tags": [],
            "DhcpOptionsId": "dopt-1234567890abcdef0",
            "DhcpConfigurations": [
                {
                    "Key": "domain-name",
                    "Values": [
                        {
                            "Value": "us-west-2.compute.internal"
                        }
                    ]
                },
                {
                    "Key": "domain-name-servers",
                    "Values": [
                        {
                            "Value": "AmazonProvidedDNS"
                        }
                    ]
                }
            ]
        }
    ]
}
```

### Load balancers
<a name="vpc-network-traffic-elb"></a>

Load balancing distributes incoming traffic from clients across multiple targets. Load balancers monitor the health of targets and automatically remove unhealthy targets from traffic distribution, ensuring that only healthy targets are used. This improves the availability and performance of your application and optimizes resource utilization. For more information, see the [Elastic Load Balancing User Guide](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/).

**To describe your load balancers**  
Use the [describe-load-balancers](https://docs.aws.amazon.com/cli/latest/reference/elbv2/describe-load-balancers.html) command to display the load balancers for the specified VPC.

```
aws elbv2 describe-load-balancers \
    --query 'LoadBalancers[?VpcId==`vpc-1234567890abcdef0`].LoadBalancerArn'
```
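To capture enough detail to recreate the load balancing layer, you can extend the query with the load balancer type, scheme, and DNS name. The VPC ID is a placeholder:

```
aws elbv2 describe-load-balancers \
    --query 'LoadBalancers[?VpcId==`vpc-1234567890abcdef0`].[LoadBalancerName,Type,Scheme,DNSName]' \
    --output table
```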

## Related resources
<a name="vpc-network-related-resources"></a>

The following are optional services or features that you might be using in your VPC network:
+ [Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/)
+ [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/)
+ [IPAM](https://docs.aws.amazon.com/vpc/latest/ipam/)
+ [Traffic mirroring](https://docs.aws.amazon.com/vpc/latest/mirroring/)
+ [VPC flow logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html)