

# Configuring Amazon Kinesis Agent for Microsoft Windows
<a name="configuring-kinesis-agent-windows"></a>

Before starting Amazon Kinesis Agent for Microsoft Windows, you must create a configuration file and deploy it. The configuration file provides the necessary information to collect, transform, and stream data from Windows servers and desktop computers to various AWS services. Configuration files define sets of sources, sinks, and pipes that connect sources to sinks, along with optional transformations. 

The Kinesis Agent for Windows configuration file is named `appsettings.json`. Deploy this file to `%PROGRAMFILES%\Amazon\AWSKinesisTap`.

**Topics**
+ [Basic Configuration Structure](basic-configuration-structure.md)
+ [Source Declarations](source-object-declarations.md)
+ [Sink Declarations](sink-object-declarations.md)
+ [Pipe Declarations](pipe-object-declarations.md)
+ [Configuring Automatic Updates](update-configuration-options.md)
+ [Kinesis Agent for Windows Configuration Examples](configuring-kaw-examples.md)
+ [Configuring Telemetrics](telemetrics-configuration-option.md)

# Basic Configuration Structure
<a name="basic-configuration-structure"></a>

The basic structure of the Amazon Kinesis Agent for Microsoft Windows configuration file is a JSON document with the following template:

```
{
     "Sources": [ ],
     "Sinks": [ ],
     "Pipes": [ ]
}
```
+ The value of `Sources` is one or more [Source Declarations](source-object-declarations.md).
+ The value of `Sinks` is one or more [Sink Declarations](sink-object-declarations.md).
+ The value of `Pipes` is one or more [Pipe Declarations](pipe-object-declarations.md).
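
A quick way to sanity-check an `appsettings.json` before deploying it is to confirm that it parses as JSON and contains the three top-level arrays. The following is a generic sketch using Python's standard library, not a tool shipped with Kinesis Agent for Windows:

```python
import json

# Minimal sanity check of the basic appsettings.json shape:
# the file must parse as JSON and contain the three top-level arrays.
config_text = """
{
    "Sources": [ ],
    "Sinks": [ ],
    "Pipes": [ ]
}
"""
config = json.loads(config_text)
for key in ("Sources", "Sinks", "Pipes"):
    assert isinstance(config[key], list), f"missing or non-array {key}"
print("configuration shape OK")
```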

For more information about the Kinesis Agent for Windows source, pipe, and sink concepts, see [Amazon Kinesis Agent for Microsoft Windows Concepts](kinesis-agent-windows-concepts.md).

The following example is a complete `appsettings.json` configuration file that configures Kinesis Agent for Windows to stream Windows application log events to Firehose.

```
{
  "Sources": [
    {
      "LogName": "Application",
      "Id": "ApplicationLog",
      "SourceType": "WindowsEventLogSource"
    }
  ],
  "Sinks": [
    {
      "StreamName": "ApplicationLogFirehoseStream",
      "Region": "us-west-2",
      "Id": "MyKinesisFirehoseSink",
      "SinkType": "KinesisFirehose"
    }
  ],
  "Pipes": [
    {
      "Id": "ApplicationLogToFirehoseSink",
      "SourceRef": "ApplicationLog",
      "SinkRef": "MyKinesisFirehoseSink"
    }
  ]
}
```

For information about each kind of declaration, see the following sections:
+ [Source Declarations](source-object-declarations.md)
+ [Sink Declarations](sink-object-declarations.md)
+ [Pipe Declarations](pipe-object-declarations.md)

## Configuration Case Sensitivity
<a name="basic-configuration-structure-case"></a>

Keys and values in JSON-formatted files are typically case sensitive, so assume that all keys and values in Kinesis Agent for Windows configuration files are case sensitive unless otherwise noted. The following keys and values in the `appsettings.json` configuration file are exceptions and are not case sensitive:
+ The value of the `Format` key-value pair for sinks. For more information, see [Sink Declarations](sink-object-declarations.md).
+ The value of the `SourceType` key-value pair for sources, the `SinkType` key-value pair for sinks, and the `Type` key-value pair for pipes and plugins.
+ The value of `RecordParser` key-value pair for the `DirectorySource` source. For more information, see [DirectorySource Configuration](source-object-declarations.md#directory-source-configuration).
+ The value of the `InitialPosition` key-value pair for sources. For more information, see [Bookmark Configuration](source-object-declarations.md#advanced-source-configuration).
+ Prefixes for variable substitutions. For more information, see [Configuring Sink Variable Substitutions](sink-object-declarations.md#configuring-kinesis-agent-windows-sink-variable-substitution).

# Source Declarations
<a name="source-object-declarations"></a>

In Amazon Kinesis Agent for Microsoft Windows, *source declarations* describe where and what log, event, and metric data should be collected. They also optionally specify information for parsing that data so that it can be transformed. The following sections describe configurations for the built-in source types that are available in Kinesis Agent for Windows. Because Kinesis Agent for Windows is extensible, you can add custom source types. Each source type typically requires specific key-value pairs in the configuration objects that are relevant for that source type.

All source declarations must contain at least the following key-value pairs:

`Id`  
A unique string that identifies a particular source object within the configuration file.

`SourceType`  
The name of the source type for this source object. The source type specifies the origin of the log, event, or metric data that is being collected by this source object. It also controls what other aspects of the source can be declared.

For examples of complete configuration files that use different kinds of source declarations, see [Streaming from Various Sources to Kinesis Data Streams](configuring-kaw-examples.md#configuring-kaw-examples-sources). 

**Topics**
+ [DirectorySource Configuration](#directory-source-configuration)
+ [ExchangeLogSource Configuration](#exchange-source-configuration)
+ [W3SVCLogSource Configuration](#iis-source-configuration)
+ [UlsSource Configuration](#sharepoint-source-configuration)
+ [WindowsEventLogSource Configuration](#window-event-source-configuration)
+ [WindowsEventLogPollingSource Configuration](#eventlogpolling-source-configuration)
+ [WindowsETWEventSource Configuration](#etw-source-configuration)
+ [WindowsPerformanceCounterSource Configuration](#performance-counter-source-configuration)
+ [Kinesis Agent for Windows Built-In Metrics Source](#kinesis-agent-builin-metrics-source)
+ [List of Kinesis Agent for Windows Metrics](#kinesis-agent-metric-list)
+ [Bookmark Configuration](#advanced-source-configuration)

## DirectorySource Configuration
<a name="directory-source-configuration"></a>

### Overview
<a name="directory-source-configuration-overview"></a>

The `DirectorySource` source type gathers logs from files that are stored in the specified directory. Because log files come in many different formats, the `DirectorySource` declaration lets you specify the format of the data in the log file. Then you can transform the log contents to a standard format such as JSON or XML before streaming to various AWS services.

The following is an example `DirectorySource` declaration:

```
{
    "Id": "myLog",
    "SourceType": "DirectorySource",
    "Directory": "C:\\ProgramData\\MyCompany\\MyService\\logs",
    "FileNameFilter": "*.log",
    "IncludeSubdirectories": true,
    "IncludeDirectoryFilter": "cpu\\cpu-1;cpu\\cpu-2;load;memory",
    "RecordParser": "Timestamp",
    "TimestampFormat": "yyyy-MM-dd HH:mm:ss.ffff",
    "Pattern": "\\d{4}-\\d{2}-\\d{2}",
    "ExtractionPattern": "",
    "TimeZoneKind": "UTC",
    "SkipLines": 0,
    "Encoding": "utf-16",
    "ExtractionRegexOptions": "Multiline"
}
```

All `DirectorySource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"DirectorySource"` (required).

`Directory`  
The path to the directory containing the log files (required).

`FileNameFilter`  
Optionally limits the set of files in the directory from which log data is collected, based on a wildcard file-naming pattern. If you have multiple log file name patterns, this feature allows you to use a single `DirectorySource`, as shown in the following example.  

```
"FileNameFilter": "*.log|*.txt"
```
System administrators sometimes compress log files before archiving them. If you specify `"*.*"` in `FileNameFilter`, known compressed file types are excluded. This feature prevents `.zip`, `.gz`, and `.bz2` files from being streamed accidentally. If this key-value pair is not specified, data from all files in the directory is collected by default.

`IncludeSubdirectories`  
When `true`, specifies that subdirectories are monitored to an arbitrary depth (limited only by the operating system). This feature is useful for monitoring web servers with multiple websites. You can also use the `IncludeDirectoryFilter` key-value pair to monitor only certain subdirectories specified in the filter.

`RecordParser`  
Specifies how the `DirectorySource` source type should parse the log files that are found in the specified directory. This key-value pair is required, and the valid values are as follows:  
+ `SingleLine` — Each line of the log file is a log record.
+ `SingleLineJson` — Each line of the log file is a JSON-formatted log record. This parser is useful when you want to add additional key-value pairs to the JSON using object decoration. For more information, see [Configuring Sink Decorations](sink-object-declarations.md#configuring-kinesis-agent-windows-decoration-configuration). For an example that uses the `SingleLineJson` record parser, see [Tutorial: Stream JSON Log Files to Amazon S3 Using Kinesis Agent for Windows](directory-source-to-s3-tutorial.md).
+ `Timestamp` — One or more lines can include a log record. The log record starts with a timestamp. This option requires specifying the `TimestampFormat` key-value pair.
+ `Regex` — Each record starts with text that matches a particular regular expression. This option requires specifying the `Pattern` key-value pair.
+ `SysLog` — Indicates that the log file is written in the [syslog](https://en.wikipedia.org/wiki/Syslog) standard format. The log file is parsed into records based on that specification.
+ `Delimited` — A simpler version of the Regex record parser where data items in the log records are separated by a consistent delimiter. This option is easier to use and executes faster than the Regex parser, and it is preferred when this option is available. When using this option, you must specify the `Delimiter` key-value pair.

`TimestampField`  
Specifies which JSON field contains the timestamp for the record. This is only used with the `SingleLineJson` `RecordParser`. This key-value pair is optional. If it is not specified, Kinesis Agent for Windows uses the time when the record was read for the timestamp. One advantage of specifying this key-value pair is that latency statistics generated by Kinesis Agent for Windows are more accurate.

`TimestampFormat`  
Specifies how to parse the date and time associated with the record. The value is either the string `epoch` or a .NET date/time format string. If the value is `epoch`, time is parsed based on UNIX Epoch time. For more information about UNIX Epoch time, see [Unix time](https://en.wikipedia.org/wiki/Unix_time). For more information about .NET date/time format strings, see [Custom Date and Time Format Strings](https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings) in the Microsoft .NET documentation. This key-value pair is required only if the `Timestamp` record parser is specified, or if the `SingleLineJson` record parser is specified along with the `TimestampField` key-value pair. 

`Pattern`  
Specifies a regular expression that must match the first line of a potentially multi-line record. This key-value pair is only required for the `Regex` record parser. 

`ExtractionPattern`  
Specifies a regular expression that should use named groups. The record is parsed using this regular expression, and the named groups form the fields of the parsed record. These fields are then used as the basis for constructing JSON or XML objects or documents that are then streamed by sinks to various AWS services. This key-value pair is optional, and is available with the `Regex` and `Timestamp` record parsers.  
The `Timestamp` group name is specially processed, as it indicates to the `Regex` parser which field contains the date and time for each record in each log file.

`Delimiter`  
Specifies the character or string that separates each item in each log record. This key-value pair must be (and can only be) used with the `Delimited` record parser. Use the two-character sequence `\t` to represent the tab character.

`HeaderPattern`  
Specifies a regular expression for matching the line in the log file that contains the set of headers for the record. If the log file does not contain any header information, use the `Headers` key-value pair to specify the implicit headers. The `HeaderPattern` key-value pair is optional and only valid for the `Delimited` record parser.   
An empty (0 length) header entry for a column causes the data for that column to be filtered from the final output of the `DirectorySource` parsed output.

`Headers`  
Specifies the names for the columns of data parsed using the specified delimiter. This key-value pair is optional and only valid for the `Delimited` record parser.   
An empty (0 length) header entry for a column causes the data for that column to be filtered from the final output of the `DirectorySource` parsed output. 

`RecordPattern`  
Specifies a regular expression that identifies lines in the log file that contain record data. Other than the optional header line identified by `HeaderPattern`, lines that do not match the specified `RecordPattern` are ignored during record processing. This key-value pair is optional and only valid for the `Delimited` record parser. If it is not provided, the default is to consider any line that does not match the optional `HeaderPattern` or the optional `CommentPattern` to be a line that contains parseable record data.

`CommentPattern`  
Specifies a regular expression that identifies lines in the log file that should be excluded before parsing the data in the log file. This key-value pair is optional and only valid for the `Delimited` record parser. If it is not provided, the default is to consider any line that does not match the optional `HeaderPattern` to be a line that contains parseable record data, unless `RecordPattern` is specified.

`TimeZoneKind`  
Specifies whether timestamps in the log file should be interpreted as local time or as UTC. This key-value pair is optional and defaults to `UTC`; the only valid values are `Local` and `UTC`. When the value is `UTC` (or the key-value pair is not specified), timestamps are never altered. When the value is `Local`, timestamps are converted to UTC before the parsed record is sent to any sink, including CloudWatch Logs. Dates and times that are embedded in message text are not converted.

`SkipLines`  
When specified, controls the number of lines ignored at the start of each log file before record parsing occurs. This is optional, and the default value is 0.

`Encoding`  
By default, Kinesis Agent for Windows detects the encoding automatically from the byte order mark. However, automatic detection may not work correctly with some older Unicode formats. The following example specifies the encoding required to stream a Microsoft SQL Server log.  

```
"Encoding": "utf-16"
```
For a list of encoding names, see [List of encodings](https://docs.microsoft.com/en-us/dotnet/api/system.text.encoding?view=netframework-4.8#list-of-encodings) in Microsoft .NET documentation.

`ExtractionRegexOptions`  
You can use `ExtractionRegexOptions` to simplify regular expressions. This key-value pair is optional. The default is `"None"`.  
The following example specifies the `Multiline` option, which changes the meaning of `^` and `$` so that they match at the beginning and end of each line, rather than only at the beginning and end of the entire record.  

```
"ExtractionRegexOptions": "Multiline"
```
For a list of the possible values for `ExtractionRegexOptions`, see the [RegexOptions Enum](https://docs.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.regexoptions?view=netframework-4.7.2#fields) in the Microsoft .NET documentation.

### `Regex` Record Parser
<a name="directory-source-configuration-regex"></a>



You can parse unstructured text logs using the `Regex` record parser along with the `TimestampFormat`, `Pattern`, and `ExtractionPattern` key-value pairs. For example, suppose that your log file looks like the following:

```
[FATAL][2017/05/03 21:31:00.534][0x00003ca8][0000059c][][ActivationSubSystem][GetActivationForSystemID][0] 'ActivationException.File: EQCASLicensingSubSystem.cpp'
[FATAL][2017/05/03 21:31:00.535][0x00003ca8][0000059c][][ActivationSubSystem][GetActivationForSystemID][0] 'ActivationException.Line: 3999'
```

You can specify the following regular expression for the `Pattern` key-value pair to help break the log file into individual log records: 

```
^\[\w+\]\[(?<TimeStamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\]
```

This regular expression matches the following sequence:

1. The start of the string being evaluated.

1. One or more word characters surrounded by square brackets.

1. A timestamp surrounded by square brackets. The timestamp matches the following sequence:

   1. A four-digit year

   1. A forward slash

   1. A two-digit month

   1. A forward slash

   1. A two-digit day

   1. A space character

   1. A two-digit hour

   1. A colon

   1. A two-digit minute

   1. A colon

   1. A two-digit second

   1. A period

   1. A three-digit millisecond

You can specify the following format for the `TimestampFormat` key-value pair to convert the textual timestamp into a date and time:

```
yyyy/MM/dd HH:mm:ss.fff
```
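
As a rough cross-check outside the agent, the same timestamp can be parsed with an equivalent Python `strptime` pattern. This is an illustration only; the agent itself consumes the .NET format string shown above:

```python
from datetime import datetime

# Python equivalent of the .NET format "yyyy/MM/dd HH:mm:ss.fff".
# %f accepts one to six fractional-second digits, so ".534" parses as 534 ms.
parsed = datetime.strptime("2017/05/03 21:31:00.534", "%Y/%m/%d %H:%M:%S.%f")
print(parsed.isoformat())  # 2017-05-03T21:31:00.534000
```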

You can use the following regular expression for extracting the fields of the log record via the `ExtractionPattern` key-value pair.

```
^\[(?<Severity>\w+)\]\[(?<TimeStamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\]\[[^]]*\]\[[^]]*\]\[[^]]*\]\[(?<SubSystem>\w+)\]\[(?<Module>\w+)\]\[[^]]*\] '(?<Message>.*)'$
```

This regular expression matches the following groups in sequence:

1. `Severity` — One or more word characters surrounded by square brackets.

1. `TimeStamp` — See the previous description for the timestamp.

1. Three unnamed square bracketed sequences of zero or more characters are skipped.

1. `SubSystem` — One or more word characters surrounded by square brackets.

1. `Module` — One or more word characters surrounded by square brackets.

1. One unnamed square bracketed sequence of zero or more characters is skipped.

1. One unnamed space is skipped.

1. `Message` — Zero or more characters surrounded by single quotes.
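
You can exercise the extraction pattern outside the agent to confirm the named groups. The following sketch uses Python's `re` module; note that .NET's `(?<name>...)` group syntax becomes `(?P<name>...)` in Python, and the negated character class is written `[^\]]` here for clarity:

```python
import re

# One of the sample log lines from above.
line = ("[FATAL][2017/05/03 21:31:00.534][0x00003ca8][0000059c][]"
        "[ActivationSubSystem][GetActivationForSystemID][0] "
        "'ActivationException.File: EQCASLicensingSubSystem.cpp'")

# The extraction pattern, translated to Python's named-group syntax.
pattern = (r"^\[(?P<Severity>\w+)\]"
           r"\[(?P<TimeStamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\]"
           r"\[[^\]]*\]\[[^\]]*\]\[[^\]]*\]"
           r"\[(?P<SubSystem>\w+)\]\[(?P<Module>\w+)\]\[[^\]]*\] "
           r"'(?P<Message>.*)'$")

match = re.match(pattern, line)
print(match.group("Severity"), match.group("Module"))
```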

The following source declaration combines these regular expressions and the date time format to provide the complete instructions to Kinesis Agent for Windows for parsing this kind of log file.

```
{
    "Id": "PrintLog",
    "SourceType": "DirectorySource",
    "Directory": "C:\\temp\\PrintLogTest",
    "FileNameFilter": "*.log",
    "RecordParser": "Regex",
    "TimestampFormat": "yyyy/MM/dd HH:mm:ss.fff",
    "Pattern": "^\\[\\w+\\]\\[(?<TimeStamp>\\d{4}/\\d{2}/\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3})\\]",
    "ExtractionPattern": "^\\[(?<Severity>\\w+)\\]\\[(?<TimeStamp>\\d{4}/\\d{2}/\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3})\\]\\[[^]]*\\]\\[[^]]*\\]\\[[^]]*\\]\\[(?<SubSystem>\\w+)\\]\\[(?<Module>\\w+)\\]\\[[^]]*\\] '(?<Message>.*)'$",
    "TimeZoneKind": "UTC"
}
```

**Note**  
Backslashes in JSON-formatted files must be escaped with an additional backslash.

For more information about regular expressions, see [Regular Expression Language - Quick Reference](https://docs.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-language-quick-reference) in the Microsoft .NET documentation.

### `Delimited` Record Parser
<a name="directory-source-configuration-delimited"></a>

You can use the `Delimited` record parser to parse semistructured log and data files where there is a consistent character sequence separating each column of data in each row of data. For example, CSV files use a comma to separate each column of data, and TSV files use a tab.

Suppose that you want to parse a Microsoft [NPS Database Format](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc771748(v=ws.10)) log file produced by a Network Policy Server. Such a file might look like the following:

```
"NPS-MASTER","IAS",03/22/2018,23:07:55,1,"user1","Domain1\user1",,,,,,,,0,"192.168.86.137","Nate - Test 1",,,,,,,1,,0,"311 1 192.168.0.213 03/15/2018 08:14:29 1",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"Use Windows authentication for all users",1,,,,
"NPS-MASTER","IAS",03/22/2018,23:07:55,3,,"Domain1\user1",,,,,,,,0,"192.168.86.137","Nate - Test 1",,,,,,,1,,16,"311 1 192.168.0.213 03/15/2018 08:14:29 1",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"Use Windows authentication for all users",1,,,,
```

The following example `appsettings.json` configuration file includes a `DirectorySource` declaration that uses the `Delimited` record parser to parse this text into an object representation. It then streams JSON-formatted data to Firehose:

```
{
    "Sources": [
        {
            "Id": "NPS",
            "SourceType": "DirectorySource",
            "Directory": "C:\\temp\\NPS",
            "FileNameFilter": "*.log",
            "RecordParser": "Delimited",
            "Delimiter": ",",
            "Headers": "ComputerName,ServiceName,Record-Date,Record-Time,Packet-Type,User-Name,Fully-Qualified-Distinguished-Name,Called-Station-ID,Calling-Station-ID,Callback-Number,Framed-IP-Address,NAS-Identifier,NAS-IP-Address,NAS-Port,Client-Vendor,Client-IP-Address,Client-Friendly-Name,Event-Timestamp,Port-Limit,NAS-Port-Type,Connect-Info,Framed-Protocol,Service-Type,Authentication-Type,Policy-Name,Reason-Code,Class,Session-Timeout,Idle-Timeout,Termination-Action,EAP-Friendly-Name,Acct-Status-Type,Acct-Delay-Time,Acct-Input-Octets,Acct-Output-Octets,Acct-Session-Id,Acct-Authentic,Acct-Session-Time,Acct-Input-Packets,Acct-Output-Packets,Acct-Terminate-Cause,Acct-Multi-Ssn-ID,Acct-Link-Count,Acct-Interim-Interval,Tunnel-Type,Tunnel-Medium-Type,Tunnel-Client-Endpt,Tunnel-Server-Endpt,Acct-Tunnel-Conn,Tunnel-Pvt-Group-ID,Tunnel-Assignment-ID,Tunnel-Preference,MS-Acct-Auth-Type,MS-Acct-EAP-Type,MS-RAS-Version,MS-RAS-Vendor,MS-CHAP-Error,MS-CHAP-Domain,MS-MPPE-Encryption-Types,MS-MPPE-Encryption-Policy,Proxy-Policy-Name,Provider-Type,Provider-Name,Remote-Server-Address,MS-RAS-Client-Name,MS-RAS-Client-Version",
            "TimestampField": "{Record-Date} {Record-Time}",
            "TimestampFormat": "MM/dd/yyyy HH:mm:ss"
        }
    ],
    "Sinks": [
        {
            "Id": "npslogtest",
            "SinkType": "KinesisFirehose",
            "Region": "us-west-2",
            "StreamName": "npslogtest",
            "Format": "json"
        }
    ],
    "Pipes": [
        {
            "Id": "NPSLogToFirehose",
            "SourceRef": "NPS",
            "SinkRef": "npslogtest"
        }
    ]
}
```

JSON-formatted data streamed to Firehose looks like the following:

```
{
    "ComputerName": "NPS-MASTER",
    "ServiceName": "IAS",
    "Record-Date": "03/22/2018",
    "Record-Time": "23:07:55",
    "Packet-Type": "1",
    "User-Name": "user1",
    "Fully-Qualified-Distinguished-Name": "Domain1\\user1",
    "Called-Station-ID": "",
    "Calling-Station-ID": "",
    "Callback-Number": "",
    "Framed-IP-Address": "",
    "NAS-Identifier": "",
    "NAS-IP-Address": "",
    "NAS-Port": "",
    "Client-Vendor": "0",
    "Client-IP-Address": "192.168.86.137",
    "Client-Friendly-Name": "Nate - Test 1",
    "Event-Timestamp": "",
    "Port-Limit": "",
    "NAS-Port-Type": "",
    "Connect-Info": "",
    "Framed-Protocol": "",
    "Service-Type": "",
    "Authentication-Type": "1",
    "Policy-Name": "",
    "Reason-Code": "0",
    "Class": "311 1 192.168.0.213 03/15/2018 08:14:29 1",
    "Session-Timeout": "",
    "Idle-Timeout": "",
    "Termination-Action": "",
    "EAP-Friendly-Name": "",
    "Acct-Status-Type": "",
    "Acct-Delay-Time": "",
    "Acct-Input-Octets": "",
    "Acct-Output-Octets": "",
    "Acct-Session-Id": "",
    "Acct-Authentic": "",
    "Acct-Session-Time": "",
    "Acct-Input-Packets": "",
    "Acct-Output-Packets": "",
    "Acct-Terminate-Cause": "",
    "Acct-Multi-Ssn-ID": "",
    "Acct-Link-Count": "",
    "Acct-Interim-Interval": "",
    "Tunnel-Type": "",
    "Tunnel-Medium-Type": "",
    "Tunnel-Client-Endpt": "",
    "Tunnel-Server-Endpt": "",
    "Acct-Tunnel-Conn": "",
    "Tunnel-Pvt-Group-ID": "",
    "Tunnel-Assignment-ID": "",
    "Tunnel-Preference": "",
    "MS-Acct-Auth-Type": "",
    "MS-Acct-EAP-Type": "",
    "MS-RAS-Version": "",
    "MS-RAS-Vendor": "",
    "MS-CHAP-Error": "",
    "MS-CHAP-Domain": "",
    "MS-MPPE-Encryption-Types": "",
    "MS-MPPE-Encryption-Policy": "",
    "Proxy-Policy-Name": "Use Windows authentication for all users",
    "Provider-Type": "1",
    "Provider-Name": "",
    "Remote-Server-Address": "",
    "MS-RAS-Client-Name": "",
    "MS-RAS-Client-Version": ""
}
```
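
The `Delimited` parsing shown above can be approximated outside the agent with Python's `csv` module, which handles the quoted, comma-separated NPS fields. This verification sketch uses a truncated copy of the sample line and only the first few header names from the configuration:

```python
import csv

# Truncated copy of the first NPS sample record (first 17 columns).
line = ('"NPS-MASTER","IAS",03/22/2018,23:07:55,1,"user1",'
        '"Domain1\\user1",,,,,,,,0,"192.168.86.137","Nate - Test 1"')

# The first few of the Headers names from the configuration above.
headers = ["ComputerName", "ServiceName", "Record-Date", "Record-Time",
           "Packet-Type", "User-Name"]

row = next(csv.reader([line]))          # splits on commas, unquoting fields
record = dict(zip(headers, row))        # pair columns with header names
print(record["ComputerName"], record["User-Name"])
```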

### `SysLog` Record Parser
<a name="directory-source-configuration-syslog"></a>

For the `SysLog` record parser, the parsed output from the source includes the following information: 


| Attribute | Type | Description | 
| --- | --- | --- | 
| SysLogTimeStamp | String | The original date and time from the syslog-formatted log file. | 
| Hostname | String | The name of the computer where the syslog-formatted log file resides. | 
| Program | String | The name of the application or service that generated the log file. | 
| Message | String | The log message generated by the application or service. | 
| TimeStamp | String | The parsed date and time in ISO 8601 format. | 

The following is an example of SysLog data transformed into JSON:

```
{
    "SysLogTimeStamp": "Jun 18 01:34:56",
    "Hostname": "myhost1.example.mydomain.com",
    "Program": "mymailservice:",
    "Message": "Info: ICID 123456789 close",
    "TimeStamp": "2017-06-18T01:34:56.000"
}
```
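
As an illustration of the fields in the table above, a simplified BSD-syslog line can be split with a regular expression. This is a rough sketch only; the agent's actual parser follows the full syslog specification:

```python
import re

line = ("Jun 18 01:34:56 myhost1.example.mydomain.com "
        "mymailservice: Info: ICID 123456789 close")

# Simplified shape: timestamp, hostname, program tag, then the message.
syslog_re = re.compile(
    r"^(?P<SysLogTimeStamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<Hostname>\S+) (?P<Program>\S+) (?P<Message>.*)$"
)
fields = syslog_re.match(line).groupdict()
print(fields["Hostname"], fields["Program"])
```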

### Summary
<a name="directory-source-configuration-summary"></a>

The following is a summary of the key-value pairs available for the `DirectorySource` source type and the record parsers to which they apply.


| Key Name | RecordParser | Notes | 
| --- | --- | --- | 
| SourceType | Required for all | Must have the value DirectorySource | 
| Directory | Required for all |  | 
| FileNameFilter | Optional for all |  | 
| RecordParser | Required for all |  | 
| TimestampField | Optional for SingleLineJson |  | 
| TimestampFormat | Required for Timestamp, and required for SingleLineJson if TimestampField is specified |  | 
| Pattern | Required for Regex |  | 
| ExtractionPattern | Optional for Regex | Required for Regex if sink specifies json or xml format | 
| Delimiter | Required for Delimited |  | 
| HeaderPattern | Optional for Delimited |  | 
| Headers | Optional for Delimited |  | 
| RecordPattern | Optional for Delimited |  | 
| CommentPattern | Optional for Delimited |  | 
| TimeZoneKind | Optional for Regex, Timestamp, SysLog, and SingleLineJson when a timestamp field is identified |  | 
| SkipLines | Optional for all |  | 

## ExchangeLogSource Configuration
<a name="exchange-source-configuration"></a>

 The `ExchangeLogSource` type is used to collect logs from Microsoft Exchange. Exchange produces logs in several different kinds of log formats. This source type parses all of them. Although it is possible to parse them using the `DirectorySource` type with the `Regex` record parser, it is much simpler to use the `ExchangeLogSource`. This is because you don't need to design and provide regular expressions for the log file formats. The following is an example `ExchangeLogSource` declaration: 

```
{
   "Id": "MyExchangeLog",
   "SourceType": "ExchangeLogSource",
   "Directory": "C:\\temp\\ExchangeLogTest",
   "FileNameFilter": "*.log"
}
```

All `ExchangeLogSource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"ExchangeLogSource"` (required).

`Directory`  
The path to the directory containing the log files (required).

`FileNameFilter`  
Optionally limits the set of files in the directory where log data is collected based on a wildcard file-naming pattern. If this key-value pair is not specified, then by default, log data from all files in the directory is collected.

`TimestampField`  
The name of the column containing the date and time for the record. This key-value pair is optional if the field name is `date-time` or `DateTime`; otherwise, it is required.

## W3SVCLogSource Configuration
<a name="iis-source-configuration"></a>

 The `W3SVCLogSource` type is used to collect logs from Internet Information Services (IIS) for Windows. 

The following is an example `W3SVCLogSource` declaration: 

```
{
   "Id": "MyW3SVCLog",
   "SourceType": "W3SVCLogSource",
   "Directory": "C:\\inetpub\\logs\\LogFiles\\W3SVC1",
   "FileNameFilter": "*.log"
}
```

All `W3SVCLogSource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"W3SVCLogSource"` (required).

`Directory`  
The path to the directory containing the log files (required).

`FileNameFilter`  
Optionally limits the set of files in the directory where log data is collected based on a wildcard file-naming pattern. If this key-value pair is not specified, then by default, log data from all files in the directory is collected.

## UlsSource Configuration
<a name="sharepoint-source-configuration"></a>

 The `UlsSource` type is used to collect logs from Microsoft SharePoint. The following is an example `UlsSource` declaration: 

```
{
    "Id": "UlsSource",
    "SourceType": "UlsSource",
    "Directory": "C:\\temp\\uls",
    "FileNameFilter": "*.log"
}
```

All `UlsSource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"UlsSource"` (required).

`Directory`  
The path to the directory containing the log files (required).

`FileNameFilter`  
Optionally limits the set of files in the directory where log data is collected based on a wildcard file-naming pattern. If this key-value pair is not specified, then by default, log data from all files in the directory is collected.

## WindowsEventLogSource Configuration
<a name="window-event-source-configuration"></a>

The `WindowsEventLogSource` type is used to collect events from the Windows Event Log service. The following is an example `WindowsEventLogSource` declaration: 

```
{
    "Id": "mySecurityLog",
    "SourceType": "WindowsEventLogSource",
    "LogName": "Security"
}
```

All `WindowsEventLogSource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"WindowsEventLogSource"` (required).

`LogName`  
Events are collected from the specified log. Common values include `Application`, `Security`, and `System`, but you can specify any valid Windows event log name. This key-value pair is required.

`Query`  
Optionally limits what events are output from the `WindowsEventLogSource`. If this key-value pair is not specified, then by default, all events are output. For information about the syntax of this value, see [Event Queries and Event XML](https://msdn.microsoft.com/en-us/library/bb399427(v=vs.90).aspx) in the Windows documentation. For information about log level definitions, see [Event Types](https://docs.microsoft.com/en-us/windows/desktop/eventlog/event-types) in the Windows documentation.
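
For example, an event query in XPath form can restrict the source to critical and error events. The following sketch shows such a declaration (the `Id` and `Query` values are illustrative; see the Windows documentation linked above for the full query syntax) and confirms that it is valid JSON:

```python
import json

# Hypothetical source declaration: the Query value uses the Windows event
# query XPath form to keep only Critical (Level=1) and Error (Level=2)
# events from the Security log.
declaration = json.loads("""
{
    "Id": "mySecurityErrors",
    "SourceType": "WindowsEventLogSource",
    "LogName": "Security",
    "Query": "*[System[(Level=1 or Level=2)]]"
}
""")
print(declaration["Query"])
```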

`IncludeEventData`  
Optionally enables the collection and streaming of provider-specific event data associated with events from the specified Windows event log when the value of this key-value pair is `"true"`. Only event data that can be successfully serialized is included. This key-value pair is optional, and if it is not specified, the provider-specific event data is not collected.  
Including event data could significantly increase the amount of data streamed from this source. The maximum size of an event can be 262,143 bytes with event data included.

The parsed output from the `WindowsEventLogSource` contains the following information:


| Attribute | Type | Description | 
| --- | --- | --- | 
| EventId | Int | The identifier of the type of event. | 
| Description | String | Text that describes the details of the event. | 
| LevelDisplayName | String | The category of event (one of Error, Warning, Information, Success Audit, Failure Audit). | 
| LogName | String | Where the event was recorded (typical values are Application, Security, and System, but there are many possibilities). | 
| MachineName | String | Which computer recorded the event. | 
| ProviderName | String | Which application or service recorded the event. | 
| TimeCreated | String | When the event occurred in ISO 8601 format. | 
| Index | Int | Where the entry is located in the log. | 
| UserName | String | Who made the entry if known. | 
| Keywords | String | The type of event. Standard values include AuditFailure (failed security audit events), AuditSuccess (successful security audit events), Classic (events raised with the RaiseEvent function), Correlation Hint (transfer events), SQM (Service Quality Mechanism events), WDI Context (Windows Diagnostic Infrastructure context events), and WDI Diag (Windows Diagnostic Infrastructure diagnostics events).  | 
| EventData | List of objects | Optional provider-specific extra data about the log event. This is only included if the value for the IncludeEventData key-value pair is "true". | 

The following is an example event transformed into JSON:

```
{
    "EventId": 7036,
    "Description": "The Amazon SSM Agent service entered the stopped state.",
    "LevelDisplayName": "Informational",
    "LogName": "System",
    "MachineName": "mymachine.mycompany.com",
    "ProviderName": "Service Control Manager",
    "TimeCreated": "2017-10-04T16:42:53.8921205Z",
    "Index": 462335,
    "UserName": null,
    "Keywords": "Classic",
    "EventData": [
        "Amazon SSM Agent",
        "stopped",
        "rPctBAMZFhYubF8zVLcrBd3bTTcNzHvY5Jc2Br0aMrxxx=="
    ]
}
```

## WindowsEventLogPollingSource Configuration
<a name="eventlogpolling-source-configuration"></a>

`WindowsEventLogPollingSource` uses a polling-based mechanism to gather all new events from the event log that match the configured parameters. The polling interval is updated dynamically between 100 ms and 5000 ms depending on how many events were gathered during the last poll. The following is an example `WindowsEventLogPollingSource` declaration:

```
{
    "Id": "MySecurityLog",
    "SourceType": "WindowsEventLogPollingSource",
    "LogName": "Security",
    "IncludeEventData": "true",
    "Query": "",
    "CustomFilters": "ExcludeOwnSecurityEvents"
}
```

All `WindowsEventLogPollingSource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"WindowsEventLogPollingSource"` (required).

`LogName`  
Specifies the log to collect events from. Common values are `Application`, `Security`, and `System`, but any valid Windows event log name can be specified.

`IncludeEventData`  
Optional. When `true`, extra `EventData` is included when events are streamed as JSON or XML. The default is `false`.

`Query`  
Optional. Windows event logs support querying events using XPath expressions, which you can specify using `Query`. For more information, see [Event Queries and Event XML](https://docs.microsoft.com/en-us/previous-versions/bb399427(v=vs.90)) in Microsoft documentation.

`CustomFilters`  
Optional. A list of filters separated by semicolons (`;`). The following filter can be specified.    
`ExcludeOwnSecurityEvents`  
Excludes security events generated by Kinesis Agent for Windows itself.

## WindowsETWEventSource Configuration
<a name="etw-source-configuration"></a>

 The `WindowsETWEventSource` type is used to collect application and service event traces using a feature named Event Tracing for Windows (ETW). For more information, see [Event Tracing](https://docs.microsoft.com/en-us/windows/desktop/etw/event-tracing-portal) in the Windows documentation.

The following is an example `WindowsETWEventSource` declaration:

```
{
    "Id": "ClrETWEventSource",
    "SourceType": "WindowsETWEventSource",
    "ProviderName": "Microsoft-Windows-DotNETRuntime",
    "TraceLevel": "Verbose",
    "MatchAnyKeyword": 32768
}
```

All `WindowsETWEventSource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"WindowsETWEventSource"` (required).

`ProviderName`  
Specifies which event provider to use to collect trace events. This must be a valid ETW provider name for an installed provider. To determine which providers are installed, execute the following in a Windows command prompt window:  

```
logman query providers
```

`TraceLevel`  
Specifies what categories of trace events should be collected. Allowed values include `Critical`, `Error`, `Warning`, `Informational`, and `Verbose`. The exact meaning depends on the ETW provider that is selected.

`MatchAnyKeyword`  
This value is a 64-bit number, in which each bit represents an individual keyword. Each keyword describes a category of events to be collected. For the supported keywords and their values, and how they relate to `TraceLevel`, see the documentation for that provider. For example, for information about the CLR ETW provider, see [CLR ETW Keywords and Levels](https://docs.microsoft.com/en-us/dotnet/framework/performance/clr-etw-keywords-and-levels) in the Microsoft .NET Framework documentation.   
In the previous example, 32768 (0x00008000) represents the `ExceptionKeyword` for the CLR ETW provider that instructs the provider to collect information about exceptions thrown. Although JSON doesn't natively support hex constants, you can specify them for `MatchAnyKeyword` by placing them in a string. You can also specify several constants separated by commas. For example, use the following to specify both the `ExceptionKeyword` and `SecurityKeyword` (0x00000400):  

```
{
   "Id": "MyClrETWEventSource",
   "SourceType": "WindowsETWEventSource",
   "ProviderName": "Microsoft-Windows-DotNETRuntime",
   "TraceLevel": "Verbose",
   "MatchAnyKeyword": "0x00008000, 0x00000400"
}
```
To ensure that all specified keywords are enabled for a provider, multiple keyword values are combined using OR and passed to that provider.

The output from the `WindowsETWEventSource` contains the following information for each event:


| Attribute | Type | Description | 
| --- | --- | --- | 
| EventName | String | What kind of event occurred. | 
| ProviderName | String | Which provider detected the event. | 
| FormattedMessage | String | A textual summary of the event. | 
| ProcessID | Int | Which process reported the event. | 
| ExecutingThreadID | Int | Which thread within the process reported the event. | 
| MachineName | String | The name of the desktop or server that is reporting the event. | 
| Payload | Hashtable | A table with a string key and any kind of object as a value. The key is the payload item name, and the value is the payload item's value. The payload is provider dependent. | 

The following is an example event transformed into JSON:

```
{ 
     "EventName": "Exception/Start", 
     "ProviderName": "Microsoft-Windows-DotNETRuntime", 
     "FormattedMessage": "ExceptionType=System.Exception;\r\nExceptionMessage=Intentionally unhandled exception.;\r\nExceptionEIP=0x2ab0499;\r\nExceptionHRESULT=-2,146,233,088;\r\nExceptionFlags=CLSCompliant;\r\nClrInstanceID=9 ",
     "ProcessID": 3328, 
     "ExecutingThreadID": 6172, 
     "MachineName": "MyHost.MyCompany.com", 
     "Payload": 
      { 
        "ExceptionType": "System.Exception", 
        "ExceptionMessage": "Intentionally unhandled exception.", 
        "ExceptionEIP": 44762265, 
        "ExceptionHRESULT": -2146233088, 
        "ExceptionFlags": 16, 
        "ClrInstanceID": 9 
      } 
}
```

## WindowsPerformanceCounterSource Configuration
<a name="performance-counter-source-configuration"></a>

 The `WindowsPerformanceCounterSource` type collects performance counter metrics from Windows. The following is an example `WindowsPerformanceCounterSource` declaration: 

```
{
	"Id": "MyPerformanceCounter",
	"SourceType": "WindowsPerformanceCounterSource",
	"Categories": [{
			"Category": "Server",
			"Counters": ["Files Open", "Logon Total", "Logon/sec", "Pool Nonpaged Bytes"]
		},
		{
			"Category": "System",
			"Counters": ["Processes", "Processor Queue Length", "System Up Time"]
		},
		{
			"Category": "LogicalDisk",
			"Instances": "*",
			"Counters": [
				"% Free Space", "Avg. Disk Queue Length",
				{
					"Counter": "Disk Reads/sec",
					"Unit": "Count/Second"
				},
				"Disk Writes/sec"
			]
		},
		{
			"Category": "Network Adapter",
			"Instances": "^Local Area Connection\* \d$",
			"Counters": ["Bytes Received/sec", "Bytes Sent/sec"]
		}
	]
}
```

All `WindowsPerformanceCounterSource` declarations can provide the following key-value pairs:

`SourceType`  
Must be the literal string `"WindowsPerformanceCounterSource"` (required).

`Categories`  
Specifies a set of performance counter metric groups to gather from Windows. Each metric group contains the following key-value pairs:    
`Category`  
Specifies the counter set of metrics to be collected (required).  
`Instances`  
Specifies the set of objects of interest when there is a unique set of performance counters per object. For example, when the category is `LogicalDisk`, there is a set of performance counters per disk drive. This key-value pair is optional. You can use the wildcards `*` and `?` to match multiple instances. To aggregate values across all instances, specify `_Total`.  
You can also use `InstanceRegex`, which accepts regular expressions that contain the `*` wildcard character as part of the instance name.  
`Counters`  
Specifies which metrics to gather for the specified category. This key-value pair is required. You can use the wildcards `*` and `?` to match multiple counters. You can specify a counter using only its name, or using its name along with a unit and an optional replacement name. If counter units are not specified, Kinesis Agent for Windows attempts to infer the units from the name. If those inferences are incorrect, explicitly specify the unit. The more complex representation of a counter is an object with the following key-value pairs:    
`Counter`  
The name of the counter. This key-value pair is required.  
`Rename`  
The name of the counter to present to the sink. This key-value pair is optional.  
`Unit`  
The meaning of the value that is associated with the counter. For a complete list of valid unit names, see the unit documentation in [MetricDatum](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html) in the *Amazon CloudWatch API Reference*.
The following is an example of a complex counter specification:  

```
{
   "Counter": "Disk Reads/sec",
   "Rename": "Disk Reads per second",
   "Unit": "Count/Second"
}
```
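
Similarly, the following hypothetical metric group uses `_Total` to aggregate counter values across all logical disks rather than reporting one value per drive:

```
{
    "Category": "LogicalDisk",
    "Instances": "_Total",
    "Counters": ["% Free Space", "Disk Reads/sec"]
}
```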

`WindowsPerformanceCounterSource` can only be used with a pipe that specifies an Amazon CloudWatch sink. Use a separate sink if Kinesis Agent for Windows built-in metrics are also streamed to CloudWatch. Examine the Kinesis Agent for Windows log after service startup to determine what units have been inferred for counters when units have not been specified in the `WindowsPerformanceCounterSource` declarations. Use PowerShell to determine the valid names for categories, instances, and counters. 
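
As a sketch of the complete wiring, the following `appsettings.json` fragment connects a performance counter source to a CloudWatch sink through a pipe. The sink `Id`, `Namespace` value, and Region are illustrative assumptions:

```
{
    "Sources": [{
        "Id": "MyPerformanceCounter",
        "SourceType": "WindowsPerformanceCounterSource",
        "Categories": [{
            "Category": "System",
            "Counters": ["Processes", "Processor Queue Length"]
        }]
    }],
    "Sinks": [{
        "Id": "MyCloudWatchSink",
        "SinkType": "CloudWatch",
        "Region": "us-west-2",
        "Namespace": "MyServerMetrics"
    }],
    "Pipes": [{
        "Id": "PerformanceCounterToCloudWatch",
        "SourceRef": "MyPerformanceCounter",
        "SinkRef": "MyCloudWatchSink"
    }]
}
```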

To see information about all categories, including counters associated with counter sets, execute this command in a PowerShell window:

```
Get-Counter -ListSet * | Sort-Object
```

To determine what instances are available for each of the counters in the counter set, execute a command similar to the following example in a PowerShell window:

```
Get-Counter -Counter "\Process(*)\% Processor Time"
```

The value of the `Counter` parameter should be one of the paths from a `PathsWithInstances` member listed by the previous `Get-Counter -ListSet` command invocation.

## Kinesis Agent for Windows Built-In Metrics Source
<a name="kinesis-agent-builin-metrics-source"></a>

In addition to ordinary metrics sources such as the `WindowsPerformanceCounterSource` type (see [WindowsPerformanceCounterSource Configuration](#performance-counter-source-configuration)), the CloudWatch sink type can receive metrics from a special source that gathers metrics about Kinesis Agent for Windows itself. Kinesis Agent for Windows metrics are also available in the `KinesisTap` category of Windows performance counters. 

The `MetricsFilter` key-value pair for the CloudWatch sink declarations specifies which metrics are streamed to CloudWatch from the built-in Kinesis Agent for Windows metrics source. The value is a string that contains one or more filter expressions separated by semicolons; for example:

`"MetricsFilter": "`*FilterExpression1*`;`*FilterExpression2*`"`

A metric that matches one or more filter expressions is streamed to CloudWatch.

Single instance metrics are global in nature and not tied to a particular source or sink. Multiple instance metrics are dimensional based on the source or sink declaration `Id`. Each source or sink type can have a different set of metrics.

For a list of built-in Kinesis Agent for Windows metric names, see [List of Kinesis Agent for Windows Metrics](#kinesis-agent-metric-list).

For single instance metrics, the filter expression is the name of the metric; for example:

```
"MetricsFilter": "SourcesFailedToStart;SinksFailedToStart"
```

For multiple instance metrics, the filter expression is the name of the metric, a period (`.`), and then the `Id` of the source or sink declaration that generated that metric. For example, assuming there is a sink declaration with an `Id` of `MyFirehose`:

```
"MetricsFilter": "KinesisFirehoseRecordsFailedNonrecoverable.MyFirehose"      
```

You can use special wildcard patterns that are designed to distinguish between single and multiple instance metrics.
+ Asterisk (`*`) matches zero or more characters except period (`.`).
+ Question mark (`?`) matches one character except period.
+ Any other character only matches itself.
+ `_Total` is a special token that causes the aggregation of all matching multiple instance values across the dimension.

The following example matches all single instance metrics:

```
"MetricsFilter": "*"
```

Because an asterisk does not match the period character, only single instance metrics are included.

The following example matches all multiple instance metrics:

```
"MetricsFilter": "*.*"
```

The following example matches all metrics (single and multiple):

```
"MetricsFilter": "*;*.*"
```

The following example aggregates all multiple instance metrics across all sources and sinks:

```
"MetricsFilter": "*._Total"
```

The following example aggregates all Firehose metrics for all Firehose sinks:

```
"MetricsFilter": "*Firehose*._Total"
```

The following example matches all single and multiple instance error metrics:

```
"MetricsFilter": "*Failed*;*Error*.*;*Failed*.*"
```

The following example matches all non-recoverable error metrics aggregated across all sources and sinks:

```
"MetricsFilter": "*Nonrecoverable*._Total"
```
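
Putting these filter expressions in context, the following hypothetical CloudWatch sink declaration streams all single instance metrics plus aggregated non-recoverable error metrics; the `Namespace` value and Region are illustrative assumptions:

```
{
    "Id": "MyAgentMetricsSink",
    "SinkType": "CloudWatch",
    "Region": "us-west-2",
    "Namespace": "KinesisTapMonitoring",
    "MetricsFilter": "*;*Nonrecoverable*._Total"
}
```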



For information about how to specify a pipe that uses the Kinesis Agent for Windows built-in metric source, see [Configuring Kinesis Agent for Windows Metric Pipes](pipe-object-declarations.md#kinesis-agent-metric-pipe-configuration).

## List of Kinesis Agent for Windows Metrics
<a name="kinesis-agent-metric-list"></a>

The following is a list of single instance and multiple instance metrics that are available for Kinesis Agent for Windows.

### Single Instance Metrics
<a name="single-instance-metrics"></a>

The following single instance metrics are available:

`KinesisTapBuildNumber`  
The version number of Kinesis Agent for Windows.

`PipesConnected`  
How many pipes have connected their source to their sink successfully.

`PipesFailedToConnect`  
How many pipes have connected their source to their sink unsuccessfully.

`SinkFactoriesFailedToLoad`  
How many sink types did not load into Kinesis Agent for Windows successfully.

`SinkFactoriesLoaded`  
How many sink types loaded into Kinesis Agent for Windows successfully.

`SinksFailedToStart`  
How many sinks did not begin successfully, usually due to incorrect sink declarations.

`SinksStarted`  
How many sinks began successfully.

`SourcesFailedToStart`  
How many sources did not begin successfully, usually due to incorrect source declarations.

`SourcesStarted`  
How many sources began successfully.

`SourceFactoriesFailedToLoad`  
How many source types did not load into Kinesis Agent for Windows successfully.

`SourceFactoriesLoaded`  
How many source types loaded successfully into Kinesis Agent for Windows.

### Multiple Instance Metrics
<a name="multiple-instance-metrics"></a>

The following multiple instance metrics are available:

#### DirectorySource Metrics
<a name="directory-source-metrics"></a>

`DirectorySourceBytesRead`  
How many bytes were read during the interval for this `DirectorySource`.

`DirectorySourceBytesToRead`  
The number of bytes known to be available that Kinesis Agent for Windows has not yet read for this `DirectorySource`.

`DirectorySourceFilesToProcess`  
The number of known files that Kinesis Agent for Windows has not yet examined.

`DirectorySourceRecordsRead`  
How many records have been read during the interval for this `DirectorySource`.

#### WindowsEventLogSource Metrics
<a name="windows-event-log-source-metrics"></a>

`EventLogSourceEventsError`  
How many Windows event log events were not read successfully.

`EventLogSourceEventsRead`  
How many Windows event log events were read successfully.

#### KinesisFirehose Sink Metrics
<a name="kinesis-firehose-sink-metrics"></a>

`KinesisFirehoseBytesAccepted`  
How many bytes were accepted during the interval.

`KinesisFirehoseClientLatency`  
How much time passed between record generation and record streaming to the Firehose service.

`KinesisFirehoseLatency`  
How much time passed between the start and end of record streaming for the Firehose service.

`KinesisFirehoseNonrecoverableServiceErrors`  
How many times records could not be sent without error to the Firehose service despite retries.

`KinesisFirehoseRecordsAttempted`  
How many records Kinesis Agent for Windows attempted to stream to the Firehose service.

`KinesisFirehoseRecordsFailedNonrecoverable`  
How many records were not successfully streamed to the Firehose service despite retries.

`KinesisFirehoseRecordsFailedRecoverable`  
How many records were successfully streamed to the Firehose service, but only with retries.

`KinesisFirehoseRecordsSuccess`  
How many records were successfully streamed to the Firehose service without retries.

`KinesisFirehoseRecoverableServiceErrors`  
How many times records could successfully be sent to the Firehose service, but only with retries.

#### KinesisStream Metrics
<a name="kinesis-stream-metrics"></a>

`KinesisStreamBytesAccepted`  
How many bytes were accepted during the interval.

`KinesisStreamClientLatency`  
How much time passed between record generation and record streaming to the Kinesis Data Streams service.

`KinesisStreamLatency`  
How much time passed between the start and end of record streaming for the Kinesis Data Streams service.

`KinesisStreamNonrecoverableServiceErrors`  
How many times records could not be sent without error to the Kinesis Data Streams service despite retries.

`KinesisStreamRecordsAttempted`  
How many records Kinesis Agent for Windows attempted to stream to the Kinesis Data Streams service.

`KinesisStreamRecordsFailedNonrecoverable`  
How many records were not successfully streamed to the Kinesis Data Streams service despite retries.

`KinesisStreamRecordsFailedRecoverable`  
How many records were successfully streamed to the Kinesis Data Streams service, but only with retries.

`KinesisStreamRecordsSuccess`  
How many records were successfully streamed to the Kinesis Data Streams service without retries.

`KinesisStreamRecoverableServiceErrors`  
How many times records could successfully be sent to the Kinesis Data Streams service, but only with retries.

#### CloudWatchLog Metrics
<a name="cloud-watch-log-metrics"></a>

`CloudWatchLogBytesAccepted`  
How many bytes were accepted during the interval.

`CloudWatchLogClientLatency`  
How much time passed between record generation and record streaming to the CloudWatch Logs service.

`CloudWatchLogLatency`  
How much time passed between the start and end of record streaming for the CloudWatch Logs service.

`CloudWatchLogNonrecoverableServiceErrors`  
How many times records could not be sent without error to the CloudWatch Logs service despite retries.

`CloudWatchLogRecordsAttempted`  
How many records Kinesis Agent for Windows attempted to stream to the CloudWatch Logs service.

`CloudWatchLogRecordsFailedNonrecoverable`  
How many records were not successfully streamed to the CloudWatch Logs service despite retries.

`CloudWatchLogRecordsFailedRecoverable`  
How many records were successfully streamed to the CloudWatch Logs service, but only with retries.

`CloudWatchLogRecordsSuccess`  
How many records were successfully streamed to the CloudWatch Logs service without retries.

`CloudWatchLogRecoverableServiceErrors`  
How many times records could successfully be sent to the CloudWatch Logs service, but only with retries.

#### CloudWatch Metrics
<a name="cloud-watch-metrics"></a>

`CloudWatchLatency`  
How much time on average passed between the start and end of metric streaming for the CloudWatch service.

`CloudWatchNonrecoverableServiceErrors`  
How many times metrics could not be sent without error to the CloudWatch service despite retries.

`CloudWatchRecoverableServiceErrors`  
How many times metrics were sent without error to the CloudWatch service but only with retries.

`CloudWatchServiceSuccess`  
How many times metrics were sent without error to the CloudWatch service with no retries needed.

## Bookmark Configuration
<a name="advanced-source-configuration"></a>

 By default, Kinesis Agent for Windows sends to sinks only the log records that are created after the agent starts. Sometimes it is useful to send earlier log records; for example, log records that are created while Kinesis Agent for Windows is stopped during an automatic update. The bookmark feature tracks which records have been sent to sinks. When Kinesis Agent for Windows is in bookmark mode and starts up, it sends all log records that were created after Kinesis Agent for Windows stopped, along with any subsequently created log records. To control this behavior, file-based source declarations can optionally include the following key-value pairs: 

`InitialPosition`  
Specifies the initial position for the bookmark. Possible values are as follows:    
`EOS`  
Specifies end of stream (EOS). Only log records created while the agent is running are sent to sinks.  
`0`  
All available log records and events are initially sent. Then a bookmark is created to ensure that every new log record and event created after the bookmark was created are eventually sent, whether or not Kinesis Agent for Windows is running.  
`Bookmark`  
The bookmark is initialized to just after the latest log record or event. Then a bookmark is created to ensure that every new log record and event created after the bookmark was created are eventually sent, whether or not Kinesis Agent for Windows is running.  
Bookmarks are enabled by default. Files are stored in the `%ProgramData%\Amazon\KinesisTap` directory.  
`Timestamp`  
Log records and events that are created after the `InitialPositionTimestamp` value (definition follows) are sent. Then a bookmark is created to ensure that every new log record and event created after the bookmark was created are eventually sent whether or not Kinesis Agent for Windows is running.

`InitialPositionTimestamp`  
Specifies the earliest log record or event timestamp that you want. Specify this key-value pair only when `InitialPosition` has a value of `Timestamp`.

`BookmarkOnBufferFlush`  
 This setting can be added to any bookmarkable source. When set to `true`, ensures that bookmark updates occur only when a sink successfully ships an event to AWS. You can only subscribe a single sink to a source. If you are shipping logs to multiple destinations, duplicate your sources to avoid potential issues with data loss.
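
As a sketch, the following source declaration enables bookmark mode for the Windows application log and defers bookmark updates until events are successfully shipped; the `Id` is an illustrative assumption:

```
{
    "Id": "myBookmarkedApplicationLog",
    "SourceType": "WindowsEventLogSource",
    "LogName": "Application",
    "InitialPosition": "Bookmark",
    "BookmarkOnBufferFlush": "true"
}
```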

When Kinesis Agent for Windows has been stopped for a long time, it might be necessary to delete those bookmarks because log records and events that are bookmarked might no longer exist. Bookmark files for a given *source id* are located in `%PROGRAMDATA%\Amazon\AWSKinesisTap\source id.bm`.

Bookmarks do not work on files that are renamed or truncated. Because of the nature of ETW events and performance counters, they cannot be bookmarked.

# Sink Declarations
<a name="sink-object-declarations"></a>

*Sink declarations* specify where and in what form logs, events, and metrics should be sent to various AWS services. The following sections describe configurations for the built-in sink types that are available in Amazon Kinesis Agent for Microsoft Windows. Because Kinesis Agent for Windows is extensible, you can add custom sink types. Each sink type typically requires unique key-value pairs in the configuration declarations that are relevant for that sink type.

All sink declarations can contain the following key-value pairs:

`Id`  
A unique string that identifies a particular sink within the configuration file (required).

`SinkType`  
The name of the sink type for this sink (required). The sink type specifies the destination of the log, event, or metric data that is being streamed by this sink. 

`AccessKey`  
Specifies the AWS access key to use when authorizing access to the AWS service that is associated with the sink type. This key-value pair is optional. For more information, see [Sink Security Configuration](#configuring-kinesis-agent-windows-sink-security-configuration).

`SecretKey`  
Specifies the AWS secret key to use when authorizing access to the AWS service that is associated with the sink type. This key-value pair is optional. For more information, see [Sink Security Configuration](#configuring-kinesis-agent-windows-sink-security-configuration).

`Region`  
Specifies which AWS Region contains the destination resources for streaming. This key-value pair is optional.

`ProfileName`  
Specifies which AWS profile to use for authentication. This key-value pair is optional, but if specified, it overrides any specified access key and secret key. For more information, see [Sink Security Configuration](#configuring-kinesis-agent-windows-sink-security-configuration).

`RoleARN`  
Specifies the IAM role to use when accessing the AWS service that is associated with the sink type. This option is useful when Kinesis Agent for Windows is running on an EC2 instance but a different role would be more appropriate than the role referenced by the instance profile. For example, a cross-account role can be used to target resources that are not in the same AWS account as the EC2 instance. This key-value pair is optional.

`Format`  
Specifies the kind of serialization that is applied to logs and event data before streaming. Valid values are `json` and `xml`. This option is helpful when downstream analytics in the data pipeline require or prefer data in a particular form. This key-value pair is optional, and if not specified, ordinary text from the source is streamed from the sink to the AWS service that is associated with the sink type.

`TextDecoration`  
When no `Format` is specified, `TextDecoration` specifies what additional text should be included when streaming log or event records. For more information, see [Configuring Sink Decorations](#configuring-kinesis-agent-windows-decoration-configuration). This key-value pair is optional.

`ObjectDecoration`  
When `Format` is specified, `ObjectDecoration` specifies what additional data is included in the log or event record before serialization and streaming. For more information, see [Configuring Sink Decorations](#configuring-kinesis-agent-windows-decoration-configuration). This key-value pair is optional.

`BufferInterval`  
To minimize API calls to the AWS service that is associated with the sink type, Kinesis Agent for Windows buffers multiple log, event, or metric records before streaming. This can save money for services that charge per API call. `BufferInterval` specifies the maximum length of time (in seconds) that records are buffered before streaming to the AWS service. This key-value pair is optional. If it is specified, use a string to represent the value. 

`BufferSize`  
To minimize API calls to the AWS service that is associated with the sink type, Kinesis Agent for Windows buffers multiple log, event, or metric records before streaming. This can save money for services that charge per API call. `BufferSize` specifies the maximum number of records to buffer before streaming to the AWS service. This key-value pair is optional. If it is specified, use a string to represent the value.

`MaxAttempts`  
Specifies the maximum number of times Kinesis Agent for Windows tries to stream a set of log, event, and metric records to an AWS service if the streaming consistently fails. This key-value pair is optional. If it is specified, use a string to represent the value. The default value is "`3`".
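
The following hypothetical Firehose sink declaration illustrates several of these common key-value pairs together; the stream name, Region, decoration, and buffer values are illustrative assumptions:

```
{
    "Id": "myDecoratedFirehoseSink",
    "SinkType": "KinesisFirehose",
    "StreamName": "MyDeliveryStream",
    "Region": "us-west-2",
    "Format": "json",
    "ObjectDecoration": "ComputerName={ComputerName}",
    "BufferInterval": "60",
    "BufferSize": "500",
    "MaxAttempts": "5"
}
```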

For examples of complete configuration files that use various kinds of sinks, see [Streaming from the Windows Application Event Log to Sinks](configuring-kaw-examples.md#configuring-kaw-examples-sinks).

**Topics**
+ [

## `KinesisStream` Sink Configuration
](#sink-object-declarations-kinesis-stream)
+ [

## `KinesisFirehose` Sink Configuration
](#sink-object-declarations-kinesis-firehose)
+ [

## CloudWatch Sink Configuration
](#sink-object-declarations-cloud-watch)
+ [

## `CloudWatchLogs` Sink Configuration
](#sink-object-declarations-cloud-watch-logs)
+ [

## Local `FileSystem` Sink Configuration
](#sink-object-declarations-local-filesystem)
+ [

## Sink Security Configuration
](#configuring-kinesis-agent-windows-sink-security-configuration)
+ [

## Configuring `ProfileRefreshingAWSCredentialProvider` to Refresh AWS Credentials
](#configuring-credential-refresh)
+ [

## Configuring Sink Decorations
](#configuring-kinesis-agent-windows-decoration-configuration)
+ [

## Configuring Sink Variable Substitutions
](#configuring-kinesis-agent-windows-sink-variable-substitution)
+ [

## Configuring Sink Queuing
](#configuring-kinesis-agent-windows-queuing)
+ [

## Configuring a Proxy for Sinks
](#configuring-kinesis-agent-windows-sink-proxy)
+ [

## Configuring resolving variables in more sink attributes
](#configuring-resolving-variables)
+ [

## Configuring AWS STS Regional Endpoints When Using RoleARN Property in AWS Sinks
](#configuring-sts-endpoints)
+ [

## Configuring VPC Endpoint for AWS Sinks
](#configuring-vpc-endpoint)
+ [

## Configuring An Alternate Means of Proxy
](#configuring-alternate-proxy)

## `KinesisStream` Sink Configuration
<a name="sink-object-declarations-kinesis-stream"></a>

The `KinesisStream` sink type streams log records and events to the Kinesis Data Streams service. Typically, data that is streamed to Kinesis Data Streams is processed by one or more custom applications that execute using various AWS services. Data is streamed to a named stream that is configured using Kinesis Data Streams. For more information, see the *[Amazon Kinesis Data Streams Developer Guide](https://docs.aws.amazon.com/streams/latest/dev/)*. 

The following is an example Kinesis Data Streams sink declaration:

```
{
    "Id": "TestKinesisStreamSink",
    "SinkType": "KinesisStream",
    "StreamName": "MyTestStream",
    "Region": "us-west-2"
}
```

All `KinesisStream` sink declarations can provide the following additional key-value pairs:

`SinkType`  
Must be specified, and the value must be the literal string `KinesisStream`.

`StreamName`  
Specifies the name of the Kinesis data stream that receives the data streamed from the `KinesisStream` sink type (required). Before streaming the data, configure the stream in the AWS Management Console, the AWS CLI, or through an application using the Kinesis Data Streams API.

`RecordsPerSecond`  
Specifies the maximum number of records streamed to Kinesis Data Streams per second. This key-value pair is optional. If it is specified, use an integer to represent the value. The default value is 1000 records.

`BytesPerSecond`  
Specifies the maximum number of bytes streamed to Kinesis Data Streams per second. This key-value pair is optional. If it is specified, use an integer to represent the value. The default value is 1 MB.

The default `BufferInterval` for this sink type is 1 second, and the default `BufferSize` is 500 records.
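For example, a declaration that also tunes the optional throttling key-value pairs might look like the following sketch. The stream name and limit values are illustrative, not recommendations.

```
{
    "Id": "ThrottledKinesisStreamSink",
    "SinkType": "KinesisStream",
    "StreamName": "MyTestStream",
    "Region": "us-west-2",
    "RecordsPerSecond": 500,
    "BytesPerSecond": 524288
}
```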

## `KinesisFirehose` Sink Configuration
<a name="sink-object-declarations-kinesis-firehose"></a>

The `KinesisFirehose` sink type streams log records and events to the Firehose service. Firehose delivers the streamed data to other services for storage. Typically the stored data is then analyzed in subsequent stages of the data pipeline. Data is streamed to a named delivery stream that is configured using Firehose. For more information, see the *[Amazon Data Firehose Developer Guide](https://docs.aws.amazon.com/firehose/latest/dev/)*. 

The following is an example Firehose sink declaration:

```
{
   "Id": "TestKinesisFirehoseSink",
   "SinkType": "KinesisFirehose",
   "StreamName": "MyTestFirehoseDeliveryStream",
   "Region": "us-east-1",
   "CombineRecords": "true"
}
```

All `KinesisFirehose` sink declarations can provide the following additional key-value pairs:

`SinkType`  
Must be specified, and the value must be the literal string `KinesisFirehose`.

`StreamName`  
Specifies the name of the Firehose delivery stream that receives the data streamed from the `KinesisFirehose` sink type (required). Before streaming the data, configure the delivery stream using the AWS Management Console, the AWS CLI, or through an application using the Firehose API.

`CombineRecords`  
When set to `true`, combines multiple small records into a single large record with a 5 KB maximum size. This key-value pair is optional. Records combined using this function are separated by `\n`. If you use AWS Lambda to transform a Firehose record, your Lambda function needs to account for the separator character.

`RecordsPerSecond`  
Specifies the maximum number of records that are streamed to Firehose per second. This key-value pair is optional. If it is specified, use an integer to represent the value. The default value is 5000 records.

`BytesPerSecond`  
Specifies the maximum number of bytes that are streamed to Firehose per second. This key-value pair is optional. If it is specified, use an integer to represent the value. The default value is 5 MB.

The default `BufferInterval` for this sink type is 1 second, and the default `BufferSize` is 500 records.

## CloudWatch Sink Configuration
<a name="sink-object-declarations-cloud-watch"></a>

The `CloudWatch` sink type streams metrics to the CloudWatch service. You can view the metrics in the AWS Management Console. For more information, see the *[Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/)*.

The following is an example `CloudWatch` sink declaration:

```
{
   "Id": "CloudWatchSink",
   "SinkType": "CloudWatch"
}
```

All `CloudWatch` sink declarations can provide the following additional key-value pairs:

`SinkType`  
Must be specified, and the value must be the literal string `CloudWatch`.

`Interval`  
Specifies how frequently (in seconds) Kinesis Agent for Windows reports metrics to the CloudWatch service. This key-value pair is optional. If it is specified, use an integer to represent the value. The default value is 60 seconds. Specify 1 second if you want high-resolution CloudWatch metrics.

`Namespace`  
Specifies the CloudWatch namespace where the metric data is reported. CloudWatch namespaces group a set of metrics together. This key-value pair is optional. The default value is `KinesisTap`.

`Dimensions`  
Specifies the CloudWatch dimensions that are used to isolate metric sets within a namespace. This can be useful to provide separate sets of metric data for each desktop or server, for example. This key-value pair is optional, and if specified, the value must comply with the following format: `"`*key1*`=`*value1*;*key2*`=`*value2...*`"`. The default value is `"ComputerName={computername};InstanceId={instance_id}"`. This value supports sink variable substitution. For more information, see [Configuring Sink Variable Substitutions](#configuring-kinesis-agent-windows-sink-variable-substitution).

`MetricsFilter`  
Specifies which metrics are streamed to CloudWatch from the built-in Kinesis Agent for Windows metrics source. For more information about the built-in Kinesis Agent for Windows metrics source, including the details of the syntax of the value of this key-value pair, see [Kinesis Agent for Windows Built-In Metrics Source](source-object-declarations.md#kinesis-agent-builin-metrics-source).
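As a sketch, a `CloudWatch` sink declaration that overrides the optional keys might look like the following. The namespace, interval, and dimension values are illustrative; the `Dimensions` value shown is the documented default.

```
{
    "Id": "MyCloudWatchSink",
    "SinkType": "CloudWatch",
    "Interval": "60",
    "Namespace": "MyCompany/MyService",
    "Dimensions": "ComputerName={computername};InstanceId={instance_id}"
}
```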

## `CloudWatchLogs` Sink Configuration
<a name="sink-object-declarations-cloud-watch-logs"></a>

The `CloudWatchLogs` sink type streams log records and events to Amazon CloudWatch Logs. You can view logs in the AWS Management Console, or process them via additional stages of a data pipeline. Data is streamed to a named log stream that is configured in CloudWatch Logs. Log streams are organized into named log groups. For more information, see the *[Amazon CloudWatch Logs User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/)*.

The following is an example CloudWatch Logs sink declaration:

```
{
   "Id": "MyCloudWatchLogsSink",
   "SinkType": "CloudWatchLogs",
   "BufferInterval": "60",
   "BufferSize": "100",
   "Region": "us-west-2",
   "LogGroup": "MyTestLogGroup",
   "LogStream": "MyTestStream"
}
```

All `CloudWatchLogs` sink declarations must provide the following additional key-value pairs:

`SinkType`  
Must be the literal string `CloudWatchLogs`.

`LogGroup`  
Specifies the name of the CloudWatch Logs log group that contains the log stream that receives the log and event records streamed by the `CloudWatchLogs` sink type. If the specified log group does not exist, Kinesis Agent for Windows attempts to create it. 

`LogStream`  
Specifies the name of the CloudWatch Logs log stream that receives the log and event records streamed by the `CloudWatchLogs` sink type. This value supports sink variable substitution. For more information, see [Configuring Sink Variable Substitutions](#configuring-kinesis-agent-windows-sink-variable-substitution). If the specified log stream does not exist, Kinesis Agent for Windows attempts to create it.

The default `BufferInterval` for this sink type is 1 second, and the default `BufferSize` is 500 records. The maximum buffer size is 10,000 records.
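Because `LogStream` supports sink variable substitution, one common pattern is a per-machine log stream within a shared log group. The following is a sketch; the log group and stream names are illustrative.

```
{
    "Id": "PerHostCloudWatchLogsSink",
    "SinkType": "CloudWatchLogs",
    "Region": "us-west-2",
    "LogGroup": "MyApplicationLogs",
    "LogStream": "MyAppStream-{computername}"
}
```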

## Local `FileSystem` Sink Configuration
<a name="sink-object-declarations-local-filesystem"></a>

The sink type `FileSystem` saves log and event records to a file on the local file system instead of streaming them to AWS services. `FileSystem` sinks are useful for testing and diagnostics. For example, you can use this sink type to examine records before sending them to AWS.

With `FileSystem` sinks, you can also use configuration parameters to simulate batching, throttling, and retry-on-error to mimic the behavior of actual AWS sinks.

All records from all sources connected to a `FileSystem` sink are saved to the single file specified as `FilePath`. If `FilePath` is not specified, records are saved to a file named `SinkId.txt` in the `%TEMP%` directory, which is usually `C:\Users\UserName\AppData\Local\Temp`, where `SinkId` is the unique identifier of the sink and `UserName` is the Windows user name of the active user.

This sink type supports text decoration attributes. For more information, see [Configuring Sink Decorations](#configuring-kinesis-agent-windows-decoration-configuration).

An example `FileSystem` sink type configuration is shown in the following example.

```
{
	   "Id": "LocalFileSink",
	   "SinkType": "FileSystem",
	   "FilePath": "C:\\ProgramData\\Amazon\\local_sink.txt",
	   "Format": "json",
	   "TextDecoration": "",
	   "ObjectDecoration": ""
}
```

The `FileSystem` configuration consists of the following key-value pairs.

`SinkType`  
Must be the literal string `FileSystem`.

`FilePath`  
Specifies the path and file where records are saved. This key-value pair is optional. If not specified, the default is `TempPath\\SinkId.txt`, where `TempPath` is the folder stored in the `%TEMP%` variable and `SinkId` is the unique identifier of the sink.

`Format`  
Specifies the format of the event, either `json` or `xml`. This key-value pair is optional and case-insensitive. If omitted, events are written to the file in plain text.

`TextDecoration`  
Applies only to events written in plain text. This key-value pair is optional.

`ObjectDecoration`  
Applies only to events where `Format` is set to `json`. This key-value pair is optional.
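For example, the following sketch writes JSON records with an `ObjectDecoration` that follows the syntax described in [Configuring Sink Decorations](#configuring-kinesis-agent-windows-decoration-configuration). The sink `Id` and file path are illustrative.

```
{
    "Id": "DecoratedFileSink",
    "SinkType": "FileSystem",
    "FilePath": "C:\\ProgramData\\Amazon\\decorated_sink.txt",
    "Format": "json",
    "ObjectDecoration": "ComputerName={ComputerName};DT={timestamp:yyyy-MM-dd HH:mm:ss}"
}
```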

### Advanced Usage – Record Throttling and Failure Simulation
<a name="file-system-sink-advanced"></a>

`FileSystem` can mimic the behavior of AWS sinks by simulating record throttling. You can use the following key-value pairs to specify record throttling and failure simulation attributes.

By acquiring a lock on the destination file and preventing writes to it, you can use `FileSystem` sinks to simulate and examine the behavior of AWS sinks when the network fails.

The following example shows a `FileSystem` configuration with simulation attributes.

```
{
	   "Id": "LocalFileSink",
	   "SinkType": "FileSystem",
	   "FilePath": "C:\\ProgramData\\Amazon\\local_sink.txt",
	   "TextDecoration": "",
	   "RequestsPerSecond": "100",
    "BufferSize": "10",
    "MaxBatchSize": "1024"
}
```

`RequestsPerSecond`  
Optional and specified as a string type. If omitted, the default is `"5"`. Controls the rate of requests (that is, writes to the file) that the sink processes, not the number of records. Kinesis Agent for Windows makes batch requests to AWS endpoints, so a request may contain multiple records.

`BufferSize`  
Optional and specified as a string type. Specifies the maximum number of event records that the sink batches before saving to the file.

`MaxBatchSize`  
Optional and specified as a string type. Specifies the maximum amount of event record data in bytes that the sink batches before saving to file.

The maximum record rate limit is a function of `BufferSize`, which determines the maximum number of records per request, and `RequestsPerSecond`. You can calculate the record rate limit per second using the following formula.

**RecordRate** = `BufferSize` \* `RequestsPerSecond`

Given the configuration values in the example above, the maximum record rate is 1,000 records per second (a `BufferSize` of 10 records per request multiplied by 100 `RequestsPerSecond`).

## Sink Security Configuration
<a name="configuring-kinesis-agent-windows-sink-security-configuration"></a>

### Configuring Authentication
<a name="configuring-kinesis-agent-windows-authentication"></a>

For Kinesis Agent for Windows to stream logs, events, and metrics to AWS services, access must be authenticated. There are several ways to provide authentication for Kinesis Agent for Windows. How you do it depends on the situation where Kinesis Agent for Windows is executing and the specific security requirements for a particular organization.
+ If Kinesis Agent for Windows is executing on an Amazon EC2 host, the most secure and simplest way to provide authentication is to create an IAM role with sufficient access to the required operations for the required AWS services, and an EC2 instance profile that references that role. For information about creating instance profiles, see [Using Instance Profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html). For information about what policies to attach to the IAM role, see [Configuring Authorization](#configuring-kinesis-agent-windows-authorization). 

  After creating the instance profile, you can associate it with any EC2 instances that use Kinesis Agent for Windows. If instances already have an associated instance profile, you can attach the appropriate policies to the role that is associated with that instance profile.
+ If Kinesis Agent for Windows executes on an EC2 host in one account, but the resources that are the target of the sink reside in a different account, you can create an IAM role for cross-account access. For more information, see [Tutorial: Delegate Access Across AWS accounts Using IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html). After creating the cross-account role, specify the Amazon Resource Name (ARN) for the cross-account role as the value of the `RoleARN` key-value pair in the sink declaration. Kinesis Agent for Windows then attempts to assume the specified cross-account role when accessing AWS resources that are associated with the sink type for that sink.
+ If Kinesis Agent for Windows is executing outside of Amazon EC2 (for example, on-premises), several options exist:
  + If it is acceptable to register the on-premises server or desktop machine as an AWS Systems Manager managed instance, use the following process to configure authentication:

    1. Use the process described in [Setting Up AWS Systems Manager in Hybrid Environments](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html) to create a service role, create an activation for a managed instance, and install the SSM agent.

    1. Attach the appropriate policies to the service role to enable Kinesis Agent for Windows to access the resources necessary for streaming data from the configured sinks. For information about what policies to attach to the IAM role, see [Configuring Authorization](#configuring-kinesis-agent-windows-authorization).

    1. Use the process described in [Configuring `ProfileRefreshingAWSCredentialProvider` to Refresh AWS Credentials](#configuring-credential-refresh) to refresh AWS credentials.

    This is the recommended approach for non-EC2 instances because credentials are securely managed by SSM and AWS.
  + If it's acceptable to run the `AWSKinesisTap` service for Kinesis Agent for Windows under a specific user instead of the default system account, use the following process:

    1. Create an IAM user in the AWS account where the AWS services will be used. Capture the access key and secret key of this user during the creation process. You need this information for later steps in this process.

    1. Attach policies to the IAM user that authorize access to the required operations for the required services. For information about what policies to attach to the IAM user, see [Configuring Authorization](#configuring-kinesis-agent-windows-authorization).

    1. Change the `AWSKinesisTap` service on each desktop or server so that it runs under a specific user rather than the default system account.

    1. Create a profile in the SDK store using the access key and secret key recorded earlier. For more information, see [Configuring AWS Credentials](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html).

    1. Update the `AWSKinesisTap.exe.config` file in the `%PROGRAMFILES%\Amazon\AWSKinesisTap` directory to specify the name of the profile created in the previous step. For more information, see [Configuring AWS Credentials](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html).

    This is the recommended approach for non-EC2 hosts that cannot be managed instances because the credentials are encrypted for the specific host and the specific user.
  + If it is required to run the `AWSKinesisTap` service for Kinesis Agent for Windows under the default system account, you must use a shared credential file. This is because the system account has no Windows user profile for enabling the SDK store. Shared credential files are not encrypted, so we do not recommend this approach. For information about how to use shared configuration files, see [Configuring AWS Credentials](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html) in the *AWS SDK for .NET Developer Guide*. If you use this approach, we recommend that you use NTFS encryption and restricted file access to the shared configuration file. Keys should be rotated by a management platform, and the shared configuration file must be updated when the key rotation occurs.

Although it is possible to directly provide access keys and secret keys in the sink declarations, this approach is discouraged because the declarations are not encrypted.
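For the cross-account case described above, the sink declaration references the role through the `RoleARN` key-value pair. The following is a sketch with a hypothetical account ID, role name, and stream name.

```
{
    "Id": "CrossAccountKinesisStreamSink",
    "SinkType": "KinesisStream",
    "StreamName": "MyTestStream",
    "Region": "us-west-2",
    "RoleARN": "arn:aws:iam::123456789012:role/CrossAccountKinesisRole"
}
```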

### Configuring Authorization
<a name="configuring-kinesis-agent-windows-authorization"></a>

Attach the appropriate policies that follow to the IAM user or role that Kinesis Agent for Windows will use to stream data to AWS services:

#### Kinesis Data Streams
<a name="minimum-permissions-kinesis-stream"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kinesis:PutRecord",
                "kinesis:PutRecords"
            ],
            "Resource": "arn:aws:kinesis:*:*:stream/*"
        }
    ]
}
```

------

To limit authorization to a specific Region, account, or stream name, replace the appropriate asterisks in the ARN with specific values. For more information, see "Amazon Resource Names (ARNs) for Kinesis Data Streams" in [Controlling Access to Amazon Kinesis Data Streams Resources Using IAM](https://docs.aws.amazon.com/streams/latest/dev/controlling-access.html). 

#### Firehose
<a name="minimum-permissions-kinesis-firehose"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "firehose:PutRecord",
                "firehose:PutRecordBatch"
            ],
            "Resource": "arn:aws:firehose:*:*:deliverystream/*"
        }
    ]
}
```

------

To limit authorization to a specific Region, account, or delivery stream name, replace the appropriate asterisks in the ARN with specific values. For more information, see [Controlling Access with Amazon Kinesis Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html) in the *Amazon Data Firehose Developer Guide*.

#### CloudWatch
<a name="minimum-permissions-cloud-watch"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "cloudwatch:PutMetricData",
            "Resource": "*"
        }
    ]
}
```

------

For more information, see [Overview of Managing Access Permissions to Your CloudWatch Resources](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-access-control-overview-cw.html) in the *Amazon CloudWatch User Guide*.

#### CloudWatch Logs with an Existing Log Group and Log Stream
<a name="minimum-permissions-cloud-watch-logs1"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:*"
        },
        {
            "Sid": "VisualEditor4",
            "Effect": "Allow",
            "Action": "logs:PutLogEvents",
            "Resource": "arn:aws:logs:*:*:log-group:*:*:*"
        }
    ]
}
```

------

To restrict access to a specific Region, account, log group, or log stream, replace the appropriate asterisks in the ARNs with appropriate values. For more information, see [Overview of Managing Access Permissions to Your CloudWatch Logs Resources](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html) in the *Amazon CloudWatch Logs User Guide*.

#### CloudWatch Logs with Extra Permissions for Kinesis Agent for Windows to Create Log Groups and Log Streams
<a name="minimum-permissions-cloud-watch-logs2"></a>

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor5",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:*"
        },
        {
            "Sid": "VisualEditor6",
            "Effect": "Allow",
            "Action": "logs:PutLogEvents",
            "Resource": "arn:aws:logs:*:*:log-group:*:*:*"
        },
        {
            "Sid": "VisualEditor7",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "*"
        }
    ]
}
```

------

To restrict access to a specific Region, account, log group, or log stream, replace the appropriate asterisks in the ARNs with appropriate values. For more information, see [Overview of Managing Access Permissions to Your CloudWatch Logs Resources](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html) in the *Amazon CloudWatch Logs User Guide*.

#### Permissions Required for EC2 Tag Variable Expansion
<a name="ec2-permissions"></a>

Using variable expansion with the `ec2tag` variable prefix requires the `ec2:Describe*` permission.

------
#### [ JSON ]

****  

```
{
   "Version": "2012-10-17",
   "Statement": [{
      "Sid": "VisualEditor8",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
   ]
}
```

------

**Note**  
You can combine multiple statements into a single policy as long as the `Sid` for each statement is unique within that policy. For information about creating policies, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

## Configuring `ProfileRefreshingAWSCredentialProvider` to Refresh AWS Credentials
<a name="configuring-credential-refresh"></a>

If you use AWS Systems Manager for hybrid environments to manage AWS credentials, Systems Manager rotates session credentials in `c:\Windows\System32\config\systemprofile\.aws\credentials`. For more information about Systems Manager for hybrid environments, see [Setting up AWS Systems Manager for hybrid environments](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html) in the *AWS Systems Manager User Guide*.

Because the AWS SDK for .NET does not pick up new credentials automatically, we provide the `ProfileRefreshingAWSCredentialProvider` plug-in to refresh credentials.

You can use the `CredentialRef` attribute of any AWS sink configuration to reference a `Credentials` definition where the `CredentialType` attribute is set to `ProfileRefreshingAWSCredentialProvider`, as shown in the following example.

```
{
    "Sinks": [{
		      "Id": "myCloudWatchLogsSink",
		      "SinkType": "CloudWatchLogs",
		      "CredentialRef": "ssmcred",
		      "Region": "us-west-2",
		      "LogGroup": "myLogGroup",
		      "LogStream": "myLogStream"
    }],
    "Credentials": [{
        "Id": "ssmcred",
        "CredentialType": "ProfileRefreshingAWSCredentialProvider",
        "Profile": "default",
        "FilePath": "%USERPROFILE%//.aws//credentials",
        "RefreshingInterval": 300
    }]
}
```

A credential definition consists of the following attributes as key-value pairs.

`Id`  
Defines the string that sink definitions can specify using `CredentialRef` to reference this credential configuration.

`CredentialType`  
Set to the literal string `ProfileRefreshingAWSCredentialProvider`.

`Profile`  
Optional. The default is `default`.

`FilePath`  
Optional. Specifies the path to the AWS credentials file. If omitted, `%USERPROFILE%/.aws/credentials` is the default.

`RefreshingInterval`  
Optional. The frequency at which credentials are refreshed, in seconds. If omitted, `300` is the default.

## Configuring Sink Decorations
<a name="configuring-kinesis-agent-windows-decoration-configuration"></a>

Sink declarations can optionally include key-value pairs that specify additional data to stream to various AWS services to enhance the records gathered from the source.

`TextDecoration`  
Use this key-value pair when no `Format` is specified in the sink declaration. The value is a special format string where variable substitution occurs. For example, suppose that a `TextDecoration` of `"{ComputerName}:::{timestamp:yyyy-MM-dd HH:mm:ss}:::{_record}"` is provided for a sink. When a source emits a log record that contains the text `The system has resumed from sleep.`, and that source is connected to the sink via a pipe, then the text `MyComputer1:::2017-10-26 06:14:22:::The system has resumed from sleep.` is streamed to the AWS service associated with the sink type. The `{_record}` variable references the original text record delivered by the source.

`ObjectDecoration`  
Use this key-value pair when `Format` is specified in the sink declaration to add additional data before record serialization. For example, suppose that an `ObjectDecoration` of `"ComputerName={ComputerName};DT={timestamp:yyyy-MM-dd HH:mm:ss}"` is provided for a sink that specifies JSON `Format`. The resulting JSON streamed to the AWS service associated with the sink type includes the following key-value pairs in addition to the original data from the source:  

```
{
    ComputerName: "MyComputer2",
    DT: "2017-10-17 21:09:04"
}
```
For an example of using `ObjectDecoration`, see [Tutorial: Stream JSON Log Files to Amazon S3 Using Kinesis Agent for Windows](directory-source-to-s3-tutorial.md).

`ObjectDecorationEx`  
Specifies an expression, which allows for more flexible data extraction and formatting as compared to `ObjectDecoration`. This field can be used when the format of the sink is `json`. The expression syntax is as follows.  

```
"ObjectDecorationEx": "attribute1={expression1};attribute2={expression2};attribute3={expression3}(;...)"
```
For example, the following `ObjectDecorationEx` attribute:  

```
"ObjectDecorationEx": "host={env:ComputerName};message={upper(_record)};time={format(_timestamp, 'yyyyMMdd')}"
```
transforms the literal record:  
`System log message`  
Into a JSON object as follows, with the values returned by the expressions:  

```
{
    "host": "EC2AMAZ-1234",
    "message": "SYSTEM LOG MESSAGE",
    "time": "20210201"
}
```
For more information about formulating expressions, see [Tips for Writing Expressions](#configuring-expressions). Most `ObjectDecoration` declarations work with the new syntax, with the exception of timestamp variables. A `{timestamp:yyyyMMdd}` field in `ObjectDecoration` is expressed as `{format(_timestamp,'yyyyMMdd')}` in `ObjectDecorationEx`.

`TextDecorationEx`  
Specifies an expression, which allows for more flexible data extraction and formatting as compared to `TextDecoration`, as shown in the following example.  

```
"TextDecorationEx": "Message '{lower(_record)}' at {format(_timestamp, 'yyyy-MM-dd')}"
```
You can use `TextDecorationEx` to compose JSON objects. Use `@{` to escape the open curly brace, as shown in the following example.  

```
"TextDecorationEx": "@{ \"var\": \"{upper($myvar1)}\" }"
```

If the source type of the source connected to the sink is `DirectorySource`, then the sink can use three additional variables:

`_FilePath`  
The full path to the log file.

`_FileName`  
The file name and file name extension of the file.

`_Position`  
An integer that represents where the record is located in the log file.

These variables are useful when you use a source that gathers log records from multiple files connected to a sink that streams all the records to a single stream. Injecting the values of these variables into the streaming records enables downstream analytics in the data pipeline to order the records by file and by location within each file.
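For example, a `TextDecoration` can inject these variables so that downstream consumers can reorder records. The following is a sketch using a `FileSystem` sink for illustration; the sink `Id` and file path are assumptions.

```
{
    "Id": "OrderedFileSink",
    "SinkType": "FileSystem",
    "FilePath": "C:\\ProgramData\\Amazon\\ordered_sink.txt",
    "TextDecoration": "{_FileName}:{_Position}:::{_record}"
}
```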

### Tips for Writing Expressions
<a name="configuring-expressions"></a>

An expression can be any of the following:
+ A variable expression.
+ A constant expression, for example, `'hello'`, `1`, `1.21`, `null`, `true`, `false`.
+ An invocation expression that calls a function, as shown in the following example.

  ```
  regexp_extract('Info: MID 118667291 ICID 197973259 RID 0 To: <jd@acme.com>', 'To: (\\\\S+)', 1)
  ```

#### Special Characters
<a name="ex-special-char"></a>

Two backslashes are required to escape special characters.

#### Nesting
<a name="ex-nesting"></a>

Function invocations can be nested, as shown in the following example.

```
format(date(2018, 11, 28), 'MMddyyyy')
```

#### Variables
<a name="ex-variables"></a>

There are three types of variables: local, meta, and global.
+ **Local variables** start with a `$` such as `$message`. They are used to resolve the property of the event object, an entry if the event is a dictionary, or an attribute if the event is a JSON object. If the local variable contains space or special characters, use a quoted local variable such as `$'date created'`.
+ **Meta variables** start with an underscore (`_`) and are used to resolve to the metadata of the event. All event types support the following meta variables.  
`_timestamp`  
The time stamp of the event.  
`_record`  
The raw text representation of the event.

  Log events support the following additional meta variables.  
`_filepath`  
The full path to the log file.  
`_filename`  
The file name and file name extension of the file.  
`_position`  
An integer that represents where the record is located in the log file.  
`_linenumber`  
The line number of the record in the log file.
+ **Global variables** resolve to environment variables, EC2 instance metadata, or EC2 tags. For better performance, we recommend that you use a prefix to limit the search scope, such as `{env:ComputerName}`, `{ec2:InstanceId}`, and `{ec2tag:Name}`.

#### Built-in Functions
<a name="ex-built-in-functions"></a>

Kinesis Agent for Windows supports the following built-in functions. If any of the arguments are `NULL` and the function is not designed to handle `NULL`, a `NULL` object is returned.

```
//string functions
int length(string input)
string lower(string input)
string lpad(string input, int size, string padstring)
string ltrim(string input)
string rpad(string input, int size, string padstring)
string rtrim(string input)
string substr(string input, int start)
string substr(string input, int start, int length)
string trim(string input)
string upper(string str)

//regular expression functions
string regexp_extract(string input, string pattern)
string regexp_extract(string input, string pattern, int group)

//date functions
DateTime date(int year, int month, int day)
DateTime date(int year, int month, int day, int hour, int minute, int second)
DateTime date(int year, int month, int day, int hour, int minute, int second, int millisecond)

//conversion functions
int? parse_int(string input)
decimal? parse_decimal(string input)
DateTime? parse_date(string input, string format)
string format(object o, string format)

//coalesce functions
object coalesce(object obj1, object obj2)
object coalesce(object obj1, object obj2, object obj3)
object coalesce(object obj1, object obj2, object obj3, object obj4)
object coalesce(object obj1, object obj2, object obj3, object obj4, object obj5) 
object coalesce(object obj1, object obj2, object obj3, object obj4, object obj5, object obj6)
```
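The `NULL`-propagation behavior described above can be sketched in Python. The following is a hypothetical model of `coalesce` and `regexp_extract` (with `None` standing in for `NULL`), for illustration only, not the agent's implementation:

```python
import re

def coalesce(*args):
    # Return the first non-None argument, mirroring the coalesce functions.
    for arg in args:
        if arg is not None:
            return arg
    return None

def regexp_extract(input_str, pattern, group=0):
    # Return the matched group, or None when the input is NULL or there is no match.
    if input_str is None:
        return None
    match = re.search(pattern, input_str)
    return match.group(group) if match else None
```

For example, `coalesce(None, "fallback")` returns `"fallback"`, and `regexp_extract("ERROR 42", r"\d+")` returns `"42"`, while a `None` input simply propagates through as `None`.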

## Configuring Sink Variable Substitutions
<a name="configuring-kinesis-agent-windows-sink-variable-substitution"></a>

The `KinesisStream`, `KinesisFirehose`, and `CloudWatchLogs` sink declarations require either a `LogStream` or `StreamName` key-value pair. The values of these key-value pairs can contain variable references that are automatically resolved by Kinesis Agent for Windows. For `CloudWatchLogs`, the `LogGroup` key-value pair is also required, and it can likewise contain variable references that are automatically resolved. Variables are specified using the template `{`*`prefix`*`:`*`variablename`*`}`, where *`prefix`*`:` is optional. The supported prefixes are as follows:
+ `env` — The variable reference is resolved to the value of the environment variable of the same name.
+ `ec2` — The variable reference is resolved to the EC2 instance metadata of the same name.
+ `ec2tag` — The variable reference is resolved to the value of the EC2 instance tag of the same name. The `ec2:Describe*` permission is required to access instance tags. For more information, see [Permissions Required for EC2 Tag Variable Expansion](#ec2-permissions). 

If no prefix is specified, the variable reference is resolved as follows: if there is an environment variable with the same name as `variablename`, the reference resolves to the value of that environment variable. Otherwise, if `variablename` is `instance_id` or `hostname`, the reference resolves to the value of the EC2 metadata of the same name. Otherwise, the variable reference is not resolved.

The following are examples of valid key-value pairs using variable references:

```
"LogStream": "LogStream_{instance_id}"
"LogStream": "LogStream_{hostname}"
"LogStream": "LogStream_{ec2:local-hostname}"
"LogStream": "LogStream_{computername}"
"LogStream": "LogStream_{env:computername}"
```
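The resolution order described above can be sketched in Python. This is a hypothetical model for illustration, with the EC2 metadata lookup stubbed out as a dictionary, not the agent's implementation:

```python
import os

def resolve_variable(name, prefix=None, ec2_metadata=None):
    """Sketch of the resolution order for sink variable references."""
    ec2_metadata = ec2_metadata or {}
    if prefix == "env":
        return os.environ.get(name)
    if prefix in ("ec2", "ec2tag"):
        return ec2_metadata.get(name)  # stand-in for an instance metadata/tag lookup
    # No prefix: environment variables win; then instance_id/hostname metadata.
    if name in os.environ:
        return os.environ[name]
    if name in ("instance_id", "hostname"):
        return ec2_metadata.get(name)
    return None  # the variable reference is not resolved
```

For example, `resolve_variable("instance_id", ec2_metadata={"instance_id": "i-0123"})` returns `"i-0123"` when no environment variable of that name exists.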

The `CloudWatchLogs` sink declarations support a special timestamp variable that allows the timestamp of the original log or event record from the source to alter the name of the log stream. The format is `{timestamp:`*`timeformat`*`}`. See the following example:

```
"LogStream": "LogStream_{timestamp:yyyyMMdd}"
```

If the log or event record was generated on June 5, 2017, the value of the `LogStream` key-value pair in the previous example would resolve to `"LogStream_20170605"`.
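The *`timeformat`* portion uses .NET-style date format specifiers. As a quick check of the example above, Python's roughly equivalent `strftime` pattern `%Y%m%d` produces the same stream name:

```python
from datetime import datetime

# The .NET format specifier "yyyyMMdd" corresponds roughly to strftime "%Y%m%d".
record_time = datetime(2017, 6, 5)
log_stream = "LogStream_" + record_time.strftime("%Y%m%d")
print(log_stream)  # LogStream_20170605
```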

If authorized, the `CloudWatchLogs` sink type can automatically create new log streams when required based on the generated names. You cannot do this for other sink types because they require additional configuration beyond the name of the stream.

There are special variable substitutions that occur in text and object decoration. For more information, see [Configuring Sink Decorations](#configuring-kinesis-agent-windows-decoration-configuration).

## Configuring Sink Queuing
<a name="configuring-kinesis-agent-windows-queuing"></a>

The `KinesisStream`, `KinesisFirehose`, and `CloudWatchLogs` sink declarations can optionally enable queuing of records that have failed to stream to the AWS service associated with those sink types due to transient connectivity issues. To enable queuing and automatic streaming retries when connectivity is restored, use the following key-value pairs in the sink declarations:

`QueueType`  
Specifies the kind of queuing mechanism to use. The only supported value is `file`, which indicates that records should be queued in a file. This key-value pair is required in order to enable the queuing feature of Kinesis Agent for Windows. If it is not specified, the default behavior is to queue records in memory only and to fail streaming when the in-memory queuing limits are reached.

`QueuePath`  
Specifies the path to the folder that contains the files of queued records. This key-value pair is optional. The default value is `%PROGRAMDATA%\KinesisTap\Queue\`*SinkId* where *SinkId* is the identifier you assigned as the value of the `Id` for the sink declaration.

`QueueMaxBatches`  
Limits the total amount of space that Kinesis Agent for Windows can consume when queuing records for streaming. The amount of space is limited to the value of this key-value pair multiplied by the maximum number of bytes per batch. The maximum bytes per batch for the `KinesisStream`, `KinesisFirehose`, and `CloudWatchLogs` sink types are 5 MB, 4 MB, and 1 MB respectively. When this limit is reached, any streaming failures are not queued and are reported as non-recoverable failures. This key-value pair is optional. The default value is 10,000 batches.
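For example, a `KinesisFirehose` sink declaration that enables file-based queuing might look like the following sketch (the stream name and the `QueueMaxBatches` value are illustrative):

```
{
    "Id": "myFirehoseSink",
    "SinkType": "KinesisFirehose",
    "StreamName": "MyDeliveryStream",
    "Region": "us-west-2",
    "QueueType": "file",
    "QueueMaxBatches": 200
}
```

Because the maximum batch size for `KinesisFirehose` is 4 MB, this configuration caps the on-disk queue at roughly 800 MB. Omitting `QueuePath` stores queued batches under the default `%PROGRAMDATA%\KinesisTap\Queue\myFirehoseSink` folder.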

## Configuring a Proxy for Sinks
<a name="configuring-kinesis-agent-windows-sink-proxy"></a>

To configure a proxy for all the Kinesis Agent for Windows sink types that access AWS services, edit the Kinesis Agent for Windows configuration file located at `%PROGRAMFILES%\Amazon\AWSKinesisTap\AWSKinesisTap.exe.config`. For instructions, see the `proxy` section in [Configuration Files Reference for AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/net-dg-config-ref.html#net-dg-config-ref-elements-proxy) in the *AWS SDK for .NET Developer Guide*. 

## Configuring Resolving Variables in More Sink Attributes
<a name="configuring-resolving-variables"></a>

The following example shows a sink configuration that uses the `Region` environment variable for the value of the `Region` attribute key-value pair. For `RoleARN`, it specifies the EC2 tag key `MyRoleARN`, which evaluates to the value associated with that key.

```
{
    "Id": "myCloudWatchLogsSink",
    "SinkType": "CloudWatchLogs",
    "LogGroup": "EC2Logs",
    "LogStream": "logs-{instance_id}",
    "Region": "{env:Region}",
    "RoleARN": "{ec2tag:MyRoleARN}"
}
```

## Configuring AWS STS Regional Endpoints When Using RoleARN Property in AWS Sinks
<a name="configuring-sts-endpoints"></a>

This feature only applies if you are using KinesisTap on Amazon EC2 and using the `RoleARN` property of AWS sinks to assume an external IAM role to authenticate with the destination AWS services. 

By setting `UseSTSRegionalEndpoints` to `true`, you can specify that the agent use the Regional endpoint (for example, `https://sts.us-east-1.amazonaws.com`) instead of the global endpoint (for example, `https://sts.amazonaws.com`). Using a Regional STS endpoint reduces round-trip latency for the operation and limits the impact of failures in the global endpoint service. 
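For example, a `CloudWatchLogs` sink that assumes a role might opt in to the Regional STS endpoint as follows (the role ARN is illustrative):

```
{
    "Id": "myCloudWatchLogsSink",
    "SinkType": "CloudWatchLogs",
    "LogGroup": "EC2Logs",
    "LogStream": "logs-{instance_id}",
    "Region": "us-east-1",
    "RoleARN": "arn:aws:iam::123456789012:role/KinesisTapRole",
    "UseSTSRegionalEndpoints": "true"
}
```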

## Configuring VPC Endpoint for AWS Sinks
<a name="configuring-vpc-endpoint"></a>

You can specify a VPC endpoint in the sink configuration for `CloudWatchLogs`, `CloudWatch`, `KinesisStreams`, and `KinesisFirehose` sink types. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. For more information, see [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) in the *Amazon VPC User Guide*.

You specify the VPC endpoint using the `ServiceURL` property as shown in the following example of a `CloudWatchLogs` sink configuration. Set the value of `ServiceURL` to the value shown on the **VPC endpoint details** tab using the Amazon VPC console.

```
{
    "Id": "myCloudWatchLogsSink",
    "SinkType": "CloudWatchLogs",
    "LogGroup": "EC2Logs",
    "LogStream": "logs-{instance_id}",
    "ServiceURL": "https://vpce-ab1c234de56-ab7cdefg.logs.us-east-1.vpce.amazonaws.com"
}
```

## Configuring an Alternate Means of Proxy
<a name="configuring-alternate-proxy"></a>

This feature allows you to configure a proxy server in a sink configuration using the proxy support built into the AWS SDK, rather than the .NET mechanism. Previously, the only way to configure the agent to use a proxy was through a native feature of .NET, which automatically routed all HTTP/S requests through the proxy defined in the `AWSKinesisTap.exe.config` file.

If you are currently using the agent with a proxy server, you do not need to change over to use this method.

You can use the `ProxyHost` and `ProxyPort` properties to configure an alternate proxy as shown in the following example.

```
{
    "Id": "myCloudWatchLogsSink",
    "SinkType": "CloudWatchLogs",
    "LogGroup": "EC2Logs",
    "LogStream": "logs-{instance_id}",
    "Region": "us-west-2",
    "ProxyHost": "myproxy.mydnsdomain.com",
    "ProxyPort": "8080"
}
```

# Pipe Declarations
<a name="pipe-object-declarations"></a>

Use *pipe declarations* to connect a source (see [Source Declarations](source-object-declarations.md)) to a sink (see [Sink Declarations](sink-object-declarations.md)) in Amazon Kinesis Agent for Microsoft Windows. A pipe declaration is expressed as a JSON object. After Kinesis Agent for Windows starts, the logs, events, or metrics are gathered from the source for a given pipe. They are then streamed to various AWS services using the sink that is associated with that pipe.

The following is an example pipe declaration:

```
{
   "Id": "MyAppLogToCloudWatchLogs", 
   "SourceRef": "MyAppLog", 
   "SinkRef": "MyCloudWatchLogsSink" 
}
```

**Topics**
+ [

## Configuring Pipes
](#kinesis-agent-pipe-configuration)
+ [

## Configuring Kinesis Agent for Windows Metric Pipes
](#kinesis-agent-metric-pipe-configuration)

## Configuring Pipes
<a name="kinesis-agent-pipe-configuration"></a>

All pipe declarations can contain the following key-value pairs:

`Id`  
Specifies the name of the pipe (required). It must be unique within the configuration file. 

`Type`  
Specifies the type of transformation (if any) that is applied by the pipe as log data is transferred from the source to the sink. The only supported value is `RegexFilterPipe`. This value enables regular expression filtering of the underlying textual representation of the log record. Using filtering can reduce transmission and storage costs by sending only relevant log records downstream to the data pipeline. This key-value pair is optional. The default value is to provide no transformation.

`FilterPattern`  
Specifies the regular expression that `RegexFilterPipe` pipes use to filter log records gathered by the source before they are transferred to the sink. Log records are transferred by `RegexFilterPipe` type pipes when the regular expression matches the underlying textual representation of the record. Structured log records that are generated, for example, when using the `ExtractionPattern` key-value pair in a `DirectorySource` declaration, can still be filtered using the `RegexFilterPipe` mechanism, because it operates on the original textual representation before parsing. This key-value pair is optional, but it must be provided if the pipe specifies the `RegexFilterPipe` type.  
The following is an example `RegexFilterPipe` pipe declaration:  

```
{
	"Id": "MyAppLog2ToFirehose",
	"Type": "RegexFilterPipe",
	"SourceRef": "MyAppLog2",
	"SinkRef": "MyFirehose",
	"FilterPattern": "^(10|11),.*",
	"IgnoreCase": false,
	"Negate": false
}
```

`SourceRef`  
Specifies the name (the value of the `Id` key-value pair) of the source declaration that defines the source that is collecting log, event, and metric data for the pipe (required). 

`SinkRef`  
Specifies the name (the value of the `Id` key-value pair) of the sink declaration that defines the sink that is receiving the log, event, and metric data for the pipe (required).

`IgnoreCase`  
Optional. Accepts values of `true` or `false`. When set to `true`, the regular expression matches records in a case-insensitive manner.

`Negate`  
Optional. Accepts values of `true` or `false`. When set to `true`, the pipe forwards the records that *do not* match the regular expression.

For an example of a complete configuration file that uses the `RegexFilterPipe` pipe type, see [Using Pipes](configuring-kaw-examples.md#configuring-kaw-examples-pipes).
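The `RegexFilterPipe` semantics described above, including `IgnoreCase` and `Negate`, can be sketched in Python as a hypothetical model, not the agent's implementation:

```python
import re

def regex_filter_pipe(records, filter_pattern, ignore_case=False, negate=False):
    """Forward records whose raw text matches (or, with negate, fails to match)."""
    flags = re.IGNORECASE if ignore_case else 0
    compiled = re.compile(filter_pattern, flags)
    for record in records:
        matched = compiled.search(record) is not None
        if matched != negate:
            yield record

# Records beginning with event codes 10 or 11 pass the example FilterPattern.
records = ["10,info,started", "20,warn,slow", "11,error,failed"]
print(list(regex_filter_pipe(records, r"^(10|11),.*")))
# ['10,info,started', '11,error,failed']
```

With `negate=True`, the same pattern would instead forward only `"20,warn,slow"`.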

## Configuring Kinesis Agent for Windows Metric Pipes
<a name="kinesis-agent-metric-pipe-configuration"></a>

There is a built-in metric source named `_KinesisTapMetricsSource` that produces metrics about Kinesis Agent for Windows. If there is a `CloudWatch` sink declaration with an `Id` of `MyCloudWatchSink`, the following example pipe declaration transfers Kinesis Agent for Windows-generated metrics to that sink:

```
{
   "Id": "KinesisAgentMetricsToCloudWatch",
   "SourceRef": "_KinesisTapMetricsSource",
   "SinkRef": "MyCloudWatchSink"
}
```

For more information about the Kinesis Agent for Windows built-in metrics source, see [Kinesis Agent for Windows Built-In Metrics Source](source-object-declarations.md#kinesis-agent-builin-metrics-source).

If the configuration file also streams Windows performance counter metrics, we recommend that you use a separate pipe and sink rather than using the same sink for both Kinesis Agent for Windows metrics and Windows performance counter metrics.

# Configuring Automatic Updates
<a name="update-configuration-options"></a>

Use the `appsettings.json` configuration file to enable automatic updating of Amazon Kinesis Agent for Microsoft Windows and the configuration file for Kinesis Agent for Windows. To control the update behavior, specify the `Plugins` key-value pair at the same level in the configuration file as `Sources`, `Sinks`, and `Pipes`.

The `Plugins` key-value pair specifies the additional general functionality to use that does not fall specifically under the categories of sources, sinks, and pipes. For example, there is a plugin for updating Kinesis Agent for Windows, and there is a plugin for updating the `appsettings.json` configuration file. Plugins are represented as JSON objects and always have a `Type` key-value pair. The `Type` determines what other key-value pairs can be specified for the plugin. The following plugin types are supported:

`PackageUpdate`  
Specifies that Kinesis Agent for Windows should periodically check a package version configuration file. If the package version file indicates that a different version of Kinesis Agent for Windows should be installed, then Kinesis Agent for Windows downloads that version and installs it. The `PackageUpdate` plugin key-value pairs include:    
`Type`  
The value must be the string `PackageUpdate`, and it is required.  
`Interval`  
Specifies how often, in minutes (represented as a string), to check the package version file for changes. This key-value pair is optional. If it is not specified, the default value is 60 minutes. If the value is less than 1, no update checking occurs.  
`PackageVersion`  
Specifies the location of the package version JSON file. The file can reside on a file share (`file://`), a website (`http://`), or Amazon S3 (`s3://`). For example, a value of `s3://mycompany/config/agent-package-version.json` indicates that Kinesis Agent for Windows should check the contents of the `config/agent-package-version.json` file in the `mycompany` Amazon S3 bucket. It should perform updates based on the contents of that file.   
The value of the `PackageVersion` key-value pair is case sensitive for Amazon S3.
The following is an example of the contents of a package version file:   

```
{
    "Name": "AWSKinesisTap",
    "Version": "1.0.0.106",
    "PackageUrl": "https://s3-us-west-2.amazonaws.com/kinesis-agent-windows/downloads/AWSKinesisTap.{Version}.nupkg"
}
```
The `Version` key-value pair specifies what version of Kinesis Agent for Windows should be installed if it is not already installed. The `{Version}` variable reference in the `PackageUrl` resolves to the value you specify for the `Version` key-value pair. In this example, the variable resolves to the string `1.0.0.106`. This variable resolution is provided so that there can be a single place in the package version file where the specific desired version is stored. You can use multiple package version files to control the pace of rolling out new versions of Kinesis Agent for Windows to validate a new version before a larger deployment. To roll back a deployment of Kinesis Agent for Windows, change one or more package version files to specify an earlier version of Kinesis Agent for Windows that is known to work in your environment.  
The value of the `PackageVersion` key-value pair is affected by variable substitution to facilitate the automatic selection of different package version files. For more information about variable substitution, see [Configuring Sink Variable Substitutions](sink-object-declarations.md#configuring-kinesis-agent-windows-sink-variable-substitution).  
`AccessKey`  
Specifies which access key to use when authenticating access to the package version file in Amazon S3. This key-value pair is optional. We do not recommend using this key-value pair. For alternative authentication approaches that are recommended, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication).   
`SecretKey`  
Specifies which secret key to use when authenticating access to the package version file in Amazon S3. This key-value pair is optional. We do not recommend using this key-value pair. For alternative authentication approaches that are recommended, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication).  
`Region`  
Specifies what Region endpoint to use when accessing the package version file from Amazon S3. This key-value pair is optional.  
`ProfileName`  
Specifies which security profile to use when authenticating access to the package version file in Amazon S3. For more information, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication). This key-value pair is optional.  
`RoleARN`  
Specifies which role to assume when authenticating access to the package version file in Amazon S3 in a cross-account scenario. For more information, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication). This key-value pair is optional.
If no `PackageUpdate` plugin is specified, then no package version files are checked to determine if an update is required.
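As a sketch of the `{Version}` substitution described above, the resolved download URL can be computed as follows (Python, for illustration only):

```python
# Package version file contents (from the example above).
package_version = {
    "Name": "AWSKinesisTap",
    "Version": "1.0.0.106",
    "PackageUrl": "https://s3-us-west-2.amazonaws.com/kinesis-agent-windows/downloads/AWSKinesisTap.{Version}.nupkg",
}

# Resolve the {Version} variable reference against the Version key-value pair.
resolved_url = package_version["PackageUrl"].replace("{Version}", package_version["Version"])
print(resolved_url)  # ...downloads/AWSKinesisTap.1.0.0.106.nupkg
```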

`ConfigUpdate`  
Specifies that Kinesis Agent for Windows should periodically check for an updated `appsettings.json` configuration file stored in a file share, website, or Amazon S3. If an updated configuration file exists, it is downloaded and installed by Kinesis Agent for Windows. `ConfigUpdate` key-value pairs include the following:    
`Type`  
The value must be the string `ConfigUpdate`, and it is required.  
`Interval`  
Specifies how often, in minutes (represented as a string), to check for a new configuration file. This key-value pair is optional. If it is not specified, the default value is 5 minutes. If the value is less than 1, no update checking occurs.  
`Source`  
Specifies where to look for an updated configuration file. The file can reside on a file share (`file://`), a website (`http://`), or Amazon S3 (`s3://`). For example, a value of `s3://mycompany/config/appsettings.json` indicates that Kinesis Agent for Windows should check for updates to the `config/appsettings.json` file in the `mycompany` Amazon S3 bucket.  
The value of the `Source` key-value pair is case-sensitive for Amazon S3.
The value of the `Source` key-value pair is affected by variable substitution to facilitate the automatic selection of different configuration files. For more information about variable substitution, see [Configuring Sink Variable Substitutions](sink-object-declarations.md#configuring-kinesis-agent-windows-sink-variable-substitution).  
`Destination`  
Specifies where to store the configuration file on the local machine. This can be a relative path, an absolute path, or a path containing environment variable references such as `%PROGRAMDATA%`. If the path is relative, it is relative to the location where Kinesis Agent for Windows is installed. Typically the value should be `.\appsettings.json` (written as `".\\appsettings.json"` in the JSON file, because backslashes must be escaped). This key-value pair is required.  
`AccessKey`  
Specifies which access key to use when authenticating access to the configuration file in Amazon S3. This key-value pair is optional. We do not recommend using this key-value pair. For alternative authentication approaches that are recommended, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication).   
`SecretKey`  
Specifies which secret key to use when authenticating access to the configuration file in Amazon S3. This key-value pair is optional. We do not recommend using this key-value pair. For alternative authentication approaches that are recommended, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication).  
`Region`  
Specifies what Region endpoint to use when accessing the configuration file from Amazon S3. This key-value pair is optional.  
`ProfileName`  
Specifies which security profile to use when authenticating access to the configuration file in Amazon S3. For more information, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication). This key-value pair is optional.  
`RoleARN`  
Specifies which role to assume when authenticating access to the configuration file in Amazon S3 in a cross-account scenario. For more information, see [Configuring Authentication](sink-object-declarations.md#configuring-kinesis-agent-windows-authentication). This key-value pair is optional.
If no `ConfigUpdate` plugin is specified, then no configuration files are checked to determine whether a configuration file update is required.

The following is an example `appsettings.json` configuration file that demonstrates using the `PackageUpdate` and `ConfigUpdate` plugins. In this example, there is a package version file located in the `mycompany` Amazon S3 bucket named `config/agent-package-version.json`. This file is checked for changes approximately every 2 hours. If a different version of Kinesis Agent for Windows is specified in the package version file, that version is downloaded from the location specified in the package version file and installed. 

In addition, there is an `appsettings.json` configuration file stored in the `mycompany` Amazon S3 bucket named `config/appsettings.json`. Approximately every 30 minutes, that file is compared against the current configuration file. If they are different, the updated configuration file is downloaded from Amazon S3 and installed to the typical local location for the `appsettings.json` configuration file.

```
{
  "Sources": [
    {
      "Id": "ApplicationLogSource",
      "SourceType": "DirectorySource",
      "Directory": "C:\\LogSource\\",
      "FileNameFilter": "*.log",
      "RecordParser": "SingleLine"
    }
  ],
  "Sinks": [
    {
       "Id": "ApplicationLogKinesisFirehoseSink",
       "SinkType": "KinesisFirehose",
       "StreamName": "ApplicationLogFirehoseDeliveryStream",
       "Region": "us-east-1"
    }  
    ],
  "Pipes": [
    {
      "Id": "ApplicationLogSourceToApplicationLogKinesisFirehoseSink",
      "SourceRef": "ApplicationLogSource",
      "SinkRef": "ApplicationLogKinesisFirehoseSink"
    }
  ],
  "Plugins": [
    {
      "Type": "PackageUpdate",
      "Interval": "120",
      "PackageVersion": "s3://mycompany/config/agent-package-version.json"
    },
    {
      "Type": "ConfigUpdate",
      "Interval": "30", 
      "Source": "s3://mycompany/config/appsettings.json",
      "Destination": ".\\appsettings.json"
    }
  ]
}
```

# Kinesis Agent for Windows Configuration Examples
<a name="configuring-kaw-examples"></a>

 The `appsettings.json` configuration file is a JSON document that controls how Amazon Kinesis Agent for Microsoft Windows collects logs, events, and metrics. It also controls how Kinesis Agent for Windows transforms that data and streams it to various AWS services. For details about the source, sink, and pipe declarations in the configuration file, see [Source Declarations](source-object-declarations.md), [Sink Declarations](sink-object-declarations.md), and [Pipe Declarations](pipe-object-declarations.md). 

The following sections contain examples of configuration files for several different kinds of scenarios. 

**Topics**
+ [

## Streaming from Various Sources to Kinesis Data Streams
](#configuring-kaw-examples-sources)
+ [

## Streaming from the Windows Application Event Log to Sinks
](#configuring-kaw-examples-sinks)
+ [

## Using Pipes
](#configuring-kaw-examples-pipes)
+ [

## Using Multiple Sources and Pipes
](#configuring-kaw-examples-multiple)

## Streaming from Various Sources to Kinesis Data Streams
<a name="configuring-kaw-examples-sources"></a>

The following example `appsettings.json` configuration files demonstrate streaming logs and events from various sources to Kinesis Data Streams and from Windows performance counters to Amazon CloudWatch metrics.

### `DirectorySource`, `SysLog` Record Parser
<a name="configuring-kaw-examples-sources-ds-sl"></a>

The following file streams syslog format log records from all files with a `.log` file extension in the `C:\LogSource\` directory to the `SyslogKinesisDataStream` Kinesis Data Streams stream in the us-east-1 Region. A bookmark is established to ensure that all data from the log files is sent even if the agent is shut down and restarted later. A custom application can read and process the records from the `SyslogKinesisDataStream` stream.

```
{
  "Sources": [
    {
      "Id": "SyslogDirectorySource",
      "SourceType": "DirectorySource",
      "Directory": "C:\\LogSource\\",
      "FileNameFilter": "*.log",
      "RecordParser": "SysLog",
      "TimeZoneKind": "UTC",
      "InitialPosition": "Bookmark"
    }
  ],
  "Sinks": [
    {
      "Id": "KinesisStreamSink",
      "SinkType": "KinesisStream",
      "StreamName": "SyslogKinesisDataStream",
      "Region": "us-east-1"
    }
  ],
  "Pipes": [
    {
      "Id": "SyslogDS2KSSink",
      "SourceRef": "SyslogDirectorySource",
      "SinkRef": "KinesisStreamSink"
    }
  ]
}
```

### `DirectorySource`, `SingleLineJson` Record Parser
<a name="configuring-kaw-examples-sources-ds-slj"></a>

The following file streams JSON-formatted log records from all files with a `.log` file extension in the `C:\LogSource\` directory to the `JsonKinesisDataStream` Kinesis Data Streams stream in the us-east-1 Region. Before streaming, key-value pairs for the `ComputerName` and `DT` keys are added to each JSON object, with values for the computer name and the date and time the record is processed. A custom application can read and process the records from the `JsonKinesisDataStream` stream. 

```
{
  "Sources": [
    {
      "Id": "JsonLogSource",
      "SourceType": "DirectorySource",
      "RecordParser": "SingleLineJson",
      "Directory": "C:\\LogSource\\",
      "FileNameFilter": "*.log",
      "InitialPosition": 0
    }
  ],
  "Sinks": [
    {
      "Id": "KinesisStreamSink",
      "SinkType": "KinesisStream",
      "StreamName": "JsonKinesisDataStream",
      "Region": "us-east-1",
      "Format": "json",
      "ObjectDecoration": "ComputerName={ComputerName};DT={timestamp:yyyy-MM-dd HH:mm:ss}"
    }
  ],
  "Pipes": [
    {
      "Id": "JsonLogSourceToKinesisStreamSink",
      "SourceRef": "JsonLogSource",
      "SinkRef": "KinesisStreamSink"
    }
  ]
}
```

### `ExchangeLogSource`
<a name="configuring-kaw-examples-sources-exchange"></a>

The following file streams log records generated by Microsoft Exchange and stored in files with the `.log` extension in the `C:\temp\ExchangeLog\` directory to the `ExchangeKinesisDataStream` Kinesis data stream in the us-east-1 Region in JSON format. Although the Exchange logs are not in JSON format, Kinesis Agent for Windows can parse the logs and transform them to JSON. Before streaming, key-value pairs for the `ComputerName` and `DT` keys are added to each JSON object containing values for the computer name and the date and time the record is processed. A custom application can read and process the records from the `ExchangeKinesisDataStream` stream. 

```
{
  "Sources": [
    {
       "Id": "ExchangeSource",
       "SourceType": "ExchangeLogSource",
       "Directory": "C:\\temp\\ExchangeLog\\",
       "FileNameFilter": "*.log"
    }
  ],
  "Sinks": [
    {
      "Id": "KinesisStreamSink",
      "SinkType": "KinesisStream",
      "StreamName": "ExchangeKinesisDataStream",
      "Region": "us-east-1",
      "Format": "json",
      "ObjectDecoration": "ComputerName={ComputerName};DT={timestamp:yyyy-MM-dd HH:mm:ss}"
    }
  ],
  "Pipes": [
    {
      "Id": "ExchangeSourceToKinesisStreamSink",
      "SourceRef": "ExchangeSource",
      "SinkRef": "KinesisStreamSink"
    }
  ]
}
```

### `W3SVCLogSource`
<a name="configuring-kaw-examples-sources-iis"></a>

The following file streams Internet Information Services (IIS) for Windows log records, stored in the standard location for those files, to the `IISKinesisDataStream` Kinesis data stream in the us-east-1 Region. IIS is a web server for Windows. A custom application can read and process the records from the `IISKinesisDataStream` stream. 

```
{
  "Sources": [
    {
       "Id": "IISLogSource",
       "SourceType": "W3SVCLogSource",
       "Directory": "C:\\inetpub\\logs\\LogFiles\\W3SVC1",
       "FileNameFilter": "*.log"
    }
  ],
  "Sinks": [
    {
      "Id": "KinesisStreamSink",
      "SinkType": "KinesisStream",
      "StreamName": "IISKinesisDataStream",
      "Region": "us-east-1"
    }
  ],
  "Pipes": [
    {
      "Id": "IISLogSourceToKinesisStreamSink",
      "SourceRef": "IISLogSource",
      "SinkRef": "KinesisStreamSink"
    }
  ]
}
```

### `WindowsEventLogSource` with Query
<a name="configuring-kaw-examples-sources-wevq"></a>

The following file streams log events from the Windows system event log that have a level of `Critical` or `Error` (less than or equal to 2) to the `SystemKinesisDataStream` Kinesis data stream in the us-east-1 Region in JSON format. A custom application can read and process the records from the `SystemKinesisDataStream` stream. 

```
{
  "Sources": [
    {
         "Id": "SystemLogSource",
         "SourceType": "WindowsEventLogSource",
         "LogName": "System",
         "Query": "*[System/Level<=2]"
    }
  ],
  "Sinks": [
    {
      "Id": "KinesisStreamSink",
      "SinkType": "KinesisStream",
      "StreamName": "SystemKinesisDataStream",
      "Region": "us-east-1",
      "Format": "json"
    }
  ],
  "Pipes": [
    {
      "Id": "SLSourceToKSSink",
      "SourceRef": "SystemLogSource",
      "SinkRef": "KinesisStreamSink"
    }
  ]
}
```

### `WindowsETWEventSource`
<a name="configuring-kaw-examples-sources-etw"></a>

The following file streams Microsoft Common Language Runtime (CLR) exception and security events to the `ClrKinesisDataStream` Kinesis data stream in the us-east-1 Region in JSON format. A custom application can read and process the records from the `ClrKinesisDataStream` stream. 

```
{
  "Sources": [
    {
       "Id": "ClrETWEventSource",
       "SourceType": "WindowsETWEventSource",
       "ProviderName": "Microsoft-Windows-DotNETRuntime",
       "TraceLevel": "Verbose",
       "MatchAnyKeyword": "0x00008000, 0x00000400"
    }
  ],
  "Sinks": [
    {
      "Id": "KinesisStreamSink",
      "SinkType": "KinesisStream",
      "StreamName": "ClrKinesisDataStream",
      "Region": "us-east-1",
      "Format": "json"
    }
  ],
  "Pipes": [
    {
      "Id": "ETWSourceToKSSink",
      "SourceRef": "ClrETWEventSource",
      "SinkRef": "KinesisStreamSink"
    }
  ]
}
```
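The `MatchAnyKeyword` values form a bitmask of ETW provider keywords; for the `Microsoft-Windows-DotNETRuntime` provider, `0x00008000` corresponds to exception events and `0x00000400` to security events (keyword meanings are provider-defined, so verify them against the provider's manifest). The following sketch illustrates how such masks combine and how "match any" semantics work; the helper function is illustrative, not part of the agent.

```python
EXCEPTION_KEYWORD = 0x00008000  # DotNETRuntime exception events (per provider manifest)
SECURITY_KEYWORD = 0x00000400   # DotNETRuntime security events

# The combined mask that "0x00008000, 0x00000400" expresses in the configuration.
mask = EXCEPTION_KEYWORD | SECURITY_KEYWORD

def matches(event_keywords, match_any=mask):
    """An event passes if it has at least one keyword bit in common with the mask."""
    return (event_keywords & match_any) != 0
```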

### `WindowsPerformanceCounterSource`
<a name="configuring-kaw-examples-sources-wpc"></a>

The following file streams performance counters for total files open, total login attempts since reboot, number of disk reads per second, and percentage of free disk space to CloudWatch metrics in the us-east-1 Region. You can graph these metrics in CloudWatch, build dashboards from the graphs, and set alarms that send notifications when thresholds are exceeded. 

```
{
  "Sources": [
    {
      "Id": "PerformanceCounter",
      "SourceType": "WindowsPerformanceCounterSource",
      "Categories": [
        {
          "Category": "Server",
          "Counters": [
            "Files Open",
            "Logon Total"
          ]
        },
        {
          "Category": "LogicalDisk",
          "Instances": "*",
          "Counters": [
            "% Free Space",
            {
              "Counter": "Disk Reads/sec",
              "Unit": "Count/Second"
            }
          ]
        }
      ]
    }
  ],
  "Sinks": [
    {
      "Namespace": "MyServiceMetrics",
      "Region": "us-east-1",
      "Id": "CloudWatchSink",
      "SinkType": "CloudWatch"
    }
  ],
  "Pipes": [
    {
      "Id": "PerformanceCounterToCloudWatch",
      "SourceRef": "PerformanceCounter",
      "SinkRef": "CloudWatchSink"
    }
  ]
}
```

## Streaming from the Windows Application Event Log to Sinks
<a name="configuring-kaw-examples-sinks"></a>

The following example `appsettings.json` configuration files demonstrate streaming Windows application event logs to various sinks in Amazon Kinesis Agent for Microsoft Windows. For examples of using the `KinesisStream` and `CloudWatch` sink types, see [Streaming from Various Sources to Kinesis Data Streams](#configuring-kaw-examples-sources).

### `KinesisFirehose`
<a name="configuring-kaw-examples-sinks-fh"></a>

The following file streams `Critical` or `Error` Windows application log events to the `WindowsLogFirehoseDeliveryStream` Firehose delivery stream in the us-east-1 Region. If connectivity to Firehose is interrupted, events are queued in memory first and then, if necessary, to a file on disk until connectivity is restored. When connectivity returns, the queued events are sent, followed by any new events.

You can configure Firehose to store the streamed data to several different kinds of storage and analysis services based on data pipeline requirements. 

```
{
  "Sources": [
    {
         "Id": "ApplicationLogSource",
         "SourceType": "WindowsEventLogSource",
         "LogName": "Application",
         "Query": "*[System/Level<=2]"
    }
  ],
  "Sinks": [
    {
       "Id": "WindowsLogKinesisFirehoseSink",
       "SinkType": "KinesisFirehose",
       "StreamName": "WindowsLogFirehoseDeliveryStream",
       "Region": "us-east-1",
       "QueueType": "file"
    }  
    ],
  "Pipes": [
    {
      "Id": "ALSource2ALKFSink",
      "SourceRef": "ApplicationLogSource",
      "SinkRef": "WindowsLogKinesisFirehoseSink"
    }
  ]
}
```

### `CloudWatchLogs`
<a name="configuring-kaw-examples-sinks-cwl"></a>

The following file streams `Critical` or `Error` Windows application log events to CloudWatch Logs log streams in the `MyServiceApplicationLog-Group` log group. The name of each stream begins with `Stream-`. It ends with the four-digit year, two-digit month, and two-digit day that the stream was created, all concatenated (for example, `Stream-20180501` is the stream created on May 1, 2018). 

```
{
  "Sources": [
    {
         "Id": "ApplicationLogSource",
         "SourceType": "WindowsEventLogSource",
         "LogName": "Application",
         "Query": "*[System/Level<=2]"
    }
  ],
  "Sinks": [
    {
      "Id": "CloudWatchLogsSink",
      "SinkType": "CloudWatchLogs",
      "LogGroup": "MyServiceApplicationLog-Group",
      "LogStream": "Stream-{timestamp:yyyyMMdd}",
      "Region": "us-east-1",
      "Format": "json"
    }
  ],
  "Pipes": [
    {
      "Id": "ALSource2CWLSink",
      "SourceRef": "ApplicationLogSource",
      "SinkRef": "CloudWatchLogsSink"
    }
  ]
}
```
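The `{timestamp:yyyyMMdd}` substitution in the `LogStream` value uses a .NET-style date format string. As a rough cross-check of the naming scheme described above, the equivalent formatting in Python's `strftime` notation (`yyyyMMdd` corresponding to `%Y%m%d`) looks like this; the helper function is illustrative only.

```python
from datetime import datetime


def stream_name_for(when):
    # .NET "yyyyMMdd" corresponds to strftime "%Y%m%d":
    # four-digit year, two-digit month, two-digit day, concatenated.
    return "Stream-" + when.strftime("%Y%m%d")


name = stream_name_for(datetime(2018, 5, 1))  # "Stream-20180501"
```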

## Using Pipes
<a name="configuring-kaw-examples-pipes"></a>

The following example `appsettings.json` configuration file demonstrates using pipe-related features.

 This example streams log entries from files in the `C:\LogSource\` directory to the `ApplicationLogFirehoseDeliveryStream` Firehose delivery stream. It includes only lines that match the regular expression specified by the `FilterPattern` key-value pair. Specifically, only lines in the log files that begin with `10` or `11` followed by a comma are streamed to Firehose. 

```
{
  "Sources": [
    {
      "Id": "ApplicationLogSource",
      "SourceType": "DirectorySource",
      "Directory": "C:\\LogSource\\",
      "FileNameFilter": "*.log",
      "RecordParser": "SingleLine"
    }
  ],
  "Sinks": [
    {
       "Id": "ApplicationLogKinesisFirehoseSink",
       "SinkType": "KinesisFirehose",
       "StreamName": "ApplicationLogFirehoseDeliveryStream",
       "Region": "us-east-1"
    }  
    ],
  "Pipes": [
    {
      "Id": "ALSourceToALKFSink",
      "Type": "RegexFilterPipe",
      "SourceRef": "ApplicationLogSource",
      "SinkRef": "ApplicationLogKinesisFirehoseSink",
      "FilterPattern": "^(10|11),.*"
    }
  ]
}
```
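The `RegexFilterPipe` match can be checked outside the agent. This sketch applies the same pattern with Python's `re` module (the agent's .NET regex engine behaves the same for this simple pattern); the sample log lines are hypothetical.

```python
import re

# The same pattern used by the RegexFilterPipe above.
FILTER_PATTERN = re.compile(r"^(10|11),.*")

lines = [
    "10,2018-05-01,info,started",   # streamed: begins with "10,"
    "11,2018-05-01,warn,retrying",  # streamed: begins with "11,"
    "12,2018-05-01,error,failed",   # dropped: begins with "12,"
    "notice: 10 connections open",  # dropped: "10" is not at the start of the line
]

streamed = [line for line in lines if FILTER_PATTERN.match(line)]
```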

## Using Multiple Sources and Pipes
<a name="configuring-kaw-examples-multiple"></a>

The following example `appsettings.json` configuration file demonstrates using multiple sources and pipes.

This example streams the application, security, and system Windows Event Logs to the `EventLogStream` Firehose delivery stream using three sources, three pipes, and a single sink.

```
{
    "Sources": [
		{
		  "Id": "ApplicationLog",
		  "SourceType": "WindowsEventLogSource",
		  "LogName": "Application"
		},
		{
		  "Id": "SecurityLog",
		  "SourceType": "WindowsEventLogSource",
		  "LogName": "Security"
		},
		{
		  "Id": "SystemLog",
		  "SourceType": "WindowsEventLogSource",
		  "LogName": "System"
		}
    ],
    "Sinks": [
		{
		  "Id": "EventLogSink",
		  "SinkType": "KinesisFirehose",
		  "StreamName": "EventLogStream",
		  "Format": "json"
		},
    ],
    "Pipes": [
		{
		  "Id": "ApplicationLogToFirehose",
		  "SourceRef": "ApplicationLog",
		  "SinkRef": "EventLogSink"
		},
		{
		  "Id": "SecurityLogToFirehose",
		  "SourceRef": "SecurityLog",
		  "SinkRef": "EventLogSink"
		},
		{
		  "Id": "SystemLogToFirehose",
		  "SourceRef": "SystemLog",
		  "SinkRef": "EventLogSink"
		}
    ]
}
```

# Configuring Telemetrics
<a name="telemetrics-configuration-option"></a>

To help provide better support, Amazon Kinesis Agent for Microsoft Windows by default collects statistics about the operation of the agent and sends them to AWS. This information contains no personally identifiable information, and it doesn't include any data that you gather or stream to AWS services. We collect approximately 1–2 KB of this metric data every 60 minutes. 

You can opt out of the collection and transmission of these statistics. To do this, add the following key-value pair to the `appsettings.json` configuration file at the same level as sources, sinks, and pipes:

```
"Telemetrics": 
    { "off": "true" }
```

For example, the following configuration file configures a source, sink, and pipe, and also disables telemetrics:

```
{
  "Sources": [
    {
      "Id": "ApplicationLogSource",
      "SourceType": "DirectorySource",
      "Directory": "C:\\LogSource\\",
      "FileNameFilter": "*.log",
      "RecordParser": "SingleLine"
    }
  ],
  "Sinks": [
    {
       "Id": "ApplicationLogKinesisFirehoseSink",
       "SinkType": "KinesisFirehose",
       "StreamName": "ApplicationLogFirehoseDeliveryStream",
       "Region": "us-east-1"
    }  
    ],
  "Pipes": [
    {
      "Id": "ApplicationLogSourceToApplicationLogKinesisFirehoseSink",
      "SourceRef": "ApplicationLogSource",
      "SinkRef": "ApplicationLogKinesisFirehoseSink"
    }
  ],
  "Telemetrics":
    {
      "off": "true"
    }
}
```

We collect the following metrics when telemetry is enabled:

`ClientId`  
A unique ID that is automatically assigned when the software is installed.

`ClientTimestamp`  
The date and time the telemetry is collected.

`OSDescription`  
A description of the operating system.

`DotnetFramework`  
The version of the .NET Framework that the agent is running on.

`MemoryUsage`  
The amount of memory consumed by Kinesis Agent for Windows, in MB.

`CPUUsage`  
The CPU usage of Kinesis Agent for Windows, expressed as a decimal fraction. For example, 0.01 means 1 percent.

`InstanceId`  
The Amazon EC2 instance ID if Kinesis Agent for Windows is running on an Amazon EC2 instance.

`InstanceType`  
The Amazon EC2 instance type if Kinesis Agent for Windows is running on an Amazon EC2 instance.

In addition, we collect the metrics listed in [List of Kinesis Agent for Windows Metrics](source-object-declarations.md#kinesis-agent-metric-list).