

# Demand Planning

Demand Planning is a web-based application that allows business users to create, collaborate on, and publish demand plans. Demand Planning generates forecasts using proprietary machine learning algorithms based on historical forecasting experience.

**Topics**
+ [Terminology used in Demand Planning](servicename-terminology.md)
+ [Create your first demand plan](onboarding.md)
+ [Data Validation and Demand Pattern Analysis](data-validation-analysis.md)
+ [Forecast Algorithms](forecast-algorithims.md)
+ [Forecast based on demand drivers](demand_drivers.md)
+ [Product lineage](product_lineage.md)
+ [Product lifecycle](product_lifecycle.md)
+ [Manage demand plans](dp_dashboard.md)
+ [Forecast model analyzer](forecast_model_analyzer.md)
+ [Manage Demand Plan settings](settings.md)
+ [Role-based access control](rolebased.md)

# Terminology used in Demand Planning


The following terminology is used frequently in Demand Planning.
+ **Enterprise demand plan** – A single planning workbook that consolidates forecast input from multiple stakeholders to create a unified forecast. It can consist of multiple planning cycles, enabling iterative refinement of the forecast as the forecast input dataset evolves. The enterprise demand plan displays one of two statuses:
  + **Active** – The planning cycle is open and you can edit your forecast.
  + **Published** – The planning cycle is closed, and you cannot edit your forecast. However, you can view the demand plan.
+ **Demand planning cycle** – The time taken to create and finalize demand plans, which includes generating forecasts and collaborating with stakeholders to adjust and publish demand plans.
+ **Dataset** – A collection of data used for generating forecasts, such as historical sales orders or product information.
+ **Forecast granularity** – Defines how you want to create and manage the forecast. You can use a combination of product, location, customer, and channel dimensions. You can also choose the time interval at which forecast data is aggregated: by day, week, month, or year for each product in the dataset. For example, if your forecast granularity is set to Daily, you see a daily forecast for each product in the dataset.
**Note**  
Demand Planning uses the Gregorian calendar for planning. The default start day of the week is Monday.
+ **Forecast configuration** – The set of configurations for forecast generation. This includes the planning cycle configuration and the time horizon, granularity, and hierarchy configuration that influence how Demand Planning generates the forecast.
+ **System generated forecast** – Also known as the baseline forecast. This is the forecast that the system generates from historical data. It provides the initial demand prediction before you apply any overrides.
+ **Override** – A modification that you make to the system generated forecast.
+ **Published demand plan** – The final output of the planning workbook. You can choose to publish the finalized demand plan to downstream inventory and supply planning systems for implementation.
+ **Product lineage** – You can establish links between products and their previous versions or alternate products and set rules for the amount of historical data to be used in forecasting. For more information, see [Product lineage](product_lineage.md).
+ **Product lifecycle** – The product lifecycle refers to the various stages of a product from introduction to End of Life (EoL). For more information on product lifecycle, see [Product lifecycle](product_lifecycle.md). 
+ **Demand driver** – Factors that directly influence the level of demand for a particular product. For example, advertising and marketing efforts, pricing strategies, and so on. For more information on demand drivers, see [Forecast based on demand drivers](demand_drivers.md).
+ **Forecast lag** – The time between when the forecast was created and the actual demand. For example, a forecast created in January for February has a one-month lag. Similarly, a forecast created in January for March has a two-month lag.
+ **Forecast Model Analyzer** – You can use this tool to run trial or experimental forecasts by varying test conditions and reviewing the results of different forecast methods. You can use the results to compare and evaluate model performance and select the best model based on business priorities.
+ **Forecast Lock** – You can use the forecast lock feature to lock specific periods in your forecast to prevent any further edits or adjustments.
+ **Intra-cycle Forecast Refresh** – You can refresh the forecast mid-cycle and incorporate the latest forecast input data without finalizing the demand plan.
+ **# of Forecasts** – Number of unique time-series forecasts, where each time series represents a distinct combination of product, site, customer, and channel as per the demand plan configuration.
+ **Critical Rules** – Data validation rules that, if violated, can block forecast creation. For more information, see [Prerequisites before uploading your dataset](data_quality.md).
+ **Data Validation** – The process of checking data for completeness, correctness, and consistency before using it for forecasting.
+ **Demand Pattern Analysis** – Exploratory Data Analysis of forecast input data including classifying historical demand data into different patterns.
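The forecast-lag definition above can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the year/month-argument representation is an assumption, not part of Demand Planning.

```python
def forecast_lag_months(created_year: int, created_month: int,
                        target_year: int, target_month: int) -> int:
    """Lag in whole months between when a forecast is created and the
    period of actual demand it covers."""
    return (target_year * 12 + target_month) - (created_year * 12 + created_month)

# A January forecast for February is a one-month lag; for March, two months.
forecast_lag_months(2024, 1, 2024, 2)  # -> 1
forecast_lag_months(2024, 1, 2024, 3)  # -> 2
```

The same formula handles year boundaries, so a December forecast for the following January is still a one-month lag.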

# Create your first demand plan


When you log in to Demand Planning for the first time, you can view the onboarding pages that highlight key product features and help you get familiar with Demand Planning capabilities.

**Overview of the process:**

To create your first forecast, from the left navigation bar, choose **Demand Planning**, **Manage Demand Plan**, and **Create forecast**. The system guides you through the following steps. For more information, see [Role-based access control](rolebased.md).

1. *Data ingestion* – Before proceeding with configuration, the system verifies that required datasets are ingested into Data Lake. You need the following, at minimum. For more information about which table and columns are used by Demand Planning, including prerequisites, see [Demand Planning](required_entities.md).
   + Required: Outbound Order Line and Product data
   + Recommended: Product Alternate and Supplementary Time Series data

1. *Plan configuration* – After data ingestion is complete, you'll configure various aspects of your demand plan, including forecast dimensions, time frames, settings, and scheduling options. After Demand Planning is configured, you can view or modify the demand plan configuration settings by choosing **Settings**, **Organization**, and **Demand Planning**. 

1. *Plan creation* – After configuration, choosing **Generate Forecast** initiates three sub-processes:
   + Data Validation: System validates data quality and completeness
   + Demand Pattern Analysis & Recommendations: System analyzes historical patterns and provides insights
   + Forecast Creation: System generates the forecast

In an ideal scenario, where no data validation errors are found, the system smoothly proceeds through all three steps, creating both the demand pattern analysis report and forecast. However, if any data validation errors are detected, the system halts both the forecast creation and demand pattern analysis until the errors are resolved. Work with your data administrator to correct the underlying data issues, and choose **Retry** to try forecast creation again.

1. On the **Configure Demand Planning** page, there are five steps to configure Demand Planning.
   + **Scope** – Defines the dimensions and the time frame for Demand Planning to generate forecasts.
   + **Configure your dataset** – Defines the outbound_order_line dataset. This option is mandatory for Demand Planning to generate an accurate forecast. You also define how you want Demand Planning to handle negative quantity values in the outbound_order_line dataset. For more information about mandatory and optional Demand Planning fields, see [Data entities and columns used in AWS Supply Chain](data-model.md).
   + **Forecast Settings** – Set global parameters to determine the forecast period, minimum forecast value, and initialization values for new products with no alternate data.
   + **Scheduler** – You can define how and when forecasts should be refreshed and published.
   + **Organization Settings** – Defines where your Demand Plans will be published. It also shows other configuration options within the application.

1. Under **Scope**, **Planning Horizon**, select the following:
   + **Time Interval** – Select the time interval from the choice of daily, weekly, monthly, or yearly options. The time interval is used to aggregate and analyze data. Choose a time interval based on the nature of your business and the availability and granularity of historical data.
   + **Time Horizon** – The time horizon is the specific period for which a forecast is generated. The value must be a whole number with a minimum of 1 and a maximum of 500. The amount of historical data available also constrains the time horizon. Make sure that at least one product in the outbound_order_line dataset has sales history of at least four times the time horizon. For example, if you set **Time Horizon** to 26 and **Time Interval** to *weekly*, the minimum order data requirement is 26 × 4 = 104 weeks.

   Under **Forecast Granularity**, **Required Hierarchy**, select the parameters to define your forecast hierarchy. The Product ID attribute is mandatory and is automatically selected as the last level in the hierarchy. You can choose **Add level** to add additional hierarchy levels from product_group_id, product_type, brand_name, color, display_desc, and parent_product_id. Make sure that the required hierarchy attributes have information in the product dataset, because you can use these attributes to filter the demand plan.

   Under **Optional Hierarchy**, choose **Add level** to add up to five attributes from **Site**, **Channel**, and **Customer** to better manage your forecast. The supported columns from the *outbound_order_line* dataset are:
   + Site hierarchy = ship_from_site_id, ship_to_site_id, ship_to_site_address_city, ship_to_address_state, ship_to_address_country
   + Channel hierarchy = channel_id
   + Customer hierarchy = customer_tpartner_id

   Make sure that the selected hierarchy attributes have information in the dataset, because these attributes are used to filter demand plans.

1. Choose **Continue**.

1. On the **Configure your dataset** page, under **Configure Forecast Input**, you should configure the required and recommended datasets.
**Note**  
AWS Supply Chain recommends uploading two to three years of outbound order line history as an input to generate an accurate forecast. This duration allows the forecasting models to capture your business cycles and ensure a more robust and reliable prediction. For improved forecast accuracy, it is also recommended to include product attributes such as *brand*, *product_group_id*, and *price* in the product dataset.
   + Required Datasets – The *outbound_order_line* and *product* data entities are required to generate a forecast.
   + Recommended Datasets – The *product_alternate* and *supplementary_time_series* data entities are optional. You can generate a forecast without these data entities, but providing them improves forecast quality.

1. Under **Required Datasets**, expand **Historical Demand** and choose **Configure** to set how missing data is handled. The *outbound_order_line* dataset is the primary source of historical demand.
   + **Ignore** – Select this option if you want AWS Supply Chain to ignore products with a missing order_date before creating the forecast.
   + **Replace with zero** – Select this option if you want AWS Supply Chain to replace missing order_date fields with zero in the final requested quantity by default.

1. No additional configuration is required for the *product* data entity. Product attributes are used for filtering, configuring the hierarchy, and training the machine learning model.

1. Under **Recommended Datasets**, no additional configuration is required for *product_lineage*. You can use the *product_alternate* data entity to provide information on alternate or previous versions of a product. For more information on product lineage, see [Product lineage](product_lineage.md).

1. Select **Demand Drivers** if you have demand driver information such as promotions and price changes; you can use the *supplementary_time_series* data entity to ingest this data. You can select up to 13 demand drivers and configure the aggregation and missing-data filling strategy. For more information on demand drivers, see [Forecast based on demand drivers](demand_drivers.md).

1. Choose **Continue**.

1. On the **Forecast Settings** page, you need to configure the following:
   + Choose the forecast model/ensembler for the plan. AWS Supply Chain Demand Planning assigns a default forecast model to the demand plan. You can change the default if you choose to.
**Note**  
If you do not change the selection, the AWS Supply Chain assigned default model is used.
   + Under **Forecast Start Date**, enter the forecast start date to start the planning cycle.
     + Max History Date – Select this option if you want to start forecasting from the next time period after the last complete historical data point. 
     + Plan Execution Date – Demand Planning uses this date when the forecast is triggered as the start of the planning cycle. 
     + Custom Date – Select this option to specify your own forecast start date. If the selected forecast start date is later than the outbound_order_line dataset end date, the default planning cycle start date is used. If the selected forecast start date is before the outbound_order_line start date, or if the length of the demand history is insufficient, the forecast fails and displays an error. For more information, see [Prerequisites before uploading your dataset](data_quality.md). It is recommended to select the first of the month for monthly intervals or a Monday for weekly intervals. If you choose a different date, Demand Planning automatically adjusts to the nearest default date. For example, if you select a Wednesday as the forecast start date, Demand Planning selects the next Monday as the forecast start date for weekly intervals. Similarly, selecting May 10, 2024 results in June 1, 2024 as the planning cycle start date for monthly intervals.
   + Under **Handling Partial History and Filling Strategy**, select one of the following:
     + Trim Partial History – Select this option to trim the partial history. For example, the illustration below explains how trim partial history works for the following settings: 
       + Weekly granularity start period – Monday (default Demand Planning setting)
       + Monthly granularity start period – 1st of the Gregorian Calendar Month (default Demand Planning setting)
       + Demand plan granularity – Weekly
       + Forecast start date – Plan run date
       + Trim partial history – Set to *Yes*
       + Plan run date – Set to *Monday*
       + Forecast horizon – Four weeks  
![\[Trim partial history example\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/Trim_history.png)
     +  Include Partial History – Select this option to include the partial history and use a filling strategy to fill the gaps. 

       For example, if you are forecasting at a monthly level and your last month in history has only 10 days of data, you can choose to trim or exclude the 10 days of data. If you choose not to trim or exclude the 10 days of data, you can select a filling strategy to fill data for the rest of the month.
        + Zero – Use this filling method when no sales activity is expected for certain periods. This may lead to a lower forecast and is best for seasonal data with expected zero demand.
        + NaN – Use this filling method to mark data as missing.
        + Mean – Use this filling method to smooth out fluctuations.
        + Median – Use this filling method to minimize the influence of outliers or data skewness.
        + Min – Use this filling method to represent the lowest possible value for conservative forecasting.
        + Max – Use this filling method to assume the highest possible value for optimistic forecasting.
   + Under **Configure Forecast Periods in...**, select the start and end dates for New Product Introduction (NPI) and End-of-Life (EOL) products. For more information, see [Product lifecycle](product_lifecycle.md).
   + Under **New Product Initial Forecast**, enter an initial forecast value for products with no demand history or product lineage to make the products searchable in the demand plan web application and to create a forecast. Specify the value and the periods to apply.
**Note**  
The time period displayed depends on the time period you chose under **Time intervals** on the **Planning Horizon** page. For example, if you chose *Monthly* under **Time intervals**, you can specify the number of months before or after which to start and stop the forecast for products with no demand history.
   + The planning cycle start date is based on the last order date in the outbound order line dataset. If the time interval configuration is:
     + **Daily** – Planning cycle start date will be the day after the last order date. For example, if the last order date is October 30, 2023, the planning cycle start date will be October 31, 2023.
     + **Weekly or Monthly** – When the last order date is the same as the time boundary, the planning cycle start date will be after a week or month. For example, when the last order date is October 29, 2023 (which is a Sunday and Demand Planning's week time boundary), the planning cycle start date will be October 30, 2023.

        When the last order date falls within the time boundary, Demand Planning will trim the order history for the last time window and create forecast from the new period. For example, when the last order date is November 01, 2023 (which is a Wednesday and not in the Demand Planning's week time boundary), the planning cycle start date will be October 30, 2023. Demand Planning will ignore the order history from October 30, 2023 to November 01, 2023.
   + Under **Accuracy Metrics Preferences**, set up three different lags for your organization.

1. Choose **Continue**.

1. On the **Demand Plan Publish Scheduler** page, under **How do you like to manage ongoing forecast refresh and demand plan release?**, choose **Auto** to view your next forecast plan published on the Demand Planning page. 

   Under **Set the release frequency for the final demand plan**, choose the frequency at which you want to publish the demand plans to the downstream processes and close the planning cycle.

   (Optional) Under **Set the intra-cycle forecast refresh frequency**, select the frequency of the forecast update within the same planning cycle without releasing the interim updates to the downstream processes or closing the planning cycle. You can also select **None** to opt out of intra-cycle forecast refresh.

1. Choose **Continue**.

1. Under **Organization Settings**, note the Amazon Simple Storage Service (Amazon S3) path where the demand plans are published.
**Note**  
You can also find the Amazon S3 path for the published demand plans on the **Settings** page. For more information, see [Manage Demand Plan settings](settings.md).  
Forecast is generated only when you ingest data into AWS Supply Chain. Make sure that all the required and optional attributes that you chose have information in the dataset.
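The planning-cycle start-date examples above (weekly intervals, Monday-start weeks, partial-week trimming) can be sketched as follows. This is an illustration of the documented examples under stated assumptions, not the service's actual implementation.

```python
from datetime import date, timedelta

def weekly_cycle_start(last_order_date: date) -> date:
    """Planning cycle start for weekly intervals (weeks start Monday).

    If the last order date lands exactly on the week boundary (Sunday),
    the cycle starts the next day; otherwise the partial week is trimmed
    and the cycle starts at that week's Monday.
    """
    if last_order_date.weekday() == 6:  # Sunday closes the week
        return last_order_date + timedelta(days=1)
    # Trim the incomplete week back to its Monday.
    return last_order_date - timedelta(days=last_order_date.weekday())

weekly_cycle_start(date(2023, 10, 29))  # Sunday boundary -> 2023-10-30
weekly_cycle_start(date(2023, 11, 1))   # mid-week, trimmed -> 2023-10-30
```

Both documented examples (last order date October 29, 2023 and November 1, 2023) yield the same planning cycle start of October 30, 2023.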

# Data Validation and Demand Pattern Analysis


Data Validation and Demand Pattern Analysis tools help you evaluate the quality of your data and identify key patterns influencing your demand forecasts. These insights help you understand which patterns are likely to impact demand.

**Topics**
+ [Data Validation](data-validation.md)
+ [Demand Pattern and Recommendation](demand-patterns.md)

# Data Validation


Data Validation is a crucial step early in the forecast creation process that ensures the input data meets the necessary quality standards for forecasting. This feature runs a series of checks on your data, surfacing data errors that need to be fixed before proceeding to forecast creation, helping you identify and resolve issues early in the process.

The data validation step is preceded by a set of preprocessing activities to prepare the data, based on the plan settings or definition, which includes the following:
+ *Aggregation to align with forecast granularity.* For example:
  + If your forecast granularity is set to weekly, daily demand history data will be aggregated to weekly totals.
  + If your demand history contains product, site, customer, and channel dimensions, but your forecast granularity is set to product-site level, the system will aggregate sales across all customers and channels for each product-site combination.
+ *Data transformations from Demand Plan settings.* These transformations are based on your Demand Planning configuration settings. For example, if you have configured the system to ignore negative values, these will be handled accordingly.
+ *Product lineage consideration*. The system takes into account product relationships, such as predecessor-successor pairs or product alternatives, as defined in your configuration.
+ *Supplementary time series transformation*. The system transforms supplementary time series data into demand drivers that can influence the forecast generation. These transformed demand drivers provide additional context to the items above. 
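The first preprocessing step, aggregation to the forecast granularity, can be sketched with plain Python. This is a minimal illustration assuming Monday-start weeks (the Demand Planning default); the function names are ours, not the service's.

```python
from collections import defaultdict
from datetime import date, timedelta

def to_week_start(d: date) -> date:
    """Return the Monday that begins the week containing d."""
    return d - timedelta(days=d.weekday())

def aggregate_weekly(daily_records):
    """Roll daily (date, quantity) records up to Monday-start weekly totals,
    as in the weekly-granularity example above."""
    weekly = defaultdict(int)
    for day, qty in daily_records:
        weekly[to_week_start(day)] += qty
    return dict(weekly)

daily = [(date(2023, 10, 30), 5), (date(2023, 10, 31), 3), (date(2023, 11, 6), 2)]
aggregate_weekly(daily)  # -> {date(2023, 10, 30): 8, date(2023, 11, 6): 2}
```

Aggregating across customer and channel dimensions to a product-site level works the same way, with the grouping key changed from the week start to the (product, site) pair.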

**Topics**
+ [Data Validation Process](data-validation-process.md)
+ [Data Validation Report Access](data-validation-report-access.md)
+ [Data Validation Error Export](data-validation-error-export.md)
+ [Data Validation Rules](data-validation-rules.md)

# Data Validation Process


After the preprocessing described above completes, the data validation process begins. Data validation consists of three steps:

1. Data Structure Validation - This step includes checks to ensure all required tables and columns exist and have data before any transformation begins. This stage confirms your data tables are properly set up. For more information, see [Demand Planning](required_entities.md).

1. Data Quality Validation - This step ensures that data content is complete and error-free. It checks for:
   + Missing values in essential fields
   + Validation checks on data formats and validity of dates
   + Data completeness required for building forecast input

   This ensures all necessary data is present and valid before proceeding with transformations.

1. Forecasting Eligibility Validation: This step ensures that sufficient data is provided to create a forecast, including:
   + Minimum historical data requirements
   + Time series length limitations
   + Other algorithm-specific constraints

   This stage ensures that your data is suitable for generating forecasts.

Even a single validation failure will stop the forecast creation process. You must work with your data administrator to correct the underlying data issues, then choose **Retry** to try forecast creation again.
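The three-stage, stop-on-first-failure behavior described above can be sketched as follows. The stage names mirror the documentation; the check functions and record structure are hypothetical.

```python
def run_validation(dataset, stages):
    """Run validation stages in order; any failure halts the process,
    mirroring the fail-fast behavior described above."""
    for name, checks in stages:
        errors = [msg for check, msg in checks if not check(dataset)]
        if errors:
            return {"stage": name, "status": "failed", "errors": errors}
    return {"status": "passed"}

# Hypothetical checks standing in for the three documented steps.
stages = [
    ("Data Structure", [(lambda d: "product" in d, "missing product table")]),
    ("Data Quality", [(lambda d: all(r.get("order_date") for r in d["outbound_order_line"]),
                       "missing order_date values")]),
    ("Forecasting Eligibility", [(lambda d: len(d["outbound_order_line"]) >= 4,
                                  "insufficient history")]),
]

dataset = {"product": [], "outbound_order_line": [{"order_date": f"2024-01-0{i}"} for i in range(1, 5)]}
run_validation(dataset, stages)  # -> {'status': 'passed'}
```

Because later stages never run after a failure, a structural error (for example, a missing table) surfaces before any data-quality or eligibility messages, matching the report behavior described in the next section.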

# Data Validation Report Access


When creating a forecast for the first time, navigate to the **Demand Planning** module in AWS Supply Chain and choose **Create a Plan**. The system guides you through three steps: Data Ingestion, Plan Configuration, and finally, Forecast Generation. After completing data ingestion and plan configuration, choose **Generate Forecast** to initiate data validation. Each new forecast generation creates a fresh validation report based on the current state of your data.

 Data Structure validation failures (such as missing tables or columns) appear as banner messages at the top of your screen. These fundamental issues must be resolved before proceeding. After data structure validation passes, the system proceeds with Data Quality and Forecasting Eligibility validations. Any failures in these stages are detailed in the validation report, accessible by choosing **Data Validations**.

## Subsequent Forecast Creation


For subsequent forecasts, choose **Generate Forecast**. You will see a banner displaying three steps, with data validation as the first step. The same validation behavior applies. Structural issues appear as banners, while other validation failures are available in the detailed report.

## Report Content


The Data Validation Issues report provides a comprehensive view of Data Quality and Forecasting Eligibility validation failures that need to be addressed. The report displays the following:
+ Dataset: Identifies the specific dataset where the issue occurs
+ Rule: Describes the type of validation that failed
+ Error Date/Time: Shows when the error was detected
+ Status Message: Provides detailed information about the records affected and recommended actions

To help navigate and resolve these issues, you can do the following:
+ Use the search box to find specific types of errors
+ Filter by dataset using the drop-down menu
+ Download a detailed report containing all validation failures
+ View **Records affected** for each validation to understand the scope of the issue

# Data Validation Error Export


Error records can be exported by choosing **Download** on the **Data Validation** report page when the validation checks individual data points that failed.

**Note**  
The export option is not available when the validation is checking structural, systemic, or aggregate-level requirements. 

Export is available for the following:
+ Validation checks for content or quality of existing data
+ Validations that involve checking for missing or invalid values in existing fields
+ Data Quality Validations (such as null checks, and date range validations)

**Note**  
 The system limits error record downloads to a maximum of 10,000 rows. If the total error count exceeds this limit, a notification will appear on the screen. Work with your data administrator to review and resolve all errors in the source table. 

 Export is not available for the following:
+ Validation checks for structural elements (such as table existence or column presence)
+ Validations that involve system-level constraints (such as size limits, counts, and thresholds)
+ Forecasting eligibility checks (such as time series limits or active product counts)

# Data Validation Rules


The following validations are performed prior to forecast creation. For more information, see [Demand Planning](required_entities.md).



| Rule Type | Rule | Datasets | Description | Export error records? | 
| --- | --- | --- | --- | --- | 
| Data Structure Validation | Mandatory columns existence validation | Product, Outbound order line, Supplementary time series | Verifies presence of critical columns in required datasets. Outbound order line: product_id, order_date, final_quantity_requested. Product: id, description. Verifies presence of critical columns in recommended datasets, if provided. Supplementary time series: id, order_date, time_series_name, time_series_value | No | 
| Data Structure Validation | Granularity columns existence validation | Product, Outbound order line | Verifies presence of columns set as forecast granularity, if set in the demand plan settings. Outbound order line: product_id, ship_from_site_id, ship_to_site_id, ship_to_site_address_city, ship_to_address_state, ship_to_address_country, channel_id, customer_tpartner_id. Product: id, product_group_id, product_type, brand_name, color, display_desc, parent_product_id | No | 
| Data Structure Validation | Active product's history validation | Product, Outbound order line, Product Alternate | Verifies that there is at least one active product that has history on its own or through product lineage | No | 
| Data Quality Validation | Missing values in mandatory columns validation | Product, Outbound order line, Supplementary time series | Checks for null/empty values in the mandatory columns specified in the Mandatory columns existence check | Yes | 
| Data Quality Validation | Missing values in granularity columns validation | Product, Outbound order line | Checks for null/empty values in the granularity columns specified in the Granularity columns existence check | Yes | 
| Data Quality Validation | Date Range validation | OutboundOrderLine, SupplementaryTimeSeries | The order_date column in the dataset must contain dates in a valid range: from 01/01/1900 00:00:00 to 12/31/2050 00:00:00. | Yes | 
| Forecasting Eligibility Validation | Timeseries per Predictor validation | OutboundOrderLine | The timeseries per predictor must not exceed 5,000,000. "Timeseries per predictor" is calculated by taking the count of unique values for the product_id column and each of the forecast granularity columns, and then taking the product of all those counts. | No | 
| Forecasting Eligibility Validation | Count of active products validation | Product | The number of active products with records in the OOL dataset must not exceed 800,000. | No | 
| Forecasting Eligibility Validation | Historical data sufficiency validation | Outbound order line | Verifies that at least one product in the dataset has sufficient historical demand data to generate reliable forecasts. The forecast horizon must be no greater than 1/3 of the time range in the dataset (if training a new auto predictor) or 1/4 of the time range in the dataset (if training an existing auto predictor). There is also a global maximum forecast horizon, which is 500. | No | 
| Forecasting Eligibility Validation | Row Count validation | Partitioned OutboundOrderLine | The number of records in the partitioned OOL dataset must not exceed 3,000,000,000. There are certain forecast models that have smaller limits that are checked here as well, if those models are being used. | No | 
| Forecasting Eligibility Validation | Maximum Timeseries validation | Partitioned OutboundOrderLine | The number of distinct timeseries must not exceed the model's limit, if there is one. "Distinct timeseries" is defined as the number of distinct rows in the dataset when product_id and all forecast granularity columns are considered. | No | 
| Forecasting Eligibility Validation | Data Density validation | Partitioned OutboundOrderLine | The data density of the dataset must be at least 5. Data density is defined as (total number of rows in the dataset) / (number of distinct products in the dataset); in other words, it is the average number of rows per product. The rule applies only when Prophet is selected as the forecasting algorithm. | No | 
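Two of the numeric limits in the table can be computed directly from dataset statistics. A minimal sketch (the function names are illustrative, not part of the service):

```python
from math import prod

def timeseries_per_predictor(unique_value_counts):
    """Product of the unique-value counts for product_id and each forecast
    granularity column; per the table, must not exceed 5,000,000."""
    return prod(unique_value_counts)

def data_density(total_rows, distinct_products):
    """Average rows per product; per the table, must be at least 5
    when Prophet is the forecasting algorithm."""
    return total_rows / distinct_products

# 1,000 products x 50 sites x 10 channels -> 500,000 timeseries, within the limit.
timeseries_per_predictor([1000, 50, 10])  # -> 500000
data_density(1_000_000, 100_000)          # -> 10.0, passes the minimum of 5
```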

# Demand Pattern and Recommendation


Demand Pattern and Recommendation examines the transformed historical demand input at each configured forecast granularity level (for example, product, location, or channel) to uncover underlying patterns and characteristics in your demand data. Its primary purpose is to identify key demand pattern distribution, such as smooth, intermittent, erratic, and lumpy. It also provides statistical insights about length of history and trailing 12-month demand.

The analysis is triggered automatically after successful data validation, as part of the same forecast generation workflow, and runs in parallel with forecast creation, so it does not block or delay the forecasting process. However, any data validation failure prevents both the analysis from being generated and the forecast from being created.

By providing this analytical overview, the system helps users understand the patterns in the dataset to improve forecast accuracy. 

# Demand Patterns Components


Demand Patterns analysis happens on three dimensions:
+ Demand Patterns (based on how demand changes over time and in quantity)
+ Annual Demand (total quantity demanded over a 12-month period)
+ History Length (the time period for which historical demand data is available)

The analysis categorizes your demand patterns into four distinct types: smooth, intermittent, erratic, and lumpy. Each is determined by analyzing the frequency and variability of demand. If there are eligible in-scope products with no historical data, they are grouped under the **Zero Forecast Demand** section. For more information, see [Demand pattern](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/overview_dp.html#demand-pattern).

The distribution of demand patterns across your products provides valuable insights into expected forecast reliability. Products with smooth demand patterns (showing consistent order volumes and frequencies) typically yield the most reliable forecasts, because their behavior is more predictable. In contrast, erratic or lumpy patterns, characterized by irregular spikes and varying order frequencies, generally result in lower forecast reliability due to their unpredictable nature. By understanding this distribution, demand planners can set appropriate expectations and take proactive measures.
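
As an illustration, a common industry scheme for separating these four patterns (not necessarily the exact method the service uses) classifies each series by its average inter-demand interval (ADI) and squared coefficient of variation (CV²), with the widely used Syntetos-Boylan cut-offs of 1.32 and 0.49:

```python
import statistics

# Illustrative ADI / CV^2 classification (Syntetos-Boylan cut-offs).
# The service's exact thresholds and method are not documented here;
# treat these values as assumptions.
def classify_demand(series: list[float]) -> str:
    nonzero = [q for q in series if q > 0]
    if not nonzero:
        return "zero"
    adi = len(series) / len(nonzero)  # average inter-demand interval
    if len(nonzero) > 1:
        cv2 = (statistics.stdev(nonzero) / statistics.mean(nonzero)) ** 2
    else:
        cv2 = 0.0
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"
```

Steady weekly orders of similar size classify as smooth; frequent zero-demand periods with consistent order sizes classify as intermittent.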

The system also analyzes your trailing 12-month demand (subject to trimming configuration), also known as Annual Demand, immediately preceding your forecast start date. For example, assume the forecast start date is January 15, 2024 (Monday) and the planning bucket is weekly. The system considers the trailing 12-month analysis period to be from January 16, 2023 to January 14, 2024. The trailing 12-month demand analysis helps demand planners distinguish between active and inactive products, while identifying products transitioning between these states, patterns that directly impact forecast reliability. By focusing on recent history rather than older data patterns, you can make more informed decisions about which products need special attention or alternative forecasting approaches, particularly for cases like seasonal items, discontinued products, or items in phase-out. For more information, see [Forecast Algorithms](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/forecast-algorithims.html).
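
The example window above can be computed as follows. This sketch assumes a weekly planning bucket, so the window spans 52 full weeks ending the day before the forecast start date:

```python
from datetime import date, timedelta

# Sketch of the trailing 12-month (Annual Demand) window for a weekly
# planning bucket: 52 full weeks ending the day before the forecast start.
def trailing_window_weekly(forecast_start: date) -> tuple[date, date]:
    end = forecast_start - timedelta(days=1)   # day before forecast start
    start = end - timedelta(days=52 * 7 - 1)   # 364 days, inclusive of both ends
    return start, end
```

For a forecast start of January 15, 2024, this returns January 16, 2023 through January 14, 2024, matching the example above.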

The history length in years is calculated for each forecast granularity (for example, product-location combination) based on the earliest and latest dates available in your preprocessed historical demand data, after adjusting the dates to the default start of the period. This analysis helps determine if products have accumulated enough historical data to generate reliable forecasts, with a minimum of two years typically needed to capture seasonal patterns and long-term trends. 

![\[Raw demand history\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/raw-demand-history.png)
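
A minimal sketch of the history-length calculation, assuming each series is keyed by its forecast granularity (the product-location key format here is hypothetical):

```python
from datetime import date

# Illustrative history-length calculation per granularity key
# (e.g. a hypothetical "product@location" string), from the earliest
# to the latest date in the preprocessed demand history.
def history_length_years(demand: list[tuple[str, date]]) -> dict[str, float]:
    spans: dict[str, list[date]] = {}
    for key, d in demand:
        spans.setdefault(key, []).append(d)
    return {k: round((max(v) - min(v)).days / 365.25, 2) for k, v in spans.items()}
```

A product-location with data from January 2022 through January 2024 yields roughly 2.0 years, meeting the typical minimum for capturing seasonality.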


# Demand Patterns Recommendations


The system provides targeted recommendations based on identified demand patterns to help improve forecast accuracy. For products displaying erratic demand, characterized by irregular spikes in order volume, the system suggests incorporating potential external influences, such as promotions or price changes. In such cases, you can significantly improve forecast accuracy by collaborating with your data administrator to upload relevant demand driver data to the [https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/demand_drivers.html](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/demand_drivers.html) table in the data lake. This additional context helps the forecasting models better understand and predict demand fluctuations. 

For products with insufficient history (less than 2 years) or no history at all, the system recommends leveraging alternate product mapping. This approach allows you to utilize the demand patterns of similar, established products to enhance forecast reliability. Work with your data administrator to upload these product relationships to the [https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/product_lineage.html](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/product_lineage.html) table in the data lake. This is particularly important because accurate seasonality and long-term trend detection requires at least 2 full years of historical data. By mapping to alternate products with sufficient history, you can establish a more reliable forecast baseline for newer or limited-history products.

# Demand Pattern and Recommendation Report Access




## First time forecast creation


When creating a forecast for the first time, under the **Demand Planning** module in AWS Supply Chain, choose **Create a Plan**. The system guides you through three steps: Data Ingestion, Plan Configuration, and finally, Forecast Generation. After completing data ingestion and plan configuration, choose **Generate Forecast** to initiate data validation. Upon successful validation, the system performs demand pattern analysis, and you see a hyperlink to access this analysis while your forecast generates. 

## Subsequent forecast creation


For subsequent forecasts, choose **Generate Forecast**. You see a banner displaying three steps: data validation, demand pattern analysis & recommendation, and forecast creation. After data validation is successful and the demand pattern analysis is complete, access the report by choosing its hyperlink in the banner. 

## Report content


The Demand Pattern and Recommendations report provides a summary view of exploratory data analysis at your configured forecast level for a given plan. At the top of the screen, you see five key pattern cards that show how your products are distributed: Smooth patterns, Intermittent patterns, Erratic patterns, Lumpy patterns, and Products with Zero Historical Demand.

Below this summary, you can find a detailed table breaking down patterns by the highest level configured in the product hierarchy in the Demand Plan Settings. For example, if your product hierarchy configuration follows the pattern product id, product group id, then you see the summary at the product group id level. For each category, you can see the following:
+ \# Forecasts, indicating the number of unique time series eligible for forecast and its percentage of the total
+ The annual demand volume and its percentage of total
+ A visual breakdown of demand pattern within that category
+ A visual breakdown of the length of history available within that category

To help you navigate this information, you can do the following:
+ Use the search box to find specific product categories
+ Download a detailed report. The report contains detailed analysis for each individual forecast at your configured granularity level 
+ Sort any product category, \# Forecasts, and Annual Demand to focus on specific metrics. For product categories containing alphanumeric formats or blank values, using the search function may be more effective.

## Ongoing access


After each successful forecast creation, you can revisit this analysis on the **Demand Pattern** tab in the forecast review pages. In this view, the analysis responds to any filters you apply in the forecast review. The downloaded report contains analysis specific to your filtered selection.

# Forecast Algorithms


AWS Supply Chain Demand Planning offers 25 built-in forecast models to create baseline demand forecasts for products with diverse demand patterns. The list includes 11 forecast ensemblers (each unique in the set of models it combines and/or the metric it optimizes) and 14 individual forecast algorithms, ranging from statistical algorithms such as Autoregressive Integrated Moving Average (ARIMA) to complex neural network algorithms such as CNN-QR, Temporal Fusion Transformer, and DeepAR+. You can choose a forecast ensembler or an individual forecast algorithm based on your use case and unique needs. Forecast ensemblers spare you cumbersome tasks such as model selection and hyperparameter tuning: you simply pick the forecast error metric best suited to your use case for the ensembler to optimize. Individual forecast algorithms offer flexibility for use cases that are best forecasted with a single model instead of an ensemble. 

The following table lists the 25 built-in forecast models offered by AWS Supply Chain Demand Planning along with what they are best suited for.


| Type | Forecast Ensembler/Algorithm  | Demand History Requirement  | Model(s) in Ensemble | Automated Hyperparameter Tuning (Yes/No) | Default Parameters | Metric Optimized | Scenario(s) the model is best suited for | Supports Related Time Series as Forecast Input (Yes/No) | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | 
|  Forecast Model(s) Ensembler  |  AutoGluon Best Quality (MAPE)  |  At least 2 times the forecast horizon  |  Ensemble of baseline, statistical, and ML/deep learning models in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library.  |  Yes  |  AutoGluon best_quality preset  |  MAPE (Mean Absolute Percentage Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Model(s) Ensembler  |  AutoGluon Best Quality (WAPE)  |  At least 2 times the forecast horizon  |  Ensemble of baseline, statistical, and ML/deep learning models in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library.  |  Yes  |  AutoGluon best_quality preset  |  WAPE (Weighted Absolute Percentage Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Model(s) Ensembler  |  AutoGluon Best Quality (MASE)  |  At least 2 times the forecast horizon  |  Ensemble of baseline, statistical, and ML/deep learning models in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library.  |  Yes  |  AutoGluon best_quality preset  |  MASE (Mean Absolute Scaled Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Model(s) Ensembler  |  AutoGluon Best Quality (RMSE)  |  At least 2 times the forecast horizon  |  Ensemble of baseline, statistical, and ML/deep learning models in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library.  |  Yes  |  AutoGluon best_quality preset  |  RMSE (Root Mean Squared Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Model(s) Ensembler  |  AutoGluon Best Quality (WCD)  |  At least 2 times the forecast horizon  |  Ensemble of baseline, statistical, and ML/deep learning models in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library.  |  Yes  |  AutoGluon best_quality preset  |  WCD (Weighted Cumulative Deviation)  |  Automated Ensemble without need for manual model assignment/selection.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Model(s) Ensembler  |  AutoGluon StatEnsemble (MAPE)  |  At least 2 times the forecast horizon  |  Ensemble of all statistical models (only) in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library to produce forecasts.  |  Yes  |  AutoGluon all supported stats models  |  MAPE (Mean Absolute Percentage Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  No  | 
|  Forecast Model(s) Ensembler  |  AutoGluon StatEnsemble (WAPE)  |  At least 2 times the forecast horizon  |  Ensemble of all statistical models (only) in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library to produce forecasts.  |  Yes  |  AutoGluon all supported stats models  |  WAPE (Weighted Absolute Percentage Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  No  | 
|  Forecast Model(s) Ensembler  |  AutoGluon StatEnsemble (MASE)  |  At least 2 times the forecast horizon  |  Ensemble of all statistical models (only) in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library to produce forecasts.  |  Yes  |  AutoGluon all supported stats models  |  MASE (Mean Absolute Scaled Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  No  | 
|  Forecast Model(s) Ensembler  |  AutoGluon StatEnsemble (RMSE)  |  At least 2 times the forecast horizon  |  Ensemble of all statistical models (only) in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library to produce forecasts.  |  Yes  |  AutoGluon all supported stats models  |  RMSE (Root Mean Squared Error)  |  Automated Ensemble without need for manual model assignment/selection.  |  No  | 
|  Forecast Model(s) Ensembler  |  AutoGluon StatEnsemble (WCD)  |  At least 2 times the forecast horizon  |  Ensemble of all statistical models (only) in the [AutoGluon](https://auto.gluon.ai/stable/index.html) model library to produce forecasts.  |  Yes  |  AutoGluon all supported stats models  |  WCD (Weighted Cumulative Deviation)  |  Automated Ensemble without need for manual model assignment/selection.  |  No  | 
|  Forecast Model(s) Ensembler  |  AWS Supply Chain AutoML  |  At least 2 times the forecast horizon  |  Ensemble of all models in [Amazon Forecast AutoML](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-choosing-recipes.html).  |  Not Applicable  |  AutoML default settings  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Automated Ensemble without need for manual model assignment/selection.  |  Depends on Selected Models by Ensembler.  | 
|  Forecast Algorithm  |  CNN-QR  |  At least 4 times the forecast horizon  |  CNN-QR (Convolutional Neural Network - Quantile Regression) is a machine learning algorithm for time series forecasting using causal convolutional neural networks (CNNs).  |  Not Applicable  |  [CNN-based parameters](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-algo-cnnqr.html)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for large datasets containing hundreds of time series.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Algorithm  |  DeepAR+  |  At least 4 times the forecast horizon  |  DeepAR+ is a machine learning algorithm for time series forecasting using recurrent neural networks (RNNs).  |  Not Applicable  |  [DeepAR default settings](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-recipe-deeparplus.html)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for large datasets containing hundreds of time series.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Algorithm  |  LightGBM  |  At least 2 times the forecast horizon  |  Light Gradient-Boosting Machine (LGBM) is a tabular machine learning model that uses historical demand data from past seasons.  |  Not Applicable  |  LightGBM default parameters  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for datasets where different items share similar demand trends. Less effective on datasets with diverse item characteristics and demand patterns.  |  No  | 
|  Forecast Algorithm  |  Prophet  |  At least 4 times the forecast horizon  |  Prophet is a time series forecasting algorithm based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality.  |  Not Applicable  |  [Default Prophet settings](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-recipe-prophet.html)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for time series that have strong seasonal effects and several seasons of historical data.  |  Yes, Past and Future Related Time Series  | 
|  Forecast Algorithm  |  Triple Exponential Smoothing  |  At least 4 times the forecast horizon  |  Exponential Smoothing (ETS) is a statistical model for time series forecasting.  |  Not Applicable  |  Default ETS parameters  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for datasets with seasonality patterns, computing weighted averages of past observations with exponentially decreasing weights. ETS is most effective for time series with fewer than 100 items.  |  No  | 
|  Forecast Algorithm  |  Auto Complex Exponential Smoothing (AutoCES)  |  At least 2 times the forecast horizon  |  Auto Complex Exponential Smoothing is an advanced variant of exponential smoothing that automatically adjusts smoothing parameters, offering accurate forecasts for time series with intricate seasonal structures.  |  Not Applicable  |  [Default AutoCES settings](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-model-zoo.html#autogluon.timeseries.models.AutoCESModel)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for complex seasonal patterns in time series data, including multiple seasonality or irregular cycles.  |  No  | 
|  Forecast Algorithm  |  ARIMA  |  At least 4 times the forecast horizon  |  ARIMA (Auto-Regressive Integrated Moving Average) is a statistical model for time series forecasting. It combines autoregressive, moving average, and differencing components to model trends.  |  Not Applicable  |  [ARIMA default parameters](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-recipe-arima.html)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for datasets without strong seasonal effects.  |  No  | 
|  Forecast Algorithm  |  Seasonal ARIMA  |  At least 2 times the forecast horizon  |  SARIMA (Seasonal Auto-Regressive Integrated Moving Average) is an extension of ARIMA that includes seasonal components. It models both non-seasonal and seasonal trends, ensuring accurate predictions for datasets with multiple seasons of historical data.  |  Not Applicable  |  Seasonal ARIMA default parameters  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for time series with strong seasonal patterns.  |  No  | 
|  Forecast Algorithm  |  Theta  |  At least 2 times the forecast horizon  |  The Theta model is a time series forecasting method that combines exponential smoothing with a decomposition approach to handle trend, seasonality, and noise. It uses a linear trend model and non-linear smoothing components to capture both short-term and long-term patterns, often outperforming traditional methods.  |  Not Applicable  |  [Theta method default settings](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-model-zoo.html#autogluon.timeseries.models.ThetaModel)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for intermittent demand forecasting.  |  No  | 
|  Forecast Algorithm  |  Aggregate-Disaggregate Intermittent Demand Approach (ADIDA)  |  At least 2 times the forecast horizon  |  ADIDA aggregates data at a higher level to capture broader patterns, then disaggregates it to produce accurate forecasts, improving accuracy by reducing noise.  |  Not Applicable  |  [ADIDA default parameters](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-model-zoo.html#autogluon.timeseries.models.ADIDAModel)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for products with low, irregular, or intermittent demand.  |  No  | 
|  Forecast Algorithm  |  Croston  |  At least 2 times the forecast horizon  |  The Croston method is designed for intermittent demand forecasting. It separates demand into two components: the size of non-zero demands and the intervals between them. These components are independently forecasted and combined.  |  Not Applicable  |  [Croston default settings](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-model-zoo.html#autogluon.timeseries.models.CrostonModel)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for intermittent demand forecasting.  |  No  | 
|  Forecast Algorithm  |  Intermittent Multiple Aggregation Prediction Algorithm (IMAPA)  |  At least 2 times the forecast horizon  |  IMAPA is a forecasting method for intermittent demand data, where demand is irregular with many zero values. It aggregates data at multiple levels to capture different demand patterns, offering more robust predictions for datasets with highly irregular demand compared to methods like Croston.  |  Not Applicable  |  [IMAPA default parameters](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-model-zoo.html#autogluon.timeseries.models.IMAPAModel)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for improving accuracy for intermittent demand patterns (compared to traditional methods like exponential smoothing).  |  No  | 
|  Forecast Algorithm  |  Moving Average  |  At least 2 times the forecast horizon  |  The Moving Average model forecasts by averaging past data points over a fixed window.  |  Not Applicable  |  Moving Average default parameters  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for short-term forecasts, especially in sparse data scenarios. This method performs well on time series with simple trends, providing quick, easy predictions without requiring complex modeling.  |  No  | 
|  Forecast Algorithm  |  Non Parametric Time Series (NPTS)  |  At least 4 times the forecast horizon  |  NPTS is a baseline forecasting method for sparse or intermittent time series data. It includes variants such as Standard NPTS and Seasonal NPTS.  |  Not Applicable  |  [NPTS default parameters](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-recipe-npts.html)  |  WQL (Weighted Quantile Loss) for P10, P50, P90  |  Best suited for robust predictions for irregular time series by handling missing data and seasonal effects. It is scalable and effective for irregular demand data.  |  No  | 

The following table lists the metrics available in AWS Supply Chain Demand Planning forecast models.


| Metric | Metric Description | Metric Formula | When to use this metric to optimize | Link | 
| --- | --- | --- | --- | --- | 
|  MAPE  |  MAPE measures the average magnitude of the errors in a set of forecasts, expressed as a percentage of the actual values.  |  Not Applicable  |  It is commonly used for evaluating the accuracy of predictive models, especially in time series forecasting, where all time series are treated equal for forecast error evaluation.  |  [https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.MAPE](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.MAPE)  | 
|  WAPE  |  WAPE is a variation of MAPE that considers the weighted contributions of different data points.  |  Not Applicable  |  It is particularly useful when the data has varying importance or when some observations are more significant than others.  |  [https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.WAPE](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.WAPE)  | 
|  RMSE  |  RMSE measures the square root of the average squared differences between predicted and actual values.  |  Not Applicable  |  RMSE is sensitive to large errors because of the squaring operation, which gives more weight to larger errors. In use cases where only a few large mispredictions can be very costly, RMSE is the more relevant metric.  |  [https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.RMSE](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.RMSE)  | 
|  WCD  |  WCD is a measure of cumulative forecast error weighted by a set of predetermined weights.  |  Not Applicable  |  This metric is often used in applications where certain time periods, products, or data points have more importance than others, allowing for prioritization in the error analysis.  |  Not Applicable  | 
|  wQL  |  wQL is a loss function that evaluates the performance of a model based on quantiles, with weighted contributions from different data points.  |  Not Applicable  |  It’s useful for assessing model performance in scenarios where the importance of different quantiles (e.g., 90th percentile, 50th percentile) or observations varies. It is particularly useful when there are different costs for underpredicting and overpredicting.  |  [https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.WQL](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.WQL)  | 
|  MASE  |  MASE (Mean Absolute Scaled Error) is a performance metric used to evaluate the accuracy of time series forecasting models. It compares the mean absolute error (MAE) of the forecasted values to the mean absolute error of a naive forecast.  |  Not Applicable  |  MASE is ideal for datasets that are cyclical in nature or have seasonal properties. For example, forecasting for items that are in high demand during summers and in low demand during winters can benefit from taking into account the seasonal impact.  |  [https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.MASE](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.MASE)  | 
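
For reference, the standard formulas for three of these metrics can be written as follows. These are textbook definitions; the service's internal implementations may differ in edge-case handling (for example, zero actuals in MAPE):

```python
import math

# Standard definitions of MAPE, WAPE, and RMSE over paired
# actual/forecast values (assumes no zero actuals for MAPE).
def mape(actual, forecast):
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def wape(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(abs(a) for a in actual)

def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))
```

Note how WAPE weights errors by demand volume: a 10-unit miss on a 100-unit item counts the same in MAPE as a 20-unit miss on a 200-unit item, but WAPE pools them against total demand.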

# Forecast based on demand drivers


To enhance forecast accuracy, you can configure your forecast to use demand drivers. *Demand drivers* are related time series inputs that capture product trends and seasons. Instead of depending only on historical demand, you can use demand drivers to influence the forecast based on various factors, such as promotions, price changes, and marketing campaigns. Demand Planning supports both historical and future demand drivers.

## Prerequisites to use demand drivers


Before ingesting data for demand drivers, make sure that the data meets the following conditions:
+ Make sure to ingest the demand drivers data in the *supplementary_time_series* data entity. You can provide both historical and future demand driver information. For information about the data entities that Demand Planning requires, see [Demand Planning](required_entities.md).

  If you cannot locate the *supplementary_time_series* data entity, your instance might be using an earlier data model version. You can contact AWS Support to upgrade your data model version or create a new data connection.
+ Make sure that the following columns are populated in the *supplementary_time_series* data entity.
  + *id* – This column is the unique record identifier and is required for a successful data ingestion.
  + *order_date* – This column indicates the timestamp of the demand driver. It can be both past and future dated.
  + *time_series_name* – This column is the identifier for each demand driver. The value of this column must start with a letter, be 2–56 characters long, and may contain letters, numbers, and underscores. Other special characters are not valid.
  + *time_series_value* – This column provides the data point measurement of a particular demand driver at a specific point in time. Only numerical values are supported.
+ Select a minimum of 1 and a maximum of 13 demand drivers. Make sure that the aggregation and filling methods are configured. For more information on filling methods, see [Demand drivers data filling method](configuration_demand_drivers.md#filling_method_demand_drivers). You can modify the settings at any time. Demand Planning will apply the changes in the next forecast cycle.
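
The naming rule for the demand driver identifier column can be expressed as a regular expression. This validator is illustrative, not part of the service:

```python
import re

# Rule stated above: starts with a letter, 2-56 characters total,
# and contains only letters, digits, and underscores.
TIME_SERIES_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]{1,55}$")

def is_valid_time_series_name(name: str) -> bool:
    return bool(TIME_SERIES_NAME.fullmatch(name))
```

For example, `price_change` is valid, while `1promo` (starts with a digit), `p` (too short), and names over 56 characters are not.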

The following example illustrates how a demand plan is generated when the required demand driver columns are ingested in the *supplementary_time_series* data entity. Demand Planning recommends providing both historical and future demand driver data (if available). This data helps the model learn patterns and apply them to the forecast.

![\[Demand drivers example\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/demand_drivers_example.png)


The following example illustrates how you can set up some common demand drivers in your dataset.

![\[Demand drivers example\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/demand_drivers_example2.png)


When you provide leading indicators, Demand Planning highly recommends that you adjust the time series dates. For example, say that a particular metric serves as a 20-day leading indicator with a 70% conversion rate. In this case, consider shifting the dates in the time series by 20 days and then applying the appropriate conversion factor. While the model can learn patterns without such adjustments, aligning leading indicator data with the corresponding outcomes makes pattern recognition more effective. The magnitude of the values also plays a significant role, enhancing the model's ability to learn and interpret patterns accurately.
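
A minimal sketch of this adjustment, using the hypothetical values from the example above (a 20-day lead and a 70% conversion rate):

```python
from datetime import date, timedelta

# Hypothetical leading-indicator adjustment: shift each observation
# forward by the lead time and scale by the conversion rate before
# ingesting the series as a demand driver.
LEAD_DAYS = 20
CONVERSION = 0.70

def align_leading_indicator(points: list[tuple[date, float]]) -> list[tuple[date, float]]:
    return [(d + timedelta(days=LEAD_DAYS), v * CONVERSION) for d, v in points]
```

An observation of 100 on January 1 would be ingested as roughly 70 on January 21, aligning it with the demand it is expected to drive.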

# Demand driver configuration


To use demand drivers, you must configure them. You can configure demand drivers only when you've ingested data in the *supplementary_time_series* data entity.

**Note**  
If you don't configure the demand drivers, you can still generate a forecast. However, Demand Planning won't use the demand drivers.

## Demand drivers data filling method


A *filling method* fills in missing values in a time series. Demand Planning supports the following filling methods. The filling method that Demand Planning applies depends on where the gap falls in the data. 
+ Back filling – Applied when the gap is between the last recorded data point for a given product and the global last recorded date.
+ Middle filling – Applied when the gap is between a product's first recorded date and its last recorded date.
+ Future filling – Applied when the demand driver has at least one data point in the future and there is a gap in the future time horizon.

![\[Demand drivers filling method\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/filling_method.png)


Demand Planning considers the last 64 data points for each demand driver from the *supplementary_time_series* data entity. Demand Planning supports *zero*, *median*, *mean*, *maximum*, and *minimum* options for all three filling methods.
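
A simplified sketch of how fill regions could be applied to a daily series. The region boundaries here are an assumed interpretation (middle fill between a product's first and last recorded dates, back fill from its last recorded date to the global last date), and future filling is omitted for brevity:

```python
from datetime import date, timedelta

# Illustrative gap-filling over a daily series (assumed region boundaries,
# not service code): gaps up to the product's last recorded date get the
# configured middle value; gaps after it get the configured back value.
def fill_series(points: dict[date, float], global_last: date,
                middle: float, back: float) -> dict[date, float]:
    first, last = min(points), max(points)
    filled = dict(points)
    d = first
    while d <= global_last:
        if d not in filled:
            filled[d] = middle if d <= last else back
        d += timedelta(days=1)
    return filled
```

For a product with data on January 1 and 3 and a global last date of January 5, January 2 receives the middle value and January 4-5 receive the back value.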

The following example illustrates how demand drivers handle missing data when data is ingested to the *price* column in the *supplementary_time_series* data entity for Product 1, which includes both historical and future data.

![\[Demand drivers filling method\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/filling_method_example1.png)


## Aggregation method


Demand Planning uses the aggregation method to facilitate the integration of demand drivers at various levels of granularity by consolidating data over specific periods and granularity levels.

Time period aggregation – For example, when the *Inventory* demand driver is available at the daily level but the forecast is at the weekly level, Demand Planning applies the aggregation method configured under the demand plan settings for inventory to use the information for forecasting.

![\[Aggregation method used by Demand Planning\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/aggregation_example1.png)
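
Daily-to-weekly aggregation with a configurable method might look like the following sketch. The bucketing assumes weeks start on Monday, which is an illustration rather than the service's documented convention:

```python
from datetime import date, timedelta

# Illustrative time-period aggregation: roll a daily demand driver up
# to weekly buckets using a configurable aggregation function
# (sum, statistics.mean, max, min, ...).
def to_weekly(daily: dict[date, float], agg=sum) -> dict[date, float]:
    weeks: dict[date, list[float]] = {}
    for d, v in daily.items():
        week_start = d - timedelta(days=d.weekday())  # Monday of that week
        weeks.setdefault(week_start, []).append(v)
    return {w: agg(vals) for w, vals in weeks.items()}
```

Passing `statistics.mean` instead of `sum` would mirror a *mean* aggregation setting, which the recommendations below suggest for boolean and continuous demand drivers.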


Granularity level aggregation – Here is an example of how Demand Planning uses granularity level aggregation. *out_of_stock_indicator* is available daily at the product-site level, but the forecast granularity is at the product level. Demand Planning will apply the aggregation method configured under the demand plan settings for this demand driver.

![\[Granularity method used by Demand Planning\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/granularity_example.png)


# Demand driver recommendations


While configuring aggregation and filling methods for demand drivers, a general guideline is to assign *mean* aggregation for both boolean and continuous data types. To fill a missing value, use *zero* filling for boolean data while *mean* filling is suitable for continuous data.

Note that the choice of aggregation and filling method configuration depends on the data characteristics and assumptions about missing values. Here is an example.

![\[Demand driver recommendation\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/demand_driver_recommendation.png)


Demand Planning recommends adjusting the demand driver configuration to best suit your dataset needs. The demand driver configuration will impact the forecast accuracy.

On the AWS Supply Chain web application, under **Demand planning**, **Overview**, you can view the impact scores associated with demand drivers, aggregated at the demand plan level. Impact scores measure the relative influence of demand drivers on the forecast. A low impact score does not mean that the demand driver has a minimal effect on forecast values; it means that its influence is comparatively lower than that of the other demand drivers. An impact score of zero, however, indicates that the demand driver has no impact on the forecast values. In that case, Demand Planning recommends revisiting the aggregation and filling method configuration applied to that particular demand driver.

# Product lineage


*Product lineage* refers to the relationship established between products and their previous versions or alternate products. Demand Planning uses product lineage information to create surrogate histories for these products, which serve as forecast inputs for demand predictions.

Product lineage supports the following patterns:
+ A single product has one lineage or alternate product = 1:1  
![\[Product lineage pattern = 1:1\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/product_lineage_pattern1.png)

  The following example shows a 1:1 scenario.  
![\[Product lineage pattern = 1:1\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/1 is to 1_example.png)
+ A single product has more than one product as lineage or alternate = Many:1  
![\[Product lineage pattern = Many:1\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/product_lineage_pattern2.png)

  Demand Planning supports product lineage relationships modeled in both *chain* and *flattened* formats.
  + **Chain format** – You can directly model lineage relationships like A to B and B to C. In the following example, Demand Planning will model the lineage relationship as A to B, B to C, and A to C.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/product_lineage.html)

    The following example shows a Many:1 scenario in chain format.  
![\[Product lineage pattern = Chain format\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/chain_format.png)
  + **Flattened format** – Demand Planning will continue to support lineage information in A to B and A to C format. In the following example, Demand Planning will model the lineage relationship as A to B and A to C. B to C is not considered.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/product_lineage.html)
**Note**  
The chain format only supports 6 levels of lineage relationships. If you have more than 6 levels, use the flattened format to model the lineage relationship.

  The following example shows a Many:1 scenario in flattened format.  
![\[Product lineage pattern = Flattened format\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/1 is to many_example.png)
+ A single product can be lineage or alternate for more than 1 product = 1 : Many  
![\[Product lineage pattern = 1:Many\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/product_lineage_pattern3.png)

To enable the product lineage feature, define the lineage relationship for the different versions of the products, or for alternates/substitutes, in the *product\_alternate* data entity. For more information, see [Demand Planning](required_entities.md).

If your instance was created on or after September 11, 2023, you will see the *product\_alternate* data entity in the AWS Supply Chain data connection module. If your instance was created before September 11, 2023, create a new data connection to enable the *product\_alternate* data entity for ingestion.

To ingest data into the *product\_alternate* data entity, follow these guidelines:
+ *product\_id* – The primary product to create the forecast for.
+ *alternative\_product\_id* – The previous version of the product, or the alternate/substitute product.

  To consider multiple *alternative\_product\_id* values for a single *product\_id*, enter them in separate rows.
+ Demand Planning will consider the data ONLY when the values are provided in the following format:
  + *alternate\_type* is *similar\_demand\_product*.
  + *status* is *active*.
  + *alternate\_product\_qty\_uom* is the text *percentage*.
  + *alternate\_product\_qty* – Enter the proportion of the alternate product's history that you want to use for forecasting new products. For example, if it is 60%, enter 60. When you have multiple *alternative\_product\_id* values for a single *product\_id*, the *alternate\_product\_qty* values do not have to add up to 100.
+ The *eff\_start\_date* and *eff\_end\_date* data fields are required. However, you can leave these fields empty and Demand Planning will auto-fill them with the years 1000 and 9999, respectively.
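As an illustration of these guidelines, here is a hedged Python sketch (not AWS code; the function name and row representation are hypothetical) that pre-validates a *product\_alternate* row before ingestion:

```python
from datetime import datetime

def normalize_lineage_row(row):
    """Check one product_alternate row against the documented guidelines.

    Returns (is_valid_mapping, normalized_row). Missing effective dates
    are auto-filled with the years 1000 and 9999, mirroring the
    documented defaulting. Illustrative pre-ingestion check only.
    """
    required = ("product_id", "alternative_product_id", "alternate_product_qty")
    if any(not row.get(k) for k in required):
        return False, row
    if row.get("alternate_type", "").lower() != "similar_demand_product":
        return False, row
    if row.get("status", "").lower() != "active":
        return False, row
    if row.get("alternate_product_qty_uom", "").lower() != "percentage":
        return False, row
    row = dict(row)
    if not row.get("eff_start_date"):
        row["eff_start_date"] = datetime(1000, 1, 1)
    if not row.get("eff_end_date"):
        row["eff_end_date"] = datetime(9999, 12, 31)
    return True, row
```

A row like Example 1 in the table that follows passes the check; rows with a missing *alternative\_product\_id* or an *inactive* status are rejected as invalid mappings.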

When the forecast is created using product lineage data, you will see an indicator *Forecast is based on alternate product's history* on the Demand Planning page when you filter by *product ID*.

The following table shows an example of how the Demand Planning product lineage feature works based on the data ingested into the *product\_alternate* data entity.


| Column | Required or Optional | Example 1 | Example 2 | Example 3 | Example 4 | Example 5 | Example 6 | Example 7 | Example 8 | Example 9 | Example 10 | Example 11 | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 
|  product\_id  | Required | Product 123 | Product 123 | Product 123 | Product 123 | Product 123 | Product 123 | Product 123 | Product 123 | Product 123 | Null | Product 123 | 
|  alternative\_product\_id  | Required | Product XYZ | Null | Product XYZ | Product XYZ | Product XYZ | Product XYZ | Product XYZ | Product XYZ | Product XYZ | Null | Product XYZ | 
|  alternate\_type  | Required | Similar\_Demand\_Product | Similar\_Demand\_Product | Null or a different value | Similar\_Demand\_Product | Similar\_Demand\_Product | Similar\_Demand\_Product | Similar\_Demand\_Product | Similar\_Demand\_Product | Similar\_Demand\_Product | Similar\_Demand\_Product | Similar\_Demand\_Product | 
|  status  | Required | active | active | active | inactive | active | active | Null | active | active | active | active | 
|  alternate\_product\_qty  | Required | 100 | 60 | 100 | 100 | Null | 100 | 100 | 100 | 100 | 100 | 60 | 
|  alternate\_product\_qty\_uom  | Required | percentage | percentage | percentage | percentage | percentage | Null or a different value | percentage | percentage | percentage | percentage | percentage | 
|  eff\_start\_date  | Required | 2023-01-01 00:00:00 | 2023-01-01 00:00:00 | 2023-01-01 00:00:00 | 2023-01-01 00:00:00 | 2023-01-01 00:00:00 | 2023-01-01 00:00:00 | 2023-01-01 00:00:00 | Null | 2023-01-01 00:00:00 | 2023-01-01 00:00:00 | Null | 
|  eff\_end\_date  | Required | 2025-12-31 23:59:59 | 2025-12-31 23:59:59 | 2025-12-31 23:59:59 | 2025-12-31 23:59:59 | 2025-12-31 23:59:59 | 2025-12-31 23:59:59 | 2025-12-31 23:59:59 | 2025-12-31 23:59:59 | Null | 2025-12-31 23:59:59 | Null | 
|  **Expected behavior**  | NA | 100% of product XYZ's history from January 1, 2023 to December 31, 2025 will be used to forecast product 123. | Invalid mapping since alternative\_product\_id is missing. | Invalid mapping since alternate\_type is not 'similar\_demand\_product'. | Inactive mapping. | Invalid mapping since alternate\_product\_qty is missing. | Invalid mapping since alternate\_product\_qty\_uom is missing or not percentage. | Invalid mapping since status is missing. | Ingestion will fail. | Ingestion will fail. | Invalid mapping since product\_id and alternative\_product\_id are missing. | Ingestion will fail. | 
|    | NA | NA | NA | NA | NA | NA | NA | NA |  Demand Planning will auto-populate the *eff\_start\_date* to year 1000. This scenario is valid and data ingestion will not fail. |  Demand Planning will auto-populate the *eff\_end\_date* to year 9999. This scenario is valid and ingestion will not fail. | NA |  Demand Planning will auto-populate the *eff\_start\_date* to year 1000 and *eff\_end\_date* to year 9999. This scenario is valid and ingestion will not fail. | 

The following example explains how Demand Planning interprets the lineage when the *status* is set to *inactive* and the product lineage is in chain format.


| product\_id | alternative\_product\_id | Status | 
| --- | --- | --- | 
|  A  |  B  |  Active  | 
|  B  |  C  |  Inactive  | 
|  C  |  D  |  Active  | 

Demand Planning considers the status of the first root-to-child mapping as the status for the entire chain:

| Mapping | Status | 
| --- | --- | 
|  A to B  |  Active  | 
|  A to C  |  Active  | 
|  A to D  |  Active  | 
|  B to C  |  Inactive  | 
|  B to D  |  Inactive  | 
|  C to D  |  Active  | 
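The derivation above can be sketched in a few lines of Python. This is an illustrative reimplementation of the documented rule, not service code: each derived pair X to Z inherits the status of X's direct mapping.

```python
def derive_chain(edges):
    """Expand a chain of lineage mappings into all ancestor pairs.

    `edges` is an ordered list of (product, alternate, status) tuples
    forming a chain, e.g. A->B, B->C, C->D. Each derived pair X->Z
    inherits the status of X's direct mapping, matching the rule that
    the first root-to-child status governs the chain from that node.
    """
    nodes = [e[0] for e in edges] + [edges[-1][1]]
    direct = {src: status for src, _, status in edges}
    pairs = {}
    for i, src in enumerate(nodes[:-1]):
        for dst in nodes[i + 1:]:
            pairs[(src, dst)] = direct[src]
    return pairs
```

Running it on the A/B/C/D example reproduces the six statuses listed above, including B to D being Inactive because B's direct mapping is Inactive.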

# Product lifecycle


Product lifecycle describes the lifecycle of a product from introduction to End of Life (EoL). AWS Supply Chain supports forecasting products through their lifecycle. To enable the Product lifecycle feature, populate the *product\_introduction\_day* and *discontinue\_day* columns in the *Product* data entity. Demand Planning uses the data from these columns to create a forecast for a product while the product is active. For more information about data entities, see [Data entities and columns used in AWS Supply Chain](data-model.md).

To enable product lifecycle, make sure the columns *id*, *description*, *product\_available\_day*, *discontinue\_day*, and *is\_deleted* are populated in the *Product* data entity.

The following example shows how Demand Planning works when data is ingested into the Product data entity.


| Column name | Required for Data Lake | Required for Demand Planning | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 | Scenario 7 | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 
|  id  |  Yes  |  Yes  |  Product123  |  Product123  |  Product123  |  Product123  |  Product123  |  Product123  |  Product123  | 
|  description  |  Yes  |  Yes  |  Bottle  |  Bottle  |  Bottle  |  Bottle  |  Bottle  |  Bottle  |  Bottle  | 
|  product\_available\_day  |  No  |  No  |  May 1, 2023  |  May 1, 2023  |  May 1, 2023  |  Null  |  Null  |  May 1, 2022  |  May 1, 2022  | 
|  discontinue\_day  |  No  |  No  |  Null  |  December 31, 2023  |  December 31, 2023  |  Null  |  Null  |  May 1, 2023  |  Past  | 
|  is\_deleted  |  No  |  No  |  No  |  No  |  Yes  |  No  |  Null  |  No  |  No  | 
|  **Expected behavior**  |  NA  |  NA  |  Forecast will be created starting 3 months (or as configured) prior to May 1, 2023, through the end of the planning horizon, since there is no discontinue date.  |  Forecast will be created starting 3 months (or as configured) prior to May 1, 2023, until the discontinue date (or as configured).  |  Forecast will not be created since the product is considered inactive.  |  Forecast will be created for the entire planning horizon.  |  Assumed that the product is active.  |  Forecast will be created for one day (May 1).  |  In case of conflict between is\_deleted and discontinue\_day, is\_deleted is considered.  | 

For information on how to configure Product lifecycle, see [Create your first demand plan](onboarding.md).

Under Demand Planning settings, you can set your forecast start date depending on the *product\_available\_day* in the Product data entity. By default, the forecast starts on the *product\_available\_day*. *Period* refers to the time interval set under **Scope** (daily, weekly, monthly, or yearly). You can adjust the start date to optimize inventory management.

Similar to the start date, you can set an end date for your forecast depending on the *product\_discontinue\_day* in the Product data entity. By default, the forecast ends on the *product\_discontinue\_day*. You can adjust the end date to prevent inaccurate forecasting beyond the product's shelf life and avoid excess inventory cost. Enter zero if you want the forecast to match the *product\_available\_day* and *product\_discontinue\_day*. This global setting applies to all eligible products.

When *product\_available\_day* and *product\_discontinue\_day* are not available, the forecast is created for the entire planning horizon.

You can also configure your system to initialize forecast values for products without historical data or alternate product links. The default value is zero. You can also set the period until which your system should use the initialized product forecast value, based on the time interval set under **Scope** (daily, weekly, monthly, or yearly). The default is three periods. This global setting applies to all eligible products at the intersection of the site, customer, and channel dimensions, if they are selected as additional forecast granularity. For example, when the forecast is set to weekly with an initialized value of 10 for 12 periods, and the forecast start is set to three periods before the *product\_available\_day*, for a Product X with a *product\_available\_day* of October 2, 2023, the initialized value of 10 will be applied for each week from September 11, 2023 to December 3, 2023.
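The worked example above can be reproduced with simple date arithmetic. Here is a hedged Python sketch, assuming weekly granularity with Monday-starting weeks (the function name and defaults are illustrative):

```python
from datetime import date, timedelta

def initialization_weeks(available_day, start_offset=3, init_periods=12):
    """Weekly periods that receive the initialized forecast value.

    The forecast starts `start_offset` weeks before the week of the
    product-available day, and the initial value applies for
    `init_periods` weeks. Illustrative sketch of the documented example.
    """
    first_monday = available_day - timedelta(days=available_day.weekday())
    start = first_monday - timedelta(weeks=start_offset)
    return [start + timedelta(weeks=i) for i in range(init_periods)]
```

For a product available on October 2, 2023, this yields 12 weeks starting September 11, 2023, with the last week beginning November 27 and ending December 3, 2023, matching the example above.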

To change the *product\_available\_day* and *product\_discontinue\_day*, update the Product data entity in the AWS Supply Chain data lake. You can also update the forecast start and stop dates. When you change the initialization value and period settings, the changes are applied to all eligible products, including those that were initialized with a different value in previous planning cycles. All updates are applied to the next forecast creation cycle. 

# Manage demand plans


After the forecast is generated, choose **Demand Planning**, and then choose **Manage Demand Plan**. On the **Demand Planning** page, you can view the overall influence factors used in generating the forecast and the accuracy metrics of the forecast. You can also view the current demand plan.

**Topics**
+ [Overview](overview_dp.md)
+ [Demand plan](changing_category.md)
+ [Forecast lock](forecast_lock.md)

# Overview


**Note**  
You can only view the **Overview** page after the forecast is generated for the first time.

The **Overview** tab provides the following information.
+ **Overall Influence Factors** – Indicates the impact score of product metadata attributes and demand drivers (if any), used to generate forecast in the current planning cycle. You can view the influence factors after the first successful forecast generation. A negative value indicates the attributes caused the forecast to go down and vice versa. A zero value indicates that the attribute has no influence on the forecast result. For information on forecast based on demand drivers, see [Forecast based on demand drivers](demand_drivers.md).
+ **Accuracy Metrics** – After you update the dataset (outbound\_order\_line) that contains the actual demand for the forecast period, choose **Recalculate**. You can view the accuracy metrics for the latest demand plan under the **Demand Plan** tab. Accuracy metrics measure how well the current demand plan aligns with the actual demand.

  Accuracy metrics are available at the **plan (aggregate)** and **lowest granular** levels during forecast generation. The **Overview** page displays the aggregate-level metrics, and under **Accuracy Metrics**, you can choose **Download** to download the granular metrics.

  The following are the formulas used to calculate the metrics displayed on the web application.
  + **Mean Absolute Percentage Error (MAPE)** – MAPE takes the absolute value of the percentage error between observed and predicted values for each unit of time and averages those values.

    The formula at granular and plan level is below:   
![\[Calculating MAPE\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/MAPE_formula.png)

    A MAPE less than 5% indicates the forecast is acceptably accurate. A MAPE greater than 10% but less than 25% indicates low, but acceptable accuracy, and MAPE greater than 25% indicates very low accuracy and the forecast is not acceptable.
  + **Weighted Average Percentage Error (WAPE)** – WAPE measures the overall deviation of forecasted values from observed values. WAPE is calculated by taking the sum of observed values and the sum of predicted values, and calculating the error between those two values. A lower value indicates a more accurate model.

    The formula at granular and plan level is below:   
![\[Calculating WAPE\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/WAPE_formula.png)

    A WAPE less than 5% is considered as acceptably accurate. A WAPE greater than 10% but less than 25% indicates low, but acceptable accuracy and WAPE greater than 25% indicates very low accuracy.

See the following example: 

![\[WAPE calculation example\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/Accuracy_metrics.png)
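As an illustration of the MAPE and WAPE formulas, the following minimal Python sketch (not service code) computes both metrics over paired actual and forecast series, skipping periods where the actual is zero, since the metrics are not calculated for zero or null actuals:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent.

    Periods with zero actual demand are skipped, mirroring the rule
    that metrics are not calculated when the actual is zero or null.
    """
    pairs = [(a, f) for a, f in zip(actual, forecast) if a]
    return 100 * sum(abs(a - f) / a for a, f in pairs) / len(pairs)

def wape(actual, forecast):
    """Weighted absolute percentage error, in percent: total absolute
    error divided by total actual demand."""
    return 100 * sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)
```

For actuals of 100 and 200 against forecasts of 110 and 180, both metrics come out to 10%, an acceptably accurate result under the thresholds above; the two metrics diverge when errors are concentrated in low-volume periods.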


The metrics are not calculated when the actual is zero or null. When a new forecast is generated subsequently, the previously reported metrics will no longer be available on the web application. Make sure the latest outbound\_order\_line dataset is updated and choose **Recalculate** to view the updated metrics. 

The accuracy metrics reflect the accuracy of the current demand plan for all time periods that have an actual demand value in the current executed forecast.

For example, if your current planning cycle has forecast from January to December 2023 with monthly forecasts and you updated the actual data for January 2023, accuracy metrics will be computed for January 2023. Similarly, if your current planning cycle has forecast from January to December 2023 with monthly forecasts and you updated the actual data for January 2023 and February 2023, accuracy metrics will be computed for January 2023 and February 2023. The Demand Planning web application will display the aggregated metric for Jan-Feb-2023 and the export file will display the granular details.

**Note**  
When you modify the *Time interval* or *Hierarchy* configuration and regenerate the forecast, the accuracy metrics will not be displayed since the accuracy metric values are not relevant.

## Demand pattern


You can expand the individual metrics to view the demand characteristics such as *Smooth Demand*, *Intermittent Demand*, *Erratic Demand*, and *Lumpy Demand*. The segments are derived based on the actual demand used in the last forecast.

When one or more of the four segments are missing in the Demand Planning web application, it indicates that the Demand Planning web application could not find any product aligned with the patterns associated with the missing segments.

The following intermediate results are calculated:

**Note**  
Records with zero demand are not considered for ADI and CV² computation.
+ *Average Demand Interval (ADI)* – Represents the average time between consecutive demands. ADI = total number of periods / number of demand buckets
+ *Squared Coefficient of Variation (CV²)* – Measures the variability in demand quantities. CV² = (standard deviation of a population / average value of the population)²

The following cut-offs are applied to derive the segments:
+ *Smooth Demand* (ADI less than 1.32 and CV² less than 0.49) is highly regular in time and quantity, making it easy to forecast with low error margins.
+ *Intermittent Demand* (ADI greater than or equal to 1.32 and CV² less than 0.49) exhibits little variation in quantity but high variation in demand interval, leading to higher forecast error margins.
+ *Erratic Demand* (ADI less than 1.32 and CV² greater than or equal to 0.49) has regular occurrence in time but high variations in quantity, resulting in unstable forecast accuracy.
+ *Lumpy Demand* (ADI greater than or equal to 1.32 and CV² greater than or equal to 0.49) is characterized by large variations in both quantity and time, making it unforecastable.
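Putting the intermediate results and cut-offs together, here is a minimal Python sketch of the classification (illustrative only; the exact service computation is not public, and the treatment of zero-demand periods in ADI's numerator is an assumption based on the formula above):

```python
from statistics import mean, pstdev

def demand_pattern(demand):
    """Classify a demand history using the ADI / CV-squared cut-offs.

    `demand` is the per-period quantity series. Zero-demand periods are
    excluded from the CV^2 statistics per the note above; ADI divides
    the total number of periods by the number of nonzero demand buckets.
    """
    nonzero = [q for q in demand if q > 0]
    adi = len(demand) / len(nonzero)
    cv2 = (pstdev(nonzero) / mean(nonzero)) ** 2
    if adi < 1.32:
        return "Smooth" if cv2 < 0.49 else "Erratic"
    return "Intermittent" if cv2 < 0.49 else "Lumpy"
```

For example, a series that alternates between 10 units and zero has an ADI of 2 and a CV² of 0, so it classifies as intermittent.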

## Forecast validation


By default, forecast validation is enabled. To make sure the forecast generated is accurate, Demand Planning will monitor and update you on the forecast quality or accuracy. If Demand Planning determines the forecast requires additional validation, Demand Planning will delay publishing the forecast and you will see a message that displays the date and time when the forecast will be published on the AWS Supply Chain web application.

You can also opt out, and Demand Planning will not monitor your forecast. For more information on how to opt out, see [Opt-out preference](https://docs.aws.amazon.com/aws-supply-chain/latest/adminguide/data-protection.html#opt-out-preference).

You can view the last published demand plan in read-only mode.

## Lags


Lags represent the time interval between when the forecast is created and when the forecasted period is realized. You can configure up to three forecast lags when you configure your demand plan. For more information, see [Create your first demand plan](onboarding.md). The forecast accuracy metrics display the analysis based on the defined lag intervals.

Forecasts for the defined lags are generated for every planning cycle and the accuracy metrics can only be evaluated after the corresponding number of planning cycles. For example, if you choose lag six, accuracy metrics for lag six forecast will be calculated after six planning cycles.

![\[Demand pattern example\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/demand_pattern.png)


**Note**  
When you change the lag configuration, the drop-down values displayed are the newly selected lags. Choose **Refresh Metrics** to view the latest metrics. When you change the time interval (daily/weekly/monthly/yearly), or hierarchy (product/site/customer/channel) granularity, the previous lag metrics will no longer be available when you choose **Refresh Metrics**. The recalculation results will display the latest demand planning cycle as the only cycle in history.

Choose **Export Metrics** to download a detailed file that includes granular data corresponding to the aggregated metrics displayed on the web application. The downloaded file will contain the following information:
+ Timestamp - Forecasted Period, Forecast Creation Date, Last Actual Period, Lag
+ XYZ segment (smooth, intermittent, erratic or lumpy)
+ Granularity - Product/site/customer/channel as configured
+ Baseline forecasts - P10, P50 and P90
+ Actual demand
+ Metrics - Bias Units, Bias %, MAPE, WAPE (at granular level, MAPE and WAPE are the same)

# Demand plan


After the forecast is generated, you can review the forecast values on the **Demand Plan** tab. The **Enterprise demand plan** is a single workbook that serves as a collaborative platform, providing a centralized location for you to consolidate and synchronize the forecasting effort.

The Demand Plan table displays the following information:
+ **Forecasted Demand** – Displays the system generated forecast and includes the following three values:
  + **Lower Bound** – Forecast prediction that is typically higher than the actual demand around 10 percent of the time.
  + **Median Demand** – Forecast prediction that is typically higher than the actual demand 50 percent of the time (central estimate).
  + **Upper Bound** – Forecast prediction that is typically higher than the actual demand around 90 percent of the time.
**Note**  
*Lower and Upper Bound* information is only displayed when a *product\_id* is selected. *Median Demand* is displayed at both the aggregate level and when a single *product\_id* is selected.
+ **Demand Plan** – Median Demand is replicated in this row to allow for overrides.
+ **Actual Demand** – Displays demand history for the current and prior years.

  When comparing historical data on a weekly basis, Demand Planning will reference the closest Monday in the previous year, because Demand Planning considers Monday the starting day of the week. Due to variations between years and leap years, the corresponding week in the previous year might not have the exact same date. For example, to compare historical sales data for the week of 6/3/2023, which is a Monday, Demand Planning will reference the week with the closest Monday in the previous year, which is 7/2/2022. 
+ **Prior Forecast Versions** – Displays the last published demand plan. This will be blank during the first forecast creation because no history is available.
+ **Lifecycle and Events** – Displays the products in the demand plan that are New Product Introductions (NPI) or products that are nearing End of Life (EoL). When you hover over the **NPI** or **EoL** icons and more than one product is selected, you can view the number of products and the list of products. When only one product is selected, you can view the product metadata: the product available day for NPI, the discontinue day for EoL, and the forecast start and stop dates.
**Note**  
You will only see the number of products that are new or nearing EoL listed when the product category is set to all or when a higher level in product hierarchy is selected.
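The closest-Monday lookup for the prior-year comparison amounts to stepping back 52 weeks (364 days), which preserves the weekday. A hedged Python sketch of this convention (illustrative only; the service's exact rule is not published):

```python
from datetime import date, timedelta

def prior_year_week(monday):
    """Monday of the comparison week roughly one year earlier.

    Subtracting 52 weeks preserves the weekday, which is why the
    prior-year reference week rarely falls on the identical calendar
    date. Sketch of the documented Monday-anchored comparison.
    """
    assert monday.weekday() == 0, "weeks start on Monday"
    return monday - timedelta(weeks=52)
```

For the week of Monday, March 6, 2023, this returns Monday, March 7, 2022, one calendar day off the anniversary date.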

You can use the **Graph** toggle button to hide or show the graph view. You can hide or show the specific value by choosing the eye icon. When you filter by products, you can hover over the *i* help icon to view the product description, unit of measure (UoM), product available date, and discontinue date.

## Viewing the forecast


To view the forecast, complete the following steps:

1. On the **Enterprise demand plan** page, you can see the timestamp of the forecast generated. If the **Enterprise demand plan** is in *active* state, you can use the filters and make adjustments.

1. On the **Enterprise demand plan** page, under **All**, choose **Change category/product** to change the generated forecast view. By default, the forecast displayed represents the total forecast demand for all products within the defined scope or time horizon.

1. On the **Select Category/Product** page, you can select the product from the list or use the search box to search for a particular product by *Product ID* or *Description*.

1. Choose **Apply**. You can now view the filtered forecast for the selected product or category.
**Note**  
If you chose optional hierarchies during forecast configuration, the summary box will display the number of sites, customers, and channels in which the selected product is sold.

1. Under **Refine your search**, if you chose optional hierarchies during forecast configuration, you can filter for **Site**, **Channel**, or **Customer** to further refine your forecast. For example, if you chose **Site** and **Channel** hierarchy during forecast configuration, the filters for Site and Channel will be available on the **Demand Plan** page.

1. In the **Time interval** dropdown list, select the time interval to view the forecast. You can use this filter to adjust the time hierarchy and view the forecast in both table and graph form. The lowest value corresponds to the forecast granularity time interval setting. For example, if the time interval is *Weekly*, you can view the forecast at *Weekly*, *Monthly* and *Yearly*. 

   You can also use the **Viewing window start** and **Viewing window end** to narrow down the period that you want to view in the forecast, both in table and graph view. You can view the historical sales for 28 days, 52 weeks, 48 months, and 10 years.

**Time interval example 1**

The demand plan is generated at daily time intervals per configuration. You can view the demand plan at a weekly time interval by selecting that option in the Time Interval filter on the Demand Plan page. The system will aggregate values into weeks with Monday as the starting day of the week.

You can also view the demand plan at a monthly time interval by selecting the monthly option in the Time Interval filter. Because the demand plan is available at daily granularity, the system will aggregate values into Gregorian calendar months starting on day 1.

![\[Time interval example\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/Time-interval-example1.png)


**Time interval example 2**

The demand plan is generated at a weekly time interval per configuration. You can view the demand plan at a monthly time interval by selecting it in the Time Interval filter. Because weeks do not align exactly with month boundaries, the time boundaries for each month will not be strict Gregorian calendar months.

![\[Time interval example\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/Time_interval_example.png)


## Adding an override


This section describes how to manually edit the forecast to override the projected demand.

**Note**  
Manual forecast overrides from one planning cycle are automatically saved and reapplied on the next planning cycle. 

1. Under **Demand Plan**, you can add overrides on the graph by moving the dot to the desired value or update the values directly on the Demand Plan row in the table.

1. On the **Edit Quantity** page, under **Change**, select whether you want to increase or decrease the demand, or set a fixed amount.

1. Choose **Bulk edit** to bulk edit the forecast and add an override.

   The **Edit your forecast** page appears.

1. Under **Change**, select the dropdown to increase or decrease the demand, or enter a value.

1. Under **Reason Code**, select one of the following options: *Promotion*, *Holiday*, *Seasonal*, *New Product*, *Product Rampdown*, or *Others*. A reason code is mandatory to successfully process the override. Adding more descriptive notes to a forecast override is optional.

1. Choose **Save and Update**.

   When you create an override, the impact can be viewed throughout the relevant levels of hierarchies. You can create many overrides but only the last override will be considered. After an override is created, a *clock* icon appears under **Demand Plan**. When you choose the *clock* icon, you can view the most recent change in the planning cycle. Choose **View more changes** to view past updates.

1. To make multiple overrides at the same time, from the **Edit Quantity**, choose **Go to bulk editing**. You can also choose **Bulk Edit** against **Demand Plan**.
**Note**  
You can bulk edit only from the table.

1. On the **Edit your forecast** page, you can select all check boxes or a check box for each time period that you want to update, and then enter the updates.

1. Choose **Save and Update**.

   The **Forecasted Demand** is updated.

## Exporting data plan files


You can export **Demand Plan**, **Forecast Demand**, **Prior Forecast Versions**, and **Actual Demand History** from Demand Planning as individual .csv files.

**Note**  
The exported .csv file will contain the entire demand plan, regardless of which filters were active on the **Demand Planning** page at the time of export.

To export the data plan, complete the following steps:

1. On the **Enterprise demand plan** page, select the vertical ellipsis.

1. Choose **Export Data Plan**.  
![\[Exporting data plans\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/export_data_plan.png)

1. On the **Export** page, select the required data you would like to download.

1. Choose **Export**.

   The file is downloaded on your local computer.

## Importing forecast overrides


You can use the **Import Forecast Overrides** option to import forecast overrides from a .csv file.

To upload the forecast overrides through a .csv file, complete the following steps:

1. On the **Enterprise demand plan** page, select the vertical ellipsis.

1. Choose **Import Forecast Overrides**.

   The **Import Forecast Overrides** page appears.  
![\[Importing forecast overrides\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/import_forecast_overrides.png)

1. Under **Upload files**, choose **Download CSV template** to download the .csv file you need to use to add the override values.

   The .csv file contains the headers from the dataset that you used to generate the forecast. The file can contain up to 1,000 rows, and the file size must not exceed 5 MB.

1. After you update the .csv file, drag and drop it, or choose **select files** to add the file.

1. Choose **Upload overrides**.

   If the upload fails, check the following:
   + Make sure the required fields *override\_start\_date*, *override\_end\_date*, *value*, and *reason\_code* are populated.
   + The supported reason codes are *Promotion*, *Holiday*, *Seasonal*, *New Product*, *Product Rampdown*, and *Others*.
   + Make sure the *override\_start\_date* and *override\_end\_date* are the first day of the week or month, depending on your configuration.

1. Under **Import Forecast Overrides Status**, you will see the status of all the forecast overrides you uploaded.

   You can filter the forecast override status by **Data Uploaded**, **User ID**, or upload status.
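
The pre-upload checks described above can be sketched as a small local validation script. This is an illustrative sketch only, not an AWS Supply Chain tool: the column names mirror the required fields listed above, and the date check assumes a weekly plan whose periods start on Monday.

```python
import csv
from datetime import date

REQUIRED = {"override_start_date", "override_end_date", "value", "reason_code"}
REASON_CODES = {"Promotion", "Holiday", "Seasonal", "New Product",
                "Product Rampdown", "Others"}

def validate_overrides(path, max_rows=1000):
    """Check a forecast-override .csv before uploading it."""
    errors = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED - set(reader.fieldnames or [])
        if missing:
            errors.append(f"missing columns: {sorted(missing)}")
            return errors
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            if i - 1 > max_rows:
                errors.append(f"more than {max_rows} data rows")
                break
            if any(not row[c].strip() for c in REQUIRED):
                errors.append(f"row {i}: empty required field")
            elif row["reason_code"] not in REASON_CODES:
                errors.append(f"row {i}: unsupported reason code {row['reason_code']!r}")
            else:
                start = date.fromisoformat(row["override_start_date"])
                if start.weekday() != 0:  # weekly plans: periods start on Monday
                    errors.append(f"row {i}: start date is not a Monday")
    return errors
```

A monthly configuration would instead check that the date is the first day of the month (`start.day == 1`); adjust the check to match your demand plan settings.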

## Demand Plan scheduler


Schedulers in Demand Planning determine when forecasts are generated and demand plans are finalized. Schedulers can run automatically at set time intervals (auto schedulers) or be triggered manually. Auto schedulers ensure that the planning process runs smoothly and consistently according to predefined timelines, while manual schedulers give you the flexibility to initiate forecast refreshes and finalize demand plans on your own schedule.
+ Manual refresh and release – Make sure you choose **Manual** under **Demand Plan Scheduler** when you configure demand planning. To start a forecast refresh, on the **Demand Plan** page, select the three dots on the top-right, and choose **Generate Forecast**.

  Select **Finalize demand plan** if the demand plan is final and ready to be released to downstream processes.

  After the demand plan is final, the information is published to the *Forecast* data entity in Data Lake and to Amazon S3. The status of this plan on the demand plan page changes to *Published*. You can view the Amazon S3 link under *Settings* > *Organization* > *Demand Planning* > *Publish Demand Plans*. The **Generate forecast** button appears so that you can start the next planning cycle.

  When **Finalize demand plan** is not selected, Demand Planning publishes the forecast as an interim version to the *Forecast* data entity in Data Lake. The status changes to *Closed*. The **Generate forecast** button appears so that you can start the next planning cycle. Demand Planning initiates a new forecast as set on the demand planning configuration page and uses the same start date as the previous plan.
+ Automatic refresh and release – Make sure you choose **Auto** under **Demand Plan Scheduler** when you configure demand planning. For more information, see [Create your first demand plan](onboarding.md).



# Forecast lock


You can use the forecast lock feature to lock specific periods in your forecast to prevent any further edits or adjustments. To configure the forecast lock, on the Demand Plan settings page, enter a number between zero and the forecast time horizon minus one to lock the first *x* forecast periods. The default value is 0, which indicates that no periods are locked.

The forecast lock is not applied to the initial forecast; it takes effect from the second demand planning cycle, carrying over the finalized values from the previous demand plan. In the demand plan, locked periods are indicated by a *lock* icon. The change history icon displays the reason code *PLAN\_LOCKED* for audit purposes at the most granular level. After a forecast period is locked, the lock applies to all products within that timeframe.
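
As a rough sketch of this carry-over behavior, the following example (using hypothetical data structures, not a product API) shows the first *x* periods of a new cycle inheriting the finalized values from the previous plan:

```python
def apply_forecast_lock(new_forecast, prior_plan, lock_period):
    """Carry finalized values from the prior plan into the first
    `lock_period` periods of the new cycle; the rest stay editable."""
    if lock_period <= 0:  # default 0 means no periods are locked
        return list(new_forecast)
    locked = prior_plan[:lock_period]  # values frozen (reason code PLAN_LOCKED)
    return locked + list(new_forecast[lock_period:])

# Weekly plan, lock period 2: the first two weeks keep the prior
# plan's finalized quantities; later weeks take the new forecast.
print(apply_forecast_lock([10, 20, 30, 40, 50], [12, 22, 32, 42, 52], 2))
# → [12, 22, 30, 40, 50]
```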

When the forecast granularity is changed, forecast overrides from prior planning cycles are not carried over to the current planning cycle. The prior forecast and accuracy metrics also do not display any data in the demand plan, and any prior forecast locks are no longer valid. It takes two consecutive forecast executions in the modified granularity to apply a new forecast lock. You can unlock forecast periods by setting the configuration to zero and starting a new forecast.

The following example shows how the intra-cycle forecast refresh scheduler works with forecast lock when it's disabled, using the following settings:
+ Demand plan granularity – Weekly
+ Forecast horizon selected – 5
+ Intra-cycle forecast refresh schedule – Disabled
+ Final forecast publish – 7th day of the week
+ Lock period – 2

![\[Displays intra-cycle forecast refresh scheduler works (when it's disabled) with forecast lock\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/intra-cycle_with_forecast_lock.png)


The following example shows how the intra-cycle forecast refresh scheduler works with forecast lock when it's enabled, using the following settings:
+ Demand plan granularity – Weekly
+ Forecast horizon selected – 5
+ Intra-cycle forecast refresh schedule – Enabled
+ Final forecast publish – 7th day of the week
+ Interim forecast publish – 3rd day of the week
+ Lock period – 2

![\[Displays intra-cycle forecast refresh scheduler works (when it's enabled) with forecast lock\]](http://docs.aws.amazon.com/aws-supply-chain/latest/userguide/images/forecast_intracycle.png)


# Forecast model analyzer


Forecast model analyzer is a self-service tool that you can use to run forecast experiments on multiple forecast models, with forecast periods in the past or future. After the experiments run, you can review the results of the different forecast models. Using accuracy metrics and a visual comparison between forecasts and actual demand, you can choose the forecast model that best suits your business data patterns. You can run the forecast model analyzer while the production demand plan is running; the two do not interfere with each other.

**Note**  
Forecast model analyzer is an optional workflow. If you do not have multiple forecast models to compare, you can continue to use the default forecast model recommendations provided by AWS Supply Chain.

The forecast model analyzer supports two main evaluation scenarios:
+ Back test scenario – You set the forecast start date in the past. In this scenario, forecasts are created, and accuracy metrics are calculated and reported for forecast periods that overlap with actual demand periods.
+ Forward forecast scenario – You do not set the forecast start date, and there is no overlap between forecast and actual data. In this scenario, forecasts are created, but because actual demand data is not available for future periods, accuracy metrics are not calculated or reported. You can still verify how the demand is forecasted against recent trends and prior years' demand.

Make sure the demand plan settings are configured before you execute the forecast model analyzer. The forecast model analyzer inherits the demand plan settings for *time interval* and *hierarchy granularity*, while providing the flexibility to adjust the forecast horizon and optionally select the forecast start date.

You can choose to run a back test or a forward forecast scenario. The default is the forward forecast scenario, where you do not specify a forecast start date and the start date is based on the last order date in the actual demand history. For more information, see [Create your first demand plan](onboarding.md). However, if you choose to run a back test scenario, you can override the forecast start date and select a date in the past for back testing purposes. When the selected forecast start date is later than the *outbound\_order\_line* dataset end date, the last order date in the actual demand history is used by default. When the selected forecast start date is before the *outbound\_order\_line* start date, or if the length of the demand history is insufficient, the forecast fails and displays an error. For more information, see [Prerequisites before uploading your dataset](data_quality.md).

We recommend selecting the first of the month for monthly intervals, or a Monday for weekly intervals. If you choose a different date, Demand Planning automatically adjusts it to the nearest default date. For example, if you select a Wednesday as the forecast start date, Demand Planning selects the following Monday as the forecast start date for weekly intervals. Similarly, selecting May 10, 2024 results in June 1, 2024 as the planning cycle start date for monthly intervals.
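
This adjustment rule can be sketched as follows; `snap_start_date` is a hypothetical helper for illustration, not part of Demand Planning:

```python
from datetime import date, timedelta

def snap_start_date(selected, interval):
    """Adjust a selected forecast start date to the nearest default:
    the next Monday for weekly plans, or the first day of the next
    month for monthly plans. Dates already on a default are kept."""
    if interval == "weekly":
        days_ahead = (7 - selected.weekday()) % 7  # 0 if already a Monday
        return selected + timedelta(days=days_ahead)
    if interval == "monthly":
        if selected.day == 1:
            return selected
        year = selected.year + selected.month // 12
        month = selected.month % 12 + 1
        return date(year, month, 1)
    raise ValueError(f"unsupported interval: {interval}")

print(snap_start_date(date(2024, 5, 10), "monthly"))  # → 2024-06-01
```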

**Note**  
Make sure you have historical demand data covering at least four times the forecast period you enter.

After reviewing the model analyzer results, you can select or change the forecast algorithm in the forecast analyzer tool. Alternatively, you can skip the model analyzer and directly select or change the forecast algorithm to be used. When the model analyzer is not used, AWS Supply Chain picks the default forecast method for your dataset.

Forecast Model Analyzer produces forecasts and forecast metrics across multiple models. For the list of included models, see [Forecast Algorithms](forecast-algorithims.md).

## Viewing the forecast model analyzer details


To view the generated forecast model analyzer details, complete the following steps:

1. In the left navigation pane on the AWS Supply Chain dashboard, choose **Demand Planning** and then choose **Forecast Model Analyzer**.

1. Under **Forecast Model Analyzer**, you can view the metadata for each iteration of the model analyzer, including a forecast summary with key metrics (such as the count of products, sites, channels, and customers for which forecasts were created), the forecast scope (time interval, forecast horizon, and forecast start date), the list of datasets used, the forecast granularity, and the input data used.

1. Under **Forecast(s) Vs. Actual Demand**, you can view a graph that displays the actual demand history, prior year demand, and the forecast, so you can analyze trends and seasonality. You can adjust the **Viewing window start** and **Viewing window end** to review historical periods. Depending on the configured time interval, you can view historical sales for 28 days, 52 weeks, 48 months, or 10 years. You can view and compare up to five forecast results simultaneously.

1. Under **Measures**, choose **Edit** to edit the selected forecast models.

1. Under **Model Overview and Selection**, the table displays a summary of the forecast methods that were evaluated. In a back testing scenario, the table also displays aggregate forecast accuracy metrics, such as WAPE, Bias %, MAPE, and sMAPE. Additionally, you can choose **Select** to select the forecast model. The change is applied during the subsequent forecast cycle.

1. Choose **Apply Selection to Demand Plan**.

   You can view up to two forecast model analyzer results simultaneously. The most recent analyzer result remains fully interactive, allowing you to select and apply the preferred forecast method after carefully evaluating the products; the selection is applied in the next forecast generation. The previous analyzer result is read-only. You can export both results of the forecast methods along with the actual demand history. The exported data includes detailed information at the forecast period and granularity level, with forecasts at the P10/P50/P90 quantiles. For back test scenarios, the export also includes actual demand data and the corresponding accuracy metrics.

   You can modify the forecast selection method using the forecast model analyzer or under the demand plan settings at any time. The changes are applied during the subsequent forecast cycle. The demand plan page shows metadata about the forecast method for the current and the next forecast model.
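
The aggregate accuracy metrics shown in the model overview (WAPE, Bias %, MAPE, and sMAPE) have standard formulations, sketched below. How Demand Planning weights and aggregates these metrics internally is not documented here, so treat this as a reference sketch:

```python
def accuracy_metrics(actual, forecast):
    """Standard demand-forecast accuracy metrics over paired series."""
    n = len(actual)
    abs_err = [abs(a - f) for a, f in zip(actual, forecast)]
    # WAPE: total absolute error weighted by total actual demand
    wape = 100 * sum(abs_err) / sum(actual)
    # Bias %: signed over/under-forecasting relative to total demand
    bias = 100 * sum(f - a for a, f in zip(actual, forecast)) / sum(actual)
    # MAPE: mean of per-period absolute percentage errors
    mape = 100 * sum(e / a for e, a in zip(abs_err, actual)) / n
    # sMAPE: symmetric variant, bounded even when actuals are small
    smape = 100 * sum(2 * e / (a + f)
                      for e, a, f in zip(abs_err, actual, forecast)) / n
    return {"WAPE": wape, "Bias %": bias, "MAPE": mape, "sMAPE": smape}
```

For example, `accuracy_metrics([100, 100], [110, 90])` yields a WAPE and MAPE of 10% with zero bias, because the over- and under-forecasts cancel out.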

# Manage Demand Plan settings


You can update the Demand Planning settings at any time to make your forecasts more accurate. The updated settings take effect when the forecast is next successfully generated.

**Note**  
Your prior forecast versions will be unavailable when you modify the *Time Interval* and *Hierarchy levels* on the **Demand Plan** page, because those prior versions will no longer align with the new forecast settings.  
When you modify the *Time interval* or *Hierarchy* configuration and regenerate the forecast, the accuracy metrics are not displayed because their values are no longer relevant.

1. In the left navigation pane on the AWS Supply Chain dashboard, choose the **Settings** icon.

1. Under **Organization**, choose **Demand Planning**.

   The **Demand Planning Setting** page appears.

   Use the steps in [Create your first demand plan](onboarding.md) to edit the Demand Planning configuration settings.

# Role-based access control


AWS Supply Chain Demand Planning offers two default access levels:
+ Manage Access
  + Full demand planning capabilities (create, configure, generate forecasts)
  + Add overrides and publish demand plans
  + Export plans and reports
  + Access data validations, demand pattern analysis, and Model Analyzer
+ View Access
  + View created and published demand plans
  + View demand pattern analysis (**Demand patterns** tab in the **Forecast review** page)

**Topics**
+ [

# Managing user access
](manage-user-access.md)

# Managing user access


AWS Supply Chain administrators can modify roles and permissions. 

**Topics**
+ [

# Adding new users
](add-new-users.md)
+ [

# Modifying existing user access
](modify-user-access.md)
+ [

# Creating custom roles
](create-custom-roles.md)
+ [

# Dataset requirements
](dataset-requirements.md)

# Adding new users


To add new users, follow these steps:

1. Choose **Settings**, **Users and Permissions**, and **Users**.

1. Choose **Add New User** and search for the user.

1. Assign a permission role.

# Modifying existing user access


To modify existing user access, follow these steps:

1. Choose **Settings**, **Users and Permissions**, and **Users**.

1. From the **Permission Role** drop-down menu, select the appropriate role.
**Note**  
Users can have only one permission role. For multiple access privileges, create a custom role. 

# Creating custom roles


To create custom roles, follow these steps:

1. Choose **Settings**, **Users and Permissions**, and **Create New Role**.

1. Enter a **Role Name** and choose **Manage** or **View access** in the **Demand Planning** section.

1. Configure dataset access.

1. Choose **Save**.

# Dataset requirements


The following are important dataset requirements:
+ Default roles automatically include access to all required datasets.
+ Custom roles must be granted access to seven essential datasets: asc\_adp\_dp\_segmentation, asc\_adp\_forecast, asc\_adp\_planning\_cycle\_accuracy, outbound\_order\_line, product, product\_alternate, and supplementary\_time\_series.
+ Access to "asc\_adp\_dp\_segmentation" is specifically required for demand pattern and recommendation functionality.