

# Legacy HRNN recipes
<a name="legacy-user-personalization-recipes"></a>

Legacy HRNN recipes are no longer available. This documentation is for reference purposes.

We recommend using the aws-user-personalization (User-Personalization) recipe instead of the legacy HRNN recipes. User-Personalization improves upon and unifies the functionality offered by the HRNN recipes. For more information, see [User-Personalization recipe](native-recipe-new-item-USER_PERSONALIZATION.md).

Amazon Personalize can automatically choose the most appropriate hierarchical recurrent neural network (HRNN) recipe based on its analysis of the input data. This option is called AutoML. For more information, see [Using AutoML to choose an HRNN recipe (API only)](training-solution-auto-ml.md).

**Topics**
+ [Using AutoML to choose an HRNN recipe (API only)](training-solution-auto-ml.md)
+ [HRNN recipe (legacy)](native-recipe-hrnn.md)
+ [HRNN-Metadata recipe (legacy)](native-recipe-hrnn-metadata.md)
+ [HRNN-Coldstart recipe (legacy)](native-recipe-hrnn-coldstart.md)

# Using AutoML to choose an HRNN recipe (API only)
<a name="training-solution-auto-ml"></a>

Amazon Personalize can automatically choose the most appropriate hierarchical recurrent neural network (HRNN) recipe based on its analysis of the input data. This option is called AutoML. To perform AutoML, set the `performAutoML` parameter to `true` when you call the [CreateSolution](API_CreateSolution.md) API. 

You can also specify the list of recipes that Amazon Personalize examines to determine the optimal recipe, based on a metric you specify. In this case, you call the `CreateSolution` operation, specify `true` for the `performAutoML` parameter, omit the `recipeArn` parameter, and include the `solutionConfig` parameter, specifying the `metricName` and `recipeList` as part of the `autoMLConfig` object. 

The following table shows how a recipe is chosen. You must specify either `performAutoML` or `recipeArn`, but not both. AutoML is performed only with the HRNN recipes.


| performAutoML | recipeArn | solutionConfig | Result | 
| --- | --- | --- | --- | 
| true | omit | omitted | Amazon Personalize chooses the recipe | 
| true | omit | autoMLConfig: metricName and recipeList specified | Amazon Personalize chooses a recipe from the list that optimizes the metric | 
| omit | specified | omitted | You specify the recipe | 
| omit | specified | specified | You specify the recipe and override the default training properties | 

**Note**  
When `performAutoML` is `true`, all parameters of the `solutionConfig` object are ignored except for `autoMLConfig`.
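The second row of the table can be sketched as the following request. This is a minimal illustration, not a definitive implementation: the solution name, AWS account ID, and dataset group ARN are hypothetical placeholders, and the actual SDK call is shown commented out.

```python
import json

# CreateSolution request parameters for AutoML with an explicit recipe list.
create_solution_params = {
    "name": "hrnn-automl-solution",  # hypothetical name
    "datasetGroupArn": "arn:aws:personalize:us-east-1:111122223333:dataset-group/my-dataset-group",
    # performAutoML and recipeArn are mutually exclusive; recipeArn is omitted.
    "performAutoML": True,
    "solutionConfig": {
        # When performAutoML is true, everything in solutionConfig except
        # autoMLConfig is ignored.
        "autoMLConfig": {
            "metricName": "precision_at_25",
            "recipeList": [
                "arn:aws:personalize:::recipe/aws-hrnn",
                "arn:aws:personalize:::recipe/aws-hrnn-metadata",
            ],
        }
    },
}

print(json.dumps(create_solution_params, indent=2))

# With the AWS SDK for Python (boto3) installed and credentials configured:
# import boto3
# personalize = boto3.client("personalize")
# response = personalize.create_solution(**create_solution_params)
```

Amazon Personalize trains with each recipe in `recipeList` and keeps the one that optimizes `precision_at_25`.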

# HRNN recipe (legacy)
<a name="native-recipe-hrnn"></a>

**Note**  
Legacy HRNN recipes are no longer available. This documentation is for reference purposes.  
We recommend using the aws-user-personalization (User-Personalization) recipe instead of the legacy HRNN recipes. User-Personalization improves upon and unifies the functionality offered by the HRNN recipes. For more information, see [User-Personalization recipe](native-recipe-new-item-USER_PERSONALIZATION.md).

The Amazon Personalize hierarchical recurrent neural network (HRNN) recipe models changes in user behavior to provide recommendations during a session. A session is a set of user interactions within a given timeframe with a goal of finding a specific item to fill a need, for example. By weighing a user's recent interactions higher, you can provide more relevant recommendations during a session.

HRNN accommodates user intent and interests, which can change over time. It takes ordered user histories and automatically weights them to make better inferences. HRNN uses a gating mechanism to model the discount weights as a learnable function of the items and timestamps.

Amazon Personalize derives the features for each user from your dataset. If you have done real-time data integration, these features are updated in real time according to user activity. To get a recommendation, you provide only the `USER_ID`. If you also provide an `ITEM_ID`, Amazon Personalize ignores it.
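A recommendation request against an HRNN-based campaign can be sketched as follows. The campaign ARN, account ID, and user ID are hypothetical placeholders; the live call to the `personalize-runtime` endpoint is shown commented out.

```python
import json

# GetRecommendations request parameters for a campaign created from an
# HRNN solution version. Only the user ID is required.
get_recommendations_params = {
    "campaignArn": "arn:aws:personalize:us-east-1:111122223333:campaign/my-hrnn-campaign",
    "userId": "12345",
    # No itemId: HRNN needs only the USER_ID and ignores an ITEM_ID if provided.
}

print(json.dumps(get_recommendations_params, indent=2))

# With boto3 installed and credentials configured:
# import boto3
# runtime = boto3.client("personalize-runtime")
# response = runtime.get_recommendations(**get_recommendations_params)
# for item in response["itemList"]:
#     print(item["itemId"])
```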

The HRNN recipe has the following properties:
+  **Name** – `aws-hrnn`
+  **Recipe Amazon Resource Name (ARN)** – `arn:aws:personalize:::recipe/aws-hrnn`
+  **Algorithm ARN** – `arn:aws:personalize:::algorithm/aws-hrnn`
+  **Feature transformation ARN** – `arn:aws:personalize:::feature-transformation/JSON-percentile-filtering`
+  **Recipe type** – `USER_PERSONALIZATION`

The following table describes the hyperparameters for the HRNN recipe. A *hyperparameter* is an algorithm parameter that you can adjust to improve model performance. Algorithm hyperparameters control how the model performs. Featurization hyperparameters control how to filter the data to use in training. The process of choosing the best value for a hyperparameter is called hyperparameter optimization (HPO). For more information, see [Hyperparameters and HPO](customizing-solution-config-hpo.md). 

The table also provides the following information for each hyperparameter:
+ **Range**: [lower bound, upper bound]
+ **Value type**: Integer, Continuous (float), Categorical (Boolean, list, string)
+ **HPO tunable**: Can the parameter participate in HPO?


| Name | Description | 
| --- | --- | 
| Algorithm hyperparameters | 
| `hidden_dimension` |  The number of hidden variables used in the model. *Hidden variables* recreate users' purchase history and item statistics to generate ranking scores. Specify a greater number of hidden dimensions when your Item interactions dataset includes more complicated patterns. Using more hidden dimensions requires a larger dataset and more time to process. To decide on the optimal value, use HPO. To use HPO, set `performHPO` to `true` when you call the [CreateSolution](API_CreateSolution.md) and [CreateSolutionVersion](API_CreateSolutionVersion.md) operations. Default value: 43 Range: [32, 256] Value type: Integer HPO tunable: Yes  | 
| `bptt` |  Determines whether to use the back-propagation through time technique. *Back-propagation through time* is a technique that updates weights in recurrent neural network-based algorithms. Use `bptt` for long-term credit assignment, connecting delayed rewards to early events. For example, a delayed reward can be a purchase made after several clicks, and an early event can be an initial click. Even within the same event types, such as a click, it's a good idea to consider long-term effects and maximize the total rewards. To consider long-term effects, use larger `bptt` values. Using a larger `bptt` value requires larger datasets and more time to process. Default value: 32 Range: [2, 32] Value type: Integer HPO tunable: Yes  | 
| `recency_mask` |  Determines whether the model should consider the latest popularity trends in the Item interactions dataset. Latest popularity trends might include sudden changes in the underlying patterns of interaction events. To train a model that places more weight on recent events, set `recency_mask` to `true`. To train a model that equally weighs all past interactions, set `recency_mask` to `false`. To get good recommendations using an equal weight, you might need a larger training dataset. Default value: `True` Range: `True` or `False` Value type: Boolean HPO tunable: Yes  | 
| Featurization hyperparameters | 
| `min_user_history_length_percentile` |  The minimum percentile of user history lengths to include in model training. *History length* is the total amount of data about a user. Use `min_user_history_length_percentile` to exclude a percentage of users with short history lengths. Users with a short history often show patterns based on item popularity instead of the user's personal needs or wants. Removing them can train models with more focus on underlying patterns in your data. Choose an appropriate value after you review user history lengths, using a histogram or similar tool. We recommend setting a value that retains the majority of users but removes the edge cases. For example, setting `min_user_history_length_percentile` to `0.05` and `max_user_history_length_percentile` to `0.95` includes all users except those with history lengths in the bottom or top 5%. Default value: 0.0 Range: [0.0, 1.0] Value type: Float HPO tunable: No  | 
| `max_user_history_length_percentile` |  The maximum percentile of user history lengths to include in model training. *History length* is the total amount of data about a user. Use `max_user_history_length_percentile` to exclude a percentage of users with long history lengths because data for these users tends to contain noise. For example, a robot might have a long list of automated interactions. Removing these users limits noise in training. Choose an appropriate value after you review user history lengths using a histogram or similar tool. We recommend setting a value that retains the majority of users but removes the edge cases. For example, setting `min_user_history_length_percentile` to `0.05` and `max_user_history_length_percentile` to `0.95` includes all users except those with history lengths in the bottom or top 5%. Default value: 0.99 Range: [0.0, 1.0] Value type: Float HPO tunable: No  | 
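The hyperparameters above can be set explicitly through `solutionConfig` when you create a solution. The following is a sketch under these assumptions: the solution name, account ID, and dataset group ARN are placeholders, the values chosen are illustrative rather than recommended, and the API expects both parameter maps as string-to-string maps.

```python
import json

# CreateSolution request that pins the aws-hrnn recipe and overrides its
# algorithm and featurization hyperparameters.
create_solution_params = {
    "name": "hrnn-tuned-solution",  # hypothetical name
    "datasetGroupArn": "arn:aws:personalize:us-east-1:111122223333:dataset-group/my-dataset-group",
    "recipeArn": "arn:aws:personalize:::recipe/aws-hrnn",
    "solutionConfig": {
        # All values are strings, even numbers and booleans.
        "algorithmHyperParameters": {
            "hidden_dimension": "128",   # within the [32, 256] range
            "bptt": "16",                # within the [2, 32] range
            "recency_mask": "true",
        },
        "featureTransformationParameters": {
            "min_user_history_length_percentile": "0.05",
            "max_user_history_length_percentile": "0.95",
        },
    },
}

print(json.dumps(create_solution_params, indent=2))

# import boto3
# personalize = boto3.client("personalize")
# response = personalize.create_solution(**create_solution_params)
```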

# HRNN-Metadata recipe (legacy)
<a name="native-recipe-hrnn-metadata"></a>

**Note**  
Legacy HRNN recipes are no longer available. This documentation is for reference purposes.  
We recommend using the aws-user-personalization (User-Personalization) recipe instead of the legacy HRNN recipes. User-Personalization improves upon and unifies the functionality offered by the HRNN recipes. For more information, see [User-Personalization recipe](native-recipe-new-item-USER_PERSONALIZATION.md).

The HRNN-Metadata recipe predicts the items that a user will interact with. It is similar to the [HRNN](native-recipe-hrnn.md) recipe, with additional features derived from contextual, user, and item metadata (from Interactions, Users, and Items datasets, respectively). HRNN-Metadata provides accuracy benefits over non-metadata models when high quality metadata is available. Using this recipe might require longer training times.

The HRNN-Metadata recipe has the following properties:
+  **Name** – `aws-hrnn-metadata`
+  **Recipe Amazon Resource Name (ARN)** – `arn:aws:personalize:::recipe/aws-hrnn-metadata`
+  **Algorithm ARN** – `arn:aws:personalize:::algorithm/aws-hrnn-metadata`
+  **Feature transformation ARN** – `arn:aws:personalize:::feature-transformation/featurize_metadata`
+  **Recipe type** – `USER_PERSONALIZATION`

The following table describes the hyperparameters for the HRNN-Metadata recipe. A *hyperparameter* is an algorithm parameter that you can adjust to improve model performance. Algorithm hyperparameters control how the model performs. Featurization hyperparameters control how to filter the data to use in training. The process of choosing the best value for a hyperparameter is called hyperparameter optimization (HPO). For more information, see [Hyperparameters and HPO](customizing-solution-config-hpo.md). 

The table also provides the following information for each hyperparameter:
+ **Range**: [lower bound, upper bound]
+ **Value type**: Integer, Continuous (float), Categorical (Boolean, list, string)
+ **HPO tunable**: Can the parameter participate in hyperparameter optimization (HPO)?


| Name | Description | 
| --- | --- | 
| Algorithm Hyperparameters | 
| `hidden_dimension` |  The number of hidden variables used in the model. *Hidden variables* recreate users' purchase history and item statistics to generate ranking scores. Specify a greater number of hidden dimensions when your Item interactions dataset includes more complicated patterns. Using more hidden dimensions requires a larger dataset and more time to process. To decide on the optimal value, use HPO. To use HPO, set `performHPO` to `true` when you call the [CreateSolution](API_CreateSolution.md) and [CreateSolutionVersion](API_CreateSolutionVersion.md) operations. Default value: 43 Range: [32, 256] Value type: Integer HPO tunable: Yes  | 
| `bptt` |  Determines whether to use the back-propagation through time technique. *Back-propagation through time* is a technique that updates weights in recurrent neural network-based algorithms. Use `bptt` for long-term credit assignment, connecting delayed rewards to early events. For example, a delayed reward can be a purchase made after several clicks, and an early event can be an initial click. Even within the same event types, such as a click, it's a good idea to consider long-term effects and maximize the total rewards. To consider long-term effects, use larger `bptt` values. Using a larger `bptt` value requires larger datasets and more time to process. Default value: 32 Range: [2, 32] Value type: Integer HPO tunable: Yes  | 
| `recency_mask` |  Determines whether the model should consider the latest popularity trends in the Item interactions dataset. Latest popularity trends might include sudden changes in the underlying patterns of interaction events. To train a model that places more weight on recent events, set `recency_mask` to `true`. To train a model that equally weighs all past interactions, set `recency_mask` to `false`. To get good recommendations using an equal weight, you might need a larger training dataset. Default value: `True` Range: `True` or `False` Value type: Boolean HPO tunable: Yes  | 
| Featurization hyperparameters | 
| `min_user_history_length_percentile` |  The minimum percentile of user history lengths to include in model training. *History length* is the total amount of data about a user. Use `min_user_history_length_percentile` to exclude a percentage of users with short history lengths. Users with a short history often show patterns based on item popularity instead of the user's personal needs or wants. Removing them can train models with more focus on underlying patterns in your data. Choose an appropriate value after you review user history lengths, using a histogram or similar tool. We recommend setting a value that retains the majority of users but removes the edge cases. For example, setting `min_user_history_length_percentile` to `0.05` and `max_user_history_length_percentile` to `0.95` includes all users except those with history lengths in the bottom or top 5%. Default value: 0.0 Range: [0.0, 1.0] Value type: Float HPO tunable: No  | 
| `max_user_history_length_percentile` |  The maximum percentile of user history lengths to include in model training. *History length* is the total amount of data about a user. Use `max_user_history_length_percentile` to exclude a percentage of users with long history lengths because data for these users tends to contain noise. For example, a robot might have a long list of automated interactions. Removing these users limits noise in training. Choose an appropriate value after you review user history lengths using a histogram or similar tool. We recommend setting a value that retains the majority of users but removes the edge cases. For example, setting `min_user_history_length_percentile` to `0.05` and `max_user_history_length_percentile` to `0.95` includes all users except those with history lengths in the bottom or top 5%. Default value: 0.99 Range: [0.0, 1.0] Value type: Float HPO tunable: No  | 
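Rather than setting the HPO-tunable hyperparameters above by hand, you can let Amazon Personalize tune them. The following is an illustrative sketch; the solution name, account ID, and dataset group ARN are placeholders, and the live call is shown commented out.

```python
import json

# CreateSolution request that enables HPO for the aws-hrnn-metadata recipe.
# Personalize then tunes the HPO-tunable hyperparameters
# (hidden_dimension, bptt, recency_mask) automatically.
create_solution_params = {
    "name": "hrnn-metadata-hpo-solution",  # hypothetical name
    "datasetGroupArn": "arn:aws:personalize:us-east-1:111122223333:dataset-group/my-dataset-group",
    "recipeArn": "arn:aws:personalize:::recipe/aws-hrnn-metadata",
    "performHPO": True,
}

print(json.dumps(create_solution_params, indent=2))

# import boto3
# personalize = boto3.client("personalize")
# response = personalize.create_solution(**create_solution_params)
```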

# HRNN-Coldstart recipe (legacy)
<a name="native-recipe-hrnn-coldstart"></a>

**Note**  
Legacy HRNN recipes are no longer available. This documentation is for reference purposes.  
We recommend using the aws-user-personalization (User-Personalization) recipe instead of the legacy HRNN recipes. User-Personalization improves upon and unifies the functionality offered by the HRNN recipes. For more information, see [User-Personalization recipe](native-recipe-new-item-USER_PERSONALIZATION.md).

Use the HRNN-Coldstart recipe to predict the items that a user will interact with when you frequently add new items and interactions and want to get recommendations for those items immediately. The HRNN-Coldstart recipe is similar to the [HRNN-Metadata](native-recipe-hrnn-metadata.md) recipe, but it allows you to get recommendations for new items. 

In addition, you can use the HRNN-Coldstart recipe when you want to exclude from training items that have a long list of interactions, either because of a recent popularity trend or because the interactions are highly unusual and would introduce noise in training. With HRNN-Coldstart, you can filter out less relevant items to create a subset for training. The subset of items, called *cold items*, consists of items with relatively few recent interaction events in the Item interactions dataset. An item is considered a cold item when it has both of the following:
+ Fewer interactions than a specified maximum number of interactions. You specify this value in the recipe's `cold_start_max_interactions` hyperparameter.
+ A shorter relative duration than the maximum duration. You specify this value in the recipe's `cold_start_max_duration` hyperparameter.

To reduce the number of cold items, set a lower value for `cold_start_max_interactions` or `cold_start_max_duration`. To increase the number of cold items, set a greater value for `cold_start_max_interactions` or `cold_start_max_duration`.
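The two thresholds are set through the recipe's featurization parameters when you create a solution. The sketch below uses illustrative values, not recommendations; the solution name, account ID, and dataset group ARN are placeholders, and the live call is shown commented out.

```python
import json

# CreateSolution request for aws-hrnn-coldstart with explicit cold-item
# thresholds. Items with fewer than 10 interactions, all occurring within
# 3 days of the latest item's timestamp, are treated as cold items.
create_solution_params = {
    "name": "hrnn-coldstart-solution",  # hypothetical name
    "datasetGroupArn": "arn:aws:personalize:us-east-1:111122223333:dataset-group/my-dataset-group",
    "recipeArn": "arn:aws:personalize:::recipe/aws-hrnn-coldstart",
    "solutionConfig": {
        # All values are strings, even numbers.
        "featureTransformationParameters": {
            "cold_start_max_interactions": "10",   # lower than the default 15
            "cold_start_max_duration": "3.0",      # days
            "cold_start_relative_from": "latestItem",
        }
    },
}

print(json.dumps(create_solution_params, indent=2))

# import boto3
# personalize = boto3.client("personalize")
# response = personalize.create_solution(**create_solution_params)
```

Lowering `cold_start_max_interactions` or `cold_start_max_duration` shrinks the cold-item subset; remember that the resulting count must stay within the limits below.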



HRNN-Coldstart has the following cold item limits:
+ `Maximum cold start items`: 80,000
+ `Minimum cold start items`: 100

If the number of cold items is outside this range, attempts to create a solution will fail.

The HRNN-Coldstart recipe has the following properties:
+  **Name** – `aws-hrnn-coldstart`
+  **Recipe Amazon Resource Name (ARN)** – `arn:aws:personalize:::recipe/aws-hrnn-coldstart`
+  **Algorithm ARN** – `arn:aws:personalize:::algorithm/aws-hrnn-coldstart`
+  **Feature transformation ARN** – `arn:aws:personalize:::feature-transformation/featurize_coldstart`
+  **Recipe type** – `USER_PERSONALIZATION`

For more information, see [Choosing a recipe](working-with-predefined-recipes.md).

The following table describes the hyperparameters for the HRNN-Coldstart recipe. A *hyperparameter* is an algorithm parameter that you can adjust to improve model performance. Algorithm hyperparameters control how the model performs. Featurization hyperparameters control how to filter the data to use in training. The process of choosing the best value for a hyperparameter is called hyperparameter optimization (HPO). For more information, see [Hyperparameters and HPO](customizing-solution-config-hpo.md). 

The table also provides the following information for each hyperparameter:
+ **Range**: [lower bound, upper bound]
+ **Value type**: Integer, Continuous (float), Categorical (Boolean, list, string)
+ **HPO tunable**: Can the parameter participate in HPO?


| Name | Description | 
| --- | --- | 
| Algorithm hyperparameters | 
| `hidden_dimension` | The number of hidden variables used in the model. *Hidden variables* recreate users' purchase history and item statistics to generate ranking scores. Specify a greater number of hidden dimensions when your Item interactions dataset includes more complicated patterns. Using more hidden dimensions requires a larger dataset and more time to process. To decide on the optimal value, use HPO. To use HPO, set `performHPO` to `true` when you call the [CreateSolution](API_CreateSolution.md) and [CreateSolutionVersion](API_CreateSolutionVersion.md) operations. Default value: 149 Range: [32, 256] Value type: Integer HPO tunable: Yes  | 
| `bptt` | Determines whether to use the back-propagation through time technique. *Back-propagation through time* is a technique that updates weights in recurrent neural network-based algorithms. Use `bptt` for long-term credit assignment, connecting delayed rewards to early events. For example, a delayed reward can be a purchase made after several clicks, and an early event can be an initial click. Even within the same event types, such as a click, it's a good idea to consider long-term effects and maximize the total rewards. To consider long-term effects, use larger `bptt` values. Using a larger `bptt` value requires larger datasets and more time to process. Default value: 32 Range: [2, 32] Value type: Integer HPO tunable: Yes  | 
| `recency_mask` |  Determines whether the model should consider the latest popularity trends in the Item interactions dataset. Latest popularity trends might include sudden changes in the underlying patterns of interaction events. To train a model that places more weight on recent events, set `recency_mask` to `true`. To train a model that equally weighs all past interactions, set `recency_mask` to `false`. To get good recommendations using an equal weight, you might need a larger training dataset. Default value: `True` Range: `True` or `False` Value type: Boolean HPO tunable: Yes  | 
| Featurization hyperparameters | 
| `cold_start_max_interactions` |  The maximum number of user-item interactions an item can have to be considered a cold item. Default value: 15 Range: Positive integers Value type: Integer HPO tunable: No  | 
| `cold_start_max_duration` | The maximum duration in days, relative to the starting point, for a user-item interaction to be considered a cold start item. To set the starting point of the user-item interaction, set the `cold_start_relative_from` hyperparameter. Default value: 5.0 Range: Positive floats Value type: Float HPO tunable: No  | 
| `cold_start_relative_from` |  Determines the starting point from which the HRNN-Coldstart recipe calculates `cold_start_max_duration`. To calculate from the current time, choose `currentTime`. To calculate from the timestamp of the latest item in the Item interactions dataset, choose `latestItem`. This setting is useful if you frequently add new items. Default value: `latestItem` Range: `currentTime`, `latestItem` Value type: String HPO tunable: No  | 
| `min_user_history_length_percentile` |  The minimum percentile of user history lengths to include in model training. *History length* is the total amount of data about a user. Use `min_user_history_length_percentile` to exclude a percentage of users with short history lengths. Users with a short history often show patterns based on item popularity instead of the user's personal needs or wants. Removing them can train models with more focus on underlying patterns in your data. Choose an appropriate value after you review user history lengths, using a histogram or similar tool. We recommend setting a value that retains the majority of users but removes the edge cases. For example, setting `min_user_history_length_percentile` to `0.05` and `max_user_history_length_percentile` to `0.95` includes all users except those with history lengths in the bottom or top 5%. Default value: 0.0 Range: [0.0, 1.0] Value type: Float HPO tunable: No  | 
| `max_user_history_length_percentile` |  The maximum percentile of user history lengths to include in model training. *History length* is the total amount of data about a user. Use `max_user_history_length_percentile` to exclude a percentage of users with long history lengths because data for these users tends to contain noise. For example, a robot might have a long list of automated interactions. Removing these users limits noise in training. Choose an appropriate value after you review user history lengths using a histogram or similar tool. We recommend setting a value that retains the majority of users but removes the edge cases. For example, setting `min_user_history_length_percentile` to `0.05` and `max_user_history_length_percentile` to `0.95` includes all users except those with history lengths in the bottom or top 5%. Default value: 0.99 Range: [0.0, 1.0] Value type: Float HPO tunable: No  | 