Business User Reference (Optimization - Forecast)

To run the Forecast Accelerator, you must create a new Model Object from the Optimization > Models menu, using the Forecast Model Class to instantiate the machine learning model and give it the required parameters. The model will then be added to the list and will be editable.

New Model

  1. Go to Optimization > Models and add a new model.

  2. Define the name of the model and choose Forecast as Model Class.

  3. The new model opens. It is based on the data you defined when deploying the Accelerator.

  4. When a new model is opened for the first time, it runs a short calculation to generate a parameters table used by some of the configuration options. This happens only once.

The resulting interface is shown below. The steps of the model are: Definition, Additional Configuration, Model Training, Model Predictions, and Export Forecast. They are explained in the following sections.

Definition Step

This step aims to define the input data and their mapping. You can also apply a filter to your data. There are two tabs:

  • Definition defines the historical data and the scope of the forecast.

  • Model Configuration defines the model's settings.

Definition Tab

In this tab, you define the data source (typically a transaction source, see Data Requirements (Optimization - Forecast)) and map it. Some general recommendations are:

  • Two different metrics can be forecast: Quantity or Revenue. The Quantity forecast is very similar to Accelerate Multifactor Elasticity Optimization. Depending on your choice, some options will differ; they are detailed throughout this documentation.

  • There are checkboxes at the bottom of the tab for automatic filtering of negative quantities and prices; it is recommended that you keep these checked.

  • If the Revenue at List Price field is mapped, its values are used to calculate the discount percentage from the Revenue and store it in the processed_data table as one of the model training features. The discount percentage values are automatically updated over the price range of the elasticity calculation, leading to more accurate predictions.

  • There is a checkbox to perform a log transformation of the metric (quantity or revenue). This may produce a better forecast in some cases, for example when sales are heavily skewed towards low values, i.e., when many long-tail products are present in the data (see the sketch after this list).

  • The Time Period field defines the level of aggregation for the forecast – daily, weekly, or monthly.

  • The forecast generated in the final step may be extended up to 15 future time periods (e.g. 15 weeks for a weekly forecast).

  • Additional categorical and numerical features may optionally be added to the model from the source to improve the forecast.

    • Examples to include in the categorical features are product hierarchies and product Pareto information. Categories should be unique for a given product and time period – fields such as channel and store should not be included here. If a category is not unique for a given product, the mode will be kept (i.e. the value most often associated with the given product).

    • Numerical features may include any numerical attributes that can be averaged over the selected time period, such as discounts, stock levels, or seasonal events. Values do not need to be unique – they will be averaged over time.

    • While numerical features can be included in the categorical features (for example, categories that use numerical codes), the same features should not be included in both categorical and numerical features at the same time. A warning will be displayed in this case.

    • In the case of a Revenue metric forecast, you can add a Customer field and some customer categorical features. The aggregation is then at the Product x Customer level by default. There is no Customer level in the Quantity-metric forecast.
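
To make the discount and log-transformation options above concrete, below is a minimal pandas sketch. It is an illustration only – the column names quantity, revenue, and revenue_at_list_price are hypothetical, and this is not the Accelerator's internal implementation.

```python
import numpy as np
import pandas as pd

# Hypothetical transactions already aggregated per product and time period.
df = pd.DataFrame({
    "quantity": [120, 3, 45],
    "revenue": [1080.0, 27.0, 427.5],
    "revenue_at_list_price": [1200.0, 30.0, 450.0],
})

# Discount percentage derived from Revenue and Revenue at List Price.
df["discount_pct"] = 1 - df["revenue"] / df["revenue_at_list_price"]

# Optional log transformation of the forecast metric (log1p keeps zero sales valid).
df["log_quantity"] = np.log1p(df["quantity"])

print(df)
```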

The right side of the dashboard shows the transactions in scope, taking both the user filters and the default ones into account. Only the fields mapped in the interface are displayed. The second portlet displays the filtered-out transactions; there, all the fields are displayed.

Model Configuration Tab

In this tab, you may configure details of the forecasting model. The default values will work well in most cases for a weekly forecast. Some general recommendations are:

  • The test set size defines how many time periods are held back to estimate the model’s performance on data it has not seen before. It should be small compared to the number of time periods covered in the source (up to ~20%).

  • The number of past time steps for lags and differences and the rolling statistical window help the model learn seasonal variation in the data. For example, for a weekly forecast, the default of 4 weeks will capture monthly variation. For a daily forecast it is recommended to increase these values to 30. For a monthly forecast, change to 3 (for quarterly variation) or 12 (for yearly variation) depending on data availability. These features are illustrated in the sketch after this list.

  • You may choose to perform automatic tuning of the model’s internal parameters to improve the results. The default parameters will work in most cases but if run time is not a concern, increasing the number of tuning trials to 100 may provide small further improvements.

    • The tuning does not need to be run every time a model is calculated. If the model has been tuned before and the data has not changed significantly, you may deselect Perform automatic model tuning and select Use last known parameters from the buttons that appear to use the previously tuned parameters. If this option is selected before any tuning is done, default parameters will be used.

    • An advanced user may apply their own configuration by choosing the Use parameters from parameters table option. The trainingParameters table, found in the parameters tables tab of the model tables, may be edited as needed.

    • The tuned parameters may be transferred to other models with a similar data scope to save time on tuning new models. Parameters can be exported from the model_parameters model table and imported into the trainingParameters parameter table. These parameters will then be used when choosing the Use parameters from parameters table option.

    • When working with very large datasets, the default parameters may result in poor model fits. Tuning a model with even a low number of trials will likely produce significantly better results in such cases.
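
The sketch below illustrates how lag, difference, and rolling-window features relate to the test split, using hypothetical column names (date, quantity) and pandas; the Accelerator's actual feature engineering may differ.

```python
import pandas as pd

# Hypothetical weekly history for a single product.
ts = pd.DataFrame({
    "date": pd.date_range("2024-01-07", periods=20, freq="W"),
    "quantity": [5, 7, 6, 8, 9, 11, 10, 12, 14, 13, 15, 16, 18, 17, 19, 20, 22, 21, 23, 24],
})

n_lags = 4     # number of past time steps for lags and differences (weekly default)
window = 4     # rolling statistical window
test_size = 4  # last periods held back as test set (~20% of 20 periods)

for lag in range(1, n_lags + 1):
    ts[f"lag_{lag}"] = ts["quantity"].shift(lag)   # value `lag` weeks ago
    ts[f"diff_{lag}"] = ts["quantity"].diff(lag)   # change over `lag` weeks

# Rolling statistics over the previous `window` periods (shifted to avoid leakage).
ts["rolling_mean"] = ts["quantity"].shift(1).rolling(window).mean()
ts["rolling_std"] = ts["quantity"].shift(1).rolling(window).std()

train, test = ts.iloc[:-test_size], ts.iloc[-test_size:]
print(len(train), "training periods,", len(test), "test periods")
```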

Additional Configuration Step

This step aims to define all the additional configuration needed to train a model. There are two tabs:

  • Additional Sources define up to three additional sources that may optionally extend the Definition step source data or provide future values for the forecast.

  • Aggregation Level defines the aggregation used to train the forecast, the granularity of the results in the final step, and some parameters for the export of the results.

Additional Sources Tab

This dashboard allows the user to define up to three additional sources that may optionally extend the Definition step source data or provide future values for the forecast. The available sources are:

  • Events source: to define business and calendar events.

  • Source with manually mapped feature: to extend a source column with future values or add a new feature to the transaction source.

  • Source with multiple features: to extend the source data for multiple features, but with a limitation that the manually mapped feature source does not have.

Events Source

This source can be used to define business and calendar events. It should contain two fields: Date and Event name (the fields may have any names). The table represents a list of dates and the names of the events occurring on those dates; a date can occur multiple times, once for each event on that date. Both past and future dates should be included so that the model can learn from past events and apply them to future dates.

An example source is shown below:
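
  Date         | Event name
  -------------|------------------
  2024-11-29   | Black Friday
  2024-12-24   | Christmas Eve
  2024-12-24   | End-of-year sale
  2025-11-28   | Black Friday

  (Illustrative values only; the actual field names, dates, and event names depend on your data. Note that a date appears once per event, and that both past and future dates are listed.)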

Upon selecting the source, inputs will appear to map the Date and Event names fields.

Source with Manually Mapped Feature

This source can be used to add a single column mapped to a column in the Definition step source. This allows for using columns that do not have matching names in the two sources. The secondary source is aggregated at the Date and Secondary Keys levels and the feature is summarized; it is then merged into the main source data, using the mapping entered by the user to match the fields between the main source and the secondary one (see the sketch after the list of inputs below).

The user inputs are:

  • Optional advanced filter

  • Date – The date field to match with the main source date field

  • Secondary Keys – Up to five fields that will be used to aggregate the secondary source and then merge it into the transaction source. Common examples include the SKU or product category.

    • After selecting keys, additional inputs will appear per key to map the keys to the matching columns from the transaction source.

    • Keys must be present in the definition step source and should be unique for a given time period – fields such as channel and store should not be included here.

  • Feature – The feature column that should be added/updated.

  • Mapping to Definition Source – Optional entry. If there is a value, the secondary source feature field will overwrite the values of this selected transaction field.

  • Feature aggregation method – How the feature should be aggregated. The available options are:

    • Mode (= most frequent value)

    • Mean

    • Max

    • Min

    • Mode should be used if the feature contains categories; the others may be used for numerical features.
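
As referenced above, here is a minimal pandas sketch of the aggregation and merge behaviour, assuming hypothetical column names (date, sku, and stock_level in the secondary source; date and product in the main source); it is not the Accelerator's own implementation.

```python
import pandas as pd

main = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-07", "2024-01-07", "2024-01-14"]),
    "product": ["A", "B", "A"],
    "quantity": [10, 5, 12],
})

secondary = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-07", "2024-01-07", "2024-01-14"]),
    "sku": ["A", "B", "A"],
    "stock_level": [100, 40, 90],
})

# 1. Aggregate the secondary source at the Date x Secondary Keys level,
#    using the chosen feature aggregation method (here: Mean).
agg = secondary.groupby(["date", "sku"], as_index=False)["stock_level"].mean()

# 2. Merge into the main source using the user-entered key mapping (sku -> product).
merged = main.merge(agg, left_on=["date", "product"],
                    right_on=["date", "sku"], how="left")
print(merged)
```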

Important note: The additional sources will not overwrite any of the data from the transaction source. If there are historical dates in an additional source, any selected feature columns that match existing columns in the Definition step source will be ignored in the training step and used only as future data for the forecast. A warning with details of the specific affected columns will be shown in the job details of the Model Training calculation in this case. If the intent is also to provide historical data, the feature column should not match any of the existing columns.

Source with Multiple Features

This source is much like the Source with manually mapped feature; the main difference is that any number of features may be included from the source. This leads to the following limitation: the names of any features intended to update the forecast must match the names of the columns mapped in the Definition step source.

There is one notable exception: mapping entries are still available for the Secondary Keys fields.

Important note: The additional sources will not overwrite any of the data from the transaction source. If there are historical dates in an additional source, any selected feature columns that match existing columns in the Definition step source will be ignored in the training step and used only as future data for the forecast. A warning with details of the specific affected columns will be shown in the job details of the Model Training calculation in this case. If the intent is also to provide historical data, the feature column should not match any of the existing columns.

Aggregation Level Tab

This tab defines the aggregation used for training, the split applied in postprocessing, and the export parameters.

Aggregation Fields

You define the aggregations applied before training the model. The product aggregation levels define the granularity on the product side. By default, the value is set to the mapped product field, but you can remove it. Using a coarser granularity will speed up the training but decrease the precision of the results. You can define any of the product categorical features (defined in Business User Reference (Optimization - Forecast) | Definition Tab) as an aggregation level.

If the model is revenue-based and you have defined a customer field, there is also a user input to define the customer aggregation levels. The default value is the mapped customer field. You can remove it and replace it with as many customer categorical features (defined in Business User Reference (Optimization - Forecast) | Definition Tab) as you want.

You also define a replacement value for null values. The default value is __null__. This string is used to replace null values in the categorical features, so that the aggregations take all the aggregation level values into account, including the null one. If you export the forecast data (see Business User Reference (Optimization - Forecast) | Export Forecast Step), the corresponding category values will be set back to null.
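
A small pandas sketch of the idea, with a hypothetical brand categorical feature:

```python
import pandas as pd

df = pd.DataFrame({"brand": ["Acme", None, "Acme"], "quantity": [10, 4, 6]})

# Null categories are replaced by the configured placeholder so that they
# form their own aggregation level value.
df["brand"] = df["brand"].fillna("__null__")
totals = df.groupby("brand", as_index=False)["quantity"].sum()

# When the forecast is exported, the placeholder is mapped back to null.
totals["brand"] = totals["brand"].mask(totals["brand"] == "__null__")
print(totals)
```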

Split Configuration

The training can be run at an aggregated level, but in general the user wants the results at the product level, or at the product and customer level if a customer level is enabled.

To define the split, a Split Fraction Length value is necessary. It determines how far back in time transactions are considered when calculating the split for each product (and customer) from the aggregated level. The value is expressed as a number of time periods.

During the preprocessing, if the aggregation levels do not contain the product (and customer) fields, a split weights table is calculated. It represents the weights used after the training to split the aggregated predicted values. The weights depend on the historical share of the revenue or the quantity along each aggregation level. During the prediction postprocessing, this table is used to split the data.
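
The following pandas sketch illustrates how such split weights could be derived and applied, assuming a hypothetical category aggregation level and hypothetical column names; the Accelerator's internal implementation may differ.

```python
import pandas as pd

# Historical sales over the last "Split Fraction Length" time periods.
history = pd.DataFrame({
    "category": ["Shoes", "Shoes", "Shoes"],
    "product": ["A", "B", "C"],
    "quantity": [60, 30, 10],
})

# Weight of each product = its share of the category total over that window.
history["weight"] = history["quantity"] / history.groupby("category")["quantity"].transform("sum")

# Split an aggregated prediction for one future period back to the product level.
aggregated = pd.DataFrame({"category": ["Shoes"], "prediction": [200.0]})
split = history.merge(aggregated, on="category")
split["product_prediction"] = split["weight"] * split["prediction"]
print(split[["product", "product_prediction"]])
```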

Export parameters

The final forecast values for future dates can be exported to a Data Source. Each row represents one future date, one product, and potentially one customer. For good database performance, it is important to define the keys of the Data Source.

The user can also choose whether to create a new Data Source or update an existing one.

  • New Data Source: the default name is built as Forecast<metric type><time period> (for instance: ForecastRevenueDaily) and can be changed. If a Data Source with the same name already exists, it will be overwritten during the export.

  • Existing Data Source: the user can either truncate or append the Data Source. There will be a check on the Data Source’s consistency with the model data. If it is not consistent (inconsistent mapping, or duplicated key values), the export will not run.

Model Training Step

The Continue button in the Additional Configuration step will automatically run the calculation that starts the Model Training step. This calculation will gather the chosen fields from the Definition step source and process the data into a time series aggregated at the chosen levels. It will then enrich the data with the following extra features:

  • Date-related features to learn sales seasonality

  • Lag, difference, and rolling statistics features chosen in the Model Configuration tab of the Definition step

  • Product ages and recent sales activity features

If additional sources have been selected, they will be aggregated to the same level as the Definition step source and loaded into model tables. If they contain historical dates that match dates found in the Definition step source, they will be merged into the data used for model training.

The calculation will then split the data into a training set and test set according to the value chosen in the Model Configuration and use the training data to train a forecasting model, estimating its performance on the test set. The training results are displayed in four tabs:

  • Model Training Results

  • Train and Test Forecasts

  • Training Curves

  • Elasticity Settings

Model Training Results Tab

This tab displays portlets that summarize the performance of the forecasting model:

  • The first portlet is the forecast chart itself, where you can compare the actual values (either quantity or revenue) to the predictions. The green zone at the right of the chart represents the values of the test set, meaning that the model did not learn from those actual values.

  • The Feature Importances chart shows how strong the influence of each feature is on the model predictions. This may be used to assess the impact of including features in the additional categorical and numerical features inputs. Features with low importance can be removed to reduce future training time without significantly affecting model performance. Features with very low importances are not shown. If there are many features in your model, you may need to extend the height of this chart to view them clearly.

  • The Metrics table shows various statistics (such as MAE, RMSE, RMSSE, and the Poisson negative log-likelihood) describing the model’s performance on both the training and test data. A small illustration of MAE and RMSE follows this list.

  • Note that when training a model using the log of the quantity field, the MAE, RMSE and Poisson metrics will reflect the errors in the predictions of the log quantity, and are therefore not directly comparable with results from a model that does not use the log quantity.

  • Scatter charts of the model predictions vs. the actual historical quantities for both the training data and the test data show how close the model’s predictions are to the known truth. The black dashed line represents the line of perfect predictions. A good model should show points being close to this line for both the training and test data.
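
As referenced above, MAE and RMSE can be computed as follows. This is a generic illustration with made-up values, not the Accelerator's code.

```python
import numpy as np

actual = np.array([12.0, 8.0, 15.0, 9.0])
predicted = np.array([11.0, 9.5, 14.0, 10.0])

mae = np.mean(np.abs(actual - predicted))            # mean absolute error
rmse = np.sqrt(np.mean((actual - predicted) ** 2))   # root mean squared error
print(f"MAE={mae:.2f}, RMSE={rmse:.2f}")
```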

Train and Test Forecasts Tab

This tab displays forecasts alongside the historical metric (revenue or quantity) for the aggregation level values selected in the left panel. The test set is shaded in green. The historical price is also displayed.

Training Curves Tab

This tab is mainly for data scientists and advanced users. It displays diagnostic metrics (RMSSE, RMSE, MAE, and Poisson negative log-likelihood) of the model performance over the course of the training.

Elasticity Settings Tab

The elasticity calculation is available only if the metric is Quantity and the Product is one of the aggregation levels. If so, the user can choose whether to select the Calculate Elasticity checkbox.

This tab defines the settings for the elasticity calculation in the Model Predictions step. The elasticity calculation may be optionally disabled to save on computation time. When Calculate elasticities is selected, the following options are available:

  • Range of prediction for elasticity fit – The elasticity is calculated around the historical product price considered at each date. This range defines the price range used to calculate the elasticity fit. If the historical price is 100 and the range is 20, then the fit will be calculated for any price from 80 to 120.

  • Adjust elasticity curve fit based on last known price prediction – This setting will force the calculated elasticity curve to go through the model prediction at the last known price. This may increase the reliability of elasticities for products/datasets with low price variation and reduce bias in outputs.

  • Fitting to get a quantity of 0 with a high price – This setting replaces parameter b of the sigmoid curve with 0 (see the formula in the next step), which makes the quantity drop to 0 at high prices. An illustrative curve-fitting sketch follows this list.

  • Fallback level – For products with low transaction counts or low-quality elasticity curves (due to low historical price variation, for example), a fallback level may be set to provide a simple elasticity for these products based on the elasticities of other products at the defined fallback level. The available levels will be taken from the additional categorical features selected in the Definition step.

  • Minimum number of transactions per product – If a fallback level is set, products with fewer than this number of transactions will use the fallback elasticity. Note that this is separate from the minimum number of transactions defined on the Definition step, which defines the minimum number of transactions for a product to be included in the data scope.

  • Minimum elasticity curve R² score – If a fallback level is set, products with R² scores below this value will use the fallback elasticity. The R² score used for this threshold is the average R² score for each product across all forecasted dates. As such, it is still possible to see R² scores under this threshold in the Elasticities table on the Model Predictions step for selected dates.
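
As referenced above, the sketch below fits an s-shaped curve to model predictions over a price range. It uses a generic logistic form chosen purely for illustration, with hypothetical values; the exact formula used by the Accelerator is described in Business User Reference (Optimization - Multifactor Elasticity) and may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(price, a, b, k, p0):
    """Generic s-curve: quantity falls from about a towards b as price rises.
    Forcing b = 0 makes the quantity drop to 0 at high prices."""
    return b + (a - b) / (1.0 + np.exp(k * (price - p0)))

# Hypothetical model predictions over +/-20 around a historical price of 100.
prices = np.linspace(80, 120, 9)
predicted_quantity = np.array([52, 50, 47, 43, 38, 31, 24, 18, 14], dtype=float)

params, _ = curve_fit(s_curve, prices, predicted_quantity,
                      p0=[predicted_quantity.max(), 0.0, 0.1, 100.0], maxfev=10000)
fitted = s_curve(prices, *params)
r2 = 1 - np.sum((predicted_quantity - fitted) ** 2) / np.sum(
    (predicted_quantity - predicted_quantity.mean()) ** 2)
print("fitted parameters:", np.round(params, 3), "R2:", round(r2, 3))
```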

Model Predictions Step

If you are happy with the training results, you can click Continue. This will begin the calculation for the Model Predictions step. This calculation trains a final forecasting model on the full historical data and produces a recursive forecast (for the number of time steps selected in the Definition step).

If the model is quantity-based and you selected the elasticity calculation, the calculation then uses the forecast predictions at varying prices to derive elasticity curves for each product and each date of the forecast. More explanations about elasticity can be found in Business User Reference (Optimization - Multifactor Elasticity) | Model Predictions Step.

The results are then displayed in the following tabs:

Overview Tab

This tab displays overall results for all products in a number of charts and tables.

The sum of the individual forecasts is shown as the Total forecast, much like on the Train and Test Forecasts Tab of the Model Training step. The difference here is that the green shaded area represents the future forecasted values.

If the elasticity calculation is enabled, the following portlets are also displayed:

  • Three histograms showing the distribution of R² scores, the simple elasticities, and the k values (for more details about k, refer to Business User Reference (Optimization - Multifactor Elasticity) | Model Predictions Step).

  • A table showing the elasticity data for each product and date of the forecast is included.

  • If the fallback level is set, a second table shows the simple elasticity for each value of the selected fallback level. Products identified as part of the fallback are removed from the previous elasticity data table.

Details Tab

This tab displays the forecast at a detailed level, set by the user. The granularity corresponds to the aggregation levels plus the export key levels, both defined in the Additional Configuration step, Aggregation Level tab (Business User Reference (Optimization - Forecast) | Aggregation Level Tab). As for the Overview tab, the green shaded area represents the future forecasted values.

If the elasticity calculation is enabled:

  • the elasticity data table, filtered on the user entries, is also displayed. This table may be empty if the product is part of the fallback products.

  • the user inputs include a date selection.

  • if the user selects a product and a date, the corresponding elasticity curve is displayed. The chart shows the raw model predictions and a fitted s-curve, showing the R² score of the fit in the chart title.

Export Forecast Step

If you are happy with the results, you can click Continue. This will begin the calculation for the Export Forecast step. You have defined the parameters for this calculation in the Additional Configuration step, Aggregation Level tab (see Business User Reference (Optimization - Forecast) | Export parameters). The calculation saves the forecast data in a Data Source, accessible in the Data Manager. This Data Source contains the forecast data at a product, customer (if defined), and date level. For each row, the forecast metric is saved, and the unit price and the second metric are calculated.
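
For illustration only: assuming the usual relationship revenue = quantity × unit price, a row of a quantity-metric forecast predicting 120 units at a calculated unit price of 9.50 would carry a calculated revenue of 1,140; for a revenue-metric forecast, the quantity is derived from the forecast revenue and the unit price in the same way.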

Once the calculation is done, the dashboard displays a single button which is a shortcut to the forecast Data Source.