This section details the Model Class and the logics deployed by the List Price Optimization Accelerator. For each step, it explains the aim, the outputs, and the main reasons why you might modify the logics. If you need to modify the logics, follow the process in Optimization Accelerator Customization and refer to the documentation in Problem Modeling (Optimization Engine), Problem Description, and Problem Tables (Optimization Engine).
In this section:
List Price Optimization Model Class
The List Price Optimization Model Class organizes a list of logics to create the model architecture. It is transformed into a UI in the Pricefx platform that is organized in four steps:
Definition − Maps the sources of data and filters out invalid values.
Scope − Sets the scope of the optimization.
Configuration − Sets the parameters of the optimization.
Results − Reviews the outputs of the optimization.
There are two types of logics: calculation logics, which write tables in the model, and evaluation logics, whose purpose is only to display some results. The standard Model Class definition is documented in Model Class (MC).
All the logics of the List Price Optimization Accelerator follow a standard naming convention: first the LP_ prefix, then the step order number and the first letters of the step name, followed by Calc or Eval depending on the formula's nature, and finally the name of the tab. For example, LP_1_Def_Eval_Transactions is the evaluation logic of the Transactions tab in the Definition step (step 1). Additionally, there is a library logic named LP_Lib.
Library
The logic is LP_Lib.
Aim of the logic
This logic contains functions specifically needed for this Accelerator: reading its configuration from the application settings, applying user filters to each part of the model, preprocessing data for the charts, and tools for configuring the Optimization Engine. The elements ParametersUtils, LabelsUtils, and TablesUtils contain the names of numerous elements and fields within the models.
Common reasons to modify the logic
This library is where you can change the names within the model to reflect the user's business vocabulary. You can also write functions here to be used in different parts of the model class.
It is also the place where the definition of the optimization problem can be changed.
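As an illustration, adapting the model vocabulary could amount to editing a label map in the library. The sketch below is hypothetical: only the element name LabelsUtils comes from this Accelerator, while the keys and label values are examples.
// Hypothetical sketch of a label map inside the LP_Lib element LabelsUtils
// The keys and values are illustrative, not the Accelerator's actual entries
Map getLabels() {
    return [
            productLabel : "Product",        // change to e.g. "SKU" to match the business vocabulary
            customerLabel: "Customer Level",
            revenueLabel : "Revenue"
    ]
}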
Definition Step
There is no calculation logic run in this step. The tabs are Transactions and Costs.
Transactions Tab
The logics are LP_1_Def_Eval_Transactions and LP_1_Def_Eval_Transactions_Configurator.
Aim of the logic
These logics define the Data Source or Datamart, then the mapping of the entries for the transactions in the configurator. The main logic calls the configurator and the code for the dashboard portlets.
Outputs of the evaluation
Two portlets show the data that will be materialized in the model (table Aggregated) and the filtered-out rows.
Common reasons to modify the logic
The mapping could be changed if a field is removed or added.
Costs Tab
The logics are LP_1_Def_Eval_Cost and LP_1_Def_Eval_Cost_Configurator.
Aim of the logic
These logics define the Data Source or Datamart, then the mapping of the entries for the costs in the configurator. The main logic calls the configurator and the code for the dashboard portlets.
Outputs of the evaluation
Two portlets show the data that will be materialized in the model (table Aggregated) and the filtered-out rows.
Common reasons to modify the logic
The mapping could be changed if a field is removed or added.
Scope Step
The calculation logic is LP_2_Sco_Calc_Aggregating and there is one tab called Scope.
Calculation: Aggregating
The logic is LP_2_Sco_Calc_Aggregating.
Aim of the logic
This logic materializes the Data Sources in the table Aggregated.
Outputs of the calculation
The table Aggregated is created in the model and can be used in further steps.
Scope Tab
The logics are LP_2_Sco_Eval_Scope and LP_2_Sco_Eval_Scope_Configurator.
Aim of the logic
These logics let the end user choose the filters that define the scope of the optimization, through the configurator LP_2_Sco_Eval_Scope_Configurator. For example, the user can create a model whose scope is restricted to a set of customer levels or to a minimum product revenue. On the right, the tab displays charts to evaluate how the scope is defined.
Outputs of the evaluation
A map of filters that is read later in the code by libs.LP_Lib.ScopeUtils.inputs(model). In the subsequent parts of the code, the source data query, filtered on the scope, is obtained as in this sample:
def scope = libs.LP_Lib.ScopeUtils
def dmCtx = api.getDatamartContext()

// Build a query on the materialized Aggregated table and apply the user's scope filters to it
def query = scope.applyFiltersToQuery(
        dmCtx.newQuery(tables.getTable('Aggregated'), false).selectAll(true),
        scope.inputs(model)
)
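A possible continuation, assuming the standard Datamart context API, is to execute this query and iterate over the scoped rows:
// Execute the scoped query and loop over the returned rows (sketch only)
def result = dmCtx.executeQuery(query)
result?.getData()?.each { row ->
    // each row holds the aggregated transaction fields within the user-defined scope
}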
Common reasons to modify the logic
The main reason to modify these logics is to enrich the scope outputs.
Configuration Step
The calculation logic is LP_3_Conf_Calc_FilterScope and there are three tabs: Strategy, Alignments, and Advanced Parameters.
Calculation: Filtering
The logic is LP_3_Conf_Calc_FilterScope.
Aim of the logic
This logic materializes the aggregated table corresponding to the exact scope of the optimization. The reason for this logic is that, with large data volumes, filtering on the fly to display the user dashboards could time out.
Outputs of the calculation
A table called ScopedTransactions, to be used as a reference for the following steps (instead of the aggregated table called Aggregated, output by the Scope calculation).
Common reasons to modify the logic
It is possible to add, remove, or change the table fields. Be careful though: if you want to change the query that computes this table, the change has to be made in the library, in libs.LP_Lib.AggregationUtils.scopedAggregatedTableQuery.
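For orientation, a later logic would typically read from the materialized ScopedTransactions table rather than re-filter Aggregated on the fly. A minimal sketch, mirroring the query shape used in the Scope tab sample above:
// Sketch: query the ScopedTransactions table produced by this calculation
def dmCtx = api.getDatamartContext()
def scopedQuery = dmCtx.newQuery(tables.getTable('ScopedTransactions'), false).selectAll(true)
def scopedRows = dmCtx.executeQuery(scopedQuery)?.getData()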
Strategy
The logic is LP_3_Conf_Eval_Strategy_Configurator.
Aim of the logic
The tab is used to retrieve the user objectives and as such, it is a configurator. It contains all the information needed to guide the optimization process by setting the constraints and the goals to reach, except the alignment constraints.
Outputs of the evaluation
The user inputs are stored to be aggregated in the following step with the rest of the data. The problem tables of the Optimization Engine use the configuration inputs to define the optimization goals.
Common reasons to modify the logic
The most common small change is modifying the default values.
It is also quite common to change these logics by adding or modifying the constraints and objectives in the problem; for example, adding targets at some levels or setting thresholds to keep some values in check. These modifications are needed but not sufficient as the problem modeling itself must be changed to take them into account. It is possible to change the number of tabs in the configuration step, but then the Model Class definition has to be modified too.
Business Alignments
The logic is LP_3_Conf_Eval_BusinessAlignments_Configurator.
Aim of the logic
The tab is used to retrieve the user alignment objectives and as such, it is a configurator. It contains all the information needed to define the alignment constraints in the optimization process.
Outputs of the evaluation
The user inputs are stored to be aggregated in the following step with the rest of the data. The problem tables of the Optimization Engine use the configuration inputs to define the optimization goals.
An important point to notice: if an alignment is unchecked, its inputs are not visible in the configurator, but they are still saved in the model and are automatically restored when the alignment is checked again.
Common reasons to modify the logic
It can be difficult to make changes to this configurator, as it contains complex mechanisms. You should reach out to the Optimization team if a change is required here.
Advanced Parameters
The logic is LP_3_Conf_Eval_AdvancedParameters_Configurator.
Aim of the logic
The tab is used to retrieve the advanced parameters for the optimization run, and as such it is a configurator. It is an advanced user tab.
Outputs of the evaluation
Most of the user entries adjust the parameters of the Optimization Engine run. There are also fine-tuning values to adjust the problem description.
Common reasons to modify the logic
You may want to restrict access to changing these values or modify the default values.
Results Step
Calculations: Run Optimization and Prepare Results
Calculation 1: Run Optimization
The logic is LP_4_Res_Calc_RunOptimization.
Aim of the logic
The goal of this calculation is to create a simulation run and an optimization run and to retrieve their results. To do so, we need to create a Problem Description that details the structure of the problem to be solved by the Optimization Engine, and to provide endpoints for the OE to get the problem data. The previous steps change the problem by altering its scope and its objectives, and the data is fed directly to the OE through the model tables.
This step consists of:
Data manipulation to prepare the last tables needed by the OE. These logics are prefixed with Create_ and create the model tables prefixed with Problem_ that act as endpoints for the OE. Be careful, the table names follow a strict format: each endpoint must be named according to the Problem_nameOfTheSpace_nameOfTheScope pattern present in ProblemDescription.groovy and must return the corresponding data (see the naming sketch below). The way the OE reads data from these endpoints highlights the need for a well-thought-out Scope step: creating tables of the needed data, already computed and aggregated, implies knowing exactly where the required data is and how it needs to be transformed. Because these computations depend on the user inputs from business rules and objectives, the tables have to be created after the Configuration step, which is why the Create_ logics live in the RunOptimization calculation. It is therefore normal to refactor and improve these logics during the development of ProblemDescription.groovy.
Run.groovy element – Contains the code that handles the problem description. It takes the description of the problem and the advanced parameters entered by the user, and triggers the two optimization jobs via model.startJobTriggerCalculation. Each run returns prefixed tables with a similar structure, and the two jobs run in parallel. The first one is the optimization itself; its outputs are prefixed with “Optimized”. The second one is a simulation: it simulates the initial state of the optimization and serves as a reference to compare before/after values in the results dashboards; its outputs are prefixed with “Current”. The job type (optimization vs. simulation) is indicated by the input parameters of the model.startJobTriggerCalculation function.
An important piece of this process is the problem description itself, created in the library element LP_Lib/ProblemDescriptionDefinition.groovy. This element returns the problem description as a map. The content of the problem description is detailed in Problem Description.
In summary, once the problem description is created, it is sent via a Kafka message to trigger the instantiation of a job running an OE configured by this file. The OE has to have access to the correct endpoints to get the data and to know where to write back the results when the computation is finished.
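To make the naming rule concrete, here is a minimal sketch; the space and scope names are examples only and must match those declared in ProblemDescription.groovy:
// Hypothetical endpoint table name built from the strict Problem_ naming format
def spaceName = "Products"      // assumed space name
def scopeName = "ListPrices"    // assumed scope name
def endpointTableName = "Problem_${spaceName}_${scopeName}"   // => "Problem_Products_ListPrices"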
Outputs of the calculation
The Groovy code does some preparation work: the logics prefixed with “Create_” prepare the last tables needed by the OE and create the Problem_nameOfTheSpace_nameOfTheScope model tables that act as endpoints for the OE. Be careful, their names follow the strict format described above.
At the end of its run, the OE writes a set of model tables containing its results and the Glassbox information needed to understand why this solution was reached. This writing is done directly by the OE and is not related to a Groovy logic. It is performed by both OE jobs, the simulation one and the optimization one.
The Glassbox table provides optimization indicators for each pair of instantiated value finder – criterion. The simulation job does not create any Glassbox table.
The tables prefixed by Results_ present the state of the objectives and constraints at the end of the optimization.
The tables prefixed by Solution_ present the raw values that the system was meant to find (declared as Value_Finder in the Problem Description). The simulation job does not create any Solution table.
The tables prefixed by Simulation_ present the value of computed variables marked as exposed in the description, typically including values of interest such as forecasted quantities.
Common reasons to modify the logic
Any modification of the problem modeling and type of constraints to apply or objectives to reach implies a modification of the Problem Description, thus LP_Lib/ProblemDescriptionDefinition.groovy should change accordingly.
⚠ If the problem description changes, do not forget to check if it is necessary to change or create some Problem table.
In some cases, it could be useful to change the OE image and/or the OE tag that the job trigger refers to. Their values are in the element Run.groovy.
Calculation 2: Prepare Results
The logic is LP_Res_Calc_PrepareResults.
Aim of the logic
This calculation retrieves the outputs of the RunOptimization logic and reformats them into tables that can be used to show user-friendly optimization results. Each element stores one model table or a few related tables.
The StoreGlassboxTables element is a call to the Optimization Common Library to calculate aggregated metrics on the optimization agents (criteria and value finders). (Refer to https://pricefx.atlassian.net/wiki/spaces/ACCDEV/pages/4818370673/Optimization+Common+Libraries#GlassboxLib – internal access only.)
The Store_Optimized and Store_Current logics create the Optimized and Current tables, which mirror each other. The Current table details the state of the system before the optimization; the Optimized one details the state after it. A lot of visualizations therefore use these two tables directly to highlight the evolution of key values and the impact of the optimization. They are both created the same way, using the function defined in AggregationUtils.groovy.
The Store_Details_ logics write a table called Details, comparing the Current and the Optimized values at the finest granularity (a sketch follows this list). This table is used to display most of the values in the Results step dashboards.
The CreateOrUpdate_ChartParameterTable logic deals with a parameter table containing advanced parameters for the Results step, Influencers tab. This table lets an advanced user set different chart parameters.
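As an illustration of what the Details comparison amounts to, a single row could be derived from matching Current and Optimized rows as in the sketch below; the field names are hypothetical and do not come from this Accelerator.
// Hypothetical sketch of deriving one Details row from matching Current and Optimized rows
def buildDetailsRow(Map currentRow, Map optimizedRow) {
    BigDecimal reference = currentRow.listPrice as BigDecimal   // assumed field name
    BigDecimal optimized = optimizedRow.listPrice as BigDecimal
    return [
            product       : currentRow.product,                 // dimension values at the finest granularity
            customerLevel : currentRow.customerLevel,
            referenceValue: reference,
            optimizedValue: optimized,
            delta         : optimized - reference               // impact of the optimization on this value
    ]
}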
Outputs of the calculation
This calculation writes a collection of tables:
Current table – Details the state of the system before the optimization.
Optimized table – Details the state of the system after the optimization. Its architecture mirrors the Current table one.
Details table – The table contains the optimized value, the reference value (what would that value be if the OE did not optimize anything), their difference, and the dimension values. This way, the user can see the impact of the optimization on every adjusted value.
GlassboxVF_ tables – Their names are built as GlassboxVF_NameOfTheSpace_NameOfTheValueFinder; there is one table per value finder key (i.e. type of value finder). These tables store the overall values of each value finder.
GlassboxProbe_ tables – Their names are built as GlassboxProbe_NameOfTheSpace_NameOfTheValueFinder; there is one table per value finder key (i.e. type of value finder). These tables store the initial movement of each value finder.
GlassboxCriteria_ tables – Their names are built as GlassboxCriteria_NameOfTheSpace_NameOfTheCriterion; there is one table per criterion key (i.e. type of criterion). These tables store the overall values of each criterion.
Glassbox_AggregatedMetrics table – Summarizes the global interaction indicators between each value finder key and each criterion key.
Glassbox_VFs_by_Key table – Summarizes the global overall indicators of each value finder key.
Glassbox_Criteria_by_Key table – Summarizes the global overall indicators of each criterion key.
Glassbox_Spaces table – Summarizes the total number of criteria and value finders in each space.
Common reasons to modify the logic
If the problem description has been changed, then the parameter set that helps create the Current and the Optimized tables has to be changed too. It is located in the element AggregationUtils.groovy.
The other most common reason to change the logic is to reformat some data to ease the work of providing charts in the Result step tabs.
Impact Tab
The logics are LP_Res_Eval_Impact and LP_Res_Eval_Filter_Configurator. An embedded configurator sets the filters applied to the dashboard; this configurator is also used in the Details tab.
Aim of the logic
This tab exposes the results of the OE execution in an HTML summary and some complex graphs, to analyze the forecasted values. The data are filtered according to the user entries set by LP_Res_Eval_Filter_Configurator. This filter configurator is shared by both Impact and Details tabs.
Outputs of the evaluation
Common reasons to modify the logic
Add, modify, or remove visualizations. This step is one of the most straightforward ones and its modification should not impact the previous steps.
Details Tab
The logics are LP_Res_Eval_Details and LP_Res_Eval_Filter_Configurator. An embedded configurator sets the filters applied to the dashboard; this configurator is also used in the Impact tab.
Aim of the logic
This tab clearly shows the output data of the optimization. This way, the user can see the impact of the optimization on every adjusted value. The data are filtered according to the user entries set by LP_Res_Eval_Filter_Configurator. This filter configurator is shared by both Impact and Details tabs.
Outputs of the evaluation
Common reasons to modify the logic
Add, modify, or remove table outputs. In practice, if the problem description changes, this tab should provide tables to reflect the changes in the variables and constraints. The tables displayed here should allow the end user to access any useful information related to the optimization.
Glassbox and Drivers Tabs
The logics are LP_Res_Eval_Glassbox and LP_Res_Eval_Drivers.
Aim of the logic
These tabs expose the technical state of the OE execution at the end of the process. They are a development tool to help fine-tune the model.
Outputs of the evaluation
These logics display charts that show the satisfaction, influences, and impacts of the value finders and the criteria, the initial movements of the value finders, and the evolution of the criticality during the optimization process. For details, see Glassbox Dashboards.
Evaluation Tab
This tab mocks the model evaluation. More details are in the Query Results section below.
Model Evaluations
More details about the model evaluations are available here: https://pricefx.atlassian.net/wiki/spaces/KB/pages/3862364240/Model+Class+MC#Evaluation-(evaluations). The List Price Optimization Model Class has two evaluations:
query_results – handled by the logic LP_4_Res_Eval_Evaluation;
eval_product_batch – handled by the logic LP_4_Res_Eval_Evaluation_Product_Batch.
Depending on your needs, you can add as many model evaluations as you want on top of those two.
Query results
The logic is LP_4_Res_Eval_Evaluation.
Aim of the logic
The evaluation is used to access model results from outside of the model itself; for example, in another logic. The first step is to use api.model("ModelName") to get the model, and then to call the evaluate function on it to retrieve an answer that depends on the nature of the given parameters. The code needed to get these results is:
def model = api.model("TheModelUniqueName")
def results = model.evaluate(
        "query_results",
        [
                product       : "someProductID",
                customer_level: "aCustomerCategory"
        ]
)["Data"]
The customer level value is optional. The output of the evaluation is the dimensions and the key values of the optimization model, for the product and the customer level given as input parameters.
Outputs of the evaluation
The output of the call is a Map whose keys are the optimization problem agents and whose values are the corresponding values for the given product and customer level inputs.
Common reasons to modify the logic
If the optimization model depends on new fields and if it provides new values, the evaluator should be changed to take them into account.
It is also possible to add other evaluators to the same model.
Product batch
The logic is LP_4_Res_Eval_Evaluation_Product_Batch.
Aim of the logic
This evaluation is used by the Price Setting Package accelerator (/wiki/spaces/ACCDEV/pages/1716748886) to get all the optimized prices in one call.
The evaluation is used to access model results from outside of the model itself; for example, in another logic. The first step is to use api.model("ModelName") to get the model, and then to call the evaluate function on it to retrieve an answer. The code needed to get these results is:
def model = api.model("TheModelUniqueName")
def results = model.evaluate("eval_product_batch", [:])["ProductsWithPrice"]
Outputs of the evaluation
The output of the call is a Map with all the products of the scope and their optimized prices.
Common reasons to modify the logic
The main reason is a change in the formatting required by the Price Setting Package accelerator.