Technical User Reference (Optimization - Markdown)
The Markdown Model Class organizes a list of logics that create the model architecture. It is transformed into a UI in the Pricefx platform, organized in four steps:
Definition − Maps the sources of data and filters out invalid values.
Scope − Sets the scope of the optimization.
Configuration − Sets the parameters of the optimization.
Results − Looks at the outputs of the optimization.
There are two types of logics: calculation logics, which write tables in the model, and evaluation logics, whose only purpose is to display results. The standard Model Class definition is documented at https://pricefx.atlassian.net/wiki/spaces/KB/pages/3862364240.
All the logics of the Optimization - Markdown Accelerator follow a standard naming convention: the MD_ prefix, then the first letters of the step name, then Calc or Eval depending on the nature of the formula, then the name of the tab. In addition, there is a library logic named MD_Lib.
Library
The logic is MD_Lib.
This logic contains functions needed specifically for this Accelerator, such as reading its configuration from the application settings, applying the user filters in each part of the model, preprocessing the data for the charts, and many small helpers for chart rendering. The elements ParametersUtils, LabelsUtils, and TablesUtils contain the names of many elements and fields of the models.
This library is the place to change the names inside the model to reflect the user's business vocabulary. You can also write functions here to be reused in different places of the model class.
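As an illustration of how such a naming element might be used, here is a minimal sketch; the map contents and the tableName helper are assumptions made for this example, only the element name TablesUtils comes from this document:

```groovy
// Illustrative sketch only: a central map of model table names, kept in one
// place (e.g. the TablesUtils element) so that renaming a table to match the
// customer's vocabulary is a single-line change.
def tables = [
    sales      : "sales",
    stock      : "stock",
    competition: "competition",
]

// Hypothetical helper (not the actual MD_Lib source): resolve a table name,
// failing fast on an unknown key.
String tableName(String key) {
    def name = tables[key]
    if (name == null) {
        api.throwException("Unknown model table key: ${key}")
    }
    return name
}
```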
Definition Step
There is no calculation logic run in this step. The tabs are Sales, Products, Stock, and Competition, and their related logics are MD_Def_Eval_Sales, MD_Def_Eval_Sales_Configurator, MD_Def_Eval_Products, MD_Def_Eval_Products_Configurator, MD_Def_Eval_Stock, MD_Def_Eval_Stock_Configurator, MD_Def_Eval_Competition, and MD_Def_Eval_Competition_Configurator.
Sales Tab
The logics are MD_Def_Eval_Sales and MD_Def_Eval_Sales_Configurator.
These logics define the Data Source and the mapping of the entries for the transactions, in the configurator. The main logic calls the configurator and the code for the dashboard portlets.
Two portlets show the data that will be materialized in the model (table sales) and the filtered-out rows.
The mapping could be changed if a field is removed or added.
Products Tab
The logics are MD_Def_Eval_Products and MD_Def_Eval_Products_Configurator.
These logics define the Data Source and the mapping of the entries for the products to markdown, in the configurator. The main logic calls the configurator and the code for the dashboard portlets.
Two portlets show the data that will be used as a filter at the product x store level on sales data before materializing it (table sales) and the filtered-out rows.
The mapping could be changed if a field is removed or added.
Stock Tab
The logics are MD_Def_Eval_Stock and MD_Def_Eval_Stock_Configurator.
These logics define the Data Source and the mapping of the entries for the stock data, in the configurator. The main logic calls the configurator and the code for the dashboard portlets.
Two portlets show the data that will be materialized in the model (table stock) and the filtered-out rows.
The mapping could be changed if a field is removed or added.
Competition Tab
The logics are MD_Def_Eval_Competition and MD_Def_Eval_Competition_Configurator.
These logics enable the use of the Competition data in the model. If the competition is used, the logics define the Data Source and the mapping of the entries for the competition data, in the configurator. The main logic calls the configurator and the code for the dashboard portlets.
Nothing is shown in the output if the competition is not added. When the competition is set, two portlets show the data that will be materialized in the model (table competition) and the filtered-out rows.
The mapping could be changed if a field is removed or added.
Scope Step
The calculation logic is MD_Sco_Calc_Dataprep and there is one tab called Scope.
Calculation: Data Preparation
The logic is MD_Sco_Calc_Dataprep.
This logic validates some prerequisites and materializes the Data Sources in the three tables sales, stock, and competition.
The prerequisites are the consistency of the types of the product and store fields across the different sources, and the absence of negative values for quantities and prices.
There is no aggregation in the stored data, except to provide summary values, like revenue per product or per store.
The tables sales, stock, and competition are created in the model and can be used in further steps. These tables are the main connection to external data.
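A minimal sketch of the kind of prerequisite check described above, assuming the Pricefx DatamartContext query API; the Data Source name, the field names, and the exact result-iteration calls are illustrative assumptions, not the actual MD_Sco_Calc_Dataprep code:

```groovy
// Hypothetical validation sketch: count rows with missing or negative
// quantities/prices before materializing the sales Data Source into the
// model "sales" table. "sourceSales", "quantity", and "unit_price" are
// placeholder names for this example.
def dmCtx = api.getDatamartContext()
def source = dmCtx.getDataSource("sourceSales")
def query = dmCtx.newQuery(source)
query.select("quantity")
query.select("unit_price")

int invalidRows = 0
dmCtx.executeQuery(query)?.getEntries()?.each { row ->
    def qty = row.quantity as BigDecimal
    def price = row.unit_price as BigDecimal
    // Invalid rows are filtered out, not written to the model table.
    if (qty == null || price == null || qty < 0 || price < 0) {
        invalidRows++
    }
}
api.logInfo("MD_Sco_Calc_Dataprep", "Filtered out ${invalidRows} invalid rows")
```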
Scope Tab
The logics are MD_Sco_Eval_Scope and MD_Sco_Eval_Scope_Configurator.
These logics let the end user choose the filters to scope the optimization, through the configurator MD_Sco_Eval_Scope_Configurator. This means, for example, creating a model with a scope filtered on a set of product groups or on a minimum store revenue. On the right, it displays charts that help evaluate how the scope is defined.
The scope applies to the Sales data only. The Competition and Stock data are used only insofar as they join to the sales in scope.
The configurator produces:
A map of filters that are retrieved later in the code by libs.MD_Lib.ScopeUtils.inputs(model). The source data query, filtered on the scope, is built from these filters in the next parts of the code.
A dashboard with information, charts, and tables that summarize the scope of transactions taken into account according to the user filters.
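The interplay between the configurator inputs and the later query can be sketched as follows; only libs.MD_Lib.ScopeUtils.inputs(model) is named in this document, while the input names and filter fields are illustrative assumptions:

```groovy
// Read the user scope inputs back from the configurator (actual Accelerator call).
def scopeInputs = libs.MD_Lib.ScopeUtils.inputs(model)

// Hypothetical translation of those inputs into query filters on the sales data.
// Input names (productGroups, minStoreRevenue) and field names are assumptions.
def filters = []
if (scopeInputs.productGroups) {
    filters << Filter.in("product_group", scopeInputs.productGroups)
}
if (scopeInputs.minStoreRevenue != null) {
    filters << Filter.greaterOrEqual("store_revenue", scopeInputs.minStoreRevenue)
}
// The scoped sales query is then built from these filters in later elements.
```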
The main reason to modify these logics is to enrich the scope outputs with data from the stock or the competition sources, or to offer different filters. The filtering options exposed to the user are modified here.
Configuration Step
There is no calculation in this step. It is separated from the previous step only for better user experience. There are three tabs: General, Price, and Competition.
The logics are MD_Conf_Eval_General_Configurator, MD_Conf_Eval_Boundaries_Configurator, and MD_Conf_Eval_Competition_Configurator.
The Configuration tabs are used to retrieve the user objectives and as such, are configurators. They contain all the information needed to guide the optimization process by setting the constraints and the goals to reach. The separation into three tabs is mainly for user experience purposes. Each tab has a different meaning.
| Tab | Logic | Aim |
| --- | --- | --- |
| General | MD_Conf_Eval_General_Configurator | Sets the most global optimization inputs. |
| Price | MD_Conf_Eval_Boundaries_Configurator | Sets the goals for the shelf prices: change limits and fixed limits. |
| Competition | MD_Conf_Eval_Competition_Configurator | Disabled when the competition is not used. When the competition is used, the tab sets the targets of the gap between the model shelf prices and the competition ones, at a product x store level and on average. |
The user inputs are stored to be aggregated in the following step with the rest of the data. The problem tables of the Optimization Engine use the configuration input to define the optimization goals.
The most common small change is updating the default values.
It is also quite common to change these logics by adding or modifying the constraints and objectives in the problem; for example, adding targets at some levels or setting thresholds to keep some values in check. These modifications are needed but not sufficient as the problem modeling itself must be changed to take them into account. It is possible to change the number of tabs in the Configuration step, but then the Model Class definition has to be modified too.
Results Step
The calculation logics are MD_Res_Calc_Run_Initialization, MD_Res_Calc_Run_Optimization, and MD_Res_Calc_PrepareResults. There are four tabs: Impact, Details, Glassbox, and Evaluation.
Calculation: Run Initialization
The logic is MD_Res_Calc_Run_Initialization.
The goal of this calculation is to create a simulation whose results will be used to initialize the optimization run in the next calculation. To do so, we need to create a Problem Description that details the structure of the problem to be solved by the Optimization Engine (OE) and to provide endpoints for the OE to get the data of the problem. The previous steps change the problem by altering its scope and objectives, and the data is fed directly to the OE through the model tables.
This step consists of:
Validation of the elasticity model.
Data manipulation to prepare the last tables needed by the OE. These logics are prefixed by “Create_” and create the model tables prefixed by “Problem_” that act as endpoints for the OE. Be careful: their names follow a strict format. Each endpoint must be named Problem_nameOfTheSpace_nameOfTheScope, matching the names present in ProblemDescription.groovy, and must return the corresponding data. The library function problemTable automatically creates such a well-named problem table. The way the OE reads data from these endpoints highlights the need for a well-thought-out Scope step: creating tables of the needed data, already computed and aggregated, requires knowing where the needed data is and how it must be transformed. That is why it is normal to refactor and improve the “Create_” elements during the development of ProblemDescription.groovy.
Run.groovy element – Contains the code that handles the problem description. It takes the description of the problem and the advanced parameters user inputs, and triggers the simulation job thanks to model.startJobTriggerCalculation. The run will create tables prefixed by “Initialization”.
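As a sketch, an endpoint element might look like the following; the space and scope names, the row contents, and the problemTable signature (and its placement in MD_Lib) are assumptions made for this example, only the Problem_nameOfTheSpace_nameOfTheScope naming rule comes from this document:

```groovy
// Hypothetical "Create_Price_Product" element: materializes the endpoint table
// Problem_Price_Product, whose name must match the space ("Price") and scope
// ("Product") declared in ProblemDescription.groovy.
// The problemTable helper call below is an assumed signature, for illustration.
def rows = [
    [product: "SKU-1", min_price: 9.99G, max_price: 19.99G],
    [product: "SKU-2", min_price: 4.49G, max_price: 8.99G],
]
libs.MD_Lib.TablesUtils.problemTable(model, "Price", "Product", rows)
```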
The problem description is used to configure the instantiation of a job running an OE. The OE has to have access to the correct endpoints to get the data and to know where to write back the results when the computation is finished.
The Groovy code does some preparation work: it creates the Problem_nameOfTheSpace_nameOfTheScope tables, i.e. the “Create_” elements described above that prepare the last tables needed by the OE and act as its endpoints, with names following the strict format.
At the end of its run, the OE will write a set of model tables containing its results. This writing is done directly by the OE job and is not related to a Groovy logic.
The tables prefixed by “Results_” present the state of the objectives and constraints at the end of the optimization.
The tables prefixed by “Simulation_” present the value of computed variables marked as exposed in the description, typically including values of interest.
Any modification of the problem modeling and type of constraints to apply or objectives to reach might imply a change or creation of some problem tables.
In some cases, it could be useful to change the OE image and/or the OE tag that the job trigger refers to. Their values are in the element Run.groovy.
Calculation: Run Optimization
The logic is MD_Res_Calc_Run_Optimization.
The goal of this calculation is to create a simulation and an optimization run and to retrieve their results. To do so, we use a Problem Description that details the structure of the problem to be solved by the Optimization Engine and provides endpoints for the OE to get the data of the problem. The previous steps change the problem by altering its scope and objectives, and the data is fed directly to the OE through the model tables.
This step consists of:
Create_Global_All.groovy element – Prepares the last table needed by the OE based on the results of the previous Initialization calculation.
GlassboxConfig.groovy element – Parses the problem description to automate the post-processing.
Run.groovy element – Contains the code that handles the problem description. It takes the description of the problem and the advanced parameters user inputs, and triggers the two jobs thanks to model.startJobTriggerCalculation. Each run will return prefixed tables of similar structures. The jobs will run in parallel. The first one is the optimization itself and its outputs are prefixed by “Optimized”. The second one is a simulation: it simulates the first state of the optimization and will be a reference to compare before/after values in the results dashboards. Its outputs are prefixed by “Current”. The job type (optimization vs. simulation) is indicated by the input parameters of the model.startJobTriggerCalculation function.
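The two runs triggered by Run.groovy can be sketched as follows; model.startJobTriggerCalculation and the Optimized/Current prefixes are named in this document, but the parameter names and values passed to it here are illustrative assumptions:

```groovy
// Hypothetical sketch of the two parallel jobs triggered by Run.groovy.
// Parameter shapes are assumptions; buildProblemDescription() is an assumed helper.
def problemDescription = buildProblemDescription()

// Job 1: the optimization itself; its output tables are prefixed "Optimized".
model.startJobTriggerCalculation("optimization", [
    problem     : problemDescription,
    outputPrefix: "Optimized",
])

// Job 2: a simulation of the initial state, used as the before/after reference
// in the results dashboards; its output tables are prefixed "Current".
model.startJobTriggerCalculation("simulation", [
    problem     : problemDescription,
    outputPrefix: "Current",
])
```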
Once the problem description is created, it is used to trigger the instantiation of a job running an OE configured by this file. The OE has to have access to the correct endpoints to get the data and to know where to write back the results when the computation is finished.
The Groovy code does some preparation work. It creates Problem_Global_All table – data manipulation to prepare the last table needed by the OE.
A Groovy element also reads the problem description to retrieve a list of parameters used during the postprocessing step to reformat the Glassbox data.
At the end of its run, the OE will write a set of model tables containing its results and the Glassbox information needed to understand why this solution was used. This writing is done directly by the two OE jobs, simulation and optimization, and is not related to a Groovy logic.
The Glassbox table provides optimization indicators for each pair of instantiated value finder – criterion. The simulation job does not create any Glassbox table.
The tables prefixed by “Results_” present the state of the objectives and constraints at the end of the optimization.
The tables prefixed by “Solution_” present the raw values that the system was meant to find (declared as Value_Finder in the Problem Description). The simulation job does not create any Solution table.
The tables prefixed by “Simulation_” present the value of computed variables marked as exposed in the description, typically including values of interest such as forecasted quantities.
In some cases, it could be useful to change the OE image and/or the OE tag that the job trigger refers to. Their values are in the element Run.groovy.
Calculation: Prepare Results
The logic is MD_Res_Calc_PrepareResults.
This calculation retrieves the outputs of the Run Optimization logic and reformats them to provide tables that can be used to show user-friendly optimization results. Each element stores one model table or a group of similar tables.
Create_Glassbox_ logics calculate aggregated metrics on the optimization agents criteria and value finders.
Various other tables are created to make the calculation of the dashboards of the Results step faster.
This calculation writes a collection of tables:
GlassboxVF_ tables – Their names are built as GlassboxVF_NameOfTheSpace_NameOfTheValueFinder; there is one table per value finder key (i.e., type of value finder). These tables store the overall values of each value finder.
GlassboxCriteria_ tables – Their names are built as GlassboxCriteria_NameOfTheSpace_NameOfTheCriterion; there is one table per criterion key (i.e., type of criterion). These tables store the overall values of each criterion.
Glassbox_AggregatedMetrics table – Summarizes the global interaction indicators between each value finder key and each criterion key.
Glassbox_VFs_by_Key table – Summarizes the global overall indicators of each value finder key.
Glassbox_Criteria_by_Key table – Summarizes the global overall indicators of each criterion key.
stock_coverage table – Calculates the stock coverage of each product Pareto category, for each period of time.
historical table – Represents the state for all periods of time before the optimization.
forecasted table – Has the same structure as the historical table, but provides the optimized values for all future periods of time.
details_product_store_period table – Contains all the information at product x store x period level for all periods of time, including more extended and unit values than there are in the historical and forecasted tables. It is used as the main source of data for most of the Results step charts and tables.
impact_competition_positioning table – Gives the comparison metrics between the competition prices and the optimized prices.
If the problem description has been changed, the Create_Forecasted element, which refers to many of the tables created by the OE, may change too.
The other most common reason to change the logic is to reformat some data to ease the work of providing charts in the Result step tabs.
Impact Tab
The logic is MD_Res_Eval_Impact.
This tab exposes the results of the OE execution in an HTML summary and some complex graphs, to analyze the forecasted values. The data are filtered according to the user entries set by MD_Res_Eval_Filter_Configurator. This filter configurator is shared by both Impact and Details tabs.
Add, modify, or remove visualisations. This step is one of the most straightforward ones and its modification should not impact the previous steps.
Details Tab
The logic is MD_Res_Eval_Details.
This tab clearly shows the output data of the optimization. This way, the user can see the impact of the optimization on every adjusted value. The data are filtered according to the user entries set by MD_Res_Eval_Filter_Configurator. This filter configurator is shared by both Impact and Details tabs.
Add, modify, or remove table outputs. In practice, if the problem description changes, this tab should provide tables to reflect the changes in the variables and constraints. The tables displayed here should allow the end user to access any useful information related to the optimization.
Glassbox and Drivers Tabs
The logics are MD_Res_Eval_Glassbox, MD_Res_Eval_Drivers, and MD_Res_Eval_Drivers_Configurator.
These tabs expose the technical state of the OE execution at the end of the process. They are a development tool to help tune the model. They are only a call to the Optimization Common Library tools. A previous call to the same library should have been run from a calculation (refer to the Prepare Results calculation paragraph).
Evaluation Tab
The evaluation is used to access model results from outside of the model itself, for example in another logic. The first step is to call api.model("ModelName") to get the model, then to call the evaluate function on it to retrieve an answer depending on the nature of the given parameters. The code needed to get these results is:
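A sketch of such an evaluation call; api.model and the evaluate function are named in this document, but the model name "ModelName" and the exact parameter-map shape are illustrative assumptions:

```groovy
// Retrieve the deployed model by name ("ModelName" is a placeholder).
def model = api.model("ModelName")

// Evaluate with the optional keys; the map shape is an assumption made for
// this example. Omitting a key would return all rows matching the others.
def result = model.evaluate([
    product          : "SKU-1",
    period_start_date: "2024-01-01",
    store            : "STORE-42",
])
// result is a matrix with columns product, period_start_date, store, and
// unit_shelf_price, filtered on the provided values.
```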
The product ID, Period Start Date, and Store ID are all optional keys. The output of the evaluation is a result matrix with columns product, period_start_date, store, and unit_shelf_price, filtered on the values provided by the input parameters.