
Take the following steps to configure a model:

Create a Model Based on Negotiation Guidance Model Class

Go to Optimization > Models (MO) and click the Add Model button at the top right.

A pop-up opens where you provide a name for your model and select the Model Class, which is Negotiation Guidance. A different Model Class would correspond to a different kind of optimization model.

You can also duplicate an existing model. In that case, the duplicate keeps all the inputs of the original model, but you have to rerun all the steps to get the outputs. Once you have copied a model, you can rename it by double-clicking the blank area next to its name/label.

⚠ Remember to rename the model before running it. You cannot change a model's name once it has been computed.

The same model class, Negotiation Guidance, can be used by many models, so use informative names that indicate the dataset and the calculation case.

Set the Scope of Transactions (Definition Step)

In the Definition step, you map the inputs and set the scope of the model. The user inputs are always on the left. By default, the inputs are pre-filled with values set during the deployment of the accelerator.

LEARN MORE: For explanations of the fields, click here and scroll to Definition Step.

  • Transaction source – Datamart or Data Source used to calculate the segments. It must fulfill the requirements listed in Installation (Optimization - Negotiation Guidance). Once it is provided, additional fields based on it appear:

    • Transaction Filter – Allows you to filter the data. We recommend that you define at least these filters:

      • Positive cost, revenue, and quantity.

      • Optimization target in a realistic range, for example between -0.1 and 1.

    • Customer Field

    • Product Field

    • Quantity Measure

    • Revenue Measure

    • Margin Measure

    • Optimization Target

    • Target Type

    • Weight Measure

    • List Price

The revenue, margin, and list price values are used to simulate the transactions when the optimization target value changes. If needed, create some calculated fields in the transaction source: revenue, margin, margin rate (i.e., margin/revenue).
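If the transaction source does not already contain such fields, the margin rate is simply the ratio of margin to revenue. The following is a minimal Python sketch of that relationship, only for illustration and with hypothetical names; in practice you would define the calculated field directly in the transaction source:

    def margin_rate(margin, revenue):
        # Margin rate = margin / revenue; undefined when revenue is 0.
        if revenue == 0:
            return None  # such rows should be excluded by the positive-revenue filter
        return margin / revenue

    # Example: a transaction with revenue 1200 and margin 180 has a margin rate of 0.15.
    print(margin_rate(180.0, 1200.0))  # 0.15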

Use the check-boxes to apply additional filters that ensure the calculations run without exceptions.

Once you apply the settings, the right panel provides:

  • Transactions in Scope – Data that are in the scope of the segmentation.

  • Filtered Out Transactions – Data that are filtered out by the defined transaction filter. ⚠️ Some data rows may appear neither in Transactions in Scope nor in Filtered Out Transactions. This happens when a row is excluded by the advanced filter but its value is null, so it is also excluded by the negation of the user filter. Consider the Filtered Out Transactions portlet as indicative information that may miss some rows.

You can then click the Continue button (top right), which takes you to the Analysis step.

Analyze Data in the Scope and Set Price Drivers Parameters (Analysis Step)

When you arrive at the Analysis step from the Definition step, the model first runs a calculation to prepare the data. It can take a few minutes, depending on the size of the transaction source. The goal of this calculation is to format and save the data that will be used in the next steps of the model. Two tables are created: Transactions and Profile. A model's tables are always accessible through the menu in the top-right corner, but you usually do not need to access them this way; all the needed information is provided directly in the sections of the model.

Once the calculation has run, two tabs appear: Data Profile and Price Drivers Setup.

Data Profile

This tab is mainly a dashboard made of three portlets:

  • Scope Summary – This bar chart shows how much data is in the scope of the segmentation and how much is filtered out, across different measures, from the total profit to the number of transactions.

  • Details for all Fields – This table is a summary of all the mapped fields and the dimensions of the transaction source, taking into account the filtered data in the scope. For each field, you get the minimum and maximum value, the number of nulls, the number of distinct values (cardinality), and information about the field's type and owner. It is useful for checking whether there are nulls in fields that you want to use as segmentation levels, or whether some cardinalities are too high for a segmentation.

  • Distinct Values – This portlet is optional. It is created if you enter a value in the left user input Show distinct values for and apply the settings. It allows you to examine any dimension of the data scope in more depth, to validate whether you want to use it for the segmentation.

Price Drivers Setup

In this tab, you choose the fields on which the price drivers are calculated. The price drivers evaluate how important each feature of the data is for forecasting the optimization target (generally the margin percentage), measure the interactions among all features, detect hierarchies, and provide dimension recommendations for the segmentation. These values and recommendations then help you define which dimensions to use as segmentation levels and in which order. They are only a help: you can use any dimension in the segmentation even if the related price driver has not been calculated. The more features you choose and the higher their cardinality, the longer the calculation takes. By default, all dimensions with a cardinality between 2 and 30 are pre-selected (excluding non-dimensional features).
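As an illustration of the default pre-selection rule, a dimension qualifies when its number of distinct values lies between 2 and 30. This is only a hedged sketch of the rule with hypothetical column names, not the accelerator's actual code:

    # Sketch: keep dimensions whose cardinality is between 2 and 30.
    transactions = [
        {"Country": "DE", "ProductGroup": "CG",    "Currency": "EUR"},
        {"Country": "FR", "ProductGroup": "CG",    "Currency": "EUR"},
        {"Country": "DE", "ProductGroup": "PUMPS", "Currency": "EUR"},
    ]

    def preselected_dimensions(rows, min_card=2, max_card=30):
        selected = []
        for dim in rows[0].keys():
            cardinality = len({row[dim] for row in rows})
            if min_card <= cardinality <= max_card:
                selected.append(dim)
        return selected

    print(preselected_dimensions(transactions))  # ['Country', 'ProductGroup'] – Currency is constant (cardinality 1)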

Once the choice is made, click Continue. The next step, Configuration, will first run the calculation of price drivers and then provide the next section.

Configure the Segmentation (Configuration Step)

When you arrive at the Configuration step from the Analysis step, the model first runs a calculation to get the importance of the price drivers, along with their interactions with each other and their hierarchies, and recommendations of dimensions for the segmentation. In this step, you can now set the parameters of the segmentation itself.

Select Segmentation Dimensions

The first section, Select segmentation dimensions, is a table of the dimensions available for the segmentation. The rank column displays the result of the Segmentation Dimensions Recommendation portlet. If a dimension was not selected in the Price Drivers Setup (Analysis step), its rank is null. For business reasons, you may want to, and can, use other dimensions. Check all the fields that you want to use for the segmentation. You can also drag and drop the rows of the input matrix to define the order of the segmentation levels. You must select at least one and at most twenty levels of segmentation. Note also that dimensions with null values cannot be selected as segmentation levels (to avoid issues at later stages).

The Segmentation Dimensions Recommendation portlet is available in the dashboard. The segmentation dimensions recommendation is based on:

  • Feature importance – displayed in the Feature Importance and Percent Importance portlets.

  • Feature interactions – displayed in the Feature Interactions and Interactions between Features portlets.

  • Hierarchies – displayed in the portlet of the same name.

The first portlets, Feature Importance and Percent Importance, are related to the importance of the features. Importance is a measure of how good a feature is at predicting the optimization target (defined in the Definition step). Simply put, the higher the importance, the more likely the feature is to provide a good segmentation.

The importance is determined through feature permutation and is further adjusted for segmentation purposes. Its inherent randomness is controlled, so that the feature importance remains unchanged when it is recalculated with the same features selected. To improve segmentation accuracy, natural noise is eliminated by assigning a value of 0 to features that lack real importance. Additionally, the values are adjusted to favor features with lower cardinality for segmentation.
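The following is a minimal, self-contained sketch of the permutation-importance idea described above. It is not the accelerator's implementation; the model, scoring function, noise floor, and cardinality adjustment are simplified assumptions used only to illustrate the principle: the drop in score observed when a feature's values are shuffled is taken as its importance, small drops are treated as noise and set to 0, and the result is scaled down for high-cardinality features.

    import random

    random.seed(0)  # randomness is fixed so repeated runs with the same features give the same result

    def permutation_importance(predict, score, rows, target, features, noise_floor=0.01):
        # Importance of a feature = drop in score when that feature's values are shuffled.
        baseline = score(predict(rows), target)
        importances = {}
        for feature in features:
            shuffled = [dict(row) for row in rows]
            values = [row[feature] for row in shuffled]
            random.shuffle(values)
            for row, value in zip(shuffled, values):
                row[feature] = value
            drop = baseline - score(predict(shuffled), target)
            if drop < noise_floor:
                drop = 0.0  # natural noise: the feature has no real importance
            cardinality = len({row[feature] for row in rows})
            importances[feature] = drop / (1 + cardinality / 30)  # favor low-cardinality features
        return importances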

The importance measure is highly dependent on the features selected in the Analysis step.

What does this mean?

For example, if you selected only two features, both might show high importance, yet neither might be a good candidate for segmentation. A measure of the quality of the importance measure, called Explained Variance, is available in the Price Drivers - Relative Importance portlet (in the Price Drivers - Feature Importance portlet figure, the Explained Variance is 0.71). Numerical features can be selected in the Analysis step but cannot be used for the segmentation; they are therefore displayed differently in the Price Drivers - Feature Importance portlet and are not displayed in the Price Drivers - Relative Importance portlet.

LEARN MORE: To know more about features, click here.

Segmentation Thresholds

The second section, Segmentation Thresholds, contains three minimum values. The segmentation tree only builds nodes that satisfy all three thresholds.

Elasticity

The third section, Elasticity, lets you choose the elasticity model, either Sigmoidal or Exponential. This defines the kind of elasticity function that will be fitted to each segment's data to get the elasticity parameters. There is also a checkbox Calculate metrics based on elasticity. If it is checked, the next step calculates not only the elasticity function but also the projected quantity, revenue, and margin if the optimal target metric value is used.

Two parameters allow you to skip the elasticity calculation for segments that are not of interest or are too large:

  • Min depth of leaves for elasticity calculation – If the segment is more than this number of levels above the leaf nodes, the elasticity is not calculated for it.

  • Max #Transactions in segment for elasticity calculation – If the segment contains more than this number of transactions, the elasticity is not calculated for it.

LEARN MORE: To know more about elasticity models, click here.

Click the Continue button (top right) to go to the Segmentation step.

Set up Optimization Parameters (Segmentation Step)

When you arrive at the Segmentation step from the Configuration step, the model first runs a calculation to define the segmentation tree and all the segments. It can take a couple of minutes, depending on the size of the source data. This step consists of three tabs: Tree View, Indicators, and Optimization Setup.

Tree View

This is a dynamic view of the segmentation tree. You can expand and collapse it and get information about any segment.

For each segment, i.e. node of the tree, the following information is directly given: the name of the segment (here S000002), the last level of segmentation and the value it belongs to (CG), the number of transactions in the segment (here 154), the average value of the optimization metric in the segment, and the total revenue represented by the segment.

LEARN MORE: For more details on segments, click here.

If you click a segment, more information is provided in the right panel.

In addition to the information you already have in the tree view, you get some other metrics of the segment. In the segment's Transactions section, you also have access to a sample of the input data present in this segment, two histograms, and a Quantity Chart. The first histogram is called Distribution.

The blue bars represent the number of transactions for each optimization metric value. In this example, four transactions were done with a margin percentage between 16.3% and 16.4%. The green curve is the cumulative value: 142 transactions were done with a margin percentage below 16.4%. The black curve is a normal distribution fitted to the data; here the average value is 15% and the normal curve does not fit the segment well.

The Quantity Chart displays one dot for each transaction of the segment.

The dots are placed on the X axis according to the quantity of the transaction and on the Y axis according to the optimization target of the transaction, here the margin percentage. Note that for large segments, only 500 transactions are displayed on the chart.

The second histogram is called Optimal Target Chart.

The histogram is the same as in the previous chart. On top of it, the fitted elasticity curve is displayed in black. Expected revenue and expected profit curves are also displayed. They are shown as non-dimensional values, meaning that they represent an index rather than an absolute value; this is why the profit curve can be higher than the revenue one. Their shapes are important: they show the maximum of each curve (i.e., the optimal target metric value in terms of profit or revenue) and how much the metric decreases if the target metric is not at its optimum. The vertical line shows the optimal target metric value based on a mix of profit and revenue: the profit optimum carries 2/3 of the weight and the revenue optimum 1/3.
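As a rough illustration of how such a blended optimum can be found (a hedged sketch using hypothetical index values, not the product's internal formula), the vertical line corresponds to the target value maximizing a 2/3–1/3 weighted combination of the profit and revenue indexes:

    # Sketch: pick the target value that maximizes 2/3 * profit index + 1/3 * revenue index.
    # `candidates` is a list of (target_value, profit_index, revenue_index) points along the curves.
    def blended_optimum(candidates, profit_weight=2/3, revenue_weight=1/3):
        best_target, best_score = None, float("-inf")
        for target_value, profit_index, revenue_index in candidates:
            score = profit_weight * profit_index + revenue_weight * revenue_index
            if score > best_score:
                best_target, best_score = target_value, score
        return best_target

    # Hypothetical example: margin-percentage candidates with their non-dimensional indexes.
    print(blended_optimum([(0.14, 0.95, 1.00), (0.16, 1.00, 0.98), (0.18, 0.97, 0.93)]))  # 0.16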

⚠️ If the next step has run, its outputs are also displayed in this tree view. Read below for more details.

Indicators

This tab provides metrics and data that allow you to evaluate the relevance of the segmentation results. There are five portlets:

LEARN MORE: For portlets and their explanations, click here.

  • Details by segment

  • Segmentation overview

  • Fitting of Elasticity

  • Elasticity fit chart

  • Elasticity fit by segment

Optimization Setup

The Optimization Setup tab allows you to define how an optimum is provided for each segment. You define three global percentiles. It is possible to override these global values for some segments (see below).

How the Percentile Values Are Used?

Three percentiles and four strategy coefficients define the optimization strategy.

The percentile values are used to split each segment among the optimization target values. For instance, a floor percentile of 30 means that 30% of the optimization target values in each segment are considered “below the floor”. The floor and the ceiling percentiles are required; the target percentile is optional: if it is not provided, a specific value is calculated for each segment, based on the segment score (see below).

The strategy coefficients are required. They indicate how much the optimization target values move toward the next threshold in the optimization calculation. The optimization based on the percentiles is performed as follows (see the sketch after this list):

  • If a transaction was done with an optimization metric lower than the floor percentile value, it is increased toward the floor percentile value, by the amount defined by the first strategy coefficient.

  • If a transaction was done with an optimization metric between the floor percentile value and the target percentile value, it is increased toward the target percentile value, by the amount defined by the second strategy coefficient.

  • If a transaction was done with an optimization metric between the target percentile value and the ceiling percentile value, it is decreased toward the target percentile value, by the amount defined by the third strategy coefficient.

  • If a transaction was done with an optimization metric higher than the ceiling percentile value, it is decreased toward the ceiling percentile value, by the amount defined by the fourth strategy coefficient.
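The following minimal sketch (a hypothetical helper with simplified percentile handling, not the accelerator's actual code) illustrates how a transaction's optimization metric would be moved according to the four rules above:

    def optimized_metric(value, floor, target, ceiling, coefficients):
        # `coefficients` are the four strategy coefficients in [0, 1];
        # 1 moves the value all the way to the threshold, 0 leaves it unchanged.
        c1, c2, c3, c4 = coefficients
        if value < floor:
            return value + c1 * (floor - value)      # pushed up toward the floor
        if value < target:
            return value + c2 * (target - value)     # pushed up toward the target
        if value < ceiling:
            return value - c3 * (value - target)     # pulled down toward the target
        return value - c4 * (value - ceiling)        # pulled down toward the ceiling

    # Hypothetical segment with floor/target/ceiling margin percentages of 10%, 15%, 20%:
    print(optimized_metric(0.08, 0.10, 0.15, 0.20, (0.5, 0.5, 0.5, 0.5)))  # 0.09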

LEARN MORE: For score calculations, click here.

Override Percentiles for Selected Segments

If the global percentiles should not be used for a specific segment, you can override them by checking the box Use percentile values from parameters table when present. Once it is checked, the values used are those provided in the Parameter Table called Segments. To access it, use the three dots at the top right of the model and go to the Parameter Tables tab.

Any field of this table is editable as long as no calculation is in progress.

Be careful not to change the values defining the segments' names and levels; edit only the subsequent fields.

Manage the Segmentation Results (Results Step)

When you arrive at the Results step from the Segmentation step, the model first runs the optimization calculation for each segment; this may take a few minutes, depending on the size of the model. Then four tabs are provided: Impact, Tree View, Recommendations, and Evaluation.

Impact

This dashboard displays the optimization result for the floor-target-ceiling strategy. Refer to How the Percentile Values Are Used (https://pricefx.atlassian.net/wiki/spaces/ACC/pages/4599284034#How-the-Percentile-Values-Are-Used%3F) for more explanations.

The left panel allows you to choose the granularity of the bar charts and the overall realization dashboard.

  • Segmentation level – All the transactions of the scope are aggregated at this level to provide the bar charts of the projected revenue and profit. The global overview and the profit potential matrix are not impacted by this input value.

  • Overall Realization % – The optimization strategy is a best-case scenario. With this input, you can simulate that only a certain percentage of the optimization target is reached (see the sketch below). The value must be between 0 and 100 and impacts all the dashboard portlets.
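A minimal sketch of how a realization percentage could scale the projected uplift (an assumption for illustration only, with hypothetical names; the dashboards use their own internal calculation):

    # Sketch: with an overall realization of r %, only r % of the projected uplift is applied.
    def realized_value(actual, optimized, realization_pct):
        assert 0 <= realization_pct <= 100
        return actual + (realization_pct / 100.0) * (optimized - actual)

    print(realized_value(actual=1_000_000, optimized=1_080_000, realization_pct=50))  # 1040000.0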

Tree View

This view is the same as the Tree View described above (https://pricefx.atlassian.net/wiki/spaces/ACC/pages/4599284034#Tree-View), but now more results are provided when you click a segment. The Price Recommendation section provides the score, target percentile, values of the optimization metric for the given percentiles, and projections of metrics based on the optimization strategy defined in the previous paragraph.

Recommendations

This tab provides the segments table with all the optimized values for each segment. In particular, optimized values are provided for the revenue, quantity, and optimization metric.

LEARN MORE: For a detailed list of the fields and their explanations, click here.

Evaluation

This tab simulates an evaluation call of the model, which can be invoked from any other part of the partition (such as price lists or quotes). It is mainly intended for advanced users to test the logics or to verify what the inputs and outputs of the model evaluation are.

LEARN MORE: To learn how to do this, click here and navigate all the way down to From Another Module.
