Results Description (Optimization - Price Waterfall)

Once a model has been run, the Results step contains the following tabs:

Impact Tab

This tab displays comparisons between projections based on the current pricing and projections based on the optimized pricing. Note that the “current” charts represent the initial state of the Optimization Engine, not historical values from the past: they show what the Optimization Engine predicts would happen if the historical prices and discounts were kept as they are. The dashboard contains eight portlets, described below.

Overview

The overview summarizes the difference between the projections from the current pricing and the projections from the optimized pricing: the global revenue and profit, as well as the counts of increased, stable, and decreased prices.

Overview portlet

Waterfall

The waterfall portlet displays the current and the optimized waterfall side by side. All values are extended values within the scope of the optimization. They are presented in a bar chart, so negative values such as discounts appear below the X-axis.

Waterfalls comparison

Revenue and Margin by Product and Customer Groups

These four portlets display the extended values of revenue and profit, aggregated by product groups and customer groups, so the end user can see where the optimization had a greater or lesser impact on the results.

Pocket Margin % vs. Volume

Two portlets display bubble charts, one with the current values and one with the optimized values. Each bubble represents a (product, customer) pair. The size of the bubble indicates the revenue; its position shows the sold quantity on the X-axis and the margin % on the Y-axis. A regression line shows the trend. The bubble of a given pair has the same color in both portlets.

Details Tab

The Details tab displays the results tables in a form that is easy for the user to interact with. Several aggregations are provided: by product, by customer, by product and customer, and by customer group. Each table provides values for all the fields calculated at that level of granularity. The values provided are the historical, current, and optimized ones, plus the delta between the current and the optimized values. If needed, you can export these tables to Excel.

For the record:

  • historical means the aggregated value coming from the source Datamart;

  • current means the initial state of the Optimization Engine;

  • optimized means the final state of the Optimization Engine.

Glassbox Tab

This tab’s target audience is Configuration Engineers, Business Analysts, and team members working on improving a new optimization model. The tab provides insights into how the Optimization Engine reached its final state. One needs to understand the main concepts of the Optimization Engine to benefit from this dashboard.

This dashboard contains the following portlets:

  • three portlets about criteria satisfaction (Satisfaction, Criteria Comparison, and Satisfaction by Criteria Type),

  • two bar charts about the interactions between value finders and criteria (Impact vs. Satisfaction and Impact vs. Influence),

  • two bubble charts about the value finders' influences and the criteria impacts,

  • two boxplot charts about the value finders' variations and two others about the initial movements of the value finders,

  • a portlet comparing the satisfaction of criteria between the current and optimized scenarios (in the form of a Sankey chart),

  • and optionally a chart of the evolution of the criteria agents during the optimization process.

A full documentation of the Glassbox charts is available in Glassbox Dashboards.

Set the Evolution of Criteria Satisfaction Option

Because it involves a large amount of data, the last portlet is created only if the “Profiling” option in the Advanced Parameters (Configuration step) is checked.

Influencers Tab

This tab’s target audience is Configuration Engineers, Business Analysts, and team members working on improving a new optimization model. One needs to understand the main concepts of the Optimization Engine to benefit from this dashboard.

The user can select a value finder and see its dimension coordinates. The chart displays information about the criteria related to the selected value finder. The complete documentation is in Value Finder - Criterion Influence.

Evaluation Tab

This tab simulates the evaluation logic that can be called from any other module. One portlet represents one visible element of the logic. The query, which can be issued from any other module of the partition, is:

api.model("myModelName").evaluate(
    "query_results",
    [
        product: "myProductId",
        customer: "myCustomerId",
        product_group: "myProductGroup",
        customer_group: "myCustomerGroup"
    ]
)

All keys in the second parameter are optional; the outputs depend on which keys are provided. See the details.
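For illustration, since every key in the second parameter is optional, a call may supply only a subset of them. The sketch below assumes a model named "PriceWaterfallModel" and placeholder values "P-100" and "Retail"; the actual model name, key values, and the structure of the returned outputs depend on your configuration.

```groovy
// Hedged sketch: evaluate the model with only some of the optional keys.
// "PriceWaterfallModel", "P-100", and "Retail" are placeholder values.
def result = api.model("PriceWaterfallModel").evaluate(
    "query_results",
    [
        product: "P-100",        // optional key
        customer_group: "Retail" // optional key; omitting keys changes which outputs are returned
    ]
)
```

Omitted keys (here, customer and product_group) simply narrow the evaluation to the coordinates that were provided.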