This section answers the following questions:
- Which dimensions should be defined, and which should not?
- In which space should each criterion be situated?
- How should aggregating computations be defined?
Choosing Dimensions and Spaces
Main idea: dimensions are the non-contingent features of the space required to situate a variable or a criterion in the problem.
A critical step of problem modeling is the definition of the dimensions and of the spaces they form.
Most of the time, dimensions are obvious: they are simply what defines the levers and criteria. For instance, if your levers are product prices and your criteria are margin targets per product family and customer, then your dimensions are product, product family, and customer. There will be a 1-dimensional space product and a 2-dimensional space product family x customer. As a side note, product and product family are dimensions that share the same axis (products); they can be seen as parallel, and you should never define a product x product family space, as it would make no sense.
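As a minimal sketch of the idea above, spaces can be pictured as sets of coordinate tuples over the chosen dimensions. The data and names (products, families, customers) below are purely illustrative, not part of the engine's API; the last lines show why crossing two parallel dimensions produces a mostly empty space.

```python
# Illustrative sketch only: modeling spaces as sets of coordinate tuples.

# Levers: one list price per product -> a 1-dimensional "product" space.
products = ["P1", "P2", "P3"]

# product -> product family mapping: the two dimensions share the product axis,
# so family is fully determined by product (it is "parallel" to product).
families = {"P1": "F1", "P2": "F1", "P3": "F2"}
customers = ["C1", "C2"]

product_space = [(p,) for p in products]

# Criteria: margin targets -> a 2-dimensional "product family x customer" space.
family_customer_space = [(f, c) for f in sorted(set(families.values()))
                         for c in customers]

# Crossing product with product family is meaningless: only the cells where
# the family matches the product are valid, the rest are holes.
valid_cells = [(p, f) for p in products for f in set(families.values())
               if families[p] == f]
print(len(valid_cells), "valid cells out of", len(products) * 2)  # 3 out of 6
```

Half the hypothetical product x product family space is empty, which is exactly the "holey cheese" situation described below for contingent features.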
In some cases, such as make-to-order projects, identifying the correct dimensions can get complicated. For instance, in this problem derived from a previous use case (only accessible to PFX employees), prices are not assigned one-to-one to products but to several products at once, grouped by the same Category, Assortment, and Packaging. So these are dimensions, and we do not need a product dimension at this point. But prices are also differentiated by Brand features. So should Brand also be a dimension? The answer is no, because in this example the Brand feature depends on the underlying products in the Category x Assortment x Packaging space. In other words, Brand is contingent: for a given Category x Assortment x Packaging triplet, only one Brand value is possible. By contrast, for a given Category x Assortment couple, every Packaging value is possible: the value of Packaging does not depend on the other dimensions.
If you make Brand a dimension, you will end up with a "holey cheese" space that is difficult to navigate, and the system will be far larger than needed. Instead, the Brand feature should be used to define a scope within the Category x Assortment x Packaging space.
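The contingency test described above can be sketched mechanically: a feature should stay a scope, not become a dimension, if each point of the existing space admits exactly one value for it. The records, field names, and helper below are hypothetical, purely to illustrate the check.

```python
# Illustrative sketch of the contingency test; data and names are invented.
from collections import defaultdict

records = [
    {"category": "Soda", "assortment": "A1", "packaging": "Can",    "brand": "B1"},
    {"category": "Soda", "assortment": "A1", "packaging": "Bottle", "brand": "B1"},
    {"category": "Soda", "assortment": "A2", "packaging": "Can",    "brand": "B2"},
    {"category": "Beer", "assortment": "A1", "packaging": "Can",    "brand": "B3"},
]

def is_contingent(records, space_dims, feature):
    """True if `feature` has a single possible value per point of the space."""
    values = defaultdict(set)
    for r in records:
        point = tuple(r[d] for d in space_dims)
        values[point].add(r[feature])
    return all(len(v) == 1 for v in values.values())

dims = ("category", "assortment", "packaging")
# Brand is contingent on the triplet -> use it as a scope, not a dimension.
print(is_contingent(records, dims, "brand"))                              # True
# Packaging is NOT determined by (category, assortment) -> it is a dimension.
print(is_contingent(records, ("category", "assortment"), "packaging"))    # False
```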
A ground rule could be: dimensions are the non-contingent features of the space required to situate a variable or a criterion in the problem.
The Granularity of Levers and Criteria
Main idea: when possible, situate levers and criteria in the same space, in other words, place them at the same level of granularity.
Usually, we start by situating levers and criteria to identify the dimensions of the problem and the relevant spaces.
Levers are often imposed by the customer (typically some kind of prices and discounts): if the customer wants discounts at a product x customer family level, then so be it. Criteria, however, are often described more vaguely. It is up to the optimization engine configuration to translate the customer's needs and requirements into actual criteria. A good practice is to simplify the work at the future agents' local level, i.e. at the level of the levers. What is simple for a Value Finder? Having few neighbors (ideally just one), and sharing these neighbors with as few other Value Finders as possible (ideally none).
The best way to achieve this is to situate levers and criteria in the same space, in other words, at the same level of granularity. For instance, one of our typical cases is turnover stability when a customer wants to streamline its pricing: the customer wants to change list prices (situated in a product space) and discounts (situated in a product family x customer group space) but wants its global turnover to stay the same. Where should we situate the equality constraint between current and historical turnover?
- Placing this constraint at the highest granularity, in what would be the global space, is the most straightforward solution. But this single criterion would constrain all the prices and all the discounts, increasing their interference and risking overconstraining many of them if there are other criteria elsewhere in the system.
- It can also cause accuracy problems if the minimum amplitude of the Value Finders is not small enough: the more Value Finders there are, the smaller it should be to ensure precision, since the errors of all the local Value Finders accumulate in the global criterion.
- Placing the constraint at the lowest granularity, in product x customer spaces, would create many agents and give the Value Finders many neighbors. For instance, each discount would have as many neighbors as there are (product, customer) pairs in its product family x customer group. This can be a lot, and these neighbors may be antinomic at times, so convergence would take longer.
- Placing the constraint at the same granularity as the discounts is the best solution here: it ensures that each discount has only one neighbor, and that each constraint is influenced by only one discount and some prices. Discounts would then always be able to satisfy their constraint whatever the prices do.
The Multi-Agent Optimization Engine should be able to converge no matter which option we choose, but convergence will be smoother with the third one.
Of course, in some cases these three propositions are not exactly equivalent from a domain point of view, and there are often other considerations to weigh when making this choice. This is the whole question of understanding the customer's needs.
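The three placements above can be compared with a back-of-the-envelope count of constraint neighbors per discount Value Finder. The sizes below are invented for illustration; the engine itself is not modeled.

```python
# Illustrative neighbor counts for the three constraint placements.
n_products_per_family = 50   # hypothetical sizes
n_customers_per_group = 20
n_families, n_groups = 10, 5

n_discounts = n_families * n_groups  # one discount lever per (family, group)

# Option 1: one global constraint. Each discount has a single neighbor, but
# that neighbor is shared by all n_discounts discounts (and all prices).
shared_by_global = n_discounts

# Option 2: constraints at the (product, customer) level. Each discount gets
# one neighbor per (product, customer) pair in its family x group cell.
neighbors_lowest = n_products_per_family * n_customers_per_group

# Option 3: constraints at the (family, group) level, the discounts' own space:
# exactly one neighbor per discount, shared with no other discount.
neighbors_same = 1

print(shared_by_global, neighbors_lowest, neighbors_same)  # 50 1000 1
```

Option 3 minimizes both the neighbor count and the sharing, which is why convergence is smoother there.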
Intermediate Variables
Main idea: large aggregating computations with lots of inputs and parameters should be avoided.
Once levers and criteria are situated, we often realize that a computed variable aggregates a large number of lower-granularity values. Large aggregating computations with many inputs and parameters should be avoided. Decomposing them into intermediate variables at intermediate granularity levels is preferable, for clarity, flexibility, and maintainability. If the system runs on multiple threads, this also improves performance: agents run in parallel, whereas a single Computation Agent with heavy processing over many inputs does not scale well. The trade-off is that the resulting multi-agent system will be larger, which could cause memory issues.
An example of such decomposition could be:
- If there are
- list prices in a product space
- margin targets in a product family x customer group space
- do not
  - use list prices and costs from the product space and volumes from the product x customer space directly to compute margins in the product family x customer group space with one large aggregating computation
- do
  - compute margins in the product x customer space from volumes in the same space and list prices and costs from the product space
  - introduce a product x customer group space to compute a margin that is simply the sum of the product x customer margins
  - sum margins from the product x customer group space to compute the margin in the corresponding product family x customer group space, where you apply the criteria
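The decomposition above can be sketched with plain dictionaries standing in for the engine's spaces. All prices, volumes, and mappings below are invented for illustration.

```python
# Illustrative sketch of the recommended decomposition; data is invented.
from collections import defaultdict

list_price = {"P1": 10.0, "P2": 8.0}        # product space
cost       = {"P1": 6.0,  "P2": 5.0}        # product space
volume     = {("P1", "C1"): 100, ("P1", "C2"): 50, ("P2", "C1"): 80}
family     = {"P1": "F1", "P2": "F1"}       # product -> product family
group      = {"C1": "G1", "C2": "G1"}       # customer -> customer group

# Step 1: margins in the product x customer space.
margin_pc = {(p, c): (list_price[p] - cost[p]) * v
             for (p, c), v in volume.items()}

# Step 2: intermediate product x customer group space (simple sums).
margin_pg = defaultdict(float)
for (p, c), m in margin_pc.items():
    margin_pg[(p, group[c])] += m

# Step 3: product family x customer group space, where the criteria apply.
margin_fg = defaultdict(float)
for (p, g), m in margin_pg.items():
    margin_fg[(family[p], g)] += m

print(dict(margin_fg))  # {('F1', 'G1'): 840.0}
```

Each step is small and local, so in the multi-agent system each intermediate sum can be handled by its own agent instead of one Computation Agent doing everything.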
In this sense, the simple case shown as an example in the Graphical Norm and Description File sections is bad practice, sorry about that.