Choosing Dimensions and Spaces
Main idea: dimensions are the non-contingent features of the space required to situate a variable or a criterion in the problem.
A critical step of problem modeling is the definition of the dimensions and of the spaces they form.
Most of the time, dimensions are obvious: they simply are what defines levers and criteria. For instance, if your levers are product prices and your criteria are margin targets per product family and customer, then your dimensions are product, product family, and customer. There will be a 1-dimensional space product, and a 2-dimensional space product family x customer. As a side note, product and product family are dimensions that share the same axis (products); they can be seen as parallel, and you should never define a product x product family space, as it would make no sense.
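To make this concrete, here is a minimal sketch (in Python, not the engine's actual API) of how the dimensions and spaces of this example could be written down; the Dimension and Space classes and the axes_are_distinct check are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dimension:
    name: str
    axis: str  # dimensions sharing an axis are "parallel" and should never be combined

@dataclass(frozen=True)
class Space:
    dimensions: tuple  # tuple of Dimension

product        = Dimension("product", axis="products")
product_family = Dimension("product_family", axis="products")  # parallel to product
customer       = Dimension("customer", axis="customers")

# Levers (list prices) live in the 1-dimensional product space.
price_space = Space((product,))

# Criteria (margin targets) live in the 2-dimensional product family x customer space.
margin_target_space = Space((product_family, customer))

def axes_are_distinct(space: Space) -> bool:
    """A valid space never combines two parallel dimensions (same axis)."""
    axes = [d.axis for d in space.dimensions]
    return len(set(axes)) == len(axes)

print(axes_are_distinct(margin_target_space))          # True: valid space
print(axes_are_distinct(Space((product, product_family))))  # False: parallel dimensions
```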
In some cases, such as make-to-order projects, identifying the correct dimensions can get complicated. For instance, in this problem derived from a previous use-case (only accessible to PFX employees), prices are not assigned one-to-one to products but are assigned to several products at once, regrouped by the same Category, Assortment, and Packaging. So these are dimensions, and we do not need a product dimension at this point. But then, prices are also differentiated by Brand features. So should Brand also be a dimension? The answer is no, because in this example the Brand feature actually depends on the underlying products in the Category x Assortment x Packaging space. In other words, Brand is contingent: for a given Category x Assortment x Packaging triplet, only one Brand value is possible. Whereas for a Category x Assortment couple, every Packaging value is possible: the value of Packaging does not depend on the other dimensions.
If you make a Brand dimension, you will end up with a "holey cheese" space that is difficult to navigate, and the system will be way larger than needed. Instead, the Brand feature should be used to define a scope within the Category x Assortment x Packaging space.
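To decide whether a feature is contingent, it can help to test it directly on the input data. The following is a minimal sketch assuming the product catalogue is available as a pandas DataFrame with hypothetical column names; it checks whether each point of the Category x Assortment x Packaging space carries at most one Brand value.

```python
import pandas as pd

# Toy catalogue: one row per product (column names are hypothetical).
products = pd.DataFrame({
    "category":   ["paint", "paint", "paint", "varnish"],
    "assortment": ["basic", "basic", "premium", "basic"],
    "packaging":  ["1L", "5L", "1L", "1L"],
    "brand":      ["A", "A", "B", "A"],
})

# "brand" is contingent if each (category, assortment, packaging) triplet
# carries at most one brand value; if so, brand should be a scope, not a dimension.
brands_per_point = products.groupby(["category", "assortment", "packaging"])["brand"].nunique()
is_contingent = (brands_per_point <= 1).all()
print("brand is contingent:", is_contingent)  # True here: keep it out of the dimensions
```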
A ground rule could be: dimensions are the non-contingent features of the space required to situate a variable or a criterion in the problem.
The Granularity of Levers and Criteria
Main idea: situate levers and criteria in the same space, in other words, place them at the same level of granularity.
Usually, we start by situating levers and criteria in order to identify the dimensions of the problem and the relevant spaces.
Levers are often imposed by the customer (typically some kind of prices and discounts): if the customer wants discounts at a product x customer family level, then so be it. Criteria, however, are often more vaguely described. It is up to us, when configuring the optimization engine, to translate the customer's needs and requirements into actual criteria. A good practice is to seek to simplify the work at the future agents' local level, i.e. at the level of the levers. What is simple for a Value Finder? It is when it has few neighbors (ideally just one) and when these neighbors are shared with few other Value Finders (ideally zero).
The best way to achieve this is to situate levers and criteria in the same space, in other words to place them at the same level of granularity. For instance, one of our typical cases is turnover stability when a customer wants to streamline its pricing: the customer wants to change list prices (situated in a product space) and discounts (situated in a product family x customer group space) but wants its global turnover to stay the same. Where should we situate the constraint of equality between current and historical turnovers (turnover == historical_turnover)?
- Placing this constraint at the highest granularity, in what would be the root space, is the most straightforward solution. But this sole criterion would constrain all the prices and all the discounts, increasing their interferences and risking overconstraining many of them if there are other criteria elsewhere in the system.
- It can also cause accuracy problems if the minimum amplitude of the Value Finders is not small enough: the more Value Finders there are, the smaller it should be in order to ensure precision, since the errors of the local Value Finders accumulate in the global criterion.
- Placing the constraint at the lowest granularity, in product x customer spaces, would create lots of agents and give lots of neighbors to Value Finders. For instance, each discount would have as many neighbors as there are (product, customer) pairs in its product family x customer group cell. This can be a lot, and these neighbors may at times be antinomic, so convergence would take longer.
- Placing the constraint at the same granularity as the discounts is the best solution here, as it ensures that each discount has only one neighbor and that each constraint is influenced by only one discount and some prices. Discounts would then always be able to satisfy their constraint whatever the prices do.
The Multi-Agent Optimization Engine should be able to converge no matter which option we choose, but convergence will definitely be smoother with the third one.
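As a rough illustration, here is a back-of-the-envelope sketch (with hypothetical cardinalities, not taken from a real project) comparing how many neighbors are involved under each of the three placements above.

```python
# Hypothetical problem sizes.
n_products         = 2_000
n_product_families = 50
n_customers        = 10_000
n_customer_groups  = 100

n_discounts = n_product_families * n_customer_groups  # one discount lever per (family, group)

# Option 1: a single global turnover constraint shared by every price and discount.
levers_sharing_the_global_constraint = n_discounts + n_products

# Option 2: constraints at product x customer granularity; each discount neighbors
# every (product, customer) pair of its (family, group) cell.
pairs_per_cell = (n_products // n_product_families) * (n_customers // n_customer_groups)
neighbors_per_discount_lowest = pairs_per_cell

# Option 3: constraints at the discount granularity; one constraint per discount.
neighbors_per_discount_same = 1

print("option 1: levers sharing the single constraint:", levers_sharing_the_global_constraint)
print("option 2: constraint neighbors per discount:   ", neighbors_per_discount_lowest)
print("option 3: constraint neighbors per discount:   ", neighbors_per_discount_same)
```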
Of course, in some cases, these three propositions may not be exactly equivalent from a domain point of view, and there are often other considerations to weigh when making this choice. This is the whole question of understanding the customer's needs.
Intermediate Variables
Main idea: large aggregating computations with lots of inputs and parameters should be avoided.
Once levers and criteria are situated, we often realize that there is a computed variable that aggregates a large amount of lower-granularity values. Large aggregating computations with lots of inputs and parameters should be avoided. Decomposing these computations into intermediate variables at intermediate granularity levels is preferable, in order to gain in clarity, flexibility, and maintainability. If the system runs on multiple threads, this also improves performance: the intermediate agents run in parallel, whereas a single Computation Agent with lots of processing to do on its inputs does not scale well. The trade-off is that the resulting multi-agent system will be larger, which could cause some memory issues.
...
- If there are
- list prices in a product space
- margin targets in a product family x customer group space
- do not
- use list prices and costs in the product space and volumes in the product x customer space directly to compute margins in the product family x customer group space with a large aggregating computation
- do
- compute margins in the product x customer space from volume in the same space and list prices and costs from the product space
- introduce a product x customer group space to compute a margin that is simply the sum of the product x customer margins
- sum margins from product x customer group spaces to compute the margin in the corresponding product family x customer group space where you apply the criteria
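To illustrate the decomposition, here is a minimal sketch with pandas on toy data; the column names are hypothetical, and in the real system each step would be an intermediate Computation Agent rather than a DataFrame operation.

```python
import pandas as pd

prices = pd.DataFrame({  # product space: list prices and costs
    "product": ["p1", "p2", "p3"],
    "product_family": ["f1", "f1", "f2"],
    "list_price": [10.0, 12.0, 8.0],
    "cost": [6.0, 7.0, 5.0],
})
volumes = pd.DataFrame({  # product x customer space: volumes
    "product": ["p1", "p1", "p2", "p3"],
    "customer": ["c1", "c2", "c1", "c2"],
    "customer_group": ["g1", "g2", "g1", "g2"],
    "volume": [100, 50, 80, 60],
})

# Step 1: margins in the product x customer space.
m = volumes.merge(prices, on="product")
m["margin"] = (m["list_price"] - m["cost"]) * m["volume"]

# Step 2: intermediate margins in the product x customer group space
# (product_family is carried along, as it is contingent on product).
m_pg = m.groupby(["product", "product_family", "customer_group"], as_index=False)["margin"].sum()

# Step 3: margins in the product family x customer group space, where the criteria apply.
m_fg = m_pg.groupby(["product_family", "customer_group"], as_index=False)["margin"].sum()
print(m_fg)
```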
...