Run Logic Test
This chapter describes how to execute the whole logic (all elements) in test mode. For test execution of smaller code snippets, use one of the Groovy Consoles.
Learn more about different types of Calculation logics in the product documentation.
Define Inputs for Test Run
To be able to run a logic test, you need to define the test inputs first.
Open logic.json and go to the Inputs tab.
If this is the first run, click the Generate button. This will populate the input fields for you to fill in.
You need to include the sku field. The targetDate field is also needed, but if you omit it, it is automatically set to today's date. Optionally, use these additional options (a sketch of how the inputs reach the logic follows this list):
Use Local Configurator – If enabled, it allows you to run the configurator logic locally, without making a call to the partition.
Keep values – If enabled, inputs that remained the same as in the last run keep their previously filled-in values.
Show hidden – If enabled, you can see hidden inputs.
Allow data modification (at the top) – By default, the test execution runs in read-only mode. If you want to enable write access, e.g. for api.addOrUpdate() or api.sendMail(), use this option (a sketch of such a write call follows below).
Preset (at the bottom) – You can save the inputs for future use by using the options Save preset as and Load preset.
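Once filled in, the test inputs are available inside the logic. A minimal Groovy sketch of an element reading them, assuming the standard input binding and the api.targetDate() accessor:

    // Test inputs defined on the Inputs tab arrive in the logic via the input binding.
    def sku = input.sku                 // the required sku test input
    def targetDate = api.targetDate()   // falls back to today's date when the input is omitted
    return "Testing " + sku + " as of " + targetDate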
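For illustration, a minimal Groovy sketch of write calls that only succeed when Allow data modification is enabled. The typeCode "PX", the attribute values, and the api.sendMail() argument order are assumptions; check the Groovy API reference for the exact signatures:

    // Fails in the default read-only test mode; enable "Allow data modification" first.
    // The typeCode "PX" and the attribute values are hypothetical.
    api.addOrUpdate("PX", [
            name       : "TestRecord",
            sku        : input.sku,
            attribute1 : 42
    ])

    // Also needs write access; the argument order (recipient, subject, body) is assumed.
    api.sendMail("user@example.com", "Logic test finished", "Record written for " + input.sku)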
Run Test
After you have defined the inputs, you can run the test:
Open the logic.json file.
Select the environment and partition.
Click the Test Logic button or press Alt+F5 on the keyboard.
You can run the test directly by pressing Alt+F5 from the logic.json editor as well as from the element editor. If you expand the PfxResult tool window, you can instantly display the result value of the current element.
For details on the Debug function, see Debug Logics (Experimental Feature). Note that this function is experimental and its use is limited.
View Test Results
When the test is completed, you get a list of executed elements and their values, traces and other details.
For each element, there are details displayed in the list on the left and in the tabs on the right side of the panel.
You can use the quick search field above the list to find a specific element.
Element details in the list on the left:
Test status is indicated by an icon in the first column:
Green – successful run
Red – error
Clicking the icon opens the element file; in case of a parse or runtime error the cursor is placed at the line/column where it occurred.
Result (Raw) – Provides the element’s raw (unformatted) return value.
Result – Provides the element’s return value with its formatting applied.
Duration – In the last column you get a visual hint of how long the element took to execute.
The duration summarized for all elements is listed on the last line, in the "__TRACE__" element. This helps you find potential performance issues during the development phase.
Element details in the tabs on the right:
Raw / Tree / Grid / Highchart – Several views are available for inspecting the element results: raw, as a tree, in a grid, or as a Highchart, depending on the type of the element. Only the relevant tabs are active for each element.
Warnings – Shows warnings for the given element (if any were raised).
Traces – Shows traces inserted by the Configuration Engineer for troubleshooting purposes (a sketch of how warnings and traces are raised follows this list).
Logs – Shows the part of the server log messages that relates to the run of the given logic.
Note: For this feature to work correctly, your local time must be the same as on the partition, because the server log filtering is based on the execution time of the logic.
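For reference, the content of these tabs is driven by the logic itself. A minimal Groovy element sketch with illustrative names and values; the api.trace() overload used here (element name, parameters, result) is an assumption:

    def margin = 0.12

    // Appears in the Warnings tab of this element.
    if (margin < 0.15) {
        api.addWarning("Margin below threshold for sku " + input.sku)
    }

    // Appears in the Traces tab.
    api.trace("MarginCheck", "threshold=0.15", margin)

    // Returning a ResultMatrix activates the Grid tab for this element.
    def matrix = api.newMatrix("SKU", "Margin")
    matrix.addRow(["SKU": input.sku, "Margin": margin])
    return matrix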
Tips:
If you keep the PfxResult tool window expanded, you can instantly display the full result value of each element while browsing the list (selecting elements with the mouse or using the arrow keys).
If you run the test for multiple products, you will have a separate Results tab for each of them.
You can save the test results in a JSON file using the Save results button at the bottom right. The file is then stored in the project tree on the left. Next time, e.g. when you work on a new version, you can save the results again and simply compare the two files to see whether the calculation has been impacted.