Common Performance Sprint

Performance should be perceived as one of the risk factors that needs to be properly observed and addressed from the beginning. This applies especially to new customers. Performance attention covers the following aspects:

  1. Integration performance (in case of large data volumes)

  2. Internal calculation performance (asynchronous: internal jobs, LPG calculation, pricelist calculation)

  3. Synchronous calculation performance (quote logic, agreement logic, Price Analyzer / dashboard)

  4. API call performance (in case PfX is being called)

  5. General UI performance (e.g. opening a quote, sorting a quote list, refreshing a dashboard)

Why do performance issues only pop up at the end of a project?

  1. Only at the end is the full logic ready (e.g. Pricelist logic).

  2. Often, we only get the full data set at the end.

  3. Sometimes performance issues are related to the volume of pricelists/LPGs/Quotes in the system. These are typically only created at the end (during UAT).


Why do performance issues pop up after the customer is business-live for a while?

  1. Data is continuously loaded into Pricefx, but old data is not archived (in fact, data is appended to the existing records).

  2. Sometimes performance issues are related to the volume of pricelists/LPGs/Quotes in the system, which grows over time. Here, too, we must think of archiving old data; typically, there is no data archiving user story in scope (a minimal retention sketch follows below).
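
The archiving point above usually boils down to agreeing on a retention window and moving everything older than the cutoff out of the live tables. Below is a minimal sketch of that decision rule, assuming a simple cutoff on a pricing date; the 24-month window and the field names are invented for illustration, and the real retention period must be agreed with the customer (ideally as an explicit user story):

```python
# Minimal sketch of an archiving decision rule: keep a rolling retention
# window in the live tables and move everything older to an archive.
# The 24-month window and field names are invented for illustration.
from datetime import date, timedelta

RETENTION_MONTHS = 24  # invented; agree the real window with the customer


def split_for_archiving(rows: list[dict], today: date) -> tuple[list[dict], list[dict]]:
    """Split rows into (keep, archive) by a pricing-date cutoff."""
    cutoff = today - timedelta(days=RETENTION_MONTHS * 30)
    keep = [r for r in rows if r["pricingDate"] >= cutoff]
    archive = [r for r in rows if r["pricingDate"] < cutoff]
    return keep, archive


# Example: two transactions, one inside and one outside the window.
rows = [
    {"id": 1, "pricingDate": date(2024, 5, 1)},
    {"id": 2, "pricingDate": date(2020, 1, 15)},
]
keep, archive = split_for_archiving(rows, today=date(2025, 1, 1))
print(len(keep), "kept,", len(archive), "to archive")
```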


What is recommended to do in projects:

  1. Have a focus on performance from the very beginning. Don't wait until the performance sprint and expect all issues to be solved in that sprint. Even with low-complexity logics and low data volumes, potential bottlenecks can be found and solved.

  2. Get the full-size data set as soon as possible in your project. Also get the customer's data volume numbers in writing, so that we can fall back on them later. If certain data volumes later turn out to be significantly bigger than anticipated, we are in a better position to raise, for example, a change order. Data sizing takes into consideration both the row count and the column count of the master data & extensions, transaction history, and important company parameter tables (a sketch for tracking agreed vs. actual volumes follows this list).

  3. Make the customer co-owner of the performance topic. Their data volumes and the complexity of their logics drive performance. Are those large volumes really needed, and does the logic need to be that complex? Performance is not something we can solve by just adding hardware or just 'optimizing code'.

  4. In case of any expected performance issue, involve Performance Engineers from the start. If you do this late, the risk is that a lot of code needs to be rewritten and functionality retested. Having it nailed from the start will save a lot of money. It never hurts to involve the experts early and get their views and opinions.

  5. Use the performance sprint for final testing and tuning, not as the time period where the bulk of testing and rework is done. Experience shows we can improve performance a lot, but this often takes many iterations.

  6. Never commit to any performance number. Data sizes, lookups, logic complexity, etc. can vary so much that performance is impossible to predict beforehand. Only the sales team can commit to numbers during the sales cycle when it is needed to close a deal. Once the deal is closed, we do not commit anymore, but we will do our utmost to improve the system performance as much as possible.
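
To make the data-sizing agreement from point 2 actionable, it can help to keep the agreed volumes in machine-readable form and flag significant deviations before the performance sprint. A minimal sketch, in which all table names, counts, and the tolerance are invented for illustration:

```python
# Minimal sketch: compare agreed data volumes (from the signed-off document)
# against actual counts, so significant deviations can back up a change order.
# All table names, numbers, and the tolerance are invented for illustration.

AGREED_VOLUMES = {            # rows agreed with the customer in writing
    "Products": 250_000,
    "ProductExtensions": 1_000_000,
    "TransactionHistory": 20_000_000,
    "PriceParameters": 50_000,
}

TOLERANCE = 0.25              # flag anything more than 25% above the agreement


def check_volumes(actual_counts: dict[str, int]) -> list[str]:
    """Return a list of warnings for tables that exceed the agreed volume."""
    warnings = []
    for table, agreed in AGREED_VOLUMES.items():
        actual = actual_counts.get(table, 0)
        if actual > agreed * (1 + TOLERANCE):
            warnings.append(
                f"{table}: agreed {agreed:,} rows, actual {actual:,} "
                f"({actual / agreed:.1%} of agreement) - consider a change order"
            )
    return warnings


if __name__ == "__main__":
    # Actual counts would come from the partition (export, query, or API).
    actuals = {"Products": 380_000, "TransactionHistory": 19_500_000}
    for warning in check_volumes(actuals):
        print(warning)
```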

Recommended steps:

  • Initiation (Prescriptive) or Foundation (classic PfX agile)

    • Check whether the acceptance criteria contain any performance requirements, whether vague (“quick enough”) or concrete (“response time < 100 ms”), and refuse or escalate them.

    • Ask the customer for the expected data volumes. Document them in meeting minutes or another document.

    • Ask the customer for the full (“production-sized”) data set.

  • Project checkpoint at PMO

    • Present the performance assessment

  • Feature sprint 1:

    • Involve a performance engineer to spot potential areas of performance issues based on the data volumes and logics

    • Try to have a production-sized data set already in DEV (in line with Prescriptive Delivery standards)

  • Feature sprint x:

    • Have the performance engineer do checks and give recommendations

    • Do this every sprint

    • Make (better: have the customer make) the test cases to be tested during the Performance Sprint

      • Make clear test cases

      • Agree on the data and testing metrics to be used. This is the most important part of performance testing: defining what representative data for performance testing looks like. E.g. how many items go in a quote or order, for how many of those items there is a special price agreement (additional lookup), and how many parallel users are in the system. For API calls there are many more parameters to be defined, e.g. number of calls per minute, payload size, etc. Involve the Performance team in this discussion (a minimal load-test sketch follows these steps).

      • Split Pricefx processing time from the other systems' processing time (especially when doing API calls - customers want end-to-end performance, and Pricefx is just a part of that)

  • Performance Sprint

    • Ensure representative data is loaded in the system

    • Run the test cases as defined before

    • Do the optimizations
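
As a starting point for the API test cases agreed during the feature sprints, the sketch below fires concurrent calls and reports client-side latency percentiles. The endpoint, token, payload, and call counts are placeholders to be replaced with the values agreed with the customer; for real runs a dedicated load-testing tool (e.g. JMeter or Gatling) is usually preferable, but the parameters to agree on stay the same:

```python
# Minimal load-test sketch for the agreed API test cases: fire N concurrent
# calls and report latency percentiles. The URL, auth header, and payload are
# placeholders - substitute the endpoints, call rates, and concurrency agreed
# with the customer during the feature sprints.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.pricefx-instance.test/api/quote/calculate"  # placeholder
HEADERS = {"Authorization": "Bearer <token>"}                      # placeholder
PAYLOAD = {"lineItems": 50}                                        # placeholder
CALLS = 100          # total calls, per the agreed calls-per-minute figure
CONCURRENCY = 10     # parallel users agreed in the test-case definition


def timed_call(_: int) -> float:
    """Return the client-side wall time of one call, in milliseconds."""
    start = time.perf_counter()
    requests.post(URL, json=PAYLOAD, headers=HEADERS, timeout=60)
    return (time.perf_counter() - start) * 1000


with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(CALLS)))

print(f"p50: {statistics.median(latencies):.0f} ms")
print(f"p95: {latencies[int(len(latencies) * 0.95) - 1]:.0f} ms")
print(f"max: {latencies[-1]:.0f} ms")
```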

Notes & hints

  • Have as much data as possible in the (QA) partition.

  • Create a separate partition just for performance testing. The smoothest way is to clone the QA partition.

  • There is no generic performance testing scenario; it depends on the enabled modules, data volumes, and data complexity. The PM is accountable, and the SA & Performance engineer are responsible for the performance testing design - to be made during the feature sprints.

  • The system response time (GUI or API) as perceived by the customer always has multiple contributors, such as Pricefx performance, network latency, screen rendering in the browser, and other applications' response times. We as Pricefx can be held responsible only for Pricefx’s performance at its endpoint (a measurement sketch follows these notes).

  • For the sake of project velocity, this sprint is supposed to run in parallel with the UAT Preparation sprint.
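
A minimal sketch of the response-time split mentioned above, using Python's requests library: r.elapsed stops once the response headers are parsed, so it approximates network latency plus Pricefx processing time, while the wall-clock total also includes body download and client-side handling. Browser rendering comes on top of both and has to be measured in the browser itself. The endpoint is a placeholder:

```python
# Minimal sketch: split client-perceived response time into rough contributors.
# requests' r.elapsed stops when the response headers have been parsed, so it
# approximates network latency + server processing; the wall-clock total adds
# body download and client-side handling. Browser rendering comes on top and
# has to be measured separately (e.g. in browser dev tools).
import time

import requests

URL = "https://example.pricefx-instance.test/api/fetch"  # placeholder endpoint

start = time.perf_counter()
r = requests.get(URL, timeout=60)
total_ms = (time.perf_counter() - start) * 1000
headers_ms = r.elapsed.total_seconds() * 1000

print(f"until headers (network + server): {headers_ms:.0f} ms")
print(f"body download + client handling: {total_ms - headers_ms:.0f} ms")
print(f"client total (excl. rendering):  {total_ms:.0f} ms")
```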