...

It is a common requirement to run calculation jobs in a certain sequence or with dependencies, e.g., CFSs prior to LPGs, or various Data Loads in PriceAnalyzer Analytics.

A typical scenario looks like this: A file is loaded into PriceAnalyzer Analytics once per day, and IntegrationManager triggers a Data Feed → Data Source Flush job. You want the data to end up in the Datamart as soon as possible, so you define that a Data Source → Datamart Refresh job is triggered as soon as the Data Feed → Data Source Flush finishes for any of the dependent data sources.
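To visualize the chain, here is a minimal sketch in plain Java. It is purely illustrative: the class and field names below are made up (only the idea of a "nextJobs" list comes from the text) and do not correspond to the actual platform API.

    import java.util.List;

    // Illustrative model only; real job definitions are configured in the
    // admin console / IntegrationManager, not via this hypothetical class.
    public class ChainExample {

        // A job together with the jobs to trigger once it finishes ("nextJobs").
        record JobDefinition(String name, List<JobDefinition> nextJobs) {}

        public static void main(String[] args) {
            // The Datamart Refresh is registered as the next job of the Flush,
            // so it starts as soon as the Flush completes.
            JobDefinition datamartRefresh =
                    new JobDefinition("DataSource -> Datamart Refresh", List.of());
            JobDefinition dataSourceFlush =
                    new JobDefinition("DataFeed -> DataSource Flush", List.of(datamartRefresh));

            System.out.println(dataSourceFlush.name()
                    + " triggers: " + dataSourceFlush.nextJobs().get(0).name());
        }
    }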

...

Please note: The types of jobs that can be scheduled are those that run within the so-called "Job Status Tracker" (JST) framework. The parameters required for a job to run properly vary by job type; it is best to take a manually started JST job from the admin console as an example. The chaining only implements a sequential order of job execution. There is no contextual "wrapper" of any kind within the chain, i.e., no backward or forward references and no "knowledge" of where in the chain a job is running.
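The following sketch (again plain Java with made-up names, not the actual Job Status Tracker implementation) illustrates what "sequential order, no contextual wrapper" means in practice: each job in the chain is started only after the previous one returns, and it receives only its own parameters, nothing about the chain or its position in it.

    import java.util.List;
    import java.util.Map;

    // Hypothetical illustration of sequential chaining without chain context.
    public class ChainRunnerSketch {

        // Each job sees only its own parameters; no chain information is passed in.
        interface Job {
            void run(Map<String, String> ownParameters);
        }

        static void runChain(List<Job> chain, List<Map<String, String>> parameters) {
            for (int i = 0; i < chain.size(); i++) {
                // Strictly sequential: the next job starts only after the previous one finishes.
                chain.get(i).run(parameters.get(i));
            }
        }

        public static void main(String[] args) {
            Job flush   = params -> System.out.println("Flush of " + params.get("dataSource"));
            Job refresh = params -> System.out.println("Refresh of " + params.get("datamart"));

            runChain(List.of(flush, refresh),
                     List.of(Map.of("dataSource", "DS1"), Map.of("datamart", "DM1")));
        }
    }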

...

  • Kick off job A with A1, A2, A3 as its "nextJobs" as described above. Jobs A1-A3 each have a single next job, CF1, which runs immediately whenever one of A1-A3 finishes, i.e., exactly 3 times.
  • In each of its 3 runs, the calculation flow job CF1 checks whether all 3 jobs A1-A3 are done, and if so (i.e., on the 3rd run), it triggers job B (see the sketch after this list).
  • This gets rid of the constant polling / high-frequency calculation flow while keeping all flow control options.
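A minimal sketch of the fan-in check performed by CF1, assuming there is some way to look up job statuses and to trigger a follow-up job. Both are represented here by hypothetical helpers (jobStatusOf, triggerJob); the actual calls depend on the Job Status Tracker setup.

    import java.util.List;

    // Hypothetical sketch of the CF1 fan-in check; jobStatusOf() and triggerJob()
    // stand in for whatever status lookup and job trigger the platform provides.
    public class FanInSketch {

        enum Status { RUNNING, FINISHED }

        // Assumed helper: looks up the current status of a job by name.
        static Status jobStatusOf(String jobName) {
            return Status.FINISHED; // placeholder for a real status lookup
        }

        // Assumed helper: kicks off the follow-up job.
        static void triggerJob(String jobName) {
            System.out.println("Triggering " + jobName);
        }

        // CF1 runs this once per finished A-job, i.e. three times in total.
        static void cf1Run() {
            List<String> aJobs = List.of("A1", "A2", "A3");
            boolean allDone = aJobs.stream()
                    .allMatch(name -> jobStatusOf(name) == Status.FINISHED);
            if (allDone) {
                triggerJob("B");
            }
        }

        public static void main(String[] args) {
            cf1Run();
        }
    }

The key point is that only the last of the three CF1 runs finds all A-jobs finished, so job B is triggered exactly once and without any polling loop.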