
Vocabulary

Analytical database - Postgres database where Data sources, Datamarts, Rollups and ModelClass tables are stored.

Transactional database - Postgres database (previously MariaDB in the older infrastructure) where all other tables are stored (Master data, Company parameters, Condition records, Price lists, Quotes, Agreements, Rebates …)

What is Query API?

Info

Available since 14.0 Caribou Lou release

...

  • Supports JOIN of tables

  • Provides a single unified interface for querying data from both the transactional and the analytical database

  • Works natively with field names (analytical tables) or field labels (master and transactional tables)

  • Data is provided in a stream (supported in the input generation mode)

  • Aggregation of the data

  • Advanced expressions

  • Allows using column aliases to retrieve a column under a different name

...

Info

Note: Joining data between the analytical database (PA module) and the transactional database (other modules) is not possible (e.g. a product extension with a data source). You can join tables only within the same database.

...

Columns are fields referenced from table objects. For master or transactional tables, those can be:

...

  • px.Cost.as("StandardCost")

  • p.sku().as("Product")
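As a minimal sketch (assuming a products table handle p obtained via q.tables().products(), as in the join example further below), aliased columns can be used like this:

Code Block
def q = api.queryApi()
def p = q.tables().products()

// sku is returned under the alias "Product"
return q.source(p, [p.sku().as("Product"), p.Brand])
        .stream { it.collect { it } }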

Data types

In the transactional database, the Groovy data type for an attribute field is determined by its configured Type:

  • String → java.lang.String

  • Integer → java.lang.Long

  • Real → java.math.BigDecimal

  • Date → java.time.LocalDate

  • Timestamp → org.joda.time.DateTime

  • Boolean → java.lang.Boolean

  • Link → java.lang.String

  • Entity reference → java.lang.String

Note

QueryAPI converts the value using CAST in SQL. So if there is a data integrity issue (e.g. an integration loaded a string value into an attribute field of type Integer), the whole query call crashes.

Note

Unity stores empty values of attribute fields in the transactional database as an empty string. Due to the CAST above, QueryAPI performs a special CASE-WHEN check for empty string values so that empty values do not cause crashes. E.g.:

CASE
WHEN attribute1 = '' THEN NULL
ELSE CAST(attribute1 AS LocalDate)
END

This check prevents an index on attribute1 from being used.

The check is omitted if the attribute field is marked with the Required flag. The expression then becomes a simple cast:

CAST(attribute1 AS LocalDate)

So it is recommended to mark attribute fields with the Required flag whenever possible. However, it must then be ensured (e.g. in the integration or by a boundCall) that the value can never be empty.

In the analytical database, the Groovy data type for an attribute field is determined by its configured Data Type:

  • Text → java.lang.String

  • Integer → java.lang.Long

  • Money → java.math.BigDecimal

  • Number → java.math.BigDecimal

  • Date → java.time.LocalDate

  • DateTime → org.joda.time.DateTime

  • Boolean → java.lang.Boolean

  • Currency → java.lang.String

  • Quantity → java.math.BigDecimal

Source table

The source table is specified using the .source() method, which tells which table will be queried. If you are familiar with SQL, you can think of this method as the “FROM” clause.

The first parameter is mandatory and accepts a reference to a table; the other parameters are optional. The second parameter specifies the list of columns to be returned. The third parameter is an expression that defines a filter on the returned rows, so it is like the “WHERE” clause in SQL. Since the existing Filter class was not sufficient for the purposes of QueryAPI, a new interface, Expression, has been introduced.

Example:

Code Block
q.source(t1)

Returns all fields from table t1, with no row filtering. Note that in production code you should specify the columns so that only the fields that are really used are returned, for performance reasons. So preferably use this:

Example:

Code Block
q.source(t1, [t1.sku(), t1.Brand])

Returns sku and Brand fields from the table t1. No row filtering.

Example:

Code Block
q.source(t1, [t1.sku(), t1.Brand], 
    t1.sku().equal("MB-0001")
)

Returns the sku and Brand fields from the table t1 and filters the rows so that only the row with sku = “MB-0001” is returned.

...

Result stream

QueryAPI always provides the result as a stream via the .stream(closure) method. The closure receives a ResultStream (which extends Iterator) as a parameter, so it is up to you how you iterate over the rows and how you consume each individual row. The processing of each row should be fast, because it blocks the database connection while it runs. On the other hand, collecting a large number of records into memory using .collect { it } and processing them later may lead to an OutOfMemory exception. So the approach should be evaluated wisely for a good compromise.
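A minimal sketch of consuming the result stream (table and column names follow the examples elsewhere on this page; collecting everything into memory, as done here, is only appropriate for small result sets):

Code Block
def q = api.queryApi()
def t1 = q.tables().products()

return q.source(t1, [t1.sku(), t1.Brand])
        .stream { resultStream ->
            def collected = []
            // ResultStream extends Iterator, so it can be iterated directly;
            // keep the per-row work fast because the DB connection is blocked meanwhile
            while (resultStream.hasNext()) {
                collected << resultStream.next()
            }
            return collected
        }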

Expressions

There are quite a few methods available; refer to the Javadoc:

https://developer.pricefx.eu/pricefx-api/groovy/master/net/pricefx/formulaengine/scripting/queryapi/Expression.html
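As a small sketch using only the expression methods shown elsewhere on this page (and, equal), with filter values that are purely illustrative:

Code Block
def q = api.queryApi()
def t1 = q.tables().products()

// Filter: sku = "MB-0001" AND Brand = "Mercury" (illustrative values)
def filter = q.exprs().and(
        t1.sku().equal("MB-0001"),
        t1.Brand.equal("Mercury")
)

return q.source(t1, [t1.sku(), t1.Brand], filter)
        .stream { it.collect { it } }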

Joining tables

Adding columns from another table can be done using these methods:

  • .leftOuterJoin()

  • .innerJoin()

The methods take the following parameters:

  1. reference to a table

  2. closure returning the columns to be added

  3. closure returning the join condition (an Expression)

Code Block
def q = api.queryApi()

def t1 = q.tables().products()
def t2 = q.tables().productExtensionRows("Cost")

return q.source(t1, [t1.sku()])
        .leftOuterJoin(t2, { cols -> [t2.Cost] },
                { cols -> 
                    q.exprs().and(
                            t2.sku().equal(cols.sku)
                    )
                }
        )
        .stream { it.collect { it } }


It is strongly recommended that you join only on indexed fields in the join criteria.

Aggregations


...

Row limit

The limit on the number of rows returned is set by the .take(maxResults) method, called at the end of the pipeline just before the .stream() call.

Example:

.take(10)

Will return the first 10 rows.

Use of .take() is recommended when debugging the queries.
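A sketch of a complete pipeline with a row limit (table and column names as in the earlier examples):

Code Block
def q = api.queryApi()
def t1 = q.tables().products()

return q.source(t1, [t1.sku(), t1.Brand])
        .take(10)                       // limit the result to the first 10 rows
        .stream { it.collect { it } }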

Tracing

The QueryAPI builds an SQL query (for the analytical database) or an HQL query (for the transactional database), which is then executed by the database engine. To check the final SQL/HQL query built by QueryAPI, call the .traceQuery() method at the end of the pipeline, just before the .stream() call:

.traceQuery()

It will return the final HQL/SQL query, which may help you better understand how the data will be retrieved.

Performance, Important Prerequisites

Master and transactional tables:

Analytical tables:

...

The following methods are deprecated as of 14.0 since their functionality is better supported by the QueryAPI (see the migration sketch after this list):

  • api.getItemCompleteCalculationResults() (performance)

  • api.productExtension() (performance)

  • api.customerExtension() (performance)
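As a hedged migration sketch, data previously fetched with api.productExtension() can be retrieved via QueryAPI roughly as follows (this reuses the join example above; the PX name "Cost" and its Cost column are placeholders for your own extension):

Code Block
def q = api.queryApi()

def p  = q.tables().products()
def px = q.tables().productExtensionRows("Cost")

// Replaces an api.productExtension() lookup of the "Cost" PX:
// products left-joined with their "Cost" extension rows
return q.source(p, [p.sku()])
        .leftOuterJoin(px, { cols -> [px.Cost] },
                { cols -> q.exprs().and(px.sku().equal(cols.sku)) })
        .stream { it.collect { it } }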

More tables will be supported by the QueryAPI in the future; therefore, the following methods may be deprecated in the future:

...