Reference (QueryAPI)
- 1 Base Vocabulary
- 2 Tables
- 3 Columns
- 3.1 Data Types
- 4 Source Table
- 5 Result Stream
- 6 Expressions
- 6.1 Aggregation
- 6.2 Mathematical
- 6.3 Logical
- 6.4 Conditional
- 6.5 Literal
- 6.6 Comparison
- 6.7 Window Functions
- 7 Joining Tables
- 7.1 Example
- 8 Removing Columns
- 8.1 Example
- 9 Retaining Columns
- 9.1 Example
- 10 Additional Filtering
- 10.1 Example
- 10.2 Filters in Dashboard
- 10.2.1 Example
- 10.3 No Operation Expression for Filter Pattern
- 10.3.1 Example
- 11 Aggregations
- 11.1 Example
- 12 Sorting
- 12.1 Example
- 13 Row Limit
- 13.1 Example
- 14 Tracing
Base Vocabulary
Analytical Database – A PostgreSQL database that stores Data Sources, Datamarts, Rollups, and ModelClass tables.
Transactional Database – A PostgreSQL database (previously MariaDB in the older infrastructure) that contains all other tables, including Master Data, Company Parameters, Condition Records, Price Lists, Quotes, Agreements, Rebates, and more.
Tables
References to a specific table can be made using methods (e.g., products()) available under QueryApi.tables(). All of these methods return a Table interface, which is used to query the desired table.
Example
def tables = api.queryApi().tables()
def tProduct = tables.products()
def tCosts = tables.productExtensionRows("Costs")
def tCountries = tables.companyParameterRows("Countries")

The table objects are used to reference columns.
Supported Transactional Tables
Product
Product Extension
Product Competition - since 15.0
Product BoM - since 16.1
Customer
Customer Extension
Seller
Seller Extension
Company Parameters
Condition Records
Price Lists
Price List Line Items
Price Grids
Price Grid Line Items
Calculation Grids
Calculation Grid Items
Quote Line Items
Contract Line Items
Rebate Line Items
Rebate Records
Compensation Line Items
Compensation Records
Action Items - since 15.0
Model Tables - since 15.0
Data Feeds - since 15.0
Data Sources - since 15.0
Datamarts - since 15.0
Rollups - since 15.0
Custom Forms - since 16.1
Is your table not in the list? Please contact the Product team to put it on the roadmap.
Supported Analytical Tables
Analytical tables are currently not supported; the plan is to support them in the 15.0 release:
Data Sources
Datamarts
Rollups
Model Class Tables
Columns
Columns are fields referenced from table objects. For transactional tables, those can be:
Configured fields - fields where meaning is defined by configuration (attributeX, keyX, additionalInfoX)
System fields - have specific meaning defined by backend
Expressions with attribute or system fields
Attribute fields can be used in the QueryAPI only if you configure a name and data type for that attribute field. Configured fields are accessed as a property by the configured label, e.g., px.Cost, cp.Country. Therefore it is not recommended to configure attribute labels with spaces, since it makes the code less readable; however, such column names are supported. Attribute fields cannot be accessed by their system field name, e.g., p.attribute3 will not work.
System fields are accessed using methods in the given table interface, e.g., p.sku(), c.customerId(), p.lastUpdateDate(). It is also possible to access system fields as properties, e.g., p.sku, c.customerId, p.lastUpdateDate, or by their label, e.g., Product, Customer, “Last Update Date”, but that is not the recommended way. The best practice is to use methods, so it is obvious in the code which fields are system fields and which are configured fields.
If you want to retrieve a column under a different name, you can use the alias method as(), e.g.:
px.Cost.as("StandardCost")
p.sku().as("Product")
Use key1() to refer to the name column when querying a Company Parameter table of type SIMPLE.
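For illustration, a minimal sketch of the key1() usage described above (a SIMPLE Company Parameter table named "Countries" and the configured column Continent are assumptions):

```groovy
def qapi = api.queryApi()
// Assumption: "Countries" is a Company Parameter table of type SIMPLE
def cp = qapi.tables().companyParameterRows("Countries")
// key1() refers to the name column of a SIMPLE table; Continent is an assumed configured field
return qapi.source(cp, [cp.key1().as("Country"), cp.Continent])
        .stream { it.collect { it } }
```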
Data Types
Query API defines its own set of data types it can handle and output. The table below lists those QueryAPI types and the corresponding Groovy types.
Query API Type | Groovy Data Type |
|---|---|
String | java.lang.String |
Integer | java.lang.Long |
Real | java.math.BigDecimal |
DateOnly | java.time.LocalDate |
DateTime | org.joda.time.DateTime |
Boolean | java.lang.Boolean |
Pojo | Pricefx DTO object. For now this is only: |
In the transactional database, the Groovy data type of standard fields depends either on their default data type or on their configured data type.
The returned data type of configurable fields (such as attribute fields) is determined by their configured Type:
Configured Type | Field Type | Query API Type | Groovy Data Type |
|---|---|---|---|
String | 2 | String | java.lang.String |
Integer | 3 | Integer | java.lang.Long |
Real | 1 | Real | java.math.BigDecimal |
Date | 4 | DateOnly | java.time.LocalDate |
Timestamp | 5 | DateTime | org.joda.time.DateTime |
Boolean | 9 | Boolean | java.lang.Boolean |
Link | 6 | String | java.lang.String |
Entity reference | 8 | String | java.lang.String |
QueryAPI converts the value using CAST in SQL. So if there is an issue with data integrity (e.g., integration loaded a string value into an attribute field of type Integer), the whole query call crashes.
Unity stores empty values of attribute fields in the transactional database as an empty string. Due to the CAST above, QueryAPI performs a special CASE-WHEN check for empty string values in order to not crash on empty values, e.g.:
CASE WHEN attribute1 = '' THEN NULL ELSE CAST(attribute1 AS LocalDate) END
This check prevents an index on attribute1 from being used.
The check is omitted if the attribute field is marked with the Required flag. The expression then becomes a simple cast:
CAST(attribute1 AS LocalDate)
Therefore it is recommended to mark attribute fields with the Required flag whenever possible. However, it then has to be ensured (e.g., in integration or by a boundCall) that the value cannot be empty.
In the analytical database, the Groovy data type for an attribute field is determined by its configured Data Type:
Column Type | Groovy Data Type |
|---|---|
Text | java.lang.String |
Currency | java.lang.String |
Integer | java.lang.Long |
Money | java.math.BigDecimal |
Number | java.math.BigDecimal |
Date | java.time.LocalDate |
Timestamp | org.joda.time.DateTime |
Boolean | java.lang.Boolean |
Quantity | java.math.BigDecimal |
Source Table
The source table is specified using the source() method, which tells which table will be queried. If you are familiar with the SQL language, you can look at this method as the FROM clause in SQL.
The first parameter is mandatory and accepts a reference to a table; the other parameters are optional. The second parameter specifies the list of columns or expressions to be returned. The third parameter is an expression that defines a filter on the returned rows, like the WHERE clause in SQL. Since the existing Filter class was not sufficient for the purposes of QueryAPI, a new interface Expression has been introduced.
The second parameter with the column list can be omitted, but that is not recommended, since it has an impact on performance - see Performance (QueryAPI). Furthermore, if you select all columns, you need to ensure that your configuration will not be impacted when new columns are introduced in tables in future releases.
Example
qapi.source(t1, [t1.sku(), t1.Brand])

Returns the sku and Brand fields from the table t1. No row filtering.
Example
qapi.source(t1, [t1.sku(), t1.Brand],
t1.sku().equal("MB-0001")
)

Returns the sku and Brand fields from the table t1 and filters the rows to return the row with sku = “MB-0001”.
Result Stream
QueryAPI always provides the result as a stream via the stream(closure) method. The closure receives a ResultStream (which extends Iterator) as a parameter, so it is up to you to iterate over the rows and decide how to consume each individual row. The processing of each row should be implemented to consume the result as fast as possible, since it blocks the database connection. On the other hand, collecting a big amount of records into memory using collect { it } and processing them later may lead to an OutOfMemory exception. So the approach should be evaluated wisely to find a good compromise.
The stream passed to stream(closure) is closed automatically, so there is no need to call ResultStream.close() or use .withCloseable { }.
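As a sketch of row-by-row consumption without collecting everything into memory (the Brand configured field is an assumption):

```groovy
def qapi = api.queryApi()
def p = qapi.tables().products()
def brands = [] as Set
qapi.source(p, [p.sku(), p.Brand])
    .stream { resultStream ->
        // ResultStream extends Iterator, so iterate row by row.
        // Keep per-row work light: the database connection stays open during iteration.
        while (resultStream.hasNext()) {
            def row = resultStream.next()
            brands << row.Brand
        }
    }
return brands
```

This keeps only the distinct Brand values in memory instead of materializing all rows with collect { it }.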
Expressions
There are quite many methods available. For more information see Public Groovy API.
Aggregation
sum(), count(), min(), max(), avg()
Note that, starting from version 15.0, the following QueryAPI Count methods have been deprecated. Instead, use the new replacements listed below:
Deprecated Expression | Replacing Expression | Description |
|---|---|---|
 | | This method counts all items. |
 | | This method counts non-null instances of the specified expression. |
 | | This method counts the number of distinct non-null values resulting from the given expression. |
Mathematical
abs(), exp(), ceil(), floor(), round(), pow()
Logical
and(), or(), not()
Conditional
caseWhen(), coalesce(), nullIf()
Literal
string(), integer(), real(), dateOnly(), dateTime(), LocalDate(), bool()
Comparison
notIn(), in(), equals(), notEqual(), lowerThan(), greaterThan(), lowerOrEqual()
Note that, starting from version 15.0, all methods in Expressions that use the Date object have been deprecated. Use the following expressions instead:
qapi.exprs().dateOnly(Date)
qapi.exprs().dateTime(Date)
Window Functions
Window Function – Performs calculations across a set of table rows related to the current row. It allows operations like:
Running totals
Moving averages
Rankings
Aggregate Function (GROUP BY) – Aggregates data across multiple rows to produce a single result per group.
Key Differences
Window Functions – Retain individual rows; calculations like sums or averages extend over a window of related rows.
Aggregate Functions – Group rows into a single output row for each group.
The diagram illustrates how window functions keep all rows while calculating values like running sums or averages, while aggregate functions condense them into fewer results.
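For a conceptual illustration of this difference in plain SQL (not QueryAPI syntax; the sales table and its columns are assumptions):

```sql
-- Aggregate function (GROUP BY): one output row per group
SELECT region, SUM(amount) AS total_amount
FROM sales
GROUP BY region;

-- Window function: every input row is retained, with a running total
-- computed over the window of related rows
SELECT region, amount,
       SUM(amount) OVER (PARTITION BY region ORDER BY sale_date) AS running_total
FROM sales;
```

The first query collapses each region into a single row; the second returns all sales rows, each annotated with the running total for its region.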
Examples
See Also
Joining Tables
Adding columns from another table is done using these methods:
leftOuterJoin() - result contains all records from the source table and matching records from the joining table
innerJoin() - result contains only matching records from the source and joining tables
crossJoin() - result contains all combinations of records from the source and joining tables. Note it can produce huge results!
A right outer join method is not implemented, since the same effect can be achieved using leftOuterJoin().
Multiple joins can be done.
How to join multiple pipelines: see Join Pipelines.
These methods take the following parameters:
Reference to a table.
Closure that returns the list of columns or expressions to be added to the result.
Closure that returns the joining condition as a boolean expression
It is strongly recommended that the joining conditions use all fields of the database primary index.
Example
def date = new Date()
def q = api.queryApi()
def t1 = q.tables().products()
def t2 = q.tables().productExtensionRows("Cost")
return q.source(t1, [t1.sku(), t1.ProductGroup])
        .leftOuterJoin(t2, { cols -> [t2.Cost, t2.Currency] },
            { cols ->
                q.exprs().and(
                    t2.sku().equal(cols.sku),
                    t2.ValidFrom.lessOrEqual(date),
                    t2.ValidTo.greaterOrEqual(date)
                )
            }
        )
        .stream { it.collect { it } }

The query returns all product records and their cost and currency from the product extension Cost, or null for cost and currency if the product extension record does not exist.
Removing Columns
If you had to add columns via source(), leftOuterJoin() or innerJoin() in order to build the joining condition but you do not need them in the result, you can remove them using the removeColumns() method. The method takes a closure as the argument which should return a list of columns to be removed.
Example
.removeColumns { cols -> [cols.ValidFrom, cols.ValidTo] }

Retaining Columns
Alternatively, if you had to add columns via source(), leftOuterJoin() or innerJoin() in order to build the joining condition but you do not need them in the result, you can keep only the needed columns using the retainColumns() method. The method takes a closure as the argument which should return a list of columns that should be present in the result.
Example
.retainColumns { cols -> [cols.sku, cols.Cost] }

Additional Filtering
If you made several joins using leftOuterJoin() or innerJoin(), you can apply additional filtering to the result using the filter() method. The method takes a closure as the argument which should return an expression; rows for which the expression evaluates to true are present in the result.
Example
.filter { cols -> cols.Cost.isNotNull() }

Filters in Dashboard
You can filter in Dashboards using the qapi.exprs().fromDataSlice method that translates a DatamartContext.DataSlice into a QueryAPI Expression.
Example
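A minimal sketch of the pattern (how the DataSlice is obtained from the dashboard is an assumption here, as are the table and column names):

```groovy
def qapi = api.queryApi()
def t1 = qapi.tables().products()
// Assumption: 'dataSlice' is a DatamartContext.DataSlice passed in by the
// dashboard logic (e.g., from an embedded filter event).
// fromDataSlice translates it into a QueryAPI Expression; fall back to a
// no-op expression when no slice is provided.
def filterExpr = dataSlice ? qapi.exprs().fromDataSlice(dataSlice) : qapi.exprs().bool(true)
return qapi.source(t1, [t1.sku()], filterExpr)
        .stream { it.collect { it } }
```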
No Operation Expression for Filter Pattern
Sometimes, it is necessary to conditionally build a filter. To avoid writing complex code that calls different methods on PipelineStage depending on whether a filter needs to be added, users can use a no-operation (noop) expression. This approach simplifies the code and improves readability.
Example
protected Expression buildQueryExpression(Table table, String customerType, Integer revenueTotal) {
Exprs exprs = api.queryApi().exprs()
List<Expression> expressions = []
if (customerType) {
expressions += table.getAt("CustomerType").equal(customerType)
}
if (revenueTotal) {
expressions += table.getAt("Revenue").lessThan(revenueTotal)
}
return expressions ? exprs.and(*expressions) : exprs.bool(true)
}

Aggregations
Example
def qapi = api.queryApi()
def t1 = qapi.tables().productExtensionRows("Competition")
def exprs = qapi.exprs()
return qapi.source(t1, [t1.sku(), t1.Price])
.aggregateBy { cols ->
[cols.sku]
} { cols ->
[cols.sku, exprs.min(cols.Price).as("MinPrice")]
}
.stream { it.collect { it } }
Returns lowest competitor’s price for each SKU.
Sorting
To sort the result rows, use the sortBy() method. This method takes a closure as an argument which should return a list of orders. The orders can be one of the following methods, depending on whether the column values should be sorted in ascending or descending order and, if there are null values, whether those should come first or last:
orders.ascNullsFirst(), orders.ascNullsLast(), orders.descNullsFirst(), orders.descNullsLast()
Example
def qapi = api.queryApi()
def t1 = qapi.tables().companyParameterRows("ProductHierarchy")
def orders = qapi.orders()
return qapi.source(t1, [t1.ProductGroup, t1.ProductLine])
.sortBy { cols -> [orders.ascNullsLast(cols.ProductGroup), orders.ascNullsLast(cols.ProductLine)] }
.stream { it.collect { it } }

Will sort the result rows by ProductGroup and then by ProductLine.
Row Limit
The limit on the number of rows being returned is set by the take(maxResults) method, called at the end of the pipeline just before the stream() call.
Example
def qapi = api.queryApi()
def p = qapi.tables().products()
return qapi.source(p)
.take(10)
.stream { it.collect { it } }

Will return the first 10 rows.
Use of take() is recommended when debugging the queries.
Tracing
The QueryAPI builds an SQL query (for the analytical database) or an HQL query (for the transactional database), which is then executed by the database engine. To check the final SQL/HQL query built by QueryAPI, call the method traceQuery() at the end of the pipeline before the stream() call.
def qapi = api.queryApi()
def p = qapi.tables().products()
return qapi.source(p)
.traceQuery()
.stream { it.collect { it } }

It will log into the traces the HQL/SQL query at the stage where you inserted traceQuery(), which may help you better understand how the data will be retrieved. The following example shows how traceQuery() can be used multiple times.
def qapi = api.queryApi()
def p = qapi.tables().products()
return qapi.source(p)
.traceQuery() // traces the query without filter
.filter { prev -> p.sku().like("%pack")}
.traceQuery() // traces the query with the filter
.stream { it.collect { it } }