Backend Error Messages

Each entry below lists the error message, its reason, and the recommended action.
Error message:
java.util.concurrent.ExecutionException: net.pricefx.service.DMPersistenceServiceException:
java.lang.IllegalArgumentException: Schema for partition 'xxxx' does not exist

Reason: Analytics is not set up.
Action: If the customer has a licence for Analytics, contact Support to set it up.
Error message:
java.util.concurrent.ExecutionException: net.pricefx.service.ObjectLockedException:
Object xxx is locked

Error messages:
net.pricefx.service.impl.dmsession.FBCException: java.lang.IllegalStateException:
Reading from xxx failed; file length -1 read length 12288 at 156382424 [1.4.197/1]

java.lang.IllegalStateException: This store is closed

Reason: There is a bug in the H2 DB MVStore library that we used up to Salty Dog 3.5.5; see PFCD-5075. Since Salty Dog 3.5.6, we use the latest H2 DB release and this error should no longer appear.
Error message:
o.h.e.j.spi.SqlExceptionHelper - (conn=360710)
Could not send query: query size is >= to max_allowed_packet (16777216)

Reason: The SQL query is too big.
Action: See the article Avoid Returning Big Data in Element Result and /wiki/spaces/SUP/pages/2071757881.
Error message:
Thread died

Reason: The server was restarted or crashed.
Action: Check if the instance was restarted; if not, check the server log.
Error message:
Unable to acquire JDBC Connection

Reason: The DB server is not available.
Action: Contact Support to check the instance.
Error message:
java.util.concurrent.ExecutionException: net.pricefx.service.DMPersistenceServiceException:
Calling method 'createQuery' is not valid without an active transaction
(Current status: MARKED_ROLLBACK)

Reason: This typically happens if you get a DatamartContext and try to use it later in a different place/thread; see the sketch below. Make sure not to store it or pass it around. If this is not the case, then there was an error that caused the transaction to be rolled back.
Action: Check the server logs for more info.
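A minimal Groovy sketch of the do/don't, assuming a Datamart named "Transactions" (the name is illustrative):

// WRONG: keeping the DatamartContext for later use in another element/thread.
// The context is bound to the current transaction, so a deferred call fails
// with "not valid without an active transaction".
//   def storedCtx = api.getDatamartContext()   // stored and reused later

// RIGHT: obtain the context right where you query, in the same execution.
def ctx = api.getDatamartContext()
def dm = ctx.getDatamart("Transactions")       // illustrative Datamart name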
Error message:
Too many new instances created: xxxx

Reason: Too many objects were created in the server memory. Sometimes the Groovy engine creates unexpected new instances of objects.
Action: See the article Put "def" outside of the loops; a sketch follows below.
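A minimal Groovy sketch of the idea behind that article (data and names are illustrative):

def rows = [[price: "10.5"], [price: "7.2"]]   // illustrative input
BigDecimal total = 0

// Problematic: "def price = ..." inside the loop declares a new variable
// on every iteration, which counts against the instance limit.
//   rows.each { def price = it.price as BigDecimal; total += price }

// Better: declare once outside the loop, only assign inside it.
def price
for (row in rows) {
    price = row.price as BigDecimal
    total += price
}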
Error message:
Maximum In-Memory row count of 100000 exceeded.

Reason: The data load job tries to work with more rows than the allowed maximum (100,000).
Action: Try to enable batching on the data load, or split the data load into several smaller ones.

Error message:
No content to map due to end-of-input at
[Source: okhttp3.ResponseBody$BomAwareReader@50fdc87a; line: 1, column: 0]

Reason: The logic which you want to test-execute has a bug. Example:

api.trace("THIS", "FrozenLastFinalPrice", FrozenLastFinalPrice)
BigDecimal FrozenLastFinalPrice = api.getElement("FrozenLastFinalPrice")?.toBigDecimal()

FrozenLastFinalPrice is used in the api.trace call before it is declared. In Salty Dog 3.5 this produced no error; after deploying the next version, El Presidente 3.6, the issue appeared. It was resolved by fixing the code, as shown below.
Action: Check the server logs for more info.
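Per the note above, the fix is to declare the variable before tracing it:

BigDecimal FrozenLastFinalPrice = api.getElement("FrozenLastFinalPrice")?.toBigDecimal()
api.trace("THIS", "FrozenLastFinalPrice", FrozenLastFinalPrice)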


Error message:
Missing cloudconvert api token in config

Reason: The CloudConvert service needs a token.
Action: Ask Support to enter the CloudConvert token on that instance.

Error message:
n.p.s.c.i.actions.AbstractAction - Action.call: rolling back transaction due to unexpected error:
org.hibernate.HibernateException: Calling method 'getHibernateFlushMode' is not valid without an active transaction (Current status: MARKED_ROLLBACK)

Reason: The server started to return this message after an Analytics logic execution failed.
Action: Try to wait until the lock is released. Replacing closures with a 'for' loop can also help; see the sketch below.
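A minimal Groovy sketch of that substitution (the data and loop body are illustrative):

def items = [1, 2, 3]          // illustrative data
def results = []

// Closure form that can be replaced:
//   items.each { results << it * 2 }

// Equivalent 'for' loop:
for (item in items) {
    results << item * 2
}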

Error message:
04:20:34.269 [CalculationTask-zZ1KK] [customer_qa] INFO n.p.f.DefaultFormulaEngine - Caught error in [AllItemsLoaderTesting - RedBook - Customers] : ERROR(@15): Not starting new StreamingSearchExecutor as the max of 1 is already running
java.sql.SQLException: java.sql.SQLException: Formula [AllItemsLoaderTesting[3203]] init step error: RedBook [via Customers] : ERROR(@15): Not starting new StreamingSearchExecutor as the max of 1 is already running
Caused by: java.sql.SQLException: Formula [AllItemsLoaderTesting[3203]] init step error: RedBook [via Customers] : ERROR(@15): Not starting new StreamingSearchExecutor as the max of 1 is already running

Action: Increase the limit to at least 2 (or more), depending on your logic and on the api.stream usage at the given time. The limit is defined in the cluster configuration:

<!-- StreamingSearchExecutor (api.stream): max concurrent streamers in a given partition -->
<maxStreamingSearchConcurrency>1</maxStreamingSearchConcurrency>
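For example, to allow two concurrent streamers, as suggested above:

<maxStreamingSearchConcurrency>2</maxStreamingSearchConcurrency>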
Error message:
Batch update returned unexpected row count from update [X]; actual row count X; expected: X
Error message:
java.sql.SQLException: java.sql.SQLException: java.lang.IllegalStateException:
Chunk XXX not found [1.4.200/9] at net.pricefx.service.impl.dmload.PACalculationTask.doCalculation(PACalculationTask.java:463)

Action: Follow the Distributed Calculation Dataload approach instead. In some cases, setting mvstoreAutoCommitBufferInMB to 0 in the cluster config might also help; see the sketch below.
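A sketch of that setting, assuming it uses the same XML element style as the cluster-config snippet above (the name mvstoreAutoCommitBufferInMB comes from the source; its exact placement in the config is an assumption):

<!-- 0 disables the MVStore auto-commit buffer, per the action above -->
<mvstoreAutoCommitBufferInMB>0</mvstoreAutoCommitBufferInMB>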
