I encountered the 100,000-row limit for InMemory tables. Does this limit apply to each InMemory table created in the current context, or to the sum of the rows of all InMemory tables created in the current context? I created two InMemory tables, each with fewer than 100,000 rows, but I still hit the 100,000-row limit error. If the limit is the sum across all InMemory tables in the current context, how can we handle a huge dataset?
Another question: is there a way, in code, to drop an InMemory table once processing is done?
The limit is for the sum of all rows of all InMemory tables.
When handling huge datasets, we need to strike a balance in a multi-tenant environment, hence the current limit. For more demanding customers, we have the option to move the PA jobs to a separate node where a higher limit can be set. This is assessed on a case-by-case basis.
As for dropping an InMemory table, you have two options:
1. TableContext.dropTable(String tableName) // see the Javadoc
2. More sneakily: create a new table with the same name, which replaces the existing table.
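To make the two cleanup options concrete, here is a minimal, self-contained Java sketch. The `TableContext` class below is a hypothetical stand-in that only mimics the drop/replace behavior described above; it is not the real API, whose actual signatures should be taken from the Javadoc.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the real TableContext: it models only the
// name-to-table mapping needed to illustrate the two cleanup options.
class TableContext {
    private final Map<String, List<String>> tables = new HashMap<>();

    // Creating a table under an existing name replaces the old one (option 2),
    // because Map.put overwrites the previous mapping for that key.
    void createTable(String tableName, List<String> rows) {
        tables.put(tableName, new ArrayList<>(rows));
    }

    // Option 1: explicitly drop the table once processing is done,
    // freeing its rows from the context's row budget.
    void dropTable(String tableName) {
        tables.remove(tableName);
    }

    boolean exists(String tableName) {
        return tables.containsKey(tableName);
    }

    // Total rows across all tables in this context -- the quantity
    // the 100,000-row limit is counted against.
    int totalRows() {
        return tables.values().stream().mapToInt(List::size).sum();
    }
}
```

With either option, the rows of the old table no longer count toward the context-wide total, which is what matters for staying under the limit when processing large datasets in stages.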