Camel choice

Q: Why do I get the following?

<route>
            <from uri="direct:customerList" />
            <when>
                <simple>${header.token} contains "ahoj"</simple>
                <log message="VALID TOKEN!" />
            </when>
            <otherwise>
                <log message="INVALID TOKEN!" />
            </otherwise>
...
</route>

LOG:

12:18:46.756 | INFO  | qtp1409864883-19 | route5 | ID-ni3mm4nd-K73SV-1523009915657-0-1 | route5 | VALID TOKEN!
12:18:46.759 | INFO  | qtp1409864883-19 | route5 | ID-ni3mm4nd-K73SV-1523009915657-0-1 | route5 | INVALID TOKEN!

Why do I receive both messages? It behaves the same with "contains", "==" or "=~", no matter what I choose. What am I doing wrong?

A: The choice element is missing. The when/otherwise pair must be wrapped in a choice, for example:

<choice>
	<when>
		<simple>$simple{headers["CamelSqlRowCount"]} > 0</simple>
		<log message="Received matmas material from DB $simple{body}" loggingLevel="INFO" logName="business.global.material" />
		<split parallelProcessing="true">
			<simple>$simple{body}</simple>
			<to uri="bean://productDataMapper" />
			<to uri="disruptor://pxIntegrate?size=8192" />
		</split>
	</when>
	<otherwise>
		<log message="No matmas material from db received" loggingLevel="WARN" logName="business.global.material" />
	</otherwise>
</choice>
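Applied to the route from the question, a minimal sketch looks like this (header name and log messages taken from the question):

```xml
<route>
    <from uri="direct:customerList" />
    <choice>
        <when>
            <simple>${header.token} contains "ahoj"</simple>
            <log message="VALID TOKEN!" />
        </when>
        <otherwise>
            <log message="INVALID TOKEN!" />
        </otherwise>
    </choice>
</route>
```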

Warning when calling calculation logic in PFX without sending any data to it

Q: I need to just call a calculation logic in PFX without sending any data to it. Any idea how to send an empty body? I tried <setBody> with an empty ArrayList, but I still get a warning.

A: It can be handled by catching the exception and continuing as if nothing happened. Or you can send some dummy data which will hopefully be ignored.

A2: I think you need to set an empty Map as the body instead of an ArrayList.
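A minimal Camel XML sketch of A2's suggestion, using a Groovy expression to produce the empty Map (the step that calls the calculation logic is assumed to follow):

```xml
<!-- [:] is the empty Map literal in Groovy -->
<setBody>
    <groovy>[:]</groovy>
</setBody>
```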

Note about loadMapper 

If you already have values from a CSV header in the "sku;attribute1;..." format, just use:

<pfx:loadMapper id="productMapper" includeUnmappedProperties="true"/>

You do not need to define a mapper if your CSV is 1:1.

Access body in criterion

Q: How do I access body[0] in a criterion like this?

<pfx:criterion fieldName="sku" operator="equals" value="simple:body[0][sku]"/>

And a property?

<pfx:criterion fieldName="attribute3" operator="greaterThan" value="simple:property[JMSUpdated]"/>

A: It depends on what is in your body.

body[0][sku] means that your body is an array: you take the first element and read its sku property.

If your body is just an object, it should be body.sku.

To learn about simple expressions, see http://camel.apache.org/simple.html
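For illustration, the two cases side by side as criteria (a sketch; the field name sku is taken from the question):

```xml
<!-- body is a list/array of maps: take the first element's sku -->
<pfx:criterion fieldName="sku" operator="equals" value="simple:body[0][sku]"/>
<!-- body is a single object/map -->
<pfx:criterion fieldName="sku" operator="equals" value="simple:body.sku"/>
```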

Since Camel 2.15 it is correct to use:

<pfx:criterion fieldName="attribute3" operator="greaterThan" value="simple:exchangeProperty[JMSUpdated]"/>

Concatenation of several input values in mapper to final out field

Q: Is it possible to concatenate several input values in a mapper to final out field? Something like this:

<bean parent="mapperEntry" p:in="DOCNUM + MAT13 + VKORG" p:out="IDOCKey" />

A: You need to write a transformer.

<pfx:loadMapper>
	<pfx:groovy expression="body.DOCNUM + ':' + body.MAT13 + ':' + body.VKORG" out="IDOCKey"/>
</pfx:loadMapper>

Fetch QuoteLineItems

Q: How to fetch quote line items data?

A: Fetch the full quote using:

priceFxClient.getQuoteApi().fetch(typedId);

Cleaning up data feed

Q: How to clean up a data feed?

A: Call Data Load Truncate after the FLUSH events. IM listens to the PADATALOAD_COMPLETED event, filters for DS_FLUSH | READY, and then calls DL TRUNCATE on the Data Feed.

PADATALOAD_COMPLETED event
<!-- Listen to the PADATALOAD_COMPLETED event -->
<!-- Route to execute logic when DS_FLUSH completed -->
<route id="eventPADataLoadCompleted">
	<from uri="direct:eventPADataLoadCompleted"/>
	<log message="PADATALOAD_COMPLETED event received with info - type: ${body[data][0][type]} - targetName: ${body[data][0][targetName]} - status: ${body[data][0][status]}"/>
	<filter>
		<simple>${body[data][0][type]} == 'DS_FLUSH' &amp;&amp; ${body[data][0][status]} == 'READY'</simple>
		<to uri="direct:dataLoadCompleted"/>
	</filter>
</route>

Exporting price lists – mapping by label instead of name

Q: How can we export price list data correctly when its field names can change?

A: Since IM 1.0.4.2, you can use net.pricefx.integration.processor.AddPLIMetadataBasedFieldsProcessor together with the injectHeaderFromKeysToFirstLine attribute of the list-to-csv component to include headers from keys and map data by their metadata/labels. For details see list-to-csv - Export Data to CSV (DEPRECATED)

Price List export
<bean id="addPLIMetadataBasedFieldsProcessor" class="net.pricefx.integration.processor.AddPLIMetadataBasedFieldsProcessor">
	<property name="priceListId" value="simple:property.priceListId"/>
</bean>
<pfx:list-to-csv id="priceBookCSVTransform" outputUri="direct:priceBookToFile"
                     injectHeaderFromKeysToFirstLine="true" mapper="priceBookMapper" dataFormat="priceBookCsvFormat"/>
<route id="priceList-Fetch">
	<from uri="direct:priceListApproved"/>
	<setProperty propertyName="priceListId">
		<groovy>Double.valueOf(body.data.id[0]).longValue() + ""</groovy>
	</setProperty>
	<setProperty propertyName="priceListLabel">
		<simple>${body[data][0][label]}</simple>
	</setProperty>
	<setProperty propertyName="targetDate">
		<simple>${body[data][0][targetDate]}</simple>
	</setProperty>
    <setProperty propertyName="expiryDate">
		<simple>${body[data][0][expiryDate]}</simple>
	</setProperty>
	<log message="'ITEM_APPROVED_PL' notification received for price list '${property[priceListLabel]}' (id: ${property[priceListId]})" loggingLevel="INFO"/>
    <to uri="fetchPriceListItems"/>     
    <log message="Fetched ${body.size()} rows from PL '${property[priceListLabel]}' (id: ${property[priceListId]})" loggingLevel="INFO"/>
    <process ref="addPLIMetadataBasedFieldsProcessor"/>
    <to uri="direct://priceBookCSVTransform" />
</route>

Setting TAB character as CSV delimiter 

Q: How can I set the TAB character as the delimiter property of CsvDataFormat?

A: Use the XML character reference "&#x9;":

<property name="delimiter" value="&#x9;"/>

Saving or updating quote 

Q: How can I update a Quote from IM?

A: Use massEdit or quoteApi from pfxClient.

<pfx:filter id="fetchQuoteFilter">
    <pfx:and>
        <pfx:criterion fieldName="uniqueName" operator="equals" value="simple:header[sourceId]"/>
    </pfx:and>
</pfx:filter>
<pfx:dsMassEdit id="updateContractId" objectType="Q" filter="fetchQuoteFilter">
	<pfx:field name="additionalInfo3" value="simple:header.contractLN"/>
	<pfx:field name="additionalInfo4" value="simple:header.contractCA"/>
</pfx:dsMassEdit>

or 

SaveQuoteRequest saveQuoteRequest = new SaveQuoteRequest();
PriceQuoteRequest priceQuoteRequest = new PriceQuoteRequest();
PriceQuoteRequestData priceQuoteRequestData = new PriceQuoteRequestData();
try {
	Response price = pricefxClient.getQuoteApi().price(priceQuoteRequest.data(priceQuoteRequestData.quote(quote)));
	...
} catch (ApiException e) {
	...
}

Retrieving email list from user group 

Q: Is there an "official" way to retrieve an email list from a PFX user group?

A: Use the General API fetch method with object type UG.

fetch User Group by Id
priceFxClient.getGeneralApi().fetchByObjectTypeAndObjectId("UG", objectId, requestBody)

Exporting to CSV file with duplicating header column name

Q: As you can notice, there is a duplicated column name in the header. So my question is: how can I achieve something like this?

A: Write the header beforehand and then export only the values.

Export header
<setBody>
	<simple>sku;OutletID;CreateTime;UserName;ChangePriceType;DeletePreMaintainedPriceFlag;SalesPrice;Currency;CreateSystem;ScheduledTime;Status;ExecTime;ExecMessage;Subsidiary;Brand;CalculationUUID;PriceTypeCode;PriceTypeText;FinalPriceReason;PriceProposal;LastPrice;S-Flags;P-Flags\n</simple>
</setBody>
<setHeader headerName="CamelFileName">
	<simple>${property[CamelFileName]}</simple>
</setHeader>
<to uri="file://{{central-sales-price-export-work-directory}}${fileOutParameters}&amp;fileExist=Append" />

Adding CFS calculation while processing

Q: Calling calculate while the CFS is pending is ignored, but what happens when it is processing? Can we successfully add a calculation job while it is in the processing state?

A: While it is processing, the request will be put into the queue.

Creating price list from IM

Q: Is it possible to create a price list and upload lines to it from a file containing a CSV header and data? Is this possible from IM?

A: Create a JSON with the price list configuration. When you create a price list in PFX, it is the same process: you prepare a configuration, click Create PL, and it is done.

Fetching data with batchedMode=true

Q: I've played with the new pfx-api:fetch component with batchedMode=true. What should be the result of the fetch? I am getting a List of BatchingInterval objects but with no data.

A: 

<route>
    <from uri="timer://fetchDataFromDatamartByQuery?repeatCount=1"/>
    <to uri="pfx-api:fetch?objectType=DM&amp;dsUniqueName=1669.DMDS&amp;batchedMode=true&amp;batchSize=5000"/>
    <split>
        <simple>${body}</simple>
        <to uri="pfx-api:next"/>
        <log message="${body}"/>
    </split>
</route>

Web crawling - Metoda

Q: Did somebody hear about Methoda (web crawling partner)? Is it used somewhere?

A: The correct name is Metoda and its support is part of Pricefx.

Loading data to Data Source directly

Q: I'm checking the new PFX APIs in IM 1.1.7 and I found that we can use pfx-api:loaddata to load data directly into a Data Source (skipping the steps of loading data into a Data Feed and calling dmFlush). Do you think it is possible?

A: Yes. Delete the DF and the data are then loaded into the DS.

No events generated

Q: I have a listener for ITEM_UPDATE_PR and I wonder why no event is created when I do a mass update on Price Records. What is the right trigger, or what are the rules?

A: Mass edit/delete operations do not generate events. A regular update does generate an event. Alternatively, you can call integrate.

Starting LPG calculation 

Q: Is there any "official" API to start a LPG calculation from IM?

A: 

<bean id="calculateFinalPriceListLPG" class="net.pricefx.integration.command.pg.Calculate">
	<property name="priceGridId" value="simple:property[loadedPriceGridId]"/>
</bean>

Preventing values from being overridden by null values

Q: I have a requirement from the customer that null values must not override existing values. Are there any solutions for that?

A: No. You need to fetch the data from Pricefx and compare it. Or, if it concerns only one attribute, you can make a conditional integration request.

Working with IDOC format

Q: Do we have any project that works with SAP iDoc format (inbound data)?

A: iDocs are just XML files. You can search over Bitbucket and see how we dealt with them.

Working with JSON – jsonpath component

Q: Camel jsonpath is a good component for working with JSON. For example, I have this JSON: 

{
    "data": [
        {
            "version": 1,
            "typedId": "22818674.PGI",
            "sku": "121832",
            "label": "Strybal Scoop Tee SS Women",
            "resultPrice": 100.00000,
            "allowedOverrides": "",
            "calculatedResultPrice": 100.00000,
            "tainted": false,
            "priceGridId": 2595,
            "approvalState": "APPROVED",
            "activePrice": 100.00000,
            "manualEditVersion": 0,
            "manualPriceExpired": false,
            "submittedByName": "admin",
            "approvedByName": "admin",
            "createDate": "2018-08-03T13:47:30",
            "createdBy": 3139,
            "lastUpdateDate": "2018-08-03T16:01:11",
            "approvalDate": "2018-08-03T16:35:30",
            "activePriceDate": "2018-08-03T16:35:30",
            "completeResultsAvailable": true,
            "itemExtensions": {},
            "allCalculationResults": [
                {
                    "result": 100,
                    "resultName": "price"
                },
                {
                    "result": 111.00,
                    "resultName": "price_authorized"
                },
                {
                    "result": 120.0,
                    "resultName": "price_nonauthorized"
                }
            ]
        }
    ],
    "metricName": "PGI_Approved",
    "eventType": "ITEM_APPROVED_PGI"
}

I would like to get the result 111 from allCalculationResults where resultName = price_authorized.

A:

<setHeader headerName="authorizedPrice">
    <jsonpath>$.data[0].allCalculationResults[?(@.resultName == 'price_authorized')].result</jsonpath>
</setHeader>

Persistent message store in Camel

Q: Does anyone have experience with a persistent message store in Camel? I tried krati but had some issues with it.

A: We use MySQL and Oracle to store data on the Bosch and MediaSaturn projects.

Example with leveldb: https://svn.apache.org/repos/asf/camel/trunk/components/camel-leveldb/src/test/resources/org/apache/camel/component/leveldb/LevelDBSpringAggregateTest.xml (for aggregation only) 

Deleting approved contract

Q: Is it possible to delete an approved contract within PFX?

A: It is not possible. But Support can do that.

Using new components in IM 1.1.7

Q: I'm trying to use the new components (in IM 1.1.7 and higher), but get the error below:

Failed to resolve endpoint: pfx-csv://unmarshal?delimiter=%7C due to: No component found with scheme: pfx-csv

A: Add this dependency to your POM:

<dependency>
  <groupId>net.pricefx.integration</groupId>
  <artifactId>camel-pfx</artifactId>
</dependency>

Merging CSV columns into one PFX attribute

Q: How can I merge two columns of CSV data into one column of a PFX table? (For instance, merging a "date" column and a "time" column into one "datetime" column.)

A: You can do it in a mapper:

<pfx:groovy expression="body.Date + ' ' + body.Time" out="attribute1"/>

Calling LPG recalculate in IM

Q: How to call LPG recalculate?

A: Please check MS project, custom processor "TriggerCompetitionRecalcProcessor".

SAP iDOC support in IM

Q: Does IM support working with SAP's iDOC flat text format?

A: No, IM currently only supports iDOC in the XML format.

Using dsLoad with key predefined in PFX

Q: When using dsLoad, how can I force IM to use the keys defined in PFX instead of sending businessKey="sku,name"?

A: You can use the detectJoinFields attribute (supported since IM 1.1.9).
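A minimal sketch of a load URI with the attribute set (the mapper name and object type here are placeholders):

```xml
<to uri="pfx-api:loaddata?mapper=productMapper&amp;objectType=PX&amp;detectJoinFields=true"/>
```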

Using multiple PFX connections in single IM project

Q: Is it possible to use one inbound folder but data loaded into two partitions?

A: Yes, it's possible to have more than one PFX connection in an IM project. The connections are identified by ID.

Connections are set up in camel-context.xml.

PFX Connections in camel-context.xml
    <pfx:connection id="coesia" uri="${pfx-coesia.url}" partition="${pfx-coesia.partition}" username="${pfx-coesia.username}" password="${pfx-coesia.password}" debug="${pfx-coesia.debug:false}"/>
    <pfx:connection id="gdm" uri="${pfx-coesia-gdm.url}" partition="${pfx-coesia-gdm.partition}" username="${pfx-coesia-gdm.username}" password="${pfx-coesia-gdm.password}" debug="${pfx-coesia-gdm.debug:false}"/>
    <pfx:connection id="raj" uri="${pfx-coesia-raj.url}" partition="${pfx-coesia-raj.partition}" username="${pfx-coesia-raj.username}" password="${pfx-coesia-raj.password}" debug="${pfx-coesia-raj.debug:false}"/>
    <pfx:connection id="norden" uri="${pfx-coesia-norden.url}" partition="${pfx-coesia-norden.partition}" username="${pfx-coesia-norden.username}" password="${pfx-coesia-norden.password}" debug="${pfx-coesia-norden.debug:false}"/>

To use a particular connection, you have to set the partitionPfxApi header before calling PFX.

Route
    <route id="productMasterData">
      <from uri="{{coesia-products-fromUri}}"/>
      <setHeader headerName="partitionPfxApi">
        <constant>coesia</constant>
      </setHeader>
      <to uri="direct:ProductProcess" />  
    </route>

Refer to these projects for more details: https://bitbucket.org/pricefx/toys-r-us-integration/src/master/src/main/resources/camel-context.xml

or https://bitbucket.org/pricefx/coesia-integration/src/master/ .

Working with big ZIP file larger than 4GB

Q: I am using Camel's ZipFileDataFormat to extract .ZIP files from a customer. However, it shows the error below when parsing files bigger than 4 GB.

org.apache.camel.RuntimeCamelException: java.util.zip.ZipException: invalid entry size (expected 5629585467198288 but got 5587552134 bytes)
  at org.apache.camel.dataformat.zipfile.ZipIterator.getNextElement(ZipIterator.java:116)
  at org.apache.camel.dataformat.zipfile.ZipIterator.next(ZipIterator.java:85)
  at org.apache.camel.dataformat.zipfile.ZipIterator.next(ZipIterator.java:39)
  at org.apache.camel.processor.Splitter$SplitterIterable$1.next(Splitter.java:188)
  at org.apache.camel.processor.Splitter$SplitterIterable$1.next(Splitter.java:164)
  at org.apache.camel.processor.MulticastProcessor.doProcessSequential(MulticastProcessor.java:616)
  at org.apache.camel.processor.MulticastProcessor.process(MulticastProcessor.java:248)
  at org.apache.camel.processor.Splitter.process(Splitter.java:114) ...

A: That ZIP file must be corrupted. Possibly the customer is using a non-supported ZIP64 application to compress the data file. In this case, the output file can still be extracted by some applications, but will fail in "standard" applications such as WinZip or the Java JDK.

Samples of reading and writing data in Product Extension table

Q: Is there an example of reading from and writing to a Product Extension table?

A: See these pages:
https://pricefx.atlassian.net/wiki/spaces/INTG/pages/537952266/Fetch+Data+from+Price+f+x
https://pricefx.atlassian.net/wiki/spaces/INTG/pages/537854028/Parse+CSV+and+Load+Data+to+General+Data+Source

A library of e-books

Q: Do we have (or will we have) a shared library of e-books that we need in our daily work?

A: I'm not aware of any shared e-books. We only have a paper book, Camel in Action: one copy is in Prague and the second in Ostrava.

Defining Table Name and Keys for PX or CX Master Data

Defining the name of a table for Product or Customer Extensions (CX, PX) is not all that intuitive.

Set the name of the table in a loadMapper first.

loadMapper
    <!-- Product Extension Mapper -->
    <pfx:loadMapper convertEmptyStringToNull="true" id="ProductExtensionMasterDataMapper">
        <!-- set Product Extension table name -->
        <pfx:simple expression="ProductExtension1" out="name"/>
        <!-- attribute mapping -->
        <pfx:body in="DIM_ARTICLE_KEY"       out="sku"/>
        <pfx:body in="PALLET_SPEC_ID"        out="attribute1"/>
        <pfx:body in="ROLLS_PER_PACK"    ...

Do not forget to set detectJoinFields in the URI.

Key definition
<to uri="pfx-api:loaddata?mapper=ProductExtensionMasterDataMapper&amp;objectType=PX&amp;detectJoinFields=true"/>

Saving a flag into PFX Advanced configuration

Q: How can I store a flag or a value in the PFX Advanced Configuration? I would like to store, for example, the last updated time for a data feed.

A: There will be an API in XML. For now, you have to create a configurator bean and call its method. See the dana project.

Creating a REST endpoint to call your route

Q: I need to call a route that consumes a web service with special parameters,  e.g. for a retry or an initial data load.

A: Create a REST endpoint running on the localhost and implement a proxy route that consumes the endpoint and calls the target system. The implementation is used in the Cox project.

The sample code takes a JSON payload, sets headers and calls the target service.

Define the REST endpoint in the camel-context.xml file:

REST endpoint
        <dataFormats>
            <json id="gson" useList="true" library="Gson"/>
            <json id="json" useList="true" library="Jackson"/>
        </dataFormats>
        <!-- REST service for initial load of Price2Spy data. Works as a proxy for the Price2Spy. -->
        <!-- example:
            curl -X POST -H "Content-Type:application/json" http://localhost:42080/Price2Spy/price -d '{"dateChangeFrom": "2018-01-01 00:00:00","dateChangeTo":"2018-12-01 00:00:00" }'
        -->
        <restConfiguration bindingMode="json" component="jetty" port="42080" host="localhost"/>
        <rest consumes="application/json" produces="text/plain">
            <post uri="/Price2Spy/price">
                <route id="adhocPrice2SpyREST" >
                    <to uri="direct:adhocPrice2SpyCompetitorData"/>
                </route>
            </post>
        </rest>

Implement the proxy route:

The proxy route
        <!-- for loading data through REST bound to localhost:42080 -->
        <route id="adhocPrice2SpyCompetitorData">
            <from uri="direct:adhocPrice2SpyCompetitorData"/>
            <!--<log message="Got ${body}"/>-->
            <setHeader headerName="Price2SpyDateChangeTo">
                <jsonpath>$.dateChangeTo</jsonpath>
            </setHeader>
            <setHeader headerName="Price2SpyDateChangeFrom">
                <jsonpath>$.dateChangeFrom</jsonpath>
            </setHeader>
            <to uri="seda:updatePrice2SpyCompetitorData"/>
        </route>

The target route consumes the seda endpoint. Do not forget to override HTTP headers:

The target route
        <route id="updatePrice2SpyCompetitorData">
            <!--<from uri="quartz2://price2spyDataUpdateTimer?cron={{price2spy-pricing-data-cron}}&trigger.timeZone=America/Chicago&stateful=true" />-->
            <from uri="direct:updatePrice2SpyCompetitorData"/>
            <from uri="seda:updatePrice2SpyCompetitorData"/>
            <!-- setup headers for the REST call -->
            <setHeader headerName="CamelHttpMethod">
                <constant>POST</constant>
            </setHeader>
            <setHeader headerName="Content-Type">
                <constant>application/json</constant>
            </setHeader>
            <setHeader headerName="Accept">
                <constant>application/json</constant>
            </setHeader>
            <setHeader headerName="Authorization">
                <simple>{{price2spy-pricing-data-authToken}}</simple>
            </setHeader>
            <setHeader headerName="CamelHttpUri">
                <simple>https4://your-service-provider.....</simple>
            </setHeader>
            <setBody>
                <simple>{
                    "dateChangeFrom": "${header.Price2SpyDateChangeFrom}",
                    "dateChangeTo": "${header.Price2SpyDateChangeTo}"
                    }</simple>
            </setBody>
            <log message="Fetching data from ${header.Price2SpyDateChangeFrom} to ${header.Price2SpyDateChangeTo}." />
            <!--<log message="Sending body:\n${body}"/>-->
            <!--<process ref="debugProcessor"/>-->
            <to uri="https4://your-service-provider....."/>
            <!-- convert json to map -->
            <unmarshal ref="gson"/>

Changing the heap size for the IntegrationManager in a dedicated environment

You may find that a dedicated environment is very slow, and you get a very slow response from the PFX UI too. Check the operational memory allocation. You usually get 32 GB of RAM, and the PFX server alone has a heap size of 24 GB. Once you deploy more than one IntegrationManager on the server, you are in potential trouble: IM has a default heap allocation of 4 GB, and once both IMs reach their maximum heap size, the operating system starts swapping and everything slows down. The heap size of IM can be adjusted; create a Helpdesk ticket and specify the size. The size is controlled by the -Xmx parameter.

Checking the heap size of the IMs (both have 2 GB):

Heap Size
root@node1.irm-qa.pricefx.net ~ # ps ax|grep im-
 2700 ?        Sl    13:12 /usr/bin/java -Dsun.misc.URLClassPath.disableJarChecking=true -Xmx2G -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/pricefx/runtime/im-iron-mountain-qa -Xloggc:gc.log -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=5M -Dfile.encoding=UTF-8 -Dspring.profiles.active=iron-mountain_qa -jar /var/pricefx/runtime/im-iron-mountain-qa/im-iron-mountain-qa.jar
 3282 ?        Sl     0:58 /usr/bin/java -Dsun.misc.URLClassPath.disableJarChecking=true -Xmx2G -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/pricefx/runtime/im-iron-mountain-dev -Xloggc:gc.log -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=5M -Dfile.encoding=UTF-8 -Dspring.profiles.active=iron-mountain_dev -jar /var/pricefx/runtime/im-iron-mountain-dev/im-iron-mountain-dev.jar

Checking the amount of the RAM:

RAM
root@node1.irm-qa.pricefx.net ~ # free -h
              total        used        free      shared  buff/cache   available
Mem:            31G         30G        186M         10M        201M         93M
Swap:           15G        1.9G         14G

GUnzip processor for bigger files

The Camel GZip unmarshal works in memory only, so for larger files you end up with an out-of-memory exception. The following processor uses streams and creates a .done file when unzipping is done.

GUnzipProcessor.java
package net.pricefx.integration.processor.your_project;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.*;
import java.util.zip.GZIPInputStream;
/**
 * Processor for unzipping gzipped files using streams.
 *
 * @author rbiciste
 */
public final class GUnZipProcessor implements Processor {
    private static final Logger LOGGER = LoggerFactory.getLogger(GUnZipProcessor.class);
    /**
     * Unzips a gzipped file using streams. Implemented to avoid an out-of-memory
     * exception, as Camel's gzip unmarshal works in memory only.
     * After decompression it creates a .done file to signal the CSV routes.
     */
    @Override
    public void process(Exchange exchange) throws Exception {
        //Decompress the file to a temp folder
        File file = new File(exchange.getIn().getHeader("CamelFilePath", String.class));
        GZIPInputStream in = null;
        OutputStream out = null;
        File target = null;
        File doneFile = null;
        try {
            //Open the compressed file
            in = new GZIPInputStream(new FileInputStream(file));
            String targetFileName = file.getName().substring(0, file.getName().lastIndexOf('.'));
            LOGGER.debug("Unzipping to " + targetFileName);
            //Open the output file
            target = new File(file.getParent(), targetFileName);
            target.createNewFile();
            out = new FileOutputStream(target);
            //Transfer bytes from the compressed file to the output file
            //Buffer of 1 mb.
            byte[] buf = new byte[1048576];
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
            // close streams
            in.close();
            out.close();
            LOGGER.debug("Unzipping is done. Streams closed.");
            // create .done file for CSV routes
            doneFile = new File(file.getParent(), targetFileName + ".done");
            doneFile.createNewFile();
            LOGGER.debug(".done file created.");
        } catch (FileNotFoundException e) {
            LOGGER.error("File " + file.getName() + " not found");
            throw e;
        } catch (IOException e) {
            LOGGER.error("Gunzipping the file " + file.getName() + " failed." + e.getMessage());
            throw e;
        } finally {
            // close in
            if(in != null) {
                in.close();
            }
            // close out
            if(out != null) {
                out.close();
            }
        }
    }
}

How to use constants and reusable code snippets

I had a static map value that I used in a number of places.

  1. I created a class with a static method returning static members.

    Util Class

    package net.pricefx.integration.util.ironmountain;
    import java.util.HashMap;
    import java.util.Map;
    public class DataLoadOrchestrationUtils {
        /** returns Country Code from a file name like irm_bi_ca_rm_contract_details_20190217.csv
         *
          * @param fileName
         * @return
         */
        public static String getCountryCode(String fileName) {
            String[] parts = fileName.split("_");
            // country from 'irm_bi_ca_rm_contract_details_20190217.csv'
            return parts[2].toUpperCase();
        }
        /** returns Data Set from a file name like irm_bi_ca_rm_contract_details_20190217.csv
         *
         * @param fileName
         * @return
         */
        public static String getDataSet(String fileName) {
            String[] parts = fileName.split("_");
            return parts[parts.length-1].replace(".csv","");
        }
        /** returns Map of entities needed to run Calculations
         *
         * @return
         */
        static Map<String, String> entityMap;
        static {
            entityMap = new HashMap<>();
            entityMap.put("InvoiceRevenue", "attribute2");
            entityMap.put("ContractDetails","attribute3");
            entityMap.put("RateTable",      "attribute4");
            entityMap.put("BillCode",       "attribute5");
        }
        public static Map getEntityMap() {
            return entityMap;
        }
    }
    

     

  2. I used it in Groovy code wherever I needed it.

    Groovy snippets

    ...
                <setHeader headerName="country">
                    <groovy>
                        return net.pricefx.integration.util.ironmountain.DataLoadOrchestrationUtils.getCountryCode(headers.CamelFileNameOnly)
                    </groovy>
                </setHeader>
                <log message="Setting country to ${header[country]}" loggingLevel="INFO"/>
                <!-- Data Set Date -->
                <setHeader headerName="dataSet">
                    <groovy>
                        return net.pricefx.integration.util.ironmountain.DataLoadOrchestrationUtils.getDataSet(headers.CamelFileNameOnly)
                    </groovy>
                </setHeader>
    ...
                        <transform>
                            <groovy>
                                // map entities to attributes in PPV
                                def entities = net.pricefx.integration.util.ironmountain.DataLoadOrchestrationUtils.getEntityMap();
                                // if list is empty throw the exception
                                if (body.size()==0) {
                                    throw new RuntimeException("Received the flush event without data in Running state.")
                                } else { // update existing row
                                    body[0].put(entities.get(headers['entity']),'Done');
                                }
                                return body;
                            </groovy>
                        </transform>

Making a route to be a singleton

I need to update the status of a row in a PPV table from multiple routes. First I fetch the row and then make changes. Unfortunately, this leads to a concurrency conflict: data are overwritten because more than one instance of the update route exists. You can set the "direct:" <from> endpoint to be blocking.

http://camel.apache.org/direct.html

Route
        <!-- sets the entity status into the PPV row -->
        <route id="dataSetFileStatus">
            <!-- blocks for only one consumer, multiple calls raise concurrent update issues. -->
            <from uri="direct:dataSetFileStatus?block=true"/>

Formatting date when exporting a price list

When fetching a price list, the data come as Strings. Normally you would use a converter inside a mapper, but that does not work and actually throws an exception. You can resolve it in Groovy:

Date Formatting
<pfx:groovy expression="if (body.ServiceReviewDate) { Date.parse('yyyy-MM-dd', body.ServiceReviewDate).format('MM/dd/yyyy') }" out="ServiceReviewDate" />
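The same conversion can be sketched in plain Java (assuming ISO yyyy-MM-dd input; the class and method names are illustrative only):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateFormatExample {
    // Converts an ISO date string such as "2019-02-17" to "02/17/2019";
    // returns null for null/empty input, mirroring the Groovy guard above.
    static String toUsDate(String iso) {
        if (iso == null || iso.isEmpty()) {
            return null;
        }
        return LocalDate.parse(iso).format(DateTimeFormatter.ofPattern("MM/dd/yyyy"));
    }

    public static void main(String[] args) {
        System.out.println(toUsDate("2019-02-17")); // prints 02/17/2019
    }
}
```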

Getting output of a formula logic

Sometimes it is easier to implement a service inside Pricefx using a formula. The formula creates a map that is returned in a JSON form when the formula is called.

The logic element that emits the response has to have the Display Mode set to 'Everywhere'.
Request URI: https://irm-qa.pricefx.eu/pricefx/ironmtn-dev/formulamanager.executeformula/RadovanJsonTest
Parameters request payload:

{
    "data": {
        "estimatedVolume": 456,
        "industry": "IMIndustry",
        "numberOfMarkets": 8
    }
}

Response body:

{
    "response": {
        "node": "node1",
        "csrfToken": "1q06b29ijhxutvw7zje1j6i2l",
        "data": [
            {
                "resultName": "response",
                "resultLabel": "response",
                "result": {
                    "data": {
                        "params": [
                            {
                                "estimatedVolume": 456,
                                "industry": "IMIndustry",
                                "numberOfMarkets": 8
                            }
                        ]
                    }
                },
                "warnings": null,
                "alertMessage": null,
                "alertType": null,
                "displayOptions": 16,
                "formatType": null,
                "suffix": null,
                "resultType": "SIMPLE",
                "cssProperties": null,
                "userGroup": null,
                "resultGroup": null,
                "overrideValueOptions": null,
                "overrideAllowEmpty": true,
                "labelTranslations": null,
                "overridable": false,
                "overridden": false,
                "resultDescription": null
            }
        ],
        "status": 0
    }
}

 

Logic parameter access:

def input = api.local.input
api.local.estimatedVolume = api.stringUserEntry("estimatedVolume")
api.local.industry = api.stringUserEntry("industry")
api.local.numberOfMarkets = api.stringUserEntry("numberOfMarkets")
api.logInfo("---- estimatedVolume = " + api.stringUserEntry("estimatedVolume"))
api.logInfo("**** estimatedVolume = " + api.local.estimatedVolume)

Logic emitting the response:

api.local.response = [:]
def response = [:]
response.params = [] as List
api.local.estimatedVolume = api.stringUserEntry("estimatedVolume")
api.local.industry = api.stringUserEntry("industry")
api.local.numberOfMarkets = api.stringUserEntry("numberOfMarkets")
def params = [:]
params["estimatedVolume"] = api.local.estimatedVolume
params["industry"] = api.local.industry
params["numberOfMarkets"] = api.local.numberOfMarkets
response.params.add(params)
api.local.response["data"] = response
api.logInfo(api.jsonEncode(api.local.response))
return api.local.response

Custom file processing strategy driven by a signal file

A customer uploads files to a cloud-based SFTP. Once the upload is finished, it creates a done_yyyymmdd.ctl file. Only after that should we start downloading the files. As simple as it sounds, this is not easily implemented in Camel.

I implemented a custom file processing strategy based on the class GenericFileRenameProcessStrategy<T> from the Camel sources. You define a bean and then reference it in the file URI.

<bean id="sftpDoneCtlFileProcessingStrategy" class="net.pricefx.integration.component.file.strategy.ironmountain.SftpDoneCtlFileProcessingStrategy"/>
...
<from uri="sftp://{{sftp-inbound-fromUri}}&amp;processStrategy=#sftpDoneCtlFileProcessingStrategy"/>

Source code:

https://bitbucket.org/pricefx/iron-mountain-integration/src/master/src/main/java/net/pricefx/integration/component/file/strategy/ironmountain/SftpDoneCtlFileProcessingStrategy.java
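The core idea of the strategy can be sketched without the Camel APIs: a data file is eligible for pickup only once a matching done_yyyymmdd.ctl signal file exists on the SFTP. File naming here is illustrative; the real strategy extends Camel's GenericFileRenameProcessStrategy and hooks into the component's begin() callback:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SignalFileGate {

    // Returns the data files that may be processed: only those whose upload
    // batch has been sealed by a done_<date>.ctl control file.
    static List<String> eligible(List<String> remoteFiles) {
        Set<String> doneDates = remoteFiles.stream()
                .filter(f -> f.startsWith("done_") && f.endsWith(".ctl"))
                .map(f -> f.substring(5, f.length() - 4)) // strip "done_" and ".ctl"
                .collect(Collectors.toSet());
        return remoteFiles.stream()
                .filter(f -> !f.endsWith(".ctl"))
                .filter(f -> doneDates.stream().anyMatch(f::contains))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> listing = List.of(
                "prices_20200927.csv", "prices_20200928.csv", "done_20200927.ctl");
        // Only the 20200927 file has its signal file, so only it is eligible.
        System.out.println(eligible(listing));
    }
}
```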

Server health check warning

How to disable this warning?

02:14:30.969 | INFO  | http-nio-8089-exec-5 |  |  | o.s.c.c.c.ConfigServicePropertySourceLocator | Fetching config from server at: http://localhost:8888
02:14:30.971 | WARN  | http-nio-8089-exec-5 |  |  | o.s.c.c.c.ConfigServicePropertySourceLocator | Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/im-avery-dennison-anz-prod/averydanz_prod": Connection refused; nested exception is java.net.ConnectException: Connection refused

Set the following in the application.properties file (for core 1.1.11 or earlier): 

health.config.enabled=false

This will be disabled by default in IM 1.1.12.

How to fetch the mapping of attributes to labels for Customers, Products and Extensions

Got tired of typing attribute numbers and their labels from PriceBuilder tables (Customers, Products, Customer Extensions, Product Extensions)?

You can easily fetch them over the API.

Sample endpoint for the CX mapping:

https://delphi-qa.pricefx.eu/pricefx/delphi-dev/fetch/CXAM

Filter definition for a particular extension:

Filter
{
    "data": {
        "_constructor": "AdvancedCriteria",
        "criteria": [
            {
                "operator": "equals",
                "fieldName": "name",
                "value": "CustomerAddtionalAttr"
            }
        ]
    }
}

Type Codes can be found here.

Sample response:

Response
{
    "response": {
        "status": 0,
        "startRow": 0,
        "node": "node1",
        "csrfToken": "1t8kgyrh14ou7l8y9p3kqpa2q",
        "data": [
            {
                "version": 0,
                "typedId": "200.CXAM",
                "fieldName": "attribute1",
                "label": "INCOTERMS",
                "fieldType": 2,
                "requiredField": false,
                "readOnly": false,
                "name": "CustomerAddAttr",
                "createDate": "2019-08-19T22:05:05",
                "createdBy": 10,
                "lastUpdateDate": "2019-08-19T22:05:05",
                "lastUpdateBy": 10
            },

This sample shows how to get just the relevant lines from the response using awk. It can also be done with jq (https://stedolan.github.io/jq/).

awk '/fieldName/||/label/ { print $0 }' c.json | sed -n 'h;n;p;g;p'| sed 's/"label":/<pfx:body in =/;s/"fieldName":/ out =/'| sed '$!N;s/\n/ /'| sed 's/,//g'|sed 's/$/\/>/'
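If you prefer not to shell out, the mapper lines can also be generated in a few lines of Java once you have the label/fieldName pairs from the response. The exact attribute spacing of the generated `<pfx:body>` element is cosmetic; adjust it to your mapper's style:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapperLineGenerator {

    // Builds one <pfx:body> mapper line from a label -> fieldName pair
    // taken from the fetch response.
    static String toMapperLine(String label, String fieldName) {
        return "<pfx:body in=\"" + label + "\" out=\"" + fieldName + "\"/>";
    }

    public static void main(String[] args) {
        // Sample pairs as they appear in the fetch response above.
        Map<String, String> labelToField = new LinkedHashMap<>();
        labelToField.put("INCOTERMS", "attribute1");
        labelToField.forEach((label, field) ->
                System.out.println(toMapperLine(label, field)));
        // prints <pfx:body in="INCOTERMS" out="attribute1"/>
    }
}
```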

There is a set of Groovy scripts to support this functionality.

https://bitbucket.org/pricefx/im-interface-generator/src/master/

How to set old IM <1.1.17 to work with openJDK 11 and newer

It may happen that the new IM 1.1.17 is not released yet, but some IMs must already move to new servers with openJDK 11. In that case you need to run old IM versions on openJDK 11 or newer.

  1. Add two dependencies into pom.xml and change the version of the spring-boot-maven-plugin.
    Example commit is here: https://bitbucket.org/pricefx/flint-nw-integration/commits/3b51ab77d1ea2878e7fd749bd34e78c55b9b26ee

  2. Create a support ticket specifying that you want to change your IM to work with openjdk11 and newer.

  3. The change to be done is from this:

    if [ ${3} == "flintnw_prod" ]; then
           TARGET=int4.eu.pricefx.net
           deploy_im_standard ${1} ${2} ${3} ${4}
           exit 0
    fi

    to this:

    if [ ${3} == "flintnw_prod" ]; then
           TARGET=int4.eu.pricefx.net
           deploy_im_standard_openjdk11andnewer ${1} ${2} ${3} ${4}
           exit 0
    fi

How to disable IM auto-registration into PlatformManager

Since IntegrationManager 1.1.17, IM instances are auto-registered in PlatformManager: IM sends the IntegrationManagerInstanceStartup event to Kafka every time it starts up. This is enabled by default and can be disabled with the following configuration properties:

# disable kafka (it is recommended for DEV environment)
integration.event-driven.enabled=false
# disable autoregistration
integration.event-driven.auto-registration.enabled=false

How to process CUSTOM events in IM

Events in general are processed by eventType as in integration.events.event-to-route-mapping.ITEM_APPROVED_Q=direct:quoteApproved.

However, CUSTOM events have a different "name" (eventType) in EventAdmin than the eventType stored in the event's data. IM uses two routes to process events: the first downloads the event by its "name" and the second processes it (dispatches it to a route) using the eventType inside the event.

As these two eventTypes differ for CUSTOM events, you have to use this hack to process them (the first line downloads all CUSTOM events, the second processes your custom event):

integration.events.event-to-route-mapping.CUSTOM=hackForCUSTOMevents
integration.events.event-to-route-mapping.CUSTOM_OutBound_PP_ADD=direct:exportPriceList

Using a Header/Property in a Groovy loadMapper expression

<pfx:groovy expression="request.headers.nationalCode + body.sku" out="prod_id"/>
<pfx:groovy expression="exchange.properties.nationalCode + body.sku" out="prod_id"/>

Understanding Imports and Includes in WSDL Files

  • include – Only includes types from an XSD into the current namespace.

  • import – Imports a namespace. Note that xsd:import and wsdl:import behave differently.

It is well explained at https://www.ibm.com/developerworks/webservices/library/ws-tip-imports/ws-tip-imports-pdf.pdf.

Truncating Only Flushed Rows from Data Feed

Only flushed rows of a data feed can be truncated, by adding a dtoFilter to the truncate command. You have to filter the rows where the column formulaResult equals "OK".

Filter and truncate
<pfx:filter id="truncateFlushedFilter" resultFields="name">
    <pfx:and>
        <pfx:criterion fieldName="formulaResult" operator="equals" value="OK"/>
    </pfx:and>
</pfx:filter>
....
<to uri="pfx-api:truncate?targetName=DMF.SalesTransactions&amp;dtoFilter=truncateFlushedFilter"/>

PX Columns Character Size Limits

Each column has a limit of 255 characters. For a product extension with 50 attributes, the limit is 70 characters per column. If you enlarge a product extension from a lower number of attributes to 50 and some attribute values exceed 70 characters, a warning is displayed and the character strings are truncated.

For details see /wiki/spaces/UDEV/pages/1680834622

Event Types

For details see EventType Class

pfx-api:unmarshal does not propagate an exception outside of a split

When there is an error in the CSV format of the data, such as a quote inside quotes, the unmarshal component throws an exception but does not stop processing the file. It merely skips the current batch and continues. You have to add stopOnException to the split definition.

Split
            <!-- batching, add total number of rows in the header  -->
            <split streaming="true" strategyRef="recordsCountAggregation" stopOnException="true">
                <tokenize token="\n" group="5000"/>

Avoid Concurrent Updates on a Row

When I saved a processing status in a PP table, I ran into an issue because the row-updating route was called from multiple places. Under high load, it randomly caused the following exception:

net.pricefx.integration.api.NonRecoverableException: Error: There is probably a long running task which already updated or deleted data you are trying to manipulate. Please refresh your view. (Concurrent Data Modification)
at net.pricefx.integration.api.PriceFxExceptionTranslator.doRecoveryActions(PriceFxExceptionTranslator.java:119)
at net.pricefx.integration.api.client.LookuptableApi.integrate(LookuptableApi.java:501)

at net.pricefx.integration.command.ppv.Integrate.execute(Integrate.java:71)

I avoided the issue by letting the route process only one exchange at a time, using ThrottlingInflightRoutePolicy.

Bean Definition
    <!-- Process only one exchange at a time. Used for updating statuses -->
    <bean id="oneExchangeThrottlePolicy" class="org.apache.camel.impl.ThrottlingInflightRoutePolicy">
        <property name="maxInflightExchanges" value="1"/>
        <property name="scope" value="Route"/>
    </bean>

 

Route Definition
<route id="dataSetFileStatusUpdate" routePolicyRef="oneExchangeThrottlePolicy">

Log Snippet:
10:24:22.709 | INFO | Camel (camel-1) thread #13 - file:///home/customer/irm-emea-dev/filearea/inbound | dataSetFileStatusUpdate | ID-linux-wbx3-1601540648191-0-1 | o.a.c.i.ThrottlingInflightRoutePolicy | Throttling consumer: 2 > 1 inflight exchange by suspending consumer: Consumer[direct://dataSetFileStatusUpdate]
10:24:22.709 | INFO | Camel (camel-1) thread #13 - file:///home/customer/irm-emea-dev/filearea/inbound | dataSetFileStatusUpdate | ID-linux-wbx3-1601540648191-0-1 | o.a.c.i.ThrottlingInflightRoutePolicy | Throttling consumer: 1 <= 1 inflight exchange by resuming consumer: Consumer[direct://dataSetFileStatusUpdate]

onComplete behaviour when calling one route multiple times

An onCompletion block defined in a child route is executed at the end of the parent route, using the latest exchange. If the same child route is called multiple times from the parent route, the onCompletion block is not executed at the end of each child call but only after all calls have finished. See the comments in the code sample:

        <!-- Price Condition 006 -->
        <route id="exportPriceConditionA006Regions">
            <from uri="direct:exportPriceConditionsA006Regions"/>
                <!-- EMEA -->
                    <log message="Exporting EMEA Region"/>
                    <!-- header for a dynamic over Sales Orgs -->
                    <setHeader headerName="Country">
                        <constant>EMEA</constant>
                    </setHeader>
                    <setHeader headerName="sftpOutputFolder">
                        <constant>INF0070</constant>
                    </setHeader>
                    <to uri="direct-vm:fetchSalesOrg"/>
                    <to uri="direct:exportPriceConditionsA006"/>
                <!-- Russia -->
                    <log message="Exporting Russia Region"/>
                    <!-- header for a dynamic over Sales Orgs -->
                    <setHeader headerName="Country">
                        <constant>RU</constant>
                    </setHeader>
                    <setHeader headerName="sftpOutputFolder">
                        <constant>INF0068-1</constant>
                    </setHeader>
                    <to uri="direct-vm:fetchSalesOrg"/>
                    <to uri="direct:exportPriceConditionsA006"/>
                <!-- North America -->
                    <log message="Exporting North America Region"/>
                    <!-- header for a dynamic over Sales Orgs -->
                    <setHeader headerName="Country">
                        <constant>US</constant>
                    </setHeader>
                    <setHeader headerName="sftpOutputFolder">
                        <constant>INF0068</constant>
                    </setHeader>
                    <to uri="direct-vm:fetchSalesOrg"/>
                    <to uri="direct:exportPriceConditionsA006"/>
                <!-- all three onComplete will run now with headers etc. setup by the last call -->
        </route>
        <route id="exportPriceConditionA006">
            <!--      <from uri="timer://exportPriceConditionA004?repeatCount=1"/>-->
            <from uri="direct:exportPriceConditionsA006"/>
            <log loggingLevel="INFO" message="Sales Orgs: ${header.salesOrgsList}"/>
            <!-- set the export file name -->
            <setHeader headerName="CamelFileNameOnly">
                <simple>PC_A006_DT_${date:now:yyyyMMdd_HHmmssSSS}.csv</simple>
            </setHeader>
            <setHeader headerName="CamelFileName">
                <simple>{{sap-price-condition-export.folder}}/${header.CamelFileNameOnly}</simple>
            </setHeader>
            <log loggingLevel="INFO" message="Starting export of the price condition to ${header.CamelFileName}"/>
            <!-- fetches the data set row by its status -->
            <to uri="pfx-api:fetch?objectType=LTV&pricingParameterName=A006_PricingConditions&filter=priceConditionsA006Filter"/>
            <!-- save number of rows for journal -->
            <setHeader headerName="exportedRows">
                <simple>${header.totalRows}</simple>
            </setHeader>
            <setHeader headerName="csvHeader">
                <constant>Condition type,Sales Org.,Distr. Channel,Price List,Document Currency,Material,Amount,Unit,Condition Pricing Unit,UoM,Valid From,Valid To,Scale Quantity,Scale Amount</constant>
            </setHeader>
            <!-- group by column for Scale Quantity Processor -->
            <setProperty propertyName="scaleQuantityKeyName">
                <constant>Material</constant>
            </setProperty>
            <filter>
                <simple>${header.exportedRows} > 0</simple>
                <multicast parallelProcessing="false" stopOnException="true">
                    <to uri="direct:exportPriceConditionA006ToFile"/>
                    <to uri="direct:exportPriceConditionA006ToDS"/>
                    <to uri="direct:exportPriceConditionA006MarkExported"/>
                </multicast>
            </filter>
            <log loggingLevel="INFO" message="Processing completed. Exported ${header.exportedRows} rows."/>
            <onCompletion onCompleteOnly="true">
                <!-- only if we found rows to export -->
                <choice>
                    <when>
                        <simple>${header.exportedRows} > 0</simple>
                        <!-- journal the export -->
                        <wireTap uri="direct-vm:journalExportedFile"/>
                        <!-- transfer the file to the sftp -->
                        <to uri="direct:exportPriceConditionsToSFTP"/>
                    </when>
                    <otherwise>
                    </otherwise>
                </choice>
            </onCompletion>
        </route>

How to Copy File(s) with SCP from One Server to Another via Proxy Jump

Since 2FA SSH is in use, you can no longer easily copy a file from one server to another via SCP.

The way to do it: Copy from the remote server node1.customerA.pricefx.net (PROD) to the current server node1.customerA-qa.pricefx.net (QA).

root@node1.customerA-qa.pricefx.net # scp -P 666 -o "ProxyJump jan.kadlec@jmp.pricefx.eu -p 666" root@node1.customerA.pricefx.net:/home/customer/customerA_prod/filearea/lizeosopricesfull/LizeoSelloutPrices_AMN_20200927* /tmp/
LizeoSelloutPrices_AMN_20200927000000.csv                                                                                                  100%   59MB 102.7MB/s   00:00    
root@node1.customerA-qa.pricefx.net # 

Now you have the required file from the PROD server in the /tmp folder of the QA server.
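If you jump to the same server often, the inline ProxyJump option can be moved into an ssh_config entry instead. This is a sketch with the host names from the example above; adjust the user and ports to your setup:

```
# ~/.ssh/config on the QA server
Host jmp
    HostName jmp.pricefx.eu
    Port 666
    User jan.kadlec

Host node1.customerA.pricefx.net
    Port 666
    ProxyJump jmp
```

After that, a plain `scp root@node1.customerA.pricefx.net:/path/to/file /tmp/` goes through the jump host automatically.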

How to Remove IM from Server (with Refresh of Monitoring Tool)

root@server-a.pricefx.net ~ # salt-call grains.get im
local:
    ----------
    installed_ims:
        - im-customer1-prod
        - im-customer1-qa
        - im-customer2-qa
        - im-customer1-dev
        - im-customr2-prod
        - im-customer3-prod
    installed_pies:
    running_ims:
        - im-customer1-prod
        - im-customer1-qa
        - im-customer2-qa
        - im-customer1-dev
        - im-customr2-prod
        - im-customer3-prod
    running_pies:
root@server-a.pricefx.net ~ # systemctl stop im-customer3-prod
root@int1.us-vh.pricefx.net ~ # systemctl disable im-customer3-prod
im-customer3-prod.service is not a native service, redirecting to systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable im-customer3-prod
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_ADDRESS = "cs_CZ.UTF-8",
LC_NAME = "cs_CZ.UTF-8",
LC_MONETARY = "cs_CZ.UTF-8",
LC_PAPER = "cs_CZ.UTF-8",
LC_IDENTIFICATION = "cs_CZ.UTF-8",
LC_TELEPHONE = "cs_CZ.UTF-8",
LC_MEASUREMENT = "cs_CZ.UTF-8",
LC_TIME = "cs_CZ.UTF-8",
LC_NUMERIC = "cs_CZ.UTF-8",
LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_ADDRESS = "cs_CZ.UTF-8",
LC_NAME = "cs_CZ.UTF-8",
LC_MONETARY = "cs_CZ.UTF-8",
LC_PAPER = "cs_CZ.UTF-8",
LC_IDENTIFICATION = "cs_CZ.UTF-8",
LC_TELEPHONE = "cs_CZ.UTF-8",
LC_MEASUREMENT = "cs_CZ.UTF-8",
LC_TIME = "cs_CZ.UTF-8",
LC_NUMERIC = "cs_CZ.UTF-8",
LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
root@server-a.pricefx.net ~ # unlink /etc/init.d/im-customer3-prod
root@server-a.pricefx.net ~ # systemctl daemon-reload
root@server-a.pricefx.net ~ # systemctl reset-failed
root@server-a.pricefx.net ~ # salt-call saltutil.refresh_grains
local:
    True
root@server-a.pricefx.net ~ # salt-call grains.get im
local:
    ----------
    installed_ims:
        - im-customer1-prod
        - im-customer1-qa
        - im-customer2-qa
        - im-customer1-dev
        - im-customr2-prod
    installed_pies:
    running_ims:
        - im-customer1-prod
        - im-customer1-qa
        - im-customer2-qa
        - im-customer1-dev
        - im-customr2-prod
    running_pies:
root@server-a.pricefx.net ~ #
Finally, remove im-customer3-prod from configuration.py. (This step will be replaced by running "salt-call state.apply im.monitoring" with the next release of the monitoring tool.)

How to Send a Message to /dev/null

Sometimes you need to throw away a message to keep a flow going. This can be achieved by logging to a disabled log channel: Camel marks the message as consumed and moves on to the next one.

Empty log
                <choice>
                <when>
                    <simple>${header.exportedRows} > 0</simple>
                    <wireTap uri="direct-vm:journalExportedFile"/>
                    <!-- transfer the file to the sftp -->
                    <to uri="direct:exportCreditMemoToSFTP"/>
                </when>
                    <otherwise>
                        <to uri="log:dev.null?level=OFF"/>
                    </otherwise>
                </choice>

Correct setup for LOG and ELK in IM 1.3.x and newer

Many integrations may have a wrong setup. If you have migrated to Camel 3.5 (IM 1.3.x and newer), follow these steps:

  1. Remove the logback-spring.xml file from your IM project.
    IM has its own configuration file. The main.log file will be created in the new subfolder "logs" in the IM instance folder. Monitoring is available there. 

  2. Remove these properties from the property files; they are no longer needed.

    logging.file.name=main.log
    integration.logging.file=main.log

     

  3. Add these new properties to the property files. This is the ELK configuration used when a config server is not in place.

    integration.logstash.enabled=true
    integration.logstash.address=elkint.pricefx.eu:4560

How to set a Constructor Parameter for a Bean (e.g. a Converter)

Some values are not available as properties. For example, the pattern for DecimalToString can only be set as a constructor parameter.

  <bean id="decimalToString" class="net.pricefx.integration.mapper.converter.DecimalToString">
    <constructor-arg name="pattern" value="#.###############################################"/>
  </bean>
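Presumably the converter delegates to java.text.DecimalFormat (an assumption; check the converter's source). This is what such a pattern controls: '#' fraction digits are optional, so trailing zeros are dropped, and a long pattern forces plain rather than scientific notation:

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class DecimalPatternDemo {

    // Formats a decimal with the given pattern using ROOT symbols
    // ('.' as the decimal separator, independent of the server locale).
    static String fmt(String pattern, String value) {
        DecimalFormat df = new DecimalFormat(pattern,
                DecimalFormatSymbols.getInstance(Locale.ROOT));
        return df.format(new BigDecimal(value));
    }

    public static void main(String[] args) {
        System.out.println(fmt("#.####", "12.5000"));    // prints 12.5
        System.out.println(fmt("#.####", "1234.56789")); // prints 1234.5679
    }
}
```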

How to access Spring properties in Simple blocks

Here is an example of how to log a property set in a property file:

Property file:
bosch-rexroth.initial-load-master-data-batch-size=50000
Route:
<log message="Running batch number# ${exchangeProperty.CamelSplitIndex}, batch size ${properties:bosch-rexroth.initial-load-master-data-batch-size} for file ${header.CamelFileNameOnly}"/>

Manual Integration that uses event processing is filled with "perEventTypeEventRouteInputEventRoutePADATALOAD_COMPLETED" messages

Under Loggers, search for net.pricefx.integration.api.PriceFxExceptionTranslator and change the log level to WARN.

Provisioned instance fails with "no spring.config.import property has been defined"

If you had a provisioned AWS instance on version 3.7.0 or below, you might encounter this problem when you switch your project to a Custom Image build. It is caused by a missing dependency in pom.xml. Add this dependency to pom.xml to resolve the issue:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-bootstrap</artifactId>
  <version>4.0.4</version>
</dependency>

Explanation: We did not ship IM with the bootstrap dependency/setup because it was not necessary. Now that we have added the option to build a custom image, the build actually treats the project as a Maven project, so without bootstrap it cannot connect to the config server.
