In IntegrationManager 2.x, the Groovy Joda framework is deprecated. To get the current date-time, use the Camel Simple language and specify the ISO date mask, as shown below:
```
${date:now:yyyy-MM-dd'T'HH:mm:ssZ}
```
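For instance, the expression can be used anywhere a Simple expression is accepted, such as when storing the timestamp in a header. The sketch below is illustrative only: the header name `ExportTimestamp` is a hypothetical choice, and depending on your Camel version the attribute is `headerName` (Camel 2.x) or `name` (Camel 3+).

```xml
<!-- illustrative sketch: store the current ISO date-time in a header;
     the header name "ExportTimestamp" is a hypothetical example -->
<setHeader headerName="ExportTimestamp">
    <simple>${date:now:yyyy-MM-dd'T'HH:mm:ssZ}</simple>
</setHeader>
```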
How to add a row number to an exported file
Sometimes you need to add a row number to an exported file.
First, save the properties PfxRecordIndex (an internal Pricefx Mapper property) and CamelSplitIndex in the first mapper. Typically, this mapper runs inside a split.
```xml
<pfx:loadMapper id="IndexPriceCalculationLogicSimulation_TestingMapper">
    <pfx:body in="Currency" out="TI_COPY_RECORDS_WAERS" />
    <pfx:body in="Quantity" out="TI_COPY_RECORDS_KPEIN" />
    <pfx:body in="UOM" out="COPY_RECORDS_KMEIN" />
    <pfx:property in="PfxRecordIndex" out="FifcoPfxRecordIndex" />
    <pfx:property in="CamelSplitIndex" out="FifcoCamelSplitIndex" />
</pfx:loadMapper>
```
Then you can calculate the row number in the second mapper. Set includeUnmappedProperties="true" so that all fields mapped by the first mapper are passed through.
```xml
<pfx:loadMapper id="AddRowNumberMapper" includeUnmappedProperties="true">
    <!-- Continuous 1-based row number across batches:
         batch index * batch size (10000, matching batchSize in the route below) + record index + 1.
         Adjust the multiplier if you change the batch size. -->
    <pfx:groovy expression="return body.FifcoCamelSplitIndex * 10000 + body.FifcoPfxRecordIndex + 1" out="TI_COPY_RECORDS_KPOSN" />
</pfx:loadMapper>
```
Then you need to call pfx-model:transform twice in the route, once for each mapper. That's all.
```xml
<toD uri="pfx-api:fetch?filter=PGIExportFilter&amp;objectType=PGI&amp;typedId=${exchangeProperty.lpgId}&amp;batchedMode=true&amp;batchSize=10000"/>
<split>
    <simple>${body}</simple>
    <toD uri="pfx-api:fetch?filter=PGIExportFilter&amp;objectType=PGI&amp;typedId=${exchangeProperty.lpgId}"/>
    <process ref="addPGIMetadataBasedFieldsProcessor"/>
    <to uri="pfx-model:transform?mapper=IndexPriceCalculationLogicSimulation_TestingMapper"/>
    <!-- add row number -->
    <to uri="pfx-model:transform?mapper=AddRowNumberMapper"/>
    <to uri="pfx-csv:marshal?delimiter=,&amp;header=SI_APPLICATION,SI_CONDITION_TABLE,SI_CONDITION_TYPE,SI_DATE_FROM,SI_DATE_TO,SI_ENQUEUE,SI_MAINTAIN_MODE,SI_NO_AUTHORITY_CHECK,SI_SELECTION_DATE,SI_USED_BY_IDOC,SI_OVERLAP_CONFIRMED,SI_USED_BY_RETAIL,SI_I_KOMK_KONDA,SI_I_KOMP_KPOSN,SI_I_KOMP_MATNR,SI_KEY_FIELDS_KONDA,SI_KEY_FIELDS_MATNR,TI_COPY_RECORDS_MANDT,TI_COPY_RECORDS_KPOSN,TI_COPY_RECORDS_KAPPL,TI_COPY_RECORDS_KSCHL,TI_COPY_RECORDS_KDATU,TI_COPY_RECORDS_KRECH,TI_COPY_RECORDS_KBETR,TI_COPY_RECORDS_WAERS,TI_COPY_RECORDS_KPEIN,COPY_RECORDS_KMEIN,COPY_RECORDS_KOUPD,TI_COPY_RECORDS_STFKZ,TI_COPY_RECORDS_UPDKZ,TI_COPY_RECS_IDOC_KZNEP"/>
    <log message="LPG ${exchangeProperty.lpgId} exporting batch # ${exchangeProperty.CamelSplitIndex} to ${header.CamelFileName}"/>
    <!-- the file name and folder are provided in the header -->
    <to uri="file://?fileExist=Append"/>
    <!-- to avoid an out-of-memory issue (the split holds all bodies until the end of the iteration), clear the body -->
    <setBody>
        <constant></constant>
    </setBody>
</split>
```
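The `file` endpoint in the route above takes its target from headers set earlier in the route. A hypothetical sketch of how that header could be populated before the split is shown below; the file naming pattern is an illustrative assumption, not part of the original route, and the `headerName` attribute applies to Camel 2.x (use `name` in Camel 3+).

```xml
<!-- hypothetical sketch: set CamelFileName once before the split,
     so every appended batch is written to the same file -->
<setHeader headerName="CamelFileName">
    <simple>pgi-export-${exchangeProperty.lpgId}.csv</simple>
</setHeader>
```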