How to Upload Data to Partition

Aim of this article

This article explains how to upload a CSV or XLSX data file (preferably in UTF-8, optionally zipped) to a selected partition.

Note: While a Data Upload is in progress, it is not possible to start a new one on the same partition; a message is displayed to inform you about this.

Required permissions

  • Partition Data Upload - Edit – Allows you to create a new Data Upload.

  • Partition Data Upload - Use – Allows you to provide files to existing Data Uploads.

If Data Upload is used for importing users into PlatformManager:

  • Partition Data Upload - import users

Prerequisites

You have prepared clean data to upload.
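If you generate the file programmatically, a minimal sketch along these lines can help produce a clean, UTF-8 encoded CSV and, optionally, a ZIP archive of it (the file and column names below are purely illustrative):

```python
import csv
import zipfile

# Illustrative product data; replace with your own export (column names are examples only).
rows = [
    {"sku": "P-1001", "label": "Steel bolt M6", "unitPrice": "0.12"},
    {"sku": "P-1002", "label": "Steel bolt M8", "unitPrice": "0.18"},
]

# Write a UTF-8 encoded CSV with a header row.
with open("products.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["sku", "label", "unitPrice"])
    writer.writeheader()
    writer.writerows(rows)

# Optionally compress the file; zipped CSV / XLSX files are accepted as well.
with zipfile.ZipFile("products.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("products.csv")
```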

 

Steps:


1. Click Add, provide a Data Upload name and select the entity to be updated.

See the list of supported entities; for some of the entities, you can create the corresponding table directly from here and also change the field types.


2. Select a file (zipped or unzipped CSV / XLSX) to upload. 

See the requirements for the file formats.



If your data file is prepared to exactly match the fields at the partition, you can use Quick Data Upload and thus skip the following steps.

You can also upload directly to an SFTP server, which triggers a Data Upload process for the given file. For details, see the SFTP User Management option in the Data Uploads table.
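As a rough sketch of the SFTP route, the following Python snippet (using the third-party paramiko library) uploads a file to an SFTP folder; the host, credentials and target directory are placeholders that you would replace with the values configured under SFTP User Management:

```python
import paramiko

# Placeholder connection details; use the values configured under SFTP User Management.
HOST = "sftp.example.com"
USERNAME = "my-sftp-user"
PASSWORD = "my-sftp-password"
REMOTE_DIR = "/upload"  # illustrative target folder

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USERNAME, password=PASSWORD)

sftp = client.open_sftp()
try:
    # Uploading the file to the watched folder triggers the Data Upload for it.
    sftp.put("products.zip", f"{REMOTE_DIR}/products.zip")
finally:
    sftp.close()
    client.close()
```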


3. A sample of your data is displayed and you can specify the following:

  • Parsing options – Determine which characters are used in your data file as the separator, quote and escape characters and the decimal separator. The most common separators can be selected from a drop-down menu. You can also define the date format here (these choices are illustrated in a sketch after this list).

  • Uploaded file contains header – Indicates whether your CSV / XLSX file has a header. If there is no header, generic header names “column1”, “column2”, etc. will be used.
     Uploading without a header is not supported when using the Quick Data Upload.

  • Upload options

    • Delete original data – Determines what happens to the existing data on the server for the given entity (illustrated in a second sketch after this list):

      • Never – No data is deleted. The process works as an UPSERT: it adds new records and updates the existing ones.

      • Before upload – The existing data is deleted before the upload and replaced by the uploaded data. This helps you prevent duplicate data on the server in case you cannot ensure a unique ID for each record.

      • After upload – After a successful import of a new file, all non-updated records of the given entity are deleted.
         If some row in the CSV / XLSX file contains an error, that record is not updated and is therefore deleted after the upload.

  • Upload Date & Time – Allows you to postpone the Data Upload so that it does not interfere with your daily operations and you do not have to wait until off-hours to perform an upload. A list of scheduled Data Uploads can be found in the Upload Queue.

  • Receive an email once the data upload is finished – If checked, you will be notified by email when the data upload is complete.
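The following sketch only illustrates how the parsing options affect the interpretation of a raw line; it mimics one possible choice of settings (separator ';', quote character '"', decimal comma, date format dd.mm.yyyy) in plain Python and is not the PlatformManager implementation:

```python
import csv
import io
from datetime import datetime

# A sample row using ';' as separator, '"' as quote character,
# ',' as decimal separator and dd.mm.yyyy as date format.
sample = 'P-1001;"Steel bolt; M6";0,12;31.01.2024\n'

reader = csv.reader(io.StringIO(sample), delimiter=";", quotechar='"')
sku, label, price_text, date_text = next(reader)

# The decimal separator and date format chosen in Parsing options
# determine how these raw strings are interpreted.
unit_price = float(price_text.replace(",", "."))
valid_from = datetime.strptime(date_text, "%d.%m.%Y").date()

print(sku, label, unit_price, valid_from)  # P-1001 Steel bolt; M6 0.12 2024-01-31
```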
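And this second sketch is a toy model (not the server-side implementation) of what the three Delete original data modes mean for existing records; the record keys and values are made up:

```python
# A toy illustration of the three "Delete original data" modes.
existing = {"P-1001": {"label": "Old bolt"}, "P-9999": {"label": "Obsolete item"}}
upload   = {"P-1001": {"label": "Steel bolt M6"}, "P-1002": {"label": "Steel bolt M8"}}

def apply_upload(existing, upload, mode):
    data = {} if mode == "before" else dict(existing)   # "Before upload": existing data is dropped first
    data.update(upload)                                  # UPSERT: insert new records, update existing ones
    if mode == "after":                                  # "After upload": drop non-updated records
        data = {key: row for key, row in data.items() if key in upload}
    return data

print(apply_upload(existing, upload, "never"))   # keeps P-9999 untouched
print(apply_upload(existing, upload, "before"))  # only the uploaded records remain
print(apply_upload(existing, upload, "after"))   # P-9999 removed after the upload
```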


4. Define Data Mapping – which fields from your CSV / XLSX file correspond to the fields at your Pricefx partition. You can also define data type conversions in this step.

You have the following options:

  • Import File Columns – Lists all fields found in the CSV / XLSX file and lets you select the corresponding output fields for them. You can also manipulate the data after you click the Convert link and open the Advanced Field Editor.

  • Pricefx Columns – Allows you to select the corresponding Pricefx field where the information from the CSV / XLSX file should be sent.

    • The output field data type is read directly from the partition. 

    • This output type determines what options are available to manipulate the data (e.g. if it is a number, you can write a formula using your input fields). For details see Advanced Field Editor.

Additional options:

  • You can manipulate the data in various ways after you click the Convert link. For details see Advanced Field Editor.

  • You can also add your own new field (using the button at the bottom of the page). 

  • You can decide how empty values should be passed on (see the sketch after this list). You have two options:

    • Send empty value as empty string ""

    • Send empty value as NULL

  • When you map multiple fields, the definition is stored as a list of fields to combine. When you send the file again, it is shown as the same multiple-fields definition.
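As a rough illustration of the difference between the two empty-value options (the payload shape below is simplified and is not the actual format used by PlatformManager):

```python
# Illustrative only: a simplified view of how an empty source value could be passed on.
row = {"sku": "P-1001", "label": ""}  # 'label' is empty in the source file

send_empty_as_null = True  # corresponds to "Send empty value as NULL"

payload = {
    field: (None if send_empty_as_null and value == "" else value)
    for field, value in row.items()
}

print(payload)  # {'sku': 'P-1001', 'label': None} -- or '' with the other option
```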

You can speed up the Data Mapping step using field aliases (alternative names). The source file fields are mapped to the Pricefx fields automatically if their name matches either the exact Pricefx field name or one of the aliases. The aliases are defined in a step via mappingAliases in mandatoryFields and they are not case sensitive.
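A minimal sketch of the alias-matching idea is shown below; the field names and aliases are invented examples, whereas in PlatformManager they come from mappingAliases in mandatoryFields:

```python
# Example Pricefx fields with illustrative aliases (alternative names).
pricefx_fields = {
    "sku": ["material", "productid"],
    "label": ["name", "description"],
}

def auto_map(source_columns):
    """Map source columns to Pricefx fields by exact name or alias, case-insensitively."""
    mapping = {}
    for column in source_columns:
        needle = column.lower()
        for field, aliases in pricefx_fields.items():
            if needle == field.lower() or needle in (a.lower() for a in aliases):
                mapping[column] = field
                break
    return mapping

print(auto_map(["Material", "Description", "unitPrice"]))
# {'Material': 'sku', 'Description': 'label'} -- 'unitPrice' stays unmapped
```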


5. In the next step you can review all of your Data Upload settings. Sample data are shown with conversions applied (if there were any). If needed, you can go back and change some of the settings.

PlatformManager also provides you with hints and warnings at this point.


6. After you confirm this step, the upload starts. After successful completion, you get a confirmation; if there were errors, you are notified as well.

If the Data Upload task takes a long time, you will be notified by email when it is completed. Later, you can check the results in the Data Upload history.

All running or waiting Data Uploads are also listed in the Upload Queue, where you can cancel or stop them if needed.


PlatformManager version 1.67.0