
Updating Salesforce Data

The basic DataLoader provides a powerful yet easy-to-use application for updating data in Salesforce seamlessly.

Step-By-Step Guide:

  1. After logging in to the nCino application, navigate to the DataLoader to create the "Update" job.

  2. In the "Login and select object" section of the flow, log in to the orgs and select the required object.

  3. After selecting the required object, click Next to proceed to the Field Mapping section.

  4. At this step, upload the CSV file containing the data to update in Salesforce.

  5. Either click "Choose File" or drop the file onto the upload area to add the CSV.

  6. After adding the file, proceed to upload it to enable the necessary field mappings.

  7. Upon successful file upload, a success message is displayed.

  8. Click the "AutoMap" option to map the source fields to the destination fields.

  9. Click the icon to view the available options.

  10. Lookup fields show a "Look up" option in the "Salesforce field" column; this is useful for matching records by a field other than the record Id.

  11. Selecting the "Look via" option provides a drop-down for selecting the required field.

  12. For the selected fields, the matching behavior can be specified using the two available radio button options (a sketch of this matching logic appears after the final step list in this guide):

    1. Use first match in multiple results: If multiple records match the selected non-unique field (e.g., several accounts named GenePoint), the Data Loader selects the first match automatically.

    2. Mark record with an error if more than one match is found: If enabled, ARM Data Loader flags the row with an error stating multiple matches exist. You can resolve this by fixing the data in Salesforce or by manually providing the correct ID in the CSV and re-uploading.

  13. On successful field mapping, click "Next" to move to the "Schedule" section of the flow.

  14. After setting up the required schedule, click "Next" to move to the "Process Details" section of the flow.

  15. To assign the job to a group, select a group from the list or create a new group by clicking the "+" icon.

  16. For faster processing of the records, use the "BULK API" option.

  17. Click the "Save" button to save the job configuration. Upon saving, the flow will automatically redirect to the Job List page, where the newly created job will be displayed along with existing jobs.

  18. Click on the "Run" option to run the job.

  19. Clicking the "Run" button directs the flow to the Run Configuration screen, where the available automation options for the job can be reviewed and configured as needed.

  20. Select from the following configuration options for the data loader run:

Use Bulk API (Batch API will be used if the option is not enabled)

The Bulk API is based on REST principles and is optimized for inserting, updating, and deleting large data sets. You can use the Bulk API to process jobs in serial or parallel mode. Processing batches serially means running them one after another, while processing batches in parallel means running multiple batches simultaneously. The more batches a job processes in parallel, the higher its degree of parallelism and the better the overall data throughput of your run. Note: When performing multiple insert operations into the same destination org while other jobs are still running, choosing the Serial Mode is recommended. (A sketch of a Bulk API update appears after this list of options.)

Batch Size

Whenever the Bulk API checkbox is left unchecked, the Batch API is used. The Salesforce Batch API is based on SOAP principles and is optimized for real-time client applications that update small numbers of records at a time. Although the SOAP API can also process large numbers of records, it becomes less practical when the data sets contain hundreds of thousands of records; in those cases, the Bulk API is the better option. The Batch API processes data in smaller batches than the Bulk API, resulting in higher API call usage per operation on large volumes of data.

Disable workflow rules

When selected, all workflow rules on the Salesforce objects are deactivated while the data is transferred from the source to the destination sandbox. Once the migration is complete, the workflow rules are reactivated.

Disable Validation Rules

Validation rules verify that the data a user enters in a record meets the specified criteria before the user can save the record. On selection, all the validation rules of the Salesforce objects are deactivated, and the data is transferred from the source to the destination sandbox. Once the migration is complete, validation rules are reactivated.

Insert/Update with null values

When selected, field values that are null in the source org are written as null to the destination org during insert or update.

Use UTF-8 file encoding for file read and write operations

Use UTF-8 as the internal representation of strings. Text is transcoded from the local encoding to UTF-8 when data is written to or read from a file. Enable UTF-8 if your data contains only English characters, and disable it if your data contains non-English characters. Per Salesforce, UTF-8 should be enabled by default.
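
To make the Bulk API option concrete, below is a minimal sketch of what a Bulk API update looks like when performed directly against Salesforce's Bulk API 2.0. This is illustrative only, not the DataLoader's actual implementation; the instance URL, access token, record Ids, and field values are placeholders.

```python
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "00D...your_session_token"                # placeholder
API = f"{INSTANCE_URL}/services/data/v59.0/jobs/ingest"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Update CSVs are keyed by record Id; the remaining columns map to fields.
csv_data = (
    "Id,Phone\n"
    "001xx000003DGb1AAG,(555) 555-0100\n"
    "001xx000003DGb2AAG,(555) 555-0101\n"
)

# 1. Create an ingest job for an "update" operation on Account.
job = requests.post(
    API,
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"object": "Account", "operation": "update", "lineEnding": "LF"},
).json()

# 2. Upload the CSV data to the job.
requests.put(
    f"{API}/{job['id']}/batches",
    headers={**HEADERS, "Content-Type": "text/csv"},
    data=csv_data.encode("utf-8"),
)

# 3. Close the job; Salesforce then processes the batches asynchronously.
requests.patch(
    f"{API}/{job['id']}",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"state": "UploadComplete"},
)
```

The serial versus parallel distinction described above corresponds to the concurrencyMode setting of Bulk API 1.0 jobs; Bulk API 2.0, shown here, splits the uploaded data into batches automatically.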

  1. Click Run.

  2. A different CSV file can also be supplied using the "Choose Different Data CSV File" option.

  3. Click on the "Run" button to run the job.

  4. Once the job run completes, the results can be viewed under the "Success" and "Failure" counts of the "Results of Last Run" section.

  5. Click the magnifying glass icon under the respective "Success" and "Failure" sections to view the individual success and failure records (a programmatic way to retrieve these results is sketched after this list).

  6. Click the ellipsis icon to view the job options.

  7. Click the "Edit" option to edit the created job.

  8. Click "Schedule" to view the configured schedule and create additional schedules.
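
As referenced in step 12 of the field mapping flow, the two lookup radio buttons amount to a choice between taking the first of several matches and flagging ambiguity as an error. Below is a minimal, self-contained sketch of that decision logic; the record Ids and account names are made up, and this is not the DataLoader's actual code.

```python
from collections import defaultdict

# Hypothetical records queried from Salesforce ahead of time,
# indexed by the non-unique lookup field (here, Account Name).
name_to_ids = defaultdict(list)
for rec in [
    {"Id": "001xx000003DGb1AAG", "Name": "GenePoint"},
    {"Id": "001xx000003DGb2AAG", "Name": "GenePoint"},  # duplicate name
    {"Id": "001xx000003DGb3AAG", "Name": "Burlington Textiles"},
]:
    name_to_ids[rec["Name"]].append(rec["Id"])

def resolve(name: str, use_first_match: bool) -> str:
    """Resolve a non-unique lookup value to a single record Id."""
    ids = name_to_ids.get(name, [])
    if not ids:
        raise ValueError(f"no match found for {name!r}")
    if len(ids) > 1 and not use_first_match:
        # "Mark record with an error if more than one match is found"
        raise ValueError(f"{len(ids)} matches for {name!r}; provide the Id in the CSV")
    # "Use first match in multiple results"
    return ids[0]

print(resolve("Burlington Textiles", use_first_match=True))
print(resolve("GenePoint", use_first_match=True))  # first GenePoint wins
# resolve("GenePoint", use_first_match=False) would flag the row with an error
```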
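
To inspect the "Success" and "Failure" records outside the UI, Salesforce's Bulk API 2.0 exposes per-record outcomes as downloadable CSVs. A minimal sketch, assuming the job was run through Bulk API 2.0; the instance URL, access token, and job Id are placeholders.

```python
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "00D...your_session_token"                # placeholder
JOB_ID = "7507Q000009AbcQAE"                             # placeholder job Id

# Each endpoint returns a CSV: the original row plus sf__* result
# columns (e.g., sf__Id on successes, sf__Error on failures).
for outcome in ("successfulResults", "failedResults"):
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/v59.0/jobs/ingest/{JOB_ID}/{outcome}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    print(outcome, "->", resp.text.splitlines()[:3])  # header plus first rows
```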
