Inserting Salesforce Data
The "Basic DataLoader" feature offers a user-friendly and efficient way to insert data into Salesforce using a CSV file. It simplifies the data load process while maintaining accuracy and speed, making it ideal for quick and straightforward data uploads.
Step-By-Step Guide:
Log in to the nCino application and navigate to the Basic DataLoader application.
Click the "Create" button and select the "Insert" option to initiate the creation of a "Basic DL" job.

Clicking on "Insert" will navigate the flow to the "Login and select object" section.

Click "Login" to fetch the object details.


Choose an object and move to the next screen to select the CSV file containing the data to insert.

After selecting the file, click the "Upload File" option to upload it into the system.

On successful upload of the file, a success message is displayed.

The application reads the header row of the uploaded file and populates the same information in the section below on the screen.
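For example, a minimal insert file might look like the following, where the header row supplies the field names shown on screen (the Account fields used here are illustrative):

```csv
Name,Industry,Phone
Acme Corporation,Manufacturing,555-0100
Globex Inc,Technology,555-0199
```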

An "Automap" option will appear as the field details get populated on the screen. Enabling this option maps the fields from the source against the fields on the destination ORG.

Specify the criteria for matching fields to fetch records from the source system. This setting determines how the data should be identified and retrieved during processing.
Click on the drop-down beside the object name and select the required field from the list.

Observe the icon beside the "Automap" option; clicking it provides the following options:
All: Displays all of the fields available for mapping.
Mapped: Displays the fields mapped through the "Automap" option.
Unmapped: Displays the fields that remain unmapped after enabling the "Automap" option.
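Conceptually, automapping pairs each source column with a destination field of the same name, and the All/Mapped/Unmapped filters slice that result. The following Python sketch illustrates the general idea under that assumption; it is not nCino's actual matching logic, and the field names are made up:

```python
# Naive name-based automap: pair each source CSV header with a destination
# field whose API name matches case-insensitively. This illustrates the
# general technique only; nCino's actual matching logic may differ.
csv_headers = ["Name", "industry", "Phone"]
destination_fields = ["Name", "Industry", "Phone", "Website"]

by_lower = {field.lower(): field for field in destination_fields}
mapping = {h: by_lower[h.lower()] for h in csv_headers if h.lower() in by_lower}

print(mapping)    # "Mapped" view: {'Name': 'Name', 'industry': 'Industry', 'Phone': 'Phone'}
unmapped = [f for f in destination_fields if f not in mapping.values()]
print(unmapped)   # "Unmapped" view: ['Website']
```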

Click "Next" to continue to the "Schedule" section of the flow.
Observe the various scheduling options available under the "Schedule" section.



On completing the scheduling, click on "Next" to continue to "Process Details".

Input a name for the job and click on "Save" to save the job.

Use Bulk API
The Bulk API, built on REST principles, is designed for high-volume operations such as inserting, updating, or deleting large data sets. It supports both serial and parallel processing modes:
Serial Mode: Processes batches one after another in sequence.
Parallel Mode: Processes multiple batches simultaneously to increase throughput.
Enabling this option allows the system to execute the job more efficiently by leveraging concurrent batch processing where possible, thereby improving overall performance for large-scale data operations.
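For context, the following is a minimal sketch of what such a job looks like at the Salesforce Bulk API (1.0) level, where the serial/parallel choice is the job's concurrencyMode. The Basic DataLoader issues these calls on your behalf; the instance URL, session ID, and record values below are placeholder assumptions:

```python
import requests

# Placeholder values -- a real session ID and instance URL would come from
# an OAuth or SOAP login; these are assumptions for the sketch.
INSTANCE = "https://yourInstance.my.salesforce.com"
SESSION_ID = "<session id>"
API_VERSION = "59.0"

HEADERS = {"X-SFDC-Session": SESSION_ID, "Content-Type": "application/json"}

# 1. Create the job. concurrencyMode is where serial vs. parallel is chosen:
#    "Serial" processes batches one after another, "Parallel" runs them
#    concurrently for higher throughput.
job = requests.post(
    f"{INSTANCE}/services/async/{API_VERSION}/job",
    headers=HEADERS,
    json={
        "operation": "insert",
        "object": "Account",
        "contentType": "CSV",
        "concurrencyMode": "Parallel",  # use "Serial" for overlapping jobs
    },
).json()

# 2. Add a batch of CSV data to the job.
csv_data = "Name,Industry\nAcme Corp,Manufacturing\nGlobex,Technology\n"
requests.post(
    f"{INSTANCE}/services/async/{API_VERSION}/job/{job['id']}/batch",
    headers={"X-SFDC-Session": SESSION_ID, "Content-Type": "text/csv"},
    data=csv_data.encode("utf-8"),
)

# 3. Close the job so Salesforce starts processing the queued batches.
requests.post(
    f"{INSTANCE}/services/async/{API_VERSION}/job/{job['id']}",
    headers=HEADERS,
    json={"state": "Closed"},
)
```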
On saving the job, the flow will navigate to the job list page.
Observe and click the "Run" button to initiate the job run.

Observe the "Automations" available on the "Run Configuration" screen.

Select from the following criteria to configure how the data loader process continues:
Use Bulk API (Batch API will be used if the option is not enabled)
The Bulk API is based on REST principles and is optimized for inserting, updating, and deleting large data sets. You can use the Bulk API to process jobs in serial or parallel mode. Processing batches serially means running them one after another, while processing batches in parallel means running multiple batches simultaneously. When you run a Bulk API job, processing more batches in parallel gives that job a higher degree of parallelism and your overall run better data throughput. Note: When performing multiple insert operations into the same destination org while ongoing jobs are still running, Serial mode is recommended.
Batch Size
Whenever the Bulk API checkbox is left unchecked, the Batch API is used. The Salesforce Batch API is based on SOAP principles and is optimized for real-time client applications that update small numbers of records at a time. Although the SOAP API can also process large numbers of records, it becomes less practical when the data sets contain hundreds of thousands of records; in those cases, the Bulk API is the better option. The Batch API processes data in smaller batches than the Bulk API, resulting in higher API call usage per operation on large volumes of data.
Disable workflow rules
When selected, all workflow rules on the Salesforce objects are deactivated while the data is transferred from the source to the destination sandbox. Once the migration is complete, the workflow rules are reactivated.
Disable Validation Rules
Validation rules verify that the data a user enters in a record meets the specified criteria before the user can save the record. On selection, all the validation rules of the Salesforce objects are deactivated, and the data is transferred from the source to the destination sandbox. Once the migration is complete, validation rules are reactivated.
Insert/Update with null values.
When selected, field values that are null in the source org are inserted or updated as null in the destination org.
Use UTF-8 file encoding for file read and write operations
Use UTF-8 as the internal representation of strings; text is transcoded from the local encoding to UTF-8 when data is written to or read from a file. Enable UTF-8 when your data contains only English characters, and disable it when your data contains non-English characters stored in a local (non-UTF-8) encoding. Per Salesforce, UTF-8 should be enabled by default.
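As an illustration of the transcoding described above, the following Python sketch writes and reads a CSV with an explicit UTF-8 encoding; the file name and field values are made up:

```python
import csv

# The file on disk is written and read with an explicit encoding, while
# Python strings are Unicode internally -- the same separation the
# UTF-8 option controls.
rows = [{"Name": "Müller GmbH", "Industry": "Manufacturing"}]

# Write the CSV with UTF-8 encoding (the option enabled).
with open("accounts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Name", "Industry"])
    writer.writeheader()
    writer.writerows(rows)

# Reading the file back with a different encoding (for example cp1252)
# would corrupt the non-English characters, which is why the read and
# write encodings must agree.
with open("accounts.csv", newline="", encoding="utf-8") as f:
    print(list(csv.DictReader(f)))
```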
Click "Run" to initiate the job run.
Observe the in-progress status and the abort option available during the job run.

Click on the ellipsis icon to view the options available for the job.
Click on "Schedule" to observe the set schedule options.
Click on the "Delete" option to delete the job.


Click on the "Clone" to clone the job.


On completion of the job run, observe the success and failure records.


Click on the "Log" to view the details of the job.

