Upsert Salesforce Data
  • 10 Jun 2022
  • 4 Minutes to read

Upsert is a combination of update and insert. If a record in your file matches an existing record, the existing record is updated with the values in your file. If no match is found, the record is created as a new entity.
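Conceptually, an upsert keyed on an external ID can be sketched as follows. This is a hypothetical in-memory illustration of the match-then-update-or-insert logic, not ARM's or Salesforce's actual implementation; the field names are invented.

```python
def upsert(records, incoming, key):
    """Update records that match on `key`; insert the rest.

    `records` maps external-ID value -> record dict (the existing data);
    `incoming` is a list of record dicts, e.g. rows read from a CSV file.
    """
    for row in incoming:
        ext_id = row[key]
        if ext_id in records:
            records[ext_id].update(row)   # match found: update existing record
        else:
            records[ext_id] = dict(row)   # no match: insert as a new record
    return records

existing = {"A-001": {"ERP_Account_Number__c": "A-001", "Name": "Acme"}}
incoming = [
    {"ERP_Account_Number__c": "A-001", "Name": "Acme Corp"},  # will be updated
    {"ERP_Account_Number__c": "A-002", "Name": "Globex"},     # will be inserted
]
result = upsert(existing, incoming, "ERP_Account_Number__c")
```

Keying on an external ID (rather than the Salesforce ID) is what lets records originating in another system be matched reliably across repeated loads.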

The following steps describe how to use Single Dataloader to upsert data into Salesforce. The data is upserted via a CSV file.
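For illustration, a minimal upsert CSV for Accounts might look like the string below; the column names, including the `ERP_Account_Number__c` external-ID column, are hypothetical. The sketch parses it with Python's standard `csv` module the way a loader would.

```python
import csv
import io

# Hypothetical CSV contents for an Account upsert. The first header row names
# the columns; ERP_Account_Number__c acts as the external ID used for matching.
csv_text = """ERP_Account_Number__c,Name,Phone
A-001,Acme Corp,555-0100
A-002,Globex,555-0101
"""

# DictReader yields one dict per data row, keyed by the header names.
rows = list(csv.DictReader(io.StringIO(csv_text)))
```

When reading a real file, opening it with `encoding="utf-8"` matches the UTF-8 option described later in this article.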

  1. Log in to your ARM account.
  2. Hover your mouse over the DATALOADER module and select DATALOADER.
  3. Click UPSERT on the right side of the screen.
  4. Choose your Salesforce Org and your org environment (Production or Development Edition, Sandbox, or Pre-Release).
  5. The corresponding URL and your Username are automatically generated based on the above selection.
  6. Click LOGIN AND FETCH OBJECTS to fetch all the objects from your Salesforce Org.
  7. Select the object into which you wish to upsert the data. For example, Account, Contact, or Lead. You can use the search function to search through your objects and the filter button to quickly filter your standard/custom objects.
  8. Click NEXT.
  9. On the next screen, you can import a file from your local directory. Click the UPLOAD button to upload the CSV file you wish to import.
  10. The next step is to prepare your field mappings. Field mappings match columns in your CSV to fields in your Salesforce Org. 
  11. You can map the fields automatically using Automap. It compares the destination fields with the columns in the uploaded CSV file and, where the names match, selects the mapping automatically.
  12. Make sure you have mapped all the required fields. Otherwise, you can't move forward. Click NEXT.
  13. On the Process Summary screen, you can:
    1. Give the process/job a Name.
    2. Select the Category. Categories classify and group processes with similar functionality. You can either select an existing category or create a new one by clicking the + icon.
    3. Select the External ID field. An external ID field is simply a field that is a unique identifier for your record, other than the Salesforce ID, usually coming from an external system. For example, if you are importing accounts, you may have a field called ERP Account Number, which is the identifier for your Accounts within your ERP system.
    4. View the main object.
    5. View the operation type (upsert).
    6. Use Bulk API.
      About Bulk API
      The Bulk API is based on REST principles and is optimized for inserting, updating, and deleting large sets of data. You can use the Bulk API to process jobs in either serial or parallel mode. Processing batches serially means running them one after another; processing batches in parallel means running multiple batches simultaneously. Running more batches in parallel gives the job a higher degree of parallelism and better overall data throughput.
  14. You can schedule your tasks so they start running regularly. You can choose between Daily, Weekly, or On-demand schedules.
  15. Finally, click on SAVE to save your task and run it later.
  16. Your task is listed at the top of the list on the Dataloader Summary screen.
  17. Click Run to start the dataloader immediately, without waiting for the scheduled time.
  18. Select the configurations here:
    • Use Batch Size.
      About Batch Size
      Whenever the Bulk API checkbox is left unchecked, the Batch API is used.
      The Salesforce Batch API is based on SOAP principles and is optimized for real-time client applications that update small numbers of records at a time. Although the SOAP API can also process large numbers of records, it becomes less practical when data sets contain hundreds of thousands of records; in those cases, the Bulk API is the better option. The Batch API processes data in smaller batches than the Bulk API, resulting in higher API call usage per operation on large volumes of data.
    • Disable workflow rules.
      About Disable workflow rules
      The workflow rules of the selected Salesforce objects are deactivated while the data is transferred from the source to the destination. Once the operation is complete, the workflow rules are reactivated.
    • Disable validation rules.
      About Disable validation rules
      Validation rules verify that the data a user enters in a record meets the criteria you specify before the record can be saved. When this option is selected, all validation rules on the selected Salesforce objects are deactivated while the data is transferred. Once the operation is complete, the validation rules are reactivated.
    • Insert/update with null values
    • Use UTF-8 file encoding for file read and write operations.
  19. Click RUN.
  20. The number of successful and failed records is shown in the Results of Last Run section. You can view the records or download them to your local system; they are generated in CSV format.
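The automatic field mapping in step 11 can be sketched as a case-insensitive name match between CSV columns and destination fields. This is a hypothetical illustration of the idea, not ARM's actual Automap algorithm; the field names are invented.

```python
def automap(csv_columns, sf_fields):
    """Pair each CSV column with a destination field of the same name,
    ignoring case. Columns with no match are left for manual mapping."""
    fields_by_name = {f.lower(): f for f in sf_fields}
    mapping = {}
    for col in csv_columns:
        match = fields_by_name.get(col.lower())
        if match is not None:
            mapping[col] = match
    return mapping

mapping = automap(
    ["name", "phone", "Region"],          # columns found in the CSV header
    ["Name", "Phone", "BillingCity"],     # fields fetched from the org
)
# "Region" has no same-named field, so it would need to be mapped manually.
```

Any required destination field still unmapped after this pass must be mapped by hand before you can proceed, as step 12 notes.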
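The Bulk API vs. Batch API trade-off described above comes down to batch size: each batch costs one API call, so smaller batches mean more calls for the same volume of data. A rough sketch, with illustrative batch sizes (actual Salesforce limits vary by API and configuration):

```python
import math

def batch_count(total_records, batch_size):
    """Number of batches (and hence API calls) needed for a load."""
    return math.ceil(total_records / batch_size)

records = 100_000
# Batch (SOAP) API: small batches, many calls.
batch_api_calls = batch_count(records, 200)
# Bulk API: large batches, far fewer calls.
bulk_api_calls = batch_count(records, 10_000)
```

This is why the article recommends the Bulk API for data sets in the hundreds of thousands of records: the same job consumes far fewer API calls and can run batches in parallel.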

More Options

  1. Edit: Modifies or updates the process details. 
  2. Abort: Aborts the process while it is still running.
  3. Schedule: Sets the schedule at which the process must run.
  4. Delete: Deletes the upsert process.
  5. Clone: Creates a copy (clone) of the upsert process.
  6. Log: Provides information about the execution of the task.
  7. VR/WFR: ARM lists all the validation/workflow rules that were set. The UI lists all the validation rules, and users have to re-enable any disabled validation rules (if required). For more information, refer to the article Validation/Workflow Rules. Sample VR/WFR attached: Validation Rules, Workflow Rules.
