Insert Salesforce Data

This article describes how to use the Single Dataloader to insert data into Salesforce. The data is inserted via a CSV file.

  1. Log in to your ARM account.

  2. Hover your mouse over the Dataloader module and select Dataloader.

The Dataloader screen is best viewed with the browser zoom set to 75% in Chrome or Firefox.

  1. Click Insert on the right side of the screen.

  1. Choose your Salesforce Org and your org Environment (Production or Development edition, Sandbox, or Pre-Release).

  2. The corresponding URL and your Username are automatically populated based on the above selection.

  3. Click Login and Fetch Objects to fetch all the objects from your Salesforce org.

  4. Select the object into which you wish to insert the data, for example, Account, Contact, or Lead. You can use the search function to search through your objects and the filter button to quickly filter your standard/custom objects.

  5. Click Next.

  6. On the next screen, you can import a file from your local directory. Click the Upload button and select the CSV file you wish to import.

  7. A notification pop-up will display the number of records that will be impacted. Click OK.

  8. The next step is to prepare your field mappings. Field mappings match columns in your CSV file to fields in your Salesforce org. When selecting a Salesforce field, use the "Quick Search" function to search through the available fields and the quick filter tabs to find your Required, Unmapped, ID, and Custom fields. (A conceptual sketch of this matching appears below.)
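To make the mapping step concrete, here is a minimal Python sketch of how a CSV header row could be matched against Salesforce field API names, in the spirit of Automap. The field list, file name, and matching rule are illustrative assumptions, not ARM's actual implementation.

```python
import csv

# Hypothetical field API names for the Account object (illustrative only).
ACCOUNT_FIELDS = {"Name", "Phone", "BillingCity", "Industry", "AnnualRevenue"}

def automap(csv_path: str, sf_fields: set[str]) -> dict[str, str]:
    """Map CSV column headers to Salesforce fields by case-insensitive name match."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        headers = next(csv.reader(f))  # first row of the CSV holds the column names
    lookup = {field.lower(): field for field in sf_fields}
    return {col: lookup[col.lower()] for col in headers if col.lower() in lookup}

# Example: a CSV with the headers Name,Phone,BillingCity,Website would map the first
# three columns automatically; Website would remain unmapped and need manual mapping.
# mapping = automap("accounts.csv", ACCOUNT_FIELDS)
```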

  1. You can automatically map the fields using Automap. It compares the destination Salesforce fields with the column headers in the uploaded CSV file, and whenever the two match, the mapping is selected automatically.

  1. The number of fields mapped out of the total number of fields is displayed below the Automap checkbox.

  2. Use the search option to look up a field by name from the long list to map it.

  3. Use the Filter dropdown to choose which fields to display:

    • All: Displays all fields, whether they have been mapped or not.

    • Mapped: Displays only the fields which have already been mapped.

    • Unmapped: Displays only the fields which haven't been mapped yet. After selecting the filter, the list updates automatically as you map or unmap each field.

  4. Make sure you have mapped all the required fields; otherwise, you can't move forward. Click Next.

     Lookup Reference Objects: If you need an object ID but don't have it, you can use the lookup function to retrieve it dynamically. For example, if you're importing Contacts with the Account name but not the Account ID, you can use the lookup feature to find it.

  5. On the Process Summary screen, you can:

    1. Give the process/job a name.

    2. Select the Category. Categories are used to classify and group processes with similar functionality. You can select an existing category or create a new one by clicking the + icon.

    3. View the main object.

    4. View the operation type (Insert).

    5. View the number of impacted records.

    6. Use Bulk API.

       About Bulk API: The Bulk API is based on REST principles and is optimized for inserting, updating, and deleting large data sets. You can use the Bulk API to process jobs in serial or parallel mode. Processing batches serially means running them one after another, while processing batches in parallel means running multiple batches simultaneously. When you run a Bulk API job, processing more batches in parallel gives that job a higher degree of parallelism and gives your overall run better data throughput. (A conceptual sketch of a Bulk API job request appears after step 11 below.)

  6. You can schedule your tasks to run regularly by choosing a Daily, Weekly, or On-demand schedule.

  7. Finally, click Save to save your task and run it later.

  8. Your task is listed at the top of the list on the Dataloader Summary screen.

  9. Click Run to start the Dataloader immediately, without waiting for the scheduled time.

  10. Set the configurations for the data loader process as described below:

    Configurations and their descriptions:

    • Use Bulk API (Batch API will be used if the option is not enabled): The Bulk API is based on REST principles and is optimized for inserting, updating, and deleting large data sets. You can use the Bulk API to process jobs in serial or parallel mode. Processing batches serially means running them one after another, while processing batches in parallel means running multiple batches simultaneously. When you run a Bulk API job, processing more batches in parallel gives that job a higher degree of parallelism and gives your overall run better data throughput. Note: When performing multiple insert operations into the same destination org while other jobs are still running, choosing Serial mode is recommended.

    • Batch Size: Whenever the Use Bulk API checkbox is left unchecked, the Batch API is used. The Salesforce Batch API is based on SOAP principles and is optimized for real-time client applications that update small numbers of records at a time. Although the SOAP API can also process large numbers of records, it becomes less practical when the data sets contain hundreds of thousands of records; in those cases, the Bulk API is the better option. The Batch API processes data in smaller batches than the Bulk API, resulting in higher API call usage per operation on large volumes of data.

    • Disable Workflow Rules: All the workflow rules of the Salesforce objects are deactivated, and the data is transferred from the source to the destination sandbox. Once the migration is complete, the workflow rules are reactivated.

    • Disable Validation Rules: Validation rules verify that the data a user enters in a record meets the specified criteria before the user can save the record. On selection, all the validation rules of the Salesforce objects are deactivated, and the data is transferred from the source to the destination sandbox. Once the migration is complete, the validation rules are reactivated.

    • Insert/Update with null values: Inserts or updates a record's field value as null in the destination org when the corresponding value is null in the source.

    • Use UTF-8 file encoding for file read and write operations: Uses UTF-8 as the internal representation of strings; text is transcoded from the local encoding to UTF-8 when data is written to or read from a file. Enable UTF-8 when your data contains only English alphabets and disable it when your data contains non-English alphabets. Salesforce recommends keeping UTF-8 enabled by default.

  11. Click Run.
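For readers curious what the serial vs. parallel choice means at the Salesforce API level, here is a minimal sketch of creating a Bulk API (1.0) insert job over REST with an explicit concurrencyMode. ARM issues the equivalent calls on your behalf; the instance URL, session ID, and API version below are placeholder assumptions.

```python
import requests

# Placeholder assumptions: your org's instance URL and a session ID obtained at login.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
SESSION_ID = "<session id>"

def create_bulk_insert_job(sobject: str, serial: bool = True) -> str:
    """Create a Salesforce Bulk API 1.0 insert job and return its job ID.

    concurrencyMode is where the serial vs. parallel behavior described above is set:
    "Serial" runs one batch at a time, while "Parallel" runs batches simultaneously
    for better throughput (at the cost of a higher chance of record-locking conflicts).
    """
    response = requests.post(
        f"{INSTANCE_URL}/services/async/58.0/job",
        headers={
            "X-SFDC-Session": SESSION_ID,
            "Content-Type": "application/json; charset=UTF-8",
        },
        json={
            "operation": "insert",
            "object": sobject,
            "contentType": "CSV",
            "concurrencyMode": "Serial" if serial else "Parallel",
        },
    )
    response.raise_for_status()
    return response.json()["id"]

# Example: create a serial insert job for Account records.
# job_id = create_bulk_insert_job("Account", serial=True)
```

The CSV rows would then be added to the job in batches of the configured batch size, and the job closed once all batches are submitted; ARM manages that lifecycle for you.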

  1. The Results of Last Run section shows the number of successful and failed records. The values in this field are updated dynamically while the job is still running. You can view the records or download them to your local system; the records are generated in CSV format (see the sketch after this list).

  2. The number of impacted records can be seen in the Records section. The value in this field is updated dynamically while the job is still running.
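As a small illustration of working with the downloaded result file, the sketch below tallies successful and failed rows. The column names (Success, Error) are assumptions modeled on typical Salesforce batch-result exports, not a guaranteed ARM file layout; adjust them to match the CSV ARM actually produces.

```python
import csv

def summarize_results(path: str) -> tuple[int, int]:
    """Count successful and failed rows in a downloaded result CSV.

    Assumes a "Success" column holding "true"/"false" and an "Error" column,
    as in typical Salesforce batch-result files (an assumption, not ARM's spec).
    """
    succeeded = failed = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("Success", "").strip().lower() == "true":
                succeeded += 1
            else:
                failed += 1
                print("Failed record:", row.get("Error", "unknown error"))
    return succeeded, failed

# Example: succeeded, failed = summarize_results("dataloader_results.csv")
```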

More Options

  1. Edit: Modifies or updates the process details.

  2. Abort: Aborts the process while it is still running.

  3. Schedule: Sets the schedule at which the process must run.

  4. Delete: Deletes the insert process.

  5. Log: Provides information about the execution of the insert task.

  6. VR/WFR: ARM lists all the validation/workflow rules that were set. The UI lists all the validation rules, and users must re-enable any disabled validation rules if required. For more info, refer to the article: Validation/Workflow Rules.

  7. Clone: Creates a copy (clone) of the insert process. The operation type and object name are displayed. Enter the Process Name in the field. The default Salesforce Org is automatically selected; to choose a different org, use the dropdown list. Select the Choose Different Data CSV File check box to upload a different CSV file. Finally, click Clone.
