Data Loader Pro
Data Loader Pro is an advanced ARM feature for transferring data from a source sandbox to a destination sandbox; it handles parent-child relationships automatically. Use Data Loader Pro to easily migrate Salesforce data across an object hierarchy that spans more than one object.
Before running Data Loader Pro on a set of objects for the first time, ensure you have performed Data Loader Configuration between the same orgs for all objects included in your Data Loader Pro job. This is a one-time operation.
Data Loader plays an essential role in migrating data from a source sandbox to a destination sandbox. However, this migration process always carries the risk of creating duplicate records. To avoid this, ARM synchronizes records between the orgs using the ARM external ID field, AutorabitExtid__c.
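Conceptually, syncing on an external ID amounts to an upsert keyed on that field rather than on the Salesforce record ID. A minimal sketch using the simple_salesforce Python library (the credentials and record values are placeholders; ARM performs this internally):

```python
from simple_salesforce import Salesforce

# Connect to the destination sandbox (placeholder credentials).
sf = Salesforce(username="user@example.com.dest", password="...",
                security_token="...", domain="test")

# Upsert keyed on the ARM external ID field: if a Contact with this
# AutorabitExtid__c already exists it is updated, otherwise it is
# created -- so re-running the job cannot create duplicates.
sf.Contact.upsert("AutorabitExtid__c/EXT-0001",
                  {"LastName": "Smith", "Email": "smith@example.com"})
```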
Log in to your ARM account.
Hover your mouse over the Data Loader module and select Data Loader Pro.
Click on Create New Job.
On the next screen, choose the Source Org and the Destination Org; the selected sandboxes' details are populated automatically. Click Login and Fetch Objects.
Next, select the Master Object.
View the relationship between child objects/ancestor objects and the master object in the Schema (Grid View) section.
For each object displayed, users can view the list of fields related to the corresponding object.
You can extract records within a specified limit by specifying criteria in the Filters section. You can also filter the details by Date or Date Literals. A date literal is a fixed expression representing a relative range of time, such as last month, this week, or next year. (Refer here for the list of date literals supported.)
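For illustration, here are a few sample SOQL filters built from standard Salesforce date literals (the field names are placeholders):

```python
# Each literal expands to a relative date range evaluated at run time.
filters = [
    "CreatedDate = LAST_MONTH",           # the entire previous calendar month
    "CloseDate = THIS_WEEK",              # the current calendar week
    "CreatedDate = NEXT_YEAR",            # the next calendar year
    "LastModifiedDate = LAST_N_DAYS:30",  # a rolling 30-day window
]
query = f"SELECT Id, Name FROM Account WHERE {filters[0]}"
```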
Upload a CSV file (if required) when there is a large amount of data and the records need to be filtered against it. The maximum supported file size is 10 MB. Once the CSV is uploaded, click Auto-Populate. The filters are auto-populated for the selected field and operator based on the values of the chosen field in the uploaded CSV file.
Format for CSV file to filter records:
ARM Data Loader Pro accepts CSV (comma-separated values) files. Use a spreadsheet program such as Microsoft Excel to create your CSV file.
Ensure you have a column header and rows of data populated for all system-required fields, such as Account Name or Contact Last Name.
There can be only one column header.
For more information, see Preparing the CSV file for Data Loader.
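To illustrate what Auto-Populate does with such a file, here is a rough Python sketch that reads one column from a single-header-row CSV and builds an IN filter from its values (the file and field names are hypothetical):

```python
import csv

def build_in_filter(csv_path: str, field: str) -> str:
    """Build a SOQL-style IN filter from one CSV column."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)  # the first (and only) header row
        values = {row[field] for row in reader if row.get(field)}
    quoted = ", ".join(f"'{v}'" for v in sorted(values))
    return f"{field} IN ({quoted})"

# e.g. build_in_filter("accounts.csv", "Name")
#  -> "Name IN ('Acme', 'Globex')"
```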
Enter a value in the Record Count Limit box to limit the number of records extracted from the source.
Click Validate to fetch the number of records to be transferred from the source sandbox to the destination sandbox. Finally, click Apply.
Skip Records can be enabled by entering 0 in the Record Count field under the filters or by selecting the checkbox under the Skip Records section.
Skip Records omits an object from migration to the destination. To enable it via the filters, click Filters.
Enter 0 in the Record Count field of the Apply Filters pop-up and click Validate to validate the query.
Once validation completes, click Apply to apply the inputs.
Alternatively, select the Skip Records checkbox to exclude the object's records from migration to the destination.
Upon selecting the checkbox, a pop-up asks for confirmation.
Upon confirming, the checkbox is selected, those records are skipped during migration, and a notification shows the filters applied.
Click the Filters section of the object on which Skip Records is applied to view the query builder and the Record Count.
When Skip Records is selected, the record count is set to 0 and the records of the selected object are not migrated to the destination.
Unchecking the checkbox disables Skip Records on the object; a message asks for confirmation.
Upon clicking Confirm, the records from that object are migrated to the destination again, and a notification is displayed when the checkbox is unchecked.
When Skip Records is disabled, the record count that was set to 0 and the query in the filters are reset to blank, and the records are migrated to the destination.
Map the object fields between the source and destination sandboxes.
Using the Automap feature, you can map the fields automatically by matching the fetched source object fields with destination fields, as sketched below. To set up manual mappings, automapping needs to be disabled: click Clear Mappings to remove the automapping and set up the desired manual mappings.
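Conceptually, automapping pairs source and destination fields whose API names match. A minimal sketch of that idea (ARM's actual matching logic may differ):

```python
def automap(source_fields: list[str], destination_fields: list[str]) -> dict[str, str]:
    """Pair fields whose API names match, ignoring case."""
    by_name = {f.lower(): f for f in destination_fields}
    return {s: by_name[s.lower()] for s in source_fields if s.lower() in by_name}

# automap(["FirstName", "Email__c"], ["firstname", "Email__c", "Phone"])
#  -> {"FirstName": "firstname", "Email__c": "Email__c"}
```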
In this section, you can use an external ID instead of a related record's Salesforce record ID to relate or associate records to each other as you process the Data Loader Pro operation. For example, if Object B has a lookup field to another Object A, you can use the values in a field marked as an External ID on Object A to relate the two (Object B records to Object A records).
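In the Salesforce REST API, this kind of association is expressed by nesting the parent's external ID in the child payload. A hedged sketch with simple_salesforce (the field and ID values are placeholders):

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com.dest", password="...",
                security_token="...", domain="test")

# Upsert a Contact (Object B) and relate it to its Account (Object A)
# by the Account's external ID instead of its record ID.
sf.Contact.upsert("AutorabitExtid__c/C-0042", {
    "LastName": "Nguyen",
    # "Account" is the lookup's relationship name; Salesforce resolves
    # the parent Account whose AutorabitExtid__c equals "A-0007".
    "Account": {"AutorabitExtid__c": "A-0007"},
})
```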
In the Source field, select the source field whose values will be populated into the destination external ID field. In the Destination field, select the destination field whose values will remain unique across all records.
Important Note: ARM does not support the automatic creation of an ExternalUniqueID. The user has to create this field manually in both the Source Org and the Destination Org.
In this section, fill in the process details listed below:
Enter a Name for the job.
Select the category in the Job Group field. This is important if you'd like to group related jobs into a single category. You can also create a new group and assign your job to it.
Master Object and External ID are auto-populated.
Enter the User Name Suffix for the Source Org and the Destination Org. Below are examples of suffixes for four common cases (a code sketch of how a suffix maps usernames follows the tables):
| Case 1 | User Name Suffix |
| --- | --- |
| Source | src |
| Destination | dest |

| Case 2 | User Name Suffix |
| --- | --- |
| Source | qan.com |
| Destination | aws.com |

| Case 3 | User Name Suffix |
| --- | --- |
| Source | Empty (leave it blank) |
| Destination | dest |

| Case 4 | User Name Suffix |
| --- | --- |
| Source | src |
| Destination | Empty (leave it blank) |
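To make the suffix idea concrete, a small sketch of how a source username might map to its destination counterpart (the helper and usernames are hypothetical):

```python
def map_username(username: str, src_suffix: str, dest_suffix: str) -> str:
    """Swap the trailing username suffix between orgs."""
    if src_suffix and username.endswith("." + src_suffix):
        username = username[: -(len(src_suffix) + 1)]  # strip ".src"
    return f"{username}.{dest_suffix}" if dest_suffix else username

# Case 1: map_username("jane@acme.com.src", "src", "dest")
#  -> "jane@acme.com.dest"
# Case 4: map_username("jane@acme.com.src", "src", "")
#  -> "jane@acme.com"
```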
Additionally, users can ignore certain records related to community users by selecting the Ignore Community Users checkbox.
Data masking refers to changing certain data elements within a data store so that the structure remains similar while the information itself is altered. It ensures sensitive customer information remains unavailable beyond the permitted production environment.
Under the Masking Wizard section, click New to add a masking rule.
In the Masking Form screen, do the following:
Select the Object for masking and choose the Field Type.
Choose the Masking Style (a code sketch of these styles follows the list):
Prefix: This option adds characters before the selected field value. For example, if the field value in the source org is ABC and the prefix masking value is 123, the deployed field value will be 123.ABC.
Suffix: This option adds characters after the selected field value. For example, if the field value in the source org is ABC and the suffix masking value is 123, the deployed field value will be ABC.123.
Replace: This option replaces the selected field value with the characters you enter in the Masking Value field. For example, if the field value in the source org is ABC and the replace masking value is 123, the deployed field value will be 123.
Shuffle: This option shuffles the data in the column (like a deck of cards) and leaves the other columns untouched. For example, if the field value in the source org is ABCDE, the deployed field value might be DCBEA.
Generate Random: This option masks the original value with a random value of a specified length. For example, if the field value in the source org is ABC and the random string length is set to 7, the deployed field value would be similar to 15d3aRG.
Important Note: Masking is not applicable if the field value for the record is empty.
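A rough Python sketch of the five masking styles described above (illustrative only, not ARM's implementation); note it leaves empty values untouched, per the note:

```python
import random
import string

def mask(value: str, style: str, masking_value: str = "", length: int = 7) -> str:
    """Apply one of the masking styles described above."""
    if not value:                      # masking skips empty field values
        return value
    if style == "prefix":              # ABC + 123 -> 123.ABC
        return f"{masking_value}.{value}"
    if style == "suffix":              # ABC + 123 -> ABC.123
        return f"{value}.{masking_value}"
    if style == "replace":             # ABC + 123 -> 123
        return masking_value
    if style == "shuffle":             # ABCDE -> e.g. DCBEA
        chars = list(value)
        random.shuffle(chars)
        return "".join(chars)
    if style == "random":              # ABC -> e.g. 15d3aRG (length 7)
        return "".join(random.choices(string.ascii_letters + string.digits, k=length))
    raise ValueError(f"unknown masking style: {style}")
```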
In the Scheduling section, the user can schedule when the process must run.
Daily: The process runs every day at the scheduled time.
Weekly: The process runs weekly on the scheduled day and time.
No schedule: The process is only saved; you can run it whenever required.
Finally, click Save to complete the initial setup. You will be redirected to the Data Loader Pro Summary page, where the newly initiated Data Loader process appears at the top of the list.
This feature allows the user to select the required job settings, such as disabling workflow rules and validation rules, during job creation.
Once the job is saved, the selected job configuration settings are saved with it.
On running the job, the retained job settings are displayed in the Run Configuration pop-up.
Any changes made while running the job affect only that individual run.
To permanently change the settings configured during job creation, edit the job, change the settings, and save it.
Select your job from the Data Loader Pro Summary screen and click Run. This option allows you to run the processes created in the selected category.
The table below lists the configurations to choose from, along with their descriptions:
| Serial Number | Configurations | Description |
| --- | --- | --- |
| 1 | | The workflows of the Salesforce objects are deactivated, and the data is transferred from the source to the destination sandbox. Once the migration is complete, workflows are reactivated. |
| 2 | | Validation rules verify that the data a user enters in a record meets the criteria you specify before the user can save the record. On selection, all the validation rules of the Salesforce objects are deactivated, and the data is transferred from the source to the destination sandbox. Once the migration is complete, validation rules are reactivated. |
| 3 | | The Bulk API is based on REST principles and is optimized for inserting, updating, and deleting large data sets. You can use the Bulk API to process jobs in serial or parallel mode. Processing batches serially means running them one after another, while processing batches in parallel means running multiple batches simultaneously. Note: When you run a Bulk API job, processing more batches in parallel gives that job a higher degree of parallelism and better overall data throughput. (See the sketch after this table.) |
| 4 | | Insert/update with null values. |
| 5 | | When there are multiple references between the same objects, unnecessary API calls are not triggered upon selecting this option. |
| 6 | | All objects in the hierarchy are calculated based on the Master Object filters; this option avoids adding extra records due to self-references and multiple references. |
| 7 | | Data encryption for data files. |
| 8 | | After a data-loading process is done, only the newly added records are transferred into the destination sandbox. |
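Row 3 concerns the Bulk API's concurrency mode. For background, a hedged sketch of how a Bulk API (1.0) job can request serial rather than parallel processing over plain REST (the instance URL and session ID are placeholders):

```python
import requests

# Bulk API 1.0 lets a job request serial processing via concurrencyMode;
# Parallel is the default and generally gives higher throughput.
job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>insert</operation>
  <object>Account</object>
  <concurrencyMode>Serial</concurrencyMode>
  <contentType>CSV</contentType>
</jobInfo>"""

resp = requests.post(
    "https://yourInstance.salesforce.com/services/async/58.0/job",
    headers={"X-SFDC-Session": "<session-id>",
             "Content-Type": "application/xml; charset=UTF-8"},
    data=job_xml,
)
print(resp.status_code)  # 201 indicates the job was created
```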
Edit: Edits the processes in the selected category so they can be rerun.
Abort: Aborts the process.
Schedule: Schedules the data-loading process for the selected category.
Clone: Clones the respective Data Loader Pro job.
Log: Provides information about the process execution.
This section lets you view the master object and its related information for the test environment setup job. For a run with disabled validation/workflow rules, ARM lists all the rules under the VR/WFR section. The UI lists all the workflow/validation rules; users must re-enable the disabled rules if required.
Change the grid view to a graph view by clicking the Switch to Graph View button. Click the icon to view the graphical representation in full screen.