Restore Best Practices
- Updated on 02 Feb 2022
- 1 Minute to read
This article illustrates key best practices for the restore process on the Vault platform:
- Instead of restoring all metadata at once, which can fail if the metadata exceeds Salesforce governor limits, identify the code components and opt for selective restoration.
- If you restore the full metadata across 2-3 cycles of selective restoration, you must understand each component's dependencies and include the dependent components in the selective restoration operations. See the Salesforce metadata limits documentation: https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_metadata.htm
- Active validation rules, triggers, Process Builder processes, and workflow rules may cause data and metadata restoration failures. Make sure these are deactivated before you run the restore operation.
- The Metadata API can deploy and retrieve up to 10,000 files or 400 MB at a time; if either limit is exceeded, the deployment or retrieval fails. Keep the metadata size under 400 MB for a single job, and split larger metadata into multiple jobs to achieve restoration or replication.
- Define the batch size depending on the size of the metadata or data the job will process.
- If you're initiating a data restoration, make sure your production org has its full API limits available; otherwise the process will take a long time due to API availability limits.
- Data restoration may fail if triggers or validation rules are not properly configured.
- Make sure the data you're restoring belongs to active users/owners; any failure caused by inactive owners, accounts, or users should be considered a Salesforce data error, not a Vault restoration issue.
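The metadata-splitting guideline above can be sketched as a simple greedy chunker. This is a minimal illustration, not Vault's implementation: it assumes you already have a list of hypothetical `(name, size_in_bytes)` pairs for the metadata files, and it packs them into jobs that each stay under the documented Metadata API limits of 10,000 files and 400 MB.

```python
# Hypothetical sketch: pack metadata files into restore jobs that each
# respect the Metadata API limits (10,000 files or 400 MB per operation).

MAX_FILES = 10_000
MAX_BYTES = 400 * 1024 * 1024  # 400 MB

def split_into_jobs(files):
    """files: list of (name, size_in_bytes) tuples -> list of job file lists."""
    jobs, current, current_bytes = [], [], 0
    for name, size in files:
        # Start a new job when adding this file would breach either limit.
        if current and (len(current) >= MAX_FILES or current_bytes + size > MAX_BYTES):
            jobs.append(current)
            current, current_bytes = [], 0
        current.append(name)
        current_bytes += size
    if current:
        jobs.append(current)
    return jobs
```

For example, three 150 MB packages would be split into two jobs (300 MB, then 150 MB), since a single 450 MB job would exceed the 400 MB limit.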
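The batch-sizing guideline above can be made concrete with a small helper. This is an illustrative sketch, not a Vault API: the only platform figure it relies on is the Bulk API maximum of 10,000 records per batch, and the function name and parameters are assumptions.

```python
import math

# Bulk API upper bound on records per batch; smaller batches are safer
# when triggers or other automation remain active during the restore.
MAX_BATCH_RECORDS = 10_000

def plan_batches(total_records, batch_size=MAX_BATCH_RECORDS):
    """Return (batch_count, effective_batch_size) for a data restore job."""
    batch_size = min(batch_size, MAX_BATCH_RECORDS)
    return math.ceil(total_records / batch_size), batch_size
```

For instance, restoring 25,000 records at the maximum batch size yields 3 batches, while a conservative batch size of 4,000 yields 7.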
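The last point above suggests a pre-flight check before restoring data: flag records whose owners are inactive so they can be reassigned first. This is a hypothetical sketch; `records` and `active_user_ids` are assumed inputs you would build yourself (e.g. the ID set from a SOQL query like `SELECT Id FROM User WHERE IsActive = true`).

```python
# Hypothetical pre-flight check: partition records queued for restore by
# whether their owner is an active user.

def partition_by_owner(records, active_user_ids):
    """Split records into (restorable, needs_reassignment) by owner status."""
    restorable, needs_reassignment = [], []
    for rec in records:
        if rec["OwnerId"] in active_user_ids:
            restorable.append(rec)
        else:
            needs_reassignment.append(rec)
    return restorable, needs_reassignment
```

Records in the second list would fail with a Salesforce data error if restored as-is, so reassign or reactivate their owners before running the job.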