C. Pausing the migration, investigating the inconsistencies, and fixing the source data or mapping errors.
Explanation:
When unexpected data inconsistencies occur during a Salesforce data migration, the most effective way to preserve data integrity is to pause the process, analyze the root cause, and take corrective action. While other options may offer temporary relief or speed, they risk introducing bad data into your new system—leading to long-term issues in reporting, automation, and customer experience.
🅰️ Option A: Ignoring minor inconsistencies to avoid delaying the migration timeline
While the pressure to meet tight deadlines can be intense, ignoring inconsistencies, even if they seem minor, is generally a poor strategy. Small errors can quickly scale into larger problems once data is live in Salesforce. For example, mismatched picklist values, formatting errors, or null fields could lead to broken reports, faulty automation, or user confusion.
By allowing flawed data into production, you risk eroding user trust, introducing compliance issues, and causing downstream process failures. Short-term speed comes at the cost of long-term data reliability, and bad data is usually far more expensive to fix after launch than during migration.
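To make this concrete, here is a minimal pre-load validation sketch in Python. The field names (Name, Email, Industry) and the allowed picklist values are hypothetical stand-ins for whatever your target org actually defines; the point is that a cheap check before loading catches exactly the kind of "minor" inconsistencies that otherwise surface later as broken reports or faulty automation.

```python
# Minimal pre-load validation sketch. The field names and picklist
# values below are hypothetical; substitute the ones your target
# Salesforce org actually defines.

VALID_INDUSTRIES = {"Technology", "Finance", "Healthcare"}  # assumed picklist values
REQUIRED_FIELDS = ["Name", "Email"]                         # assumed required fields

def validate_record(record: dict) -> list[str]:
    """Return a list of human-readable issues found in one source record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing required field '{field}'")
    industry = record.get("Industry")
    if industry and industry not in VALID_INDUSTRIES:
        issues.append(f"picklist mismatch: Industry={industry!r}")
    return issues

# Example: one clean record, one with a "minor" inconsistency.
records = [
    {"Name": "Acme", "Email": "ops@acme.example", "Industry": "Technology"},
    {"Name": "Globex", "Email": "", "Industry": "Tech"},  # null field + bad picklist
]

for i, rec in enumerate(records):
    for issue in validate_record(rec):
        print(f"row {i}: {issue}")
```

Even a check this simple flags the second record twice, which is exactly the evidence that gets lost when "minor" issues are waved through.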
🅱️ Option B: Implementing data cleansing scripts or manual data correction within Salesforce
Cleansing data via scripts, tools, or manual updates is a legitimate way to maintain quality. This could involve normalizing values, removing duplicates, or validating references. However, doing so within Salesforce after migration is reactive rather than preventative, especially if the issue originated in the source system or in the mapping logic.
Manual fixes can also be time-consuming and inconsistent without standardized rules. While this approach can work in some cases, especially for small datasets, it doesn’t address why the errors occurred and risks repeating the same issues in future loads.
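As a rough illustration, a cleansing pass along these lines might normalize values and remove duplicates before records ever reach Salesforce. The field names and normalization rules below are hypothetical examples, not a prescribed standard:

```python
# Minimal cleansing sketch: normalize values and drop duplicates
# before (or instead of) fixing records one by one inside Salesforce.
# Field names and normalization rules are hypothetical examples.

STATE_ALIASES = {"calif.": "CA", "california": "CA", "n.y.": "NY"}  # assumed mapping

def cleanse(record: dict) -> dict:
    """Return a normalized copy of one record."""
    cleaned = dict(record)
    cleaned["Email"] = (cleaned.get("Email") or "").strip().lower()
    state = (cleaned.get("BillingState") or "").strip().lower()
    cleaned["BillingState"] = STATE_ALIASES.get(state, state.upper())
    return cleaned

def dedupe(records: list[dict], key: str = "Email") -> list[dict]:
    """Keep the first record seen for each key value."""
    seen, unique = set(), []
    for rec in records:
        k = rec.get(key)
        if k and k in seen:
            continue
        seen.add(k)
        unique.append(rec)
    return unique

raw = [
    {"Email": " Pat@Example.com ", "BillingState": "Calif."},
    {"Email": "pat@example.com",   "BillingState": "CA"},  # duplicate after cleansing
]
print(dedupe([cleanse(r) for r in raw]))
```

Note that a script like this fixes symptoms: it tells you nothing about why "Calif." appeared in the source in the first place, which is the gap Option C addresses.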
✅ 🅲 Option C: Pausing the migration, investigating the inconsistencies, and fixing the source data or mapping errors
(Correct Answer)
This is the most effective and responsible strategy. By pausing the migration and thoroughly analyzing the inconsistencies, you can pinpoint whether the problem lies in the source data, the transformation logic, or the field mapping. Once identified, the root issue can be fixed—either in the legacy system, in the staging files, or within the data mapping configuration—before continuing with the migration.
This method ensures clean, consistent data is loaded into Salesforce, which supports downstream functions like reporting, automation, and analytics. It may extend the timeline slightly but results in a higher-quality, trustworthy system that users can rely on from day one.
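As a sketch of what "pausing the migration" can look like in practice, the following gate halts a batch load and surfaces sample failures once inconsistencies exceed an agreed tolerance. The threshold, the validator, and the loader are all assumptions for illustration:

```python
# Minimal "pause and investigate" gate sketch. The threshold, validator,
# and loader are all assumptions; the idea is simply that the load halts
# and reports context as soon as inconsistencies exceed an agreed
# tolerance, instead of pushing flawed rows into production.

ERROR_THRESHOLD = 0.01  # assumed tolerance: pause if >1% of a batch fails validation

def migrate_batch(batch: list[dict], validate, load) -> None:
    """Validate a batch; load it only if the error rate is within tolerance."""
    results = [(rec, validate(rec)) for rec in batch]
    failures = [(rec, issues) for rec, issues in results if issues]
    error_rate = len(failures) / len(batch) if batch else 0.0
    if error_rate > ERROR_THRESHOLD:
        # Pause: surface the evidence needed to trace the root cause
        # (source data, transformation logic, or field mapping).
        for rec, issues in failures[:10]:  # sample for investigation
            print(f"INCONSISTENT: {rec} -> {issues}")
        raise RuntimeError(
            f"Migration paused: {error_rate:.1%} of batch failed validation"
        )
    load([rec for rec, issues in results if not issues])

# Demo with a trivial validator (hypothetical rule: Email must contain "@").
demo = [{"Email": "a@b.example"}, {"Email": "not-an-email"}]
try:
    migrate_batch(
        demo,
        lambda r: [] if "@" in r.get("Email", "") else ["bad email"],
        lambda rows: print(f"loaded {len(rows)} rows"),
    )
except RuntimeError as err:
    print(err)
```

The design choice here is deliberate: failing loudly with sample records gives you exactly what you need to decide whether the fix belongs in the legacy system, the staging files, or the mapping configuration.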
🅳 Option D: All of the above, depending on the severity and impact of the data inconsistencies encountered
While this answer acknowledges flexibility, it risks justifying poor data practices. Not all inconsistencies are equal, but any blanket approach that includes ignoring errors (Option A) is risky. Data integrity should always be prioritized, and Option C remains the best general practice regardless of severity, because it promotes investigation and resolution rather than workarounds.
🧩 Summary:
Maintaining data integrity is essential to the success of any Salesforce implementation. Option C ensures that your migration delivers data that is accurate, clean, and scalable, even if it takes slightly longer. Addressing issues at the root before continuing the migration minimizes risk and sets the foundation for reliable business operations.
📚 Official Salesforce Reference:
🔗 Salesforce Data Quality Best Practices