UC recently migrated one billion customer-related records from a legacy data store to Heroku Postgres. A subset of the data needs to be synchronized with Salesforce so that service agents can support customers directly within the service console. The remaining, non-synchronized set of data must be accessible from Salesforce at any point in time, but UC management is concerned about storage limitations. What should a data architect recommend to meet these requirements with minimal effort?
A.
Virtualize the remaining set of data with Salesforce Connect and external objects.
B.
Use Heroku Connect to bi-directionally sync all data between systems.
C.
As needed, make callouts to Heroku Postgres and persist the data in Salesforce.
D.
Migrate the data to big objects and leverage Async SOQL with custom objects.
A.
Virtualize the remaining set of data with Salesforce Connect and external objects.
Explanation:
Option A ("Virtualize the remaining set of data with Salesforce Connect and external objects") addresses the storage concern most effectively. Salesforce Connect leaves the data external (in Heroku Postgres) and queries it on demand, so none of it counts against Salesforce data storage. This gives agents real-time access without physically copying a billion records into Salesforce, making it the best fit for large-volume, externally mastered data. Option B (bi-directional sync of all data via Heroku Connect) and Option D (migration to big objects) both move the full data set into Salesforce, reintroducing storage cost and complexity, and Option C (callouts that persist the results) defeats the primary goal of reducing Salesforce storage.
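To illustrate how the virtualized data would be consumed, a minimal Apex sketch is shown below. External objects created by Salesforce Connect carry the `__x` suffix and are queried with ordinary SOQL; the object and field names here (`Order_Record__x`, `Customer_Email__c`, `Order_Total__c`) are hypothetical examples, not part of the original scenario.

```apex
// Hypothetical external object backed by Heroku Postgres via Salesforce Connect.
// The rows are fetched from the external source at query time and are never
// stored in Salesforce, so they consume no Salesforce data storage.
List<Order_Record__x> recentOrders = [
    SELECT Customer_Email__c, Order_Total__c
    FROM Order_Record__x
    WHERE Customer_Email__c = :customerEmail
    LIMIT 50
];
for (Order_Record__x ord : recentOrders) {
    System.debug('Order total: ' + ord.Order_Total__c);
}
```

Because the query is resolved against Heroku Postgres each time it runs, agents in the service console always see current data without any synchronization job.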