Salesforce-Platform-Data-Architect Practice Test
Updated On 1-Jan-2026
257 Questions
Universal Containers is establishing a call center that will use Salesforce. UC receives 10 million calls and creates 100 million cases every month. Cases are linked to a custom call object using a lookup relationship. UC would like to run reports and dashboards to better understand the different case types being created on calls so it can better serve customers. What solution should a data architect recommend to meet the business requirement?
A.
Archive records to a data warehouse and run analytics on the data warehouse.
B.
Leverage big objects to archive records and Einstein Analytics to run reports.
C.
Leverage custom objects to store aggregate data and run analytics.
D.
Leverage out-of-the-box reports and dashboards on the Case object and the interactive voice response (IVR) custom object.
Leverage big objects to archive records and Einstein Analytics to run reports.
Explanation:
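A quick volume check, using only the figures stated in the question (simple arithmetic, not an official sizing guideline), shows the scale that rules out keeping everything in standard reporting objects:

```python
# Case and call volumes implied by the question.
cases_per_month = 100_000_000
calls_per_month = 10_000_000

print(f"Cases per year:     {cases_per_month * 12:,}")      # 1,200,000,000
print(f"Cases over 3 years: {cases_per_month * 12 * 3:,}")  # 3,600,000,000
print(f"Calls per year:     {calls_per_month * 12:,}")      # 120,000,000
```

Record counts in the billions exhaust standard data storage and make out-of-the-box reports and dashboards (option D) impractical, whereas big objects are built to store data at this scale, with Einstein Analytics providing the reporting layer, which is the reasoning behind option B.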
Universal Containers needs to categorize 500,000 customers into Gold, Silver, or Bronze based on their purchase of a new support service, with 1% of non-categorized customers adopting monthly for five years. The solution must ensure long-term performance. A simple, scalable approach is required to track categories efficiently without overcomplicating the data model or impacting system performance.
Correct Option: 🅱️ Implement a new picklist custom field in the Account object with Gold, Silver, and Bronze values
This option is ideal because a picklist field on the Account object is simple, scalable, and efficient for categorizing customers. It avoids additional objects, minimizing storage and query complexity. With only three categories and roughly 5,000 newly categorized customers per month (1% of 500,000), this solution ensures performance without overengineering, aligning with Salesforce’s best practices for data modeling.
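As a rough back-of-the-envelope check (a sketch that assumes the 1% adoption applies to the remaining non-categorized customers each month), the cumulative count stays well below the 500,000 Account total, and each categorized customer costs nothing more than one picklist value on an existing record:

```python
# Back-of-the-envelope adoption estimate: each month, 1% of the
# still non-categorized customers purchase the new support service.
total_customers = 500_000
monthly_adoption_rate = 0.01
months = 5 * 12  # five years

non_categorized = float(total_customers)
categorized = 0.0
for _ in range(months):
    new_this_month = non_categorized * monthly_adoption_rate
    categorized += new_this_month
    non_categorized -= new_this_month

print(f"Categorized after 5 years: {categorized:,.0f}")      # ~226,000
print(f"Still non-categorized:     {non_categorized:,.0f}")  # ~274,000
```

Even under a flatter reading (a constant 5,000 new customers per month, roughly 300,000 over five years), no extra objects, joins, or storage are introduced, which is why the plain picklist scales cleanly.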
Incorrect Option: 🅰️ Implement a new global picklist custom field with Gold, Silver, and Bronze values and enable it in Account
A global picklist is incorrect because it’s designed for reusability across multiple objects, which isn’t needed here. The requirement is specific to the Account object, and a global picklist adds unnecessary complexity. It doesn’t improve performance and could complicate maintenance, as global picklists are managed at the org level, making them less flexible for Account-specific categorization needs.
Incorrect Option: 🅲 Implement a new Categories custom object and a master-detail relationship from Account to Category
A master-detail relationship is unsuitable because it implies Accounts depend on Categories, which isn’t the case. It also requires a separate object, increasing storage and query complexity for a simple categorization need. With only three categories, this overcomplicates the data model, potentially degrading performance as the number of categorized customers grows into the hundreds of thousands over five years.
Incorrect Option: 🅳 Implement a new Categories custom object and create a lookup field from Account to Category
Using a lookup field with a Categories object is inefficient for this use case. It requires managing a separate object, increasing storage and maintenance overhead. For a simple three-value categorization, this approach is overly complex and could impact performance with 500,000 Accounts, especially as data grows, making it less optimal than a picklist field.
Reference
Salesforce Help: Custom Field Types
Salesforce Architect Guide: Data Modeling Best Practices
Universal Containers (UC) is implementing a formal, cross-business-unit data governance program. As part of the program, UC will implement a team to make decisions on enterprise-wide data governance. Which two roles are appropriate as members of this team? Choose 2 answers
A.
Analytics/BI Owners
B.
Data Domain Stewards
C.
Salesforce Administrators
D.
Operational Data Users
A.
Analytics/BI Owners
B.
Data Domain Stewards
Explanation:
A data governance team requires individuals who understand both the strategic and operational aspects of enterprise data. The right mix ensures consistent policies, shared accountability, and clarity across business units. Technical administrators and everyday users may be involved later for execution, but governance leadership needs roles directly responsible for data quality, definitions, and analytics.
Correct Options
✅ A. Analytics/BI Owners
These roles bring insights into how data is consumed for reporting and business intelligence. Their perspective ensures governance decisions align with downstream analytics needs, avoiding fragmented definitions and inconsistent reporting across units.
✅ B. Data Domain Stewards
Data stewards are responsible for managing data quality, standards, and definitions within their domains. Their involvement is critical to governance decisions, as they ensure consistency and alignment across business units.
Incorrect Options
❌ C. Salesforce Administrators
Admins are key to enforcing governance decisions inside Salesforce, but they aren’t typically decision-makers on the governance board. They execute policies rather than define them.
❌ D. Operational Data Users
While their feedback is valuable, day-to-day users are not suited for setting enterprise-wide data policies. They may participate in working groups but not the formal governance decision-making body.
Reference:
Salesforce Data Governance Best Practices
Universal Containers would like to have a Service-Level Agreement (SLA) of 1 day for any data loss due to unintentional or malicious updates of records in Salesforce. What approach should be suggested to address this requirement?
A.
Build a daily extract job and extract data to on-premise systems for long-term backup and archival purposes.
B.
Schedule a Weekly Extract Service for key objects and extract data in Excel sheets to on-premise systems.
C.
Store all data in shadow custom objects on any updates and deletes, and extract them as needed.
D.
Evaluate a third-party AppExchange app, such as OwnBackup or Spanning, etc., for backup and archival purposes.
Evaluate a third-party AppExchange app, such as OwnBackup or Spanning, etc., for backup and archival purposes.
Explanation:
Salesforce’s native backup and recovery capabilities don’t meet strict SLA requirements like a guaranteed 1-day recovery. Organizations needing higher guarantees must look to external solutions built specifically for data protection. AppExchange backup tools provide automated daily backups, rapid restores, and retention policies to meet SLA-driven compliance requirements.
Correct Option
✅ D. Evaluate a third-party AppExchange app, such as OwnBackup or Spanning, etc., for backup and archival purposes.
These purpose-built apps provide automated backups, point-in-time recovery, and compliance features that Salesforce itself doesn’t natively guarantee. They align with the 1-day SLA requirement by making recovery faster and more reliable. Salesforce itself recommends third-party tools for organizations with strict data backup and restore needs.
Incorrect Options
❌ A. Build a daily extract job and extract data to on-premise systems for long-term backup and archival purposes.
This requires custom work, constant monitoring, and complex recovery processes. It doesn’t provide a true 1-day SLA guarantee since restoring from flat files can be slow and error-prone.
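For a sense of the custom work option A implies, here is a minimal sketch of a daily extract using the Salesforce REST query endpoint from Python (the instance URL, access token, and object/field list are placeholders); a production job would still need to cover every object and field, schedule and monitor itself, and, hardest of all, provide a tested restore path:

```python
import csv
import datetime
import requests

# Placeholders: in a real job these come from a secrets store / OAuth flow.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
ACCESS_TOKEN = "00D...access_token"
API_VERSION = "v59.0"

def export_object(sobject: str, fields: list[str], out_dir: str = ".") -> str:
    """Pull all records of one object via the REST query endpoint and write a CSV."""
    soql = f"SELECT {', '.join(fields)} FROM {sobject}"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    resp = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/query/",
        headers=headers, params={"q": soql}, timeout=120,
    )
    resp.raise_for_status()
    body = resp.json()
    records = list(body["records"])
    # Follow pagination until the full result set is retrieved.
    while not body["done"]:
        resp = requests.get(INSTANCE_URL + body["nextRecordsUrl"],
                            headers=headers, timeout=120)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body["records"])

    path = f"{out_dir}/{sobject}_{datetime.date.today().isoformat()}.csv"
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields)
        writer.writeheader()
        for rec in records:
            writer.writerow({f: rec.get(f) for f in fields})
    return path

# One object shown; a real backup must cover them all, every day.
export_object("Account", ["Id", "Name", "LastModifiedDate"])
```

All of that plumbing still only gets the data out; restoring relationships, attachments, and metadata within a 1-day SLA is where file-based approaches usually fall short.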
❌ B. Schedule a Weekly Extract Service for key objects and extract data in Excel sheets to on-premise systems.
A weekly extract misses the daily SLA requirement. Even worse, Excel-based backups are error-prone, not scalable, and can’t reliably restore large datasets quickly.
❌ C. Store all data in shadow custom objects on any updates and deletes, and extract them as needed.
While creative, this adds unnecessary storage overhead, creates duplicate records, and complicates governance. It still doesn’t guarantee compliance with a 1-day SLA.
Reference:
Salesforce Help – Backup and Restore Options
The architect is planning a large data migration for Universal Containers from their legacy CRM system to Salesforce. What three things should the architect consider to optimize performance of the data migration? Choose 3 answers
A.
Review the time zones of the User loading the data.
B.
Remove custom indexes on the data being loaded.
C.
Determine if the legacy system is still in use.
D.
Defer sharing calculations of the Salesforce Org.
E.
Deactivate approval processes and workflow rules.
A.
Review the time zones of the User loading the data.
D.
Defer sharing calculations of the Salesforce Org.
E.
Deactivate approval processes and workflow rules.
Explanation:
Summary:
When planning a large data migration, the architect must focus on reducing unnecessary system overhead, preventing automation slowdowns, and ensuring smooth execution. During bulk loads, every second counts, and features like sharing calculations, workflow rules, or time zone mismatches can slow the job down dramatically. By disabling these temporarily and carefully reviewing user context, migration speed and performance improve without risking long-term business functionality.
Correct Options
✅ A. Review the time zones of the User loading the data.
Time zone differences can impact date/time field values during migration. If the user running the data load has a different time zone than the source system, values may shift and appear inconsistent. Checking this avoids unwanted adjustments during conversion, ensuring data integrity and reducing reprocessing later.
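To make the risk concrete, here is a small illustration (plain Python with zoneinfo, not tied to any particular loading tool): the same offset-less timestamp from a legacy export resolves to different UTC instants depending on which time zone it is interpreted in, and Salesforce stores date/time values in UTC.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A date/time exported from the legacy CRM without an explicit UTC offset.
legacy_value = datetime(2024, 3, 15, 9, 30)

# Interpreted in the legacy system's time zone vs. the loading user's time zone.
as_source_tz = legacy_value.replace(tzinfo=ZoneInfo("America/New_York"))
as_load_user_tz = legacy_value.replace(tzinfo=ZoneInfo("Europe/Berlin"))

print(as_source_tz.astimezone(ZoneInfo("UTC")))     # 2024-03-15 13:30:00+00:00
print(as_load_user_tz.astimezone(ZoneInfo("UTC")))  # 2024-03-15 08:30:00+00:00
# The same value in the source file lands five hours apart once persisted,
# because the conversion to UTC depends on the offset applied at load time.
```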
✅ D. Defer sharing calculations of the Salesforce Org.
Sharing rules and recalculations can consume significant resources during large data operations. By deferring them until after the migration, the architect prevents performance bottlenecks and speeds up loading. Once the migration finishes, the sharing rules can be recalculated in a controlled way, keeping performance stable while data is still flowing in.
✅ E. Deactivate approval processes and workflow rules.
Automation like workflows, approval processes, and triggers fire for each record, slowing imports dramatically. Temporarily turning them off keeps migration lean and efficient. Once the migration is complete, these processes can be re-enabled so that normal business operations resume without unnecessary delays.
Incorrect Options
❌ B. Remove custom indexes on the data being loaded.
Indexes actually improve query and load performance by making searches more efficient. Removing them would increase the workload on the database and reduce optimization, which is the opposite of what’s needed during a migration.
❌ C. Determine if the legacy system is still in use.
While useful for planning timelines and cutover strategy, this doesn’t directly optimize the performance of the migration itself. It’s a governance and readiness issue rather than a performance lever during bulk loads.
Reference:
Salesforce Help – Best Practices for Data Migration
Universal Containers (UC) needs to move millions of records from an external enterprise resource planning (ERP) system into Salesforce.
What should a data architect recommend to be done while using the Bulk API in serial mode instead of parallel mode?
A.
Placing 20 batches on the queue for upsert jobs.
B.
Inserting 1 million orders distributed across a variety of accounts with potential lock exceptions.
C.
Leveraging a controlled feed load with 10 batches per job.
D.
Inserting 1 million orders distributed across a variety of accounts with lock exceptions eliminated and managed.
Inserting 1 million orders distributed across a variety of accounts with lock exceptions eliminated and managed.
Explanation:
The question focuses on optimizing a large, complex data load to avoid record locking exceptions. Serial mode processes batches one at a time, which is slower but eliminates the concurrency that causes database locks when updating the same parent records (e.g., Accounts) from multiple parallel batches.
Correct Option:
D. ✅ Inserting 1 million orders distributed across a variety of accounts with lock exceptions eliminated and managed. This is correct because using serial mode is a specific strategy to prevent lock exceptions. By processing batches sequentially, you avoid simultaneous updates to the same account, thus "eliminating and managing" the locks that would occur in parallel mode with this data distribution.
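To show what opting into serial mode looks like in practice, here is a minimal sketch that creates a Bulk API 1.0 insert job with its concurrencyMode set to Serial (the instance URL, session ID, and use of the standard Order object are assumptions for illustration); CSV batches are then added to the job and the job is closed as usual:

```python
import requests

# Placeholders: obtained from your login or OAuth flow.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
SESSION_ID = "00D...session_id"
API_VERSION = "59.0"

# Bulk API 1.0 lets you choose the concurrency mode per job. Serial
# processes one batch at a time, trading throughput for freedom from
# record-locking contention on shared parent records (e.g., Accounts).
job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
    <operation>insert</operation>
    <object>Order</object>
    <concurrencyMode>Serial</concurrencyMode>
    <contentType>CSV</contentType>
</jobInfo>"""

resp = requests.post(
    f"{INSTANCE_URL}/services/async/{API_VERSION}/job",
    headers={
        "X-SFDC-Session": SESSION_ID,
        "Content-Type": "application/xml; charset=UTF-8",
    },
    data=job_xml,
    timeout=60,
)
resp.raise_for_status()
print(resp.text)  # the jobInfo response contains the new job Id
```

Even in serial mode, sorting the source file by the parent Account Id so that rows for the same account fall into the same batch further reduces the chance of lock contention.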
Incorrect Options:
A. ❌ Placing 20 batches on the queue for upsert jobs. The number of batches in the queue is not the primary factor. Whether in serial or parallel mode, if multiple batches try to update the same parent account records concurrently, lock exceptions will still occur. Serial mode is chosen to avoid this concurrency.
B. ❌ Inserting 1 million orders distributed across a variety of accounts with potential lock exceptions. This describes the problem that serial mode is meant to solve. If you proceed with a parallel load with this data distribution, you should expect lock exceptions, not recommend it.
C. ❌ Leveraging a controlled feed load with 10 batches per job. While controlling batch size is a good general practice, it does not directly address the root cause of lock exceptions during parallel loads—concurrent writes to the same record. Serial mode is the prescribed solution for this specific scenario.
Reference:
Salesforce Help: Bulk API with Serial or Parallel Processing
Salesforce Help: Avoid Locking Contention