Salesforce-Platform-Data-Architect Practice Test
Updated On 1-Jan-2026
257 Questions
Universal Containers (UC) has an Application custom object, which has tens of millions of records created in the past 5 years. UC needs the last 5 years of data to exist in Salesforce at all times for reporting and queries. UC is currently encountering performance issues when reporting and running queries on this object using date ranges as filters. Which two options can be used to improve report performance?
A.
Ask support to create a skinny table for Application with the necessary reporting fields.
B.
Add custom indexes to all fields on Application without a standard index.
C.
Run multiple reports to get different pieces of the data and combine them.
D.
Add custom indexes to the Date fields used for filtering the report.
A.
Ask support to create a skinny table for Application with the necessary reporting fields.
D.
Add custom indexes to the Date fields used for filtering the report.
Explanation:
✅ Option A: Ask support to create a skinny table for Application with the necessary reporting fields.
Why it’s correct: A skinny table is a Salesforce feature designed to improve performance for large data volumes, especially for reporting and querying. It’s a custom table maintained by Salesforce that includes a subset of fields from the original object (in this case, the Application custom object) to reduce the need for costly joins and improve query speed. For an object with tens of millions of records, like UC’s Application object, skinny tables are particularly effective when reports or queries frequently filter on specific fields (e.g., date fields). By contacting Salesforce Support to create a skinny table with the necessary reporting fields, UC can significantly enhance performance for date-range-based reports.
Example: Imagine a library with millions of books. Instead of searching the entire library for books published in the last 5 years, a skinny table is like a pre-organized shelf with only the relevant books and their key details, making searches faster.
Context: Skinny tables are particularly useful for objects with high data volumes and frequent reporting needs, as they bypass standard indexing limitations and optimize query execution.
✅ Option D: Add custom indexes to the Date fields used for filtering the report.
Why it’s correct: Custom indexes improve query performance by allowing Salesforce to quickly locate records based on specific field values, such as dates used in report filters. Date fields are commonly used in filters for reports (e.g., “show records from the last 5 years”), and indexing these fields ensures that queries run more efficiently, especially on large datasets like UC’s Application object with tens of millions of records. UC can request Salesforce Support to create custom indexes on the Date fields used in their reports, reducing query execution time.
Example: Think of an index like a book’s table of contents. Without it, you’d need to scan every page to find a topic. With an index, you can jump directly to the relevant pages, saving time.
Context: Salesforce automatically indexes certain fields (e.g., Id, Name), but for non-standard fields like custom Date fields, a custom index must be requested through Salesforce Support.
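For illustration, here is a minimal sketch of the kind of date-range query such a report runs, issued through the Salesforce REST query endpoint. The instance URL, API version, token handling, and the Submitted_Date__c field name are assumptions for illustration, not details from the question.
```typescript
// Hedged sketch: a date-range SOQL query against Application__c, the kind
// of filter a custom index or skinny table makes selective.
// Instance URL, token, and the Submitted_Date__c field are assumptions.
const instanceUrl = "https://example.my.salesforce.com";
const accessToken = "<OAuth access token>";

const soql = encodeURIComponent(
  "SELECT Id, Name FROM Application__c WHERE Submitted_Date__c = LAST_N_YEARS:5"
);

async function queryApplications(): Promise<void> {
  const res = await fetch(`${instanceUrl}/services/data/v58.0/query?q=${soql}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const body = await res.json();
  console.log(`Query returned ${body.totalSize} records`);
}

queryApplications().catch(console.error);
```
Without an index on the date field, the query optimizer may fall back to scanning all of the tens of millions of rows; with a custom index (or a skinny table containing the field), the same filter can be resolved selectively.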
Incorrect Answers
❌ Option B: Add custom indexes to all fields on Application without a standard index.
Why it’s incorrect: Adding custom indexes to all fields without a standard index is not a practical or recommended approach. Indexes are resource-intensive, and Salesforce limits the number of custom indexes per object. Indexing every field without a standard index would likely exceed these limits and may not address the specific performance issue, which is tied to date-range filtering. Additionally, indexes are most effective when applied to fields frequently used in filters, sorts, or lookups, not indiscriminately to all fields.
Common Misconception: Some might think indexing all fields maximizes performance, but this overlooks the fact that unnecessary indexes consume resources and may not improve query performance for fields rarely used in reports.
❌ Option C: Run multiple reports to get different pieces of the data and combine them.
Why it’s incorrect: Running multiple reports and combining them manually is not an efficient or scalable solution for improving report performance. This approach increases complexity, requires additional user effort, and doesn’t address the root cause of the performance issue (inefficient querying on a large dataset). It’s a workaround rather than a technical solution, and it may lead to errors or inconsistencies when combining data.
Common Misconception: Users might assume breaking reports into smaller pieces inherently improves performance, but this doesn’t optimize the underlying query execution and can create additional overhead.
Reference
Salesforce Documentation:
→ Working with Very Large SOQL Queries – Discusses skinny tables and custom indexes for optimizing performance with large data volumes.
→ Custom Indexes – Explains how to request custom indexes through Salesforce Support.
→ Skinny Tables – Describes how skinny tables can improve performance for large datasets.
Trailhead Module: Data Modeling for Large Data Volumes – Covers best practices for handling large datasets in Salesforce.
A national nonprofit organization is using Salesforce to recruit members. The recruitment process requires a member to be matched with a volunteer opportunity. Given the following:
1. A record is created in Project__c and used to track the project through completion.
2. The member may then start volunteering and is required to track their volunteer hours, which are stored in the VTOTime__c object related to the project.
3. The ability to view or edit the VTOTime__c object needs to be the same as for the Project__c record.
4. Managers must see total hours volunteered while viewing the Project__c record.
Which data relationship should the data architect use to support these requirements when creating the custom VTOTime__c object?
A.
Lookup field on Project__c to VTOTime__c displaying a list of VTOTime__c records in a related list.
B.
Lookup field on VTOTime__c to Project__c with formula field on Project__c showing sum of hours from VTOTime__c records.
C.
Master-Detail field on VTOTime__c to Project__c with rollup summary field on Project__c showing sum of hours from VTOTime__c records.
D.
Master-Detail field on Project__c to VTOTime__c showing a list of VTOTime__c records in a related list.
C.
Master-Detail field on VTOTime__c to Project__c with rollup summary field on Project__c showing sum of hours from VTOTime__c records.
Explanation:
This question requires an understanding of Salesforce relationships and their capabilities, specifically when it comes to aggregation. The key requirements are:
1. A parent object (Project__c) and a child object (VTOTime__c).
2. The child object (VTOTime__c) must have the same security access as the parent (Project__c).
3. A sum of volunteer hours from the child records must be visible on the parent record.
C. Master Detail Field on VTOTime__c to Project__c with rollup summary field on Project__c showing sum of hours from VTOTime__c records. This is the correct answer because it satisfies all requirements:
✔️ Master-Detail Relationship: This establishes a tight parent-child connection where the child's existence and security permissions are dependent on the parent. Requirement #3 from the question (view/edit access on VTOTime__c matching Project__c) is met because the VTOTime__c record's view/edit permissions are inherited from the Project__c record.
✔️ Roll-Up Summary Field: This is the only type of field that can aggregate (sum, count, min, max) data from child records directly onto the parent record without using code or a formula field. Requirement #4 (managers seeing total hours on the project) is met by summing the hours field from the VTOTime__c records.
A. Lookup Field on Project__c to VTOTime__c... This is a reversed relationship and incorrect. The child (VTOTime__c) should be related to the parent (Project__c), not the other way around.
B. Lookup field on VTOTime__c to Project__c with formula field on Project__c... This is incorrect. A formula field on a parent object cannot perform a rollup (sum) of child records. Formula fields can only reference fields on the same record or on a parent record, not aggregate values from child records.
D. Master Detail field on Project__c to VTOTime__c... This is an incorrect representation of the relationship setup. The master-detail field is always created on the child object (VTOTime__c) and points to the master object (Project__c). The description is backward.
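For comparison, the sketch below expresses in SOQL what the roll-up summary field computes declaratively. Hours__c is an assumed field name, and the endpoint details are illustrative.
```typescript
// Hedged sketch: the same sum a roll-up summary field maintains, written
// as an explicit SOQL aggregate. Hours__c is a hypothetical field name.
const soql = encodeURIComponent(
  "SELECT Project__c, SUM(Hours__c) totalHours " +
    "FROM VTOTime__c GROUP BY Project__c"
);
const url = `https://example.my.salesforce.com/services/data/v58.0/query?q=${soql}`;
// GET this URL with an Authorization header to receive one row per project.
// The roll-up summary field surfaces the same total on each Project__c
// record with no query, no code, and automatic recalculation on change.
```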
Northern Trail Outfitters (NTO) has the following systems:
Customer master: source of truth for customer information
Service Cloud: customer support
Marketing Cloud: marketing support
Enterprise data warehouse: business reporting
The customer data is duplicated across all these systems and is not kept in sync.
Customers are also complaining that they get repeated marketing emails and have to call in to update their information.
NTO is planning to implement a master data management (MDM) solution across the enterprise.
Which three data issues will an MDM tool solve? Choose 3 answers
A.
Data completeness
B.
Data loss and recovery
C.
Data duplication
D.
Data accuracy and quality
E.
Data standardization
C.
Data duplication
D.
Data accuracy and quality
E.
Data standardization
Explanation:
A Master Data Management (MDM) solution is designed to create a single, authoritative source of truth for an organization's critical data, such as customer information. The primary issues described (data duplicated across systems, not in sync, and customer complaints about repeated emails) are classic symptoms of a lack of MDM.
✅ C. Data duplication:
This is the core problem MDM solves. It identifies and merges duplicate records across various systems (e.g., Service Cloud, Marketing Cloud, Data Warehouse) to create a single, golden record.
✅ D. Data accuracy and quality:
By centralizing and managing customer data in a single system, an MDM tool ensures that information is accurate and consistent across the enterprise. It provides a governed process for updates, preventing conflicting information from being stored in different places.
✅ E. Data standardization:
MDM enforces consistent data formats, values, and rules across all systems. For example, it can standardize address formats or naming conventions, which is critical for clean, usable data. The problem of customers having to call in to update their information in multiple places is a symptom of poor standardization.
❌ A. Data completeness:
While an MDM solution can help with completeness by merging data from various sources, it is not its primary function. Its main purpose is to manage the "master" data, not to ensure every field is filled out, which is typically a data governance and business process issue.
❌ B. Data loss and recovery:
This is incorrect. Data loss and recovery are functions of backup and disaster recovery systems, not MDM. MDM focuses on managing data consistency and quality, not on recovering data from a system failure.
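To make the three correct answers concrete, here is a minimal, hedged sketch of the standardization and duplicate-detection steps an MDM tool performs. Every name and rule in it is hypothetical; real MDM products apply far richer matching and survivorship logic.
```typescript
// Hedged sketch: standardize values to a canonical form, then group
// records by a normalized key to surface duplicate candidates.
// All field names and synonym rules are illustrative assumptions.
interface CustomerRecord {
  email: string;
  state: string;
}

const STATE_SYNONYMS: Record<string, string> = {
  california: "CA",
  calif: "CA",
  ca: "CA",
};

// Data standardization: one canonical format per value.
function standardize(rec: CustomerRecord): CustomerRecord {
  return {
    email: rec.email.trim().toLowerCase(),
    state: STATE_SYNONYMS[rec.state.trim().toLowerCase()] ?? rec.state.trim().toUpperCase(),
  };
}

// Data duplication: records sharing a normalized key are merge candidates
// for a single "golden record".
function findDuplicates(records: CustomerRecord[]): Map<string, CustomerRecord[]> {
  const byKey = new Map<string, CustomerRecord[]>();
  for (const rec of records.map(standardize)) {
    const group = byKey.get(rec.email) ?? [];
    group.push(rec);
    byKey.set(rec.email, group);
  }
  return new Map([...byKey].filter(([, group]) => group.length > 1));
}
```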
Universal Containers has a public website with several forms that create Lead records in Salesforce using the REST API. When designing these forms, which two techniques will help maintain a high level of data quality?
A.
Do client-side validation of phone number and email field formats.
B.
Prefer picklist form fields over free text fields, where possible.
C.
Ensure the website visitor is browsing using an HTTPS connection.
D.
Use cookies to track when visitors submit multiple forms.
A.
Do client-side validation of phone number and email field formats.
B.
Prefer picklist form fields over free text fields, where possible.
Explanation:
The goal is to maintain a high level of data quality when creating Lead records from a public website.
🟢 A. Do client-side validation of phone number and email field formats.
This is correct because client-side validation prevents invalid data formats from ever being submitted to the Salesforce REST API. By checking that an email is in the correct user@domain.com format or a phone number follows a specific pattern before submission, you significantly reduce the amount of junk data entering your system. This is a foundational step in data quality.
🟢 B. Prefer picklist form fields over free text fields, where possible.
This is also correct. Picklists enforce a predefined set of values, eliminating variations and typos that can occur with free-text fields (e.g., "California," "CA," "Calif."). By limiting user input to a controlled list, you ensure consistency and standardization, which are key aspects of data quality.
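A minimal sketch combining both techniques is shown below: validate formats client-side and constrain the lead source to a fixed list before posting to the REST API. The regex patterns, picklist values, API version, and token handling are illustrative assumptions rather than UC's actual form.
```typescript
// Hedged sketch: client-side checks before creating a Lead via the REST API.
// Patterns, picklist values, and endpoint details are assumptions.
const LEAD_SOURCES = ["Web", "Referral", "Advertisement"]; // picklist, not free text

function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

function isValidPhone(phone: string): boolean {
  return /^\+?[0-9\s\-()]{7,15}$/.test(phone);
}

async function submitLead(lastName: string, email: string, phone: string, source: string): Promise<void> {
  // Reject badly formatted input before it ever reaches Salesforce.
  if (!isValidEmail(email) || !isValidPhone(phone) || !LEAD_SOURCES.includes(source)) {
    throw new Error("Please correct the highlighted fields before submitting.");
  }
  await fetch("https://example.my.salesforce.com/services/data/v58.0/sobjects/Lead", {
    method: "POST",
    headers: {
      Authorization: "Bearer <OAuth access token>",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ LastName: lastName, Email: email, Phone: phone, LeadSource: source }),
  });
}
```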
🔴 C. Ensure the website visitor is browsing using an HTTPS connection.
This is incorrect. While HTTPS is crucial for data security and encryption during transmission, it has no direct impact on the quality or format of the data itself.
🔴 D. Use cookies to track when visitors submit multiple forms.
This is incorrect. Cookies are used to track user behavior and identify repeat visitors. While this can be useful for lead deduplication or marketing automation, it doesn't directly improve the quality of the data entered into the form fields.
NTO has implemented Salesforce for its sales users. Opportunity management in Salesforce is implemented as follows:
1. Sales users enter their opportunities in Salesforce for forecasting and reporting purposes.
2. NTO has a product pricing system (PPS) that is used to update the opportunity Amount field on opportunities on a daily basis.
3. PPS is the trusted source within NTO for the opportunity amount.
4. NTO uses the opportunity forecast for its sales planning and management.
Sales users have noticed that their updates to the opportunity Amount field are overwritten when PPS updates their opportunities.
How should a data architect address this overriding issue?
A.
Create a custom field for opportunity amount that sales users update, separating it from the field that PPS updates.
B.
Create a custom field for opportunity amount that PPS updates, separating it from the field that sales users update.
C.
Change opportunity Amount field access to read-only for sales users using field-level security.
D.
Change the PPS integration to update the opportunity Amount field only when the value is NULL.
B.
Create a custom field for opportunity amount that PPS updates, separating it from the field that sales users update.
Explanation:
Correct Answer:
The key facts are that PPS is the trusted source within NTO for the opportunity amount and that its daily updates are overwriting the sales users' edits. The root cause is that two different writers, the integration and the users, share a single field.
The best design is to clearly separate system-controlled data from user-controlled data: create a custom field that PPS updates, and leave the standard Amount field to the sales users. Sales users keep entering and adjusting their deal values without being overwritten, while the PPS-trusted amount remains available on its own field for sales planning, reporting, and reconciliation against the forecast.
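As a concrete illustration, a minimal sketch of the adjusted integration call follows, assuming the REST API is used and the custom field is named PPS_Amount__c (a hypothetical name, not from the question):
```typescript
// Hedged sketch: the PPS integration writes its trusted amount to a
// dedicated custom field instead of the standard Amount field.
// PPS_Amount__c, the instance URL, and the token are illustrative assumptions.
async function pushPpsAmount(opportunityId: string, amount: number): Promise<void> {
  await fetch(
    `https://example.my.salesforce.com/services/data/v58.0/sobjects/Opportunity/${opportunityId}`,
    {
      method: "PATCH",
      headers: {
        Authorization: "Bearer <OAuth access token>",
        "Content-Type": "application/json",
      },
      // Only the custom field appears in the body, so the user-owned
      // standard Amount field is never touched by the daily run.
      body: JSON.stringify({ PPS_Amount__c: amount }),
    }
  );
}
```
Because the PATCH body names only the custom field, the nightly PPS run can no longer overwrite what sales users enter in Amount.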
Incorrect Answers:
A. Sales users update a custom field.
If the sales users update a custom field while PPS continues updating the standard Amount, forecasts will be based only on PPS values. Sales users will lose visibility of their contributions to forecasts, which goes against their business process.
C. Making the field read-only for users.
This blocks salespeople from entering or adjusting expected deal values. While it prevents overwrites, it also removes needed functionality and reduces trust in the system. Not a practical solution.
D. PPS only updating when NULL.
Since users will almost always enter an Amount, the PPS integration would rarely update the field, and forecasts would drift away from the PPS trusted source. This defeats the purpose of having PPS as the system of record.
Reference:
Salesforce Help: Forecasts and Opportunity Amount