Salesforce-Platform-Data-Architect Practice Test
Updated On 1-Jan-2026
257 Questions
Universal Containers (UC) is replacing a homegrown CRM solution with Salesforce. UC has decided to migrate operational (open and active) records to Salesforce while keeping historical records in the legacy system, and would like historical records to be available in Salesforce on an as-needed basis.
Which solution should a data architect recommend to meet the business requirement?
A.
Leverage real-time integration to pull records into Salesforce.
B.
Bring all data into Salesforce and delete it after a year.
C.
Leverage mashup to display historical records in Salesforce.
D.
Build a custom solution to query the legacy system and display records.
Leverage mashup to display historical records in Salesforce.
Explanation:
UC needs to migrate active records to Salesforce but keep historical data in a legacy system, making it viewable on-demand. The requirement is for read-only access without the cost and complexity of a full data migration or a real-time integration for all historical data.
Correct Option:
C. 🧩 Leverage mashup to display historical records in Salesforce.
This is the correct answer. A mashup (often implemented via a Canvas app or Lightning web component with an iframe) allows the legacy application's user interface to be embedded directly within a Salesforce Lightning page. Users can click to view historical data on-demand without leaving Salesforce, satisfying the "as needed" requirement without moving any data.
Incorrect Options:
A. 🔌 Leverage real-time integration to pull records into Salesforce.
While this would work, it is inefficient for a simple "as needed" viewing requirement. It requires building complex APIs, handling authentication, and could cause performance issues if the legacy system can't handle the load, all for data that is rarely accessed.
B. 🗑️ Bring all data into Salesforce and delete it after a year.
This directly violates the business requirement to keep historical records in the legacy system and only access them as needed. It would also consume significant storage and require a complex data deletion process.
D. 🔍 Build a custom solution to query the legacy system and display records.
This is essentially a vaguer description of what a mashup does. Option C names the standard, supported Salesforce pattern for achieving this (a mashup), making it the more precise and recommended answer.
Reference:
Salesforce Help: Lightning Experience Customization with Canvas
Northern Trail Outfitters (NTO) receives approximately 100,000 IoT records per day, which are stored in an external cloud database. NTO employees will need to see these IoT records within Salesforce and generate weekly reports on them. Developers may also need to write programmatic logic to aggregate the records and incorporate them into workflows. Which data pattern will allow a data architect to satisfy these requirements, while also keeping limits in mind?
A.
Bidirectional integration
B.
Unidirectional integration
C.
Virtualization
D.
Persistence
Virtualization
Explanation:
NIO needs to access a high volume of daily IoT data within Salesforce for reporting and automation, but storing all 36.5 million records annually in Salesforce would consume excessive data storage. The requirement for real-time access and aggregation points to a need for the data to be queryable as if it were in Salesforce, without physically storing it there.
Correct Option:
C. 🌐 Virtualization. This is the correct answer. Virtualization (e.g., using Salesforce Connect and an OData adapter) allows the IoT data stored in the external cloud database to be accessed in real-time from within Salesforce. The data appears as an external object, enabling employees to view it in reports and developers to query it via SOQL and use it in workflows, all without consuming any Salesforce data storage.
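For illustration, the sketch below assumes the IoT feed is surfaced via Salesforce Connect as a hypothetical external object named IoT_Reading__x (external objects carry the __x suffix). The rows stay in the external database, and SOQL fetches them at request time:

// Query the external object like a native sObject; no Salesforce data storage is used.
// The object and field names here are assumptions for the sketch.
String deviceId = 'THERM-001'; // hypothetical device identifier
List<IoT_Reading__x> recentReadings = [
    SELECT Device_Id__c, Reading_Value__c, Reading_Time__c
    FROM IoT_Reading__x
    WHERE Device_Id__c = :deviceId
    LIMIT 200
];

Note that some SOQL features (for example, aggregate functions) are restricted for external objects depending on the adapter, so heavy aggregation logic may need to run in Apex after retrieval.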
Incorrect Options:
A. 🔁 Bidirectional integration. This involves syncing data both to and from an external system. It is overly complex for a read-only requirement and would likely involve storing a copy of the data in Salesforce, defeating the purpose of saving storage.
B. ➡️ Unidirectional integration. This typically involves importing the data into Salesforce on a schedule (e.g., nightly ETL). This would consume massive amounts of data storage (100k records/day) and the data would not be available in real-time, only as of the last load.
D. 💾 Persistence. This means storing the data physically within Salesforce. This is the exact opposite of what is needed to "keep limits in mind," as it would quickly consume the org's data storage allocation with millions of new records each month.
Reference:
Salesforce Help: About Salesforce Connect
Salesforce Help: External Objects
Universal Containers wants to automatically archive all inactive Account data that is older than 3 years. The information does not need to remain accessible within the application. Which two methods should be recommended to meet this requirement? Choose 2 answers
A.
Use the Force.com Workbench to export the data.
B.
Schedule a weekly export file from the Salesforce UI.
C.
Schedule jobs to export and delete using an ETL tool.
D.
Schedule jobs to export and delete using the Data Loader.
Schedule jobs to export and delete using an ETL tool.
Schedule jobs to export and delete using the Data Loader.
Explanation:
Universal Containers needs to archive inactive Account data older than three years. The key requirements are that the process must be automated and the data does not need to be accessible within Salesforce after archiving. This points to a solution that involves both exporting and permanently deleting the data from the Salesforce platform, using tools that can handle a scheduled, bulk process.
Correct Options
✅ C. Schedule jobs to export and delete using an ETL tool.
An ETL (Extract, Transform, Load) tool is the most robust and scalable solution for this requirement. Tools like MuleSoft, Informatica, or Talend can be configured to connect to Salesforce, query and export the specified data, store it in an external database or data warehouse, and then use the Salesforce APIs to delete the records. This approach is highly automated, reliable, and can handle large data volumes efficiently.
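As a sketch, the extract query such a job might run is shown below (assumptions: a hypothetical Active__c checkbox flags inactive accounts, and "older than 3 years" is keyed off LastActivityDate):

// Records the scheduled ETL job would export to external storage and then delete;
// the field choices are assumptions, not given in the question
List<Account> toArchive = [
    SELECT Id, Name, LastActivityDate
    FROM Account
    WHERE Active__c = false
      AND LastActivityDate < LAST_N_YEARS:3
];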
✅ D. Schedule jobs to export and delete using the Data Loader.
The Data Loader is a powerful, user-friendly tool for bulk data operations. While it is primarily a desktop application, it can be run from the command line using the process-conf.xml file. This allows for scheduling the data export and subsequent deletion of records. This is a common and effective method for automated, recurring data archiving tasks, especially for organizations without a dedicated ETL platform.
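A minimal process-conf.xml sketch for the extract step is shown below; the bean and file names are illustrative, and connection settings (endpoint, username, encrypted password) are omitted for brevity:

<beans>
    <bean id="accountArchiveExtract"
          class="com.salesforce.dataloader.process.ProcessRunner" singleton="false">
        <property name="configOverrideMap">
            <map>
                <entry key="sfdc.entity" value="Account"/>
                <entry key="process.operation" value="extract"/>
                <entry key="sfdc.extractionSOQL"
                       value="SELECT Id FROM Account WHERE Active__c = false AND LastActivityDate &lt; LAST_N_YEARS:3"/>
                <entry key="dataAccess.type" value="csvWrite"/>
                <entry key="dataAccess.name" value="archive/accounts.csv"/>
            </map>
        </property>
    </bean>
</beans>

A companion bean that performs the delete (process.operation set to delete or hard_delete, reading the exported CSV) completes the archive, and both beans can be invoked on a recurring schedule via the operating system's task scheduler.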
Incorrect Options
❌ A. Use the Force.com Workbench to export the data.
The Force.com Workbench is an excellent tool for quick, on-demand data queries and exports, but it is not designed for scheduled, automated tasks. It is a web-based utility and lacks the functionality to run recurring jobs. This method would require manual intervention every time the archive needs to be performed, which does not meet the "automatically" requirement.
❌ B. Schedule a weekly export file from the Salesforce UI.
The weekly export feature in the Salesforce UI is a backup tool, not an archiving solution. While it exports data, it does not delete the records from the live Salesforce org. The requirement is to remove the inactive data from the application, which this feature does not accomplish. It is meant for data recovery, not for data management or archiving.
Reference
Trailhead Module: Data Management
Salesforce Help Article: Data Loader Guide
Salesforce Help Article: Best Practices for Archiving Data in Salesforce
Northern Trail Outfitters is migrating to Salesforce from a legacy CRM system that identifies agent relationships in a lookup table. What should the data architect do in order to migrate the data to Salesforce?
A.
Create custom objects to store agent relationships.
B.
Migrate to Salesforce without a record owner.
C.
Assign record owner based on relationship.
D.
Migrate the data and assign to a non-person system user.
Create custom objects to store agent relationships.
Explanation:
Summary:
The data architect needs to migrate data from a legacy CRM, which uses a lookup table for agent relationships, to Salesforce. The primary task is to correctly model these relationships in Salesforce to ensure data integrity and functionality. Simply migrating the data without defining the proper relationships would lead to a loss of valuable business context and a fragmented data model. The Salesforce platform uses a relational database model, so the relationships must be clearly defined.
Correct Option
✅ A. Create custom objects to store agent relationships.
To properly represent a many-to-many relationship, which is often how agent-to-record relationships are structured, the best practice in Salesforce is to create a junction object. This custom object would have two master-detail or lookup relationships: one to the agent object (e.g., a Contact or User) and one to the related record (e.g., an Account or Opportunity). This approach preserves the data, maintains the relationships, and allows for accurate reporting and automation.
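As an illustration, assume a hypothetical Agent_Relationship__c junction object with an Agent__c lookup (to User) and an Account__c lookup, mirroring the legacy lookup table with one row per agent-account pair:

// Traverse the junction object to recover the many-to-many relationships;
// Role__c is a hypothetical attribute carried over from the legacy lookup table
Id accountId = [SELECT Id FROM Account LIMIT 1].Id; // sample record for the sketch
List<Agent_Relationship__c> rels = [
    SELECT Agent__r.Name, Account__r.Name, Role__c
    FROM Agent_Relationship__c
    WHERE Account__c = :accountId
];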
Incorrect Options
❌ B. Migrate to Salesforce without a record owner.
Every record in Salesforce must have an owner. Attempting to migrate data without assigning an owner would fail, as it's a fundamental requirement of the Salesforce platform's data model and security framework. The owner is crucial for record access, sharing rules, and reporting.
❌ C. Assign record owner based on relationship.
While assigning a record owner is necessary, simply assigning the agent as the owner might not be the correct approach. The agent might not be the appropriate user to own the record in all cases. This also fails to address the underlying many-to-many relationship issue, as a single record can only have one owner. It does not replicate the original lookup table's functionality.
❌ D. Migrate the data and assign to a non-person system user.
Assigning a system user as the owner might be a temporary solution for data loading but it is not a long-term data governance strategy. This practice can lead to a data ownership skew and makes it difficult to manage, report on, or secure the records based on a real person's role or hierarchy. It does not solve the relationship modeling problem.
Reference
Trailhead Module: Data Modeling
Salesforce Help Article: Designing a Relational Data Model
Universal Containers (UC) is implementing a new customer categorization process where customers should be assigned to a Gold, Silver, or Bronze category if they've purchased UC's new support service. Customers are expected to be evenly distributed across all three categories. Currently, UC has around 500,000 customers and is expecting 1% of existing non-categorized customers to purchase UC's new support service every month over the next five years.
What is the recommended solution to ensure long-term performance, bearing in mind the above requirements?
A.
Implement a new global picklist custom field with Gold, Silver, and Bronze values and enable it in Account.
B.
Implement a new picklist custom field in the Account object with Gold, Silver, and Bronze values.
C.
Implement a new Categories custom object and a master-detail relationship from Account to Category.
D.
Implement a new Categories custom object and create a lookup field from Account to Category.
Implement a new picklist custom field in the Account object with Gold, Silver, and Bronze values.
Explanation:
Universal Containers needs to categorize 500,000 customers into Gold, Silver, or Bronze based on their purchase of a new support service, with 1% of non-categorized customers adopting monthly for five years. The solution must ensure long-term performance. A simple, scalable approach is required to track categories efficiently without overcomplicating the data model or impacting system performance.
Correct Option: 🅱️ Implement a new picklist custom field in the Account object with Gold, Silver, and Bronze values
This option is ideal because a picklist field on the Account object is simple, scalable, and efficient for categorizing customers. It avoids additional objects, minimizing storage and query complexity. With only three categories and roughly 226,000 customers categorized over five years (1% of the remaining non-categorized customers each month compounds to 500,000 × (1 − 0.99^60) ≈ 226,000), this solution ensures performance without overengineering, aligning with Salesforce's best practices for data modeling.
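As a sketch, reporting rollups over the assumed Category__c picklist stay a single aggregate query, with no joins to a separate object:

// Count customers per category; Category__c is the assumed picklist API name
List<AggregateResult> counts = [
    SELECT Category__c, COUNT(Id) total
    FROM Account
    WHERE Category__c != null
    GROUP BY Category__c
];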
Incorrect Option: 🅰️ Implement a new global picklist custom field with Gold, Silver, and Bronze values and enable it in Account
A global picklist is incorrect because it’s designed for reusability across multiple objects, which isn’t needed here. The requirement is specific to the Account object, and a global picklist adds unnecessary complexity. It doesn’t improve performance and could complicate maintenance, as global picklists are managed at the org level, making them less flexible for Account-specific categorization needs.
Incorrect Option: 🅲 Implement a new Categories custom object and a master-detail relationship from Account to Category
A master-detail relationship is unsuitable because it implies Accounts depend on Categories, which isn't the case. It also requires a separate object, increasing storage and query complexity for a simple categorization need. With only three categories, this overcomplicates the data model, potentially degrading performance as the number of categorized customers grows toward roughly 226,000 over five years.
Incorrect Option: 🅳 Implement a new Categories custom object and create a lookup field from Account to Category
Using a lookup field with a Categories object is inefficient for this use case. It requires managing a separate object, increasing storage and maintenance overhead. For a simple three-value categorization, this approach is overly complex and could impact performance with 500,000 Accounts, especially as data grows, making it less optimal than a picklist field.
Reference
Salesforce Help: Custom Field Types
Salesforce Architect Guide: Data Modeling Best Practices