Last Updated On : 20-Feb-2026
Salesforce Certified Platform Data Architect - Plat-Arch-201 Practice Test
Prepare with our free Salesforce Certified Platform Data Architect - Plat-Arch-201 sample questions and pass with confidence. Our Salesforce-Platform-Data-Architect practice test is designed to help you succeed on exam day.
Salesforce 2026
Universal Containers has implemented Salesforce for its operations. In order for customers to be created in their MDM solution, the customer record needs to have the following attributes:
1. First Name
2. Last Name
3. Email
Which option should the data architect recommend to mandate this when customers are created in Salesforce?
A.
Configure Page Layout marking attributes as required fields.
B.
Create validation rules to check if the required attributes are entered.
C.
Mark Fields for the attributes as required under Setup.
D.
Build validation in Integration with MDM to check required attributes.
Create validation rules to check if the required attributes are entered.
Explanation:
🔴 A. Configure Page Layout marking attributes as required fields. (Incorrect):
Marking a field as required on a page layout is a UI-level check. It can be easily bypassed by data loaded via Data Loader, other APIs, or processes that do not use the standard UI (e.g., integrations, Apex code). Therefore, it is not a foolproof method for ensuring data quality for an integration.
🟢 B. Create validation rules to check if the required attributes are entered. (Correct):
Validation rules enforce data integrity at the database level. They fire regardless of how the record is created or edited—whether through the UI, API, Data Loader, or Apex. This ensures that any record saved in Salesforce meets the criteria (e.g., First Name, Last Name, and Email are populated), making it the most robust and secure method to guarantee the data sent to the MDM system is complete.
🔴 C. Mark Fields for the attributes as required under Setup. (Incorrect):
This is stronger than a page layout requirement because a universally required field is enforced at the field level, across both the UI and the API. However, many standard fields (for example, Email on Contact) cannot be marked as required under Setup, and a field-level requirement cannot express conditional logic. A validation rule provides more flexibility and control (e.g., you can create complex criteria spanning multiple fields), which is why it is the better recommendation.
🔴 D. Build validation in Integration with MDM to check required attributes. (Incorrect):
This is a reactive and inefficient approach. It would allow incomplete data to be saved in Salesforce. The integration would then have to handle the error, potentially requiring a complex error-handling process to reject the record and notify Salesforce. It is a best practice to enforce data quality at the system of entry (Salesforce) rather than in a downstream system.
🔧 Reference/Concept:
This question tests the understanding of data validation strategies and their scope. A Data Architect must know that validation rules provide the strongest, most universal enforcement of business logic and data quality within Salesforce, which is critical for ensuring the success of downstream integrations.
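As an illustration of answer B, a validation rule on the Contact object covering all three attributes might use an error condition formula like the one below (a sketch; the rule blocks the save whenever the formula evaluates to true):

```
/* Block the save if any attribute required by the MDM system is blank */
OR(
  ISBLANK(FirstName),
  ISBLANK(LastName),
  ISBLANK(Email)
)
```

Because the rule runs on every save path (UI, API, Data Loader, Apex), no incomplete customer record can reach the MDM integration.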
Northern Trail Outfitters (NTO) has multiple Salesforce orgs based on regions. Users need read-only access to customers across all Salesforce orgs.
Which feature in Salesforce can be used to provide access to customer records across all NTO orgs?
A.
Salesforce Connect
B.
Salesforce to Salesforce
C.
Federated Search
D.
External APIs
Salesforce Connect
Explanation:
✅ A. Salesforce Connect (Correct):
Salesforce Connect is the correct tool for this scenario. It uses the OData standard to present data from an external source (which could be another Salesforce org exposed via an OData endpoint or any other external system) as an External Object within Salesforce. Users can then have read-only (or read-write) access to these "virtual" records as if they were native to their org, without the data being physically stored there. This is ideal for providing cross-org, read-only access.
❌ B. Salesforce to Salesforce (S2S):
S2S is designed for sharing opportunities and accounts between two partner orgs for collaboration. It is not intended for providing broad read-only access to all customer records across multiple orgs. It's more about specific record sharing between business partners.
❌ C. Federated Search:
Federated Search allows users to search across multiple data sources from within Salesforce search. While it can find records in other orgs, it does not provide seamless, integrated access to view and work with those records in list views, reports, or page layouts like Salesforce Connect does.
❌ D. External APIs:
While APIs (like the Salesforce REST or SOAP APIs) could be used to build a custom solution for accessing data in another org, this would require significant custom development. It is not a pre-built, declarative feature of Salesforce designed for this specific purpose, unlike Salesforce Connect.
🔧 Reference/Concept:
This question tests knowledge of cross-org data access strategies. The key is distinguishing between tools for collaboration (S2S), search (Federated Search), and integrated data virtualization (Salesforce Connect). Salesforce Connect is the premier tool for creating a unified, real-time view of data distributed across multiple systems.
UC is having issues using Informatica Cloud Loader to export 10M+ Order records. Each Order record has 10 Order Line Items. What two steps can you take to help correct this? Choose two answers.
A.
Export in multiple batches
B.
Export Bulk API in parallel mode
C.
Use PK Chunking
D.
Limit Batch to 10K records
B.
Export Bulk API in parallel mode
C.
Use PK Chunking
Explanation:
🟢 Why B is correct:
The Bulk API is specifically designed for large-scale data operations (over 50,000 records). Its "parallel mode" processes batches of data concurrently, significantly speeding up the export job for a massive dataset of 10 million+ records. Informatica Cloud Loader can leverage this API.
🟢 Why C is correct:
PK Chunking is a feature that works with the Bulk API to automatically split a large query on a large object (like Order) into smaller, manageable chunks based on the record ID (Primary Key). This prevents query timeouts, avoids straining database resources, and improves the reliability and performance of the export.
🔴 Why A is incorrect:
While breaking into batches is the general idea, options B and C specify the standard, automated, and optimized Salesforce methods for doing this. "Export in multiple batches" is a vague, manual approach that is inefficient and error-prone compared to using the built-in Bulk API with PK Chunking.
🔴 Why D is incorrect:
Limiting batches to 10K records is a good practice for the SOAP API or for manual data loads, but it does not address the problem here: the Bulk API already caps each batch at 10,000 records and is designed for much larger volumes. With 10M records the core issue is the query scope, not the batch size, and PK Chunking addresses that root cause.
✔️ Key Concept/Takeaway:
For exporting or querying extremely large data volumes (EDV) in Salesforce, the optimal pattern is to use the Bulk API in parallel and enable PK Chunking. This combination ensures the operation is broken down into efficient, parallelized chunks that can be processed without timeouts.
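As a sketch of how these two options combine in practice: PK Chunking is enabled with a single HTTP header when the Bulk API (1.0) job is created, and parallel concurrency mode lets the resulting batches run concurrently. The instance URL, session ID, and API version below are placeholder assumptions; a tool like Informatica Cloud Loader sets these for you when the corresponding options are enabled.

```python
import json
import urllib.request

def pk_chunking_headers(session_id: str, chunk_size: int = 250000) -> dict:
    """Headers that create a Bulk API job with PK Chunking enabled.

    Salesforce splits the query into ranges of `chunk_size` record IDs;
    each range becomes its own batch, which parallel mode then processes
    concurrently instead of running one giant, timeout-prone query.
    """
    return {
        "X-SFDC-Session": session_id,
        "Sforce-Enable-PKChunking": f"chunkSize={chunk_size}",
        "Content-Type": "application/json",
    }

def create_order_export_job(instance_url: str, session_id: str):
    """Create a parallel-mode Bulk API (1.0) query job on Order."""
    body = json.dumps({
        "operation": "query",
        "object": "Order",
        "concurrencyMode": "Parallel",  # batches processed concurrently
        "contentType": "CSV",
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{instance_url}/services/async/58.0/job",  # API version is an assumption
        data=body,
        headers=pk_chunking_headers(session_id),
    )
    return urllib.request.urlopen(req)  # job metadata on success
```

Once the job completes, each chunk's results are downloaded as a separate CSV batch, which is what keeps a 10M-row export within query limits.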
Salesforce is being deployed in Ursa Major Solar's disparate, multi-system ERP environment. Ursa Major Solar wants to maintain data synchronization between systems. Which two techniques should be used to achieve this goal? (Choose two.)
A.
Integrate Salesforce with the ERP environment.
B.
Utilize Workbench to update files within systems
C.
Utilize an MDM strategy to outline a single source of truth.
D.
Build synchronization reports and dashboards.
A.
Integrate Salesforce with the ERP environment.
C.
Utilize an MDM strategy to outline a single source of truth.
Explanation:
✅ Why A is correct:
Direct integration between Salesforce and the various ERP systems (e.g., using APIs, middleware like MuleSoft, or Salesforce Connect) is the technical mechanism that enables the automated, real-time or batch-based exchange of data. Without integration, there is no automated synchronization.
✅ Why C is correct:
In a disparate multi-system environment, a Master Data Management (MDM) strategy is the foundational governance framework. It defines which system is the authoritative source (single source of truth) for which piece of data (e.g., "The ERP is the master for customer financial data, Salesforce is the master for customer engagement data"). This prevents conflicts and ensures all systems synchronize to the correct master value.
❌ Why B is incorrect:
Workbench is a powerful administrative tool for data loading and inspection, but it is a manual, point-in-time tool. It is not a technique for maintaining ongoing, automated data synchronization between systems.
❌ Why D is incorrect:
Reports and dashboards are for monitoring and visualizing data. They are read-only and have no capability to actually synchronize or update data between different systems. They can show you that data is out of sync but cannot fix it.
🔧 Key Concept/Takeaway:
Effective data synchronization requires both a technical solution (Integration) to move the data and a governance strategy (MDM) to define the rules and authoritative sources, ensuring consistency and accuracy.
Universal Containers (UC) stores 10 million rows of inventory data in a cloud database. As part of creating a connected experience in Salesforce, UC would like to surface this inventory data in Sales Cloud without an import. UC has asked its data architect to determine if Salesforce Connect is needed.
Which three considerations should the data architect make when evaluating the need for Salesforce Connect? (Choose three.)
A.
You want real-time access to the latest data from other systems.
B.
You have a large amount of data and would like to copy subsets of it into Salesforce.
C.
You need to expose data via a virtual private connection.
D.
You have a large amount of data that you don't want to copy into your Salesforce org.
E.
You need to access small amounts of external data at any one time.
A.
You want real-time access to the latest data from other systems.
D.
You have a large amount of data that you don't want to copy into your Salesforce org.
E.
You need to access small amounts of external data at any one time.
Explanation:
Option A: You want real-time access to the latest data from other systems.
Why it’s correct: Salesforce Connect enables real-time access to external data without storing it in Salesforce. It uses external objects to map data from external systems (like UC’s cloud database) via protocols like OData, allowing users to view and interact with the data as if it were native to Salesforce. For UC’s goal of a connected experience without importing 10 million rows, real-time access to the latest inventory data is a key use case for Salesforce Connect.
Example: Imagine a live feed from a weather app. Instead of downloading all weather data, you see real-time updates as needed. Salesforce Connect works similarly, fetching external data on demand.
Context: Real-time access is critical when external data changes frequently, and UC needs up-to-date inventory information in Sales Cloud.
Option D: You have a large amount of data that you don’t want to copy into your Salesforce org.
Why it’s correct: Salesforce Connect is ideal for scenarios involving large datasets (like UC’s 10 million rows) that you don’t want to import into Salesforce due to storage limits, performance concerns, or data governance policies. Instead, Salesforce Connect allows UC to access the cloud database’s inventory data virtually, keeping it in the external system while integrating it into Sales Cloud workflows.
Example: Think of a digital library card catalog. Instead of owning every book, the catalog references books stored elsewhere. Salesforce Connect references external data without storing it.
Context: Importing 10 million rows into Salesforce could strain storage and performance, making Salesforce Connect a better fit.
Option E: You need to access small amounts of external data at any one time.
Why it’s correct: Salesforce Connect is efficient for accessing small subsets of external data on demand, which aligns with UC’s need for a connected experience. For example, users might only need to view specific inventory records during a Sales Cloud interaction, not the entire 10 million rows. Salesforce Connect retrieves only the requested data, reducing performance overhead.
Example: It’s like ordering a single pizza slice from a restaurant instead of buying the whole pizza. Salesforce Connect fetches only the data you need at the moment.
Context: This consideration ensures scalability and performance when users query specific records from the external database.
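To make options A, D, and E concrete: once the cloud database is mapped as an external object (external objects carry the __x suffix), users and code query it like any other Salesforce object, and each query is delegated to the external system at request time. The object and field names below (Inventory__x, ItemNumber__c, QuantityOnHand__c), the instance URL, and the API version are hypothetical stand-ins for UC's inventory table, not real metadata:

```python
import json
import urllib.parse
import urllib.request

def build_inventory_soql(item_number: str) -> str:
    """SOQL against a hypothetical Salesforce Connect external object.

    Only the rows matching the filter travel over the wire -- the
    10 million-row table stays in the external cloud database.
    """
    return (
        "SELECT ItemNumber__c, QuantityOnHand__c "
        f"FROM Inventory__x WHERE ItemNumber__c = '{item_number}'"
    )

def query_inventory(instance_url: str, access_token: str, item_number: str):
    """Run the query via the REST API; Salesforce forwards it (e.g. as an
    OData request) to the external source and returns live results."""
    url = (f"{instance_url}/services/data/v58.0/query?q="
           + urllib.parse.quote(build_inventory_soql(item_number)))
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["records"]
```

This pattern shows why Connect fits UC's scenario: the data is always current (A), never counts against org storage (D), and only small result sets are fetched per request (E).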
Incorrect Answers
Option B: You have a large amount of data and would like to copy subsets of it into Salesforce.
Why it’s incorrect: This option contradicts UC’s requirement to avoid importing data into Salesforce. Salesforce Connect is designed to access external data without copying it into the Salesforce org. If UC wanted to copy subsets of the 10 million rows, they would use data import tools (e.g., Data Loader or ETL processes) instead of Salesforce Connect.
Common Misconception: Some might think Salesforce Connect can selectively import data, but it’s strictly for virtual access, not data replication.
Option C: You need to expose data via a virtual private connection.
Why it’s incorrect: While Salesforce Connect can integrate with external systems securely (e.g., via OData over HTTPS), the term “virtual private connection” is not a standard Salesforce term and doesn’t specifically align with Salesforce Connect’s primary use cases. Salesforce Connect focuses on data access, not exposing Salesforce data to external systems via a private network. If UC needed a virtual private connection (e.g., VPN), this would involve other Salesforce features like Salesforce Private Connect, not Salesforce Connect.
Common Misconception: Users might confuse Salesforce Connect’s external data integration with network-level connectivity solutions, but these are distinct concepts.
Reference
Salesforce Documentation:
➡️ Salesforce Connect Overview – Explains how Salesforce Connect provides real-time access to external data without copying it.
➡️ External Objects – Describes how external objects work with Salesforce Connect to map external data.
➡️ Salesforce Connect Use Cases – Covers scenarios where Salesforce Connect is appropriate, including large datasets and real-time access.
Trailhead Module: Salesforce Connect – Provides practical examples of using Salesforce Connect for external data integration.