Salesforce-Platform-Integration-Architect Practice Test

Salesforce Spring '25 Release - Updated On 10-Nov-2025

106 Questions

Universal Containers has a requirement that, for all accounts that do NOT qualify for a business extension (a custom field on the Account record) for the next month, a meeting invite is sent to their contacts from the marketing automation system to discuss next steps. It is estimated there will be approximately 1 million contacts per month. What is the recommended solution?

A.

Use Batch Apex

B.

Use Time-based workflow rule

C.

Use Process Builder

D.

Use Trigger.

A.   

Use Batch Apex



Explanation:

The requirement involves processing a very large volume of records (approximately 1 million contacts per month) based on a specific business condition (Account field status) and initiating an integration action (sending a meeting invite via an external Marketing Automation System). To prevent hitting Salesforce's strict governor limits (like CPU time, heap size, and DML rows) when processing such massive data volumes and performing asynchronous callouts, the recommended approach is to use a dedicated asynchronous processing mechanism designed for bulk operations.

Correct Option: ✅

A. Use Batch Apex
Batch Apex is the ideal solution because it is designed for processing up to 50 million records by dividing the workload into smaller, manageable batches (typically 200 records per batch). This partitioning ensures that the entire operation, which involves querying a large dataset and making a subsequent callout to the external marketing system, remains well within the governor limits. The asynchronous nature of Batch Apex allows for high-volume, reliable, and scheduled execution of the required complex logic.
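As an illustration, a minimal Batch Apex sketch is shown below. The Business_Extension_Qualified__c field, the Marketing_Automation Named Credential, and the /invites resource are assumptions made for the example, not details from the question.

// Hypothetical sketch: field, Named Credential, and endpoint names are illustrative.
public class ContactInviteBatch implements Database.Batchable<SObject>, Database.AllowsCallouts {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Contacts whose parent Account does NOT qualify for a business extension
        return Database.getQueryLocator([
            SELECT Id, Email
            FROM Contact
            WHERE Account.Business_Extension_Qualified__c = false
        ]);
    }

    public void execute(Database.BatchableContext bc, List<Contact> scope) {
        // One callout per batch of up to 200 contacts keeps each transaction
        // comfortably within callout, CPU, and heap limits.
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Marketing_Automation/invites');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(scope));
        new Http().send(req);
    }

    public void finish(Database.BatchableContext bc) {
        // Optionally notify an administrator or chain a follow-up job here.
    }
}

The job could then be started monthly, for example with Database.executeBatch(new ContactInviteBatch(), 200); invoked from a Schedulable class.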

Incorrect Option: ❌

B. Use Time-based workflow rule
Time-based workflow rules are not suitable for processing 1 million records monthly. Workflow rules are designed for simpler automation and are severely limited by governor constraints. Attempting to enqueue and process such a massive number of actions through this mechanism would likely lead to system performance degradation and constant failure to execute within the allowed limits for queued jobs.

C. Use Process Builder
Process Builder is an excellent declarative tool but operates within the synchronous transactional limits when an event fires. Attempting to initiate the complex logic (querying 1 million contacts, preparing data, and making a callout) from a Process Builder would cause it to immediately hit transaction limits, such as the CPU time or the total number of SOQL queries. It is not architecturally sound for bulk, scheduled, or high-volume processing.

D. Use Trigger
An Apex Trigger executes synchronously, typically on a DML event (insert, update, delete). Executing the logic for querying and integrating with an external system for 1 million contacts within a synchronous trigger context is impossible; it would instantly fail due to exceeding governor limits like the CPU time limit (10,000 milliseconds). Triggers are reserved for real-time validation or context-specific data manipulation.

Reference:
Salesforce Apex Developer Guide: Batch Apex
Apex Developer Guide

A company in a heavily regulated industry requires data in legacy systems to be displayed in Salesforce user interfaces (UIs). They are proficient in their cloud-based ETL (extract, transform, load) tools. They expose APIs built on their on-premise middleware to cloud and on-premise applications. Which two findings about their current state will allow copies of legacy data in Salesforce? Choose 2 answers

A.

Only on-premise systems are allowed access to legacy systems

B.

Cloud-based ETL can access Salesforce and supports queues

C.

On-premise middleware provides APIs to legacy systems data

D.

Legacy systems can use queues for on-premise integration

B.   

Cloud-based ETL can access Salesforce and supports queues


C.   

On-premise middleware provides APIs to legacy systems data



Explanation

To create copies of legacy data inside Salesforce, the source data must be accessible and an integration tool must be capable of loading that data into Salesforce. Since the company already uses cloud-based ETL tools and exposes APIs through on-premise middleware, the critical factors are whether the ETL tool can connect to Salesforce and whether the middleware provides API access to legacy data. Options B and C are the findings that confirm this capability.

✔️ Correct Options

B. Cloud-based ETL can access Salesforce and supports queues
A cloud ETL tool that can access Salesforce ensures the ability to extract from legacy systems (via middleware), transform, and load the data into Salesforce. Queue support improves fault tolerance and batch handling. This capability is essential for copying legacy data into Salesforce on a recurring or scheduled basis, making this a key enabling factor.

C. On-premise middleware provides APIs to legacy systems data
If the middleware already exposes APIs to legacy systems, the ETL tool can pull data from these APIs without requiring direct access to legacy databases. This is exactly what allows the ETL solution to fetch legacy data and replicate it in Salesforce. The presence of accessible APIs is fundamental to creating data copies in Salesforce.

❌ Incorrect Options

A. Only on-premise systems are allowed access to legacy systems
If only on-premise systems are allowed access, cloud ETL tools would be blocked from retrieving legacy data. This would prevent data replication into Salesforce unless major architecture changes were made. This does not support the ability to copy legacy data to Salesforce.

D. Legacy systems can use queues for on-premise integration
Queue support within legacy systems may help with internal reliability, but it does not enable data movement into Salesforce. Queue capabilities alone do not create data access or expose the data to cloud ETL tools. This does not contribute to enabling data copies inside Salesforce.

Reference
Salesforce Data Integration Patterns (ETL & API-based Integration):
https://developer.salesforce.com/docs/atlas.en-us.integration_patterns_and_practices.meta/integration_patterns_and_practices/integ_pat_ptn_data_integration.htm

Sales representatives at Universal Containers (UC) use Salesforce Sales Cloud as their primary CRM. UC owns a legacy homegrown application that stores a copy of customer data as well. Sales representatives may edit or update Contact records in Salesforce if there is a change. Both Salesforce and the homegrown application should be kept synchronized for consistency. UC has these requirements:

1. When a Contact record in Salesforce is updated, the external homegrown application should be updated as well.
2. The synchronization should be event driven.
3. The integration should be asynchronous.

Which option should an architect recommend to satisfy the requirements?

A.

Leverage Platform Events to publish a custom event message containing changes to the Contact object.

B.

Leverage Change Data Capture to track changes to the Contact object and write a CometD subscriber on the homegrown application.

C.

Write an Apex Trigger with the @future annotation.

D.

Use an ETL tool to keep Salesforce and the homegrown application in sync on a regular cadence.

A.   

Leverage Platform Events to publish a custom event message containing changes to the Contact object.



Explanation

This scenario requires real-time, event-driven synchronization between Salesforce and an external system. The solution must react immediately to Contact record changes, process them asynchronously to avoid blocking users, and reliably notify the external system. The architecture needs to capture changes as events and push them to the legacy application without manual intervention or scheduled batches.

✔️ Correct Option

(A) ✅ Leverage Platform Events...
Platform Events provide a perfect event-driven, asynchronous messaging pattern. When a Contact updates, an Apex trigger publishes a custom Platform Event containing the changed data. The external application subscribes to these events via the CometD protocol, receiving real-time notifications. This meets all requirements: event-driven, asynchronous, and immediate synchronization without user delays.
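As an illustration, a minimal publishing sketch is shown below. The Contact_Change__e platform event and its ContactId__c and Email__c fields are assumptions made for the example; the homegrown application would subscribe to /event/Contact_Change__e over CometD.

// Hypothetical sketch: the platform event and its fields are assumed, not given in the question.
trigger ContactSyncTrigger on Contact (after update) {
    List<Contact_Change__e> events = new List<Contact_Change__e>();
    for (Contact c : Trigger.new) {
        events.add(new Contact_Change__e(
            ContactId__c = c.Id,
            Email__c = c.Email
        ));
    }
    // Publishing is asynchronous; the user's transaction does not wait for subscribers.
    EventBus.publish(events);
}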

❌ Incorrect Options

(B) Leverage Change Data Capture...
While CDC is event-driven and asynchronous, it requires the homegrown application to actively subscribe to the change data stream using the CometD client. This places significant implementation burden on the legacy system to maintain connections and process the CDC payload format, making it less ideal than a custom Platform Event tailored to the external system's needs.

(C) Write an Apex Trigger with @future...
This approach satisfies the asynchronous requirement but is not truly event-driven. The @future method would have to make a callout directly to the homegrown application, which must be reachable at that moment, and there is no built-in retry. It also lacks the delivery guarantees and pub/sub decoupling needed for reliable integration.

(D) Use an ETL tool...
ETL tools operate on scheduled batches, not real-time events. This violates the event-driven requirement since changes wouldn't be synchronized immediately. Scheduled synchronization creates data consistency gaps and doesn't provide the real-time experience sales representatives need.

📚 Reference
The official Salesforce Integration Patterns guide recommends the "Event-Driven Messaging" pattern using Platform Events for real-time, asynchronous integration scenarios where external systems need to be notified of changes immediately. This pattern provides the loose coupling and reliability needed for keeping systems synchronized.

Northern Trail Outfitters needs to present shipping costs and estimated delivery times to their customers. Shipping services used vary by region, and have similar but distinct service request parameters. Which integration component capability should be used?

A.

Enterprise Service Bus to determine which shipping service to use, and transform requests to the necessary format.

B.

Outbound Messaging to request costs and delivery times from Shipper delivery services with automated error retry.

C.

Apex REST Service to implement routing logic to the various shipping services.

D.

Enterprise Service Bus user interface to collect shipper-specific form data.

A.   

Enterprise Service Bus to determine which shipping service to use, and transform requests to the necessary format.



Explanation

Customers need real-time shipping quotes (cost + delivery time) based on region. Each shipping provider has similar but unique request formats. The system must dynamically route and transform outbound requests to the right provider — all within a user-facing flow. Low latency and flexibility are key.

✅ Correct Option: A. Enterprise Service Bus (ESB)
ESB acts as a smart middleware layer to route requests by region (e.g., US → FedEx, EU → DHL).
Built-in message transformation maps Salesforce data to each provider’s unique schema.
Supports orchestration, logging, retries — ideal for multi-vendor integration.
Decouples Salesforce from backend changes — future-proof and scalable.
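From Salesforce's side the callout can stay simple because the ESB hides the per-carrier differences. A minimal sketch, assuming a hypothetical Shipping_ESB Named Credential and a /quotes resource exposed by the ESB:

// Hypothetical sketch: the Named Credential, resource path, and payload fields are assumptions.
public with sharing class ShippingQuoteService {

    public static HttpResponse getQuote(String regionCode, Decimal weightKg, String destinationPostalCode) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Shipping_ESB/quotes');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        // Salesforce sends one canonical payload; the ESB routes by region
        // and transforms it into each carrier's own request format.
        req.setBody(JSON.serialize(new Map<String, Object>{
            'region' => regionCode,
            'weightKg' => weightKg,
            'destination' => destinationPostalCode
        }));
        return new Http().send(req);
    }
}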

❌ Incorrect Option: B. Outbound Messaging
Outbound Messaging sends SOAP-based messages on record changes — not suitable for real-time UI quotes.
No support for request transformation or dynamic routing.
Fire-and-forget model — no response handling for cost/delivery display.

❌ Incorrect Option: C. APEX REST Service
An Apex REST service is for inbound calls into Salesforce, not outbound routing.
Even if misused for outbound callouts, implementing routing and transformation logic in Apex means high maintenance effort and exposure to governor limits.
Tight coupling — every provider change requires code deploy.

❌ Incorrect Option: D. ESB user interface
ESB has no UI component for end-user forms — it’s backend integration middleware.
Collecting shipper-specific form data belongs in Salesforce (e.g., Lightning form), not ESB.
Misunderstands ESB role entirely — it’s not a presentation layer.

📚 Reference
Salesforce Architect Guide – Integration Patterns
ESB in Integration Architecture

Northern Trail Outfitters is creating a distributable Salesforce package for other Salesforce orgs within the company. The package needs to call into a custom Apex REST endpoint in the central org. The security team wants to ensure a specific integration account, which they will authorize after the package is installed, is used in the central org. Which three items should an architect recommend to secure the integration in the package?
Choose 3 answers

A.

Create an Auth provider in the package and set the consumer key and consumer secret of the connected app in the central org.

B.

Contact Salesforce support and create a case to temporarily enable API access for managed packages.

C.

Create a connected app in the central org and add the callback URL of each org the package is installed in to redirect to after successful authentication.

D.

Use an encrypted field to store the password that the security team enters and use password management for external orgs and set the encryption method to TLS 1.2. 

E.

Use the Auth Provider configured and select the identity type as Named Principal with OAuth 2.0 as the protocol and Select Start Authentication Flow on Save.

A.   

Create an Auth provider in the package and set the consumer key and consumer secret of the connected app in the central org.


C.   

Create a connected app in the central org and add the callback URL of each org the package is installed in to redirect to after successful authentication.


E.   

Use the Auth Provider configured and select the identity type as Named Principal with OAuth 2.0 as the protocol and Select Start Authentication Flow on Save.



Explanation

This scenario requires a secure and controlled OAuth integration from multiple subscriber orgs into a central Salesforce org. To ensure only a specific integration account is used, the package must rely on a centrally managed Connected App, a consistent OAuth flow, and an Auth Provider that allows a Named Principal identity type. This ensures all subscribing orgs authenticate using the same integration user authorized by the security team. Options A, C, and E align with these requirements.

✔️ Correct Options

A. Create an Auth Provider in the package and set the consumer key and consumer secret of the connected app in the central org.
This is required because all subscriber orgs need a way to authenticate into the central org. The Auth Provider inside the package uses the consumer key and secret from the central org’s connected app, allowing controlled OAuth authentication. The package does not own the connected app—only references it—which is the correct pattern for distributable integrations.

C. Create a connected app in the central org and add the callback URL of each org the package is installed in to redirect to after successful authentication.
The central org must host the connected app, because that is the OAuth authority. Since this package can be installed in multiple subscriber orgs, each installation org needs its own callback URL added to the connected app. This ensures the OAuth redirection flow works securely for every subscriber org the package is installed in.

E. Use the Auth Provider configured and select the identity type as Named Principal with OAuth 2.0 as the protocol and Select Start Authentication Flow on Save.
Choosing Named Principal ensures the same integration user is used for all incoming calls. This satisfies the security team's requirement to use a single, centrally controlled integration account. OAuth 2.0 establishes secure token-based authentication, and "Start Authentication Flow on Save" initiates the authorization process to bind the integration user.
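Once a Named Credential in the subscriber org is backed by this Auth Provider with the Named Principal identity type, the packaged Apex code never handles tokens directly. A minimal sketch, assuming a hypothetical Central_Org Named Credential and a /services/apexrest/sync resource in the central org:

// Hypothetical sketch: the Named Credential and REST resource names are assumptions.
public with sharing class CentralOrgClient {

    public static HttpResponse pushRecord(String payloadJson) {
        HttpRequest req = new HttpRequest();
        // The Named Credential injects the OAuth token of the single integration
        // user (Named Principal) that the security team authorized in the central org.
        req.setEndpoint('callout:Central_Org/services/apexrest/sync');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(payloadJson);
        return new Http().send(req);
    }
}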

❌ Incorrect Options

B. Contact Salesforce support and create a case to temporarily enable API access for managed packages.
There is no Salesforce setting that temporarily enables API access for managed packages. Managed packages already support API access when permitted by the subscriber org. This option does not contribute to securing the integration or setting up controlled OAuth authentication.

D. Use an encrypted field to store the password and use password management with TLS 1.2.
Password-based integrations are discouraged, insecure, and unnecessary when OAuth is available. OAuth is the recommended and secure authentication method for cross-org integrations. Storing passwords—even in encrypted fields—violates best practices and does not meet the requirement of using a centrally authorized integration account.

Reference
Connected App Use Cases
Auth Providers Overview
https://help.salesforce.com/s/articleView?id=sf.sso_authproviders.htm
Named Principal Identity Type
https://help.salesforce.com/s/articleView?id=sf.sso_authentication_named_principal.htm
