Salesforce-Platform-Integration-Architect Practice Test

Salesforce Spring '25 Release - Updated On 18-Sep-2025

106 Questions

Universal Containers has a requirement that, for all accounts that do NOT qualify for a business extension (a custom field on the Account record) for the next month, a meeting invite be sent to their contacts from the marketing automation system to discuss next steps. It is estimated there will be approximately 1 million contacts per month. What is the recommended solution?

A.

Use Batch Apex

B.

Use Time-based workflow rule

C.

Use Process builder

D.

Use a Trigger

A.   

Use Batch Apex



Explanation:

Processing ~1 million records requires an asynchronous, scalable mechanism:
✔ Batch Apex is explicitly designed for large volumes (up to 50 million records) by breaking them into manageable chunks, each with its own governor limits, and running asynchronously on the Apex batch queue.
✔ Time-based workflows and Process Builder are subject to limits on queued actions and are not designed for massive data volumes. They risk queue overflows and unpredictable performance.
✔ Triggers (even when offloading work to @future methods) operate within per-transaction limits and cannot handle this scale without violating governor limits.

With Batch Apex, you can schedule a monthly job to:

✔ Query Accounts where Business_Extension__c = false and related Contacts.
✔ Iterate through each batch of Contacts and invoke the marketing automation API to create the invites.
✔ Handle retries or failures per batch.

This approach is the most robust and maintainable for high-volume, scheduled processing.
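For illustration only, a minimal sketch of such a batch job is below. The field name Business_Extension__c comes from the scenario, but the Named Credential Marketing_Automation, the /invites path, and the class name are assumptions, not details given in the question.

    // Sketch of a schedulable Batch Apex job; the Named Credential and endpoint path are assumed.
    public class ContactInviteBatch implements Database.Batchable<SObject>, Database.AllowsCallouts, Schedulable {

        public Database.QueryLocator start(Database.BatchableContext bc) {
            // Contacts whose parent Account does not qualify for a business extension
            return Database.getQueryLocator(
                'SELECT Id, Email FROM Contact WHERE Account.Business_Extension__c = false'
            );
        }

        public void execute(Database.BatchableContext bc, List<SObject> scope) {
            // One callout to the marketing automation system per chunk of records
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:Marketing_Automation/invites');
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(scope));
            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() >= 400) {
                // Handle failures per batch: log, retry, or write to an error object
                System.debug(LoggingLevel.ERROR, 'Invite callout failed: ' + res.getStatus());
            }
        }

        public void finish(Database.BatchableContext bc) {
            // Optional: email a summary or chain a follow-up job
        }

        public void execute(SchedulableContext sc) {
            // Lets System.schedule run the job on a monthly cadence
            Database.executeBatch(new ContactInviteBatch(), 200);
        }
    }

It could then be scheduled once per month, for example: System.schedule('Monthly invites', '0 0 2 1 * ?', new ContactInviteBatch());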

A company in a heavily regulated industry requires data in legacy systems to be displayed in Salesforce user interfaces (UIs). They are proficient in their cloud-based ETL (extract, transform, load) tools. They expose APIs built on their on-premise middleware to cloud and on-premise applications. Which two findings about their current state will allow copies of legacy data to be maintained in Salesforce? Choose 2 answers

A.

Only on-premise systems are allowed access to legacy systems

B.

Cloud-based ETL can access Salesforce and supports queues

C.

On-premise middleware provides APIs to legacy systems data

D.

Legacy systems can use queues for on-premise integration

B.   

Cloud-based ETL can access Salesforce and supports queues


C.   

On-premise middleware provides APIs to legacy systems data



Explanation:

To replicate legacy data into Salesforce, you need both:

B. Cloud-based ETL can access Salesforce and supports queues.
Modern cloud ETL platforms (e.g., AWS Glue) offer native Salesforce connectors that read/write Salesforce objects and can orchestrate jobs in response to queue-based triggers, ensuring reliable delivery and retry semantics.

C. On-premise middleware provides APIs to legacy systems data.
Since the middleware already exposes REST/SOAP endpoints for legacy data, the ETL tool can pull from those APIs, transform the payloads, and load the data into Salesforce custom objects—completing the extract and load steps without building custom adapters.

Answers A and D are constraints on the legacy side (on-prem only or queue-only) but do not by themselves enable ETL-driven replication into Salesforce.

Sales representatives at Universal Containers (UC) use Salesforce Sales Cloud as their primary CRM. UC owns a legacy homegrown application that stores a copy of customer data as well. Sales representatives may edit or update Contact records in Salesforce if there is a change. Both Salesforce and the homegrown application should be kept synchronized for consistency. UC has these requirements:

1. When a Contact record in Salesforce is updated, the external homegrown application should be updated as well.
2. The synchronization should be event driven.
3. The integration should be asynchronous.

Which option should an architect recommend to satisfy the requirements?

A.

Leverage Platform Events to publish a custom event message containing changes to the Contact object.

B.

Leverage Change Data Capture to track changes to the Contact object and write a CometD subscriber on the homegrown application.

C.

Write an Apex Trigger with the @future annotation.

D.

Use an ETL tool to keep Salesforce and the homegrown application in sync on a regular cadence.

B.   

Leverage Change Data Capture to track changes to the Contact object and write a CometD subscriber on the homegrown application.



Explanation:

Change Data Capture (CDC) natively publishes events for record changes on standard and custom objects. It is:
✔ Event-driven: automatically streams create, update, delete, and undelete events whenever a Contact changes.
✔ Asynchronous: events are buffered and retrievable via the CometD (Streaming) API, ensuring external systems can subscribe at their own pace without blocking Salesforce.
✔ Minimal configuration: simply enable CDC for the Contact object in Setup; no Apex triggers or middleware polling required.

Platform Events (A) would require custom triggers to publish, adding code. Future methods (C) aren't event-driven and are coupled to the triggering transaction. ETL (D) is batch-oriented, not real-time. CDC provides the cleanest, no-code path for real-time, asynchronous synchronization.
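In this scenario the actual subscriber is the homegrown application's CometD client, but a minimal Apex change event trigger, shown purely for illustration, makes the shape of the events CDC publishes for Contact concrete once CDC is enabled:

    // Optional illustration: fires asynchronously after CDC publishes Contact change events.
    trigger ContactChangeTrigger on ContactChangeEvent (after insert) {
        for (ContactChangeEvent ce : Trigger.new) {
            EventBus.ChangeEventHeader header = ce.ChangeEventHeader;
            // changeType is CREATE, UPDATE, DELETE, or UNDELETE;
            // recordIds lists the affected Contact records.
            System.debug(header.changeType + ' -> ' + String.join(header.recordIds, ', '));
        }
    }

An external CometD subscriber receives the same ChangeEventHeader plus the changed field values, which is exactly what the homegrown application needs to stay in sync.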

Northern Trail Outfitters needs to present shipping costs and estimated delivery times to their customers. Shipping services used vary by region, and have similar but distinct service request parameters. Which integration component capability should be used?

A.

Enterprise Service Bus to determine which shipping service to use, and transform requests to the necessary format.

B.

Outbound Messaging to request costs and delivery times from Shipper delivery services with automated error retry.

C.

Apex REST service to implement routing logic to the various shipping services.

D.

Enterprise Service Bus user interface to collect shipper-specific form data.

A.   

Enterprise Service Bus to determine which shipping service to use, and transform requests to the necessary format.



Explanation:

An Enterprise Service Bus (ESB) provides a centralized mediation layer that can:
✔ Route each request to the appropriate regional shipping service based on metadata (e.g., customer location).
✔ Transform the uniform request into the exact SOAP/REST structure required by each carrier’s API.
✔ Orchestrate error handling, retries, and enrichment of responses before sending back to Salesforce or the front-end.

This matches the Content-Based Router and Message Translator enterprise integration patterns; Salesforce's Integration Patterns and Practices guide likewise recommends middleware such as an ESB when multiple endpoint variations and dynamic routing are involved.

Outbound Messaging (B) lacks dynamic routing/transformation capabilities. An Apex REST service (C) would require significant custom code for each carrier. An ESB UI (D) is not relevant to automated backend integration.
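From Salesforce's side the callout stays carrier-agnostic; the ESB handles routing and translation. A minimal sketch of such a uniform request follows, where the Named Credential Shipping_ESB and the payload fields are assumptions for illustration:

    // Uniform quote request; the ESB routes it to the right regional carrier
    // and translates it into that carrier's required format. Names are illustrative only.
    Map<String, Object> quoteRequest = new Map<String, Object>{
        'region'      => 'EMEA',
        'destination' => 'Berlin, DE',
        'weightKg'    => 12.5
    };
    HttpRequest req = new HttpRequest();
    req.setEndpoint('callout:Shipping_ESB/quotes');
    req.setMethod('POST');
    req.setHeader('Content-Type', 'application/json');
    req.setBody(JSON.serialize(quoteRequest));
    HttpResponse res = new Http().send(req);
    System.debug('Quote response: ' + res.getBody());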

Northern Trail Outfitters is creating a distributable Salesforce package for other Salesforce orgs within the company. The package needs to call into a custom Apex REST endpoint in the central org. The security team wants to ensure a specific integration account is used in the central org that they will authorize after installation of the package. Which three items should an architect recommend to secure the integration in the package?
Choose 3 answers

A.

Create an Auth provider in the package and set the consumer key and consumer secret of the connected app in the central org.

B.

Contact Salesforce support and create a case to temporarily enable API access for managed packages.

C.

Create a connected app in the central org and add the callback URL of each org the package is installed in to redirect to after successful authentication.

D.

Use an encrypted field to store the password that the security team enters and use password management for external orgs and set the encryption method to TLS 1.2. 

E.

Use the Auth Provider configured and select the identity type as Named Principal with OAuth 2.0 as the protocol and Select Start Authentication Flow on Save.

A.   

Create an Auth provider in the package and set the consumer key and consumer secret of the connected app in the central org.


C.   

Create a connected app in the central org and add the callback URL of each org the package is installed in to redirect to after successful authentication.


E.   

Use the Auth Provider configured and select the identity type as Named Principal with OAuth 2.0 as the protocol and Select Start Authentication Flow on Save.



Explanation:

To secure cross-org callouts in a managed package while allowing the security team to inject their own credentials:

A. Create an Auth. Provider in the package and set the consumer key and consumer secret of the connected app in the central org.
Including an Auth. Provider metadata record in the package lets it ship declaratively; after installation, the admin enters the central org Connected App's consumer key and secret. This leverages Salesforce's OAuth provider framework.

C. Create a Connected App in the central org and add the callback URL of each org the package is installed in.
A Connected App in the central org defines OAuth scopes and allowed redirect URIs. By pre-populating all possible callback URLs (one per subscriber org), the security team can control access and revoke as needed.

E. Use the Auth Provider configured and select the identity type as Named Principal with OAuth 2.0 as the protocol and Select “Start Authentication Flow on Save.”
Named Principal (configured on a Named Credential that references the Auth. Provider) ensures all callouts use the same integration account. Checking “Start Authentication Flow on Save” immediately prompts the security team to authorize that account after installation.

Option B is unnecessary for managed packages. Option D is insecure and against best practice: storing passwords in encrypted custom fields is not proper credential management, and TLS 1.2 is a transport protocol, not a field encryption method.
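Once the subscriber org's admin wires the packaged Auth. Provider to a Named Credential, the packaged code can call the central org without ever handling credentials. A minimal sketch, where the Named Credential name Central_Org and the REST resource path are assumptions for illustration:

    // Callout from packaged code to the central org's custom Apex REST endpoint.
    // 'Central_Org' is an assumed Named Credential that uses the packaged Auth. Provider
    // with identity type Named Principal, so every subscriber org calls out as the single
    // integration account the security team authorized after installation.
    HttpRequest req = new HttpRequest();
    req.setEndpoint('callout:Central_Org/services/apexrest/ContainerOrders');
    req.setMethod('GET');
    HttpResponse res = new Http().send(req);
    System.debug(res.getStatusCode() + ' ' + res.getBody());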
