B2C-Commerce-Architect Exam Questions With Explanations

The best B2C-Commerce-Architect practice exam questions, with research-based explanations for each question, will help you prepare for and pass the exam!

Over 15K students have given SalesforceKing a five-star review

Why choose our Practice Test

By familiarizing yourself with the B2C-Commerce-Architect exam format and question types, you can reduce test-day anxiety and improve your overall performance.

Up-to-date Content

Ensure you're studying with the latest exam objectives and content.

Unlimited Retakes

We offer unlimited retakes, so you can prepare for each question properly.

Realistic Exam Questions

Experience exam-like questions designed to mirror the actual B2C-Commerce-Architect test.

Targeted Learning

Detailed explanations help you understand the reasoning behind correct and incorrect answers.

Increased Confidence

The more you practice, the more confident you will become in your knowledge to pass the exam.

Study whenever you want, from any place in the world.

Salesforce B2C-Commerce-Architect Exam Sample Questions 2025

Start practicing today and take the fast track to becoming Salesforce B2C-Commerce-Architect certified.

2644 already prepared
Salesforce Spring 25 Release
64 Questions
4.9/5.0

A new client is moving from their existing ecommerce platform to B2C Commerce. They have an existing service that connects to the Email Marketing System. The endpoint of the service can directly parse the data posted by the customer from the Storefront page for marketing material subscriptions. It is required that the service implementation on the B2C Commerce site supports authentication and encoding. What type should the Architect document this new service as?

A. HTTP

B. HTTP Form

C. Generic

D. SOAP

A.   HTTP

Explanation:

✅ Why this option is correct?

✅ Option A: HTTP

Explanation:
The service described connects to an Email Marketing System and processes customer-submitted data from the storefront, with a requirement for authentication and encoding. This is a standard HTTP-based service, where the B2C Commerce platform would make HTTP requests to the external service endpoint. The service does not specifically mention needing SOAP or form submissions but rather focuses on securely transmitting data over HTTP. Using HTTP for communication is common in such cases where the service handles POST requests with authentication and data encoding. This approach would be the most flexible and straightforward for integrating with an external service like an Email Marketing System.

❌ Why these options are incorrect?

❌ Option B: HTTP Form

Explanation:
An HTTP Form is typically used for submitting data via HTML forms, and it's not ideal for service-to-service communication where there’s a need for authentication and encoding. Since the scenario describes a direct service integration rather than a form-based submission, this option does not meet the needs of the service implementation, making it less appropriate.

❌ Option C: Generic

Explanation:
While a Generic service type can handle different communication patterns, it is typically used when you are unsure about the specific protocol or service type being used, or when the integration has no predefined template. In this case, the service requires authentication, encoding, and works directly over HTTP, so it’s better to classify it as an HTTP service, which is more precise and aligned with the requirements.

❌ Option D: SOAP

Explanation:
SOAP is a protocol used for service communication, typically requiring specific XML-based messaging. The given scenario does not mention the use of SOAP or XML-based messaging, and the focus seems to be on HTTP-based requests for marketing material subscriptions. Since there’s no indication that the service uses SOAP, this option does not fit the described integration.

The Client is planning to switch to a new Payment Service Provider (PSP). They have approached an Architect to understand the time and effort to integrate the new PSP. The PSP offers a LINK cartridge compatible with SiteGenesis Pipelines, but the Client's website is built on Controllers. Which two options should the Architect take into consideration before starting the analysis?
(Choose 2 answers)

A. Estimate the effort and risk to convert the LINK cartridge from pipelines to controllers.

B. Reach out to the PSP development team and ask if a new cartridge version that supports controllers is under development

C. Produce a proof of concept converting the most essential pipelines into controllers and integrate the cartridge.

D. Look for a different PSP that supports controllers and would not require conversion efforts.

A.   Estimate the effort and risk to convert the LINK cartridge from pipelines to controllers.
B.   Reach out to the PSP development team and ask if a new cartridge version that supports controllers is under development

Explanation:

✅ Why these options are correct?

✅ Option A: Estimate the effort and risk to convert the LINK cartridge from pipelines to controllers.

Explanation:
Since the client’s website is built on controllers, the existing LINK cartridge (which is compatible with SiteGenesis Pipelines) will need to be converted to work with controllers. Estimating the effort and risk involved in this conversion is critical for proper planning and setting expectations. Understanding how complex the process is, and what potential issues might arise, will help the architect estimate time, resources, and cost involved in the integration.

✅ Option B: Reach out to the PSP development team and ask if a new cartridge version that supports controllers is under development.

Explanation:
Before diving into a full conversion, it’s essential to check with the PSP if they are planning to release a new version of the LINK cartridge that supports controllers directly. This could save significant effort and time, as working with a version that already supports controllers would be much more efficient. This option allows the architect to explore whether there is a future-proof solution from the PSP that doesn’t require the conversion.

❌ Why these options are incorrect?

❌ Option C: Produce a proof of concept converting the most essential pipelines into controllers and integrate the cartridge.

Explanation:
While producing a proof of concept (PoC) for converting pipelines to controllers might seem like a good idea, this approach could be time-consuming and resource-intensive. Instead, estimating the full effort, risks, and timelines first is a more practical approach. A PoC might delay the actual integration and is not the first step before analyzing the overall effort required. It's better to evaluate the PSP's existing capabilities and check for a possible pre-existing solution before spending time on a PoC.

 
❌ Option D: Look for a different PSP that supports controllers and would not require conversion efforts.

Explanation:
While this option might seem appealing, switching to a new PSP could introduce a lot of unnecessary complexity and delay the project, especially if the current PSP is already compatible with pipelines (and could be converted). It’s more efficient to attempt the integration with the existing PSP first, either by converting the cartridge or checking if a version supporting controllers is available, rather than completely switching to a new provider that might require a new integration effort and vendor evaluation.

A third-party survey provider offers both an API endpoint for individual survey data and an SFTP server endpoint that can accept batch survey data. The initial implementation of the integration includes:

1. Marking the order as requiring a survey before order placement
2. On the order confirmation page, the survey form is displayed for the customer to fill in
3. The data is sent to the survey provider API, and the order is marked as not requiring a survey

Later it was identified that this solution is not fit for purpose, as the following issues and additional requirements were identified:

1. If the API call fails, the corresponding survey data is lost. The Business requires avoiding data loss.
2. Some customers skipped the form. The Business requires sending a survey email to such customers.
3. The Order Management System (OMS) uses a non-standard XML parser and did not manage to parse orders with the survey until the survey attribute was manually removed from the XML.

How should the Architect address the issues and requirements described above?

A. Create a custom session attribute when the survey is required. Send to the API endpoint in real time. On failure, capture the survey data in the session and reprocess. Use the session attribute to send emails for the cases when the survey was skipped.

B. Create a custom object to store the survey data. Send to the API endpoint using a job. On success, remove the custom object. On failure, send the survey data with API from the next execution of the same job. Use the custom object to send emails for the cases when the survey was skipped.

C. Create a custom object when the survey is required. Send to the API endpoint in real time. On success, remove the object. On failure, capture the survey data in the custom object and later reprocess with a job. Use the custom object to send emails for the cases when the survey was skipped.

D. Send the survey data to the API endpoint in real-time until the survey data is successfully captured. Instruct the OMS development team to update their XML parser, use the Order survey attribute to send emails for the cases when the survey was skipped.

B.   Create a custom object to store the survey data. Send to the API endpoint using a job. On success, remove the custom object. On failure, send the survey data with API from the next execution of the same job. Use the custom object to send emails for the cases when the survey was skipped.

Explanation:

Why Option B?

✅ Persistent Storage (Custom Object) for Survey Data
Ensures no data loss if the API fails (stores survey responses reliably).
Batch processing via a job improves reliability (retries failed submissions).

✅ Handles Skipped Surveys (Email Fallback)
The custom object tracks which customers skipped the survey, enabling email follow-ups.

✅ Avoids OMS XML Parsing Issues
Since the survey data is stored separately (not embedded in the order XML), it prevents OMS parsing failures.

Why Not Other Options?

❌ A. Session-based storage
Session data is volatile (lost if the user leaves or the session expires).
Not reliable for retries or email follow-ups.

❌ C. Hybrid (Real-time + Job)
Real-time API calls can still fail, requiring retries.
Less efficient than a pure batch/job approach (Option B).

❌ D. Force real-time retries + OMS parser update
Does not guarantee data persistence (if API keeps failing).
Depends on OMS changes (out of the Architect’s control).

Best Practice & Reference:
Use Custom Objects for transient but critical data (e.g., surveys).
Batch jobs (instead of real-time) for resilient third-party integrations.
Salesforce B2C Commerce recommends asynchronous processing for reliability.
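The pattern described above can be sketched in B2C Commerce script. This is a minimal illustration, not a full implementation: the custom object type `SurveyResponse`, its `payload` attribute, and the `sendToProvider` callback are hypothetical and would need to be defined in Business Manager and in the job configuration:

```javascript
// B2C Commerce script sketch (platform code, not runnable in Node.js).
// "SurveyResponse" and its custom attributes are hypothetical.
var CustomObjectMgr = require('dw/object/CustomObjectMgr');
var Transaction = require('dw/system/Transaction');

// Storefront: persist the survey answers instead of calling the API inline,
// so a failed API call cannot lose data.
function storeSurvey(orderNo, surveyData) {
    Transaction.wrap(function () {
        var co = CustomObjectMgr.createCustomObject('SurveyResponse', orderNo);
        co.custom.payload = JSON.stringify(surveyData);
    });
}

// Job step: send pending responses; delete on success, keep for retry on failure.
function processPendingSurveys(sendToProvider) {
    var iter = CustomObjectMgr.getAllCustomObjects('SurveyResponse');
    while (iter.hasNext()) {
        var co = iter.next();
        var ok = sendToProvider(JSON.parse(co.custom.payload));
        if (ok) {
            Transaction.wrap(function () {
                CustomObjectMgr.remove(co);
            });
        }
    }
    iter.close();
}
```

Because objects that fail to send simply remain in the table, the next job execution retries them automatically, and the same table can drive the follow-up emails for skipped surveys.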

A developer is remotely fetching the reviews for a product. Assuming it is an HTTP GET request and caching needs to be implemented, what consideration should the developer keep in mind when building the caching strategy?

A. Cache the HTTP service request

B. Remote include with caching only the reviews

C. Use custom cache

D. Cached remote include with cache of the HTTP service

C.   Use custom cache

Explanation:

A. Cache the HTTP service request

The B2C Commerce Service Framework does support response caching on the service profile (for example, a response cache TTL of 600 seconds).

BUT… that service-level cache:
Is global across all instances of that service definition.
Does not always cache per product unless you parameterize the cache key.
Has fewer controls compared to a custom cache (e.g. invalidation, logic for different products).

So it's possible, but less flexible.
→ Possible, but limited. Not best practice for per-product caching.

B. Remote include with caching only the reviews

A remote include calls a B2C controller and injects HTML into the page.
You can set caching on the include (e.g. iscache tags).

BUT:

Still requires generating HTML on every include call.
Doesn’t efficiently cache just the raw review data (e.g. JSON).
Introduces more moving parts vs. simply caching the service result.

→ Not ideal. We want data caching, not merely HTML caching.

C. Use custom cache

✅ Custom cache is the most precise solution.
You can store the HTTP response (JSON) keyed by product ID.

You fully control:
TTL
Cache key
Invalidation logic

Works perfectly for reviews:
Fetch once from the remote service
Serve quickly from cache for subsequent requests

Example:

var CacheMgr = require('dw/system/CacheMgr');
// The cache ID must be declared in the cartridge's caches.json, where its TTL is configured.
var reviewsCache = CacheMgr.getCache('ProductReviews');
// get() with a loader callback: returns the cached entry if present; otherwise
// invokes the callback, stores its return value in the cache, and returns it.
var reviews = reviewsCache.get(productId, function () {
    return callRemoteService(productId);
});

→ Recommended best practice. This gives you fine-grained, per-product caching.
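Note that the TTL of a custom cache is not set in script; it is declared in the cartridge's caches.json file (referenced from the cartridge descriptor). A minimal example, assuming the cache ID `ProductReviews` used above:

```json
{
  "caches": [
    {
      "id": "ProductReviews",
      "expireAfterSeconds": 3600
    }
  ]
}
```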

D. Cached remote include with cache of the HTTP service

You could cache both the remote include AND the HTTP service call.

That’s basically duplicating caching layers:
Service cache
Remote include cache

→ Overengineering. One layer of caching (custom cache) is sufficient.

A client has just pushed a new site live to Production. However, during smoke testing, it is found that some customers are not seeing the correct pricing on the Product Detail Page. Which three places should the Architect check first for the cause of this issue?
(Choose 3 answers)

A. Check Log Center

B. Check the Quota Status page.

C. Check the Global Preferences to be sure the settings are correct.

D. Check that there was not an error during replication.

E. Check that the cache is set correctly

A.   Check Log Center
D.   Check that there was not an error during replication.
E.   Check that the cache is set correctly

Explanation:

✅ Why these options are correct?

A. Check Log Center

Correct. The Log Center is critical for troubleshooting any production issue. If there’s a problem with pricing logic (e.g. pricebooks, promotions, custom logic), you might see:
Errors in pricing services
Custom exceptions
Scripts failing during product rendering

Checking logs helps quickly spot whether this is a systemic error or localized problem.

✅ Correct choice.

D. Check that there was not an error during replication.

Correct. Pricebooks and catalog data are usually replicated from staging to production. If replication fails or partially completes, the pricing data in production might be outdated or incomplete. This is a common reason why users see incorrect prices after a new launch. Always check for errors in the replication logs.

✅ Correct choice.

E. Check that the cache is set correctly.

Correct. Pricing on the Product Detail Page often relies on cached content. If cache invalidation didn’t run after replication or deployment, customers might see:
Stale pricing data
Incorrectly cached product pages

Checking and possibly clearing relevant caches is an essential step when troubleshooting pricing discrepancies.

✅ Correct choice.
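Page caching on the Product Detail Page is typically controlled with the `<iscache>` tag in the rendering template, which is the first thing to inspect when stale prices appear. A representative example (the 24-hour TTL is illustrative, not a recommendation):

```html
<!-- ISML template: cache the page for 24 hours, but vary the cached copy
     by the customer's price book / promotion context so different customer
     groups do not see each other's prices. -->
<iscache type="relative" hour="24" varyby="price_promotion"/>
```

Without `varyby="price_promotion"`, a page cached for one customer's pricing context can be served to customers who should see different prices, which matches the "some customers see incorrect pricing" symptom.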

❌ Why these options are incorrect?

B. Check the Quota Status page.

Incorrect. Quotas monitor system limits (e.g. number of objects, storage, API calls). They don’t directly impact pricing calculations. Unless quota overruns prevented data import or replication (which would show elsewhere), this is unlikely to cause a pricing issue on the PDP.
❌ Eliminate.

C. Check the Global Preferences to be sure the settings are correct.

Not typically the root cause for specific pricing issues. Global Preferences cover things like locales, taxation, default currency, etc. They’re not usually where you’d look first for a problem affecting some customers but not others on a live site. The issue is more likely:
Cache
Replication
Pricebook assignments

So while worth reviewing in a broader troubleshooting process, it’s not top priority for this specific symptom.
❌ Eliminate.

Prep Smart, Pass Easy: Your Success Starts Here!

Transform Your Test Prep with Realistic B2C-Commerce-Architect Exam Questions That Build Confidence and Drive Success!