Over 15K Students have given a five star review to SalesforceKing
Why choose our Practice Tests?
By familiarizing yourself with the B2C-Commerce-Architect exam format and question types, you can reduce test-day anxiety and improve your overall performance.
Up-to-date Content
Ensure you're studying with the latest exam objectives and content.
Unlimited Retakes
We offer unlimited retakes, so you can practice every question until you have mastered it.
Realistic Exam Questions
Experience exam-like questions designed to mirror the actual B2C-Commerce-Architect test.
Targeted Learning
Detailed explanations help you understand the reasoning behind correct and incorrect answers.
Increased Confidence
The more you practice, the more confident you will become in your knowledge to pass the exam.
Study whenever you want, from any place in the world.
Start practicing today and take the fast track to becoming Salesforce B2C-Commerce-Architect certified.
2644 already prepared
Salesforce Spring 25 Release 64 Questions 4.9/5.0
An Architect has been approached by the Business with a request to create a custom product finder. The finder would initially be available on only one site, and would eventually be extended to all sites the Business maintains. There is a requirement that these settings are also available for use in a job context, for export to other systems.
Each site will have a different category available for use by the product finder.
Where should the Architect store the custom settings for use on both the storefront and in a job context?
A. Custom Object with a Site Scope
B. Jobs Framework parameters
C. Category custom attributes
D. Custom Object with an Organizational Scope
A. Custom Object with a Site Scope
Explanation:
Why Option A?
✅ Site-Specific Configuration:
Each site needs different categories for the product finder. A site-scoped custom object allows storing these settings per site (e.g., SiteA_Categories, SiteB_Categories).
✅ Accessible in Both Storefront & Jobs:
Custom objects can be queried in:
Storefront pipelines (e.g., to render the finder).
Scheduled jobs (e.g., to export data to other systems).
✅ Scalability:
New sites can add their own configurations without modifying code.
Why Not Other Options?
❌ B. Jobs Framework parameters
Problem: Job parameters are not accessible on the storefront and lack site-specific granularity.
❌ C. Category custom attributes
Problem: Attributes are tied to categories, not logic (e.g., cannot store finder-specific rules like sorting or filters).
❌ D. Custom Object with Organizational Scope
Problem: An org-wide scope cannot store site-specific settings (e.g., different categories per site).
1. Storefront Usage (a sketch; the object type ProductFinderSettings is illustrative):
var CustomObjectMgr = require('dw/object/CustomObjectMgr');
var Site = require('dw/system/Site');
// Look up the settings keyed by the current site's ID
var settings = CustomObjectMgr.getCustomObject('ProductFinderSettings', Site.getCurrent().getID());
2. Job Usage:
var CustomObjectMgr = require('dw/object/CustomObjectMgr');
var Site = require('dw/system/Site');
// Iterate all sites and export each site's finder settings
Site.getAllSites().toArray().forEach(function (site) {
    var settings = CustomObjectMgr.getCustomObject('ProductFinderSettings', site.getID());
    exportData(settings); // exportData is a hypothetical helper
});
Best Practice:
Use site-scoped custom objects for multi-site configurations.
Avoid hardcoding category IDs in pipelines/jobs.
During code review, the Architect found that there is a service call on every visit of the Product Detail Page (PDP).
What best practices should the Architect ensure are followed for the service configuration? (Choose 2 answers)
A. Circuit breaker is enabled.
B. Service timeout is set.
C. Service mock up call is configured.
D. Service logging is disabled.
A. Circuit breaker is enabled. B. Service timeout is set.
Explanation:
✅ Option A: Circuit breaker is enabled.
A circuit breaker is a best practice for preventing excessive load on external services, especially when the service is called frequently, like on every visit to the Product Detail Page (PDP). By enabling a circuit breaker, you can prevent repeated failures from overwhelming the system. If the service call fails multiple times, the circuit breaker will "trip" and stop further attempts until the service becomes healthy again. This improves the overall stability and resilience of the system.
✅ Option B: Service timeout is set.
Setting a service timeout is a critical best practice when calling external services. If the external service takes too long to respond, the PDP could be significantly delayed, leading to a poor user experience. By setting a reasonable timeout, you ensure that the application does not hang indefinitely waiting for the service to respond and instead handles timeouts gracefully.
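The circuit breaker pattern itself can be illustrated with a minimal, framework-agnostic sketch in plain JavaScript. Note that in B2C Commerce the circuit breaker and timeout are configured declaratively on the service profile in Business Manager, not in code; the names maxFailures and resetMs below are illustrative only:

```javascript
// Minimal circuit breaker sketch (illustrative only). After `maxFailures`
// consecutive failures the breaker "opens" and calls fail fast until
// `resetMs` milliseconds have elapsed, at which point one trial call is allowed.
function createCircuitBreaker(callService, maxFailures, resetMs) {
    var failures = 0;
    var openedAt = null;

    return function (request) {
        if (openedAt !== null) {
            if (Date.now() - openedAt < resetMs) {
                return { ok: false, error: 'CIRCUIT_OPEN' }; // fail fast, no remote call
            }
            openedAt = null; // half-open: allow a trial call
            failures = 0;
        }
        try {
            var result = callService(request);
            failures = 0; // a success resets the failure count
            return { ok: true, data: result };
        } catch (e) {
            failures += 1;
            if (failures >= maxFailures) {
                openedAt = Date.now(); // trip the breaker
            }
            return { ok: false, error: String(e) };
        }
    };
}
```

With maxFailures = 3, the fourth call after three consecutive failures is rejected immediately, so the PDP never waits on a service that is known to be down.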
❌ Option C: Service mock up call is configured.
While mocking service calls can be useful during development or testing to simulate the behavior of external services, it is not a relevant best practice for production service configuration. In production, you need the actual service call to retrieve real-time data. Mock calls should not be used in production environments unless they are part of a specific strategy for load testing or service simulation, which is not indicated here.
❌ Option D: Service logging is disabled.
Disabling service logging is not a best practice. Service logging is essential for monitoring, debugging, and troubleshooting. By keeping service logs enabled, you ensure that you have access to important diagnostic information in case issues arise. Disabling logging would make it harder to detect and resolve any service-related problems, especially in a live production environment.
A B2C Commerce developer has implemented a job that connects to an SFTP server, loops through a specific number of .csv files, and generates a generic mapping for every file. In order to keep track of the mappings imported, if a generic mapping is created successfully, a custom object instance is created with the .csv file name. After running the job on the Development instance, the developer checks the Custom Objects in Business Manager and notices there isn't a Custom Object for each .csv file that was on the SFTP server. What are two possible reasons that some generic mappings were not created?
(Choose 2 answers)
A. The maximum number of generic mappings was reached.
B. The generic mappings definition needs to be replicated from Staging before running the job.
C. Invalid format in one or more of the .csv files.
D. The job needs to run on Staging and then replicate the generic mappings and custom objects on Development.
A. The maximum number of generic mappings was reached. C. Invalid format in one or more of the .csv files.
Explanation:
✅ Option A: The maximum number of generic mappings was reached.
Salesforce B2C Commerce has a limit on the number of generic mappings that can be created. If the maximum number of mappings has been reached, no new mappings can be created, and as a result, the custom object instance for the .csv file will not be created. This limit can be checked and increased if necessary, but hitting this cap would stop further mappings from being generated.
✅ Option C: Invalid format in one or more of the .csv files.
If the .csv files are not in the expected format, the job might fail to generate the correct mappings. Invalid data or incorrect structure in one or more of the .csv files could prevent the creation of the generic mappings, and consequently, no custom object instance would be created for those files. It's important to ensure that the .csv files conform to the expected structure before processing them.
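As a precaution, the job can validate each .csv file before attempting to create a mapping. A minimal sketch in plain JavaScript (the expected column count and the validation rule are assumptions for illustration, not part of the platform API):

```javascript
// Return the 1-based line numbers of CSV rows that do not have the expected
// number of columns (an empty array means the file looks well-formed).
// Note: this uses a naive split(',') — real CSVs with quoted commas would
// need a proper CSV parser.
function findMalformedRows(csvText, expectedColumns) {
    var badLines = [];
    var lines = csvText.split('\n');
    for (var i = 0; i < lines.length; i++) {
        if (lines[i].trim() === '') { continue; } // ignore blank lines
        if (lines[i].split(',').length !== expectedColumns) {
            badLines.push(i + 1);
        }
    }
    return badLines;
}
```

A job step could skip (and log) any file for which findMalformedRows returns a non-empty list, rather than silently failing to create the mapping and its tracking custom object.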
❌ Option B: The generic mappings definition needs to be replicated from Staging before running the job.
This is not likely to be the root cause of the issue. If the job is running in the Development instance and trying to create mappings, the mappings should already be defined and available in the Development environment, not necessarily needing replication from Staging. Replicating mappings from Staging is usually more relevant when moving data or configurations between different environments, but this step would not usually affect the creation of mappings in Development.
❌ Option D: The job needs to run on Staging and then replicate the generic mappings and custom objects on Development.
While replication between Staging and Development is part of the typical deployment process, running the job in Development should not require this additional step. The job can create the mappings directly in the Development instance without needing replication from Staging. This suggestion does not address the core issue of why mappings might not be created in Development.
A company that is a shoe producer is doing a Salesforce B2C Commerce implementation. In their Enterprise Resource Planning (ERP) system, the products are marked as being one of three types: boots, sandals, and sneakers. The business requirements based on the type are:
• The messaging on the Product Detail page is different
• Customers are able to filter their Product Search Results
The customer's operations team asks about the format in which to send this value in the catalog. Which data type should the Architect specify for this attribute in the Data Mapping document?
A. A custom attribute of type string containing comma separated values.
B. A custom attribute type set-of-string containing multiple values.
C. A custom attribute of type enum-of-string (multi-selectable value).
D. A custom attribute of type enum-of-string (single selectable value)
D. A custom attribute of type enum-of-string (single selectable value)
Explanation:
✅ Option D: A custom attribute of type enum-of-string (single selectable value)
Since the products are categorized into three distinct types (boots, sandals, sneakers), and the business requirements specify that the messaging on the Product Detail page is different for each type and that customers should be able to filter their Product Search Results by product type, an enum-of-string attribute with a single selectable value is the best choice. This allows the customer to select one value for each product, which makes it easier to differentiate between boots, sandals, and sneakers, both for displaying different messages and for filtering search results.
Why "enum-of-string (single selectable value)" works:
1. It provides a predefined set of options (boots, sandals, sneakers), ensuring consistent categorization.
2. Since only one product type is applicable at a time, the single-select nature of the enum is appropriate.
3. This format also allows easy filtering for customers, as search filters typically rely on a single, well-defined value.
❌ Option A: A custom attribute of type string containing comma separated values.
This option would store the product types as a comma-separated string (e.g., "boots, sandals, sneakers"), which is not ideal. It complicates filtering and searching because the system would need to parse the string to check for specific values, and multiple types might be included in a single product, which conflicts with the business requirement of having one type per product. This would also make it more challenging to apply the required custom messaging per product type.
❌ Option B: A custom attribute type set-of-string containing multiple values.
While a set-of-strings allows multiple values, this is not suitable here because the product is restricted to only one type (boots, sandals, or sneakers). Using a set-of-string would allow customers to select multiple types, which is unnecessary and confusing for this particular scenario. The business requirement specifies that each product should only have one type, and filtering based on multiple types is not required.
❌ Option C: A custom attribute of type enum-of-string (multi-selectable value).
A multiselect enum-of-string is inappropriate because it allows selecting multiple values for the product type, which conflicts with the business requirement that each product should belong to only one category (boots, sandals, or sneakers). Since a product can only have one type, a single-select enum is more fitting.
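In the Data Mapping document, such an attribute is typically described via system object metadata. A sketch of the corresponding import XML (the attribute ID shoeType and the display name are illustrative choices, not mandated by the scenario):

```xml
<type-extension type-id="Product">
    <custom-attribute-definitions>
        <attribute-definition attribute-id="shoeType">
            <display-name xml:lang="x-default">Shoe Type</display-name>
            <type>enum-of-string</type>
            <mandatory-flag>false</mandatory-flag>
            <externally-managed-flag>true</externally-managed-flag>
            <value-definitions>
                <value-definition><value>boots</value></value-definition>
                <value-definition><value>sandals</value></value-definition>
                <value-definition><value>sneakers</value></value-definition>
            </value-definitions>
        </attribute-definition>
    </custom-attribute-definitions>
</type-extension>
```

The single-select enum means each product carries exactly one of the three predefined values, which the ERP feed can populate directly.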
The development team is building a complex LINK cartridge for a hosted checkout solution. The provider's database is used as a single source of truth, but the information in the Basket on the B2C Commerce side needs to be synchronized. This is implemented asynchronously on the back end when customers interact with the hosted checkout page and change their shipping/billing details.
As the Architect, you have to advise the development team on how to implement logging to ensure that there will be a mechanism available to allow troubleshooting in case something goes wrong on production.
Which solution should the Architect suggest?
A. Report info level message for the back-end asynchronous communication between both systems. Report all errors at error level message.
B. Report debug level message for the back-end asynchronous communication between both systems. Report all errors at error level message.
C. Get logger for cartridge-specific category. Report debug level message for the back-end asynchronous communication between both systems. Report all errors at error level message.
D. Get logger for cartridge-specific category. Report info level message for the back-end asynchronous communication between both systems. Report all errors at error level message.
D. Get logger for cartridge-specific category. Report Info level message for the back-end asynchronous communication between both systems. Report all errors at error level message.
Explanation:
✅ Why this option is correct?
D. Get logger for cartridge-specific category. Report info level message for the back-end asynchronous communication between both systems. Report all errors at error level message.
This is the correct approach for production-grade logging:
Cartridge-specific logger:
Always best practice in LINK cartridge development.
Allows you to separate logs from other cartridges for easy troubleshooting.
Makes it possible to adjust log levels without impacting unrelated code.
Info-level logging for async backend communication:
Async operations are critical. You want to record key events like:
Calls made to the hosted checkout
Successful updates
Important data state changes
Info-level logs provide meaningful context without overwhelming the logs like debug-level logs would in production.
Debug-level logs are too verbose for normal operations in production.
Error-level for errors:
All errors and exceptions must be logged at error level for monitoring and alerting.
Ensures issues are easily visible in Log Center or external log aggregators.
Thus, D aligns with both performance and supportability in a production environment.
✅ Correct choice.
❌ Why these options are incorrect?
A. Report info level message for the back-end asynchronous communication between both systems. Report all errors at error level message.
Partially correct but missing:
Cartridge-specific logger.
Without a cartridge-specific category, logs could be mixed into general logs, making troubleshooting harder in multi-cartridge environments.
✅ Eliminate.
B. Report debug level message for the back-end asynchronous communication between both systems. Report all errors at error-level message.
Debug-level logs:
Are too detailed and noisy for normal production operations.
Generate large log volumes, which:
Slow down the system
Consume disk space
Should only be enabled temporarily for troubleshooting.
✅ Eliminate.
C. Get logger for cartridge-specific category. Report debug level message for the back-end asynchronous communication between both systems. Report all errors at error-level message.
While cartridge-specific logging is good:
Debug-level logs in production should not be used for routine integrations.
Debug should only be turned on selectively for diagnosing issues.
✅ Eliminate.
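The recommended pattern maps directly onto the dw.system.Logger API. A minimal sketch (B2C Commerce server-side script, so it runs on the platform rather than standalone; the cartridge name int_hostedcheckout, the category hostedcheckout.sync, and callProviderService are illustrative assumptions):

```javascript
// B2C Commerce server-side script (platform code, not standalone Node).
var Logger = require('dw/system/Logger');

// Cartridge-specific logger: log file prefix + category, so these entries can
// be filtered and their level adjusted in Business Manager without affecting
// logging from unrelated cartridges.
var log = Logger.getLogger('int_hostedcheckout', 'hostedcheckout.sync');

function syncBasket(basketID, payload) {
    // Info level for normal asynchronous traffic between the two systems
    log.info('Basket sync started for basket {0}', basketID);
    try {
        var result = callProviderService(payload); // hypothetical service call
        log.info('Basket sync succeeded for basket {0}', basketID);
        return result;
    } catch (e) {
        // Error level so failures surface in Log Center and alerting tools
        log.error('Basket sync failed for basket {0}: {1}', basketID, e.message);
        throw e;
    }
}
```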
Prep Smart, Pass Easy. Your Success Starts Here!
Transform Your Test Prep with Realistic B2C-Commerce-Architect Exam Questions That Build Confidence and Drive Success!