Salesforce-MuleSoft-Developer-II Exam Questions With Explanations

The best Salesforce-MuleSoft-Developer-II practice exam questions, with research-based explanations for every question, will help you prepare for and pass the exam!

Over 15K students have given SalesforceKing a five-star review.

Why choose our Practice Test

By familiarizing yourself with the Salesforce-MuleSoft-Developer-II exam format and question types, you can reduce test-day anxiety and improve your overall performance.

Up-to-date Content

Ensure you're studying with the latest exam objectives and content.

Unlimited Retakes

We offer unlimited retakes, so you can work through each question until you know it properly.

Realistic Exam Questions

Experience exam-like questions designed to mirror the actual Salesforce-MuleSoft-Developer-II test.

Targeted Learning

Detailed explanations help you understand the reasoning behind correct and incorrect answers.

Increased Confidence

The more you practice, the more confident you will become in your knowledge to pass the exam.

Study whenever you want, from any place in the world.

Salesforce Salesforce-MuleSoft-Developer-II Exam Sample Questions 2025

Start practicing today and take the fast track to becoming Salesforce Salesforce-MuleSoft-Developer-II certified.

2,604 already prepared · Salesforce Spring 25 Release · 60 questions · Rated 4.9/5.0

A healthcare portal needs to validate the token that it sends to a Mule API. The developer plans to implement a custom policy using the HTTP Policy Transform Extension to match the token received in the header from the healthcare portal. Which files does the developer need to create in order to package the custom policy?

A. Deployable ZIP file, YAML configuration file

B. JSON properties file, YAML configuration file

C. JSON properties file, XML template file

D. XML template file, YAML configuration file

D.   XML template file, YAML configuration file

Explanation:

Let’s dive into creating a custom policy for a Mule API to validate a token from a healthcare portal using the HTTP Policy Transform Extension! The goal is to understand which files are needed to package this custom policy, and we’ll work through it step by step to ensure the developer has everything required.

In MuleSoft, custom policies extend the API gateway’s capabilities, allowing developers to enforce specific logic—like token validation—using the Policy Transform Extension. These policies are packaged and deployed to the API gateway, and the packaging process involves specific file types. The HTTP Policy Transform Extension, in particular, supports defining policy logic and configuration through structured files that the gateway can interpret.

Custom policies typically require two main components:


➡️ An XML template file (e.g., policy.xml): This file defines the policy’s structure, including the logic for matching and validating the token in the HTTP header. It uses Mule’s XML-based configuration to specify how the policy interacts with the request and response, leveraging the Transform Extension to apply custom transformations or validations.
➡️ A YAML configuration file (e.g., config.yaml): This file provides metadata about the policy, such as its name, version, and supported operations. It also defines the policy’s parameters and how it integrates with the API gateway, ensuring the policy is properly registered and configurable in Anypoint Platform.

Now, let’s evaluate the options:

❌ A. Deployable ZIP file, YAML configuration file
A deployable ZIP file is the final packaged artifact containing all policy files, but it’s not a file to create—it’s the result of packaging. The YAML configuration file is needed, but this option misses the XML template file, which is essential for defining the policy logic. This is incomplete.

❌ B. JSON properties file, YAML configuration file
A JSON properties file isn’t a standard component for custom policy packaging in MuleSoft. The YAML configuration file is required for metadata, but without an XML template to define the policy’s behavior (like token validation), this setup won’t work. This is incorrect.

❌ C. JSON properties file, XML template file
Again, a JSON properties file isn’t typically used for policy packaging. The XML template file is necessary for the policy logic, but the absence of a YAML configuration file means the policy lacks metadata and integration details. This is insufficient.

✅ D. XML template file, YAML configuration file
This hits the mark. The XML template file defines the custom policy’s logic, such as using the HTTP Policy Transform Extension to match the token in the header. The YAML configuration file provides the policy’s metadata and configuration options, ensuring it’s properly deployed and managed. Together, these files allow the developer to package the policy into a ZIP file for deployment.

To implement this, the developer would create a policy.xml file with the token validation logic (e.g., using a Transform Message to check the header) and a config.yaml file with the policy’s name, version, and parameters. These files are then zipped into a deployable package.
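
As a rough illustration (not the exact files from any shipped policy), the two artifacts might look like the sketch below. The policy name, the x-portal-token header, and the tokenHeader configuration parameter are illustrative assumptions; the http-policy and http-transform namespaces come from the HTTP Policy Transform Extension.

policy.xml (rejects requests whose token header is missing):

    <?xml version="1.0" encoding="UTF-8"?>
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:http-policy="http://www.mulesoft.org/schema/mule/http-policy"
          xmlns:http-transform="http://www.mulesoft.org/schema/mule/http-policy-transform"
          xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
                              http://www.mulesoft.org/schema/mule/http-policy http://www.mulesoft.org/schema/mule/http-policy/current/mule-http-policy.xsd
                              http://www.mulesoft.org/schema/mule/http-policy-transform http://www.mulesoft.org/schema/mule/http-policy-transform/current/mule-http-policy-transform.xsd">
        <http-policy:proxy name="token-validation-policy">
            <http-policy:source>
                <choice>
                    <!-- Reject the request if the portal's token header is absent (header name is an assumption) -->
                    <when expression="#[attributes.headers['x-portal-token'] == null]">
                        <http-transform:set-response statusCode="401">
                            <http-transform:body>#['Missing or invalid token']</http-transform:body>
                        </http-transform:set-response>
                    </when>
                    <otherwise>
                        <!-- Token present: let the request continue to the API -->
                        <http-policy:execute-next/>
                    </otherwise>
                </choice>
            </http-policy:source>
        </http-policy:proxy>
    </mule>

config.yaml (policy definition metadata; values here are illustrative, the fields follow the shape MuleSoft documents for policy definitions):

    id: token-validation-policy
    name: Token Validation Policy
    description: Validates the token header sent by the healthcare portal
    category: Security
    type: custom
    resourceLevelSupported: false
    standalone: true
    requiredCharacteristics: []
    providedCharacteristics: []
    configuration:
      - propertyName: tokenHeader    # illustrative configurable parameter
        name: Token header name
        type: string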

✅ Answer: D
To package a custom policy using the HTTP Policy Transform Extension, the developer needs an XML template file to define the policy logic (e.g., token validation in the HTTP header) and a YAML configuration file for metadata and configuration. These files work together to enable the policy’s deployment and execution on the Mule API gateway.

Reference:
MuleSoft Documentation on Custom Policies and HTTP Policy Transform Extension.

The Center for Enablement team published a common application as a reusable module to the central Nexus repository. How can the common application be included in all API implementations?

A. Download the common application from Nexus and copy it to the src/main/resources folder in the API

B. Copy the common application’s source XML file and put it in a new flow file in the src/main/mule folder

C. Add a Maven dependency in the POM file with mule-plugin as the classifier

D. Add a Maven dependency in the POM file with jar as the classifier

D.   Add a Maven dependency in the POM file with jar as the classifier

Explanation:

When a common application (like a shared utility, service, or framework) is published as a reusable module to a central Maven repository (e.g., Nexus), the correct and scalable way to include it in other Mule projects is to:
➡️ Add it as a Maven dependency in the pom.xml (a sketch follows the list below)
➡️ Use jar as the classifier, because the common app is packaged as a Java archive (JAR) containing the reusable flows or libraries

This allows:
➡️ Consistent versioning across all projects
➡️ Reuse without duplicating code
➡️ Centralized management and updates
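
For illustration, the dependency entry in the consuming API’s pom.xml might look like this sketch; the Maven coordinates and the Nexus URL are placeholders, not values from the question:

    <!-- Inside the <dependencies> section of the API implementation's pom.xml -->
    <dependency>
        <groupId>com.example.c4e</groupId>           <!-- placeholder coordinates -->
        <artifactId>common-application</artifactId>
        <version>1.0.0</version>
        <classifier>jar</classifier>                 <!-- per the answer above: consumed as a plain JAR -->
    </dependency>

    <!-- The central Nexus repository must also be declared so Maven can resolve the artifact -->
    <repositories>
        <repository>
            <id>central-nexus</id>
            <url>https://nexus.example.com/repository/maven-releases/</url>  <!-- placeholder URL -->
        </repository>
    </repositories>

Because the version is managed in one place, upgrading every API to a new release of the common application is a one-line change rather than a copy-paste exercise.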

❌ Why other options are incorrect:

A. Download and copy to src/main/resources
🔸 Incorrect: Manual, not scalable, and breaks reusability and version control.

B. Copy the XML file to src/main/mule
🔸 Incorrect: Copy-pasting shared flows is bad practice. It creates code duplication and update inconsistencies.

C. Add a Maven dependency with mule-plugin as classifier
🔸 Incorrect: mule-plugin classifier is used for Mule plugin artifacts, not reusable shared JARs.

🔗 Reference:
MuleSoft Docs – Shared Resources (Mule Domains)
MuleSoft Docs – Reusing Code Across Projects

Which command is used to convert a JKS keystore to PKCS12?

A. keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS

B. keytool -importkeystore -srckeystore keystore.p12 -srcstoretype JKS -destkeystore keystore.p12 -deststoretype PKCS12

C. keytool -importkeystore -srckeystore keystore.jks -srcstoretype JKS -destkeystore keystore.p13 -deststoretype PKCS12

D. keytool -importkeystore -srckeystore keystore.jks -srcstoretype PKCS12 -destkeystore keystore.p12 -deststoretype JKS

B.   keytool -importkeystore -srckeystore keystore.p12 -srcstoretype JKS -destkeystore keystore.p12 -deststoretype PKCS12

Explanation:

The question asks for the correct keytool command to convert a JKS (Java KeyStore) to a PKCS12 keystore. JKS is a Java-specific format, while PKCS12 is a standard format for storing cryptographic keys and certificates. The keytool -importkeystore command is used, requiring parameters like -srckeystore (source file), -srcstoretype (source type), -destkeystore (destination file), and -deststoretype (destination type). The correct command must specify a JKS source and a PKCS12 destination.

✅ Correct Answer: Option B

Command: keytool -importkeystore -srckeystore keystore.p12 -srcstoretype JKS -destkeystore keystore.p12 -deststoretype PKCS12
➡️ Option B is correct because it specifies the source keystore as JKS (-srcstoretype JKS) and the destination as PKCS12 (-deststoretype PKCS12), aligning with the requirement to convert from JKS to PKCS12. Although the source file name keystore.p12 is unconventional for a JKS file (typically .jks), the -srcstoretype JKS parameter explicitly defines the source format as JKS, ensuring the command works. The destination file keystore.p12 matches the PKCS12 format, making this the correct command despite the naming ambiguity.
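
For day-to-day use, the conventional form of the conversion (assuming the source file follows the usual .jks naming) is:

    keytool -importkeystore \
        -srckeystore keystore.jks  -srcstoretype JKS \
        -destkeystore keystore.p12 -deststoretype PKCS12

keytool will prompt for the source and destination store passwords; they can also be supplied on the command line with -srcstorepass and -deststorepass.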

Incorrect Options:

Option A: keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS
➡️ Option A is incorrect because it converts a PKCS12 keystore to JKS, which is the opposite of the required conversion (JKS to PKCS12). The command specifies the source as keystore.p12 with -srcstoretype PKCS12, indicating a PKCS12 source, and the destination as keystore.jks with -deststoretype JKS, indicating a JKS output. This reverses the desired process, as the question explicitly asks for converting a JKS keystore to a PKCS12 keystore, making this command unsuitable for the task.

Option C: keytool -importkeystore -srckeystore keystore.jks -srcstoretype JKS -destkeystore keystore.p13 -deststoretype PKCS12
➡️ Option C is incorrect due to the non-standard destination file extension keystore.p13. While it correctly specifies the source as JKS (-srcstoretype JKS) and the destination as PKCS12 (-deststoretype PKCS12), the .p13 extension is not a recognized standard for PKCS12 files, which typically use .p12 or .pfx. This deviation could cause compatibility issues or errors in tools expecting standard PKCS12 extensions. A proper PKCS12 file extension is critical for correct recognition and usage, rendering this option invalid.

Option D: keytool -importkeystore -srckeystore keystore.jks -srcstoretype PKCS12 -destkeystore keystore.p12 -deststoretype JKS
➡️ Option D is incorrect because it specifies the source as PKCS12 (-srcstoretype PKCS12) and the destination as JKS (-deststoretype JKS), which is the reverse of the required JKS-to-PKCS12 conversion. Despite the source file name keystore.jks suggesting a JKS file, the -srcstoretype PKCS12 incorrectly defines it as PKCS12. This command would attempt to convert a PKCS12 keystore to JKS, failing to meet the question’s requirement to convert a JKS keystore to a PKCS12 keystore.

Reference:
Oracle Documentation on keytool: Java SE 8 keytool Documentation
General guide on keystore conversion: Baeldung - Convert JKS to PKCS12
PKCS12 and JKS format details: Java KeyStore API

A Mule application needs to invoke an API hosted by an external system to initiate a process. The external API takes anywhere between one minute and 24 hours to complete its processing. Which implementation should be used to get response data from the external API after it completes processing?

A. Use an HTTP Connector to invoke the API and wait for a response

B. Use a Scheduler to check for a response every minute

C. Use an HTTP Connector inside an Async scope to invoke the API and wait for a response

D. Expose an HTTP callback API in Mule and register it with the external system

D.   Expose an HTTP callback API in Mule and register it with the external system

Explanation:

The Mule application needs to invoke an external API to initiate a process that takes between one minute and 24 hours to complete. Given the long and variable processing time, the solution must handle asynchronous communication effectively, allowing the external API to notify the Mule application when processing is complete. Let’s analyze why D is the MuleSoft-recommended implementation:

Asynchronous Nature of the External API:
Since the external API’s processing time ranges from one minute to 24 hours, a synchronous approach (waiting for the response in real-time) is impractical. Holding an HTTP connection open for such a long duration would lead to timeouts, resource exhaustion, and poor performance in the Mule application.
The most efficient solution is to use an asynchronous callback mechanism, where the Mule application initiates the process and the external system notifies Mule when the process is complete.

Callback API in Mule:
By exposing an HTTP callback API in the Mule application (e.g., /callback), Mule can receive the response data from the external system once processing is complete.
The Mule application registers this callback endpoint (e.g., https://mule-app/callback) with the external API during the initial request. The external system then sends the response data to this endpoint when ready.
This approach decouples the Mule application from the external system’s processing time, allowing Mule to handle other tasks while waiting for the callback.
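
A minimal sketch of the callback listener in Mule 4 XML; the config name, port, and /callback path are illustrative (the path just needs to match what was registered with the external system), and namespace declarations are omitted for brevity:

    <http:listener-config name="callbackListenerConfig">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>

    <flow name="process-callback-flow">
        <!-- The external system POSTs its result here once processing completes -->
        <http:listener config-ref="callbackListenerConfig" path="/callback" allowedMethods="POST"/>
        <logger level="INFO" message="#['Callback received: ' ++ write(payload, 'application/json')]"/>
        <!-- From here, persist the result, publish an event, or resume the business process -->
    </flow>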

MuleSoft Best Practices:
MuleSoft recommends using asynchronous patterns, such as callbacks or webhooks, for long-running processes to avoid blocking flows and improve scalability.
The HTTP Listener in Mule can be used to implement the callback API, and the flow can process the response data (e.g., store it in a database, trigger another process, etc.).

Why not the other options?

A. Use an HTTP Connector to invoke the API and wait for a response:
This option implies a synchronous request-response pattern, where the Mule application sends an HTTP request and waits for the external API to respond. Given the processing time of one minute to 24 hours, this is infeasible. HTTP connections typically time out after a few seconds or minutes (e.g., 30 seconds in Mule’s default HTTP Connector settings). Waiting for hours would cause timeouts, consume resources, and degrade performance. This approach is not suitable for long-running processes.

B. Use a Scheduler to check for a response every minute:
This option involves polling the external API every minute to check if the process is complete. While polling is a viable asynchronous approach, it has significant drawbacks:
🔸 Inefficiency: Polling every minute for up to 24 hours could result in thousands of unnecessary requests (up to 1,440 requests per process), consuming network bandwidth and API rate limits.
🔸 Latency: There could be up to a one-minute delay between the external API completing the process and Mule detecting the response.
🔸 Complexity: The Mule application would need to track the status of each process (e.g., using an Object Store or database) and handle edge cases like missed responses.
A callback approach is more efficient and responsive than polling, especially for long and variable processing times.

C. Use an HTTP Connector inside Async scope to invoke the API and wait for a response:
Placing the HTTP Connector in an Async scope allows the Mule flow to continue processing other tasks while the HTTP request is made, but it does not solve the core issue of waiting for a response. The HTTP Connector inside the Async scope would still attempt to wait for the external API’s response, which could take up to 24 hours. This would lead to the same timeout and resource issues as option A. The Async scope only decouples the HTTP request from the main flow’s thread; it does not change the synchronous nature of the HTTP request itself.

Reference:

MuleSoft Documentation: HTTP Connector – Details how to use HTTP Listeners and Requestors in Mule flows.
MuleSoft Asynchronous Processing: Async Scope – Explains asynchronous processing in Mule, but highlights limitations for long-running external processes.
MuleSoft Best Practices: API Design for Asynchronous Operations – Recommends callback or webhook patterns for asynchronous APIs.
CloudHub Deployment: Deploying to CloudHub – Discusses exposing HTTP endpoints in CloudHub for external systems.

A system API that communicates with an underlying MySQL database is being deployed to CloudHub. The DevOps team requires a readiness endpoint to monitor all system APIs. Which strategy should be used to implement this endpoint?

A. Create a dedicated endpoint that responds with the API status and reachability of the underlying systems

B. Create a dedicated endpoint that responds with the API status and health of the server

C. Use an existing resource endpoint of the API

D. Create a dedicated endpoint that responds with the API status only

A.   Create a dedicated endpoint that responds with the API status and reachability of the underlying systems

Explanation:

When deploying a system API to CloudHub, the DevOps team requires a readiness endpoint to monitor the health and availability of the API and its dependencies. A readiness endpoint is typically used in cloud environments (like CloudHub) to indicate whether the application is ready to handle requests. For a system API that communicates with an underlying MySQL database, the readiness endpoint should not only confirm the API's operational status but also verify the reachability of the underlying systems (e.g., the MySQL database). This ensures that the API is fully functional and capable of processing requests.

Here’s why A is the correct choice:

➤ Dedicated Endpoint: A readiness endpoint should be a separate, dedicated endpoint (e.g., /health or /readiness) to provide a clear and standardized way for monitoring tools to check the API’s status. This aligns with best practices in microservices and cloud-native applications.

➤ API Status and Reachability: The endpoint should return information about the API’s operational status (e.g., "UP" or "DOWN") and the reachability of the underlying MySQL database (e.g., whether the database connection is active). This ensures that the DevOps team can confirm both the API and its dependencies are functioning correctly.

➤ CloudHub Monitoring: CloudHub uses readiness and liveness probes to monitor applications. A readiness endpoint that includes both API status and database reachability provides comprehensive monitoring, enabling CloudHub to determine if the application is ready to serve traffic. A flow sketch follows below.
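
As an illustration, a dedicated /health flow might look like the sketch below; apiListenerConfig and mysqlDbConfig are assumed names for the application’s existing HTTP listener and MySQL database configs, and namespace declarations are omitted for brevity:

    <flow name="readiness-flow">
        <http:listener config-ref="apiListenerConfig" path="/health" allowedMethods="GET"/>
        <try>
            <!-- A trivial query proves the MySQL connection is actually usable -->
            <db:select config-ref="mysqlDbConfig">
                <db:sql>SELECT 1</db:sql>
            </db:select>
            <set-payload value='#[output application/json --- {status: "UP", dependencies: {mysql: "REACHABLE"}}]'/>
            <error-handler>
                <on-error-continue>
                    <!-- Database unreachable: report not-ready so the probe fails -->
                    <set-payload value='#[output application/json --- {status: "DOWN", dependencies: {mysql: "UNREACHABLE"}}]'/>
                </on-error-continue>
            </error-handler>
        </try>
    </flow>

In a production readiness probe, the DOWN branch would typically also return an HTTP 503 (e.g., via a statusCode binding on the listener’s response) so that monitoring tools treat the instance as not ready rather than inspecting the body.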

❌ Why not the other options?

B. Create a dedicated endpoint that responds with the API status and health of the server:
While checking the health of the server (e.g., CPU, memory, or disk usage) is useful for liveness probes, it does not fully address the readiness requirement. Readiness endpoints focus on whether the application and its dependencies (e.g., the MySQL database) are ready to process requests. Server health alone does not confirm database connectivity, which is critical for a system API.

C. Use an existing resource endpoint of the API:
Using an existing resource endpoint (e.g., /users or /orders) is not a good practice for readiness checks. Resource endpoints are designed for business logic and may require specific inputs, authentication, or database queries, which could add unnecessary complexity or fail for reasons unrelated to readiness. A dedicated endpoint is preferred for monitoring purposes.

D. Create a dedicated endpoint that responds with the API status only:
While a dedicated endpoint is appropriate, reporting only the API status (e.g., "API is running") does not provide enough information for a readiness check. The endpoint must also verify the reachability of the underlying MySQL database to ensure the API can process requests successfully.

Reference:

MuleSoft Documentation: CloudHub Health Check Endpoints – Explains how CloudHub uses health check endpoints (liveness and readiness probes) to monitor applications.
MuleSoft Best Practices: API Monitoring Best Practices – Discusses the importance of dedicated health endpoints for API monitoring.
Kubernetes Readiness Probes (relevant for CloudHub, which aligns with cloud-native practices): Kubernetes Documentation on Readiness Probes – Provides context on readiness probes, which CloudHub adapts for Mule applications.

Prep Smart, Pass Easy. Your Success Starts Here!

Transform Your Test Prep with Realistic Salesforce-MuleSoft-Developer-II Exam Questions That Build Confidence and Drive Success!