Agentforce-Specialist Practice Test
Updated On 1-Jan-2026
293 Questions
Coral Cloud Resorts needs to ensure its booking agent executes actions in a specific sequence: first retrieve available sessions, then verify customer eligibility, and finally create the booking. The current implementation allows the large language model (LLM) to execute these actions in any order, causing booking failures. Which approach should an Agentforce Specialist implement?
A. Write comprehensive topic instructions detailing the exact sequence of actions using numbered steps and explicit ordering requirements for the reasoning engine to follow during booking workflows.
B. Create custom variables that store completion status for each step, then implement conditional filters on subsequent actions requiring previous variables to be populated, ensuring deterministic execution order.
C. Configure topic, classification description, and action instructions with priority levels and sequence indicators to guide the reasoning engine in selecting the correct action order automatically.
Explanation:
Why B is Correct?
Creating custom variables with conditional filters is the most reliable way to enforce a strict sequential execution order because:
Deterministic Control: By storing completion status in variables (e.g., sessionsRetrieved, eligibilityVerified), you create hard dependencies that the system must respect.
Action Gating: Conditional filters on actions can explicitly require that previous steps are completed before allowing subsequent actions to execute. For example:
"Verify Eligibility" action requires sessionsRetrieved = true
"Create Booking" action requires eligibilityVerified = true
Prevents Out-of-Order Execution: Unlike instructional or priority-based approaches, this creates technical constraints that physically prevent actions from running out of sequence.
Why the Other Options Fall Short?
Option A (Topic Instructions): While detailed instructions can guide the LLM, they rely on the reasoning engine's interpretation and don't guarantee enforcement. LLMs can still choose actions out of order despite instructions.
Option C (Priority Levels/Sequence Indicators): These are guidance mechanisms, not enforcement mechanisms. The reasoning engine may still select actions in unintended orders, especially in complex scenarios or edge cases.
Key Principle:
When you need guaranteed sequential execution for critical workflows like booking processes, use programmatic controls (variables + conditional logic) rather than relying solely on natural language instructions or soft priorities.
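To make the gating pattern concrete, here is a minimal Python sketch of "variables + conditional logic" enforcing the booking order. It is purely illustrative: the class, variable, and method names are hypothetical, and in Agentforce the equivalent behavior is configured with custom variables and conditional filters on actions, not written as code.

```python
# Illustrative sketch of the gating pattern behind option B.
# Names are hypothetical; Agentforce configures this with custom
# variables and conditional filters on actions, not Python code.

class BookingFlow:
    def __init__(self):
        # Completion-status variables, analogous to custom agent variables.
        self.sessions_retrieved = False
        self.eligibility_verified = False

    def retrieve_available_sessions(self):
        sessions = ["Yoga 9am", "Snorkeling 2pm"]  # placeholder data
        self.sessions_retrieved = True
        return sessions

    def verify_customer_eligibility(self, customer_id):
        # Conditional filter: unavailable until the previous step
        # has populated its status variable.
        if not self.sessions_retrieved:
            raise RuntimeError("Retrieve available sessions first.")
        self.eligibility_verified = True
        return True

    def create_booking(self, customer_id, session_name):
        if not self.eligibility_verified:
            raise RuntimeError("Verify customer eligibility first.")
        return f"Booked {session_name} for {customer_id}"


flow = BookingFlow()
flow.retrieve_available_sessions()
flow.verify_customer_eligibility("CUST-001")
print(flow.create_booking("CUST-001", "Snorkeling 2pm"))
```

Calling create_booking before the earlier steps raises an error, which is the deterministic behavior the explanation describes: the reasoning engine cannot "choose" an out-of-order path because the gated action is simply not available.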
A company wants to retrieve patient history details to augment the AI agent response. The company wants to leverage the Data Cloud search index feature. What is best practice when considering retrieval-augmented generation (RAG) for information that may contain personally identifiable information (PII)?
A. Depend on the agent's prompt to avoid exposing PII.
B. Encrypt embeddings, but still index PII records.
C. Mask sensitive fields and index only non-PII data.
Explanation:
When dealing with sensitive data like patient history in a RAG system, the primary goal is to maximize the utility of the indexed data for the AI agent while minimizing the risk of exposing PII.
Mask Sensitive Fields and Index Only Non-PII Data (Option C):
This is the most secure approach. By masking or removing PII (such as names, addresses, specific dates of birth, etc.) before indexing the data in the search index, you ensure that the retrieval component of the RAG system cannot return the sensitive information. The AI agent can still access the relevant medical facts and history to formulate a helpful response, but the PII itself is protected. This aligns with privacy-by-design principles.
Depend on the Agent's Prompt to Avoid Exposing PII (Option A):
This is a weak control. While a well-crafted prompt can instruct the agent not to output PII, the data is still exposed in the context window provided by the retrieval step. The agent's instructions can be overridden, ignored, or "jailbroken," creating a high-risk security vulnerability.
Encrypt Embeddings, but Still Index PII Records (Option B):
Encrypting the embeddings (the numerical representations of the text) protects the data at rest in the vector database to some degree. However, the original PII record itself is what gets retrieved as context for the AI agent to read. Even if the embeddings are encrypted, the retrieved source chunk sent to the Large Language Model (LLM) is typically decrypted and contains the PII. This defeats the purpose of securing the data within the generative process and still exposes the PII to the LLM and potentially the final user response.
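As an illustration of the mask-before-indexing approach in option C, here is a minimal Python sketch. The field names and the index_document() call are assumptions for illustration only; this is not the Data Cloud API, just the general pattern of redacting PII before a record reaches the search index.

```python
# Illustrative sketch of option C: strip or mask PII fields before a
# record is indexed. Field names and index_document() are hypothetical.

PII_FIELDS = {"patient_name", "date_of_birth", "address", "phone"}

def mask_for_indexing(record: dict) -> dict:
    """Return a copy of the record with PII fields masked."""
    masked = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            masked[field] = "[REDACTED]"  # or drop the field entirely
        else:
            masked[field] = value
    return masked

record = {
    "patient_name": "Jane Doe",
    "date_of_birth": "1984-02-17",
    "address": "12 Palm Way",
    "phone": "555-0102",
    "visit_summary": "Follow-up for mild dehydration; cleared for water activities.",
}

index_payload = mask_for_indexing(record)
# index_document(index_payload)  # hypothetical indexing call
print(index_payload)
```

Because only the masked payload is ever indexed, nothing the retriever returns can leak PII into the LLM context, regardless of how the prompt is worded.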
Universal Containers (UC) has configured a data library and wants to restrict indexing of knowledge articles to only those articles that are publicly available in its knowledge base. UC also wants the agent to link the sources that the large language model (LLM) grounded its response on. Which settings should help UC with this?
A. In the data library setting window, under Knowledge Settings, enable Use Public Knowledge Article and select Show sources.
B. In the data library setting window, under Knowledge Settings, enable Use Public Knowledge Article. It is not possible to display articles that the LLM grounded its response in.
C. Use Data Categories to categorize publicly available articles to index. Sources are automatically displayed when knowledge articles are categorized as Public.
Explanation:
Universal Containers has two distinct requirements: first, to limit the data library's knowledge source to only public articles for grounding; and second, to provide source transparency by showing which articles the AI used for its response. These are two separate but common configuration settings within the Einstein Agent setup for data management and user trust.
✅ Correct Option: A
A. In the data library setting window, under Knowledge Settings, enable Use Public Knowledge Article and select Show sources:
This is the correct combination. The "Use Public Knowledge Article" setting explicitly restricts the grounding of the agent's responses to articles with a Public knowledge status. The "Show sources" setting, when enabled, instructs the agent to cite the specific knowledge articles it used to formulate its answer in the conversation thread.
❌ Incorrect Options:
B. In the data library setting window, under Knowledge Settings, enable Use Public Knowledge Article. It is not possible to display articles that the LLM grounded its response in:
This is incorrect because the first part is right, but the second part is false. Agentforce does have a built-in "Show sources" feature designed specifically for this purpose, making source citation a standard and configurable capability.
C. Use Data Categories to categorize publicly available articles to index. Sources are automatically displayed when knowledge articles are categorized as Public:
This is misleading. While Data Categories can be used for filtering, the "Public" filter for indexing is controlled by the Knowledge Status (e.g., Draft, Online, Archived, Public), not by a data category. Furthermore, source display is not automatic; it must be explicitly enabled via the "Show sources" setting.
Reference:
Salesforce Help: Configure a Data Library for Grounding
Universal Containers wants to systematically validate agent responses before deployment using a scalable testing process. Which Testing Center approach should the company implement?
A. Upload a structured CSV test template and run batch test cases in Testing Center.
B. Manually interact with the agent in Builder until responses seem correct.
C. Use pilot users in production to flag incorrect responses post-launch.
Explanation:
🧭 Summary:
The Testing Center in Agentforce enables large-scale testing of agent responses before deployment. Using a structured CSV file allows Universal Containers to automate and batch-test different scenarios efficiently. This ensures accuracy and consistency across a wide range of inputs, minimizing errors before the agent goes live.
✅ Correct Option Explanation:
A. Upload a structured CSV test template and run batch test cases in Testing Center:
This approach is designed for systematic, scalable validation of agent behavior.
The CSV contains prompts and expected responses that the Testing Center evaluates.
It helps catch inconsistencies early, ensuring readiness before production rollout.
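To make the batch test template concrete, here is a small Python sketch that builds such a CSV of test cases. The column names are illustrative assumptions; the actual Testing Center template defines its own headers, so start from the template downloaded in Testing Center.

```python
# Illustrative sketch of a batch test template for Testing Center.
# Column names are assumptions; use the downloaded CSV template for
# the exact headers expected by Testing Center.

import csv

test_cases = [
    {
        "utterance": "What spa sessions are available tomorrow?",
        "expected_topic": "Session Availability",
        "expected_actions": "Get Available Sessions",
        "expected_response_contains": "available sessions",
    },
    {
        "utterance": "Book the 2pm snorkeling tour for me.",
        "expected_topic": "Booking",
        "expected_actions": "Verify Eligibility; Create Booking",
        "expected_response_contains": "booking is confirmed",
    },
]

with open("agent_test_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=test_cases[0].keys())
    writer.writeheader()
    writer.writerows(test_cases)
```

Each row pairs an utterance with the expected topic, actions, and response content, so the same suite can be re-run after every change to the agent.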
❌ Incorrect Option Explanations:
B. Manually interact with the agent in Builder until responses seem correct:
Manual testing is slow and error-prone.
It lacks consistency and scalability compared to automated CSV-based batch tests.
C. Use pilot users in production to flag incorrect responses post-launch:
This approach happens after deployment, not before.
It risks poor customer experience since issues surface in real interactions.
📘 Reference:
Salesforce Help: Test Agent Responses in Testing Center
A developer is using the Salesforce CLI to deploy agent components from a sandbox to production. They recently made a change to several topics, instructions, and actions. Which metadata component should the developer include in their package.xml file that contains all of the topics and actions an agent will interact with?
A. genAiPlannerBundle
B. EinsteinAiPlannerBundle
C. BotBundle
Explanation:
Summary:
When deploying agent components like topics, instructions, and actions from a sandbox to production using the Salesforce CLI, the developer must include the correct metadata component in the package.xml file. The chosen metadata type must cover all of the agent-related configuration (topics, instructions, and actions) so that the agent's AI-driven interactions deploy intact, which is critical for Agentforce functionality.
Correct Option:
A. genAiPlannerBundle ✅
The genAiPlannerBundle metadata component is the correct choice as it includes all topics, instructions, and actions associated with an agent’s configuration in Salesforce Agentforce. This bundle ensures that all AI-driven agent interactions are packaged and deployed accurately.
Encapsulates topics, instructions, and actions.
Specific to Agentforce AI configurations.
Supports deployment via Salesforce CLI.
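As a rough sketch of what this looks like in practice, the Python snippet below writes a minimal package.xml manifest that retrieves or deploys the planner bundle. The Metadata API type name (GenAiPlannerBundle) and the API version shown should be verified against the Metadata API documentation for your org's release before use.

```python
# Illustrative sketch: generate a minimal package.xml that includes the
# planner bundle metadata type discussed above. Verify the exact type
# name and API version against the Metadata API docs for your release.

MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>GenAiPlannerBundle</name>
    </types>
    <version>64.0</version>
</Package>
"""

with open("package.xml", "w", encoding="utf-8") as f:
    f.write(MANIFEST)

# Deploy with the Salesforce CLI, for example:
#   sf project deploy start --manifest package.xml
```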
Incorrect Options:
B. EinsteinAiPlannerBundle ❌
EinsteinAiPlannerBundle is not a recognized metadata component in Salesforce for deploying agent-related configurations. While Einstein AI exists as a product family, there is no metadata type by this name for Agentforce topics and actions, making this option incorrect.
Not a valid metadata type for Agentforce.
Misaligned with the deployment context.
C. BotBundle ❌
The BotBundle metadata component is related to Salesforce Einstein Bots, not Agentforce agents. It does not include the topics, instructions, or actions specific to Agentforce’s AI-driven configurations, making it unsuitable for this deployment scenario.
Specific to Einstein Bots, not Agentforce.
Lacks support for Agentforce components.
Reference:
Salesforce Metadata API Developer Guide (Agentforce Components)
Salesforce CLI Deployment Guide