Salesforce-AI-Associate Practice Test

Salesforce Spring 25 Release -
Updated On 10-Nov-2025

106 Questions

A consultant conducts a series of Consequence Scanning workshops to support testing diverse datasets. Which Salesforce Trusted AI Principle is being practiced?

A. Transparency

B. Inclusivity

C. Accountability

B.   Inclusivity

Explanation:

Consequence Scanning is a proactive technique used to identify potential impacts, both positive and negative, of AI systems before deployment. When applied to diverse datasets, it ensures that the AI model:
- Represents a wide range of user groups
- Minimizes bias and exclusion
- Promotes fairness across demographics, geographies, and use cases
This directly aligns with the Salesforce Trusted AI Principle of Inclusivity, which emphasizes designing AI systems that serve everyone equitably, especially historically underrepresented or marginalized groups.
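As a minimal sketch of what "testing diverse datasets" can mean in practice, the Python below (hypothetical data, no Salesforce API) measures how well each group is represented in a training set and flags underrepresented ones, the kind of check a Consequence Scanning workshop might motivate.

```python
from collections import Counter

# Hypothetical training records; "region" stands in for any demographic field.
records = [
    {"region": "urban"}, {"region": "urban"}, {"region": "urban"},
    {"region": "suburban"}, {"region": "rural"},
]

counts = Counter(r["region"] for r in records)
total = sum(counts.values())

# Flag any group that falls below an arbitrary 25% representation threshold.
for group, n in counts.items():
    share = n / total
    flag = "  <-- underrepresented?" if share < 0.25 else ""
    print(f"{group}: {share:.0%}{flag}")
```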

Why Not the Others?
A. Transparency
Focuses on making AI decisions understandable and explainable — not the core goal of Consequence Scanning in this context.
C. Accountability
Involves assigning responsibility for AI outcomes, but the scenario is centered on inclusive data testing, not governance.

📚 Reference:
Salesforce Trusted AI Principles: Ethical Use of AI
Trailhead Module: Build Ethical and Inclusive Products

Cloud Kicks wants to improve the quality of its AI model's predictions with the use of a large amount of data. Which data quality element should the company focus on?

A. Accuracy

B. Location

C. Volume

A.   Accuracy

Explanation:

While having a large amount of data (Volume, Option C) is beneficial, accuracy is the most critical data quality element for improving AI predictions because:
- Inaccurate data (e.g., wrong product colors or mismatched purchase history) leads to flawed recommendations, regardless of dataset size.
- High accuracy ensures the AI model learns from correct patterns, increasing prediction reliability.

📌 Reference:
Salesforce’s Data Quality Best Practices emphasize accuracy as a cornerstone for effective AI.

Why the Other Options Are Less Critical:
❌ B) Location
Location data might be relevant for geospatial analytics (e.g., store recommendations), but it is not the primary driver of prediction quality in this scenario.
❌ C) Volume
While more data can help, volume alone doesn’t guarantee quality. "Garbage in, garbage out" (GIGO) applies if the data isn’t accurate.

Key Takeaway:
- Prioritize accuracy (clean, error-free data) over sheer volume.
- Use data validation rules and duplicate management in Salesforce to maintain accuracy (see the sketch below).
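As a minimal sketch of these takeaways, assuming a pandas DataFrame of hypothetical product records, the check below flags inaccurate values and drops duplicates before training; validation rules and duplicate management play the analogous role on the Salesforce platform side.

```python
import pandas as pd

# Hypothetical product records; "grene" is a data-entry error.
df = pd.DataFrame({
    "product_id": [1, 1, 2, 3],
    "color": ["blue", "blue", "grene", "red"],
})

valid_colors = {"blue", "red", "green"}

# Surface inaccurate rows instead of silently training on them.
bad_rows = df[~df["color"].isin(valid_colors)]
print("Rows failing validation:")
print(bad_rows)

# Keep only accurate rows, then remove exact duplicates.
clean = df[df["color"].isin(valid_colors)].drop_duplicates()
print("Clean training rows:")
print(clean)
```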

Cloud Kicks implements a new product recommendation feature for its shoppers that recommends shoes of a given color to display to customers based on the color of the products from their purchase history. Which type of bias is most likely to be encountered in this scenario?

A. Confirmation

B. Survivorship

C. Societal

A.   Confirmation

Explanation:

Confirmation bias is the most likely type of bias to be encountered in this scenario. This bias occurs when an AI system is designed to favor data that confirms existing beliefs or patterns, leading to a feedback loop that reinforces those beliefs.

Confirmation Bias:
In the Cloud Kicks example, the recommendation system is built to suggest shoes of a color that a customer has already purchased. If a customer has only ever bought blue shoes, the system will continue to recommend blue shoes, confirming its initial assumption about the customer's color preference. This can prevent the system from ever discovering if the customer would be interested in shoes of other colors, such as green or red, thereby limiting the customer's experience and potentially missing out on sales. The AI is simply confirming the existing data pattern rather than exploring new ones.
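The loop is easy to see in a few lines of Python. The sketch below is entirely hypothetical (no Salesforce API involved): a recommender that ranks colors purely by purchase frequency can never score a color the customer has not already bought, so each recommendation feeds the next purchase and the loop closes.

```python
from collections import Counter

purchase_history = ["blue", "blue", "blue"]

def recommend(history, k=2):
    # Rank colors purely by past purchase frequency; colors never bought
    # score zero and are effectively invisible to the customer.
    return [color for color, _ in Counter(history).most_common(k)]

for _ in range(3):
    picks = recommend(purchase_history)
    print("recommended:", picks)    # always ['blue']
    purchase_history.extend(picks)  # customer buys what was shown
```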

Survivorship Bias:
This bias occurs when an AI model only considers data from "surviving" entities (e.g., successful customers or products) while ignoring those that have failed. For instance, if the system only recommends products to customers who completed a purchase, it would ignore data from those who browsed but didn't buy, leading to a skewed understanding of customer behavior. This is not the primary issue in the Cloud Kicks scenario.

Societal Bias:
This bias arises when an AI system reflects and amplifies real-world societal prejudices, such as those related to race, gender, or age. While this is a common and serious form of AI bias, it is not directly applicable to the product recommendation logic described (recommending a product based on a non-demographic feature like color).

Bottom Line
The most probable bias in the Cloud Kicks scenario is confirmation bias, as the recommendation system is designed to reinforce a customer's past purchasing habits rather than explore new preferences, creating a feedback loop that limits product discovery and sales opportunities.

What is a potential source of bias in training data for AI models?

A. The data is collected in real time from source systems.

B. The data is skewed toward a particular demographic or source.

C. The data is collected from a diverse range of sources and demographics.

B.   The data is skewed toward a particular demographic or source.

Explanation:

A potential source of bias in training data for AI models is when the data is skewed toward a particular demographic or source (Option B). Here's a detailed explanation:

Why does skewed data cause bias? AI models learn patterns and make predictions based on the data they are trained on. If the training data is skewed toward a specific demographic (e.g., predominantly male customers, a specific age group, or a particular geographic region) or a single source (e.g., data from only one platform or region), the model may develop biases that favor those characteristics.
For example, if a product recommendation model is trained on data primarily from urban customers, it may fail to accurately predict preferences for rural customers, leading to biased or less relevant recommendations. This can result in unfair outcomes, reduced model performance, and poor customer experiences.
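A small, entirely hypothetical sketch shows how this plays out: per-group accuracy exposes the skew that a single overall metric hides.

```python
# Hypothetical (predicted, actual) pairs from a holdout set where
# urban customers dominate the training data 9-to-1.
predictions = {
    "urban": [(1, 1)] * 45 + [(0, 0)] * 40 + [(1, 0)] * 5,
    "rural": [(1, 0)] * 6 + [(0, 1)] * 2 + [(1, 1)] * 2,
}

for group, pairs in predictions.items():
    acc = sum(p == a for p, a in pairs) / len(pairs)
    print(f"{group}: accuracy {acc:.0%} over {len(pairs)} records")

# Overall accuracy looks healthy only because urban records dominate,
# which is exactly how a skewed source hides bias.
all_pairs = [pair for pairs in predictions.values() for pair in pairs]
print(f"overall: {sum(p == a for p, a in all_pairs) / len(all_pairs):.0%}")
```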

Why not real-time data? Collecting data in real time from source systems (Option A) refers to the timing and method of data collection, not its content or distribution. While real-time collection might introduce issues such as incomplete records or system errors, it is not inherently a source of bias unless the underlying data itself is skewed.

Why not diverse data? Collecting data from a diverse range of sources and demographics (Option C) is actually a strategy to reduce bias, not create it. Diverse data helps ensure the AI model is exposed to a broad representation of users, behaviors, and scenarios, leading to fairer and more accurate predictions. This is the opposite of a bias source.

Bias in AI models often stems from unbalanced or non-representative training data, making Option B the correct choice. Addressing this requires careful data collection, preprocessing, and validation to ensure the training data reflects the diversity of the intended user base.

Reference:
Salesforce Trailhead: Responsible AI - Understanding Bias in AI
Salesforce Blog: Mitigating Bias in AI Models
Salesforce AI Ethics Guidelines: Fairness and Bias in AI

A customer using Einstein Prediction Builder is confused about why a certain prediction was made. Following Salesforce's Trusted AI Principle of Transparency, which customer information should be accessible on the Salesforce Platform?

A. An explanation of how Prediction Builder works and a link to Salesforce's Trusted AI Principles

B. An explanation of the prediction's rationale and a model card that describes how the model was created

C. A marketing article about the product that clearly outlines the product's capabilities and features

B.   An explanation of the prediction's rationale and a model card that describes how the model was created

Explanation:

Transparency means users should understand why an AI model made a certain prediction and how the model was designed. In Salesforce’s Einstein Prediction Builder, this is achieved through:

Prediction Explanations (Rationale)
Salesforce provides feature importance scores and explanations so customers can see which factors most influenced the prediction.
Example: If the prediction is "Will this lead convert?", the system might show that “industry = tech” and “opportunity size > $100k” heavily influenced the outcome.
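For intuition, the sketch below trains a toy scikit-learn model on hypothetical lead data and prints which feature drove its predictions. This illustrates the concept of feature importance only; it is not how Einstein Prediction Builder computes its explanations internally.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical lead features: [industry_is_tech, opportunity_size (scaled)]
X = rng.random((200, 2))
# Construct labels so that the first feature dominates the outcome.
y = (0.8 * X[:, 0] + 0.2 * X[:, 1] > 0.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, score in zip(["industry=tech", "opportunity_size"],
                       model.feature_importances_):
    print(f"{name}: {score:.2f}")  # the first feature should rank highest
```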

Model Card
A Model Card is a document that describes how the AI model was built, the data used, assumptions made, limitations, and performance.
This promotes responsible use, reduces the “black box” effect, and helps customers interpret predictions correctly.
This aligns with Salesforce’s Trusted AI Principles, especially Transparency and Explainability.
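To make the idea concrete, here is a minimal, hypothetical model card expressed as plain Python data. The fields mirror the description above, but the schema is illustrative rather than Salesforce's actual format.

```python
# A hypothetical model card; every value below is illustrative.
model_card = {
    "model": "lead_conversion_predictor",
    "intended_use": "Score open leads for conversion likelihood",
    "training_data": "CRM leads, Jan 2023 - Dec 2024",
    "assumptions": ["Lead fields are kept current by sales operations"],
    "limitations": ["Not validated for leads outside North America"],
    "performance": {"auc": 0.81, "evaluated_on": "20% holdout"},
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```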

Why the Other Options Are Incorrect:
A. An explanation of how Prediction Builder works and a link to Salesforce's Trusted AI Principles → ❌
General info, but does not help the customer understand why their specific prediction happened. Too high-level.
C. A marketing article of the product → ❌
Marketing content explains features and benefits, not rationale or transparency. Not useful for trust or responsible AI.

📚 References:
Salesforce: Model Cards in Einstein
Salesforce Trusted AI Principles
Trailhead: Responsible AI

👉 Key Exam Tip:
Whenever the question mentions Transparency → think “users should know how/why AI made a decision” (explanations + model cards).
