Case Study 5.1: How Ethical AI Became a Competitive Advantage — Salesforce's Journey

"Business is the greatest platform for change." — Marc Benioff, CEO, Salesforce


Overview

Salesforce is one of the world's largest enterprise software companies, with annual revenue exceeding $34 billion and a workforce of more than 70,000 employees globally. Its core product, the Sales Cloud CRM platform, processes data on billions of customer interactions. Its AI platform, Einstein, powers predictive analytics, automated lead scoring, and customer service automation across thousands of enterprise deployments.

By almost any measure, Salesforce is a company with an enormous AI footprint — and, therefore, enormous AI ethics exposure. What makes Salesforce's story instructive is not that the company avoided AI ethics problems entirely (it did not), but that it made genuine institutional investments in AI ethics governance and found that those investments generated competitive returns.

This case study traces that journey: from the trigger events that forced the question, through the organizational response, to the competitive consequences.


1. How Salesforce Started Thinking About AI Ethics

The Trigger Events

Salesforce did not arrive at AI ethics through philosophical reflection. It arrived through a series of external pressures that made the question unavoidable.

The first pressure was the broader technology industry reckoning of 2017–2018. A cascade of AI ethics scandals — Google Photos' image-mislabeling failures, Amazon's scrapped hiring algorithm, Cambridge Analytica's misuse of Facebook data — generated public, regulatory, and investor scrutiny of technology companies' ethical responsibilities in a way that was qualitatively different from anything the industry had experienced before. For Salesforce's leadership, the question was not whether this scrutiny would arrive; it was whether the company would be positioned to respond credibly when it did.

The second pressure was internal. In 2017 and 2018, Salesforce employees — organized in part through an internal advocacy group called the Salesforce Ethics Alliance — began raising questions about the company's government contracts. Specifically, employees were concerned about contracts with US Customs and Border Protection (CBP) during a period of intense public controversy over immigration enforcement practices, including the family separation policy at the US-Mexico border.

In 2018, more than 650 Salesforce employees signed an open letter to CEO Marc Benioff asking the company not to provide services to CBP. The letter cited the company's stated values — Trust, Customer Success, Innovation, Equality — and asked the company to live them consistently.

The third pressure was external stakeholder demand. Enterprise customers — large organizations with their own AI governance requirements and their own constituencies to answer to — were increasingly asking Salesforce about its AI ethics practices. A company selling AI-powered CRM to hospitals, financial institutions, and government agencies could not avoid questions about the ethics of the AI it was selling.

The CEO's Response

Marc Benioff's response to the employee letter was notable for what it was not: it was not a corporate communications exercise that appeared to engage while avoiding substance. Benioff personally responded to employees, acknowledged the legitimacy of their concerns, and committed to a process — not just a statement.

In the public statement, Benioff wrote: "I want to address the concerns of Salesforce employees about our work with U.S. Customs and Border Protection... I would like to say that I understand and respect the concerns of our employees. I feel them too." He acknowledged that the contract pre-dated the family separation policy and committed to convening a task force to review the company's government contracts through an ethical lens.

Whether or not one agrees with Salesforce's ultimate decisions on CBP and other government contracts, the quality of the organizational response — genuine engagement rather than dismissal — was itself notable and shaped what came next.


2. The Creation of the Office of Ethical and Humane Use

Structure and Mandate

In 2019, Salesforce created the Office of Ethical and Humane Use of Technology, headed by Chief Ethical and Humane Use Officer Paula Goldman. The office was given a mandate to establish ethical guidelines for how Salesforce's technology — including its AI platform — could and could not be used.

Several features of the office's design made it substantively different from a typical corporate ethics function:

Cross-functional authority: The Office of Ethical and Humane Use was positioned with authority to engage across business units, including sales, product, and legal. It was not purely advisory; it could participate in contract review processes and raise flags that required leadership response.

A usage policy framework: The office developed and published an Acceptable Use Policy for Salesforce's platforms, identifying categories of use that the company would not support. The policy prohibited using Salesforce technology to promote illegal discrimination, enable mass surveillance, or facilitate serious human rights violations.

A case review process: When a potential contract or use case raised ethical questions, the office had a formal review process — not unlike a legal review process — that could produce a recommendation for or against proceeding.

Public accountability: The office published an annual report on its activities, including the number of cases reviewed and the outcomes. This public accountability was significant because it created a verifiable record that critics, employees, and regulators could examine.

The Governance Design Principle

The design of the Office of Ethical and Humane Use reflected a specific governance principle: ethics must be built into the institutional architecture, not appended to it. A company that has an ethics statement but no ethics function is making a statement without making a commitment. A company that has an ethics function without authority is making a commitment without a mechanism.

Salesforce's design gave the ethics function enough institutional standing to be a real participant in business decisions — not enough to have veto power over every decision, but enough to ensure that ethical considerations were genuinely weighed.


3. The Trusted AI Principles

What They Are

Salesforce published its Trusted AI Principles in 2020, establishing a framework for how Einstein AI should be designed, deployed, and governed. The principles address five dimensions:

Responsible: AI must be developed and deployed in a way that avoids harm and respects human dignity. This includes requirements for human oversight of high-stakes AI decisions, clear protocols for when AI should defer to human judgment, and proactive consideration of potential harms in AI system design.

Accountable: Salesforce must be able to answer for what its AI systems do. This requires documentation of AI system design decisions, audit trails for AI-mediated decisions, and clear processes for investigating and remediating AI-related harms.

Transparent: Users of AI systems — including both enterprise customers and the end users those customers serve — have a right to know when AI is influencing decisions about them and on what basis. Salesforce committed to providing transparency tools that enable its customers to meet their own transparency obligations.

Empowering: AI should increase human capability and autonomy, not substitute for human judgment in ways that leave people without meaningful agency. The principle directly addresses the risk of automation bias — the tendency of humans to over-rely on AI recommendations even when their own judgment would serve better.

Inclusive: AI systems must be designed to work fairly and accurately across diverse populations. This includes commitments to testing AI systems across demographic groups and to involving diverse communities in AI design and review processes.
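The inclusivity commitment implies concrete testing, not just aspiration. A minimal sketch of what a demographic fairness check might look like in practice follows; the metric (demographic parity gap), the data, and the function names are illustrative assumptions, not Salesforce's actual testing procedure:

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive predictions (e.g., 'qualified lead') per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups.

    A large gap flags the model for review; it does not by itself prove bias.
    """
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical lead-scoring outputs for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

In a production setting this kind of check would run over held-out evaluation data as part of a model release gate, with thresholds set by the governance function rather than the modeling team.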

What They Actually Commit To

The Trusted AI Principles are unusual in the AI ethics landscape because they are accompanied by more specific implementation guidance. Rather than stopping at the level of values, Salesforce has published technical documentation — including Ethical Use Product Guides — that translates the principles into specific product requirements.

For example, the commitment to inclusivity is operationalized through requirements for model cards — documentation of AI model design, training data characteristics, performance across demographic groups, and known limitations. The commitment to transparency is operationalized through Einstein Prediction Explainability, a product feature that provides users with explanations for AI recommendations.

These product-level commitments create accountability mechanisms. If Einstein makes a prediction that violates the transparency principle — providing no explanation for a significant decision — there is a product failure, not just an ethics failure. The ethics principles are woven into the product requirements, not merely stated alongside them.


4. How Ethical AI Became Embedded in Enterprise Sales Conversations

The Procurement Shift

By 2021, enterprise technology procurement had changed significantly. Large customers in regulated industries — financial services, healthcare, government — were incorporating AI ethics due diligence into their vendor evaluation processes. Procurement questionnaires asked vendors to document their AI governance structures, describe their fairness testing procedures, and demonstrate that their AI systems produced explainable outputs.

For Salesforce, this shift created a strategic opportunity. The company had built AI ethics infrastructure — governance, documentation, product features, and principles — before its competitors had been compelled to do so by the market. When enterprise customers asked AI ethics questions in procurement processes, Salesforce had credible answers.

The Sales Narrative

Salesforce sales teams were trained to discuss ethical AI as a differentiator, not a defensive concession. The sales narrative — developed with input from the Office of Ethical and Humane Use — positioned Salesforce's responsible AI investments as evidence of product quality and organizational trustworthiness: "When you deploy Einstein, you know it has been built to be explainable, to work fairly across your customer base, and to keep your humans in the loop for high-stakes decisions."

This framing was particularly effective in regulated industries. A financial services firm choosing between AI vendors could not simply choose the most technically capable vendor; it needed a vendor whose AI governance documentation would satisfy its own regulators. Salesforce's documented, auditable AI ethics program was a procurement advantage.

Quantifying the Market Access Value

Salesforce has not publicly disclosed the revenue attributable specifically to its AI ethics positioning. What is documented is the growth of its regulated-industry segments — financial services cloud, healthcare and life sciences cloud, government cloud — which are precisely the segments where AI ethics due diligence is most stringent. These segments have grown faster than the company overall in recent years, though multiple factors contribute to that growth.


5. The Slack Acquisition: Ethics Considerations in M&A

The Transaction Context

In December 2020, Salesforce announced its acquisition of Slack for approximately $27.7 billion — one of the largest software acquisitions in history. Slack, a workplace messaging platform, raised ethical considerations distinct from those of Salesforce's existing product portfolio, particularly around employee monitoring, data privacy, and the potential for Slack data to be used in ways that employees did not expect or consent to.

Ethics Due Diligence in M&A

The Slack acquisition included an explicit ethics due diligence component. The Office of Ethical and Humane Use participated in the acquisition review process, assessing Slack's existing data practices, privacy architecture, and AI features against Salesforce's Trusted AI Principles.

The due diligence process identified several areas requiring attention: Slack's data retention policies, the use of user communication data in AI feature development, and the integration between Slack data and Salesforce's AI platform (Einstein). The outcome was a set of integration commitments designed to ensure that Slack data was used in ways consistent with Salesforce's stated privacy and transparency principles.

The Significance for Business Practice

The inclusion of ethics due diligence in a major M&A process is an organizational practice with significant precedent value. Most M&A due diligence processes focus on financial, legal, operational, and technical risk. Ethics due diligence — asking whether an acquisition target's practices are consistent with the acquirer's stated values — is unusual.

Salesforce's approach signals a maturation of AI ethics governance: from a set of principles applied to owned AI systems, to a set of standards applied to the AI systems and data practices of acquired companies. As AI systems become more deeply integrated into business processes, the ethics of acquired systems becomes a material consideration in M&A transactions.


6. Employee Relations: The Role of Employee Advocacy Groups

The Salesforce Ethics Alliance

The Salesforce Ethics Alliance, the internal employee advocacy group that organized the 2018 letter on CBP contracts, did not dissolve after the company created the Office of Ethical and Humane Use. It remained active as a channel through which employees raised ethics concerns and held the company accountable to its stated commitments.

The relationship between the formal ethics function and the informal employee advocacy group is important to understand. The formal function — the Office of Ethical and Humane Use — has institutional authority and organizational standing. The informal function — the employee advocacy group — has moral legitimacy and the ability to generate internal and external pressure.

When these two channels work together, they create a more robust ethics governance system than either could alone. The formal function provides process and authority; the informal function provides energy and accountability. When they work against each other — when employees perceive that the formal function is capturing and neutralizing ethics concerns rather than acting on them — the result is the ethics washing dynamic that this chapter's Section 5.9 describes.

Salesforce, at its best, has maintained a productive tension between the two channels. That tension has produced genuine organizational debates about contract decisions, product features, and company practices — debates that have sometimes resulted in policy changes.

Equality as a Core Value

Salesforce has made equality — including equality across gender, race, LGBTQ+ status, and other dimensions — a stated organizational priority, backed by specific investments. The company's annual equal pay assessment, which has resulted in significant salary adjustments over multiple years, and its Employee Equality Groups (EEGs) for underrepresented employees are concrete expressions of this commitment.

These equality investments are connected to AI ethics in a specific way: a diverse, equitable workforce is more likely to identify the kinds of AI failures — failures that affect underrepresented groups — that a homogeneous workforce would miss. Equality is not separate from AI quality; it is a component of it.


7. Tensions and Criticisms

The CBP Contract Resolution

After an extended internal review process, Salesforce ultimately maintained its contract with CBP. The company concluded — after the task force review — that its technology was being used for personnel management purposes rather than for enforcement activities directly related to the family separation policy. Many employees were not satisfied with this conclusion, and a number left the company.

The resolution illustrates a fundamental tension in corporate AI ethics: companies face genuine conflicts between their ethics commitments and their commercial interests, and they do not always resolve those conflicts in favor of ethics. Salesforce's AI ethics governance made this conflict visible and forced a genuine deliberative process; it did not guarantee an outcome that satisfied all stakeholders.

Criticisms of the Trusted AI Principles

External critics have raised several concerns about Salesforce's AI ethics program:

Scope limitations: The Trusted AI Principles primarily govern Salesforce's own AI development (Einstein). They are less clearly operative for AI applications that customers build on Salesforce's platform using Salesforce's tools. Customer-built AI applications can potentially violate ethical standards that Salesforce has committed to for its own products.

Enforcement asymmetries: The Acceptable Use Policy prohibits certain uses of Salesforce's technology. But enforcement relies primarily on contractual mechanisms — Salesforce can terminate contracts for violations — rather than technical mechanisms that prevent violating uses. Proactive enforcement is resource-intensive and incomplete.

Measurement opacity: While Salesforce publishes annual reports from the Office of Ethical and Humane Use, the reports focus primarily on process metrics (cases reviewed, training delivered) rather than outcome metrics (harms prevented, fairness improvements achieved). The gap between process documentation and outcome measurement is a persistent challenge in AI ethics program evaluation.

These criticisms are not evidence that Salesforce's program is fraudulent or ineffective. They are evidence of the genuine difficulty of building AI ethics governance at enterprise scale, and of the distance between good intentions and measurable impact.


8. Business Outcomes: Revenue, Talent, and Regulatory Standing

Revenue and Market Position

Salesforce's regulated-industry cloud products — which are directly supported by the company's AI ethics positioning — have been among its fastest-growing segments. The financial services cloud and healthcare cloud, in particular, serve customers in industries with stringent AI governance requirements.

The company's consistent appearance on "most ethical companies" lists (including Ethisphere's annual ranking, on which Salesforce has appeared multiple times) generates brand value that is difficult to attribute directly to revenue but is measurable in terms of brand awareness and enterprise reputation surveys.

Talent Outcomes

Salesforce consistently ranks highly in employee satisfaction surveys, including Glassdoor's Best Places to Work rankings (it appeared in the top 10 multiple times between 2018 and 2023). The company's stated values, including its ethics commitments, are consistently cited by employees as reasons for joining and remaining with the company.

The talent outcome is particularly significant given the AI ethics dimension: researchers and engineers who care about building AI responsibly have options. Salesforce's visible ethics program attracts talent that might otherwise choose academic or non-profit AI ethics careers.

Regulatory Standing

Salesforce has not faced a major AI ethics enforcement action. This absence is not evidence that its AI systems are perfect — no AI systems are — but it does reflect a regulatory relationship that is different from that of companies that have faced major enforcement actions.

Proactive engagement with regulators — through comment letters, standard-setting processes, and policy advocacy — has positioned Salesforce as a responsible actor in regulatory conversations rather than a subject of regulatory action. This positioning has business value: it provides early notice of regulatory direction, reduces the probability of surprise enforcement actions, and creates goodwill that may be drawn upon if and when specific AI systems are reviewed.


9. Lessons for Other Companies

The Salesforce case offers several transferable lessons for organizations at different stages of AI ethics development:

1. Ethics governance requires institutional investment, not just statements. The creation of the Office of Ethical and Humane Use — with budget, staff, authority, and public accountability — is what distinguishes Salesforce's approach from a principles document on a website.

2. Employee advocacy is a resource, not a threat. Organizations that treat employee ethics advocacy as a communication challenge to be managed will miss the substantive value that engaged employees can provide in identifying risks and improving practices.

3. Ethics due diligence should extend to M&A. The inclusion of ethics considerations in the Slack acquisition review is a practice that more organizations should adopt as AI systems become central to the assets they acquire.

4. Transparency about limitations builds more trust than claims of perfection. Salesforce's acknowledgment of the tensions in its CBP decision, and its publication of ethics program reports that note areas of ongoing challenge, creates more credible ethics positioning than a narrative of frictionless ethical compliance.

5. The competitive advantage is real but requires sustained investment. Ethics positioning as a differentiator requires ongoing investment — in governance infrastructure, product features, talent, and public engagement — not a one-time declaration.


Discussion Questions

  1. Salesforce maintained its CBP contract despite employee pressure, concluding that the specific use case was acceptable under its ethical guidelines. Do you agree with this conclusion? What process would you use to evaluate a similar question in your organization?

  2. The case study notes that Salesforce's Trusted AI Principles apply primarily to its own AI development (Einstein) rather than to AI applications that customers build on its platform. How should a technology platform company think about its ethical responsibility for the downstream uses of its tools?

  3. Salesforce's ethics program has been praised for its structural features (authority, accountability, transparency) while being criticized for gaps in outcome measurement. Design a measurement framework for Salesforce's AI ethics program that goes beyond process metrics to capture outcome data. What would you measure, and how?

  4. The case study describes a "productive tension" between Salesforce's formal ethics function and its informal employee advocacy groups. How do you maintain that productive tension without it becoming either captured (the formal function absorbs and neutralizes employee concerns) or destabilizing (advocacy becomes a constant source of organizational friction)?


Sources for this case study include Salesforce's published Trusted AI Principles, annual reports from the Office of Ethical and Humane Use, public employee communications, academic analyses of Salesforce's AI governance practices, and contemporaneous press coverage of the CBP contract controversy. Where specific financial figures are cited, they are drawn from Salesforce's public financial disclosures.