Case Study 1: Content Moderation and the African Labor Force — Inside the Sama Group Controversy
The Investigation
In January 2023, TIME magazine published an investigation by journalist Billy Perrigo documenting the conditions experienced by content moderation workers in Nairobi, Kenya, employed by Sama Group on a contract with OpenAI. The workers were paid approximately $1.32 to $2 per hour to review text describing violent acts, sexual abuse, child exploitation, torture, and suicide, content that OpenAI used to create safety filters for ChatGPT and other AI products. The investigation documented that workers suffered significant psychological harm from this work, including post-traumatic stress symptoms, intrusive thoughts, and difficulty maintaining normal relationships; that the mental health support provided was inadequate to the harm being caused; and that workers who raised concerns faced retaliation.
The story generated substantial attention and controversy, and has become a central reference point in discussions of AI ethics and labor in the Global South. Understanding it in detail — what happened, who was responsible, what structural factors produced the outcome, and what genuine accountability would require — is essential for AI professionals and business leaders thinking about ethical AI development.
The AI Training Context
To understand the Sama Group story, it is necessary to understand the role of content moderation work in AI development. Large language models like ChatGPT are trained on enormous datasets scraped from the internet. That data includes vast quantities of harmful content: instructions for violence, sexual exploitation, hate speech, terrorism promotion, and other deeply disturbing material. Without intervention, these models learn to reproduce and generate such material when prompted.
Content moderation is the mechanism through which AI companies address this problem. Human annotators review datasets and individual content examples, classifying content as harmful or not harmful, creating labeled training datasets that allow models to learn to identify and decline to generate harmful content. This work is essential to AI safety — without it, powerful language models would be substantially more dangerous.
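The labeling pipeline described above can be sketched in miniature. The fragment below is an illustrative assumption, not OpenAI's or Sama Group's actual system: it shows one common pattern in annotation work, where several annotators judge the same text, a majority vote produces the gold label for the training set, and examples without consensus are escalated for expert review. The category names are invented for the example.

```python
from collections import Counter

# Illustrative taxonomy; real safety taxonomies are larger and more granular.
LABELS = {"safe", "violence", "sexual_content", "self_harm"}

def aggregate(annotations):
    """Collapse several annotators' judgments on one text into a gold label.

    annotations: list of (annotator_id, label) pairs for the same example.
    Returns the majority label, or None when there is no clear majority
    (such examples would typically be escalated for expert review).
    """
    counts = Counter(label for _, label in annotations)
    (top, n), *rest = counts.most_common()
    if rest and rest[0][1] == n:  # tie between top labels: no consensus
        return None
    return top

def build_dataset(raw):
    """raw: list of (text, [(annotator_id, label), ...]) pairs.

    Returns (labeled_examples, escalations): consensus examples become
    training data; disagreements are routed to expert review.
    """
    dataset, escalations = [], []
    for text, annotations in raw:
        label = aggregate(annotations)
        if label is None:
            escalations.append(text)
        else:
            dataset.append((text, label))
    return dataset, escalations
```

The resulting (text, label) pairs are what a safety classifier or refusal model would be trained on; the human cost discussed in this case study is incurred in producing exactly these judgments at scale.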
The nature of the work is inherently harmful to those who do it. Reviewing descriptions of child sexual abuse, graphic violence, torture, and other traumatic content — at the scale required to train large AI models — exposes workers to material that produces documented psychological harm. Research on content moderation workers consistently documents elevated rates of PTSD, depression, and anxiety. The Kenyan workers interviewed by TIME described intrusive thoughts, nightmares, difficulty discussing the work with family, and feeling that the work had permanently changed how they perceived the world.
Sama Group and OpenAI
Sama Group is a San Francisco-based company that markets itself as an "ethical AI" company providing data annotation services while simultaneously operating as a social enterprise providing employment to workers in developing countries, particularly in East Africa. Its marketing emphasizes the social mission: providing dignified employment and economic opportunity to marginalized workers. The company has been recognized by various impact investing and social enterprise awards and has received favorable coverage for its claimed commitment to ethical outsourcing.
OpenAI contracted with Sama Group to provide content moderation labor in Kenya. The workers performing this work for OpenAI through Sama were paid approximately $1.32 to $2 per hour, depending on seniority and the nature of the work. These wages, while above Kenya's statutory minimum, are dramatically lower than what equivalent work commands in the United States or Europe, even when that work is performed by contractors rather than direct employees.
The TIME investigation documented several specific failures in the arrangement:
Inadequate psychological support. Sama Group provided mental health support that workers and their advocates described as insufficient for the harm being caused. A single counselor was shared among a large number of workers. Mental health sessions were limited. Workers reported that concerns raised in support sessions could be shared with management, undermining the sessions' therapeutic value and discouraging honest disclosure. The intensity of the psychological harm documented — and the inadequacy of the support provided — represented a straightforward failure of duty of care.
Retaliation for organizing. When Sama Group workers began organizing to advocate for better pay, better mental health support, and more transparency about working conditions, they faced retaliation. Workers who participated in organizing activities reported being reassigned to more intensive and disturbing content, being disciplined for complaints, and ultimately facing contract terminations. Several prominent worker advocates lost their contracts with Sama Group following their public advocacy. Sama Group disputed characterizations of retaliation, but the pattern documented by multiple workers and advocacy organizations was consistent.
Opacity in the labor chain. Workers were not always clearly informed, at the point of engagement, about the specific nature of the content they would be reviewing. The employment relationship — workers employed by Sama Group on contracts that Sama held with OpenAI — created a layered structure that obscured accountability and made it difficult for workers to directly advocate to the ultimate client for improved conditions.
Termination of the contract. Sama Group ended its OpenAI contract early, and in January 2023, shortly before TIME's investigation was published, announced that it was exiting content moderation work entirely, citing what it characterized as misalignment with its social mission. Workers and advocates interpreted this as an attempt to avoid accountability for documented failures by closing the operation rather than addressing the problems. Workers affected by the terminations faced displacement without adequate severance or transition support.
The Structural Analysis
The Sama Group controversy illustrates structural patterns that extend well beyond this specific case and company.
The global labor arbitrage in AI annotation. Content moderation and data annotation work is a global labor market, structured to minimize costs by concentrating the most psychologically harmful work in the lowest-wage jurisdictions. Researchers and journalists have documented annotation work in Kenya, Uganda, the Philippines, Pakistan, Venezuela, and other lower-wage countries, consistently finding wages dramatically lower than what equivalent work commands in wealthy countries, limited labor protections, and inadequate psychological support. The Sama Group case is not an outlier; it is representative of industry practice.
The wage differential reflects both cost minimization and the global inequality that makes it possible. Workers in Nairobi who accept $2 per hour for deeply harmful work do so in a context where $2 per hour represents meaningful income above local alternatives — not because the work is worth $2 per hour, but because the bargaining position of Kenyan workers relative to OpenAI is profoundly unequal. This inequality is not created by AI companies, but it is exploited by them.
The "ethical AI" and "impact sourcing" marketing paradox. Sama Group's marketing emphasized its social mission — providing employment and economic opportunity to marginalized workers — in ways that created a false impression of ethical labor practices. The company's "ethical AI" branding implied that its AI training products were produced under ethical labor conditions. The "impact sourcing" framework, which describes outsourcing to disadvantaged communities as a development intervention, has been broadly used to market annotation labor services from lower-wage jurisdictions as socially beneficial — without adequate attention to the quality and conditions of the employment provided.
This marketing paradox is a form of ethics washing specific to the AI supply chain. Companies that claim development impact from providing employment in the Global South, while simultaneously exposing those workers to documented psychological harm without adequate support or fair compensation, are not delivering the social value they claim. They are using the development narrative to secure more favorable client contracts and investor terms while providing working conditions that do not meet the standards that "ethical" employment should require.
The accountability gap in layered contracting. The structure of the OpenAI-Sama Group relationship — with OpenAI as the client, Sama as the contractor, and Kenyan workers as Sama's employees — created an accountability gap that allowed both principal actors to point elsewhere when confronted with the documented failures. OpenAI could note that Sama Group was responsible for its workers' conditions, that OpenAI had contractual requirements about working conditions, and that it expected its contractors to comply. Sama Group could note that OpenAI's content requirements drove the harm, that OpenAI's contracts did not provide adequate funding for better mental health support, and that the structure of the AI training industry made the working conditions it provided competitive with alternatives.
Both of these deflections contain partial truth. Neither constitutes an adequate response to the documented harm. The accountability gap is not incidental to the contracting structure — it is a feature that the structure creates, providing both parties with plausible deniability while workers bear the costs.
What Genuine Accountability Requires
The Sama Group controversy provides a template for understanding what genuine accountability in AI labor supply chains would require, and how far current practices fall short.
Transparency about the labor chain. AI companies should disclose who performs the human labor involved in training and operating their AI systems, under what working conditions, and at what wages. This transparency does not currently exist in the AI industry. OpenAI and other major AI developers provide minimal public information about their annotation labor practices, the contractors they use, or the conditions those contractors maintain. Supply chain transparency of the kind now expected in apparel, electronics, and food production does not yet exist in AI.
Living wages and benefits. Content moderation workers should receive wages that reflect the genuine market value of their labor and enable a dignified life in the communities where they live — not merely wages above local statutory minimums set in contexts of profound inequality. "We pay above minimum wage" is not an adequate labor ethics standard for an industry generating billions in revenue from the labor of workers earning $2 per hour.
Adequate psychological support. Work that exposes people to traumatic content at scale requires genuinely adequate psychological support — not a checkbox counseling service but a structured program that takes the psychological risks seriously, provides support from qualified mental health professionals without confidentiality violations, enables workers to rotate off traumatic content assignments without penalty, and monitors workers' wellbeing over time.
Labor rights protection. Workers who perform content moderation and annotation labor should have the same labor rights — including the right to organize, to complain about working conditions without retaliation, and to access legal remedies for labor violations — as workers in other industries. The use of contract structures, geographic distance, and corporate complexity to insulate AI companies from labor rights obligations is ethically unacceptable.
Due diligence and monitoring. AI companies that use contractors for annotation labor have a responsibility to conduct due diligence about labor conditions before contracting, to monitor conditions throughout the contract period, and to terminate or remediate contracts where documented violations occur. The response to violations should be remediation and accountability, not contract termination that removes accountability while imposing additional harm on affected workers.
The Broader Implications
The Sama Group case has implications that extend beyond the specific practices of Sama Group and OpenAI. It raises fundamental questions about the political economy of AI development: who bears the costs of building AI systems, and who captures the benefits?
Current AI development concentrates enormous economic value in a small number of technology companies and their shareholders while distributing costs, including the psychological costs of annotation labor, the data privacy costs of training data extraction, and the environmental costs of energy-intensive computation, across broader populations who share minimally in the value created. The workers who made ChatGPT safe enough to deploy were paid roughly $2 per hour while OpenAI was reportedly negotiating a valuation of about $29 billion. This distribution of value and cost is not a natural consequence of technology development; it is the result of specific choices about labor practices, contracting structures, data governance, and the allocation of economic power.
Business leaders and AI practitioners who engage with this case as merely a contractor management problem — a failure to adequately specify and monitor vendor working conditions — are missing its deeper significance. The deeper question is whether the business model of AI development, which currently depends on a global division of labor in which the most lucrative and visible work is concentrated in wealthy countries while the most burdensome and invisible work is exported to lower-wage jurisdictions, is ethically defensible. The answer the Sama Group case suggests is: not without profound structural reform.
Discussion Questions

1. OpenAI has argued that it maintained contractual requirements about working conditions that Sama Group violated. To what extent does this argument satisfy OpenAI's ethical responsibilities to Kenyan workers? What additional obligations does OpenAI have as the ultimate client?

2. Sama Group's "ethical AI" and "impact sourcing" marketing created a favorable impression that proved misleading. Is this a case of deliberate ethics washing, or sincere aspiration that was inadequately implemented? What is the practical difference?

3. Design a labor practices standard for AI annotation work that you believe would be genuinely ethical. What wages, working conditions, psychological support, and labor rights would your standard require? Who should pay for it?