Case Study 16.1: "Facebook's Ad Discrimination Machine"
The HUD Settlement and Systemic Housing Ad Bias
Overview
In August 2018, the Department of Housing and Urban Development filed a formal complaint against Facebook, alleging that Facebook's advertising platform violated the Fair Housing Act by allowing advertisers to exclude users from seeing housing ads based on race, religion, national origin, sex, familial status, and disability. The complaint followed years of investigative journalism, academic research, and civil rights litigation demonstrating that Facebook's advertising system — advertised to marketers as the most precise targeting tool in history — was precisely targeting discrimination.
In March 2019, Facebook settled with HUD and four civil rights organizations — including the National Fair Housing Alliance, the Communications Workers of America, and the American Civil Liberties Union of Southern California — agreeing to pay $5 million to civil rights organizations and to make significant changes to its advertising system. The settlement remains the largest civil rights enforcement action in the history of digital advertising and a landmark case in the application of civil rights law to AI systems.
But the story of Facebook's ad discrimination machine does not end with the settlement. Subsequent investigation found that discriminatory ad delivery continued after the settlement in modified forms, illustrating a broader truth: enforcement actions against specific discriminatory practices cannot keep pace with the adaptive capacity of AI systems that have learned to discriminate.
The Architecture of Digital Ad Discrimination
To understand how Facebook's advertising system enabled discrimination, one must understand how the system was designed. Facebook's advertising platform gave advertisers extraordinary control over who saw their ads. The platform's targeting options allowed advertisers to select audiences based on:
Explicit demographics: Age, gender, location, language.
Interests and behaviors: Categories derived from users' activity on Facebook and on websites tracked by Facebook's pixel — interests like "home buying," "gardening," "sports cars," or thousands of others.
Ethnic Affinity: This was the category that attracted the most attention. Facebook did not collect users' race as a data point — that would have been legally and reputationally problematic. Instead, it inferred "affinity" with various ethnic communities from users' activity: pages they had liked, content they engaged with, groups they belonged to. The "African American" Ethnic Affinity category, for example, grouped users whose behavior patterns Facebook's models judged consistent with African American cultural affinity — and Facebook made that category available to advertisers as an audience inclusion or exclusion criterion.
Lookalike audiences: Advertisers could upload lists of their existing customers and ask Facebook to find "lookalike" users — people with behavioral profiles similar to the existing customers. If an advertiser's customer list was predominantly white, their lookalike audience would also be predominantly white.
Ad delivery optimization: Facebook's algorithm did not simply show ads to the audiences advertisers specified; it optimized within those audiences for the users the algorithm predicted would be most likely to engage with or convert on the ad. This optimization could itself produce demographic concentration, because Facebook's engagement models had learned — from historical data reflecting existing patterns of segregation and exclusion — that engagement rates with certain ad types varied across demographic groups.
The combination of these targeting options, each individually defensible in isolation, created a system in which discriminatory exclusion was trivially easy to implement, algorithmically optimized for effectiveness, and largely invisible to the users being excluded.
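The lookalike mechanism described above can be illustrated with a toy simulation. Everything in the sketch below (the user pool, the behavioral features, the similarity measure) is a hypothetical assumption rather than Facebook's implementation, but it shows the core dynamic: an audience built by similarity to a demographically skewed seed list inherits that skew, even though race never appears as an input.

```python
import random

random.seed(0)

# Hypothetical user pool: each user has a small behavioral feature
# vector plus a demographic label that is never used as an input.
# Groups A and B cluster in different regions of feature space,
# standing in for real-world behavioral correlates of segregation.
def make_user(group):
    center = 0.2 if group == "A" else 0.8
    return {"group": group,
            "features": [random.gauss(center, 0.1) for _ in range(3)]}

pool = [make_user("A") for _ in range(500)] + [make_user("B") for _ in range(500)]

# Seed list: an advertiser's existing customers, 90% group A.
seed = [make_user("A") for _ in range(90)] + [make_user("B") for _ in range(10)]

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u["features"], v["features"]))

def mean_seed_dist(u):
    # Similarity to the seed list as a whole: average distance to it.
    return sum(sq_dist(u, s) for s in seed) / len(seed)

# "Lookalike" audience: the 100 pool users most similar to the seeds.
lookalike = sorted(pool, key=mean_seed_dist)[:100]
share_a = sum(u["group"] == "A" for u in lookalike) / len(lookalike)
print(f"group-A share of pool: 50%; of lookalike audience: {share_a:.0%}")
```

Drawn from a pool that is 50/50 by group, the lookalike audience here ends up almost entirely group A, purely because similarity was computed against a skewed seed list.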
The ProPublica Investigations
The story of Facebook's advertising discrimination became public primarily through investigative journalism. In October 2016, ProPublica published "Facebook Lets Advertisers Exclude Users by Race," documenting that Facebook offered "Ethnic Affinity" as a targeting option and demonstrating through test purchases that advertisers could use it to exclude Black and Hispanic users from seeing housing ads. The test purchases were conducted by ProPublica journalists who created fake ad accounts and placed housing ads excluding users with Black or Hispanic affinity — a clear violation of the Fair Housing Act.
Facebook's initial response was defensive. The company argued that Ethnic Affinity targeting served legitimate purposes, including enabling targeted outreach to communities and facilitating culturally relevant advertising. It characterized the category as fundamentally different from explicit racial targeting. Following the story's publication and significant public and political pressure, Facebook removed the Ethnic Affinity option from explicit advertiser targeting choices for housing, employment, and credit categories.
But the removal of the explicit targeting option did not resolve the problem. In November 2017, ProPublica followed up with a second investigation demonstrating that Facebook's advertising system still allowed advertisers to exclude users by racial characteristics through alternative means. The investigation found that Facebook's self-serve advertising portal was automatically suggesting audience exclusions based on racial characteristics even without explicit advertiser instruction, and that the system accepted advertiser-created audiences with discriminatory characteristics.
A third investigation, in 2019, found that even after Facebook had stated that it had fixed the problem, the advertising system continued to show housing ads in demographically skewed patterns. Black and Hispanic users were seeing fewer housing ads than white users with similar financial profiles, even when advertisers had not specified any demographic targeting. The skew was being produced by Facebook's ad delivery optimization — the algorithm that chose, within a broad audience, whom to show ads to — rather than by explicit advertiser targeting choices.
The HUD Complaint and Academic Research
Parallel to the ProPublica investigations, academic researchers were documenting the discriminatory effects of Facebook's advertising system through empirical study. Researchers at Northeastern University and Upturn (a civil rights and technology nonprofit) published studies documenting that Facebook's ad delivery algorithm produced racially disparate audience distributions even when advertisers did not use explicit demographic targeting. Studies examined housing ads, employment ads, and credit ads, consistently finding that algorithmic delivery produced audiences that over-represented white users relative to Black and Hispanic users.
The HUD complaint, filed in August 2018, alleged that Facebook's advertising platform constituted a "massive countrywide machine that enables illegal discrimination" in housing. The complaint alleged:
- That Facebook enabled advertisers to use Ethnic Affinity, national origin, religion, and similar characteristics as exclusion criteria for housing ads.
- That Facebook's ad delivery algorithm, independent of advertiser targeting choices, produced discriminatory audience distributions for housing ads.
- That Facebook's use of zip code targeting enabled advertisers to exclude users in neighborhoods with high minority populations, operationalizing the neighborhood exclusions that physical redlining had historically imposed.
- That Facebook's Lookalike Audience tool enabled advertisers to build audiences that mirrored the demographic characteristics of their existing customer bases — perpetuating segregated customer relationships in the digital realm.
The complaint also raised a legally significant point: Facebook's advertising terms of service, at the time the complaint was filed, explicitly prohibited discriminatory advertising. Facebook had therefore not only enabled discrimination; it had violated its own stated commitments to advertisers and users.
The $5 Million Settlement: Terms and Limitations
The March 2019 settlement required Facebook to take the following steps:
- Create a separate advertising portal for housing, employment, and credit ads that limits the targeting options available to advertisers in these categories. The new portal would prohibit targeting by age, gender, and zip code for these ad types, and would prohibit the use of Ethnic Affinity and similar audience characteristics.
- Pay a total of $5 million, to be distributed among the plaintiff civil rights organizations for fair housing education and advocacy.
- Create a tool that lets users see the housing ads shown in their areas, regardless of whether the ads had been targeted to include them — giving excluded users access to housing opportunities they might otherwise have missed.
- Submit to monitoring by an independent civil rights auditor who would review the implementation of the settlement's requirements.
- Conduct research into the discriminatory effects of its ad delivery algorithm and take steps to mitigate identified disparities.
The settlement was significant in several respects. The $5 million payment represented the largest civil rights settlement in digital advertising history at the time. The requirement to create a separate portal with restricted targeting represented an acknowledgment that normal advertising targeting tools are incompatible with fair housing law when applied to housing advertising. And the requirement to address ad delivery algorithm disparities — not just explicit targeting choices — acknowledged that discriminatory outcomes could be produced by the optimization algorithm independently of advertiser instruction.
But critics noted significant limitations. The $5 million payment was trivial for a company generating tens of billions of dollars in annual advertising revenue. The separate housing ads portal restricted only a subset of housing-related advertising; subsequent research found that housing advertisers could effectively evade the portal's restrictions by using other ad categories. And the requirement to "research" ad delivery algorithm disparities did not specify what level of disparity would be acceptable or require specific technical interventions.
After the Settlement: Continued Discrimination
The most significant finding about the Facebook housing ad discrimination case is that the settlement did not resolve the problem. Research published after the settlement found that Facebook's advertising system continued to produce racially disparate ad delivery for housing, employment, and credit advertising, through mechanisms not directly addressed by the settlement's specific requirements.
A 2021 academic study by Sapiezynski et al. found that Facebook's ad delivery algorithm produced significant demographic skews for employment ads even when advertisers selected broad, non-demographic audiences. The algorithm had apparently learned — from historical behavioral data — that certain demographic groups were more likely to engage with certain types of job ads, and it optimized delivery accordingly. An ad for a logging company was shown predominantly to men; an ad for a janitorial services company was shown predominantly to women and racial minorities. The demographic concentration in ad delivery was not the result of advertiser targeting choices; it was produced by the algorithm's optimization for engagement.
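The delivery-side mechanism these studies describe can be sketched in the same toy style. The model and numbers below are illustrative assumptions, not the studies' data; the point is that a ranker that has learned group-conditional engagement rates will concentrate impressions demographically even when the advertiser's chosen audience is perfectly balanced.

```python
import random

random.seed(1)

# A broad, demographically neutral target audience: 50/50 by group.
audience = [{"group": g} for g in ["men"] * 1000 + ["women"] * 1000]

# Stand-in for an engagement model that learned, from historical data,
# that men clicked this type of job ad more often. The base rates are
# hypothetical; the noise makes scores vary per user.
LEARNED_RATE = {"men": 0.08, "women": 0.02}

def predicted_engagement(user):
    return LEARNED_RATE[user["group"]] + random.gauss(0, 0.01)

# Delivery step: show the ad to the 200 users with the highest
# predicted engagement. The advertiser specified no demographic
# targeting at all.
delivered = sorted(audience, key=predicted_engagement, reverse=True)[:200]
men_share = sum(u["group"] == "men" for u in delivered) / len(delivered)
print(f"men's share of delivered impressions: {men_share:.0%}")
```

With those assumed rates, the delivered audience is overwhelmingly male despite the 50/50 target audience, which is the pattern the studies attribute to delivery optimization rather than advertiser choice.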
A separate 2021 study by Ali et al. demonstrated that Facebook's algorithm could produce skewed delivery based on the creative content of the ad — not just the targeting criteria. An ad featuring a man in a business suit was delivered predominantly to men; an ad featuring the same person in different attire was delivered with a different demographic distribution. The algorithm's response to ad creative meant that even advertisers who specifically wanted to reach diverse audiences could find their ads being delivered to demographically concentrated audiences, based on the algorithm's learned associations between ad imagery and user demographics.
These findings are significant because they demonstrate that the discriminatory behavior of Facebook's advertising system cannot be fully addressed by restrictions on advertiser targeting choices. The algorithm itself encodes discriminatory patterns — learned from historical data reflecting real-world patterns of exclusion — and reproduces them in its optimization. Addressing this requires interventions in the algorithm's objective function and training data, not just restrictions on the targeting options advertisers can select.
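What an intervention in the objective function might look like can also be sketched. The parity rule below is a deliberately crude illustration of the general idea (constrain delivery shares across groups, accepting some loss of predicted engagement); it is an assumption for exposition, not a remedy the settlement required or that Facebook adopted.

```python
import random

random.seed(2)

GROUPS = ["A", "B"]

# Scored audience: the model hands group A systematically higher
# engagement scores, mirroring skew learned from historical data.
# All numbers are illustrative.
audience = (
    [{"group": "A", "score": random.gauss(0.8, 0.1)} for _ in range(1000)]
    + [{"group": "B", "score": random.gauss(0.5, 0.1)} for _ in range(1000)]
)

def deliver_top_k(users, k):
    # Pure engagement objective: rank by score, take the top k.
    return sorted(users, key=lambda u: u["score"], reverse=True)[:k]

def deliver_parity(users, k):
    # Parity-constrained objective: equal delivery shares per group,
    # taking the top scorers within each group. This trades some
    # predicted engagement for a balanced delivered audience.
    out = []
    for g in GROUPS:
        members = [u for u in users if u["group"] == g]
        out += deliver_top_k(members, k // len(GROUPS))
    return out

for name, fn in [("pure engagement", deliver_top_k),
                 ("parity-constrained", deliver_parity)]:
    shown = fn(audience, 200)
    share_a = sum(u["group"] == "A" for u in shown) / len(shown)
    mean_score = sum(u["score"] for u in shown) / len(shown)
    print(f"{name}: group-A share {share_a:.0%}, mean score {mean_score:.2f}")
```

The comparison makes the trade-off explicit: the constrained objective delivers balanced shares at a measurably lower average predicted-engagement score, which is exactly the kind of design choice the text argues cannot be reached by restricting advertiser targeting options alone.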
Analysis: What the Case Reveals About AI and Civil Rights
The Facebook housing ad case reveals several important truths about the relationship between AI advertising systems and civil rights that have implications beyond this specific case.
AI systems learn and reproduce existing discrimination. The discriminatory patterns in Facebook's ad delivery were not programmed by anyone at Facebook; they were learned from historical behavioral data that reflected real-world patterns of segregation, exclusion, and discrimination. An algorithm trained to optimize engagement will learn that certain demographics engage more with certain content, and will optimize delivery accordingly — reproducing the existing patterns in the training data. This is not a bug that can be fixed by removing an option; it is a fundamental property of optimization-based AI systems that must be actively addressed through algorithm design and objective function choice.
Discrimination without intent is still discrimination. Facebook did not intend for its advertising system to redline. The engineers who built the lookalike audience tool, the Ethnic Affinity categories, and the delivery optimization algorithm were not trying to create a discrimination machine. But discrimination in the effects is discrimination regardless of the intent, and the Fair Housing Act prohibits discriminatory effects, not just discriminatory intent. The civil rights framework is effects-based, not intent-based, for good reason: effects are what determine whether people have genuine access to opportunities.
Settlement terms must match the technical reality. The 2019 settlement focused primarily on restricting explicit demographic targeting options for housing ads — a logical response to the most visible form of the problem. But the settlement did not require interventions in the delivery optimization algorithm, and subsequent research demonstrated that the algorithm was producing discriminatory outcomes independently. Effective civil rights enforcement against AI advertising systems must engage with the technical reality of how these systems produce discriminatory outcomes, not just with the most visible or legally legible form of the problem.
Transparency is essential, but insufficient. One limitation of the settlement was that it required Facebook to conduct research into its system's discriminatory effects — but the research was internal, and its results were not subject to independent verification. Civil rights enforcement of AI advertising systems requires transparency that enables external audit — access for researchers and civil society organizations to the data needed to test for discriminatory outcomes — not just internal compliance procedures.
Reflection Questions
- Facebook argues that its advertising system is neutral — it does not contain race as a targeting variable. Is neutrality of inputs sufficient to ensure non-discrimination in outputs? What would genuine algorithmic non-discrimination require?
- The settlement required Facebook to conduct research into its ad delivery algorithm's disparate impacts. What would rigorous, independent research into this question look like? Who should have access to the results?
- Should the civil rights agencies have sought a larger monetary penalty? What level of penalty would actually deter discriminatory advertising practices by major platforms?
- The advertising discrimination revealed by this case affects primarily housing, employment, and credit — the domains covered by major civil rights statutes. But algorithmic advertising discrimination may exist in other domains too (consumer goods, healthcare information, educational opportunities). Should civil rights protection extend further into commercial advertising?
- The 2019 settlement has not resolved Facebook's housing ad discrimination. What additional regulatory or technical requirements would be needed to ensure genuine algorithmic non-discrimination in housing advertising?