Case Study 14.2: Facebook Housing Ads and the National Fair Housing Alliance Settlement

How the World's Most Sophisticated Targeting Machine Was Used to Discriminate


Background: The Fair Housing Act and Digital Advertising

The Fair Housing Act of 1968 — passed one week after the assassination of Martin Luther King Jr. — prohibits discrimination in housing transactions, including advertising, on the basis of race, color, national origin, religion, sex, familial status, or disability. Its advertising provisions specifically state that "it shall be unlawful to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin."

For decades, the Fair Housing Act's advertising provisions governed print and broadcast media. Real estate agents, landlords, and developers were prohibited from placing advertisements only in publications with predominantly white readership, from using language that signaled racial preferences, and from systematically avoiding publications with predominantly minority readership.

Digital behavioral targeting created a new mechanism for doing exactly what the Fair Housing Act prohibited — showing housing advertisements only to some racial or ethnic groups — while providing a technical and rhetorical cover that made the discrimination less visible and, initially, less legally constrained.

The ProPublica Investigation: "Facebook Lets Advertisers Exclude Users by Race"

In October 2016, journalists Julia Angwin and Terry Parris Jr. published an investigation in ProPublica demonstrating that Facebook's advertising platform allowed housing advertisers to exclude users assigned to categories Facebook called "Ethnic Affinities" — including "African American (US)," "Hispanic (US – All)," and "Asian American (US)" — from seeing their ads.

The investigation was conducted through direct testing: ProPublica reporters created a fake housing advertisement on Facebook's self-serve advertising platform and selected targeting options that would exclude African Americans, Hispanics, and Asian Americans from seeing it. Facebook's system accepted and processed the ad.

The "Ethnic Affinity" categories were not based on declared race — Facebook does not ask users to identify their race. Instead, they were behavioral inferences: users who engaged with content associated with specific racial or ethnic communities were categorized in the corresponding "affinity" segment. The affinity designation was a behavioral proxy for race.

The mechanism was, technically, behavioral targeting. The effect was racial exclusion from housing advertising. The structural result was digital redlining: people of color were systematically excluded from seeing housing opportunities that were shown to white users.

Facebook's Initial Defense

Facebook's initial response to the investigation was defensive in ways that revealed how the company framed the problem. Facebook acknowledged the practice but disputed that it amounted to racial discrimination. Spokespersons argued that "Ethnic Affinity" was a marketing category based on cultural interests and engagement, not a racial category, and that targeting on it was therefore not covered by the Fair Housing Act.

This defense had limited legal merit. Courts have consistently held that the Fair Housing Act covers facially neutral practices with discriminatory effects (disparate impact doctrine), not just explicitly discriminatory ones. Using a behavioral proxy for race to exclude people from housing advertising produces a discriminatory effect regardless of how the proxy is labeled.

Facebook also noted that its advertising policies prohibited discrimination based on protected characteristics, framing any discriminatory housing ad as a policy violation by the advertiser. This framing — individual bad-actor advertisers violating policy rather than a systemic platform design problem — was contested by investigators, who noted that the platform itself made discriminatory targeting easy, obvious, and available by default.

Subsequent Investigations: The Discrimination Runs Deeper

The ProPublica investigation prompted additional investigations that found discrimination extending beyond explicit exclusion targeting:

The ACLU's 2018 investigation found that Facebook was allowing discriminatory job advertising, with certain employment opportunities shown exclusively to men or to non-parents in violation of Title VII; parallel concerns about discriminatory credit advertising implicated the Equal Credit Opportunity Act.

The National Fair Housing Alliance (NFHA) investigation found that Facebook's advertising algorithms — even without advertisers explicitly selecting demographic exclusions — were delivering housing advertisements in ways that produced racially segregated results. When test advertisers created housing advertisements with neutral targeting (no explicit demographic exclusions), Facebook's delivery algorithm showed the ads primarily to white users in white-majority zip codes and, in racially homogeneous communities, primarily to users of the locally predominant race.

This finding was crucial: it demonstrated that the discriminatory delivery was not solely a product of advertisers choosing exclusion options — it was built into Facebook's algorithmic delivery optimization. The algorithm, optimizing for conversion (clicks and subsequent engagement), learned that housing advertisers' ads converted better among certain demographic groups (because of historical patterns in who buys or rents homes in particular price ranges and neighborhoods), and delivered accordingly. The discrimination was not merely a policy violation by specific advertisers; it was a product feature.

The charge of discrimination HUD issued in March 2019 made this same point formally. The Department of Housing and Urban Development alleged that Facebook's algorithm "effectively exempts housing-related ads from reaching" protected classes by optimizing delivery based on characteristics — including race — that are encoded in behavioral profiles. HUD's charge argued that Facebook's algorithm was itself a discriminatory actor, not merely an instrument of discriminatory advertisers.

The 2019 Settlement

In March 2019, Facebook settled with the National Fair Housing Alliance, the American Civil Liberties Union, and other fair housing and civil rights organizations. The settlement included several significant commitments:

Elimination of multicultural affinity targeting for housing, employment, and credit ads: Facebook committed to removing the ability to target (or exclude) based on "Multicultural Affinity" categories in these sensitive advertising contexts.

Age and gender targeting restrictions: Facebook agreed to restrict the use of age and gender targeting in housing, employment, and credit advertising — recognizing that even these facially non-racial targeting variables had been used in discriminatory ways.

New ad delivery system for housing, employment, and credit ads: Facebook committed to developing a new ad delivery system for this advertising that would ensure delivery patterns did not discriminate based on race or other protected characteristics. This was the most structurally significant commitment — an acknowledgment that the optimization algorithm itself needed to change.

Five-year monitoring: The settlement included a five-year monitoring period during which civil rights organizations could audit Facebook's compliance.

The HUD charge was not resolved by that settlement. It was ultimately addressed in June 2022, when the Department of Justice sued Meta on the basis of the charge and the company agreed to further changes to its ad delivery system, including a system designed to reduce demographic disparities in how housing ads are delivered, with ongoing review of its compliance.

The Residual Problem: Proxies and Optimization

The settlement addressed some mechanisms of discriminatory targeting, but civil rights researchers have argued that the underlying problem runs deeper than any specific policy change can reach.

The core problem is that behavioral data encodes historical patterns of inequality. If housing in certain price ranges in certain neighborhoods has been purchased primarily by white households — a pattern directly attributable to redlining, discriminatory lending, and exclusionary zoning — then a machine learning algorithm trained on this historical data will learn that white users are more likely to convert on luxury housing advertisements. Optimizing for conversion will produce racially skewed delivery without any explicit racial targeting.
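To make this mechanism concrete, the following sketch is a deliberately simplified, hypothetical simulation; every number and name in it is invented, and it is not a description of Facebook's actual system. It shows how a delivery rule that ranks users purely by historically observed conversion rates, keyed to a zip-code proxy, can exclude most Black users from a luxury housing ad even though race never appears in the delivery decision.

    # Illustrative toy model (not any platform's real system): optimizing ad
    # delivery purely for predicted conversion reproduces historical racial
    # skew even though race is never used as a feature.
    import random

    random.seed(0)

    # Hypothetical users: each has a zip code and (for evaluation only) a race.
    # Zip code is correlated with race because of historical segregation.
    def make_user():
        if random.random() < 0.5:
            return {"zip": "white_majority_zip", "race": "white" if random.random() < 0.9 else "black"}
        return {"zip": "black_majority_zip", "race": "black" if random.random() < 0.9 else "white"}

    users = [make_user() for _ in range(10_000)]

    # "Historical" conversion rates for a luxury housing ad, by zip code.
    # These reflect past inequality in who bought homes where, not the ad's
    # relevance to any individual user. All numbers are invented.
    historical_conversion_rate = {
        "white_majority_zip": 0.05,
        "black_majority_zip": 0.01,
    }

    # Delivery rule: show the ad to the users with the highest predicted
    # conversion. Race is never consulted here; only the zip-code proxy is.
    def predicted_conversion(user):
        return historical_conversion_rate[user["zip"]]

    budget = 2_000  # impressions the advertiser can afford
    delivered = sorted(users, key=predicted_conversion, reverse=True)[:budget]

    # Evaluation: racial mix of the population vs. the delivered audience.
    def share(group, race):
        return sum(u["race"] == race for u in group) / len(group)

    print(f"black share of all users:     {share(users, 'black'):.1%}")
    print(f"black share of ad recipients: {share(delivered, 'black'):.1%}")

In this toy setting, Black users make up roughly half of the eligible audience but only about a tenth of the ad's recipients, and no step in the delivery rule ever consults race.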

This is the mechanism by which structural inequality becomes algorithmic discrimination: not through discriminatory intent, not through explicit racial targeting, but through optimization on outcomes that encode inequality. To eliminate algorithmic discrimination in housing advertising, it is not sufficient to prohibit explicit racial exclusion. It is necessary to restructure the optimization objective — to optimize not just for commercial conversion but for equitable delivery. This is a technically and commercially challenging requirement that goes well beyond the adjustments Facebook committed to in the 2019 settlement.
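As a sketch of what restructuring the objective could mean in practice, the hypothetical function below (again invented for illustration, not drawn from any real platform) replaces pure conversion ranking with a simple demographic-parity constraint: impressions are allocated to each group in proportion to its share of the eligible audience, and users are ranked by predicted conversion only within their group.

    # Hypothetical constrained delivery rule (illustrative only): allocate the
    # impression budget across demographic groups in proportion to their share
    # of the eligible audience, then rank by predicted conversion within each
    # group. Aggregate predicted conversions fall; delivery disparity shrinks.
    from collections import defaultdict

    def parity_constrained_delivery(users, predict, budget, group_key):
        groups = defaultdict(list)
        for user in users:
            groups[group_key(user)].append(user)

        selected = []
        for members in groups.values():
            quota = round(budget * len(members) / len(users))  # proportional slice
            members.sort(key=predict, reverse=True)
            selected.extend(members[:quota])
        return selected[:budget]  # trim any rounding overshoot

Applied to the toy data above, for example as parity_constrained_delivery(users, predicted_conversion, budget, lambda u: u["race"]), the rule delivers the ad to Black and white users in roughly their population proportions, at the cost of a lower aggregate predicted conversion rate. Note what the sketch quietly assumes: to enforce the constraint, the platform must measure or estimate the very protected attribute it is trying to protect, which is itself a contested design and governance choice.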

Analysis Questions

  1. Facebook's initial defense distinguished "Ethnic Affinity" (a behavioral marketing category) from race (a protected characteristic). Evaluate this distinction. Does the mechanism of inference — behavioral rather than declared race — change the legal or ethical analysis of the discriminatory outcome?

  2. The HUD complaint alleged that Facebook's optimization algorithm was itself a discriminatory actor. This framing — algorithm as actor — has significant legal implications. Does it make sense to hold an algorithm legally liable for discrimination? Or should liability rest with the humans and companies who designed and deployed it?

  3. The research found discriminatory delivery patterns even when advertisers chose no explicit demographic targeting. What does this finding imply about the relationship between "neutral" commercial optimization and structural discrimination? Can optimization be genuinely neutral in a historically unequal society?

  4. The settlement required a five-year monitoring period. Civil rights organizations have tools for auditing ad delivery patterns but limited legal power to compel remediation if violations are found. What governance structure would provide more robust protection against algorithmic housing discrimination?

  5. The chapter calls this phenomenon "redlining 2.0" — a reference to historical housing discrimination. Is this analogy accurate? What does the comparison illuminate, and what does it obscure?


Case Study 14.2 | Chapter 14 | Part 3: Commercial Surveillance