
Chapter 16: Transparency in AI Marketing and Advertising

Part III: Transparency and Explainability


Opening: The Feature That Discriminated

In October 2016, a pair of journalists from ProPublica made a purchase. They created a fake Facebook account for a fictional employer and attempted to place a job advertisement. Using Facebook's ad targeting tools, they discovered they could filter their ad audience to exclude anyone Facebook had categorized as African American or Hispanic — meaning their help-wanted ad would not be shown to Black or Latino job seekers, regardless of their qualifications. Facebook called this category of targeting "Ethnic Affinity" and offered it as a standard option in its advertising platform. It had been available for years.

This was not a bug. It was not a hacker exploiting a vulnerability. It was a feature — a tool that Facebook's advertising system offered advertisers as a way to refine audience targeting. Housing advertisers used it to exclude certain ethnic groups from seeing listings, replicating the redlining that the Fair Housing Act had outlawed in 1968. Credit advertisers used it to exclude certain demographic groups from seeing loan offers. Employment advertisers used it to exclude minority job seekers from seeing help-wanted ads. And Facebook's AI-powered advertising system facilitated all of it, at scale, with algorithmic precision.

The case — which led to roughly $5 million in settlements of civil rights litigation in 2019, discrimination charges from the Department of Housing and Urban Development, and years of further litigation — illustrates the central ethical challenge of AI in marketing: the same algorithmic power that makes targeted advertising extraordinarily effective also makes it extraordinarily easy to discriminate, manipulate, and deceive. And because these capabilities are embedded in algorithmic systems that operate at scale and largely invisibly, the accountability mechanisms that might have caught earlier forms of discriminatory advertising are poorly suited to catch their digital successors.

This chapter examines how AI powers modern marketing and advertising, what ethical obligations arise from these capabilities, and how regulatory frameworks are evolving to address them. The themes that run through this examination are familiar from the broader AI ethics literature: the power asymmetry between platforms and their users; the gap between formal legal compliance and genuine ethical practice; the risk that algorithmic optimization for commercial goals produces discriminatory or manipulative outcomes; and the global variation in how these challenges are being addressed.


Learning Objectives

By the end of this chapter, students will be able to:

  1. Describe the landscape of AI-powered marketing and advertising, including programmatic advertising, behavioral targeting, dynamic pricing, and AI-generated content.

  2. Identify the transparency obligations that apply to AI-generated advertising content under FTC guidelines, EU regulation, and comparable frameworks in other jurisdictions.

  3. Analyze how discriminatory advertising targeting operates through AI systems, explain the legal frameworks that apply, and evaluate the adequacy of existing enforcement.

  4. Distinguish between legitimate personalization and manipulation in AI-powered advertising, and identify the specific practices that constitute dark patterns.

  5. Evaluate the ethical and legal dimensions of dynamic pricing and AI-powered price discrimination, including disparate impacts on protected demographic groups.

  6. Apply frameworks for AI-generated content disclosure to specific marketing scenarios, including deepfakes, synthetic endorsements, and AI-written copy.

  7. Assess the transparency requirements for AI-powered content recommendation systems under the EU Digital Services Act and comparable frameworks.

  8. Design ethical AI marketing practices that satisfy legal requirements while building genuine consumer trust.


Section 16.1: AI in Marketing and Advertising — The Landscape

The modern marketing and advertising ecosystem runs on artificial intelligence. From the millisecond-level auctions that determine which ad appears on a webpage to the personalized email campaign that finds you at just the right moment, AI is the infrastructure of commercial persuasion. Understanding what AI does in this domain — and therefore what ethical obligations arise — requires understanding the technical landscape.

Programmatic Advertising: Real-Time Bidding

Programmatic advertising is the automated buying and selling of digital advertising space. When you load a webpage or open an app, a real-time auction occurs in milliseconds: publishers (website and app owners) offer advertising slots, advertisers bid for those slots based on who they believe is loading the page, and the highest bidder's ad appears. This entire process — detection, evaluation, bid submission, auction resolution, and ad delivery — occurs before the page finishes loading.
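The auction mechanics described above can be sketched in a few lines. The example below resolves a single ad slot with a second-price auction — the design many exchanges historically used, though several large exchanges have since moved to first-price auctions. The advertiser names and bid values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # bid in dollars per impression

def run_auction(bids):
    """Resolve one ad slot: the highest bidder wins but pays the
    runner-up's bid (the second-price rule)."""
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    price = ranked[1].amount if len(ranked) > 1 else winner.amount
    return winner.advertiser, price

# One page load triggers one auction among three hypothetical advertisers
print(run_auction([Bid("shoes_co", 2.10), Bid("travel_co", 3.40), Bid("bank_co", 2.95)]))
# -> ('travel_co', 2.95): travel_co wins but pays bank_co's bid
```

In a real exchange, each bid amount would itself be set by the bidder's AI from the user profile and predicted response rate — the auction shown here is only the final resolution step.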

The AI that powers programmatic advertising performs several functions. It profiles users based on their browsing history, demographic data, purchase behavior, and hundreds of other signals. It predicts which users are most likely to respond to which advertisers. It sets bid prices dynamically, automatically adjusting based on user profile, competitive bids, and campaign performance. And it learns over time, continuously updating its models of user behavior and advertiser ROI.

The scale of programmatic advertising is staggering. Google, Meta (Facebook and Instagram), and Amazon together account for roughly 70% of US digital advertising revenue. Each of these platforms processes billions of ad auctions daily, maintains detailed profiles on hundreds of millions or billions of users, and deploys AI models of immense sophistication to optimize advertiser outcomes. The advertising AI ecosystem also includes hundreds of smaller players — demand-side platforms (DSPs), supply-side platforms (SSPs), data management platforms (DMPs), and advertising agencies — each adding additional algorithmic layers.

Behavioral Targeting: Using Data to Influence

Behavioral targeting uses data about users' online behavior — what they search for, what they read, what they buy, where they go, who they communicate with — to predict preferences and intentions, and to show them advertising believed to be more relevant to their current interests. More sophisticated behavioral targeting extends to predicting future behavior: not what you want now, but what you will want in three months, based on behavioral patterns that precede that purchase.

The data inputs for behavioral targeting extend far beyond explicit online actions. Location data (from mobile apps with location permissions) reveals where you live, work, worship, and seek healthcare. Purchase data (from loyalty programs, credit card companies, and retail partners) reveals spending patterns. Health app data reveals exercise habits, sleep patterns, and potential medical conditions. Social media data reveals relationships, views, and emotional states. These data streams are combined and modeled to produce consumer profiles of extraordinary granularity.

The ethical concerns with behavioral targeting are multiple. Users typically have limited awareness of how much data is being collected about them and how it is being used. The consent mechanisms through which users nominally agree to this data collection — cookie pop-ups, terms of service agreements — are designed to facilitate agreement rather than enable genuine informed consent. And the insights behavioral targeting derives from this data can be used in ways that users would strongly object to if they understood them.

Personalization: Recommendations, Curation, Dynamic Messaging

AI personalization encompasses a range of commercial techniques: product recommendation systems (like Amazon's "customers who bought this also bought"), content curation algorithms (like Netflix's thumbnail selection or Spotify's Discover Weekly), and dynamic message personalization (email campaigns whose subject lines, images, and content are customized to individual recipients). Each of these techniques uses machine learning to predict what an individual user is most likely to find compelling, and to present information accordingly.
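A minimal illustration of the "customers who bought this also bought" idea is item co-occurrence counting — not Amazon's actual algorithm, which is proprietary and far more sophisticated, but a sketch of the underlying logic. The purchase baskets below are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets, one per customer
baskets = [
    {"laptop", "mouse", "bag"},
    {"laptop", "mouse"},
    {"laptop", "monitor"},
    {"phone", "case"},
]

# Count how often each ordered pair of items appears in the same basket
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, k=2):
    """Rank other items by how often they co-occur with `item`."""
    scores = Counter({other: n for (i, other), n in co_counts.items() if i == item})
    return [other for other, _ in scores.most_common(k)]

print(also_bought("laptop"))  # 'mouse' co-occurs most often with 'laptop'
```

Production recommenders replace raw counts with learned embeddings and engagement predictions, which is precisely where the incentive to optimize for engagement rather than user welfare enters.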

Recommendation systems have become among the most commercially important AI applications. Amazon has reported that its recommendation engine drives roughly 35% of its total revenue. Netflix estimates that personalized recommendations prevent significant subscriber churn by ensuring users consistently find content worth watching. The commercial power of recommendation AI is substantial — which also means the incentives to optimize for recommendation engagement rather than user welfare are substantial.

Lookalike Audiences: The Amplification of Existing Bias

Lookalike audience targeting is a technique used by most major advertising platforms. Advertisers provide a "seed" list — typically their existing customers or their most valuable customers — and the platform's AI identifies users who resemble the seed list in relevant ways, building a larger audience for the advertiser's campaigns. The logic is that people who are similar to existing customers are likely to be responsive to the same advertising.

Lookalike audiences have a well-documented bias problem: they replicate and amplify the characteristics of the seed list. If an advertiser's existing customer base is 85% white, male, and high-income — as is the case for many financial products, technology goods, and professional services — the lookalike audience will also be predominantly white, male, and high-income. Job ads targeted to a lookalike audience built from existing employees will perpetuate existing workforce demographics. Loan offers targeted to lookalike audiences built from existing customers will replicate existing patterns of credit access. This occurs even without any deliberate discriminatory intent on the part of the advertiser — the discriminatory outcome is produced by the algorithm's optimization, not by anyone choosing to discriminate.
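The amplification effect can be shown with a toy model. The assumption here — invented for illustration — is a single behavioral feature whose distribution differs between two groups; a "lookalike" built by matching candidates to the seed list's average ends up demographically concentrated even though group membership is never an input.

```python
import random

random.seed(1)

def make_user(group):
    # Invented behavioral gap: one feature distributed differently per group
    base = 0.7 if group == "M" else 0.3
    return {"group": group, "feature": base + random.gauss(0, 0.15)}

# Seed list mirrors a skewed existing customer base: 85% group M
seed_list = [make_user("M") for _ in range(85)] + [make_user("F") for _ in range(15)]
candidates = [make_user(random.choice("MF")) for _ in range(5_000)]

# "Lookalike": keep the candidates closest to the seed list's average feature
seed_mean = sum(u["feature"] for u in seed_list) / len(seed_list)
audience = sorted(candidates, key=lambda u: abs(u["feature"] - seed_mean))[:500]

share_m = sum(u["group"] == "M" for u in audience) / len(audience)
print(f"Seed list 85% group M -> lookalike audience {share_m:.0%} group M")
```

Real lookalike systems match on thousands of features rather than one, but the dynamic is the same: similarity to a skewed seed reproduces the skew.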

AI-Generated Content in Marketing

The most recent development in AI-powered marketing is the generation of marketing content itself. Large language models and image generation systems are now used to produce product descriptions, email marketing copy, social media content, advertising scripts, and visual advertising assets. Platforms like Jasper, Copy.ai, and the marketing capabilities built into ChatGPT and similar models are used by marketing departments of all sizes.

The scale of AI-generated marketing content is growing rapidly. In 2024, it was estimated that AI-generated or AI-assisted content accounted for a significant and rapidly growing fraction of commercial content online. This creates disclosure challenges — when consumers believe they are reading human-authored content but are in fact reading AI-generated text, they may be misled about the authenticity of the message. And AI-generated content has known quality problems: inaccuracies, hallucinated facts, and text that sounds plausible but is factually wrong, all of which create brand safety and consumer protection risks.


Section 16.2: The Transparency Obligation in Advertising

The fundamental principle of advertising regulation, dating to the early twentieth century, is that advertising must be identifiable as advertising. Consumers who know they are being sold to can apply appropriate skepticism; consumers who believe they are receiving neutral information cannot. This principle — embodied in FTC enforcement doctrine, the Federal Trade Commission Act, and analogous regulations globally — has been tested and complicated by AI in several important ways.

FTC Endorsement Guidelines and AI-Generated Content

The FTC's "Guides Concerning the Use of Endorsements and Testimonials in Advertising," substantially updated in 2023, set requirements for when influencer endorsements, user testimonials, and similar content must be disclosed as advertising. The 2023 update explicitly addresses AI-generated content, holding that AI-generated endorsements must be disclosed and that brands are responsible for ensuring that AI-generated content about their products is accurate and not misleading.

The FTC's enforcement approach holds that the fundamental question is whether a communication is likely to mislead consumers acting reasonably. AI-generated content that creates the impression of authentic human experience or endorsement — a fake consumer review, a synthetic endorser who does not actually use the product, an AI-generated testimonial — is deceptive if it misleads consumers who would make different purchasing decisions if they knew the content was AI-generated.

The FTC's 2023 guidance also addressed the responsibility chain in AI-generated advertising: advertisers, not just the AI vendors who produce the tools, are responsible for ensuring that AI-generated content in their campaigns meets truthfulness and disclosure requirements. This is a significant expansion of accountability, because it means companies cannot outsource responsibility for misleading AI content to their technology vendors.

The Native Advertising Problem

Native advertising — paid content designed to resemble editorial or journalistic content — has been a regulatory challenge since before AI was widely deployed in marketing. AI compounds the problem. Large language models can generate content that is nearly indistinguishable from human journalism, opinion, or user-generated content, at volumes that make individual review impractical. When such content is deployed without disclosure — either through paid placement in media outlets or through organic channels — it deceives consumers about the nature and credibility of the information they are receiving.

The FTC's native advertising guidance requires that native ads be clearly labeled as advertising, using language that is understandable to the typical consumer rather than industry jargon. But enforcement has been limited, and the rapid expansion of AI-generated content has outpaced regulatory capacity. Publishers who deploy AI-generated "editorial" content on their websites without disclosure, brands that place AI-generated "thought leadership" in industry media as if it were independent analysis, and marketers who use AI to generate fake user reviews — all of these practices raise disclosure obligations that existing enforcement has not consistently addressed.

The Deepfake Celebrity Endorsement Problem

A particularly vivid illustration of AI transparency challenges in advertising is the deepfake celebrity endorsement — a synthetic video or audio in which a celebrity's likeness and voice are used to endorse a product without their consent. Such content has proliferated rapidly since high-quality voice and video synthesis became accessible to commercial actors.

Deepfake endorsements violate multiple legal frameworks: right of publicity law (using a celebrity's likeness without consent), FTC rules on endorsement disclosure, and potentially fraud statutes if consumers are deceived into purchasing products based on false endorsements. The challenge is detection and enforcement at scale. AI-generated synthetic media can be difficult to distinguish from authentic content, particularly as generation quality improves, and enforcement actions are reactive and slow relative to the speed at which deceptive content can proliferate.

The SAG-AFTRA union's 2023 agreement with entertainment studios on AI replicas — which requires explicit actor consent and compensation for AI-generated replicas of their performances — represents one contractual mechanism for addressing the problem in the entertainment context. Its implications for advertising are less clear, as the actors' union has less leverage over commercial advertisers than over entertainment studios.

International Variation in Disclosure Requirements

The disclosure obligations that apply to AI-generated advertising content vary significantly by jurisdiction. In the European Union, the EU AI Act requires disclosure when AI systems interact with users in ways that are not immediately apparent — including AI-generated content that impersonates humans. The EU Digital Services Act requires very large platforms to provide advertisers with information about the targeting parameters used for their ads and to maintain ad libraries accessible to researchers. In the United Kingdom, the Advertising Standards Authority has published guidance on AI-generated advertising disclosure. In Canada, the Competition Bureau has applied existing deceptive advertising standards to AI-generated content.

The US regulatory framework remains the least prescriptive. FTC guidance applies to specific deceptive practices, but there is no federal statute specifically requiring disclosure of AI-generated content as such. Several states have enacted or proposed AI disclosure requirements, and the FTC has proposed rulemaking that would establish clearer disclosure standards. But the US approach remains substantially reactive and enforcement-based rather than proactive and regulation-based.


Section 16.3: Discriminatory Targeting — When Personalization Becomes Discrimination

The most serious ethical issue in AI-powered advertising is not transparency about AI involvement — it is discrimination. The same algorithmic tools that allow advertisers to target with precision can be used, deliberately or accidentally, to exclude protected groups from access to opportunities: jobs, housing, credit, education. And the power of AI to operationalize discrimination at scale, without any individual making a discriminatory choice, has fundamentally changed the enforcement challenge.

The Facebook HUD Settlement: How Discrimination Was Built In

Facebook's "Ethnic Affinity" targeting option — the tool that allowed advertisers to exclude users by perceived race from their ad audiences — was only the most visible example of a systematic feature of the platform's advertising system. When ProPublica exposed it in 2016, Facebook initially defended the practice as serving legitimate advertising purposes (different communities, it argued, have different linguistic and cultural preferences that marketers need to address) before eventually removing the "Ethnic Affinity" category from explicit advertiser options.

But removing the explicit option did not resolve the discriminatory functionality of Facebook's advertising system. A subsequent investigation by ProPublica, the ACLU, and academic researchers demonstrated that Facebook's algorithm continued to optimize ad delivery in ways that produced racially disparate audience distributions, even without explicit demographic targeting by advertisers. Advertisers who ran housing ads saw their ads delivered to predominantly white audiences. Advertisers who ran employment ads for female-dominated professions saw their ads delivered to predominantly female audiences. The discrimination was produced not by human choices — no Facebook employee or advertiser was individually selecting to show the ad to white people rather than Black people — but by the algorithm's optimization for engagement, which had learned that engagement rates varied across demographic groups for these ad types.

In 2019, Facebook settled lawsuits brought by the National Fair Housing Alliance, the ACLU, and other civil rights organizations for roughly $5 million, agreeing to create a separate advertising portal for housing, employment, and credit ads that would prohibit certain types of demographic targeting. The settlement also required Facebook to create mechanisms allowing users to see why they were (or were not) shown housing, employment, and credit ads. The Department of Housing and Urban Development separately charged Facebook with violating the Fair Housing Act that same year. Despite these settlements, subsequent investigations found that discriminatory audience delivery continued in modified forms.

How Discriminatory Targeting Works: The Proxy Problem

The persistent challenge in regulating discriminatory ad targeting is the proxy problem. Even when advertisers are prohibited from targeting (or excluding) users by race, gender, or national origin explicitly, AI systems can achieve the same effect through proxy variables. Geographic targeting (showing ads only in certain zip codes) functions as a proxy for race when neighborhoods are racially segregated — as most American neighborhoods are, to some degree. Targeting by interests, behaviors, or "lookalike" audiences built from homogeneous seed lists produces demographic concentration that correlates with protected class even when no explicit demographic variable is used.
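A toy simulation makes the proxy mechanism concrete. Assume, purely for illustration, that 90% of group A lives in zip 1 and 90% of group B in zip 2. An advertiser who targets zip 1 never sees group labels, yet the delivered audience skews heavily toward group A.

```python
import random

random.seed(0)

# Toy population: residence is segregated, so zip code correlates with group
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed segregation: 90% of group A in zip 1, 90% of group B in zip 2
    p_zip1 = 0.9 if group == "A" else 0.1
    population.append({"group": group, "zip": 1 if random.random() < p_zip1 else 2})

# The advertiser targets zip 1 only; group is never an input to the system
audience = [p for p in population if p["zip"] == 1]
share_a = sum(p["group"] == "A" for p in audience) / len(audience)
print(f"Group A share of targeted audience: {share_a:.0%}")  # close to 90%
```

The same arithmetic applies to any behavioral or contextual variable that correlates with a protected class — which, in rich behavioral datasets, is nearly all of them.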

Research by Datta, Tschantz, and Datta (2015) demonstrated that Google's ad delivery system showed high-paying job ads to men significantly more often than to women, without any advertiser having instructed the system to discriminate by gender. The algorithm had apparently learned that engagement with certain professional ads was correlated with gender, and optimized delivery accordingly. Subsequent academic research has replicated and extended this finding across advertising platforms and ad categories.

The proxy problem fundamentally challenges the enforcement model built on prohibiting explicit demographic targeting. Legal prohibitions on using race or gender as explicit targeting criteria do not reach algorithmic systems that achieve the same demographic concentration through proxy variables — and the data richness of modern behavioral advertising means that essentially any demographic characteristic can be closely approximated from behavioral and contextual proxies.

Three federal civil rights statutes are directly applicable to discriminatory advertising targeting. The Fair Housing Act (FHA) prohibits advertising that indicates a preference for or against particular protected classes in the sale or rental of housing. The Equal Credit Opportunity Act (ECOA) prohibits discrimination in credit advertising based on race, color, religion, national origin, sex, marital status, or age. Title VII of the Civil Rights Act prohibits discrimination in employment, which courts have interpreted to include employment advertising that excludes protected classes from receiving information about job opportunities.

The application of these statutes to algorithmic advertising is legally contested but directionally clear. The FHA explicitly prohibits "making, printing, or publishing... any notice, statement, or advertisement" that expresses a discriminatory preference — language broad enough to encompass algorithmic ad delivery systems. Courts and regulators have applied the disparate impact standard: advertising systems that produce discriminatory results, regardless of discriminatory intent, may violate these statutes.

The settlement in National Fair Housing Alliance v. Facebook (2018) — in which Facebook agreed to create separate portals for housing, credit, and employment advertising and to limit demographic targeting for these categories — represents the most significant application of civil rights law to algorithmic advertising to date. But the settlement's effectiveness has been questioned: researchers have found evidence of continued discriminatory delivery in Facebook's housing, employment, and credit advertising even after the settlement, suggesting that technical implementation of the settlement's requirements was incomplete.

The Algorithm's Role: Producing Discrimination Without Discriminatory Intent

Perhaps the most important insight in the discriminatory advertising literature is that discriminatory outcomes do not require discriminatory intent. An advertiser who genuinely has no intention to exclude any demographic group can still produce discriminatory outcomes if they use targeting methods — lookalike audiences, behavioral clusters, engagement optimization — that effectively concentrate ad delivery in demographically homogeneous groups.

This is not merely a theoretical possibility; it is an empirically documented regularity. Multiple peer-reviewed studies have found that algorithmic advertising optimization produces demographically skewed delivery patterns that replicate historical patterns of exclusion, in the absence of any explicit discriminatory instruction from advertisers or platforms. The algorithm simply learns what the existing world looks like — including its patterns of exclusion and concentration — and reproduces those patterns, optimizing for engagement within the demographics that have historically been targeted.

This finding has profound implications for both law and ethics. A legal framework that only prohibits intentional discrimination will not catch algorithmic discrimination of this type. A corporate culture that equates "we didn't mean to discriminate" with "we didn't discriminate" will systematically under-invest in identifying and correcting algorithmic discrimination. Meaningful ethical practice requires treating discriminatory algorithmic outcomes as a problem requiring correction regardless of intent — and it requires proactive auditing, not just reactive enforcement.


Section 16.4: Behavioral Manipulation and Dark Patterns

The ethical analysis of AI in marketing must distinguish between persuasion and manipulation. Persuasion — presenting genuine information about a product in an appealing way — is a legitimate commercial practice. Manipulation — exploiting cognitive biases, information asymmetries, or emotional vulnerabilities to produce decisions that do not serve the individual's genuine interests — is not.

The Spectrum from Persuasion to Manipulation

Persuasion and manipulation exist on a spectrum, and the line between them is contested. What is clear is that at the manipulation end of the spectrum are practices that are widely recognized as ethically objectionable: false or misleading product claims, deceptive pricing, targeting people in states of emotional vulnerability with products that exploit that vulnerability, and designing user interfaces to make certain choices difficult, obscure, or inadvertent.

AI has shifted the practical center of this spectrum by enabling manipulation to be personalized and automated at scale. A manipulation technique that was effective on some portion of the population in the pre-AI era can now be identified, refined, and deployed specifically against the individuals who are most susceptible to it, at precisely the moments when their susceptibility is highest.

Dark Patterns: Designed to Deceive

Dark patterns are user interface (UI) designs that deliberately trick or mislead users into making choices that serve the company's interests at the user's expense. The term was coined by UX designer Harry Brignull in 2010, and the phenomenon has been extensively documented since. Classic dark patterns include: subscription traps (easy to subscribe, difficult to cancel); privacy zuckering (default settings that share maximum data, presented in confusing ways); confirmshaming (making the opt-out option involve agreeing to an emotionally loaded statement like "No thanks, I don't want to save money"); and misdirection (drawing attention away from an unfavorable condition in a transaction).

AI enables a new category of dark patterns: personalized dark patterns. Instead of using a one-size-fits-all deceptive design, an AI-powered platform can identify what type of manipulative framing is most effective for each individual user, based on their behavioral history. Research by Mathur et al. (2019) documented over 1,800 instances of dark patterns on e-commerce websites. Subsequent research has documented the personalization of dark patterns through AI — showing that platforms deploy different manipulative strategies based on inferred user characteristics.

The FTC has pursued dark patterns enforcement aggressively in recent years. Its 2022 report, "Bringing Dark Patterns to Light," documented the prevalence of dark patterns and signaled an enforcement priority. Subsequent enforcement actions against Amazon (for the Prime subscription trap) and against Epic Games (for dark patterns in Fortnite that led children to make unauthorized purchases) imposed substantial financial penalties and required companies to redesign practices identified as dark patterns.

AI-Powered Emotional Targeting

Particularly troubling at the manipulation end of the spectrum is the use of AI to identify and exploit emotional states. Research has documented that Facebook's ad delivery system can identify users who are in states of emotional vulnerability — loneliness, anxiety, insecurity — and can signal this information to advertisers targeting those states. A 2017 document leaked from Facebook Australia described a capability that could identify "moments when young people need a confidence boost," which advertisers could use to target insecure teenagers with advertising for products promising social acceptance.

Facebook denied that the system functioned as the document described, but the document reflected the aspiration — shared by many advertising platforms — to use AI to identify emotional states and optimize ad delivery to capitalize on them. The ethical objection to this practice is not that emotions should be irrelevant to marketing. It is that targeting people specifically when they are most vulnerable to manipulation — when their judgment is most impaired by emotional distress — is not persuasion. It is exploitation.

The Children's Advertising Problem

Children are the most vulnerable population in advertising contexts, and COPPA (the Children's Online Privacy Protection Act) places significant restrictions on collecting and using data from children under 13 for advertising purposes. AI has complicated COPPA compliance in several ways. AI recommendation and targeting systems deployed on platforms used by both adults and children — YouTube, TikTok, Instagram — must be designed to identify when they are interacting with children and to adjust their data practices accordingly, a task that has proven technically and operationally difficult.

The FTC's enforcement action against YouTube (2019, $170 million settlement) and subsequent enforcement against TikTok found that these platforms collected behavioral data from children in violation of COPPA, using it to serve targeted advertising. The settlements required both companies to create COPPA-compliant modes that do not use behavioral data for targeting, a remedy that significantly reduces advertising revenue from content with high child viewership.


Section 16.5: Dynamic Pricing and Price Discrimination

AI enables price discrimination of unprecedented granularity. Where traditional pricing set different prices for different product categories or customer segments defined in advance, AI dynamic pricing can set a different price for each individual customer in each individual transaction, based on a continuously updated model of that customer's willingness to pay.

How AI Enables Dynamic Pricing

Dynamic pricing using AI works by modeling the relationship between price and purchase probability for individual customers, then setting prices to maximize expected revenue. Amazon reportedly changes product prices millions of times per day, with prices varying by customer, time of day, day of week, competitive conditions, and inventory levels. Airline pricing has used dynamic optimization for decades; AI has extended this approach to retail, hospitality, ride-sharing, food delivery, insurance, and many other sectors.

The AI models that power dynamic pricing draw on behavioral data about individual customers: their browsing history (which reveals their level of interest in a product), their purchase history (which reveals their price sensitivity), their geographic location (which correlates with income and local competition), and sometimes their device type (Mac users have been shown different prices than PC users, on the theory that Apple users have higher income). The result is prices that are less a reflection of cost plus margin than a personalized extraction of individual willingness to pay.
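The optimization at the core of dynamic pricing can be sketched as expected-revenue maximization under an assumed demand model. The logistic purchase-probability curve and the sensitivity values below are illustrative inventions, not any retailer's actual model.

```python
import math

def purchase_prob(price, sensitivity, reference_price=50.0):
    """Assumed logistic demand: probability of purchase falls as price
    rises, and falls faster for more price-sensitive customers."""
    return 1.0 / (1.0 + math.exp(sensitivity * (price - reference_price)))

def optimal_price(sensitivity, candidates=range(10, 101)):
    """Pick the price that maximizes expected revenue = price * P(purchase)."""
    return max(candidates, key=lambda p: p * purchase_prob(p, sensitivity))

# A customer modeled as less price-sensitive is quoted a higher price than
# one modeled as more price-sensitive -- the personalized extraction of
# willingness to pay described above
print(optimal_price(sensitivity=0.02), optimal_price(sensitivity=0.20))
```

In deployment, the sensitivity parameter is itself estimated per customer from browsing, purchase, location, and device signals — which is exactly where the fairness concerns discussed below arise.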

In the United States, price discrimination is generally legal for non-regulated goods. The Robinson-Patman Act's prohibitions on price discrimination apply to sales to competing business buyers, not to consumer pricing. There is no federal statute that generally prohibits charging different consumers different prices for the same product. Insurers, utilities, and other regulated industries face price discrimination limits, but retail, hospitality, and most digital commerce do not.

The legal permissibility of dynamic pricing does not resolve its ethical dimensions. When AI dynamic pricing consistently produces different prices for customers based on characteristics that correlate with race, income, or geography — which, given residential and behavioral segregation, is a predictable consequence of using location and behavioral data — it raises fairness concerns that legal permissibility does not address.

Insurance Pricing: Credit Scoring as Discrimination

The insurance industry's use of credit-based insurance scoring is one of the most studied and contested applications of AI-powered price discrimination. Insurers use credit scores and related financial data to price auto and homeowner's insurance policies, on the theory that credit behavior predicts claims behavior. The practice is legal in most states.

The problem is that credit scores correlate significantly with race and income. Research by the Consumer Federation of America and others has found that Black and Hispanic consumers consistently pay more for auto insurance than white consumers with identical driving records and coverage levels, with a significant portion of this disparity attributable to credit-based pricing. In states where credit-based insurance scoring is used most extensively, racial disparities in insurance premiums are largest.

The insurance industry argues that credit-based pricing is actuarially accurate — that it predicts claims — and that actuarially accurate pricing should not be prohibited merely because it produces racially disparate results. Critics argue that using financial characteristics that are themselves products of discriminatory systems (historically discriminatory lending, discriminatory employment, and wealth gaps produced by centuries of exclusion) perpetuates those systems in new domains. This debate has not been resolved, but several states have moved to restrict or ban credit-based insurance scoring, and the practice is prohibited for health insurance under the ACA.

Amazon's Dynamic Pricing and Uber Surge Pricing

Amazon's dynamic pricing system generates significant scrutiny because its opacity makes it difficult for consumers to know whether the price they see is the price others see. Amazon has adjusted prices based on competition from third-party sellers, geographic availability, and individual user browsing behavior. Research has found that Amazon prices for some products correlate with the demographic characteristics of the zip codes in which users are located — raising concerns that dynamic pricing produces geographically disparate pricing that correlates with race and income.

Uber's surge pricing — algorithmically increasing prices during periods of high demand — produces disparate geographic impacts. Surge pricing occurs most frequently in areas with high restaurant density, entertainment venues, and affluent residential areas, where demand is highest. Users in lower-income areas and outer neighborhoods may pay consistently higher average prices per mile because their areas are served by fewer drivers relative to demand. The spatial pattern of surge pricing reproduces and amplifies geographic inequalities in transportation access.


Section 16.6: AI-Generated Content and Authenticity

The proliferation of AI-generated content in marketing and advertising raises fundamental questions about authenticity — about whether consumers can trust that what they see is real, that endorsements reflect genuine experience, that content reflects human judgment rather than algorithmic optimization.

The Deepfake Problem in Advertising

Deepfake technology — AI systems that generate synthetic video and audio that appear to show real people saying or doing things they never said or did — has created a new category of deceptive advertising. Celebrity deepfakes have been used to create unauthorized product endorsements; voice synthesis has been used to create fake testimonials from recognizable figures; image synthesis has been used to create fake user testimonials featuring realistic-looking people who do not exist.

These practices violate multiple legal provisions, as noted in Section 16.2. The legal challenge is enforcement at scale and across jurisdictions: deepfake advertising can be created in one country, hosted in another, and distributed globally, making it difficult to identify the responsible party and apply effective legal remedies. Technology to detect AI-generated synthetic media exists and is improving, but the detection arms race between synthesis quality and detection capability has generally favored synthesis.

Brand Safety and AI Content Moderation

A somewhat different authenticity problem arises when advertisers' content appears adjacent to inappropriate content on platforms that use AI to determine ad placement. Brand safety — ensuring that ads do not appear next to extremist content, graphic violence, or other material that would embarrass the advertiser — is a significant concern for major advertisers, and platforms have deployed AI content moderation systems to identify and exclude problematic content from ad-supported inventory.

These AI content moderation systems have their own bias and accuracy problems. They have been documented to disproportionately flag content from LGBTQ+ creators, Black creators, and creators discussing disability — reducing the advertising revenue available to these creators through systematic misclassification. The brand safety AI thus replicates discrimination in a new context: by over-flagging content from certain communities as unsuitable for advertising, it reduces those communities' access to the advertising-funded economy.

The SAG-AFTRA AI Agreement

In 2023, after a months-long strike, SAG-AFTRA (the Screen Actors Guild-American Federation of Television and Radio Artists) reached an agreement with the Alliance of Motion Picture and Television Producers (AMPTP), which represents the major studios, that included provisions governing the use of AI-generated replicas of actors' performances. Under the agreement, studios must obtain actors' informed consent to create AI replicas of their voices and likenesses, must disclose when replicas are used, and must compensate actors appropriately.

The SAG-AFTRA agreement covers entertainment content, not advertising per se — though the relevant actors' agreements and union contracts extend to commercial advertising. The principle of consent and compensation for AI replicas has significant implications for marketing: brands that use AI-generated celebrity voices, likenesses, or performance styles in advertising without consent and compensation are likely to face both union enforcement and, increasingly, legal action under right of publicity statutes.


Section 16.7: Transparency Requirements for AI-Powered Recommendations

When platforms use AI to curate the content users see — determining which products appear in search results, which posts appear in social media feeds, which articles are recommended — this curation has significant commercial and social effects. The question of whether platforms must disclose how these systems work is contested but increasingly addressed by regulation.

EU Digital Services Act: Algorithmic Transparency

The EU Digital Services Act (DSA), effective in phased stages through 2024, imposes substantial transparency requirements on very large online platforms (those with more than 45 million monthly active users in the EU). These include requirements to maintain a publicly accessible advertising library disclosing information about all ads shown on the platform, the targeting parameters used, and the advertiser's identity. The DSA also requires platforms to offer users at least one recommendation system that is not based on profiling — allowing users to see content based on criteria other than their behavioral history.

For very large platforms, the DSA requires that researchers and civil society organizations have access to data that enables them to audit recommendation systems for systemic risks, including risks of discriminatory content distribution. This is a significant expansion of research access beyond what has been voluntarily provided by platforms historically.

EU AI Act: Transparency for Recommendation Systems

The EU AI Act, which was enacted in 2024 and phases into effect through 2026-2027, classifies certain types of AI systems as "limited risk" and requires transparency measures rather than the more extensive conformity assessments required for high-risk systems. Recommendation systems used to personalize content, products, or services are subject to disclosure requirements: users must be able to understand that AI-driven recommendation is occurring and, for significant recommendations, must have the option to opt out of profiling-based personalization.

The AI Act also requires disclosure of AI-generated content: images, audio, and video generated or substantially modified by AI must be labeled as such. This requirement is intended to address deepfakes and synthetic media, but its implementation for AI-assisted content — content that blends human and AI contributions — raises questions that the implementing regulations are still working out.

US Comparison: Limited Federal Requirements

The United States has no federal statutory equivalent to the EU DSA or EU AI Act transparency requirements for recommendation systems. Platforms are not required by federal law to disclose how their recommendation algorithms work, to provide algorithmic alternatives to profiling-based recommendations, or to maintain public advertising libraries (though Meta, Google, and Twitter/X have created some voluntary or settlement-driven versions of these).

The gap between US and EU transparency requirements for platform algorithms is substantial and creates compliance complexity for global platforms, which must meet EU standards in Europe while operating under less stringent US requirements. The practical effect is that EU users have somewhat more transparency into and control over recommendation algorithms than US users — though implementation of DSA requirements has proceeded slowly and enforcement has lagged.

California has enacted several AI transparency-related statutes. The California Consumer Privacy Act (CCPA) and its amendments give consumers rights to know what personal information is collected and used, including for advertising targeting, and to opt out of the sale or sharing of personal information for advertising purposes. AB 587 (2022) requires social media companies to publish content moderation and recommendation algorithm policies. Additional California legislation on AI disclosure is advancing through the legislative process.


Section 16.8: Building Ethical AI Marketing Practices

Against the backdrop of discriminatory targeting, behavioral manipulation, dynamic pricing disparities, and authenticity challenges, what does ethical AI marketing practice look like? This section offers a framework — not a comprehensive compliance guide, but an orientation toward practices that align commercial AI use with genuine ethical standards.

The most fundamental choice in AI marketing ethics is the targeting model. Opt-out behavioral targeting — collecting data about all users and using it for targeting unless users take affirmative action to stop it — has been the dominant model in digital advertising. Opt-in targeting — collecting and using behavioral data only from users who have affirmatively consented — is more privacy-protective and more aligned with consumer expectations.

The shift from opt-out to opt-in targeting is not merely a regulatory requirement (though GDPR and similar frameworks require it in many contexts). It is a business model choice with meaningful implications for the type of commercial relationship an organization builds with its customers. Brands that build on consent-based models develop audience relationships built on trust rather than extraction. They may reach smaller audiences initially, but they reach audiences that have chosen to hear from them.

Contextual advertising offers an alternative to behavioral targeting that does not require individual behavioral tracking at all. Contextual advertising places ads based on the content of the page or app the user is viewing — showing travel ads on a travel website, financial services ads on a financial news site — without tracking the user's behavior across sites. Research on contextual advertising's effectiveness is mixed: it is generally less targeted and therefore less effective at an individual level, but it has documented advantages in brand safety and consumer trust.

Bias Testing of Advertising Algorithms

Organizations that deploy advertising AI should conduct proactive bias testing — not after problems have been identified through external investigation, but as a routine part of algorithm development and deployment. Bias testing for advertising algorithms should assess: whether the algorithm produces discriminatory ad delivery patterns for protected classes in employment, housing, and credit advertising; whether lookalike audience construction produces homogeneous audiences that exclude protected groups; whether dynamic pricing produces systematically disparate prices correlated with geographic proxies for race or income.

Bias testing requires access to demographic data that raises its own privacy considerations, and methodological choices about what constitutes discriminatory disparity are contested. But the alternative — deploying advertising AI without testing for discriminatory outcomes and learning about disparities only through enforcement actions or investigative journalism — is both ethically insufficient and organizationally risky.
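One simple form such routine testing can take is a disparate-delivery screen over aggregate impression counts. The sketch below is a heuristic, not a legal test: the group labels are placeholders, and the 0.8 threshold borrows the EEOC "four-fifths rule" from employment selection as a rough flagging convention, which is an assumption rather than an established standard for ad delivery.

```python
# Hedged illustration: screening an ad campaign's delivery for
# group-level disparity. Data and threshold are hypothetical.

def delivery_rates(impressions: dict, eligible: dict) -> dict:
    """Fraction of each eligible group actually shown the ad."""
    return {g: impressions[g] / eligible[g] for g in eligible}

def disparate_impact_ratios(rates: dict, reference_group: str) -> dict:
    """Each group's delivery rate relative to the reference group."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical delivery data for a job ad: equal eligible populations,
# unequal delivery.
eligible    = {"group_a": 100_000, "group_b": 100_000}
impressions = {"group_a": 12_000,  "group_b": 6_000}

rates   = delivery_rates(impressions, eligible)
ratios  = disparate_impact_ratios(rates, reference_group="group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b's delivery rate is half of group_a's
```

A flagged result is a prompt for investigation, not a verdict: as the paragraph above notes, both the demographic data this requires and the choice of disparity threshold are themselves contested.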

The Vendor Accountability Chain

Most organizations that use AI in marketing do not build their own advertising AI systems; they use platforms, ad tech vendors, and tools provided by third parties. The ethical obligations around advertising AI extend to the entire vendor chain: organizations bear responsibility for ensuring that the ad tech systems they use meet ethical and legal standards, even when those systems are provided by third-party vendors.

Practical steps in the vendor accountability chain include: due diligence on potential ad tech partners' practices, including their approach to discriminatory targeting and dark patterns; contractual requirements that vendor systems comply with applicable civil rights laws and relevant advertising regulations; ongoing monitoring of vendor AI performance for discriminatory or manipulative outcomes; and the capacity to respond effectively when vendors' systems produce ethical violations.

The vendor accountability challenge is made more complex by the layered structure of the advertising technology ecosystem. A brand working with an advertising agency that uses a demand-side platform that sources inventory from publishers using supply-side platforms that rely on data management platforms — this is a chain of four or five distinct vendor relationships, each adding algorithmic layers that the brand at the top of the chain cannot directly see or control. Discriminatory targeting, dark patterns, or data violations that occur at any link in this chain create legal and reputational exposure for the brand, even when the brand was not the proximate actor.

The EU Digital Services Act addresses part of this problem through what it calls "systemic risk" obligations — requiring very large platforms to identify and mitigate risks produced by their advertising systems, including risks of discrimination and manipulation — but the downstream vendor chain extending to smaller ad tech companies is less well regulated. Organizations that wish to genuinely manage this risk need contractual audit rights, standardized vendor disclosure requirements, and technical monitoring capabilities that most marketing organizations have not yet built.

Creative Review Processes for AI-Generated Content

Organizations that use AI to generate marketing content — product descriptions, email campaigns, social media posts, ad creative — need review processes that assess both the accuracy and the disclosure obligations of AI-generated outputs. Creative review for AI-generated content differs from review of human-generated content in several important ways.

Human creative review traditionally focuses on brand voice, factual accuracy, legal compliance, and strategic alignment. Review of AI-generated content must additionally address: whether the content contains AI-characteristic errors (hallucinated facts, plausible-sounding but inaccurate claims, outdated information); whether the content has appropriately disclosed its AI-generated nature where required; whether the content meets the organization's standards for representation and avoidance of bias (AI models can reflect and amplify demographic biases in their training data); and whether the content is genuinely the organization's own communicative act or is AI-generated content for which the organization is taking responsibility without adequate review.

The last point is particularly important for FTC disclosure obligations: organizations are responsible for advertising they publish, including AI-generated advertising, regardless of whether a human reviewed and approved every word. Review processes that simply approve AI output without meaningful human assessment of accuracy and appropriateness create liability and reputational risk. The question "did we mean to say this?" must be answerable for all published content.

Platform Responsibility and the Ethics of Advertising Infrastructure

The advertising AI ethics discussion often focuses on advertiser behavior — what choices advertisers make about targeting, content, and disclosure. But the platforms that provide the advertising infrastructure bear substantial responsibility for the ethical dimensions of the advertising AI ecosystem they have created.

Platforms make choices about what targeting capabilities to offer, how to optimize ad delivery, what discriminatory targeting practices to permit or prohibit, what dark pattern designs to allow in advertiser-created content, and how transparent to be with advertisers, users, and regulators about how their systems work. These choices shape the entire advertising ecosystem, because advertisers can only do what platforms allow them to do.

The record of major advertising platforms on these choices is mixed. Meta removed explicit Ethnic Affinity targeting for housing, employment, and credit advertising after external pressure — but maintained delivery optimization that continued to produce discriminatory effects. Google's advertising platform has implemented targeting restrictions for financial products, healthcare, and political advertising — but has faced criticism for inconsistent enforcement. Amazon's advertising practices have raised questions about the advantage its advertising system gives to its own products relative to third-party sellers. And all major platforms have faced criticism for insufficient transparency about how their advertising AI systems work.

Platform accountability for advertising AI requires: proactive rather than reactive identification and mitigation of discriminatory targeting and delivery; transparent disclosure of advertising system capabilities and limitations to advertisers and to regulators; meaningful access for civil society researchers to data needed to audit advertising AI systems; and effective enforcement of advertising policies rather than policies that are well-stated but weakly enforced.

The EU Digital Services Act's requirements for very large platforms — advertising libraries, algorithmic transparency, research data access — represent the most advanced regulatory framework for platform advertising accountability. Comparable requirements in the United States would significantly improve the advertising AI accountability landscape.

Ethics Washing in Advertising AI

A recurring theme in this textbook — ethics washing — is particularly prevalent in AI marketing. Major advertising platforms and ad tech companies have developed extensive ethics and responsibility frameworks: published principles, internal review boards, self-regulatory programs, and public commitments to fairness, transparency, and user protection. These frameworks are often genuine in aspiration but inadequate in implementation. The gap between stated principles and actual algorithmic outcomes — which the ProPublica investigations and academic research have repeatedly documented — is the advertising AI ethics washing problem.

Ethics washing in advertising AI takes several recognizable forms. A platform may publish a detailed "responsible AI" framework that prohibits discriminatory targeting, while its enforcement of that framework is reactive rather than proactive — triggered only when discrimination is exposed through external investigation rather than internal monitoring. A company may commit to "transparency" by publishing high-level descriptions of its targeting capabilities, while the practical opacity of its real-time bidding auctions and delivery optimization makes it impossible for affected individuals or regulators to understand how their data is being used. A vendor may claim its AI is "bias-free" because it does not use race as an explicit input, while its engagement optimization produces racially skewed delivery through proxy variables.

Distinguishing genuine ethical practice from ethics washing requires asking not what an organization says but what it does: What proactive bias testing does it conduct, and with what methodology? What access does it provide to independent researchers who want to audit its systems? What does it do when internal monitoring reveals discriminatory outcomes before external pressure arrives? What accountability mechanisms exist for employees who raise concerns about advertising AI practices? The answers to these questions reveal far more about an organization's genuine ethical commitments than its published principles.

Global Variation in Advertising AI Regulation

AI advertising regulation varies significantly across jurisdictions, and organizations operating globally must navigate a complex and evolving patchwork of requirements. Understanding the key regulatory differences is essential for multinational advertising strategy.

The European Union has the most comprehensive framework. GDPR restricts behavioral advertising based on personal data without valid legal basis — which in most consumer contexts means explicit opt-in consent. The EU AI Act adds transparency requirements for AI systems used in advertising. The Digital Services Act imposes advertising library, targeting transparency, and research access requirements on very large platforms. The EU Political Advertising Regulation, adopted in 2024, prohibits targeting based on sensitive personal characteristics for political advertising. Together, these frameworks create a regulatory environment in which behavioral advertising is permitted but significantly constrained.

The United Kingdom, following Brexit, maintains UK GDPR — substantially equivalent to EU GDPR — and the Advertising Standards Authority's Code of Non-broadcast Advertising, which has issued guidance on AI-generated advertising. The UK's Competition and Markets Authority has conducted investigations into digital advertising markets that are increasingly attentive to AI's role.

Canada's PIPEDA (Personal Information Protection and Electronic Documents Act) and its proposed successor, the Consumer Privacy Protection Act, provide a consent-based framework for personal data use in advertising. Canada's Competition Bureau has applied existing deceptive advertising standards to AI-generated content. The proposed Artificial Intelligence and Data Act (AIDA) adds AI-specific transparency requirements for high-impact systems.

Australia's Privacy Act and its recent amendments move Australia toward a consent-based framework for personal data use, including in advertising. The Australian Consumer Law prohibits misleading or deceptive conduct that extends to AI-generated advertising. Brazil's LGPD (Lei Geral de Proteção de Dados) closely mirrors GDPR and creates similar constraints on behavioral advertising. India's Digital Personal Data Protection Act (2023) creates a new consent framework for personal data, with advertising implications still being worked out through rules.

The United States remains the major exception to the global trend toward consent-based advertising data regulation. The United States' sectoral, opt-out-based framework is increasingly out of step with global regulatory convergence, creating both compliance complexity for multinational advertisers and meaningful differences in the protections available to US consumers compared with their counterparts in other jurisdictions.

For organizations operating globally, the practical effect of this variation is that EU and GDPR-equivalent standards often become the de facto global standard — because it is operationally simpler to apply the most demanding standard globally than to maintain jurisdiction-specific data practices. This is sometimes called the "Brussels Effect" — EU regulation effectively raising global standards by making compliance with the strictest framework the most efficient approach for multinational businesses.

Diversity, Inclusion, and the Advertising Opportunity

One dimension of AI advertising ethics that deserves more attention than it typically receives is the opportunity dimension. Much of the analysis in this chapter has focused on the harms of discriminatory advertising AI — the people excluded from job ads, housing ads, and credit offers. But there is a complementary positive question: how can AI advertising be used to expand access and inclusion rather than perpetuate exclusion?

Some organizations have experimented with using AI advertising to affirmatively reach underrepresented communities — using targeted advertising to reach job seekers from historically excluded groups, to connect residents of underserved communities with financial products they qualify for, and to advertise educational and professional opportunities to people who would not otherwise know they exist. These positive uses of AI targeting face the same technical challenges as the discriminatory uses: lookalike audiences built from historically homogeneous populations will not produce more diverse audiences; algorithmic optimization for engagement will not automatically increase access if historical engagement rates are lower in underrepresented communities.

Affirmative use of advertising AI for inclusion requires deliberate design: building seed audiences from diverse populations; setting optimization objectives that explicitly reward demographic diversity rather than aggregate engagement; and monitoring outcomes for equity as well as efficiency. This is technically feasible, and some organizations have pursued it. But it requires treating diversity and inclusion as genuine advertising objectives, not just reputational claims — and it requires the same proactive attention to algorithmic outcomes that avoiding discriminatory advertising requires.
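What it means to "set optimization objectives that explicitly reward demographic diversity rather than aggregate engagement" can be sketched concretely. The weighting scheme below is an assumption for illustration: balance is measured as the min/max ratio of group reach, and `alpha` controls how much skew discounts the engagement score. Real systems would use richer fairness measures, but the structure is the same: equity enters the objective rather than being checked after the fact.

```python
# Illustrative only: an allocation objective that trades off engagement
# against demographic balance of reach. `alpha` and the balance measure
# are hypothetical design choices, not an established method.

def objective(reach: dict, engagement: float, alpha: float = 0.5) -> float:
    """Score a delivery plan: raw engagement, discounted when reach is
    demographically skewed. balance is 1.0 when all groups are reached
    equally and approaches 0 as one group dominates."""
    balance = min(reach.values()) / max(reach.values())
    return engagement * ((1 - alpha) + alpha * balance)

# Two candidate delivery plans with identical raw engagement:
skewed   = objective({"group_a": 9_000, "group_b": 1_000}, engagement=100.0)
balanced = objective({"group_a": 5_200, "group_b": 4_800}, engagement=100.0)
print(skewed < balanced)  # the balanced plan wins under this objective
```

Under a pure engagement objective the two plans would tie; the diversity term is what makes the balanced plan preferable, which is precisely the deliberate design choice the paragraph above describes.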

The Measurement Problem: Accountability Without Visibility

A fundamental challenge in advertising AI ethics is the measurement problem: organizations cannot hold themselves accountable for discriminatory, manipulative, or deceptive advertising AI practices they cannot see. The opacity of algorithmic advertising — the real-time auctions, dynamic audience construction, and engagement-optimized delivery that occur invisibly between advertiser intent and consumer exposure — means that even organizations with genuine ethical commitments cannot easily verify that their advertising AI is performing in accordance with those commitments.

This measurement problem has several dimensions. First, advertisers typically do not have visibility into how their ads are delivered at the individual level — they see aggregate reporting (total impressions, clicks, conversions) but not the demographic distribution of the audience their ads actually reached, disaggregated by protected class characteristics. Without this data, advertisers cannot test whether their ad delivery is discriminatory.

Second, the real-time bidding infrastructure that underlies programmatic advertising involves dozens of parties — DSPs, SSPs, DMPs, ad exchanges — each of which makes algorithmic decisions that contribute to the final delivery pattern. No single party in this chain has visibility into the complete picture, which makes accountability for discriminatory or manipulative outcomes difficult to assign.

Third, the A/B testing and optimization processes that platforms use to improve ad performance optimize for advertiser-defined metrics — clicks, conversions, cost per acquisition — not for fairness or non-discrimination. Advertisers who rely on platform-provided optimization reports receive information about commercial performance, not about equity.

Addressing the measurement problem requires: platforms providing advertisers with demographic reach data disaggregated by protected characteristics; standardized reporting formats that enable advertisers to test for disparate impact in their own advertising; independent research access to the advertising ecosystem's data infrastructure; and regulatory requirements that mandate transparency at the platform level sufficient to enable third-party audit. None of these are simple to implement, but the alternative — an advertising AI ecosystem that cannot be measured for equity — is one in which accountability commitments remain aspirational rather than operational.


Discussion Questions

  1. Facebook argued that allowing advertisers to target by "Ethnic Affinity" served legitimate advertising purposes — reaching communities with culturally relevant messaging. When does demographic targeting become discrimination? Is there a principled distinction, and if so, what is it?

  2. AI advertising systems can produce discriminatory outcomes without any human making a discriminatory choice — the algorithm optimizes for engagement and learns to concentrate delivery in ways that reproduce historical patterns of exclusion. Does this change the ethical analysis? Does it change the legal analysis?

  3. Dynamic pricing uses AI to extract the maximum willingness to pay from each individual customer. Under what conditions, if any, is this ethically acceptable? Does your analysis change if the pricing systematically extracts more from lower-income customers?

  4. A social media influencer uses an AI writing tool to generate their sponsored content. They post the content without disclosing that it was AI-generated, though they personally reviewed and approved it. Has there been a disclosure violation? Who is responsible — the influencer, the brand, or the AI tool provider?

  5. Cambridge Analytica used Facebook data for psychographic profiling and political micro-targeting without users' meaningful consent. How does the ethical analysis of this practice differ from ordinary commercial advertising? Does the political context change the ethical assessment?

  6. The EU requires platforms to offer an algorithmic recommendation alternative that does not use behavioral profiling. Should the US adopt a similar requirement? What would be the practical effects on platforms' business models and users' experiences?

  7. A brand discovers that its AI advertising system has been targeting vulnerable elderly consumers with misleading health product advertisements, without any human at the brand having deliberately designed this targeting. What are the brand's ethical obligations? Its legal obligations? How should it respond?


The following chapter examines the right to explanation — the legal and ethical framework that governs individuals' ability to access meaningful information about AI decisions that affect them, and the significant gap between the right as stated and the right as delivered.