Chapter 14 Exercises: Behavioral Targeting and Real-Time Bidding
Exercise 14.1 — RTB Architecture Walkthrough (Small Groups, 45–60 minutes)
Purpose: Understand the real-time bidding sequence by walking through it step by step with a specific scenario.
Scenario: A 23-year-old user opens a popular news website on their smartphone. The user has an extensive browsing history: they have been researching graduate school programs, visited student loan comparison sites, searched for apartments in three cities, and recently looked at entry-level professional clothing. They have an Instagram account that primarily shows interest in art, travel, and political commentary.
Instructions:
Working in groups of 3–4 (with fewer members than roles, some members take two), assign the following actors:
- The Publisher (the news website)
- The Supply-Side Platform
- The Ad Exchange + DMP
- A DSP representing a graduate school advertiser
- A DSP representing a clothing retailer
- A DSP representing a student loan company
Step 1 (5 minutes): Each actor describes what they see and know at the moment the page loads.
Step 2 (10 minutes): Walk through the RTB sequence step by step, with each actor narrating their role.
Step 3 (15 minutes): The three DSPs each make their bids. Discuss what bid price each would likely submit and why. Which advertiser "wins"?
Step 4 (15 minutes): Discuss: What did the user's behavioral history "earn" for each party in this transaction? Who benefited? What data about the user was broadcast to parties the user has no relationship with?
Reflection: The user opened the news website to read an article. How many commercial relationships did their presence trigger, and how many of them did they know about?
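The bidding sequence in Steps 2–3 can be sketched as a second-price auction, the mechanism commonly used in RTB exchanges. All bid values, segment names, and DSP strategies below are hypothetical illustrations for the scenario, not real market figures:

```python
# A minimal sketch of the RTB auction from Steps 2-3, modeled as a
# second-price auction. Bid values and segment names are hypothetical.
from dataclasses import dataclass

@dataclass
class BidRequest:
    """What the exchange broadcasts to every DSP about the user."""
    site: str
    device: str
    interest_segments: list  # inferred from browsing history

@dataclass
class Bid:
    bidder: str
    price_cpm: float  # bid price per thousand impressions

def run_auction(request, bidders):
    """Highest bidder wins but pays the second-highest bid price."""
    bids = sorted((b(request) for b in bidders),
                  key=lambda bid: bid.price_cpm, reverse=True)
    winner, runner_up = bids[0], bids[1]
    return winner.bidder, runner_up.price_cpm

# Hypothetical DSP strategies keyed to the scenario's behavioral segments.
def grad_school_dsp(req):
    return Bid("Graduate School DSP",
               8.50 if "grad-school-research" in req.interest_segments else 0.10)

def clothing_dsp(req):
    return Bid("Clothing Retailer DSP",
               4.20 if "professional-clothing" in req.interest_segments else 0.10)

def loan_dsp(req):
    return Bid("Student Loan DSP",
               12.00 if "student-loans" in req.interest_segments else 0.10)

request = BidRequest(
    site="news-site.example",
    device="smartphone",
    interest_segments=["grad-school-research", "student-loans",
                       "professional-clothing"],
)
winner, clearing_price = run_auction(
    request, [grad_school_dsp, clothing_dsp, loan_dsp])
print(winner, clearing_price)  # the loan DSP wins, paying the second-highest bid
```

Note what the sketch makes concrete for Step 4: every DSP receives the full bid request, including the interest segments, whether or not it wins the auction.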
Exercise 14.2 — Your Ad Profile (Individual, 30–45 minutes)
Purpose: Examine how platforms categorize you for advertising purposes.
Instructions:
Each major advertising platform allows users to see some portion of their ad targeting profile.
Step 1: Check your ad categories on at least two of the following platforms:
- Facebook/Instagram: Settings → Ads → Ad Preferences → Interests, Demographics, Advertisers
- Google: adssettings.google.com
- Twitter/X: Settings → Privacy → Interests and Ads Data
- Amazon: Manage Your Account → Advertising Preferences
Step 2: For each platform, document:
- Total number of interest/behavior categories you are in
- The categories that seem most accurate
- The categories that seem inaccurate or surprising
- Any sensitive categories (health, political, financial) you are placed in
- Whether the categories reflect data you consciously provided or data inferred from behavior
Step 3: Consider the combined profile across all platforms. What portrait of you emerges? What decisions — hiring, credit, pricing — might be influenced by this profile?
Reflection Questions:
a. Did you have the ability to remove or correct categories? Was the process easy or difficult?
b. Research suggests that users can often be re-categorized after removing interest categories because the underlying behavioral data hasn't changed. What does this tell you about the effectiveness of interest deletion as a privacy measure?
c. If you could design your own ad profile — one that was completely accurate and reflected your preferences — how would it differ from what the platforms showed you?
Exercise 14.3 — Price Discrimination Investigation (Pairs or Small Groups, 1–2 hours)
Purpose: Empirically investigate whether behavioral signals produce different prices.
Note: This exercise requires using two different devices or browsers to test pricing. No deception or terms-of-service violation is involved — users are permitted to compare prices across browsers.
Method:
Step 1: Identify a product you are genuinely considering purchasing from an online retailer (shoes, electronics, a subscription service, or comparable item).
Step 2: Check the price on your primary device/browser (logged into any accounts you normally use, with your usual cookies and browsing history).
Step 3: Check the same product's price on a different browser (in incognito mode with no cookies) or on a different device that you don't normally use for shopping.
Step 4: If possible, check the price while connected to a VPN with an exit location in a different geographic region (free trials of VPN services are widely available).
Step 5: Document all price variations, timestamps, and browser/device conditions.
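The documentation in Step 5 can be kept in a simple structured log so that the magnitude of any variation (Analysis question a) is easy to compute. Field names and sample prices here are hypothetical:

```python
# A minimal sketch for logging the price checks from Steps 2-5.
# Field names and sample values are hypothetical.
from datetime import datetime, timezone

observations = []

def record(price, browser, device, condition):
    """Append one price check with a timestamp, per Step 5."""
    observations.append({
        "price": price,
        "browser": browser,
        "device": device,
        # e.g. "logged in, usual cookies", "incognito", "VPN: other region"
        "condition": condition,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record(129.99, "Chrome", "laptop (Mac)", "logged in, usual cookies")
record(124.99, "Firefox", "laptop (Mac)", "incognito, no cookies")
record(119.99, "Chrome", "borrowed PC", "VPN: different region")

prices = [obs["price"] for obs in observations]
spread = max(prices) - min(prices)
pct = 100 * spread / min(prices)
print(f"Price spread: {spread:.2f} ({pct:.1f}% of the lowest price)")
```

Recording the condition alongside each price makes it possible to connect a price difference to a specific behavioral signal in Analysis question b.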
Analysis:
a. Did you find price differences? What was the magnitude?
b. What behavioral signal might explain any price differences you found?
c. The chapter mentions that Mac users sometimes see higher prices than PC users because Mac ownership correlates with higher income. Test this if possible: check the same product's price in a Mac browser versus a PC browser, or use a VPN to compare prices from exit locations in high-income and low-income zip code areas.
d. Based on your findings, do you believe behavioral price discrimination is common? What evidence from your investigation supports or complicates this conclusion?
Exercise 14.4 — The Political Targeting Ethics Debate (Group Activity, 45–60 minutes)
Purpose: Apply ethical frameworks to political microtargeting.
Format: Structured debate in groups of 4.
Setup: Each group has four roles:
- The Campaign Strategist: defends political microtargeting as legitimate political communication
- The Civil Liberties Advocate: argues against political microtargeting on autonomy and democracy grounds
- The Researcher: presents empirical findings on targeting effectiveness and effects
- The Regulator: argues for a specific regulatory approach (choose one: ban, transparency requirements, opt-out, or no regulation)
Questions to debate:
- Is political microtargeting categorically different from commercial behavioral targeting, or are they the same practice applied to different ends?
- Is showing different political messages to different voters inherently deceptive, or is it just effective communication?
- The chapter notes that microtargeting advantages well-funded political actors. Is this a meaningful concern for political equality? Does it differ from the advantage well-funded actors already have in broadcast advertising?
- Should political advertising platforms be subject to the same ad targeting restrictions as housing, employment, and credit advertising? Why or why not?
- What regulatory mechanism would best address the concerns raised about political microtargeting while preserving legitimate political communication?
After 30 minutes of debate, groups should attempt to reach consensus on one concrete recommendation that all four roles could accept. Present the recommendation to the class.
Exercise 14.5 — Redlining 2.0 Documentation (Individual Research, 60–90 minutes)
Purpose: Examine the empirical evidence for discriminatory behavioral targeting in housing, employment, and credit.
Research Task:
Find and read (or read summaries of) at least three of the following:
- Julia Angwin and Terry Parris Jr., "Facebook Lets Advertisers Exclude Users by Race," ProPublica, 2016
- HUD's complaint against Facebook (filed 2019; publicly available)
- ACLU's testing of discriminatory job advertising on Facebook (2019)
- National Fair Housing Alliance et al. v. Facebook, settlement terms (2019)
- Abigail Martin and Patrick Walker, "HCIJ/ProPublica: Amazon's Same-Day Delivery Redlining," if you can locate it
- Any recent academic study on algorithmic discrimination in advertising (search Google Scholar for "algorithmic discrimination housing advertising")
Write a 400–600 word analysis answering the following:
- What specific discriminatory practices did investigators find?
- What was the mechanism of discrimination — explicit exclusion, lookalike audiences, algorithmic optimization, or geographic targeting as a proxy?
- What legal theory governed the response (Fair Housing Act, Equal Credit Opportunity Act, Title VII)?
- Was the response adequate? What changed, and what didn't?
- Apply the concept of "redlining 2.0" from the chapter: how do these cases exemplify the reproduction of structural inequality through algorithmic means?
Exercise 14.6 — The Filter Bubble Test (Individual, 2 weeks)
Purpose: Empirically test whether algorithmic curation narrows information exposure.
Instructions:
Week 1: Document your current information environment. For one week, record:
- What news sources appear in your social media feeds
- What political perspectives are represented in algorithmically recommended content
- Which sources are recommended vs. which you actively seek out
Week 2: Deliberately introduce counter-programming to your algorithmic profile. For one week:
- Follow accounts or subscribe to feeds that represent perspectives significantly different from your usual consumption
- Like and engage with content outside your typical categories
- Seek out sources from the opposite political direction, different geographic regions, or different demographic communities
After week 2: Document what changes in your algorithmic recommendations. Did the algorithm adapt? How quickly? Did engagement with counter-programming content change what you were shown organically?
Analysis Questions:
a. Did you observe evidence of filter bubble dynamics in your week 1 baseline? What was the range of perspectives in your algorithmically recommended content?
b. Did deliberate counter-programming change your feed? What does this suggest about the mechanisms of the filter bubble?
c. The academic literature on filter bubbles finds mixed results — some evidence of narrowing, some evidence of surprising diversity. What did your experiment suggest? How do individual variation and deliberate choices interact with algorithmic curation?
d. Is it possible to maintain genuine informational diversity within an algorithmically curated information environment? What would it require?
Chapter 14 | Part 3: Commercial Surveillance