Case Study 26-1: Maya Chen's Price Test — $17 vs. $27

Background

Six months after launching her sustainable fashion guide, Maya Chen faced a decision that felt too important to guess at and too personal to delegate to a friend's opinion: Was $17 the right price?

She had picked $17 during her initial launch for three reasons, none of them particularly rigorous. First, she had seen other digital products in the "beginner friendly" category priced in the teens and assumed her audience expected that range. Second, she worried that $27 would feel exclusionary to the broke-college-student segment of her TikTok following — an audience she genuinely cared about serving. Third, she had never sold anything before and was nervous about rejection, and a lower price felt like it would reduce the sting of low sales numbers.

By month six, the guide was selling steadily — about 22 units per month through her TikTok bio link and email welcome sequence. At $17, that was roughly $374/month. Decent for a passive digital product, but Maya had spent significant time creating it — two weeks of research, writing, formatting, and design — and she was increasingly wondering whether she was undervaluing both the product and her work.

"I kept seeing comments from people who said the guide changed how they thought about their wardrobe," Maya recounts. "And I'm thinking... this is $17? That's less than a single fast fashion T-shirt I'm telling them not to buy."

The Test Design

Maya's friend pointed her toward a blog post about sequential price testing, which led her to the methodology described in this chapter. She decided to run a clean sequential test:

Period A: Months 7 and 8 — keep the guide at $17. Track every sale, traffic source, and conversion rate carefully.

Period B: Months 9 and 10 — raise the price to $27. Communicate the price change with a brief email to her list and a short TikTok explaining that the launch pricing window had closed.

Before starting, she wrote out her hypothesis: "If I raise the price from $17 to $27, conversion rate will decrease but revenue per visitor will increase, because the guide provides genuine value and my audience is not primarily price-limited — they are value-limited. I will measure revenue per landing page visitor over 60-day periods."

She also noted the key confound she would need to watch: audience growth. Her TikTok following was growing, which meant Period B traffic might be of different quality than Period A traffic (more recent followers, different viral video sources).

The Results

Period A ($17 price):
- Landing page visits: 4,240 (from TikTok bio link + email sequences)
- Units sold: 178
- Conversion rate: 4.2%
- Gross revenue: $3,026
- Revenue per visitor: $0.71

Period B ($27 price):
- Landing page visits: 4,810 (her audience had grown, hence more traffic)
- Units sold: 149
- Conversion rate: 3.1%
- Gross revenue: $4,023
- Revenue per visitor: $0.84
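For readers who want to check the arithmetic, the summary metrics above can be reproduced from the raw counts in a few lines. A minimal Python sketch (the variable names and structure are illustrative, not Maya's actual tooling):

```python
# Reproduce the Period A/B summary metrics from the raw counts above.
periods = {
    "A ($17)": {"visits": 4240, "units": 178, "price": 17},
    "B ($27)": {"visits": 4810, "units": 149, "price": 27},
}

for name, d in periods.items():
    conversion = d["units"] / d["visits"]   # buyers per landing page visitor
    revenue = d["units"] * d["price"]       # gross revenue for the period
    rpv = revenue / d["visits"]             # revenue per visitor
    print(f"Period {name}: conversion {conversion:.1%}, "
          f"revenue ${revenue:,}, revenue/visitor ${rpv:.2f}")
```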

Maya ran the two-proportion z-test on the conversion rates:
- Z-statistic: 2.80
- P-value: 0.005
- Result: Statistically significant at the 0.05 level
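The z-statistic and p-value follow from the standard pooled two-proportion z-test. A minimal sketch of that calculation on Maya's counts, assuming the pooled form described in this chapter's methodology:

```python
from math import sqrt
from statistics import NormalDist

# Pooled two-proportion z-test on the conversion rates.
x1, n1 = 178, 4240   # Period A: units sold, landing page visits
x2, n2 = 149, 4810   # Period B

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                        # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(z))               # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.4f}")              # z = 2.80, p = 0.0051
```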

The conversion rate drop was real, not random noise. But revenue per visitor had increased by 17.2%, and total revenue was $997 higher in Period B despite the lower conversion rate.
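One way to see why $27 won despite the conversion drop is a break-even check: at $27, any conversion rate above roughly 2.6% beats Period A's revenue per visitor. This framing is ours, not from Maya's writeup:

```python
# Break-even: what conversion rate at $27 matches Period A's revenue per visitor?
rpv_a = 3026 / 4240           # Period A revenue per visitor (~$0.71)
breakeven = rpv_a / 27        # conversion needed at $27 to match it
print(f"Break-even conversion at $27: {breakeven:.2%}")   # ~2.64%
# Period B's observed conversion of 3.1% cleared this bar comfortably.
```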

What Maya Did With the Data

The easy decision was keeping the $27 price. The more interesting decision was how she thought about the conversion rate drop.

Out of every 100 landing page visitors in Period A, approximately 4 bought. In Period B, approximately 3 bought. Who were those lost buyers? Maya looked at the timing of the drop and noticed something: the conversion rate fell most in the first two weeks after the price change, then stabilized. She hypothesized that the initial dip reflected price-sensitive buyers who had been on the fence at $17 and tipped negative at $27. After two weeks, the audience was composed primarily of people encountering the guide for the first time, and for them, $27 was simply the price.

Maya also did something unusual: she emailed 15 people who had visited the landing page but not purchased during Period B (identifiable via her email sequence) and asked directly whether the price had been a factor. Of the 9 who responded, 4 said yes: they found $17 more accessible. This qualitative data added nuance to her quantitative findings: the price increase was financially optimal, but it did have a real equity dimension for her most price-sensitive audience members.

The Solution Maya Built

Rather than returning to $17, Maya created an access pathway for genuinely price-constrained followers. She sent a one-time "pay what you can" email to her list, explaining that she understood not everyone was in the same financial position and offering a $7 option for people who wanted the guide but found $27 difficult. The response was modest (about 12 people took the $7 option over the following month), but the gesture aligned with her brand values around accessibility and sustainability.

"I tested the price and the data was clear," Maya says. "But data doesn't replace values. I can charge $27 AND make space for people who can't afford $27. Those aren't in conflict."

Analysis Questions

  1. Maya used sequential testing rather than simultaneous A/B testing for her price experiment. What are the specific advantages and disadvantages of her approach compared to showing two prices simultaneously to different audience segments?

  2. Maya's conversion rate dropped significantly (4.2% to 3.1%) but her revenue per visitor increased by about 17%. How should creators decide which metric to optimize for: conversion rate or revenue per visitor? Are there situations where conversion rate should take priority?

  3. Maya's qualitative follow-up research (emailing non-purchasers) added context that the quantitative data alone did not provide. What does this suggest about the role of qualitative research in combination with A/B testing?