Case Study: Five Endings, One Video — A Comparative Experiment

"Same video, five different endings, five different audience responses. The content was identical. The ending changed everything."

Overview

This case study follows Priya Anand, 17, a cooking creator who ran a controlled experiment: she took one video's content and posted it five times — each with a different ending technique — to test which ending types actually drive which behaviors. The results revealed that ending design isn't about finding the "best" ending but about understanding which ending achieves which goal.

Skills Applied:

  • Ending technique selection and implementation
  • Controlled experimental design for content testing
  • The peak-end rule in practice
  • Behavior-specific optimization
  • Hook-ending alignment (narrative envelope)


Part 1: The Experimental Design

The Problem

Priya was a competent cooking creator with 8,000 followers. She'd read about ending design and understood the theory, but she had a practical question: How much does the ending actually matter compared to the content itself?

She designed an experiment to find out.

The Setup

The base video: A 45-second cooking video — "5-Minute Pasta You'll Actually Make" — showing a simple, visually appealing pasta recipe. The content was strong: clear instructions, attractive plating, good energy.

The control variables:

  • Same content (first 38 seconds identical across all versions)
  • Same hook (Verbal Hook #16, Save-Your-Time: "Five minutes. That's it. Restaurant-quality pasta.")
  • Same posting time (6 PM, on different days of the week)
  • Same account, same follower base
  • Only the final 7 seconds differed

The five endings:

| Version | Ending Type | Ending Category | Last 7 Seconds |
|---|---|---|---|
| A | Loop Ending (#1 Seamless) | Rewatch | Final plated shot matched opening angle; music looped seamlessly |
| B | Share Ending (#7 Relatable Punchline) | Share | "And that, my friends, is why I can't be trusted near a stove without adult supervision." [Blooper of earlier spill plays] |
| C | Follow Ending (#11 Teaser) | Follow | "Thursday's recipe is even faster. And it involves a blowtorch." [Flash of next video] |
| D | Comment Ending (#16 The Question) | Comment | "Real talk: do you drain your pasta water or... are you one of those people who saves it? Tell me." |
| E | Emotional Ending (#28 Gratitude Close) | Emotional | [Quiet moment] "My grandmother taught me this. She never measured anything. I still don't. Thank you for being here." |

The Hypothesis

Priya predicted:

  1. The Loop Ending (A) would generate the most watch time
  2. The Share Ending (B) would generate the most shares
  3. The Follow Ending (C) would generate the most new followers
  4. The Comment Ending (D) would generate the most comments
  5. The Emotional Ending (E) would generate the most saves

"If the ending really drives specific behaviors," Priya reasoned, "each version should win on its targeted metric."


Part 2: The Results

The Numbers

| Metric | A: Loop | B: Share | C: Follow | D: Comment | E: Emotional |
|---|---|---|---|---|---|
| Views | 12,400 | 14,800 | 11,200 | 13,600 | 9,800 |
| Avg Watch Time | 34 sec | 29 sec | 27 sec | 30 sec | 32 sec |
| Rewatch Rate | 38% | 14% | 11% | 16% | 22% |
| Share Rate | 2.8% | 6.4% | 3.1% | 4.2% | 4.8% |
| New Followers | 82 | 94 | 148 | 86 | 112 |
| Comments | 24 | 41 | 28 | 186 | 67 |
| Save Rate | 3.4% | 4.2% | 3.8% | 3.6% | 7.1% |
| Completion Rate | 84% | 81% | 76% | 82% | 88% |

Hypothesis Check

| Prediction | Result | Confirmed? |
|---|---|---|
| A wins watch time | A: 34 sec (highest) | Yes |
| B wins shares | B: 6.4% (highest) | Yes |
| C wins followers | C: 148 (highest) | Yes |
| D wins comments | D: 186 (highest; 2.8x the runner-up) | Yes |
| E wins saves | E: 7.1% (highest) | Yes |

Every prediction was confirmed. Each ending type won on exactly the metric it was designed to optimize.
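The hypothesis check above can be reproduced programmatically. A minimal sketch, using the metric values copied from the results table (the dictionary layout and key names are my own, not from the case study):

```python
# Results table from the case study; higher is better for every metric.
results = {
    "A: Loop":      {"watch_time": 34, "share_rate": 2.8, "followers": 82,  "comments": 24,  "save_rate": 3.4},
    "B: Share":     {"watch_time": 29, "share_rate": 6.4, "followers": 94,  "comments": 41,  "save_rate": 4.2},
    "C: Follow":    {"watch_time": 27, "share_rate": 3.1, "followers": 148, "comments": 28,  "save_rate": 3.8},
    "D: Comment":   {"watch_time": 30, "share_rate": 4.2, "followers": 86,  "comments": 186, "save_rate": 3.6},
    "E: Emotional": {"watch_time": 32, "share_rate": 4.8, "followers": 112, "comments": 67,  "save_rate": 7.1},
}

# Each version's targeted metric, per the hypothesis.
targets = {
    "A: Loop": "watch_time",
    "B: Share": "share_rate",
    "C: Follow": "followers",
    "D: Comment": "comments",
    "E: Emotional": "save_rate",
}

def winner(metric):
    """Return the version with the highest value for a given metric."""
    return max(results, key=lambda v: results[v][metric])

for version, metric in targets.items():
    status = "confirmed" if winner(metric) == version else "rejected"
    print(f"{version} targets {metric}: winner = {winner(metric)} -> {status}")
```

Running the loop confirms all five predictions: each version is the argmax of exactly the metric its ending was designed to drive.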


Part 3: The Deeper Analysis

Finding 1: The Ending Changed Views (Not Just Behavior)

The five identical-content videos received significantly different view counts — from 9,800 (Emotional) to 14,800 (Share). This was unexpected: the content was the same, so why did views differ?

Explanation: The ending affected early algorithmic distribution. Version B (Share ending) generated shares quickly, which expanded reach to new clusters. Version A (Loop ending) generated high watch time, which boosted algorithmic confidence. Version E (Emotional ending) generated saves, which are a strong signal but slower-acting. Each ending fed different algorithmic inputs, which produced different distribution patterns.

Implication: The ending doesn't just affect what viewers DO after watching — it affects how many viewers SEE the video in the first place, because the ending determines which algorithmic signals are strongest.

Finding 2: Emotional Endings Had the Highest Completion Rate

Version E (Emotional ending) had an 88% completion rate — highest of all five. This was 12 percentage points above Version C (Follow ending, 76%).

Explanation: The emotional ending's quiet, intimate tone created a gravitational pull toward the end. Viewers who sensed the shift in energy (the grandmother mention, the gratitude) were drawn into the emotional landing — they could feel something was coming and wanted to experience it. In contrast, the follow ending's teaser felt like a transition away from the content, and some viewers scrolled before it completed.

Peak-end rule implication: The emotional ending didn't just drive saves — it drove the highest overall experience rating. When Priya asked viewers what they thought of each video (through an informal poll), the emotional version was rated as "the best pasta video" even though the actual recipe content was identical across all five.

Finding 3: The Comment Ending Nearly Tripled the Next-Highest Comment Count

Version D generated 186 comments: 2.8x the next-highest total (Version E's 67) and more than 4.5x the average of the other four versions. The question "Do you drain your pasta water or save it?" created a genuine debate:

  • Team Drain: "Obviously you drain it, what is this question"
  • Team Save: "Pasta water is GOLD for sauces, do not waste it"
  • Team Both: "Depends on the recipe"
  • Team Chaos: "Wait, people save it??"

Why such a lopsided result, rather than a modest edge? The comment section became self-sustaining. Once the debate started, viewers commented not just to answer Priya's question but to respond to OTHER commenters. The question triggered a conversation that fed itself: each comment provoked responses, which provoked more responses.

Implication: Comment-driving endings don't just generate comments; they can create self-sustaining comment ecosystems. The best comment endings are genuine questions with no obviously "right" answer — ones that split the audience into camps.
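The compounding dynamic described above can be sketched as a simple geometric model. The seed count and reply rate below are illustrative assumptions, not measured values from the experiment:

```python
def total_comments(seed, reply_rate, rounds):
    """Geometric model of a self-sustaining thread: each round's new
    comments provoke reply_rate follow-up comments in the next round."""
    total = current = float(seed)
    for _ in range(rounds):
        current *= reply_rate   # replies to the previous round's comments
        total += current
    return total

# Hypothetical numbers: ~40 direct answers with a 0.8 reply rate
# compound toward 40 / (1 - 0.8) = 200 total comments, in the
# neighborhood of the 186 observed.
print(round(total_comments(seed=40, reply_rate=0.8, rounds=20)))  # 198
```

The design insight falls out of the formula: a question that raises the reply rate (viewers answering each other) multiplies totals far more than one that only raises the seed count (viewers answering the creator).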

Finding 4: The Follow Ending Had the Lowest Completion Rate But the Highest Follow Rate

Version C (Follow ending) had the lowest completion rate (76%) but the highest follow rate (148 new followers). This seems contradictory — how can the version people watched least generate the most follows?

Explanation: The teaser worked differently than other endings. Some viewers, upon seeing the flash of Thursday's recipe with a blowtorch, got excited and immediately tapped the profile — before the video even finished. The teaser created such strong curiosity about future content that viewers prioritized following over watching the current video's final seconds.

This is a trade-off: the follow ending sacrificed some current-video completion for future-video audience. Whether this is a good trade depends on the creator's priority (current video performance vs. channel growth).

Finding 5: Emotional and Share Endings Crossed Categories

While each ending won its targeted metric, some endings performed surprisingly well on non-targeted metrics:

  • Version E (Emotional) had the second-highest share rate (4.8%), despite being designed for saves. Why? The grandmother story triggered DM sharing — people sent it to their own grandmothers or to friends who cook. Emotional content generates private (DM) sharing, which is the most valuable share type.

  • Version B (Share) had the second-highest comment count (41), despite being designed for shares. Why? The blooper ending generated "This is literally me" comments — the relatable moment invited self-identification through comments.

Implication: Ending categories are optimized for specific behaviors but aren't exclusive. A well-designed ending in any category can generate secondary effects in others.


Part 4: The Strategy That Emerged

Priya's Decision Framework

After the experiment, Priya developed a decision framework for choosing endings:

STEP 1: What does THIS specific video need?
  → New video in established series → Loop ending (watch time)
  → Standalone viral potential → Share ending (distribution)
  → First in a new series → Follow ending (audience building)
  → Community-driven topic → Comment ending (engagement)
  → Personal/meaningful content → Emotional ending (depth)

STEP 2: What does my CHANNEL need right now?
  → In a growth phase → Lean toward Share and Follow endings
  → Building community → Lean toward Comment and Emotional endings
  → Optimizing for algorithm → Lean toward Loop endings

STEP 3: Is there a natural fit?
  → Some content naturally suits certain endings
  → Don't force a comment question onto an emotional video
  → Don't force an emotional landing onto a quick-tip video
  → The ending should feel like part of the content, not an attachment
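Steps 1 and 2 of the framework can be encoded as a small lookup. The goal labels and phase names below are invented for illustration; Step 3 (natural fit) stays a human judgment call, so the sketch only flags a goal/phase mismatch rather than deciding it:

```python
def choose_ending(video_goal, channel_phase=None):
    """Map a video's goal (Step 1) to an ending category, then check it
    against the channel's current phase (Step 2). Returns the pick and
    whether it also fits the phase."""
    by_goal = {
        "series_episode": "loop",       # established series -> watch time
        "standalone_viral": "share",    # distribution
        "series_launch": "follow",      # audience building
        "community_topic": "comment",   # engagement
        "personal": "emotional",        # depth
    }
    by_phase = {
        "growth": {"share", "follow"},
        "community": {"comment", "emotional"},
        "algorithm": {"loop"},
    }
    pick = by_goal[video_goal]
    fits_phase = channel_phase is None or pick in by_phase[channel_phase]
    return pick, fits_phase
```

For example, `choose_ending("community_topic", "growth")` returns `("comment", False)`: the video's goal points to a comment ending, but the flag signals it conflicts with a growth-phase priority and deserves a second look.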

The Rotation Strategy

Priya settled on a rotation: she cycled through ending types across her posting schedule rather than using the same type every time. This served three purposes:

  1. Balanced algorithmic signals: Different endings fed different platform metrics, creating a well-rounded channel profile
  2. Audience variety: Viewers got different experiences, preventing pattern fatigue
  3. Ongoing data collection: Each video provided new data about which ending types worked best for which content types
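A rotation like this is a round-robin over the five categories. A minimal sketch using the standard library (the schedule length is arbitrary):

```python
from itertools import cycle

# The five ending categories from the experiment, cycled across
# a hypothetical posting schedule.
categories = ["rewatch", "share", "follow", "comment", "emotional"]
rotation = cycle(categories)

next_seven = [next(rotation) for _ in range(7)]  # endings for the next 7 videos
print(next_seven)
```

After one full pass the cycle wraps, so video 6 gets a rewatch ending again; in practice Step 3 of the framework (natural fit) can override any individual slot.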

Six-Month Results

| Metric | Before Experiment | 6 Months After | Change |
|---|---|---|---|
| Followers | 8,000 | 62,000 | +675% |
| Avg views per video | 4,200 | 24,000 | +471% |
| Avg save rate | 2.8% | 5.4% | +93% |
| Avg comment count | 14 | 58 | +314% |
| Brand deal inquiries/month | 0-1 | 4-6 | n/a |
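The Change column follows from the standard percentage-change formula, which is easy to verify:

```python
def pct_change(before, after):
    """Percentage change from before to after, rounded to a whole percent."""
    return round((after - before) / before * 100)

print(pct_change(8_000, 62_000))  # 675  (followers)
print(pct_change(4_200, 24_000))  # 471  (avg views per video)
print(pct_change(2.8, 5.4))       # 93   (avg save rate)
print(pct_change(14, 58))         # 314  (avg comment count)
```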

"The experiment changed how I think about endings forever," Priya said. "Before, my videos just... stopped. Now every video has a designed exit. And every exit has a purpose."


Part 5: What This Tells Us About the Peak-End Rule

The Experiment as Peak-End Rule Validation

Priya's experiment provides a practical demonstration of Kahneman's peak-end rule. Five identical videos — same content, same peak moment (the plating reveal at the 30-second mark) — produced dramatically different viewer evaluations based solely on the ending.

The emotional ending (E) was rated "the best" despite identical content because the ending elevated the entire experience in the viewer's memory. The follow ending (C) was rated "good but felt like an ad" because the teaser shifted attention from experience to promotion.

The lesson: The ending isn't just the final moment of a video. It's the lens through which the entire video is remembered.
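The peak-end rule is commonly summarized as: remembered experience is roughly the average of the peak moment and the final moment, with duration largely ignored. A toy sketch of that heuristic (the moment ratings are invented, not from Priya's poll):

```python
def remembered_quality(moment_ratings):
    """Peak-end heuristic: remembered experience is approximated by
    averaging the peak moment and the final moment."""
    peak = max(moment_ratings)
    end = moment_ratings[-1]
    return (peak + end) / 2

# Identical content (same peak of 9) with different endings:
print(remembered_quality([6, 7, 9, 5, 8]))  # strong ending -> 8.5
print(remembered_quality([6, 7, 9, 5, 3]))  # weak ending   -> 6.0
```

Same peak, different ending, very different remembered score: the same asymmetry Priya's informal poll surfaced across her five versions.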

The Practical Takeaway

Content quality is table stakes. But content quality without ending design leaves value on the table. Priya's experiment showed that the same good content can generate 186 comments or 24 comments, can earn 148 followers or 82 followers, can achieve a 7.1% save rate or a 3.4% save rate — all depending on the final 7 seconds.

Seven seconds. Roughly 15% of the video. Controlling 100% of what happens next.


Discussion Questions

  1. Experimental validity: Priya posted five versions of the same video on different days. What confounding variables could explain the differences? (Day of week, audience fatigue from similar content, algorithmic timing.) How could the experiment be improved?

  2. The completion-follow trade-off: The follow ending had the lowest completion rate but the highest follow rate. Is sacrificing current-video completion for future-video audience always a good trade? Under what circumstances would you prioritize completion over follows?

  3. The self-sustaining comment ecosystem: The pasta water question generated 186 comments, many of which were viewer-to-viewer debates rather than responses to Priya. Is this kind of autonomous community engagement more or less valuable than direct creator-viewer interaction? What are the risks of sparking debates?

  4. Rotation vs. signature: Priya chose to rotate ending types. Some creators develop a signature ending (like always ending with "See you next time, beautiful people"). Which approach is stronger for long-term brand building? What are the trade-offs?

  5. The grandmother effect: Version E's emotional ending (mentioning Priya's grandmother) performed well — but what if Priya used emotional endings every video? Would the emotional impact diminish through repetition? How often can a creator use emotional landings before they feel formulaic?


Mini-Project Options

Option A: The Two-Ending Test

Take one of your video concepts and create two versions with different ending types (from different categories). Post both at different times. Compare the specific metrics each ending was designed to optimize. Was the targeted metric the winner for each version?

Option B: The Ending Rotation

Over your next 5 videos, use a different ending category for each (Rewatch, Share, Follow, Comment, Emotional). Track all metrics for all 5 videos. Which ending type performed best overall? Which drove the specific behavior it was designed for?

Option C: The Peak-End Perception Test

Create two videos with identical endings but different content quality. Video A: great content + designed ending. Video B: average content + same designed ending. Ask 5 friends to watch both and rate the overall quality. Does the peak-end rule mean both videos receive similar ratings because the ending is identical? Or does content quality still dominate the evaluation?

Option D: The Comment Question Lab

Test three different comment-ending questions on three videos:

  1. A binary choice ("A or B?")
  2. An opinion question ("What do you think about X?")
  3. A personal story prompt ("Tell me about your experience with X")

Which generates the most comments? Which generates the longest comments? Which creates the most viewer-to-viewer discussion?


Note: This case study uses a composite character to illustrate patterns observed across creators who experimentally tested ending techniques. The metrics and ratios are representative of documented patterns. Individual results will vary based on niche, audience, platform, and content quality.