Capstone Project 1: The 30-Day Content Experiment
Design, execute, and analyze a 30-day content experiment with a testable hypothesis.
Overview
The most powerful creators are not the most talented — they're the most systematic learners. They treat their own creative work as a laboratory: forming hypotheses, testing them with real content, analyzing results, and updating their approach based on what they find.
This project asks you to do exactly that: run a 30-day content experiment with a specific, testable hypothesis about your own creative work.
Duration: 30 days
Deliverables: Experiment design document, content produced during experiment, 30-day analysis report
Primary skills developed: Hypothesis formation, experimental design, analytics interpretation, systematic learning
Phase 1: Hypothesis Formation (Before Day 1)
Step 1: Identify Your Belief
A good hypothesis starts with something you believe but haven't rigorously tested. Review your content history and identify one belief about your work:
Examples:
- "My videos perform better when I use a question hook than a statement hook"
- "Longer videos (12+ min) have higher viewer loyalty than shorter ones (6-8 min)"
- "Videos where I show my face get higher comment engagement than voice-over only"
- "Videos posted on Tuesday get more first-week views than videos posted on Friday"
- "Videos that connect my topic to current events outperform evergreen topics in the first week"
Your task: Write 3 candidate hypotheses. Choose the one that:
- Is specific enough to test (avoid "my content is better when I try harder")
- Can be measured with available analytics
- Relates to a meaningful creative decision (not just posting time)
- Is genuinely uncertain to you — you don't already know the answer
Step 2: Design the Test
Once you have your hypothesis, design the experiment:
- Test condition: What content has this element?
- Control condition: What content lacks this element?
- How many videos in each condition? (Minimum: 4 in each; ideal: 6+)
- Timeline: Which weeks are test weeks? Which are control weeks?
- Primary metric: What one number will tell you the result? (Retention rate? CTR? Share rate? Comment count?)
- Secondary metrics: What other numbers might be informative?
- What will you keep constant? (Topic category, video length, thumbnail style — anything that would confound the result if it varied)
Document this before day 1. Write it down. Having a pre-committed design prevents you from unconsciously adjusting the hypothesis after you see results you don't like.
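If it helps to make the pre-commitment concrete, you can capture the design in a small structured file. Here is a minimal sketch in Python; the `ExperimentDesign` class and every field name are illustrative, not a required format (a spreadsheet or plain text file works just as well):

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentDesign:
    """Pre-committed design, written down before day 1.
    All field names here are illustrative, not a required schema."""
    hypothesis: str
    test_condition: str            # what content HAS the element
    control_condition: str         # what content LACKS the element
    videos_per_condition: int      # minimum 4, ideally 6+
    primary_metric: str            # the one number that decides the result
    secondary_metrics: list[str] = field(default_factory=list)
    held_constant: list[str] = field(default_factory=list)

design = ExperimentDesign(
    hypothesis="Question hooks yield higher retention than statement hooks",
    test_condition="Opens with a question hook",
    control_condition="Opens with a statement hook",
    videos_per_condition=4,
    primary_metric="retention_rate",
    secondary_metrics=["ctr", "share_rate", "comment_count"],
    held_constant=["topic category", "video length", "thumbnail style"],
)
```

Whatever form you choose, the value comes from writing it down and not editing it once the experiment starts.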
Step 3: Pre-Experiment Baseline
Before the experiment begins, record your current baseline data:
- Average retention rate (last 8 videos)
- Average CTR (last 8 videos)
- Average share rate (last 8 videos)
- Average views per video (last 4 weeks)
You'll need this to contextualize your results.
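The baseline arithmetic is just averaging. A minimal sketch, assuming you copy the per-video numbers from your analytics dashboard by hand (all values below are placeholders):

```python
# Per-video metrics for your last 8 videos, copied from your analytics
# dashboard. Placeholder numbers; add share rate and views the same way.
retention = [0.42, 0.38, 0.45, 0.40, 0.37, 0.44, 0.41, 0.39]  # fraction watched
ctr       = [0.051, 0.048, 0.062, 0.055, 0.047, 0.058, 0.050, 0.053]

def mean(values):
    return sum(values) / len(values)

baseline = {
    "avg_retention": mean(retention),
    "avg_ctr": mean(ctr),
}
print(baseline)  # {'avg_retention': 0.4075, 'avg_ctr': 0.053}
```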
Phase 2: Execution (Days 1–30)
The Non-Negotiable: Post on Schedule
The experiment requires consistent posting throughout the 30 days. A missed week contaminates the dataset and makes analysis harder. Plan your content in advance, batch when possible, and treat the schedule as a commitment.
Recommended posting schedule:
- YouTube: 2 videos per week (8 videos total, 4 test + 4 control; or alternating)
- TikTok/Short-form: 3 videos per week (12 videos, rotating test/control)
Maintain Experimental Discipline
During the experiment, keep careful records:
- Video title and topic
- Whether it's test or control condition
- Posting date
- First-week metrics: views, retention rate, CTR, comments, shares
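One lightweight way to keep these records is a single CSV file you append to after each upload. A sketch of that approach; the file name `experiment_log.csv` and the column names are assumptions for illustration:

```python
import csv
from pathlib import Path

LOG = Path("experiment_log.csv")
COLUMNS = ["title", "topic", "condition", "posted_on",
           "views", "retention", "ctr", "comments", "shares"]

def log_video(**fields):
    """Append one video's record; write the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()
        writer.writerow(fields)

# Example entry (placeholder numbers, filled in after the first week):
log_video(title="Why hooks matter", topic="editing", condition="test",
          posted_on="2025-01-07", views=4210, retention=0.43,
          ctr=0.051, comments=38, shares=12)
```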
Do not look at the results and adjust mid-experiment. If the first two test-condition videos underperform, resist the urge to modify your test condition. The whole point of the experiment is to see what happens naturally; modifying mid-experiment voids the result.
Journaling
Keep a brief daily journal (3–5 sentences) noting:
- What you made or worked on today
- Any observations about the making process — did the test condition feel different to create?
- Any unexpected developments
This qualitative record will be as valuable as the quantitative data.
Phase 3: Analysis (After Day 30)
Compile Your Data
Create a simple table:
| Video | Condition | Retention | CTR | Shares | Comments | Views |
|---|---|---|---|---|---|---|
| 1 | Control | ... | ... | ... | ... | ... |
| 2 | Test | ... | ... | ... | ... | ... |
| ... | ... | ... | ... | ... | ... | ... |
Calculate averages for test condition and control condition separately.
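A minimal sketch of that calculation, assuming the illustrative CSV log from Phase 2 and retention as the primary metric:

```python
import csv
from collections import defaultdict

# Group the primary metric (retention, in this sketch) by condition.
by_condition = defaultdict(list)
with open("experiment_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_condition[row["condition"]].append(float(row["retention"]))

for condition, values in sorted(by_condition.items()):
    avg = sum(values) / len(values)
    print(f"{condition}: mean retention = {avg:.3f} (n = {len(values)})")
```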
Evaluate Your Hypothesis
Compare test vs. control on your primary metric. Then ask:
1. Is the difference meaningful (more than 5–10% difference in either direction)?
2. Is the difference consistent across videos, or was it driven by one outlier?
3. Do your secondary metrics tell a consistent or contradictory story?
4. Does the journal reveal qualitative patterns that the quantitative data doesn't capture?
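Questions 1 and 2 can be checked mechanically: compute the relative difference between condition means, then recompute it with each test video dropped in turn. If removing a single video flips the direction, that video is driving the result. A sketch with placeholder numbers:

```python
test    = [0.46, 0.44, 0.48, 0.45]   # placeholder retention values
control = [0.40, 0.41, 0.39, 0.42]

def mean(xs):
    return sum(xs) / len(xs)

# Question 1: is the difference bigger than the 5-10% band?
relative = (mean(test) - mean(control)) / mean(control)
print(f"relative difference: {relative:+.1%}")   # +13.0% with these numbers

# Question 2: consistent, or driven by one outlier? Drop each test
# video in turn and see whether the direction of the effect holds.
for i in range(len(test)):
    rest = test[:i] + test[i + 1:]
    d = (mean(rest) - mean(control)) / mean(control)
    print(f"without test video {i + 1}: {d:+.1%}")
```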
Important: A result that contradicts your hypothesis is not a failed experiment. It's a successful experiment with an unexpected finding. Document it as such.
Write Your Analysis Report
Your final report should include:
1. Hypothesis (restated exactly as written before the experiment)
2. Method: What was the test condition? Control condition? How many videos in each? What was your primary metric?
3. Results: The data, summarized clearly. Include averages for test and control conditions on primary and secondary metrics.
4. Conclusion: Does the data support your hypothesis? Why or why not?
5. Confounds and limitations: What factors might have influenced results other than your independent variable? (A video in the test condition went viral for unrelated reasons; one week coincided with a school holiday; your audience demographics shifted.)
6. Implications for future work: What will you do differently based on what you learned? What's your next hypothesis?
Evaluation Criteria
Evaluate your own project on these dimensions:
Experimental design quality (did you isolate the variable?)
□ Hypothesis was specific and testable
□ Test and control conditions were clearly defined
□ Primary metric was identified before the experiment
□ Other variables were held as constant as possible

Execution quality (did you follow through?)
□ Posted consistently for all 30 days
□ Documented each video's condition and metrics
□ Kept a journal during the experiment
□ Did not modify the experiment mid-run based on preliminary results

Analysis quality (did you interpret the data honestly?)
□ Reported results accurately, not selectively
□ Addressed confounds and limitations
□ Drew conclusions proportional to the evidence (didn't overclaim from a small sample)
□ Updated prior beliefs appropriately (neither stubborn nor overly swayed)

Implication quality (did you learn something actionable?)
□ Identified at least one specific change to your content practice based on results
□ Identified at least one follow-up hypothesis
□ Can explain what you'd do differently if you ran the experiment again
Extension Options
Extension A: Run the same experiment across two platforms simultaneously and compare whether the effect holds across both.
Extension B: Share your experiment design (but not your predictions) with another creator before you start, and have them predict the result. Compare their prediction to the actual outcome after 30 days.
Extension C: After your first experiment concludes, design and run a second experiment that follows up on the most interesting unexplained finding from the first.
A Note on Sample Size
Thirty days with 2 videos per week produces 8 videos — 4 test, 4 control. This is a small sample for statistical conclusions, but it's large enough for meaningful learning. You're not publishing research; you're developing intuition and practice with a systematic mindset.
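If you are curious how easily a 4-versus-4 gap could arise by chance, an exact permutation test is simple at this scale: pool the eight values and check every possible 4/4 split. A sketch with placeholder numbers; treat the output as perspective on uncertainty, not a verdict on your work:

```python
from itertools import combinations

test    = [0.46, 0.44, 0.48, 0.45]   # placeholder primary-metric values
control = [0.40, 0.41, 0.39, 0.42]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(test) - mean(control)
pooled = test + control

# Exact permutation test: relabel the 8 videos every possible way
# (70 splits) and count how often the gap is at least as large as observed.
splits = list(combinations(range(8), 4))
extreme = 0
for idx in splits:
    a = [pooled[i] for i in idx]
    b = [pooled[i] for i in range(8) if i not in idx]
    if mean(a) - mean(b) >= observed:
        extreme += 1

print(f"p ≈ {extreme / len(splits):.3f} over {len(splits)} relabelings")
```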
The point is not to produce statistically significant results. The point is to develop the habit of asking specific questions, designing honest tests, and updating your beliefs based on evidence rather than impression. That habit, applied consistently over months and years, compounds into genuine expertise about your own creative work.