Capstone 1: The 30-Day Luck Experiment

Design, Run, and Analyze a Personal Luck Intervention


Overview

This capstone asks you to do something most people never do: treat your own luck as a subject of systematic inquiry.

The 30-Day Luck Experiment is a structured personal experiment. Over four weeks, you will measure your current luck architecture, design and implement specific interventions to change it, track what happens, and write an honest analysis of the results. The experiment will not tell you whether you are a lucky or unlucky person. It will tell you something more interesting: which specific variables in your current behavior, network, and environment are shaping the opportunities available to you — and whether deliberately changing those variables changes the outcome.

The experiment is grounded in the core proposition of this book: that luck is not random in the way most people assume. It clusters around certain behaviors, certain network structures, certain cognitive orientations. This means it can be studied, and what can be studied can often be influenced. The 30-Day Luck Experiment is a test of that proposition applied to your actual life, with your actual constraints and circumstances.

You will not need access to any other part of the book to complete this capstone. Everything you need is here.


Learning Objectives

By completing this capstone, you will be able to:

  • Apply the Luck Audit Framework from Chapter 36 to generate an honest baseline assessment of your own luck architecture across all seven domains
  • Use the luck journal methodology from Chapter 16 to build a sustainable daily data-collection habit
  • Design empirically grounded personal interventions with specific, measurable outcome variables
  • Distinguish between randomness, skill, and architectural luck when analyzing a set of real-world outcomes
  • Analyze a small dataset with appropriate epistemic humility — drawing conclusions the data can actually support without overclaiming
  • Build at least one behavioral or structural change that extends meaningfully beyond the 30-day experiment period

Background Theory: What You Are Measuring and Why

Before collecting data, be clear on what you are studying.

The Luck Architecture Model (Chapters 2-4) holds that a person's "luck" at any given time is a function of four interacting variables: opportunity surface (how many potentially luck-generating situations you are exposed to), prepared mind (how well-positioned you are to recognize and act on opportunities when they appear), network position (the structural properties of your relationships that determine what information and introductions flow to you), and resilience patterns (how quickly you recover from bad-luck events and whether those events compound or stay contained). These four variables are not fixed personality traits. They are architectural features — products of choices, habits, and environments — and they can be changed deliberately.

The Wiseman Experiments (Chapter 12) demonstrated that people who rated themselves as consistently lucky differed from those who rated themselves as consistently unlucky not in actual chance event frequency, but in behavioral and perceptual variables: lucky people created larger networks of opportunity, listened to their intuition more often, expected good fortune, and converted bad luck into good through reframing. These differences were stable over time and partially amenable to deliberate change through what Wiseman called "luck school." The implication: if behavioral differences account for much of the luck gap between people, behavioral change should narrow that gap.

The Luck Journal Methodology (Chapter 16) is a daily micro-reflection practice — five to ten minutes, structured prompts — designed to accomplish two things simultaneously: increase your sensitivity to luck-relevant events you would otherwise filter out (because of negativity bias, confirmation bias, or simple inattention), and create a longitudinal dataset about your own patterns. The journal is your primary data source for this experiment.

The Small-Sample Problem (Chapter 7) is the epistemic constraint governing everything you conclude. Thirty days is a short period. Your sample of luck-relevant events will be small. You cannot draw grand causal conclusions from this data. What you can do is identify patterns, generate hypotheses, and assess whether the directional evidence is consistent with the interventions you designed. Treat your conclusions as provisional updates to your model of your own luck, not as certainties. This epistemic honesty is itself a skill the experiment is designed to build.


The Four-Week Structure

Week 1: Baseline Measurement (Days 1-7)

The purpose of Week 1 is to understand where you are starting, without judgment and without premature optimization. Resist the temptation to begin making changes during this week. You are a scientist establishing a baseline before the intervention begins.

Day 1: Complete the Full Luck Audit

The Luck Audit (Chapter 36) covers seven domains with five questions each, for a total of 35 questions rated on a 1-10 scale. The domains are: Opportunity Exposure, Network Diversity, Prepared Mind, Resilience, Openness to Experience, Signal Recognition, and Intentional Action. Score each domain out of 50 and record your totals. Your overall score out of 350 is your Baseline Luck Architecture Index.

Complete it honestly. Do not optimize your answers. The audit is only useful if it reflects where you actually are, not where you would like to be or where you think you should be. An honest score of 140 is far more useful than a flattering score of 240.
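Because the scoring is pure arithmetic, it can be sketched in a few lines of Python. This is an illustrative standalone helper, not the book's own tooling; the function name `audit_index` and the dictionary input shape are assumptions:

```python
def audit_index(responses):
    """Score a completed Luck Audit.

    `responses` maps each of the seven domain names to its five
    question ratings (1-10 each). Each domain scores out of 50;
    the grand total out of 350 is the Baseline Luck Architecture
    Index. (Illustrative sketch, not the book's own tooling.)
    """
    for domain, ratings in responses.items():
        if len(ratings) != 5 or not all(1 <= r <= 10 for r in ratings):
            raise ValueError(f"{domain}: need five ratings between 1 and 10")
    domain_scores = {d: sum(r) for d, r in responses.items()}
    return domain_scores, sum(domain_scores.values())

# Example with invented ratings:
scores, index = audit_index({
    "Opportunity Exposure":   [4, 5, 3, 6, 4],
    "Network Diversity":      [2, 3, 2, 4, 3],
    "Prepared Mind":          [6, 7, 5, 6, 6],
    "Resilience":             [5, 4, 6, 5, 5],
    "Openness to Experience": [4, 4, 5, 3, 4],
    "Signal Recognition":     [3, 4, 3, 4, 3],
    "Intentional Action":     [7, 6, 7, 6, 7],
})
print(scores["Network Diversity"])  # 14
print(index)                        # 161
```

Record the per-domain subtotals, not just the headline index: Week 2's intervention selection works from the two lowest domain scores.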

Days 1-7: Start the Luck Journal

Every day, at roughly the same time (most people find evenings work best), spend five to ten minutes completing the daily log. Use the Daily Luck Log template provided in the Templates section below. The log captures: the day's luck-relevant events (positive and negative), your rating of the day's overall luck-relevant event density on a 1-10 scale, one thing you noticed that you might normally have filtered out, and one thing you did that was at least slightly outside your normal behavioral patterns.

The rating scale matters. A "1" means a day with no meaningful serendipity, no new information from unexpected sources, no encounters with people outside your normal circle. A "10" means a day with multiple meaningful unexpected connections, information, or opportunities. Most days will cluster between 3 and 6. That is expected and fine. The distribution over time — not any single day — is what is analytically interesting.

Day 2: Map Your Network

Using the concentric circle model from Chapters 19-21, map your current network by tier. Tier 1 (strong ties): the 5-15 people you are in regular contact with and genuinely know well. Tier 2 (meaningful contacts): the 50-150 people you could reach out to and who would know who you are. Tier 3 (weak ties and dormant connections): everyone else — represented as clusters and categories rather than individual names.

For each tier, note: how many are in your primary domain (your school, current job, main community)? How many are outside it? The ratio of within-domain to cross-domain contacts is one of the most reliable predictors of weak-tie serendipity, and it is often genuinely surprising when you calculate it explicitly.

Day 3: Document Your Current Opportunity Surface

The opportunity surface (Chapter 25) is the set of contexts and situations you regularly expose yourself to that could generate unexpected useful connections, information, or opportunities. List every recurring context in your week: classes, workplaces, clubs, online communities, physical spaces you frequent, events you attend. For each, note approximately how many people you are exposed to per week, and how many of them are genuinely different from you in background, domain, or perspective.

Opportunity surface is not just size — it is diversity-weighted size. Two hundred people who are all virtually identical to you in background and circumstance represent a much smaller effective opportunity surface than twenty people from genuinely different worlds.
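As a toy illustration of diversity-weighted size, you can discount each context's weekly exposure by the fraction of people in it who are genuinely different from you. The linear weighting below is an assumption made for illustration, not a formula given in the text:

```python
def effective_surface(contexts):
    """Toy diversity-weighted opportunity surface.

    `contexts` is a list of (people_per_week, fraction_different)
    pairs. The linear weighting is an illustrative assumption,
    not a formula from the text.
    """
    return sum(people * frac for people, frac in contexts)

# 200 near-identical people vs. 20 people from different worlds:
print(effective_surface([(200, 0.05)]))  # 10.0
print(effective_surface([(20, 0.90)]))   # 18.0
```

On this weighting, twenty people from genuinely different worlds outscore two hundred near-clones, which is exactly the point of the paragraph above.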

Days 4-7: Continue Daily Logs, Begin Pattern Recognition

By the end of Day 7, you will have seven daily logs. On Day 7, spend twenty minutes reviewing them. Look for patterns: on which days did luck-relevant events cluster? Were they associated with particular contexts, particular types of activity, particular states of mind? Were there any weak-tie encounters? Did you recognize them as such in the moment, or only in retrospect? Write a brief Week 1 summary using the Weekly Reflection template below.


Week 2: Intervention Design (Days 8-14)

Day 8: Select Your Two Intervention Domains

Review your Luck Audit scores from Day 1. Identify the two domains with the lowest scores. These are your intervention targets. The logic is straightforward: the greatest returns on a luck-architecture investment tend to come from the weakest points in the system, not the strongest. If your Network Diversity score is 14/50 and your Intentional Action score is 38/50, more deliberate action will not move the needle as much as network work.

If two domains are very close in score, choose by asking: which one, if improved, would most change the texture of my daily life over the next thirty days? That is usually the right answer for this experiment.
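The selection rule itself is mechanical enough to write down. A minimal sketch (function name and score data are illustrative; the near-tie judgment described above still has to be made by hand):

```python
def intervention_targets(audit_scores, k=2):
    """Return the k lowest-scoring audit domains.

    Ties are broken alphabetically, so inspect near-ties
    yourself, as the text advises.
    """
    return sorted(audit_scores, key=lambda d: (audit_scores[d], d))[:k]

# Illustrative Day 1 domain scores (out of 50 each):
day1 = {
    "Opportunity Exposure": 22, "Network Diversity": 14,
    "Prepared Mind": 30, "Resilience": 25,
    "Openness to Experience": 20, "Signal Recognition": 17,
    "Intentional Action": 38,
}
print(intervention_targets(day1))  # ['Network Diversity', 'Signal Recognition']
```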

Days 8-9: Design Your Interventions

For each selected domain, design a specific, measurable intervention. A good intervention has three properties: it must be concrete enough that you can unambiguously say whether you did it or not; it must be realistic given your actual schedule and resources; and it must be clearly connected to the theory — you should be able to explain in one or two sentences exactly why, according to the luck science frameworks, this intervention should produce the outcome you are hoping for.

Example interventions by domain:

  • Opportunity Exposure: "I will attend one event per week outside my normal domain. I will stay for at least one hour and initiate at least two conversations with people I have not previously met."
  • Network Diversity: "I will contact one weak tie or dormant connection per day — three days per week — with a genuine, non-transactional message. I will not ask for anything in the first contact."
  • Prepared Mind: "I will spend 20 minutes per day reading in a field adjacent to but outside my main domain. I will keep a running list of cross-domain connections I notice."
  • Signal Recognition: "I will add a specific daily luck journal prompt: 'What happened today that I would normally have ignored or dismissed?' I will not allow myself to answer 'nothing.'"
  • Resilience: "Following any setback or rejection this week, I will write a 5-minute post-mortem within 24 hours. The post-mortem must include: what happened, what I can control going forward, and one thing I will do differently."
  • Openness to Experience: "I will say yes to at least one invitation each week that I would normally decline for a reason I cannot specifically articulate."

The key is specificity. "Be more open to new experiences" is not an intervention. "Attend one event per week outside my usual contexts and speak to two strangers at each" is an intervention. The difference matters because you cannot track whether you followed through on a vague intention, and you cannot analyze the results of something you cannot measure.

Day 10: Set Up Your Tracking System

Create a simple tracking system — a spreadsheet, a dedicated section of a notebook, a notes application on your phone. It should capture: Did I execute the intervention? (Yes / Partial / No) — What specifically happened as a result? — Did anything unexpected occur? The tracking system is not bureaucratic overhead. It is your evidence base. Without it, your Week 4 analysis will be impressionistic rather than grounded.

Days 11-14: Pre-Intervention Actions

Before the formal intervention period begins in Week 3, take two concrete preparatory steps:

First, make contact with at least one person entirely outside your normal circles — someone you would not encounter in a typical week. This could be through an event, an online community, or a genuine outreach to someone whose work you have followed from a distance. Document the contact and any response.

Second, identify the specific events, communities, or channels where you will execute your interventions in Weeks 3 and 4. Have each one scheduled, booked, or signed up for before Day 14 ends. Interventions that are not scheduled in advance rarely happen; interventions that are scheduled have a much higher execution rate.


Week 3: Intervention Execution (Days 15-21)

This is the core of the experiment. Execute both interventions as designed, track everything, and resist the temptation to modify the design mid-stream based on early results. Changing your intervention protocol partway through makes analysis substantially harder. If something is clearly not working, note it in your journal but continue the original plan through Day 21.

Daily Routine During Week 3:

  • Complete your luck journal entry (5-10 minutes)
  • Complete your intervention tracking entry for both interventions (5 minutes)
  • Note anything unexpected or surprising, even if it seems minor at the time

Day 18: Mid-Experiment Reflection

On Day 18, spend 20 minutes on a structured mid-experiment review using these prompts:

  1. Have I actually executed both interventions as designed? If not, what specifically got in the way?
  2. Is there anything in the data so far that genuinely surprises me?
  3. What is one thing I have noticed about my own patterns that I did not expect to notice?
  4. Am I filtering luck-relevant events differently than I was in Week 1 — noticing more, or noticing differently?

Record your answers in your journal. Do not use this reflection to revise your hypotheses — that is Week 4's job. Use it to make sure you are executing honestly and catching anything that is getting in the way of accurate data collection.

Optional Python Extension:

If you have basic Python familiarity, the luck_audit.py module introduced in Chapter 36 can be adapted to generate a weekly score tracker. Set up a simple script that prompts you for your seven daily luck-density scores and outputs a weekly average, a domain-by-domain trend line, and a flag if any day falls below a threshold you set in advance. This is not required, but students who use quantitative tracking consistently report that it makes the Week 4 analysis richer and more specific.
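One way the core of such a tracker might look, written as a standalone sketch rather than the actual luck_audit.py module (wrap it in input() prompts if you want the interactive version; the threshold default here is an assumption):

```python
from statistics import mean

def weekly_tracker(daily_scores, threshold=3):
    """Summarize seven daily luck-density scores (1-10 each).

    Returns the weekly average and the day numbers (1-7) whose
    score fell below the threshold you set in advance.
    """
    if len(daily_scores) != 7:
        raise ValueError("expected exactly seven daily scores")
    flagged = [day for day, score in enumerate(daily_scores, start=1)
               if score < threshold]
    return round(mean(daily_scores), 2), flagged

avg, low_days = weekly_tracker([4, 5, 3, 6, 2, 5, 4], threshold=3)
print(avg)       # 4.14
print(low_days)  # [5]
```

A spreadsheet does the same job; the value is in the consistency of the weekly summary, not the tooling.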


Week 4: Analysis and Synthesis (Days 22-30)

Day 22: Complete the Second Luck Audit

Retake the full 35-question Luck Audit from Chapter 36 before reviewing your Week 1 scores. Complete it based on how you currently see yourself — honestly, without optimizing for an improved score. Then compare to your Day 1 baseline.

Days 23-25: Data Review

Review your full dataset so far: three weeks or more of daily logs, your intervention tracking records, your mid-experiment reflection, and both Luck Audit completions. Organize your findings in the Comparison Table template provided below. Look specifically for:

  • Which days had the highest luck-relevant event density? What were you doing on those days?
  • Did the intervention domains show measurable change in your daily-log ratings across the four weeks?
  • What types of events — encounters, information, opportunities, introductions — appeared most frequently in your logs?
  • Are there any events that, in retrospect, appear to have been caused or enabled by your interventions, even indirectly or with a time delay?
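For the rating-distribution question in particular, a small helper can collapse your daily ratings into week-by-week averages. The ratings below are invented for illustration, and with a sample this small the comparison is directional at best, per the Chapter 7 standard:

```python
from statistics import mean

def weekly_averages(daily_ratings, days_per_week=7):
    """Average consecutive weeks of daily 1-10 ratings so the
    baseline week can be compared against the intervention weeks."""
    return [round(mean(daily_ratings[i:i + days_per_week]), 2)
            for i in range(0, len(daily_ratings), days_per_week)]

# 28 invented daily ratings, Weeks 1-4 in order:
ratings = [4, 3, 4, 5, 3, 4, 4,   4, 5, 4, 4, 5, 4, 5,
           5, 6, 5, 4, 6, 5, 6,   5, 6, 6, 5, 6, 5, 7]
print(weekly_averages(ratings))  # [3.86, 4.43, 5.29, 5.71]
```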

Days 26-28: Write the Analysis

Write an analysis of at least 500 words (most students produce 700-1,000) organized around four questions. Shorter is not better here — the goal is precision, not brevity.

  1. What changed? Compare your Week 1 and Week 4 Luck Audit scores domain by domain. Compare the distribution of your daily-log ratings across the four weeks. What is the directional story the data tells?

  2. What caused it? Distinguish carefully between outcomes that are plausibly attributable to your interventions, outcomes that appear coincidental, and outcomes that remain genuinely ambiguous. Apply the small-sample epistemic standard from Chapter 7: what can you responsibly claim based on the evidence you actually have?

  3. What was luck versus intervention? Select at least two specific events from your logs and analyze each using the luck science framework: how much of this outcome was structural (enabled by your architecture), how much was behavioral (a product of a specific choice you made), and how much was genuinely random in a way you cannot account for?

  4. What surprised you? Every serious experiment produces at least one genuinely unexpected finding. What was yours? What does it suggest about your luck architecture that your baseline assumptions did not anticipate?

Days 29-30: Build the 90-Day Follow-Up Plan

The experiment ends. The architecture does not have to. Design a realistic 90-day continuation plan with three specific components:

  1. The habit to keep: Which single behavioral change from the experiment will you continue, formalized as a specific, measurable commitment? "I will contact one weak tie per week on Sunday evenings before 9pm" is better than "I will keep expanding my network." The specificity is what makes habits survivable.

  2. The domain to keep working on: Which Luck Audit domain still has the most room for improvement? What is one structural change — not merely a behavioral one, but a change to an environment, community, or recurring context — that could move the needle in the next 90 days?

  3. The check-in moment: Schedule a 20-minute self-audit for 90 days from today. Put it in your calendar now, before you close this document. At that audit, you will retake the Luck Audit, record three luck-relevant events from the past month, and compare to your experiment baseline.


Templates and Worksheets

Template 1: Daily Luck Log

Date: ______     Day Number: __ / 30     Overall Day Rating (1-10): __

Luck-relevant events today (list any encounters, information, opportunities, or connections that felt meaningful, unexpected, or potentially significant — positive or negative):

Event Description | Type | Rating (1-10) | Notes / Immediate Follow-Up
________________ | Encounter / Information / Opportunity / Other | ___ | ________________
________________ | Encounter / Information / Opportunity / Other | ___ | ________________
________________ | Encounter / Information / Opportunity / Other | ___ | ________________

One thing I noticed today that I might normally have filtered out or dismissed:

One thing I did today that was outside my normal behavioral patterns (however small):

Intervention Tracking (Weeks 3-4 only):

  • Intervention 1: Executed? (Yes / Partial / No) — What specifically happened?
  • Intervention 2: Executed? (Yes / Partial / No) — What specifically happened?

Template 2: Weekly Reflection Log

Week Number: __     Date Range: ______ to ______

Daily rating summary:

Mon | Tue | Wed | Thu | Fri | Sat | Sun | Weekly Average
___ | ___ | ___ | ___ | ___ | ___ | ___ | ___

This week's highest-density luck moment (describe specifically — who, what context, what happened):

This week's most significant weak-tie encounter (or honest note if there were none):

One pattern I am beginning to notice in my own behavior or attention:

One thing I want to track more carefully next week:

Completion rate on interventions this week (Weeks 3-4): ____ out of ____ scheduled instances


Template 3: Luck Audit Comparison Table

Domain | Week 1 Score (/50) | Week 4 Score (/50) | Change (+/-) | Your Interpretation
Opportunity Exposure | ____ | ____ | ____ | ____
Network Diversity | ____ | ____ | ____ | ____
Prepared Mind | ____ | ____ | ____ | ____
Resilience | ____ | ____ | ____ | ____
Openness to Experience | ____ | ____ | ____ | ____
Signal Recognition | ____ | ____ | ____ | ____
Intentional Action | ____ | ____ | ____ | ____
Total (/350) | ____ | ____ | ____ | ____

Domains selected for intervention:

  • Domain 1: ______ (Week 1: __/50; Week 4: __/50; Change: __)
  • Domain 2: ______ (Week 1: __/50; Week 4: __/50; Change: __)

Are the changes in intervention domains consistent with the interventions you designed? Explain in 2-3 sentences, citing specific evidence from your daily logs:


Template 4: Event Causal Analysis Table

For at least five luck-relevant events from your logs, complete this analysis:

Event (brief description) | Structural factor (architecture that made it possible) | Behavioral factor (choice you made) | Random factor (genuinely unpredictable element) | Overall assessment
____ | ____ | ____ | ____ | Mostly structural / Mostly behavioral / Mostly random / Mixed
____ | ____ | ____ | ____ | Mostly structural / Mostly behavioral / Mostly random / Mixed
____ | ____ | ____ | ____ | Mostly structural / Mostly behavioral / Mostly random / Mixed
____ | ____ | ____ | ____ | Mostly structural / Mostly behavioral / Mostly random / Mixed
____ | ____ | ____ | ____ | Mostly structural / Mostly behavioral / Mostly random / Mixed

Template 5: 90-Day Follow-Up Plan

The specific habit I am keeping:

Behavior: (describe exactly — no vague intentions)

Frequency: ______

When in the week: ______

Accountability mechanism: (who will know? how will you check?)

The domain I am continuing to develop:

Domain: __

Structural change I will make (not just a behavior — a change to environment, community, or recurring context):

Timeline for implementing that structural change: __

My 90-day check-in date: __ (this should already be in your calendar)

What I will assess at that check-in:

  • Retake Luck Audit: Yes / No
  • Record three luck-relevant events from the past month: Yes / No
  • Compare to experiment baseline: Yes / No
  • One question I want to answer at the check-in: ______


Reflection Questions for Final Synthesis

These questions are for your written analysis and personal journal. You do not need to answer every one — engage seriously with at least eight, and be honest rather than impressive.

  1. Before this experiment, how would you have explained why some people seem consistently luckier than others? Has that explanation changed? In what specific way — what did the experiment add, remove, or complicate?

  2. The Luck Audit scored you across seven domains. Which score was most surprising to you — either higher or lower than expected — and does that surprise tell you something about your own blind spots about yourself?

  3. Wiseman found that "lucky" people notice unexpected opportunities that "unlucky" people miss, even when both are exposed to the same situations. Did keeping the luck journal change what you noticed day-to-day? What is your evidence for your answer?

  4. You selected two intervention domains based on your lowest audit scores. In retrospect, were those the right domains to prioritize? Were there higher-leverage domains that your scores did not reveal as clearly?

  5. The small-sample problem from Chapter 7 limits what you can responsibly conclude from 30 days of personal data. What can you actually say based on your evidence? What would you need to see — over what time period, with what kind of data — to have more confidence in your conclusions?

  6. Describe one event from your logs that you initially coded as "random" or "lucky" and then, on reflection, realized was partially caused by something you did or by how you were structurally positioned. What does that reclassification suggest about how you have been thinking about luck in your own life?

  7. Were there days when executing the interventions felt effortful, artificial, or forced? What does that friction tell you about the fit between the intervention design and your actual context, values, or constraints?

  8. Chapter 16 argues that the luck journal works partly by reducing confirmation bias in how we perceive our own luck — we start noticing and recording positive events we would previously have filtered out. Did you notice any shift in your perception of your own luck over the 30 days? Is that shift evidence of reduced bias, or could it reflect something else?

  9. Network diversity was one of the four architectural variables. Did your mapping exercise in Week 1 reveal anything about the diversity of your current network that you had not explicitly noticed before? What did it feel like to see the distribution clearly on paper?

  10. The weak-tie hypothesis (Granovetter, Chapter 19) holds that the most valuable career and opportunity information travels through acquaintances rather than close friends. Did your experiment produce any evidence relevant to this hypothesis in your own specific context?

  11. What was the relationship between your effort and your luck-relevant event density during the experiment? Was there a correlation? What are the most plausible alternative explanations for that correlation besides "effort causes luck"?

  12. If you were to run this experiment again — same 30-day duration, same structure — what would you change about the intervention design, and why? What did you learn about your own context that a second iteration would account for?

  13. The experiment focused on behavioral changes, but luck architecture also involves structural changes — changing which communities you are part of, which recurring contexts you inhabit, which environments you spend time in. Did the experiment surface any structural changes that would have higher leverage than the behavioral interventions you chose?

  14. What does "building a lasting habit" actually mean in the context of luck science? Why would a behavior change begun during an experiment likely not persist without a structural or environmental anchor to sustain it?

  15. Dr. Yuki Tanaka's research argues that luck is not primarily about what happens to you but about what you are structurally positioned to notice, act on, and recover from. After 30 days of tracking your own luck with some rigor, do you believe this? What evidence from the experiment is most relevant to your answer? What would change your mind?


Rubric for Self-Evaluation

Rate yourself honestly across these five dimensions. "Excellent" is not the goal — honest self-assessment is. A Developing rating with a clear analysis of why is more valuable than an Excellent rating that glosses over weaknesses.

Dimension 1: Rigor of Data Collection

Excellent: Daily logs completed for 28 of 30 days or more; entries are specific and honest rather than retrospective summaries; intervention tracking is granular and distinguishes partial from full execution; events are described with enough detail to support later analysis.

Good: Daily logs completed for 20-27 days; entries are generally specific; intervention tracking is present but occasionally vague or incomplete; some events are described too briefly for clear analysis.

Developing: Daily logs completed for fewer than 20 days; entries are mostly general impressions rather than specific events; intervention tracking is inconsistent or absent for significant stretches.

Dimension 2: Quality of Intervention Design

Excellent: Both interventions are specific, measurable, directly connected to luck science theory from the book, and realistically executable given the student's actual schedule and resources. The mechanism of action — precisely why this intervention should produce the expected outcome — is explicitly stated and connected to specific frameworks.

Good: Both interventions are specific and measurable; the connection to theoretical frameworks is present but not fully articulated; one intervention may be somewhat ambitious for the available time and context.

Developing: One or both interventions are vague (e.g., "be more open to new experiences"), lack explicit connection to specific frameworks, were abandoned before Week 3, or were designed at a level of generality that makes tracking and analysis difficult.

Dimension 3: Analytical Honesty

Excellent: The final analysis applies the small-sample epistemic standard explicitly and consistently; clearly distinguishes between what the data shows and what the student believes is causing it; engages seriously with at least two alternative explanations for observed outcomes; reports unexpected or null findings without retrofitting a success narrative around them.

Good: The analysis is generally honest; engages with at least one alternative explanation; does not overclaim but may underengage with the complexity or ambiguity of the data.

Developing: The analysis reads primarily as a success narrative or a failure narrative without engagement with alternative explanations, null results, or the limits of the data; conclusions are stated with more confidence than the evidence supports.

Dimension 4: Framework Application

Excellent: The luck science frameworks — opportunity surface, prepared mind, weak ties, signal recognition, structural luck vs. behavioral luck — are used as genuine analytical tools applied to specific events and data points, not as decorative vocabulary. When a framework term appears, it is connected to something specific in the data.

Good: Frameworks are applied in the analysis and reflection sections with generally accurate usage; some terminology appears more decoratively than analytically.

Developing: Frameworks are mentioned but not applied; the analysis could have been written without having read the book, substituting generic self-reflection for framework-grounded analysis.

Dimension 5: Specificity of the 90-Day Plan

Excellent: The follow-up plan includes a behavioral commitment with specified frequency, timing, and accountability mechanism; a structural change that goes beyond individual behavior; a check-in date already entered in a calendar; and a brief explanation of why these specific elements were chosen over alternatives that were considered.

Good: The follow-up plan is specific and realistic; includes both a behavioral and a structural element; may lack an explicit accountability mechanism or a clear rationale for the choices made.

Developing: The follow-up plan is vague ("I'll keep working on my network and staying open") or consists only of intentions without specific behaviors, timelines, or accountability structures.


Character Connection: How Nadia, Marcus, and Priya Did This

Nadia's Version: The 30-Day Content Experiment

Nadia started tracking her own luck patterns at nineteen — not with that name for it, but with the same underlying impulse. She had a hunch that her content's performance was not random, that certain weeks produced momentum and others did not, and that the difference was not just algorithm variance. She started keeping a log: for every piece she published, she noted what had preceded it in the 48 hours before creation. What she had read. Who she had talked to. Whether she had left her apartment. Whether she had had any conversation that week outside her creative community.

The finding surprised her. Her three highest-performing pieces that month all originated from conversations with people who had nothing to do with content creation. A retired librarian she had sat next to on a bus. A conversation at her cousin's graduation party with someone who studied urban planning. A comment on an unrelated online forum that sent her down a research spiral she had not anticipated. The content that felt most alive, she concluded, was the content that began somewhere genuinely outside her existing world.

That finding changed her network strategy, her daily habits, and eventually her trajectory toward 50K followers. What the log gave her was not certainty — it gave her a direction she could act on. The 30-day experiment became a permanent orientation.

Marcus's Version: The Startup Risk Log

Marcus started a risk log for his startup after a mentor asked him a question he could not answer: "When things go right, do you know why?" He realized he tracked failures carefully and victories casually, assuming wins were earned and losses were bad luck. The log changed that asymmetry. He started coding every significant outcome — a new customer, a rejected partnership, a feature that landed differently than expected — along with a retrospective assessment: how much of this was skill, how much was architecture, how much was genuinely random?

What the log revealed over thirty days was that the highest-value outcomes were disproportionately coming through weak ties: people two or three connections removed from his core team. And those connections were being enabled by one recurring behavior — showing up to events where chess and technology intersected, a niche he had occupied for years without analytically connecting it to his outcomes. The log made the connection explicit. That insight did not make Marcus luckier. It made him strategic about where he spent his discretionary time, which amounts to the same thing in practice.

Priya's Version: The Job Search Tracker

Priya spent six weeks tracking every lead, every conversation, every introduction during her job search. She wanted to know which of her efforts were actually producing results, because she was expending roughly equal energy on cold applications and warm introductions and was not sure either was working well.

The data was uncomfortable. Zero of her cold applications produced a meaningful conversation during the search period. Every one of her meaningful conversations came through connections two or three degrees removed from her existing network — people she had not known at the start of the search. Three of those enabling connections were made possible by a single person she had met at a professional development event she had almost skipped because she was tired on a Tuesday.

The job she eventually took came through the third degree: someone who knew someone who knew a person Priya had met at that event. She got the job because she went somewhere she almost did not go, met someone she did not know yet, and stayed for the networking portion of the evening instead of leaving early as she had planned.

Her tracker did not get her the job. But it let her see, clearly and specifically, what actually worked — and it gave her a framework for the next six months of her career built on evidence rather than guesswork.


Your experiment will not look like any of theirs. It will be shaped by your specific circumstances, network, domain, blind spots, and constraints. That is exactly why it is worth running. The data that matters most to your luck architecture is the data that is specifically about you.

Thirty days. The baseline audit takes an hour. The daily log takes five minutes. Start tomorrow.