Capstone 3: Build a Serendipity Engine for a Real Organization or Community
Apply Luck Architecture Thinking to a Collective
Overview
The first two capstone projects work at the scale of the individual: your own 30-day experiment, your own network. This project works at a different scale. It asks you to analyze and redesign the luck architecture of an organization or community you are actually part of — and, if at all possible, to implement at least one element of your design and document what happens.
The central premise is one that Dr. Yuki Tanaka's research made explicit across Parts 4-6 of this book: organizations, like individuals, can be analyzed through a luck framework. Some organizations are structured in ways that generate ongoing fortunate outcomes for their members — they create the conditions for serendipitous connections, unexpected collaborations, and the cross-pollination of ideas that produces innovation, engagement, and durable success. Other organizations are structured in ways that actively suppress luck: they create information silos, homogenize communication, remove the low-stakes informal interactions where serendipity most often lives, and reward staying in lane over bridging across it.
A "serendipity engine" is a specific, designable set of organizational features that increase the probability of the fortunate encounters that produce innovative ideas, valuable connections, and unexpected opportunities. Physical space design, meeting culture, communication protocols, norms around sharing works-in-progress — all of these have measurable effects on the rate of serendipitous outcomes within an organization. Dr. Tanaka's key finding, which she arrived at across three years of comparative organizational research: "Organizations that engineer serendipity outperform those that leave it to chance — the data is unambiguous."
Your task is to choose a real organization, audit its current serendipity architecture, design a serendipity engine appropriate to its context, implement at least one component, and analyze what happened.
This is the most ambitious of the three capstones. It asks you to move from understanding luck science to applying it at a collective level — to be not just a luck recipient, but a luck architect for others. The level of ambition is matched by the level of learning it produces. Students who complete this capstone consistently report that it changes how they think about organizations, systems, and their own role within communities.
You will not need access to any other part of the book to complete this capstone. Everything you need is here.
Learning Objectives
By completing this capstone, you will be able to:
- Apply organizational luck design principles from Chapters 24-29 to an actual institution or community
- Use the serendipity engine framework to diagnose specific structural barriers to fortunate encounters within an organization
- Understand how designed environments and practices can systematically increase the rate of valuable unexpected collisions among members
- Practice systems thinking: how specific design inputs create emergent lucky outcomes that no one person orchestrated
- Build practical design and facilitation skills by implementing at least one real intervention
- Analyze the gap between designed intent and observed outcomes — a core skill for anyone who works with or leads organizations
Background Theory: The Serendipity Engine Framework
A serendipity engine is not a metaphor. It is a specific organizational design that has four functional components. Understanding each component clearly is essential before attempting to design one.
Component 1: Collision Space
A collision space is any environment — physical or digital — in which people who would not otherwise encounter each other are brought into proximity, with low social friction and no mandatory agenda. The research on how valuable working relationships form consistently shows that a substantial proportion of the most productive collaborations begin in informal, unstructured encounters — the hallway conversation, the shared table at lunch, the random seating assignment at an all-hands meeting, the online channel where people post things unrelated to their main work.
The collision space insight, developed most explicitly in research on MIT's Building 20, Pixar's studio layout, and Bell Labs' physical design, is that propinquity (physical or social closeness) dramatically increases the probability of interaction, and interaction is the precondition for the serendipitous exchange that generates value. Organizations that design for task completion alone — every space dedicated to a specific function, every interaction purpose-driven — eliminate the collision spaces where serendipity lives. Organizations that intentionally design collision spaces see measurably higher rates of cross-domain collaboration and unexpected innovation.
Component 2: Information Circulation
For a serendipitous connection to occur, each party needs to know something about what the other is working on, struggling with, or looking for. Most organizations fail at this. Information about what members are doing, thinking about, and needing tends to stay within small working groups or travel only upward through hierarchies — never horizontally, and rarely across different clusters. The result is that members who could help each other never know they should.
Effective information circulation systems are simple and low-friction. They do not require members to proactively disclose everything they are working on in detail. They create minimal-format channels through which a piece of key information — "I'm currently working on X" or "I'm looking for Y" — can travel across the organization passively, without requiring the sender to know who needs it. What-I'm-working-on boards, weekly digests, brief all-hands check-ins, and even well-designed Slack channels can function as information circulation systems. The design question is: how does information about what members need and offer get from the people who have it to the people who could use it?
Component 3: Connection Facilitation
Even when people are in proximity and aware of each other's work, they often do not connect — because of social friction, uncertainty about whether the other person wants to be approached, or simply because no one has introduced them. Connection facilitation is the organizational equivalent of what connectors do in networks: actively introducing people who should know each other, removing the social cost of the cold approach, and providing both parties with a reason and a frame for their first conversation.
Connection facilitation can be human (a person whose informal role is to make introductions — every organization has someone who does this naturally; the question is whether it is recognized and resourced) or structural (a random pairing system, a matching algorithm, a buddy assignment for new members). The most effective facilitation systems combine both: a structural mechanism that creates the opportunity, and a human facilitator who adds context and follow-through.
Component 4: Follow-Through Structure
Serendipitous encounters that are not followed up become anecdotes, not outcomes. The follow-through structure is what converts the valuable unexpected encounter into an ongoing relationship, a collaboration, a shared project, or a specific result. Most organizations do nothing here, leaving the follow-through entirely to individuals — which means it happens for the most socially confident and proactive members and rarely for anyone else.
Effective follow-through structures create a natural next step after the initial encounter: a shared channel where ideas from the conversation can be continued, a standing optional gathering where people who connected at one event can deepen the relationship, a brief check-in format that allows the organization to learn which connections are producing value and which need additional facilitation.
Part 1: Choose Your Context
Choose a real organization or community to design your serendipity engine for. The choice should satisfy these criteria:
Real: The organization actually exists and you have meaningful access to it — enough to observe its practices, talk to members, and understand how it actually functions rather than how it is supposed to function.
Appropriate Scale: Large enough to have interesting structural dynamics (more than 8-10 people) and small enough for you to understand it with real depth (typically fewer than 300 people for this project, though larger organizations can be addressed if you scope to a specific team or unit).
Open to Analysis: You have either explicit permission or reasonable access to conduct observations and, ideally, to propose and pilot interventions. This does not require formal approval in most cases — but it does require that you be honest about your role and intentions with the people involved.
Contexts that work well for this capstone:
- A school club, student government, or academic organization you are a member of
- A sports team, theater ensemble, dance company, or creative collective
- A workplace team or department (summer job, internship, part-time job)
- A nonprofit, volunteer organization, or community service group
- An online community with active membership: Discord server, Slack workspace, professional forum, subreddit with a regular contributing community
- A neighborhood, co-op, or residential community
- A faith community or cultural organization
- A classroom cohort or study group with enough members and structure to have interesting dynamics
Document your context:
Organization name and type:
Your role and relationship to it:
Why this organization (be specific — what drew you to analyzing this one in particular?):
Approximate size (number of active members):
Has the organization been informed of this analysis? Yes / No / Partially
What access do you have to observe its practices, spaces, and communications?
Part 2: Audit the Current Serendipity Architecture
Before designing anything, you need an honest picture of where the organization currently stands on each component of the serendipity engine. This audit takes careful observation — attend meetings, observe the spaces (physical or digital), talk informally to members, look at communication records where accessible.
Do not do this from imagination or assumption. The audit is only useful if it reflects what is actually happening rather than what the organization believes about itself or intends to be true.
The Serendipity Architecture Audit: Four-Component Assessment
Component 1: Collision Space Assessment
Investigate and document:
- How many accidental, informal encounters between members happen per week? (Estimate based on observation — count informal conversations at events, casual drop-ins, unplanned encounters in shared spaces.)
- What physical or digital spaces exist where members interact outside of task-focused meetings? (Break rooms, informal chat channels, pre/post-meeting gathering, social events.)
- How long do typical informal interactions last, and how often do they cross the usual working-group boundaries?
- When was the last time two members connected in a way that neither of them planned and that produced something of value?
Current Collision Space Rating (1-5): ______
1 = No informal interaction spaces or practices; all contact is task-focused and within established working groups.
3 = Some informal interaction but limited and concentrated within existing groups rather than across them.
5 = Rich collision infrastructure: multiple informal spaces, regular informal events, cross-group encounters happen routinely.
Evidence for your rating (be specific — cite what you observed, not what you assumed):
Component 2: Information Circulation Assessment
Investigate and document:
- How does information about what members are working on travel across the organization? (Top-down only? Peer to peer? Does it cross group boundaries?)
- Are there any formal or informal mechanisms for members to share what they are currently working on, struggling with, or looking for?
- Can you identify two members in different parts of the organization who could help each other right now, but almost certainly do not know they could? (If yes, this is evidence of an information circulation failure.)
- What is the most recent example of a valuable connection that was made possible because the right person learned what another person was working on?
Current Information Circulation Rating (1-5): ______
1 = Information stays within immediate working groups; most members have no visibility into others' projects or needs.
3 = Some cross-group information sharing, mostly through formal reports or announcements; little horizontal circulation of work-in-progress or needs.
5 = Effective organization-wide information circulation; members routinely learn about each other's work from channels other than formal reporting; "I know who could help you with that" is a frequent and accurate statement.
Evidence for your rating:
Component 3: Connection Facilitation Assessment
Investigate and document:
- Who in this organization naturally plays the connector role — the person who introduces people to each other, who knows everyone's work, who makes the "you two should talk" introduction? Is this role recognized and resourced, or is it purely informal and dependent on one individual?
- When new members join, are they systematically introduced to existing members in ways that go beyond a single orientation session?
- Does the organization have any formal mechanism for facilitating connections — matching systems, buddy programs, structured pairings — or is connection entirely left to individual initiative?
- What happens when someone in the organization is looking for a specific kind of help or collaboration? Is there a way for that need to reach the person who could answer it?
Current Connection Facilitation Rating (1-5): ______
1 = No connection facilitation; all connections are self-initiated; new members are on their own to form relationships.
3 = Some informal facilitation by natural connectors; no formal mechanisms; connection quality varies dramatically by member personality.
5 = Active connection facilitation: a recognized connector role, systematic new-member integration, formal or semi-formal matching mechanisms, clear paths for members to find collaborators.
Evidence for your rating:
Component 4: Follow-Through Structure Assessment
Investigate and document:
- What happens after a positive informal encounter between two members? Is there a natural next step, or does follow-through depend entirely on individual initiative?
- When a serendipitous connection does produce value (a collaboration, a shared project, useful information transfer), does the organization know about it? Is there any mechanism for capturing and amplifying these outcomes?
- What is the dropout rate on relationships that begin informally? (How many "we should talk more" conversations turn into ongoing relationships vs. remaining one-off encounters?)
- Does the organization have any shared spaces, projects, or gatherings that function as natural continuation points for conversations that begin elsewhere?
Current Follow-Through Structure Rating (1-5): ______
1 = No follow-through structure; everything depends on individual initiative; most informal connections remain one-off encounters.
3 = Some natural follow-through enabled by recurring events or shared channels; still largely dependent on individual initiative for the most valuable connections.
5 = Active follow-through structure: clear next steps after valuable encounters, shared continuation spaces, organizational visibility into which connections are producing value, amplification of successful serendipitous outcomes.
Evidence for your rating:
Audit Summary and Priority Assessment
| Component | Rating (1-5) | Most Significant Specific Gap |
|---|---|---|
| Collision Space | ||
| Information Circulation | ||
| Connection Facilitation | ||
| Follow-Through Structure | ||
| Total | /20 | |
Interpreting your score:
- 16-20: Strong serendipity architecture. Design interventions targeting the one or two lowest-scoring components.
- 11-15: Moderate. Multiple components need attention; focus on the lowest-scoring component that also has the highest feasibility for intervention.
- 6-10: Weak serendipity architecture. The organization is leaving significant luck on the table. A comprehensive redesign is warranted; start with the highest-leverage single intervention.
- 4-5: Very weak. The organization is structured in ways that actively suppress serendipitous encounter. Major structural intervention needed; the case for change should be made clearly to leadership before attempting implementation.
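If you are comparing several audits, or re-auditing the same organization over time, the score bands above can be encoded in a small helper. This Python sketch is illustrative only — the function name and return strings are assumptions, not part of the framework:

```python
def interpret_audit_total(total):
    """Map a serendipity audit total to the bands described above.

    With four components each rated 1-5, the minimum possible
    total is 4 and the maximum is 20.
    """
    if not 4 <= total <= 20:
        raise ValueError("audit total must be between 4 and 20")
    if total >= 16:
        return "strong: target the one or two lowest-scoring components"
    if total >= 11:
        return "moderate: focus on the lowest-scoring, most feasible component"
    if total >= 6:
        return "weak: start with the highest-leverage single intervention"
    return "very weak: make the case for structural change to leadership first"

# Example: the audit totals of two hypothetical organizations
print(interpret_audit_total(13))
print(interpret_audit_total(7))
```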
Additional Structural Barriers:
Beyond the four components, note any additional structural barriers to serendipitous encounter that your observation revealed:
- Physical or virtual space constraints that prevent informal interaction:
- Cultural barriers (norms that discourage cross-group interaction, hierarchy that prevents junior members from connecting with senior ones):
- Time constraints that eliminate informal interaction (meetings that run back-to-back, no transition time):
- Diversity gaps that limit the range of perspectives in circulation:
Part 3: Design the Serendipity Engine
Based on your audit, design a serendipity engine appropriate to your organization's specific context. The design should be:
Targeted: Focused on the components with the lowest audit scores, or the highest-leverage gap relative to the organization's primary purpose.
Specific: Concrete enough that someone who has not spoken to you could implement it accurately.
Feasible: Realistic given the organization's actual resources, culture, and buy-in level.
Measurable: You should be able to observe whether it is working.
The Four-Component Design
Design your Collision Space:
Describe 2-3 specific events, practices, or spaces per month that would create genuine cross-group informal encounters. For each, specify:
- What it is (an event format, a physical change, a digital space)
- How it creates cross-group encounters (not just interactions within existing groups)
- What the social friction level is (lower friction = higher participation)
- How frequently it occurs and who is responsible for it
Effective collision space designs for organizations at different scales:
- For a small team or club (8-30 people): A weekly 15-minute "random pairs" coffee/tea — software or a hat assigns pairs each week, both parties have one question to discuss. Requires zero agenda and minimal time.
- For a medium organization (30-100 people): A monthly cross-team project sprint: a defined problem, randomly assigned cross-group teams of 3-4, one meeting, one output. Creates lasting weak ties.
- For a large organization or online community (100+ people): A monthly "showcase" format — members briefly share what they're working on in a low-stakes setting; designed for discovery, not performance.
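If you run the "random pairs" format through software rather than a hat, the assignment logic is simple. A minimal Python sketch (the function name and roster are hypothetical): it shuffles the roster, pairs adjacent members, and folds any leftover member into a trio so no one sits out:

```python
import random

def assign_random_pairs(members, seed=None):
    """Shuffle the roster and return non-overlapping pairs.

    With an odd-sized roster, the leftover member joins the
    last pair to form a trio, so no one is left out.
    """
    rng = random.Random(seed)  # a fixed seed makes a week's draw reproducible
    shuffled = list(members)
    rng.shuffle(shuffled)
    pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled) - 1, 2)]
    if len(shuffled) % 2 == 1 and pairs:
        pairs[-1].append(shuffled[-1])
    return pairs

# Example: a hypothetical nine-person club
roster = ["Ana", "Ben", "Chidi", "Dana", "Eli", "Fay", "Gus", "Hana", "Ines"]
for group in assign_random_pairs(roster, seed=7):
    print(" & ".join(group))
```

Passing a different seed each week (the week number works well) keeps draws varied but auditable, so you can later reconstruct who was paired with whom.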
Your collision space design:
Design your Information Circulation System:
Describe the specific mechanism through which information about what members are working on and looking for will circulate across the organization. Specify:
- The format (a channel, a board, a weekly digest, a template)
- The content (what specifically members share — one sentence is usually better than a paragraph)
- The cadence (how often it runs)
- Who is responsible for maintaining it
- How you will make participation easy enough that people actually do it
Effective information circulation designs:
- For small organizations: A shared document or channel where members post one sentence per week: "Currently working on: ___. Looking for: ___." Takes 60 seconds to update; creates surprising connections.
- For medium organizations: A weekly digest compiled from brief member updates — 5-8 lines per member, circulated by email or posted in a shared channel.
- For online communities: A dedicated "working on / looking for" channel with a pinned template. Members post when they have something relevant; the channel is searchable.
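For the weekly digest variant, compiling one-line updates into a circulated text is mechanical enough to automate. A minimal sketch, where the field layout and names are assumptions rather than a prescribed template:

```python
def compile_digest(updates):
    """Render one-line member updates as a plain-text weekly digest.

    `updates` maps a member's name to a (working_on, looking_for)
    pair. Sorting by name keeps the digest stable week to week.
    """
    lines = ["This week in the organization:"]
    for name in sorted(updates):
        working_on, looking_for = updates[name]
        lines.append(f"- {name} is working on {working_on}; looking for {looking_for}.")
    return "\n".join(lines)

# Example with two hypothetical members
print(compile_digest({
    "Priya": ("the volunteer onboarding guide", "someone who has run trainings"),
    "Marcus": ("a grant application", "a proofreader"),
}))
```

The design point the code makes concrete: the sender does not address anyone. The digest travels passively, and whoever needs the information recognizes it.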
Your information circulation design:
Design your Connection Facilitation System:
Describe how the organization will actively introduce people who should know each other. Specify:
- Whether you are designing a human role, a structural mechanism, or both
- If a human role: who fills it, what they actually do, how they learn what connections to make
- If a structural mechanism: how it works technically, how participation is encouraged, how it avoids becoming perfunctory
- How new member integration is handled to ensure new members become genuinely connected rather than peripheral
Effective connection facilitation designs:
- Connector role formalization: Identify the informal connector who already exists in most organizations, give the role explicit recognition and a small resource allocation (time, authority to make introductions on behalf of the organization), and create a simple briefing mechanism so they know what members are working on.
- Structured random pairing: A monthly or bi-weekly random pairing system that assigns two members to a brief conversation. No agenda required. The structure provides the permission and the occasion; the conversation does the rest.
- Introduction template: A standard format for member-to-member introductions that includes why both parties would benefit, a suggested first topic, and a natural follow-up point.
Your connection facilitation design:
Design your Follow-Through Structure:
Describe the mechanism that converts a serendipitous encounter into an ongoing relationship or productive collaboration. Specify:
- The natural next step after an informal encounter (a shared channel, a recurring gathering, a follow-up format)
- How the organization captures and amplifies successful serendipitous outcomes
- What accountability exists for follow-through without it becoming burdensome
Effective follow-through designs:
- Interest-based channels: After a collision event, participants are invited to a shared channel organized around the topic of their conversation — a low-friction way to continue without requiring anyone to schedule another meeting.
- Monthly continuations: A recurring monthly gathering — coffee, lunch, a virtual drop-in — that functions as the natural continuation point for conversations that began at other events.
- Outcome capture: A simple form or channel where members can briefly report a valuable connection or collaboration that emerged from an organization-facilitated encounter. Serves both to amplify successful outcomes and to give the organization data on what is working.
Your follow-through design:
Part 4: Implement at Least One Component
This is the requirement that distinguishes this capstone from a design exercise: you must actually implement a minimum of one element of your serendipity engine and document what happened.
The implementation does not need to be large. It does not need to succeed. It needs to be real — actually run, with actual participants, producing actual observations you can analyze.
Choosing which component to implement:
Implement the component that is:
- Most clearly within your authority and access to run (you do not need approval you are unlikely to get in the project timeframe)
- Most likely to produce observable results within 2-4 weeks
- Most interesting to you as a design and facilitation challenge
Most students implement a collision event (easiest to organize, most immediately observable) or an information circulation system (easy to set up, produces data quickly). Connection facilitation is the most impactful but requires the most existing relationship capital within the organization to execute well. Follow-through structures are best added to an existing event or practice rather than run standalone.
Before the implementation:
Document what you predict will happen:
- How many people will participate?
- What kinds of connections do you expect to emerge?
- What will not go as planned?
- What is your lowest acceptable outcome that would still constitute useful evidence?
During the implementation:
Keep a simple implementation log with these fields:
| Date/Instance | What happened | Who participated | Notable connections or exchanges | Participant feedback (brief) | Departures from the design plan |
|---|---|---|---|---|---|
Make specific notes immediately after each instance. Memory degrades fast; notes taken within 24 hours are substantially more accurate than notes taken a week later.
Documenting who met whom:
If you ran a collision event, record (with appropriate anonymity if relevant):
- How many pairs or groups formed?
- How many of those were cross-group encounters (people from different clusters who would not normally interact)?
- For any cases you can describe, what was the topic of conversation, and did it produce a follow-up plan?
Documenting contributions and connections:
If you ran an information circulation system:
- How many members contributed?
- Were there any cases where one member's contribution prompted another member to reach out? Document these specifically.
- What was the participation rate compared to your prediction?
Part 5: Analysis and Iteration
After the implementation, analyze what happened with honesty and precision.
What happened versus what was expected:
| Prediction | What Actually Happened | Gap (if any) | Most Plausible Explanation |
|---|---|---|---|
Serendipity rate calculation:
Serendipity rate = (instances that produced a meaningful, unexpected connection or exchange) / (total instances of the intervention)
For example: if you ran a random-pairs coffee event with 10 pairs, and 3 of those pairs reported a conversation that produced something they would not have had otherwise, your serendipity rate is 30%. Even a 20-30% serendipity rate is meaningful at scale.
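The calculation is a single ratio, but a small helper makes the bookkeeping explicit and guards against an empty denominator (the function and variable names here are illustrative):

```python
def serendipity_rate(meaningful_instances, total_instances):
    """Fraction of intervention instances that produced a meaningful,
    unexpected connection or exchange."""
    if total_instances <= 0:
        raise ValueError("total_instances must be positive")
    return meaningful_instances / total_instances

# The worked example from the text: 3 productive pairs out of 10
rate = serendipity_rate(3, 10)
print(f"{rate:.0%}")  # prints "30%"
```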
What would you change in the design?
Be specific. Not "I would make it better" but "I would change X because the observation showed that Y was not working as expected for the following reason."
What does this tell you about engineering luck at scale?
The most important question in this analysis is the one that bridges the individual to the organizational: what did running this intervention teach you about the difference between engineering luck for yourself and engineering it for a group of people who did not choose to participate in an experiment?
Part 6: Presentation
Prepare a 5-minute presentation covering your context, audit findings, design choices, implementation results, and lessons learned. The presentation is for an audience who has not read this book and has not seen your work — you need to make the case from first principles.
Slide Structure (5 slides)
Slide 1: The Context and the Diagnosis
What organization did you study? What is its purpose and structure? What did your audit reveal — specifically, which components of the serendipity engine are weakest and what did you observe that led you to that conclusion?
Present specific evidence, not impressions. "I observed five meetings and found that 90% of cross-group interaction occurred in the five minutes before and after, and zero cross-group projects were initiated in the past quarter" is useful. "The organization lacks connection" is not.
Slide 2: The Design
What serendipity engine did you design? Describe each of the four components briefly, with emphasis on the component(s) you implemented. Why did you make the specific design choices you made — what in the theory or the audit led to those choices?
Slide 3: The Implementation
What did you actually run? What happened? Present your implementation log summary: participation rates, notable connections, departures from the design plan. Be honest about what did not go as planned.
Slide 4: The Results
What did the implementation produce? Calculate and present your serendipity rate. What connections or exchanges occurred that would not have otherwise? What was the participant response? What evidence do you have that the component achieved its intended mechanism?
Slide 5: Lessons and Next Steps
What did this project teach you about engineering luck at the organizational level that you did not know before? What would you do differently in a second iteration? What is your recommendation to the organization — continue, expand, modify, or abandon — and why?
Research Connections
This capstone is grounded in the following bodies of research, all covered in the book. Where relevant, cite the specific chapters or studies.
Cunha et al. on Organizational Serendipity (Chapter 27): Research demonstrating that serendipitous outcomes in organizations are not purely random but are enabled by specific organizational conditions — psychological safety, information richness, and a culture that recognizes and follows up on unexpected discoveries. The audit dimensions in this capstone draw directly from this framework.
Obstfeld's Brokerage Research (Chapter 21): Research on the "tertius iungens" (third who joins) role — the organizational actor who bridges disconnected clusters by introducing people who should know each other. The connection facilitation component of the serendipity engine is a formalization of this role.
Google's Project Aristotle on Team Dynamics (Chapter 24): Google's internal research on what makes teams effective identified psychological safety — the sense that members can take interpersonal risks without punishment — as the single most important factor. The psychological safety dimension is directly relevant to whether a serendipity engine will generate genuine, productive encounters or superficial ones.
MIT Media Lab Serendipity Design (Chapter 25): The research tradition going back to Building 20 and continuing through intentional workspace design studies demonstrating that physical and structural proximity substantially determines who interacts with whom — and that deliberate design of collision spaces can meaningfully change this distribution.
Dr. Yuki Tanaka's Comparative Organizational Research (Chapters 28-30): The book's own ongoing character-driven exploration of how organizations at different scales and in different sectors have deliberately cultivated or inadvertently suppressed luck. Her key finding — that luck engineering at the organizational level produces measurable performance advantages at five-year horizons — provides the theoretical anchor for this capstone.
Reflection Questions
Engage seriously with at least eight of these ten questions in your final synthesis.
- The audit asked you to observe the organization honestly rather than assess it based on how it describes itself or what it intends. What was the biggest gap between the organization's self-image and what you actually observed?
- Which of the four serendipity engine components was most severely underdeveloped in your organization? What structural or cultural factors do you think account for that gap?
- The design question is a systems design question: how do you create conditions that generate emergent lucky outcomes without trying to engineer specific connections? What is the relationship between design and emergence in your implementation?
- When you implemented your component, what did you have to let go of — what did you have to allow to happen without trying to control it? How did that feel, and what did it teach you?
- The serendipity rate you calculated reflects only the connections and exchanges you could observe and measure. What do you suspect happened that you could not observe — what longer-term effects might your implementation have set in motion?
- Dr. Tanaka's research asked who in the organization has the most access to serendipitous encounters and whether that access is equitably distributed. In your organization, who gets the most informal interaction and unexpected connection — and who gets the least? Does your serendipity engine design improve or maintain that distribution?
- What resistance — internal, external, logistical, cultural — did you encounter in implementing your component? What does that resistance reveal about the organization's relationship to informal interaction and unexpected encounter?
- The implementation did not go exactly as planned. (It never does.) What specific departure from the plan produced the most interesting learning — not the most comfortable, the most interesting?
- The four-component serendipity engine framework assumes that organizations can be improved by adding structure to increase unstructured encounters. Is there a paradox here? How do you design for the undesigned?
- After completing this project, do you believe Dr. Tanaka's core claim: that organizations which deliberately cultivate serendipity outperform those that leave it to chance over meaningful time horizons? What evidence from your own implementation supports or complicates that claim?
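The serendipity rate referenced in these questions is not given a fixed formula in the text. One reasonable operationalization, assuming you logged every encounter your implemented component produced and marked which ones led to a concrete outcome (a follow-up conversation, a useful introduction, shared information), is the fraction of logged encounters with any outcome, keeping null outcomes in the denominator. A minimal sketch, with illustrative names and data that are not from the book:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Encounter:
    """One logged interaction produced by the implemented component."""
    participants: tuple       # who was involved
    outcome: Optional[str]    # concrete result, or None for a null outcome

def serendipity_rate(log: list) -> float:
    """Fraction of logged encounters that produced any concrete outcome.

    Null outcomes stay in the denominator, so an encounter that went
    nowhere lowers the rate rather than disappearing from it.
    """
    if not log:
        return 0.0
    productive = sum(1 for e in log if e.outcome)
    return productive / len(log)

# Example: 5 logged encounters, 2 with concrete outcomes
log = [
    Encounter(("A", "B"), "shared a dataset"),
    Encounter(("A", "C"), None),
    Encounter(("B", "D"), None),
    Encounter(("C", "D"), "follow-up meeting"),
    Encounter(("A", "D"), None),
]
print(serendipity_rate(log))  # -> 0.4
```

Counting the nulls is what "calculated honestly" means in practice: the denominator is every encounter the design generated, not just the ones that worked.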
Rubric for Self-Evaluation and Peer Evaluation
This capstone includes a peer-evaluation component if completed in a course or workshop setting. Rate yourself (and, optionally, rate one peer's work) across these six dimensions.
Dimension 1: Audit Depth and Honesty
Excellent: The audit reflects careful, specific observation of actual organizational behavior rather than impressions or assumptions; evidence cited for each component rating is specific and concrete; the gap between organizational self-image and actual observation is explicitly noted; the audit reveals things the student did not already know or assume before beginning.
Good: The audit is based on genuine observation; evidence is present but occasionally general; one or two components may rely more on assumption than observation.
Developing: The audit is primarily impressionistic; evidence is largely absent or vague; the findings align suspiciously well with initial assumptions, suggesting the observation was confirmation-seeking rather than genuinely exploratory.
Dimension 2: Design Quality and Theoretical Grounding
Excellent: Each component of the serendipity engine is specifically designed for the chosen organization's context rather than adapted generically; design choices are explicitly connected to the theoretical frameworks from the book; the mechanism of action — precisely how each design element increases the probability of serendipitous encounter — is stated clearly.
Good: Design is specific and contextually appropriate; theoretical grounding is present but not fully articulated for each component; mechanism of action is implied but not always explicit.
Developing: Design is generic (the same design could apply to any organization without modification); theoretical connections are absent or decorative; mechanism of action is not explained.
Dimension 3: Implementation Rigor
Excellent: At least one component was genuinely implemented with real participants; the implementation log captures specific, time-stamped observations rather than post-hoc summaries; departures from the design plan are noted honestly; the implementation produced real observations that could not have been generated through pure thought.
Good: Implementation occurred with real participants; log is reasonably specific; some departures from plan noted; observations are genuine.
Developing: Implementation was minimal, hypothetical, or conducted without genuine participation from others; the log consists primarily of plans rather than observations; what is described as implementation could have been imagined rather than run.
Dimension 4: Analytical Honesty
Excellent: The analysis clearly distinguishes between what the implementation produced and what the student believes it caused; the serendipity rate is calculated honestly including null outcomes; alternative explanations for positive results are considered; the "what would I change" section reflects genuine learning rather than retrospective self-congratulation.
Good: Analysis is generally honest; serendipity rate is calculated; some engagement with alternative explanations; revision recommendations are genuine.
Developing: Analysis primarily confirms the student's prior beliefs; null outcomes are minimized or absent; no engagement with alternative explanations; the "what would I change" section is cursory.
Dimension 5: Systems Thinking
Excellent: The student demonstrates understanding of the difference between designing inputs and controlling outputs — the implementation is designed to create conditions for emergent lucky outcomes rather than to engineer specific connections; the analysis reflects on what happened that was not planned for and what the unplanned outcomes reveal.
Good: Some evidence of systems thinking in design and analysis; the distinction between designed conditions and emergent outcomes is partially present.
Developing: The student treats the implementation primarily as a task to be executed correctly rather than a system to be designed and observed; no engagement with emergence or unplanned outcomes.
Dimension 6: Equity Consideration
Excellent: The student explicitly considers who in the organization benefits most from the current serendipity architecture and who benefits least; the design explicitly considers whether the serendipity engine improves or perpetuates existing access inequalities; this consideration is integrated into the design choices, not appended as an afterthought.
Good: The equity question is raised in the reflection section; some design consideration of who is most and least likely to benefit.
Developing: The equity dimension is absent from the analysis and design; the serendipity engine is designed purely for efficiency of encounter without consideration of distribution.
Character Connection: Dr. Yuki Tanaka and the Institutional Luck Question
Dr. Yuki Tanaka did not begin her career asking questions about luck. She began it asking questions about innovation — specifically, why some research groups at universities and research institutes produced disproportionate breakthrough work while others, with comparable resources and comparable talent, produced significantly less. She expected the answer to be about leadership quality, resource allocation, or the inherent difficulty of the research problems.
The data told a different story.
What distinguished the high-output research groups from the lower-output ones was not primarily talent, funding, or problem selection. It was interaction structure. The high-output groups had higher rates of informal cross-domain interaction. They had more conversations that crossed project boundaries. Their members knew more about what their colleagues were working on in adjacent areas. When a member encountered an obstacle, they were more likely to find that someone two doors down had encountered a related problem six months earlier and had information that was directly relevant.
The interaction structure was not accidental. In most cases, the high-output groups had specific, designable features that the lower-output groups lacked: a shared space where informal gathering happened naturally, a culture that made it normal to discuss work-in-progress openly, a connector figure who knew everyone's work and regularly made introductions across project lines. These features were not always intentional. Some had emerged organically. But they were consistent — and when Yuki began deliberately introducing them into groups that lacked them, she saw measurable changes within a single semester.
This is where the "organizations that engineer serendipity outperform those that leave it to chance" finding came from. It was not a philosophical claim about luck. It was an empirical finding about what distinguishes high-performing research groups from lower-performing ones, replicated across enough cases to be confident in the direction of the effect.
The capstone you are completing applies Yuki's research orientation to an organization you actually have access to. You are doing what she did: observing the existing structure carefully, identifying the specific features that are suppressing or enabling serendipitous encounter, and designing a targeted intervention to change the architecture.
The outcome of Yuki's research was not certainty about any single organization's trajectory. It was a framework — a way of seeing organizational dynamics that most practitioners were not using, applied to a question that most practitioners were not asking. That framework is what you now have.
The organization you are working with will not be transformed in four weeks. No organization is. But if your serendipity engine is well-designed, honestly implemented, and rigorously analyzed, it will produce evidence — real evidence, from real observation of real people — about what it takes to make a collective a little bit luckier. That evidence will be yours to use for the rest of your career, in every organization you are ever part of.
A Final Note on Scale and Ethics
This project asks you to engineer something that will affect other people — people who may not know they are participants in your design. That creates a responsibility worth naming directly.
A serendipity engine, well-designed, creates conditions for encounters that participants find valuable — conversations they are glad they had, connections they would not have made otherwise, information that turns out to be useful. At its best, it is a gift to the people it serves: a structural feature that generates lucky encounters for members who did not have to work to create them individually.
At its worst — if designed carelessly, or with the student's interests rather than the members' interests at the center — it can feel manipulative, can create social pressure that disadvantages already-disadvantaged members, or can generate encounters that are superficial rather than genuinely valuable.
The equity question in the rubric is not a formality. The luck research from Part 7 of this book makes this explicit: the structural conditions that enable serendipitous encounter are not equally distributed in most organizations. The people with the most informal access — to leadership, to senior members, to cross-group interaction — are usually the people who already have the most social capital. A serendipity engine that only optimizes for encounter rate without asking who it serves can easily reinforce rather than correct this pattern.
Design with that in mind. The best serendipity engines are the ones that generate lucky encounters not just for the most connected and confident members, but for the ones at the edges — the new members, the quieter voices, the people from communities underrepresented in the core network — who have the most to gain from a single unexpected introduction.
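One lightweight way to examine that distribution in your own implementation data is to count how often each member appears in the encounter log and summarize the spread with a simple concentration measure such as the Gini coefficient. This is a sketch under assumptions (that your log records the participants of each encounter; the names and helper functions here are illustrative, not from the book):

```python
from collections import Counter

def encounter_counts(log):
    """Count how many logged encounters each member appears in."""
    counts = Counter()
    for participants in log:
        for member in participants:
            counts[member] += 1
    return counts

def gini(values):
    """Gini coefficient of non-negative counts: 0.0 means perfectly even
    access; values near 1.0 mean access concentrated in a few members."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-based formula: sum of (2i - n - 1) * x_i over sorted values
    cum = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return cum / (n * total)

# Example: member "A" appears in most encounters; "D" sits at the edge
log = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
counts = encounter_counts(log)          # A: 3, B: 2, C: 2, D: 1
print(gini(counts.values()))            # -> 0.1875
```

Comparing this number before and after your implementation is one concrete way to answer the rubric's equity question: a design that only raises the encounter rate for the already-connected will raise the concentration measure, not lower it.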
That is what luck science in practice looks like when it is done well: not just engineering better outcomes for the already-advantaged, but using the architecture of connection to expand who gets to be lucky.
You are not just a luck recipient. You are, if you choose to be, a luck architect.
This is where the book ends, and where the work actually begins.