Learning Objectives
- Design knowledge transfer programs that capture implicit and tacit knowledge from retiring mainframers
- Create documentation strategies that preserve the 'why' not just the 'what'
- Build mentoring relationships that accelerate knowledge transfer
- Implement pair programming and shadowing programs for mainframe knowledge
- Create a knowledge transfer plan for your own organization
In This Chapter
- 40.1 The Knowledge Crisis: Demographics, the Retirement Wave, and What Gets Lost
- 40.2 Explicit vs. Tacit Knowledge: Why Documentation Alone Fails
- 40.3 Knowledge Transfer Methods: Pair Programming, Shadowing, Recorded Sessions, and War Room Stories
- 40.4 Documentation That Actually Works: Decision Logs, Architecture Decision Records, and Runbook Writing
- 40.5 Mentoring: Formal Programs, Reverse Mentoring, and Cross-Generational Teams
- 40.6 Building a Learning Organization: Communities of Practice, Brown Bags, and Internal Tech Conferences
- 40.7 Marcus's Checklist: A Personal Knowledge Transfer Plan
- Chapter Summary
Chapter 40 — Knowledge Transfer and Mentoring: Preserving 50 Years of Institutional Knowledge Before It Retires
"I've been here thirty-eight years. I know why the premium calculation has a special case for Alaska. I know why the batch schedule runs the claims job before the eligibility job, even though the dependency diagram says it shouldn't matter. I know which vendor contact to call at 2 AM when the MQ channel goes down. In two years, I'm going to retire. And if we don't do something about it, all of that walks out the door with me." — Marcus Whitfield, Senior Systems Architect, Federal Benefits Administration
There is a crisis in the mainframe world, and it is not technical.
It is not about aging hardware — IBM continues to release new generations of z Systems with remarkable performance improvements. It is not about the COBOL language — COBOL remains one of the most widely deployed programming languages on Earth, processing an estimated $3 trillion in daily transactions. It is not about relevance — the mainframe runs the world's banks, insurance companies, government agencies, and airlines as reliably as it ever has.
The crisis is about knowledge. The people who built, maintained, and evolved these systems over five decades are retiring. And the knowledge they carry — not just the technical knowledge documented in manuals and training courses, but the deep, contextual, experiential knowledge that makes systems actually work — is leaving with them.
This chapter is about preventing that loss. It is about the methods, practices, and organizational structures that can capture and transfer knowledge from one generation of mainframe professionals to the next. It is about mentoring, documentation, pair programming, communities of practice, and the deliberate, sustained effort required to preserve fifty years of institutional wisdom.
And it is about Marcus Whitfield, who has two years to transfer thirty-eight years of knowledge. His story is the story of an industry.
40.1 The Knowledge Crisis: Demographics, the Retirement Wave, and What Gets Lost
The Numbers
The demographic reality of the mainframe workforce is stark. According to industry surveys and workforce analyses conducted between 2020 and 2025:
- The average age of a mainframe professional in North America is approximately 55 years old.
- Approximately 40% of the mainframe workforce is eligible for retirement within the next five years.
- The pipeline of new mainframe talent — measured by university programs teaching mainframe skills, entry-level hires into mainframe roles, and apprenticeship completions — replaces fewer than 20% of anticipated retirements.
- At current trends, the mainframe skills gap will reach critical levels in most large organizations by 2030.
These numbers have been discussed at industry conferences for over a decade. What has changed is urgency: the retirements that were projected are now happening. The crisis that was theoretical is now operational.
At Federal Benefits Administration, Marcus Whitfield can name the numbers for his own organization:
"We have fourteen mainframe professionals. Average age is 57. Three are retiring this year, two more next year, and I'm in that second group. That's five of fourteen — more than a third — gone within 24 months. We've hired two new people in the last three years. Do the math."
What Gets Lost
When a senior mainframe professional retires, what exactly does the organization lose? The answer is more complex and more concerning than most managers realize.
Explicit Knowledge — The Easy Part
Explicit knowledge is the knowledge that can be written down: code, documentation, procedures, architecture diagrams, runbooks. This is the knowledge that organizations typically focus on preserving, because it is visible and tangible. When Marcus retires, his code will still be in the source library, his JCL will still be in the procedure library, and his architecture documents will still be on the shared drive.
But explicit knowledge, while necessary, is far from sufficient.
Tacit Knowledge — The Hard Part
Tacit knowledge is the knowledge that lives in a person's head — the expertise, intuition, judgment, and contextual understanding that comes from years of experience. It is the knowledge that is most difficult to articulate, most difficult to transfer, and most valuable to the organization.
Marcus's tacit knowledge includes:
Pattern recognition. When a batch job runs 15% slower than usual, Marcus knows to check the DB2 buffer pool statistics before investigating the COBOL code, because 80% of batch performance issues at Federal Benefits originate in the database layer. A newer developer would start with the code and waste hours.
Business context. Marcus knows that the claims processing system handles Alaska differently because of a 1994 regulatory exemption that was never formally documented in the requirements but was coded into the system by a developer who retired in 2003. Without that context, a future developer might "fix" the Alaska logic and create a compliance violation.
Relationship knowledge. Marcus knows which IBM support engineer to request when they open a PMR for a DB2 performance issue, because that engineer understands Federal Benefits' specific configuration. He knows which vendor contact has the authority to expedite a critical fix. He knows which business analyst in the claims division can explain the edge cases in the eligibility rules.
Troubleshooting heuristics. Over thirty-eight years, Marcus has developed a mental decision tree for diagnosing production issues that no runbook could capture. He knows the sound of different types of problems — "When the CICS region starts paging, it's always a storage leak in the claims inquiry transaction. Always. I don't need to look at the dump; I know which module to restart."
Historical context. Marcus knows why the system architecture looks the way it does — which decisions were technical, which were political, which were compromises, and which were mistakes. He knows that the batch schedule runs in a specific order not because of data dependencies but because of a capacity constraint that existed in 2007 on hardware that was replaced in 2012, but nobody ever resequenced the schedule because "it works."
Organizational Knowledge — The Invisible Part
Beyond technical knowledge, retiring professionals carry organizational knowledge that is even harder to preserve:
- How decisions actually get made (not the org chart, the real power structure)
- How to get things done (which approvals are actually required vs. pro forma)
- The history of failed initiatives (and why they failed — institutional memory that prevents repeating mistakes)
- Vendor relationship context (negotiation history, commitments, unwritten agreements)
- Regulatory interpretation (how auditors have historically interpreted ambiguous requirements)
Sandra Chen, now Enterprise Architect at Federal Benefits, describes the scope of the challenge:
"When we started the modernization initiative, I assumed the biggest risk was technical. It wasn't. The biggest risk was losing Marcus. Not because he's the best coder — he is, but we have other good coders. The risk is that Marcus is the only person who understands why the system works the way it does. He is the institutional memory of thirty-eight years of decisions, compromises, workarounds, and business rules. That knowledge is not in any document. It's in one person's head. And that person is retiring."
The Scale of the Problem: A Cross-Industry View
The knowledge crisis is not unique to Federal Benefits. It manifests across every industry that depends on mainframe technology:
Financial Services. Banks and investment firms run their core transaction processing, risk management, and regulatory reporting on mainframes. A 2024 survey by a major consulting firm found that 72% of large financial institutions rated "mainframe skills retention" as a top-five technology risk. One global bank estimated that the retirement of twelve senior mainframe professionals over three years would eliminate 400 combined years of institutional knowledge — knowledge embedded in systems that process $2.4 trillion in daily transactions.
Government. Federal, state, and local government agencies run benefits processing, tax collection, motor vehicle registration, and criminal justice systems on mainframes. These systems are subject to regulatory requirements that are embedded in code written decades ago and understood by a shrinking number of people. The Social Security Administration, the IRS, and numerous state agencies have publicly acknowledged mainframe skills gaps as critical risks.
Insurance. Policy administration, claims processing, and actuarial systems in the insurance industry are heavily mainframe-dependent. The insurance industry faces a double challenge: the mainframe professionals are retiring, and the business domain knowledge (insurance regulations, actuarial standards, legacy policy structures) that they carry is even harder to replace than the technical knowledge.
Healthcare. Hospital systems, health insurance processors, and pharmacy benefits managers run claims adjudication and eligibility verification on mainframes. The regulatory complexity of healthcare — HIPAA, ACA provisions, state-specific Medicaid rules — means that the knowledge embedded in these systems has both technical and legal dimensions.
The common thread across all these industries is that the knowledge at risk is not merely technical. It is the intersection of technology, business domain, regulation, and operational practice — a combination that takes decades to build and cannot be reproduced from documentation alone.
The Cost of Not Transferring Knowledge
Organizations that fail to transfer knowledge before retirements pay a steep price:
Increased incident frequency. Without the tacit knowledge of senior professionals, production incidents take longer to diagnose and resolve. A problem that Marcus can identify in ten minutes may take a new team member two days.
Regression errors. Changes to systems without understanding the historical context — the "why" behind the "what" — frequently introduce regressions. The Alaska claims processing example is not hypothetical; variations of this story play out in mainframe shops regularly.
Lost optimization opportunities. Senior professionals know where the inefficiencies are and how to address them. Without that knowledge, organizations run suboptimal workloads indefinitely.
Vendor leverage erosion. Without the institutional memory of vendor negotiations, organizations accept worse terms in renewals and lose the context needed to hold vendors accountable.
Decision-making degradation. Without historical context, new decision-makers repeat the mistakes of the past. Initiatives that were tried and abandoned for good reasons are proposed again, consuming resources before failing again.
Kwame Asante at CNB puts it bluntly:
"We lost a senior DB2 administrator three years ago — retirement, two weeks' notice, no knowledge transfer. Six months later, we had a performance crisis in the loan processing batch. It took three external consultants, five weeks, and $180,000 to diagnose a problem that Henry could have identified in an afternoon. That was the moment our CTO approved the formal knowledge transfer program."
40.2 Explicit vs. Tacit Knowledge: Why Documentation Alone Fails
The Documentation Illusion
Many organizations respond to the retirement wave with a documentation initiative: "Let's have our senior people document everything before they leave." This is well-intentioned and partially useful, but it fundamentally misunderstands the nature of expert knowledge.
The philosopher Michael Polanyi captured this in 1966 with his famous observation: "We can know more than we can tell." A master chef cannot fully document how to make a perfect sauce — the timing, the visual cues, the subtle adjustments based on the ingredient quality that day. Similarly, a senior mainframe professional cannot fully document their expertise in a wiki.
Consider Marcus's knowledge of batch performance troubleshooting. If you asked him to document it, he might write:
BATCH PERFORMANCE TROUBLESHOOTING
1. Check DB2 buffer pool statistics
2. Review SMF 30 records for CPU time
3. Check VSAM file statistics for CI/CA splits
4. Review JCL for parameter changes
5. Check system activity (other jobs, DASD contention)
This is accurate but nearly useless to a newcomer. What is missing is the judgment: When do you check each item? What do the numbers mean? How do you interpret anomalies? What are the common patterns? How do you distinguish a genuine performance problem from normal variation?
Marcus's actual troubleshooting process looks more like this:
"First, I look at the job elapsed time trend over the last 30 days — is this a sudden spike or a gradual degradation? That tells me whether it's likely an environmental change or a data growth issue. If it's sudden, I check whether anything changed in the system — a PTF was applied, another job was moved into the same time slot, DASD was reconfigured. I can usually rule out most possibilities in five minutes just by asking the right questions. Then I go to the DB2 statistics — but I'm not looking at the standard metrics. I'm looking at the ratio of getpages to synchronous reads. If that ratio changed, something happened to the buffer pools. If it didn't change, the DB2 layer is fine and I look elsewhere."
This deeper knowledge — the diagnostic reasoning, the pattern recognition, the prioritization of hypotheses — is tacit. It cannot be captured in a checklist. It must be transferred through experience, observation, and conversation.
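Only the most mechanical slice of Marcus's first step, distinguishing a sudden spike from a gradual degradation in the elapsed-time trend, can be encoded at all; the judgment about what to check next cannot. A minimal sketch in Python (the thresholds, function name, and classification labels are illustrative assumptions, not Federal Benefits' actual tooling):

```python
from statistics import mean, stdev

def classify_slowdown(elapsed_minutes, window=7):
    """Classify a batch slowdown: sudden spike vs. gradual degradation.

    elapsed_minutes: daily elapsed times in minutes, oldest first,
    with the run under investigation last. Thresholds are illustrative.
    """
    history, today = elapsed_minutes[:-1], elapsed_minutes[-1]
    baseline = history[-window:]          # the most recent "normal" runs
    mu, sigma = mean(baseline), stdev(baseline)
    # Sudden spike: today is far outside the recent baseline, so suspect
    # an environmental change (PTF, schedule move, DASD reconfiguration).
    if today > mu + max(3 * sigma, 0.10 * mu):
        return "sudden-spike"
    # Gradual degradation: the recent window has crept up versus older
    # history, so suspect data growth.
    older = history[:-window]
    if older and mu > 1.10 * mean(older):
        return "gradual-degradation"
    return "normal-variation"
```

A script like this answers only the first question Marcus asks; everything after the classification, which change to suspect, which statistics to pull, still requires the tacit knowledge the surrounding text describes.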
The Knowledge Spectrum
Rather than a binary distinction between explicit and tacit, knowledge exists on a spectrum:
FULLY EXPLICIT           PARTIALLY EXPLICIT            FULLY TACIT
(Can document)           (Can discuss)                 (Can only demonstrate)
|                        |                             |
Code                     Design rationale              Intuition
JCL                      Troubleshooting heuristics    Pattern recognition
Runbooks                 Business context              Judgment under pressure
Architecture docs        Historical decisions          Vendor relationship navigation
Data models              Risk assessment frameworks    Organizational navigation
Procedure manuals        Estimation methods            "Gut feel" about system health
Configuration files      Vendor knowledge
The further right you go on this spectrum, the more difficult knowledge is to transfer and the more valuable it tends to be. Effective knowledge transfer programs address the full spectrum, not just the left side.
Why Documentation Still Matters
None of this means documentation is unimportant. Good documentation is necessary — it provides the foundation that tacit knowledge builds upon. A new developer who has access to accurate, well-organized documentation can reach a baseline level of competence much faster than one who does not.
The key is understanding what documentation can and cannot do:
Documentation CAN:
- Record the "what" — system structure, configurations, procedures
- Record decisions and their stated rationale (ADRs)
- Provide a reference for standard operations
- Onboard new team members to a baseline level
- Serve as a safety net when tacit knowledge holders are unavailable

Documentation CANNOT:
- Transfer judgment and intuition
- Capture the full reasoning behind complex decisions
- Convey the "feel" of a healthy vs. unhealthy system
- Replace the mentoring relationship
- Stay current without sustained investment
The organizations that handle knowledge transfer best use documentation as one tool in a multi-pronged strategy, not as the entire strategy.
The Expertise Paradox
There is a cruel irony in knowledge transfer that deserves explicit acknowledgment: the more expert someone becomes, the harder it is for them to articulate what they know. This is known as the "expertise paradox" or "the curse of knowledge."
When Marcus looks at a CICS console and immediately knows that the region is under stress, he is not consciously analyzing each metric. He is recognizing a pattern — a combination of transaction rates, response times, storage usage, and queue depths — that his brain has learned to interpret as a gestalt over thirty-eight years. If you ask him to explain how he knows, he will try, but his explanation will inevitably be incomplete, because much of his recognition happens below the level of conscious articulation.
This is why simply asking experts to "write down what you know" produces disappointing results. They genuinely cannot write down everything they know, because they are not fully aware of everything they know. The knowledge is embedded in their neural pathways, not in their conscious memory.
Effective knowledge transfer methods work around this paradox by creating situations where tacit knowledge becomes visible — through pair programming (where the expert must narrate their process), through shadowing (where the observer notices things the expert does unconsciously), and through crisis response (where pattern recognition is exercised under pressure and can be analyzed afterward).
Dev Patel at Federal Benefits describes this phenomenon:
"I've been managing the batch schedule for thirty years. When someone asks me how I decide the order, I say 'it depends.' And it does — it depends on the data volumes, the day of the month, whether it's quarter-end, whether any special processing is running. But when I actually make the decision, it takes me about ten seconds. I just know. That 'just knowing' is thirty years of experience compressed into intuition, and I cannot decompose it back into rules any more than I could explain how I ride a bicycle."
40.3 Knowledge Transfer Methods: Pair Programming, Shadowing, Recorded Sessions, and War Room Stories
The Knowledge Transfer Toolbox
Effective knowledge transfer requires multiple methods, each suited to different types of knowledge on the spectrum. No single method is sufficient; the best programs use all of them in combination.
Pair Programming: Learning by Doing Together
Pair programming — where two developers work together at one workstation, one driving and one observing/advising — is the single most effective method for transferring tacit technical knowledge. When a senior developer and a junior developer pair on real production work, the junior developer absorbs not just the keystrokes but the reasoning behind them.
How to structure mainframe pair programming:
Session length: 2–4 hours. Longer sessions cause fatigue; shorter sessions do not allow enough depth.
Role rotation: The senior developer drives for the first hour while explaining their thought process aloud. Then they switch: the junior developer drives while the senior developer guides. This second phase is where the deepest learning happens — the junior developer must make decisions, and the senior developer's corrections reveal tacit knowledge that would otherwise remain hidden.
Scope: Real work, not exercises. Knowledge transfer is most effective when it occurs in the context of actual production tasks — debugging a real problem, implementing a real change, analyzing real performance data.
Documentation: After each session, the junior developer writes a brief summary of what they learned — not a transcript, but the key insights and heuristics they observed. The senior developer reviews and corrects. Over time, these summaries become a valuable knowledge base.
Marcus and his pair programming partner, Kai Nakamura (a developer with three years of mainframe experience), describe their sessions:
Marcus: "Kai and I pair every Tuesday and Thursday morning. Last week, we were debugging a batch job that failed with a SOC7 in the claims processing module. I could have fixed it in twenty minutes, but that's not the point. Instead, I walked through my diagnostic process out loud. 'Okay, SOC7 means bad data — either an invalid numeric field or a bad pointer. In this program, it's almost always an invalid numeric field in the incoming claims file, because our trading partners sometimes send garbage in the dollar amount field. Let's look at the input record...' Kai learns more from watching me think than from any documentation I could write."
Kai: "The most valuable thing about pairing with Marcus isn't the technical knowledge — I can get that from manuals. It's the shortcuts. He knows which problems are common and which are rare. He knows which logs to check first. He has twenty mental shortcuts that save hours of investigation. You can't learn those from a book. You learn them by watching someone use them on real problems."
Shadowing: Learning by Observing
Shadowing is a broader form of pair programming where a junior professional observes a senior professional across all aspects of their role — not just coding, but meetings, phone calls, vendor interactions, and decision-making.
Shadowing is particularly effective for:
- Organizational knowledge (how decisions are made, who to contact)
- Vendor relationship management (how to negotiate, what leverage exists)
- Crisis management (how to lead a production incident response)
- Stakeholder communication (how to present to different audiences)
Sandra Chen's shadowing of Marcus Whitfield during her transition to architect (described in Chapter 39, Case Study 2) is a model for this approach. She spent four weeks observing every aspect of Marcus's day, keeping a notebook of questions for weekly review sessions.
How to structure shadowing:
Duration: Minimum two weeks for targeted skill transfer; four or more weeks for comprehensive knowledge transfer.
Active observation: The shadow should not be passive. They should take notes, ask questions (at appropriate moments), and reflect on what they observe.
Debrief sessions: Schedule regular (daily or weekly) debriefs where the shadow asks questions and the mentor explains the reasoning behind their actions.
Gradual transition: After the observation phase, move to a guided practice phase where the shadow takes on tasks with the mentor available as backup.
The critical detail about shadowing: The value of shadowing is often in the things the senior professional does not realize they are demonstrating. When Marcus picks up the phone to call a specific person at IBM instead of opening a PMR through the standard channel, he is demonstrating relationship knowledge that he might never think to explain. When Diane at Pinnacle Health glances at the batch console and says "that looks fine" without pausing, she is demonstrating a pattern recognition ability that she would struggle to articulate. The shadow's job is to notice these moments and ask about them — "Why did you call that person directly instead of opening a ticket?" "What specifically did you look at on that console that told you everything was fine?" These questions surface tacit knowledge that would otherwise remain invisible.
Recorded Sessions: Capturing the Narrative
Video or audio recording of experienced professionals explaining systems, procedures, and historical context creates a persistent knowledge asset. Unlike written documentation, recorded sessions capture the narrative quality of tacit knowledge — the stories, the digressions, the "oh, and by the way" insights that would never make it into a formal document.
Effective recorded session formats:
System walkthroughs: The senior professional walks through a system or subsystem, explaining each component, its purpose, its quirks, and its history. These recordings are especially valuable because they capture the kind of detail that is too granular for architecture documents but too important to lose.
War stories: Structured interviews where the senior professional recounts significant incidents — outages, migrations, performance crises — and explains how they were resolved and what was learned. These stories contain distilled wisdom that transfers both technical knowledge and judgment.
Decision histories: Recordings of the senior professional explaining why key architectural and design decisions were made. "Why does the batch schedule run in this order?" "Why is this table denormalized?" "Why does this program have a special case for transactions before 1997?" The answers to these questions are often the most critical knowledge at risk of loss.
At SecureFirst Insurance, Yuki Tanaka implemented a recording program she calls "Knowledge Tapes":
"We set up a simple recording studio — a quiet conference room with a decent microphone and a screen capture tool. Every week, one of our senior developers spends an hour recording a 'Knowledge Tape' on a specific topic. Carlos recorded one on the insurance rating engine — ninety minutes of him explaining every subroutine, every business rule, every edge case. That recording has been viewed 47 times by our newer developers. It's the single most accessed resource in our internal knowledge base."
War Room Stories: Learning from Crisis
Some of the most important knowledge in any organization is embedded in the stories of past crises — the production outages, the near-misses, the problems that were solved under extreme pressure. This knowledge is inherently narrative: it is best transmitted as stories, not as procedures.
How to capture war room stories:
Structured interviews: Ask senior professionals to recount their most memorable production incidents. Use a consistent structure: What happened? How was it detected? What was the initial hypothesis? How was it diagnosed? What was the resolution? What was learned?
Brown bag sessions: Schedule informal lunchtime sessions where senior professionals share war stories with the broader team. The informal setting encourages the kind of storytelling that formal documentation does not.
Incident post-mortem library: Collect written post-mortem reports from past incidents into a searchable library. Supplement these with narrative summaries that capture the human dimension — the decision-making process, the communication challenges, the moments of insight.
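The consistent interview structure above is exactly the kind of knowledge that can be made explicit. A minimal sketch of a searchable war-story library in Python (the field names and the `search` helper are illustrative assumptions, not an existing tool):

```python
from dataclasses import dataclass

@dataclass
class WarStory:
    """One incident, captured with the consistent interview structure."""
    title: str
    what_happened: str
    detection: str
    initial_hypothesis: str
    diagnosis: str
    resolution: str
    lessons: list  # distilled takeaways, one per entry

def search(stories, keyword):
    """Naive keyword search across every field of the library."""
    kw = keyword.lower()
    return [s for s in stories
            if any(kw in str(v).lower() for v in vars(s).values())]
```

Even a structure this simple pays off: when a new developer hits a lock-timeout problem, searching the library for "lock" surfaces the 2015 DB2 lock-escalation story and its lessons before they repeat the mistake.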
Rob Chen at CNB has been collecting war stories for five years:
"I have a collection of about sixty war stories from CNB's mainframe history — going back to a spectacular failure in 1997 where a JCL change brought down the entire ATM network on a Friday afternoon. Each story is a lesson. The 1997 story teaches you about change management and the danger of deploying on Fridays. A 2015 story teaches you about DB2 lock escalation and why you never run a batch update against a table that CICS is reading. New developers read these stories during onboarding, and they come away with thirty years of experience compressed into a few hours of reading."
40.4 Documentation That Actually Works: Decision Logs, Architecture Decision Records, and Runbook Writing
The Documentation Problem
Most technical documentation fails. It fails because it is written at the wrong level of abstraction (too high-level to be useful or too detailed to be maintained), because it is not kept current (documentation that was accurate two years ago is worse than no documentation, because it creates false confidence), or because it answers the wrong question (it describes what the system does but not why it does it that way).
Documentation that actually works for knowledge transfer has three characteristics:
- It captures the "why." Not just what the system does, but why it was designed that way. The reasoning behind decisions is more durable and more valuable than the decisions themselves, because it enables future developers to make good decisions when circumstances change.
- It is maintained as part of the work. Documentation that is a separate activity from development will always fall behind. Documentation that is embedded in the development process — ADRs written at decision time, runbooks updated at deployment time, comments written at coding time — stays current.
- It is discoverable. The best documentation in the world is useless if people cannot find it. A consistent structure, a clear naming convention, and a searchable repository are essential.
Architecture Decision Records (ADRs) for Knowledge Transfer
We introduced ADRs in Chapter 39 as an architect's portfolio artifact. Here we revisit them as a knowledge transfer tool.
ADRs are uniquely valuable for knowledge transfer because they capture the decision-making context that is otherwise lost when people leave. When a future developer asks, "Why does this system use MQ instead of direct database access?" the ADR provides not just the answer but the reasoning, the alternatives that were considered, and the constraints that shaped the decision.
ADR discipline for knowledge transfer:
Write ADRs for past decisions, not just future ones. If your senior professionals are retiring, have them write ADRs for the major design decisions they have been involved in — even decisions made years ago. The reasoning is still in their heads; capture it before they leave.
Include the rejected alternatives. Future developers need to know not just what you chose, but what you did not choose and why. This prevents them from proposing alternatives that were already evaluated and rejected for good reasons.
Link ADRs to code. Reference the specific programs, modules, and configurations that implement the decision. This makes the ADR discoverable from the code and vice versa.
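The linking can be automated cheaply. A sketch of a script that scans source members for ADR tags in comments and builds a cross-index, assuming a team convention of tagging code with identifiers like "ADR-0007" (the convention, function name, and module names are hypothetical):

```python
import re
from collections import defaultdict

ADR_TAG = re.compile(r"ADR-(\d{4})")

def index_adr_references(sources):
    """Build an index of ADR id -> modules that reference it.

    sources: dict mapping module name -> source text. Assumes teams
    tag code with comments like 'ADR-0007' (a convention, not a
    standard).
    """
    index = defaultdict(list)
    for module, text in sources.items():
        for adr_id in sorted(set(ADR_TAG.findall(text))):
            index[f"ADR-{adr_id}"].append(module)
    return dict(index)
```

Run against the source library on a schedule, the resulting index answers "which programs implement this decision?" in seconds, and flags ADRs that no code references anymore.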
Marcus has been writing retroactive ADRs for Federal Benefits' most critical design decisions:
"I spent two months writing ADRs for thirty decisions that I was involved in over the past two decades. Some of them, I'm the only person alive who knows the reasoning. Why does the claims processing run in three phases instead of one? Because in 2004, we hit a GETMAIN limit in the CICS region and had to split the transaction. The hardware constraint is gone, but the three-phase design is still there because it actually turned out to be more maintainable. Without that ADR, someone might try to consolidate it and not understand why it was split in the first place."
Decision Logs
Decision logs are lighter-weight than ADRs — they record day-to-day operational and design decisions that do not warrant a full ADR but are still important to preserve.
Format:
DECISION LOG — Claims Processing Subsystem
============================================

DATE        DECISION                         RATIONALE                            DECIDED BY
2024-01-15  Added 30-sec timeout to          Vendor API occasionally hangs;       M. Whitfield
            vendor API call in CLMPROC       without timeout, CICS task           S. Chen
                                             accumulation causes region stress

2024-02-03  Changed DB2 ISOLATION from       Batch claims load was causing        M. Whitfield
            CS to UR for CLM_SUMMARY         lock timeouts on online inquiry.
            table read in CLMINQ             CLM_SUMMARY data is refreshed
                                             nightly; uncommitted read is
                                             acceptable for this table.

2024-03-22  Moved CLM_ARCHIVE purge          Old schedule conflicted with         K. Nakamura
            batch from Saturday 02:00        month-end processing. New window     M. Whitfield
            to Sunday 02:00                  avoids contention with MONTHEND      (reviewed)
                                             job stream.
Decision logs have two virtues: they are quick to write (one line per decision) and they accumulate into a searchable history of operational choices. Over months and years, they become an invaluable reference for understanding why the system is configured the way it is.
The key to making decision logs work is making them frictionless. If recording a decision takes more than five minutes, people will not do it. Kwame Asante at CNB integrated decision logging into their team's daily standup: "At the end of every standup, I ask: 'Did anyone make a configuration change, design decision, or operational adjustment yesterday that future us needs to know about?' If the answer is yes, the person adds it to the log before they leave the room. It takes sixty seconds. Over two years, we have accumulated over four hundred entries. That log has saved us from repeating mistakes at least a dozen times."
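A decision log only pays off if it is easy to query later. One low-overhead approach, assuming entries are stored one per line in a pipe-delimited text file (a hypothetical storage format chosen for this sketch, not the columnar layout shown above), is a few lines of Python:

```python
# Minimal decision-log search. Assumes entries are stored one per
# line, pipe-delimited as: date | decision | rationale | decided_by
# (a hypothetical format chosen for illustration).

def search_log(lines, keyword):
    """Return entries whose decision or rationale mentions keyword."""
    hits = []
    for line in lines:
        fields = [f.strip() for f in line.split("|")]
        if len(fields) != 4:
            continue  # skip blank or malformed lines
        date, decision, rationale, decided_by = fields
        if keyword.lower() in (decision + " " + rationale).lower():
            hits.append({"date": date, "decision": decision,
                         "rationale": rationale, "decided_by": decided_by})
    return hits

# Two sample entries based on the log excerpt in the text.
log = [
    "2024-01-15 | Added 30-sec timeout to vendor API call in CLMPROC"
    " | Vendor API occasionally hangs | M. Whitfield, S. Chen",
    "2024-02-03 | Changed DB2 ISOLATION from CS to UR for CLM_SUMMARY"
    " | Lock timeouts on online inquiry | M. Whitfield",
]

for hit in search_log(log, "timeout"):
    print(hit["date"], "-", hit["decision"])
```

The point is not the tooling; it is that a log kept in any consistent, machine-readable shape stays searchable as it grows past a few hundred entries.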
Runbooks That Preserve Knowledge
A runbook is a set of procedures for operating and troubleshooting a system. Good runbooks are one of the most important knowledge transfer artifacts because they capture operational knowledge that is otherwise entirely dependent on individual expertise.
The common failure: runbooks that describe happy paths only.
Most runbooks describe how to start a system, run a batch job, or perform a routine maintenance task. They do not describe what to do when things go wrong — which is precisely when runbooks are most needed and when the absence of senior expertise is most felt.
Effective runbooks include:
Normal operations: Step-by-step procedures for standard tasks. Include expected output at each step so the operator can verify they are on track.
Diagnostic procedures: When something is wrong, how do you determine what? Include decision trees: "If symptom A, check X. If symptom B, check Y." This is where tacit knowledge gets partially encoded.
Recovery procedures: For each type of failure, what are the recovery steps? Include rollback procedures, escalation contacts, and communication templates.
Historical context: For each procedure, a brief note on why it exists and any known quirks. "This restart procedure must be performed in this specific order because of a dependency between the CICS region and the MQ queue manager. If restarted out of order, the queue manager will hold messages until the CICS region is fully initialized, causing a backlog that can take hours to clear."
Troubleshooting heuristics: The experienced professional's mental shortcuts. "If the batch job CLMBATCH runs more than 20% over its normal elapsed time, the cause is almost always one of three things: (1) unexpectedly high input volume, (2) DB2 buffer pool contention from a concurrent job, or (3) DASD contention from the backup running late. Check in that order."
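Heuristics like this can also be partially encoded as automated checks, so that the expert's diagnostic order survives their departure. A minimal sketch in Python, with hypothetical threshold values and a hypothetical baseline (the real CLMBATCH figures would come from the shop's own monitoring):

```python
# Encode the CLMBATCH elapsed-time heuristic as an ordered checklist.
# The baseline and thresholds below are hypothetical, for illustration.

BASELINE_ELAPSED_MIN = 45.0  # assumed normal CLMBATCH elapsed time

def clmbatch_checklist(elapsed_min, input_records, baseline_records):
    """Return ordered diagnostic suggestions if the job ran long."""
    if elapsed_min <= BASELINE_ELAPSED_MIN * 1.2:
        return []  # within 20% of normal elapsed time: no action
    suggestions = []
    # Check 1: unexpectedly high input volume
    if input_records > baseline_records * 1.2:
        suggestions.append(
            f"High input volume: {input_records} vs baseline {baseline_records}")
    # Check 2: DB2 buffer pool contention from a concurrent job
    suggestions.append("Check DB2 buffer pool stats for concurrent jobs")
    # Check 3: DASD contention from the backup running late
    suggestions.append("Check whether the backup window overlapped the job")
    return suggestions

# A run 30% over baseline with normal input volume skips the volume
# finding but still surfaces the remaining checks in the expert's order.
print(clmbatch_checklist(60.0, 100_000, 100_000))
```

The code is deliberately trivial; the value is in forcing the expert's "check in that order" knowledge out of their head and into something reviewable.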
Ahmad at Pinnacle Health rewrote their entire runbook library as a knowledge transfer initiative:
"Our old runbooks were written in 2009 and never updated. They described a system that no longer existed. Diane and I spent three months rewriting every runbook with our senior developers. For each procedure, we asked: 'What do you do when this goes wrong?' and 'What's the thing you check that nobody else thinks to check?' The new runbooks are three times longer than the old ones, but they actually work."
40.5 Mentoring: Formal Programs, Reverse Mentoring, and Cross-Generational Teams
The Mentoring Imperative
If pair programming is the best method for transferring technical tacit knowledge, mentoring is the best method for transferring the broader set of professional knowledge — career navigation, organizational awareness, judgment development, and professional identity.
Mentoring in the mainframe context has an additional urgency: the demographic gap between senior and junior professionals is often two or three decades. A 60-year-old retiring architect and a 25-year-old new hire are separated by an entire generation of experience. Bridging that gap requires intentional, structured mentoring that goes beyond casual advice-giving.
Formal Mentoring Programs
Effective formal mentoring programs have several characteristics:
Structured matching. Pair mentors and mentees based on complementary skills and development needs, not just availability. Consider personality compatibility, communication styles, and geographic proximity (or commitment to virtual engagement).
Clear expectations. Define the frequency of meetings (minimum bi-weekly), the duration of the mentoring relationship (minimum six months, ideally a year), and the goals of the engagement.
Goal-setting. At the start of the relationship, mentor and mentee should agree on three to five specific goals. These might include: "Mentee will be able to independently diagnose batch performance issues" or "Mentee will lead an architecture review by month six."
Progress tracking. Regular check-ins (monthly or quarterly) with a program coordinator to assess progress, address challenges, and adjust goals.
Mentor training. Not every expert is a natural mentor. Providing training in coaching skills, active listening, and feedback delivery improves the quality of the mentoring relationship.
Organizational support. Mentoring takes time. Organizations must allocate that time explicitly — it is not something that happens "in addition to" regular work. If mentors and mentees are told to fit mentoring around their other responsibilities, it will not happen.
At CNB, Lisa Park manages the mainframe mentoring program:
"We pair every senior mainframe professional with at least one junior professional. The pairs meet for two hours every week — protected time, not optional. I track goals and progress quarterly. In three years, the program has developed eleven new mainframe-capable professionals from entry level to independent contributor. Without it, we would be facing a staffing crisis."
The Mentoring Anti-Patterns
Before discussing additional mentoring approaches, it is worth naming the patterns that cause mentoring to fail:
The "figure it out" anti-pattern. The senior professional is nominally a mentor but provides no structured guidance. "Just ask me if you have questions" is not mentoring — it places the entire burden of knowledge extraction on the person who does not know what questions to ask.
The "lecture" anti-pattern. The mentor talks at the mentee for an hour while the mentee takes notes. This feels productive but transfers very little tacit knowledge. The mentee learns facts they could have read in a document; they do not learn judgment.
The "sink or swim" anti-pattern. The mentee is thrown into a production support role without adequate preparation, on the theory that crisis builds capability. It does — for those who survive. But the failure rate is high, the stress is damaging, and the organization bears the cost of the mistakes the unprepared mentee makes.
The "no time" anti-pattern. The mentor and mentee both want the relationship to work, but operational pressures consume all available time. Mentoring is repeatedly deferred for "more urgent" work. This is the most common anti-pattern, and it is the most corrosive because it communicates that knowledge transfer is less important than everything else.
The antidote to all of these anti-patterns is organizational commitment — protected time, clear expectations, and accountability for knowledge transfer outcomes.
Reverse Mentoring
Reverse mentoring — where junior professionals mentor senior professionals — is a powerful complement to traditional mentoring. In the mainframe context, junior professionals often bring skills in modern technologies (cloud, containers, CI/CD, agile practices) that senior mainframers may lack.
The benefits of reverse mentoring extend beyond skill transfer:
- It builds mutual respect across generations
- It demonstrates that learning flows in both directions
- It helps senior professionals understand the perspectives and communication preferences of younger colleagues
- It gives junior professionals confidence and visibility
Kai Nakamura reverse-mentors Marcus Whitfield on Git, CI/CD pipelines, and modern testing practices:
"Marcus teaches me COBOL and mainframe operations. I teach him Git workflows and automated testing. We meet every other Wednesday for an hour. The first few sessions were awkward — Marcus is a legend in our organization, and I'm three years in. But once we started, the dynamic was surprisingly natural. He's curious and humble about what he doesn't know. I try to be the same about what he knows. We've both become better for it."
Cross-Generational Teams
The most sustainable approach to knowledge transfer is not a program but a team structure: cross-generational teams that mix experienced and newer professionals in the daily work of maintaining and evolving mainframe systems.
In a cross-generational team, knowledge transfer happens continuously and organically — not in scheduled sessions but in the normal course of collaboration. The senior developer who explains their approach while debugging a problem is transferring knowledge. The junior developer who suggests a modern testing approach is transferring knowledge in the other direction.
Effective cross-generational team design:
Mix experience levels deliberately. Avoid the temptation to create an "experienced team" and a "new hire team." Every team should include at least one senior professional (15+ years) and at least one newer professional (under 5 years).
Assign shared responsibilities. Both the senior and junior team members should be responsible for the same systems. This creates natural opportunities for knowledge transfer and ensures the junior professional is learning the systems they will eventually own.
Rotate assignments. Periodically rotate junior professionals across different systems and different senior mentors. This broadens their knowledge and prevents single points of dependency.
Celebrate both contributions. Recognize the value of both deep institutional knowledge and modern technical skills. A team culture that respects experience and values innovation in equal measure will retain both generations.
Diane Kowalski at Pinnacle Health restructured her team around this principle:
"We used to have a 'maintenance team' of senior developers who knew the legacy systems and a 'modernization team' of newer hires who worked on APIs and cloud integration. They rarely talked to each other. I dissolved both teams and created three cross-functional teams, each with a mix of experience levels and skill sets. The first month was chaotic. By the third month, the knowledge transfer was happening naturally — the senior developers were explaining business logic while the newer developers were teaching them about API design. By the sixth month, every team was more capable than either of the original teams."
40.6 Building a Learning Organization: Communities of Practice, Brown Bags, and Internal Tech Conferences
Beyond Individual Knowledge Transfer
Individual mentoring and pair programming are essential, but they do not scale. An organization with fifty mainframe professionals cannot rely on one-to-one knowledge transfer alone. It needs organizational structures that facilitate knowledge sharing across teams and across time.
Communities of Practice
A community of practice (CoP) is a group of professionals who share a domain of interest and meet regularly to share knowledge, discuss challenges, and develop best practices. In the mainframe context, communities of practice might be organized around:
- A technology (COBOL, DB2, CICS, z/OS system programming)
- A practice (performance tuning, security, DevOps)
- A domain (batch processing, online transactions, API integration)
- A cross-cutting concern (knowledge transfer, modernization, testing)
Effective communities of practice:
Regular cadence. Meet at least monthly. Bi-weekly is better for active communities.
Rotating presentations. Different members present at each meeting. This distributes the teaching burden and ensures diverse perspectives.
Practical focus. Discuss real problems, real solutions, and real lessons learned. Avoid the temptation to turn CoP meetings into lecture series.
Documentation. Capture key insights from each meeting in a shared repository. Over time, this becomes a valuable knowledge base.
Management support. Communities of practice need organizational support — time allocation, meeting space (physical or virtual), and recognition. They should not be treated as extracurricular activities.
At Federal Benefits, Sandra Chen established three communities of practice:
"We have a COBOL CoP that meets bi-weekly, a DB2 CoP that meets monthly, and a Modernization CoP that meets monthly. Each CoP has a rotating facilitator and a shared wiki for capturing insights. The COBOL CoP has been particularly valuable — we've cataloged over sixty 'tribal knowledge' items that were previously in individual developers' heads. Things like 'the SORT utility has a bug with RECFM=VB files over 32K' or 'the INSPECT CONVERTING verb is 40% faster than a PERFORM loop for character translation.' None of this is in any manual. All of it matters."
Brown Bag Sessions
Brown bag sessions — informal lunchtime presentations — are a low-overhead way to share knowledge across the organization. They work well for:
- Senior professionals sharing war stories and lessons learned
- Demonstrations of new tools or techniques
- Deep dives into specific subsystems or business domains
- Q&A sessions where newer professionals can ask anything
Keys to successful brown bag sessions:
Keep them short. 30–45 minutes, including Q&A.
Keep them informal. No polished slides required. A whiteboard, a live demo, or just a conversation is fine.
Record them. For remote team members and future reference.
Make attendance voluntary but encouraged. Mandatory attendance kills the informal atmosphere that makes brown bags effective.
Vary the topics and presenters. Do not let brown bags become one person's lecture series.
Internal Tech Conferences
For larger organizations, an annual internal tech conference — a day or half-day of presentations, workshops, and networking focused on the organization's technology — is a powerful knowledge sharing tool.
Benefits of internal tech conferences:
- They create a deadline for knowledge codification (presenters must organize their knowledge to present it)
- They raise the visibility of mainframe work within the broader organization
- They build cross-team relationships
- They give newer professionals presentation experience
- They signal organizational investment in learning and development
Yuki Tanaka at SecureFirst organizes an annual "Mainframe Summit":
"We hold a one-day internal conference every October. Last year, we had twelve presentations, three hands-on workshops, and a panel discussion on modernization. The highlight was Carlos presenting 'Twenty Years of Insurance Rating: What the Code Doesn't Tell You' — an hour of pure institutional knowledge about why our rating engine works the way it does. Three developers told me afterward that Carlos's session answered questions they'd had for years but never knew who to ask."
Knowledge Repositories
All of these activities — communities of practice, brown bags, tech conferences, pair programming notes, war stories, ADRs, decision logs, runbooks — generate knowledge artifacts. These artifacts need a home: a searchable, organized, maintained knowledge repository.
Characteristics of effective knowledge repositories:
Searchable. Full-text search across all artifact types.
Organized. Consistent structure — by system, by topic, by date, by type.
Maintained. Regular review to remove outdated content and flag gaps.
Accessible. Available to everyone who needs it, including remote workers and new hires.
Living. New content added regularly. A knowledge repository that stopped being updated six months ago is a dead repository.
The specific technology (wiki, SharePoint, Confluence, Git repository) matters less than the organizational discipline of using it consistently.
Measuring Knowledge Transfer Effectiveness
One of the most common failures of knowledge transfer programs is the absence of measurement. Organizations invest in mentoring, pair programming, and documentation but never assess whether the knowledge actually transferred.
Effective measurement approaches include:
The "could they do it alone?" test. Can the knowledge recipient perform the task independently, without consulting the knowledge holder? This is the most reliable measure of transfer effectiveness. It does not require perfection — it requires competence.
Incident resolution metrics. Track the time to resolve production incidents over time. If knowledge transfer is working, resolution times for the types of incidents covered by the program should decrease — perhaps not to the level of the departing expert, but significantly better than without the transfer.
"Why" audits. Periodically ask knowledge recipients to explain the rationale behind design decisions, operational procedures, and system configurations. If they can explain the "why" and not just the "what," the deeper knowledge is transferring.
Confidence surveys. Ask both the knowledge holder and the recipient to rate their confidence that the knowledge has transferred, on a scale of 1–10. If there is a significant gap between the holder's confidence and the recipient's confidence, investigate. Often the holder overestimates what they have conveyed and the recipient underestimates what they have absorbed.
Simulation exercises. Create realistic scenarios (simulated production incidents, hypothetical design decisions, mock vendor negotiations) and ask the knowledge recipient to handle them independently. Evaluate not just the outcome but the process — did they consider the right factors, consult the right resources, and arrive at a defensible answer?
40.7 Marcus's Checklist: A Personal Knowledge Transfer Plan
Two Years to Retire
Marcus Whitfield sits at his desk on a Monday morning in January. Outside, the DC winter is gray and cold. On his desk, next to his coffee and his thirty-eight-year-old IBM mug (a gift from his first manager, faded but unbroken), is a single sheet of paper with a heading he wrote over the weekend:
THINGS I KNOW THAT NOBODY ELSE KNOWS
He started the list on Saturday afternoon, sitting in his home office with his wife Eleanor reading in the next room. He expected to fill half a page. By Sunday evening, the list ran to four pages, front and back, in his small, precise handwriting.
The list includes things like:
- Why the CLMPROC batch job must run before CLMELIG (even though the dependency is not in the scheduler)
- The phone number for Jean-Pierre at IBM France, who maintains the DB2 APAR fix for the timestamp conversion issue that affects Federal Benefits' Canadian claims
- Why the VSAM KSDS for beneficiary records has an unusual CI size (8192 instead of the standard 4096) and what happens if someone changes it
- The meaning of the comments in program CLMX4470 that reference "the 1994 compromise" — a regulatory negotiation that resulted in a special processing path for certain disability claims
- Where the backup documentation is for the disaster recovery configuration that was set up in 2011 and has not been tested since 2019
- The reason the CICS transaction CLIQ takes 0.3 seconds longer than expected — a deliberate delay inserted in 2008 to prevent a race condition with the MQ listener
Four pages of knowledge. Four pages of things that exist only in Marcus's memory and, without deliberate effort, will vanish when he walks out the door for the last time.
Building the Plan
Marcus takes the list to Sandra Chen, his mentee-turned-manager, on Tuesday morning.
"I need help," he says. "I have two years. This is what I need to transfer."
Sandra looks at the list and feels the weight of it. "Marcus, this is... a lot."
"This is just what I thought of over one weekend. The real list is longer. I've been doing this for thirty-eight years. Some of this knowledge is in my hands — I can do things on the mainframe that I can't explain how I do. That's the hardest part to transfer."
They spend the morning building a structured plan. What emerges is a framework that any organization can adapt — and that Marcus calls, with characteristic understatement, "Marcus's Checklist."
The Checklist
============================================================
MARCUS WHITFIELD — KNOWLEDGE TRANSFER PLAN
Federal Benefits Administration
Timeline: January 2025 — December 2026 (24 months)
============================================================
PHASE 1: INVENTORY (Months 1–2)
--------------------------------
Goal: Identify and categorize all critical knowledge
Tasks:
[x] Complete "Things I Know That Nobody Else Knows" list
[x] Categorize each item:
- CRITICAL: System will fail or produce wrong results
without this knowledge (17 items)
- IMPORTANT: Significant efficiency loss or increased
risk without this knowledge (31 items)
- USEFUL: Helpful but not essential (24 items)
[x] Identify knowledge recipients for each item
[x] Identify the best transfer method for each item:
- Pair programming (hands-on technical skills)
- Recorded session (system walkthroughs, war stories)
- Documentation (procedures, configurations, ADRs)
- Shadowing (vendor relationships, organizational
navigation)
[x] Get Sandra's sign-off on priorities
PHASE 2: CRITICAL KNOWLEDGE TRANSFER (Months 3–10)
----------------------------------------------------
Goal: Transfer all 17 CRITICAL items
Tasks:
[ ] Write retroactive ADRs for 8 major design decisions
Target: 1 ADR per week for 8 weeks
Recipients: Kai Nakamura, Elena Rodriguez
[ ] Pair programming sessions on critical subsystems
- Claims processing batch (CLMBATCH, CLMPROC,
CLMELIG): 20 sessions with Kai
- CICS transaction processing (CLIQ, CLMX, BENI):
15 sessions with Elena
- DB2 performance monitoring and tuning:
10 sessions with Kai
Target: 2 sessions/week, starting month 3
[ ] Record 6 "Knowledge Tapes" on critical topics
- Claims processing end-to-end walkthrough (2 hours)
- Batch schedule dependencies and history (1 hour)
- DB2 configuration and tuning philosophy (1.5 hours)
- CICS region architecture and capacity planning (1 hour)
- Disaster recovery procedures and testing (1 hour)
- Vendor contacts and relationship history (1 hour)
Target: 1 recording per month, months 3–8
[ ] Update/create runbooks for all critical procedures
- Batch restart/recovery procedures (exists, needs
update)
- CICS region management (exists, needs major update)
- DB2 performance troubleshooting (does not exist)
- MQ channel management (exists, needs update)
- Disaster recovery procedures (exists, needs testing
and update)
Target: 1 runbook per month, months 3–8
PHASE 3: IMPORTANT KNOWLEDGE TRANSFER (Months 8–18)
-----------------------------------------------------
Goal: Transfer all 31 IMPORTANT items
Tasks:
[ ] Write decision log entries for 31 historical
decisions/configurations
Target: 2 per week for 16 weeks
[ ] Brown bag sessions on institutional knowledge
- "Why the System Works the Way It Does" series
(6 sessions covering major subsystems)
- "War Stories" series (4 sessions covering major
incidents and lessons learned)
Target: Monthly, months 8–18
[ ] Shadowing sessions for organizational knowledge
- Kai shadows Marcus in vendor meetings (all
meetings for 6 months)
- Elena shadows Marcus in architecture reviews
(all reviews for 6 months)
- Both shadow Marcus during annual audit preparation
(2 weeks)
[ ] Gradual responsibility transfer
- Kai takes over DB2 performance monitoring with
Marcus as backup (month 10)
- Elena takes over CICS capacity planning with
Marcus as backup (month 12)
- Kai leads first batch restart/recovery without
Marcus present — Marcus available by phone
(month 14)
PHASE 4: USEFUL KNOWLEDGE AND VALIDATION (Months 18–22)
---------------------------------------------------------
Goal: Transfer remaining knowledge; validate readiness
Tasks:
[ ] Transfer remaining 24 USEFUL items through
documentation and informal sessions
[ ] Validate knowledge transfer effectiveness:
- Kai handles a simulated batch failure independently
- Elena leads an architecture review independently
- Both can explain the "why" behind critical design
decisions without referencing documentation
- Both have established vendor relationships
[ ] Address gaps identified during validation
PHASE 5: GRACEFUL EXIT (Months 22–24)
---------------------------------------
Goal: Final handoff and continued availability
Tasks:
[ ] Marcus moves to advisory role (available for
questions but not responsible for delivery)
[ ] Kai and Elena handle all operational responsibilities
[ ] Marcus available for "office hours" — 2 hours/week
of Q&A for any team member
[ ] Final review: are there remaining knowledge gaps?
[ ] Post-retirement: Marcus available as consultant
for 6 months at 4 hours/week (if organizational
budget allows)
The Emotional Dimension
Marcus finishes reviewing the plan with Sandra and is quiet for a moment.
"You know what's strange?" he says. "I've been looking forward to retirement for five years. Eleanor and I are going to travel. I'm going to restore the '67 Mustang that's been sitting in my garage since 2008. I'm going to read all the books I've been collecting. I'm ready to go."
He pauses.
"But writing this list — all the things I know, all the things that need to be transferred — it made me realize something. This system, these programs, these processes — they're not just code. They're the work of my life. Thirty-eight years of decisions, compromises, late nights, and small victories. When I walk out that door, a part of me stays in this system. And I want to make sure it's not lost."
Sandra does not say anything for a moment. She is thinking about the weight of what Marcus is describing — not just the knowledge, but the meaning. A career's worth of expertise is not just information to be transferred; it is identity to be honored.
"We won't let it be lost, Marcus," she says finally. "That's what this plan is for. And twenty years from now, when someone asks why the claims processing system works the way it does, the answer will still be there — because you put it there."
Marcus nods. He picks up his IBM mug — the one his first manager gave him all those years ago — and takes a sip of coffee that has gone cold.
"Then let's get started," he says.
What Marcus's Plan Teaches Us
Marcus's knowledge transfer plan illustrates several principles that any organization can apply:
1. Start with inventory. You cannot transfer knowledge you have not identified. The "Things I Know That Nobody Else Knows" exercise is deceptively simple and profoundly important. Every senior professional should complete one.
2. Prioritize ruthlessly. Not all knowledge is equally critical. Marcus's three-tier categorization (Critical, Important, Useful) ensures that the most essential knowledge is transferred first.
3. Use multiple methods. No single method works for all types of knowledge. Marcus's plan uses pair programming, recorded sessions, documentation, shadowing, and responsibility transfer because different knowledge requires different methods.
4. Allow enough time. Two years is the minimum for a comprehensive knowledge transfer from a deeply experienced professional. Many organizations give retirees two weeks. The difference in outcomes is predictable and catastrophic.
5. Validate the transfer. Knowledge transfer is not complete when the mentor has spoken; it is complete when the mentee can perform. Marcus's plan includes explicit validation steps — simulated failures, independent leadership of reviews — that test whether the knowledge actually transferred.
6. Plan for the transition period. The gradual responsibility transfer in Phase 3, with Marcus moving from lead to backup to advisor, is far more effective than an abrupt handoff. It gives the recipients confidence and gives Marcus the satisfaction of seeing his knowledge take root.
7. Honor the person, not just the knowledge. Marcus's plan is not just a knowledge extraction process. It is a recognition of a career — a thirty-eight-year contribution to systems that serve millions of people. The emotional dimension matters. A retirement without knowledge transfer is a loss; a retirement with knowledge transfer is a legacy.
Chapter Summary
The mainframe industry faces a knowledge crisis driven by the retirement of the generation that built and maintained its critical systems. This crisis is fundamentally about tacit knowledge — the expertise, intuition, judgment, and contextual understanding that cannot be captured in documentation alone.
Effective knowledge transfer requires a multi-method approach: pair programming for technical tacit knowledge, shadowing for organizational knowledge, recorded sessions for narrative knowledge, and documentation (ADRs, decision logs, runbooks) for explicit knowledge. No single method is sufficient; the best programs use all of them.
Mentoring — formal programs, reverse mentoring, and cross-generational teams — provides the relationship framework within which knowledge transfer occurs. Communities of practice, brown bag sessions, and internal tech conferences scale knowledge sharing beyond individual relationships.
Marcus Whitfield's knowledge transfer plan illustrates a practical, phased approach: inventory critical knowledge, prioritize by impact, use appropriate transfer methods, allow sufficient time, validate the transfer, and plan a graceful transition.
The organizations that take knowledge transfer seriously will preserve the institutional wisdom embedded in their mainframe systems. Those that do not will learn — painfully and expensively — what it costs to lose fifty years of expertise in a single retirement wave.
The knowledge is retiring. The question is whether you will let it walk out the door, or whether you will do the hard, patient, essential work of passing it on.
Marcus chose to pass it on. What will you choose?
This is the final technical chapter of Advanced COBOL Programming. The knowledge you have built across these forty chapters — from COBOL fundamentals to architectural leadership to knowledge transfer — is itself a form of institutional knowledge. Preserve it. Share it. Pass it on.