Case Study 20.1: The Physics Student Who Couldn't Solve Real Problems
The Setup
Maya is a second-year undergraduate studying physics. She has a 3.5 GPA. She is not struggling in the conventional sense — she passes exams, she completes problem sets, she is regarded by her professors as a solid, hardworking student.
But there is a problem, and it becomes visible when she enrolls in the laboratory course required for physics majors.
In the lab, things break in expected and unexpected ways. Experiments produce results that don't match the textbook. You have to interpret what's happening. You have to figure out which principles apply to a situation that isn't labeled.
Maya is adept at solving physics problems that come labeled. The lab is full of physics problems that aren't.
She fails the first lab report. Not fails as in gets a failing grade — fails as in produces an analysis that her instructor describes, diplomatically, as "technically proficient but conceptually undirected." She made correct calculations. She made them in service of the wrong framework, answering questions the data wasn't asking.
The Diagnostic Session
Maya goes to office hours, which in this case means going to see Professor Varga, who teaches the lab course and has seen this pattern before.
"Tell me about the rotational dynamics experiment," he says. They'd recently done a lab measuring the moment of inertia of various objects.
Maya describes the experiment and her analysis. She is fluent, accurate on the mechanics, and wrong about the conceptual framing. She has treated the experiment as a collection of measurements to be made and calculated, rather than as an investigation of a principle.
Professor Varga holds up a sheet with twenty physics problems on it — drawn from different chapters of the introductory textbook, different experimental setups, different contexts.
"Sort these into groups based on what they have in common," he says. "Don't solve them. Just tell me which ones are similar."
Maya begins sorting. She does it quickly and confidently.
When she's done, Professor Varga looks at the piles. "Tell me what makes these similar," he says, pointing to the largest pile.
"They're all inclined plane problems," Maya says. "There's always something on a ramp."
Professor Varga nods slowly. He points to a different pile. "And these?"
"Pulley problems. There's always at least one pulley."
He pauses. "Now look at these two problems" — he points to two from different piles — "and tell me what they have in common."
Maya looks at them. One involves a ball rolling down a ramp. One involves a skier decelerating to a stop on level snow. She looks at them for a long time. "They both... have objects in motion?"
"What principle governs both of them?"
Silence.
"Think about what's conserved," he prompts.
She gets it. "Energy. They're both conservation of energy problems."
"Yes," he says. "But you put them in different piles because one has a ramp and one doesn't."
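The common structure can be made explicit. Under the usual textbook assumptions (the ball rolls without slipping from height h; the skier is brought to rest by kinetic friction on level snow), both problems are the same energy-balance statement, just with different terms populated:

```latex
% Ball rolling down a ramp from height h (no slipping, no air drag):
% gravitational PE converts to translational + rotational KE.
m g h = \tfrac{1}{2} m v^2 + \tfrac{1}{2} I \omega^2

% Skier of mass m decelerating from speed v over distance d:
% translational KE is dissipated by kinetic friction.
\tfrac{1}{2} m v^2 = \mu_k m g d
```

Both are instances of "initial mechanical energy equals final mechanical energy plus energy dissipated"; the ramp and the skis only change which terms appear.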
The Diagnosis
Professor Varga explains what he's seeing, and it matches Chi et al.'s 1981 research precisely without either of them having read the paper.
Maya has been categorizing physics problems by their surface features — the equipment, the setup, the visible objects in the problem. This is how textbooks organize chapters: Chapter 6 is "rotational dynamics," which is helpfully illustrated with pulleys and rotating platforms. Chapter 9 is "energy," illustrated with ramps and springs.
The categorization strategy had worked because textbook problems come pre-sorted into chapters that tell you which concept to apply. It fails the moment problems aren't pre-sorted.
In the real world, including the physics lab, problems don't come with chapter labels. A moment of inertia measurement doesn't announce that it's a Chapter 6 problem. The student must recognize what type of problem it is from the physics of the situation — from the underlying principle — not from the setup.
Maya's mental categories are organized around the wrong features. She's built excellent pattern recognition for "what does a pulley problem look like?" She hasn't built the necessary pattern recognition for "what does a conservation of angular momentum situation look like, regardless of the specific equipment?"
The Restructuring
Professor Varga gives Maya a specific practice assignment, designed around the principle-first categorization approach.
The assignment: For each problem in the next three problem sets, before solving the problem, write one sentence: "This is a [PRINCIPLE] problem, where [PRINCIPLE] is the governing physical law, because [REASONING]."
Not "this is a pulley problem." "This is a Newton's second law problem, where the net force on each mass equals its mass times its acceleration, because all the objects are undergoing linear or rotational acceleration under the influence of known forces."
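For a standard setup of that kind (an illustrative sketch, not from the source: two hanging masses coupled over a pulley of moment of inertia I and radius R, with a rope that does not slip), the reasoning in that sentence cashes out as one equation per object plus a constraint:

```latex
% Newton's second law for each hanging mass, taking a as the common
% rope acceleration and T_1, T_2 as the tensions on either side
% (m_1 descending, m_2 rising):
m_1 g - T_1 = m_1 a
T_2 - m_2 g = m_2 a

% Rotational form of the same law applied to the pulley:
(T_1 - T_2) R = I \alpha

% No-slip constraint linking linear and angular acceleration:
a = R \alpha
```

The point of the written sentence is that this system of equations, not the pulley itself, is what the problems in that pile share.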
Maya resists this at first. It adds time to problem sets and feels redundant; she already knows what chapter the problem is from. That's the point, Professor Varga tells her: right now you're reading the chapter label. Learn to identify the principle without it.
She does the assignment grudgingly for the first week. Something changes in the second week. She finds herself looking at problems differently — not scanning for the familiar equipment setup, but asking "what's happening here? What principle is operating?" before noticing the equipment at all.
This is cognitive sequence reversal, and it's the core of the intervention.
The Lab Results
Four weeks into the assignment, Maya submits her third lab report.
It's different. The analysis frames the experiment correctly — she identifies the governing principle first, then organizes the measurements in terms of what they reveal about that principle. The calculations serve a conceptual purpose rather than existing as an end in themselves.
"She's learned to see the physics before she sees the equipment," Professor Varga says.
For the remainder of the semester, Maya's lab performance improves steadily. More importantly, her performance on novel problems — problems not drawn from the textbook, applications she hasn't seen — transforms. She scores in the top quartile on the course's "novel application" section, which requires applying physics to scenarios not covered in the curriculum.
This is far transfer. She's developed the deep structure categorization that enables it.
The Insight
Maya articulates what changed in a reflection paper: "Before, I knew physics as a collection of problem types. Now I know it as a collection of principles, and problem types are just instances. When I see a new problem, I don't ask 'what setup is this?' I ask 'what's happening?' The setup is just the costume the principle is wearing."
The metaphor is exact. Surface features are costumes — they change between contexts. Deep structure is the character — it remains consistent.
Transfer requires seeing through the costume to the character. This is a learnable skill, and it's exactly what the principle-first categorization practice develops.
The Broader Lesson
Maya's story is a physics story, but its structure applies to any domain where surface features vary while deep principles are conserved: medicine (the disease presents differently in different patients), law (the legal principle applies in different factual contexts), engineering (the failure mode appears in different system configurations), programming (the algorithm solves different problem types).
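The programming case can be made concrete with a small sketch (illustrative, not from the source): the deep principle "binary search over a monotone predicate" shows up under two very different surface descriptions, finding an insertion point in a sorted array and finding the slowest work rate that meets a deadline. The name `first_true` and the sample numbers are invented for illustration.

```python
def first_true(lo, hi, pred):
    """Return the smallest x in [lo, hi] with pred(x) True.

    Assumes pred is monotone over [lo, hi]: once it turns True it
    stays True, and pred(hi) is True. This is the deep structure;
    everything below is surface costume.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid          # mid works; the answer is at or below it
        else:
            lo = mid + 1      # mid fails; the answer is strictly above
    return lo

# Surface costume 1: "find where 4 belongs in a sorted list".
a = [1, 3, 5, 7]
insertion_point = first_true(0, len(a) - 1, lambda i: a[i] >= 4)
print(insertion_point)  # -> 2

# Surface costume 2: "what is the slowest eating speed that finishes
# these piles within 8 hours?" (hours per pile = ceil(pile / speed)).
piles = [3, 6, 7, 11]
min_speed = first_true(1, max(piles),
                       lambda s: sum(-(-p // s) for p in piles) <= 8)
print(min_speed)  # -> 4
```

Nothing in the second problem mentions arrays or searching; recognizing it as a monotone-predicate problem before reaching for any particular setup is exactly the principle-first move.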
In every case, novices tend to build surface-feature categories while experts build deep-structure categories. The question is whether your learning practices deliberately accelerate that transition or leave it to chance and accumulated experience.
Maya's intervention was deliberate and targeted. It took four weeks of consistent practice. It transferred immediately to genuinely novel applications.
The practice works.