
Why Grinding Questions Beats Watching Videos: The Science Your PMP Study Plan Is Missing

don

There is a predictable moment in every PMP candidate's study journey. You've watched 40 hours of video content. You feel ready. You're not.

This isn't a failure of effort. It's a failure of method. And it's backed by decades of cognitive science with specific, uncomfortable numbers.


The Fluency Illusion: Why Watching Feels Like Learning

When a skilled instructor explains a concept, something misleading happens in your brain. The ideas flow in cleanly. The examples click. You follow along without friction. Your brain registers this smoothness as understanding — a psychological trap researchers call the fluency illusion.

In 2017, Carpenter and colleagues at Iowa State published a study in Metacognition and Learning where they manipulated how fluently instructors delivered lectures. Students who watched the fluent, polished lecturer rated their own learning significantly higher than those who watched a less smooth delivery.

Both groups then took the same test. Both groups scored approximately 25% correct. The fluent-lecture group was wildly overconfident. The disfluent group was accurately calibrated. The smoothness of the lecture produced zero additional learning — just more confidence.

A 2014 study published in the Journal of Applied Research in Memory and Cognition found the same pattern specifically for video-recorded lectures: students were systematically overconfident about how much they'd learned, cut short their follow-up study as a result, and then underperformed on the actual test.

The problem is structural: recognizing an idea when an expert explains it is a fundamentally different cognitive task than retrieving that idea under exam pressure. Videos train the former. The PMP exam tests the latter.


The Forgetting Curve Is Steeper Than You Think

In 1885, Hermann Ebbinghaus memorized lists of syllables, waited, and measured what remained. His discovery — replicated in 2015 by Murre & Dros in PLOS ONE across 70 hours of controlled trials — is brutal:

Time after learning    Retention
20 minutes             ~58%
1 hour                 ~44%
9 hours                ~36%
1 day                  ~21–34%
1 week                 ~21%
1 month                ~12–18%

The curve is exponential and front-loaded — most forgetting happens in the first 24 hours. A full day of video lectures becomes roughly a fifth of what you think it is by exam day. Without active reinforcement, the question isn't whether you'll forget. It's how much.
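The shape of the curve is often modeled as a simple exponential, R = e^(−t/s), where s is a memory "stability" constant. A minimal sketch of why forgetting is front-loaded — the stability value here is illustrative, not fitted to Ebbinghaus's data:

```python
import math

def retention(hours, stability=24.0):
    """Ebbinghaus-style exponential decay: R = exp(-t / s).
    `stability` (in hours) is an illustrative parameter, not a fitted value."""
    return math.exp(-hours / stability)

# Forgetting is front-loaded: the first 24 hours cost more
# than the entire following six days combined.
day1_loss = 1 - retention(24)
rest_of_week_loss = retention(24) - retention(24 * 7)
```

Under this toy model, whatever the exact stability constant, the loss in day one dwarfs the loss over the rest of the week — which is why reinforcement timing matters more than total hours.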


The Testing Effect: The Most Replicated Finding in Educational Psychology

In 2006, Henry Roediger III and Jeffrey Karpicke at Washington University published what would become one of the most cited findings in learning science: the testing effect, also called retrieval practice.

The setup: students studied prose passages and then either re-read them repeatedly or took practice recall tests on them. One week later:

  • Re-study group: retained approximately 40% of the material
  • Practice test group: retained approximately 56% of the material
  • Net gain: a 16-percentage-point advantage — roughly 40% more material retained through testing

But the more revealing number comes from the two-day forgetting rate within that same study:

  • Repeated re-study group: forgot 56% of what they could initially recall
  • Repeated testing group: forgot only 13%

Testing didn't just produce better retention. It produced memory that was structurally more resistant to decay.

In 2011, Karpicke and Blunt published in Science — one of the two most prestigious scientific journals on Earth — a comparison of retrieval practice against elaborative concept mapping, one of the most sophisticated study techniques taught in schools. One week later, the retrieval practice group outperformed concept mapping by 1.5 standard deviations. The advantage held on inferential questions — not just direct recall.

The largest meta-analysis on this question, published in Review of Educational Research in 2017 (Adesope, Trevisan & Sundararajan), synthesized 272 independent effect sizes from 188 separate experiments:

  • Practice testing vs. re-studying: effect size d = +0.51
  • Practice testing vs. no activity at all: effect size d = +0.93

For context: in education research, an effect size of d = 0.40 is commonly treated as the threshold for a substantial intervention. Practice testing versus passive re-study clears that threshold; practice testing versus doing nothing more than doubles it.
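Cohen's d is simply the difference between group means divided by their pooled standard deviation. A quick sketch, using hypothetical score distributions (the numbers are made up to illustrate what d ≈ 0.5 looks like):

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Hypothetical exam scores: testing group averages 70%, re-study group 65%,
# both with a spread of 10 points. Half a standard deviation apart.
d = cohens_d(mean_a=70, sd_a=10, n_a=50, mean_b=65, sd_b=10, n_b=50)
```

In practical terms, d = 0.5 means the average member of the practice-testing group outscores roughly 69% of the re-study group.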

In real classrooms, regular low-stakes quizzing produced 13–25% gains on summative unit examinations compared to non-quizzed material (Roediger, Agarwal et al., 2011, in actual middle-school science classes).


What 225 Studies Say About Lectures

In 2014, Scott Freeman and colleagues at the University of Washington published a meta-analysis of 225 studies in Proceedings of the National Academy of Sciences, comparing active learning to traditional lecturing across all STEM disciplines.

The results:

  • Active-learning students outperformed lecture students by 0.47 standard deviations on exams
  • The odds of failing were 1.95 times higher under traditional lecturing than under active learning
  • In absolute terms, students in lecture-only courses were about 55% more likely to fail (average failure rates of 33.8% vs. 21.8%)

This held across disciplines and class sizes. Lectures are not neutral. Relative to active, effortful practice, they produce measurably worse outcomes at scale.


What Students Actually Do (And Why It Doesn't Work)

In 2009, Karpicke, Butler, and Roediger surveyed 177 college students about their actual study habits. The most commonly reported strategy — used by 83.6% of students — was rereading notes or textbooks.

When given a forced choice between rereading and self-testing, most chose rereading. Self-testing only became popular when it was accompanied by the option to re-read afterward.

In 2013, Dunlosky and colleagues published the most comprehensive review of study techniques in Psychological Science in the Public Interest, evaluating 10 common strategies across 700+ studies. The verdict:

Strategy                   Utility rating
Practice testing           HIGH
Spaced practice            HIGH
Re-reading                 LOW
Highlighting/underlining   LOW
Summarizing                LOW

The strategy 83.6% of students prefer is rated LOW utility. The strategy they avoid is rated HIGH.


The Spacing Illusion: Why Cramming Feels Better

Nate Kornell at UCLA ran an experiment in 2009 where participants used spaced versus massed (back-to-back) flashcard practice. The outcome:

  • Spaced practice was more effective for 90% of participants
  • Yet after their study sessions, 72% of participants believed massing had been more effective

This is the spacing illusion. Cramming creates a sense of coverage and momentum. Spacing creates anxiety because more is forgotten between sessions. But that forgetting is precisely the mechanism that forces deeper encoding. The approach that feels better is almost never the approach that works better.
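One common way practice tools operationalize spacing is a Leitner system: each correct answer pushes an item into a box reviewed less frequently, and each miss sends it back to the daily box. A minimal sketch — the box count and interval values are illustrative, not any particular product's schedule:

```python
# Days until next review for each Leitner box (illustrative values).
INTERVALS = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}

def leitner_review(box, correct):
    """Move an item between Leitner boxes after a review.

    Correct -> promote one box (capped at the last box), stretching the
    gap before the next review. Incorrect -> back to box 1, reviewed daily.
    Returns (new_box, days_until_next_review).
    """
    box = min(box + 1, max(INTERVALS)) if correct else 1
    return box, INTERVALS[box]
```

The deliberately growing gaps are the point: each review happens just as the material starts to slip, forcing the effortful retrieval that cramming never demands.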


Interleaving: Why Mixed Practice Mirrors the Real Exam

Rohrer & Taylor (2007) had students learn four types of geometry problems in either blocked (one type per session) or interleaved (randomly mixed) practice. One week later:

Interleaving tripled test scores compared to blocked practice (effect size d = 1.34).

During practice, the blocked group performed better — a textbook case of what Robert Bjork calls desirable difficulties: the conditions that feel easier during study systematically produce worse delayed-test results than the conditions that feel harder.

This maps directly to the PMP exam. Video courses present material in clean, blocked segments: predictive, then agile, then hybrid. Your brain sorts each category neatly. Nothing feels hard. Then the exam scrambles everything. A predictive scheduling question is followed by an Agile ceremony question is followed by a stakeholder escalation scenario. The neatly organized knowledge from blocked video study doesn't transfer cleanly to the mixed, judgment-heavy format of the actual exam.

Grinding questions across all domains — randomly, without knowing what's coming next — trains exactly the pattern-recognition and contextual-application skills that the PMP tests.
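The contrast between blocked and interleaved ordering is easy to sketch. The domain names below are illustrative stand-ins for the PMP's People/Process/Business Environment split:

```python
import random

def blocked(questions_by_domain):
    """One domain at a time — how video courses sequence material."""
    order = []
    for domain, questions in questions_by_domain.items():
        order.extend((domain, q) for q in questions)
    return order

def interleaved(questions_by_domain, seed=None):
    """Randomly mixed across domains — how the exam presents questions."""
    pool = [(d, q) for d, qs in questions_by_domain.items() for q in qs]
    random.Random(seed).shuffle(pool)
    return pool

bank = {
    "People": ["conflict scenario", "team motivation"],
    "Process": ["schedule compression", "risk response"],
    "Business Environment": ["compliance change"],
}
blocked_order = blocked(bank)
mixed_order = interleaved(bank, seed=42)
```

Same questions either way — the only variable is ordering, which is exactly the variable Rohrer & Taylor manipulated.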


Desirable Difficulties: The Struggle Is the Mechanism

Robert Bjork at UCLA coined the term desirable difficulties in 1994. The framework: conditions that slow apparent performance during learning often optimize long-term retention and transfer. Conditions that make learning feel easy often undermine durability.

Bjork's critical distinction: performance ≠ learning. A student who just re-watched a lecture and scores 90% on an immediate quiz has demonstrated performance. A student who practiced retrieval three days ago and scores 70% has likely encoded far more durable knowledge.

The mechanism is not mysterious: effortful retrieval — pulling an answer from memory when it doesn't come easily — strengthens the neural pathway that stores it. Each successful hard retrieval makes the next retrieval slightly easier. Easy recognition during video watching produces no equivalent effect.


Cognitive Load and the Point of Expertise Reversal

John Sweller's Cognitive Load Theory (1988) draws a useful distinction. Passive watching minimizes cognitive load — information flows in easily. This is fine for complete novices who need a schema before they can engage with problems. But there's a point called the expertise reversal effect: once basic schemas are established, the worked example (the video) becomes redundant compared to problem-solving practice.

Past that point, watching is not just less efficient. It actively competes with the higher-load activity — practice — that would produce better encoding. Most PMP candidates have spent enough time in project environments that they are not novices. Their bottleneck is applied judgment, not vocabulary. Videos address vocabulary. Practice questions address judgment.


What This Means for Your PMP Prep

The research produces a clear directive: the majority of your study time should produce friction.

If you're cruising through a video feeling like everything makes sense, you're building fluency — not durability. Fluency collapses under exam conditions. If you're struggling through a practice question, checking your answer, reading the explanation, and thinking hard about why the right answer is right and the wrong answers are wrong — that struggle is the mechanism of retention.

This isn't about abandoning content entirely. A conceptual foundation matters, especially early. But the research is unambiguous: video content should be the scaffold, not the structure. You watch to build a schema. You grind to make it stick. The ratio most candidates use is inverted from what it should be.

The PMP pass rate is not 100%. A meaningful number of candidates who watch 60 hours of video and feel ready walk out of the exam surprised. The research above explains why.


The GanttGrind Approach

GanttGrind is built around the testing effect, spaced repetition, and domain interleaving. Every session adapts based on where your mastery actually is — not where it feels like it is. The platform tracks your difficulty-adjusted accuracy across all three domains and builds your readiness score from four signals: Bayesian mastery, topic coverage, difficulty-weighted accuracy, and recency.

When the readiness score says you're ready, that's not based on hours watched. It's based on what you can retrieve.

Start your practice session →


Sources: Roediger & Karpicke (2006, Psychological Science); Karpicke & Blunt (2011, Science); Adesope, Trevisan & Sundararajan (2017, Review of Educational Research, 272 effect sizes); Freeman et al. (2014, PNAS, 225 studies); Dunlosky et al. (2013, Psychological Science in the Public Interest); Karpicke, Butler & Roediger (2009); Kornell (2009, Applied Cognitive Psychology); Rohrer & Taylor (2007); Carpenter et al. (2017, Metacognition and Learning); Murre & Dros (2015, PLOS ONE); Bjork & Bjork (2011, 2020); Sweller (1988, Cognitive Science).