How We Build Our PMP Exam Questions
These questions are independently developed and are not reproduced from, endorsed by, or affiliated with the Project Management Institute (PMI), the PMBOK® Guide, or any PMI publication or PMP review provider.
Experimental Software
GanttGrind is experimental software in active development. While every question goes through the multi-stage validation pipeline described below, no system is perfect. If you encounter an error, inconsistency, or anything that doesn’t look right, please use the Report Issue button on any question. Your flags directly improve the question bank for everyone.
“How do you ensure quality?”
Every question in the bank has survived a six-stage validation pipeline designed to catch errors before you ever see them. Technology drafts and validates at scale, but human experts set the standards and verify the results through statistical sampling.
We take accuracy seriously because we know the stakes. A bad question doesn’t just waste your time — it teaches you something wrong that you’ll carry into the real exam. Here’s how we prevent that.
The Six-Stage Pipeline
Stage 1: Grounded Drafting
Every question starts with the actual PMI Examination Content Outline (ECO) task it must test, plus a deep understanding of the project management principles behind it. The system studies the concepts to understand the judgment calls and trade-offs each task requires.
But — critical point — none of that source language appears in our questions. We use the standards to understand the concepts correctly, then write entirely original scenarios. Just like the real PMP exam: no PMBOK quotes, no section numbers, just scenarios testing judgment.
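To make the grounding concrete, here is a minimal sketch (in Python) of what a drafting request could look like before any scenario text exists. The field names and example values are illustrative, not our production schema:

```python
from dataclasses import dataclass, field

@dataclass
class DraftRequest:
    """Everything a draft is anchored to before any scenario text is written."""
    eco_domain: str      # e.g. "Process"
    eco_task: str        # e.g. "Manage project changes"
    approach: str        # "predictive", "agile", or "hybrid"
    difficulty: str      # "easy", "moderate", or "hard"
    concepts: list[str] = field(default_factory=list)  # principles the item must test

request = DraftRequest(
    eco_domain="Process",
    eco_task="Manage project changes",
    approach="hybrid",
    difficulty="moderate",
    concepts=["change control board", "impact analysis before approval"],
)
# The drafting model receives this structure plus a plain-language summary of the
# relevant principles -- never verbatim standard text -- and must invent an
# original scenario around it.
```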
Stage 2: Adversarial Validation
Every drafted question is immediately attacked by a separate AI reviewer acting as a senior exam instructor. It doesn’t know the “intended” answer — it evaluates cold and tries to break it. Seven checks:
- Is the correct answer actually correct? Not “does it sound right” — does the underlying project management concept and scenario context support it?
- Is any wrong answer accidentally correct? If any distractor could be right under reasonable interpretation, the question gets rejected.
- Is the right approach applied? Predictive vs. agile vs. hybrid: recommending an approach that doesn’t fit the scenario’s context is a serious error.
- Is situational judgment tested correctly? The best answer must be clearly best for this specific scenario, not just generically acceptable.
- Is the question complete? If the answer depends on unstated facts, it’s flawed.
- Does the scenario match the domain and task? Each question must test the specific ECO task it claims to cover.
- Is it copyright-clean? Any PMBOK quotes, ITTO tables, or reproduced PMI definitions = automatic rejection.
Failed questions are rejected with a reason. Questions with minor issues go to Stage 3 for final review.
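For the curious, here is a rough sketch of how those seven checks and the resulting routing could be expressed in code. The check names, severity levels, and route function are illustrative simplifications, not the actual reviewer:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    PASS = "pass"
    MINOR = "minor"   # cosmetic or borderline -> Stage 3 adjudication
    FATAL = "fatal"   # wrong answer, ambiguity, copyright -> reject

CHECKS = [
    "correct_answer_is_supported",
    "no_distractor_is_also_correct",
    "approach_matches_context",      # predictive vs. agile vs. hybrid
    "best_answer_is_clearly_best",
    "stem_contains_all_needed_facts",
    "matches_claimed_eco_task",
    "no_copyrighted_language",
]

@dataclass
class ReviewFinding:
    check: str
    severity: Severity
    reason: str

def route(findings: list[ReviewFinding]) -> str:
    """Reject on any fatal finding, escalate minor ones, otherwise pass."""
    if any(f.severity is Severity.FATAL for f in findings):
        return "rejected"
    if any(f.severity is Severity.MINOR for f in findings):
        return "stage_3_adjudication"
    return "approved"
```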
Stage 3: Final Adjudication
Flagged questions get a final review weighing severity. Cosmetic issues or defensible interpretations? Pass with minor correction. Substantive problems — wrong answer, ambiguous stem, copyright concern — permanent rejection. Every rejection and its reason are logged to prevent recurring errors.
Stage 4: Cross-Model Consensus
Every question that passes internal validation is sent, with no hints about the intended answer, to two independent AI systems from different companies. Their answers are compared against our intended answer, giving three votes per question:
- Unanimous (3/3): Both systems choose our intended answer. Ships.
- Majority (2/3): One system agrees with our answer, one dissents. We review the dissenting reasoning. Legitimate concern? Revise or remove.
- Two against one: Both systems agree on an answer that isn’t ours. Mandatory human review. Doesn’t ship until an expert confirms or corrects.
- Three-way split: Our answer and the two systems’ answers all differ. The question is genuinely ambiguous and gets rewritten or discarded.
Simple logic: if the best AI systems in the world can’t agree, the question tests ambiguity, not knowledge.
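The decision rule itself is small enough to sketch. Assuming two external model answers plus our intended answer (three votes total), the classification looks roughly like this; the function name and labels are illustrative:

```python
def consensus(intended: str, model_a: str, model_b: str) -> str:
    """Classify a question based on three answer votes."""
    votes = [intended, model_a, model_b]
    if model_a == intended and model_b == intended:
        return "unanimous: ship"                          # 3/3
    if model_a == model_b and model_a != intended:
        return "two against one: mandatory human review"
    if len(set(votes)) == 3:
        return "three-way split: rewrite or discard"
    return "majority: review the dissenting reasoning"    # 2/3

print(consensus("B", "B", "B"))  # unanimous: ship
print(consensus("B", "B", "C"))  # majority: review the dissenting reasoning
print(consensus("B", "C", "C"))  # two against one: mandatory human review
print(consensus("B", "C", "D"))  # three-way split: rewrite or discard
```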
Stage 5: Iterative Learning
The system learns from its mistakes. After every generation cycle, we analyze failure patterns — which question types get rejected most, which rules are misapplied, which conceptual errors recur. These patterns become explicit guardrails for the next cycle.
Example: if 15 questions incorrectly conflate risk mitigation with risk avoidance, the next cycle includes a specific warning against that exact error. The question bank gets measurably better with each iteration.
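A simplified sketch of that feedback loop: count recurring rejection reasons and turn anything that clears a threshold into an explicit instruction for the next cycle. The threshold and wording here are illustrative:

```python
from collections import Counter

def build_guardrails(rejections: list[dict], threshold: int = 5) -> list[str]:
    """Turn recurring rejection reasons into explicit instructions for the next cycle."""
    counts = Counter(r["error_pattern"] for r in rejections)
    return [
        f"Do not repeat this error (seen {n}x last cycle): {pattern}"
        for pattern, n in counts.most_common()
        if n >= threshold
    ]

rejections = [{"error_pattern": "conflates risk mitigation with risk avoidance"}] * 15
print(build_guardrails(rejections))
# ['Do not repeat this error (seen 15x last cycle): conflates risk mitigation with risk avoidance']
```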
Stage 6: Human Review & Statistical Sampling
The final gate is human expertise. Every question that gets flagged during cross-model disagreement or involves high-stakes topics (governance, risk, stakeholder management, hybrid methodologies) goes to a PMP-credentialed expert for review.
But we don’t stop there. We also use statistical sampling to verify that questions passing automated validation actually meet our quality standards. A representative sample of questions from each domain and difficulty level is randomly selected and reviewed by human experts — even if they passed all automated checks.
This sampling approach lets us measure quality at scale: if, say, 95% of sampled questions pass expert review, we can estimate, within a known margin of error, how well the broader question bank maintains that standard. When sample review finds issues, those patterns feed back into Stages 2-5 to prevent similar problems in future questions.
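For readers who want the math, here is roughly what that inference looks like, using a standard Wilson score interval for a binomial proportion. The sample size and pass count below are illustrative, not our actual review numbers:

```python
import math

def wilson_interval(passes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = passes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# If 190 of 200 randomly sampled questions pass expert review (95%), the
# bank-wide pass rate is plausibly somewhere in this range:
low, high = wilson_interval(190, 200)
print(f"{low:.1%} to {high:.1%}")  # roughly 91% to 97%
```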
Technology reduces the volume needing review — it doesn’t eliminate the need for expert judgment. Human reviewers are the final authority on whether a question tests the right concept correctly.
Every Question Is 100% Original
Direct statement: we do not copy, adapt, paraphrase, or reverse-engineer questions from any other PMP review provider (PrepCast, PocketPrep, PM Master Prep, PMI practice exams, anyone). We also do not copy, quote, or reproduce language from PMI publications — the PMBOK® Guide, Agile Practice Guide, Process Groups Practice Guide, and all PMI Standards are copyrighted works.
Our system studies these sources to make sure its understanding of project management principles is correct, the same way any PMP instructor studies the standards before writing a lecture. But the output is entirely our own: original scenarios, original language, original answer choices. No ITTO tables. No PMBOK quotes. Just like the real exam.
Every question goes through automated copyright compliance checks that reject PMBOK quotes, ITTO reproductions, or PMI-specific definitions. Enforced on every single question before it enters the bank.
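One common way to implement such a check, shown here as an illustrative sketch rather than our exact implementation, is to flag any long word-for-word overlap between a draft and a list of protected passages:

```python
def ngrams(text: str, n: int = 8) -> set[str]:
    """All runs of n consecutive words in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlaps_protected_text(draft: str, protected_passages: list[str], n: int = 8) -> bool:
    """True if the draft shares any n-word run with a protected passage."""
    # protected_passages is a placeholder input; we obviously cannot reproduce
    # actual PMBOK text here.
    draft_grams = ngrams(draft, n)
    return any(draft_grams & ngrams(passage, n) for passage in protected_passages)
```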
Why Our Questions Don’t Put You to Sleep
You won’t find “Project Manager Smith at ABC Corp” here. You’ll meet Maya Sandoval navigating a scope change at Velvet Hippo Brewing Company, or Dante Kowalski resolving a team conflict at Cosmic Pickle Fermentation Co.
This is intentional. Research shows distinctive, vivid scenarios are more memorable than generic ones. If a question about Thunderbolt Electric Bikes makes you crack a smile at 11pm, you’re more likely to remember the underlying stakeholder management concept when you sit for the real exam. The exam is serious. The scenarios don’t have to be.
What About Hallucination?
“Hallucination” in AI means generating content that sounds authoritative but is factually wrong — stating a rule that doesn’t exist, presenting a non-existent exception, getting a threshold wrong. We take this seriously.
Three layers of defense: (1) Grounding — the drafting system studies actual PM standards before writing, so it knows what the best practices are. (2) Multi-stage validation — even if an error slips through, the adversarial reviewer challenges the PM logic, and cross-model consensus would likely catch it (three independent systems are unlikely to fabricate the same wrong rule). (3) Pattern learning — if a conceptual error occurs once, the system logs it and prevents that category of error in all future questions.
No system is perfect. But three independent layers of defense, each catching what the others miss, is meaningfully more reliable than single-pass generation.
What This Means for You
Every question you see has passed through:
- Grounding in PMI standards to ensure conceptual accuracy
- Adversarial review checking for wrong answers, ambiguity, and copyright violations
- Adjudication of borderline cases to catch what automated checks miss
- Cross-validation by multiple independent systems to detect disagreement
- Pattern learning that prevents recurring errors across the question bank
- Statistical sampling and expert review by PMP-credentialed professionals
Human experts have reviewed a representative sample of questions, large enough to draw statistically meaningful conclusions, to validate that the quality pipeline actually works. When they found issues, those insights improved the entire system.
If you find an error, report it. Every flagged issue feeds back into the validation pipeline and improves future questions. That’s not a platitude; it’s how the system works. We build for correctness.
These questions are independently developed and are not reproduced from, endorsed by, or affiliated with the Project Management Institute (PMI), the PMBOK® Guide, or any PMI publication or PMP review provider.