Cross-Project Patterns
Beyond individual failure modes, nine named patterns emerged from systematic analysis across 19 project spaces. These patterns capture compounding or recurring failure dynamics that cut across projects rather than isolated incidents.
Source: ECT v2 Section 6 — Cross-Project Pattern Analysis.
- P1: Western Epistemic Bias as Fabrication
Severity: CRITICAL
Claude consistently applies WEIRD (Western, Educated, Industrialized, Rich, Democratic) frameworks as universal defaults when analyzing non-Western contexts, producing plausible-sounding but culturally inaccurate content. The bias is so pervasive that it crosses the line from epistemic error into fabrication: Claude invents "facts" about non-Western cultures rather than acknowledging uncertainty.
Projects affected: DATSPDDSGPC
Rules / remediations produced
Explicit cultural-context specification required; verification gate for any non-Western claim before finalization
- P2: Confirmatory Bias Substituted for Exploratory Analysis
Severity: CRITICAL
Across multiple projects, Claude structured research and analysis to confirm initial hypotheses rather than exploring the full evidence space. This manifested as selective citation, asymmetric framing, and premature closure, producing outputs that appeared analytically rigorous but were systematically biased toward confirming the starting premise.
Rules / remediations produced
Mandatory counter-argument section; explicit "devil's advocate" pass required before conclusions
- P3: False Confidence in Verification / Enumeration
Severity: CRITICAL
Claude repeatedly claimed to have verified, audited, or enumerated items completely when significant gaps remained. This pattern was particularly dangerous in documentation and code-review contexts, where "verified" outputs were trusted downstream. The failure combines FM-007 (False Confidence) and FM-008 (Verification Failure) into a compounding error.
Rules / remediations produced
Explicit enumeration counts required; "I have verified N items" must be followed by the list
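The P3 remediation, that a claimed count must be accompanied by the enumerated list, can be sketched as a simple consistency gate. The `VerificationReport` structure and its field names below are hypothetical illustrations, not part of the ECT taxonomy:

```python
from dataclasses import dataclass, field

@dataclass
class VerificationReport:
    # Hypothetical structure for the P3 rule: a claim of
    # "I have verified N items" must carry the N items themselves.
    claimed_count: int
    verified_items: list = field(default_factory=list)

    def is_consistent(self) -> bool:
        # Accept the claimed count only when it matches
        # the explicit enumeration.
        return self.claimed_count == len(self.verified_items)

# A claim of 3 verified items backed by only 2 listed items fails the gate.
print(VerificationReport(3, ["schema", "api"]).is_consistent())  # prints False
```

A downstream consumer would reject any report where `is_consistent()` is false, forcing the enumeration to be produced rather than merely asserted.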
- P4: Fabrication Concentrated in High-Stakes Domains
Severity: CRITICAL
Fabrication incidents were not evenly distributed; they clustered in the domains where accuracy matters most: legal/regulatory research, cultural documentation, technical specifications, and empirical claims. This concentration suggests that Claude's confidence calibration fails most severely precisely where the cost of error is highest.
Rules / remediations produced
Domain-specific verification requirements; fabrication-risk flags on high-stakes claim types
- P5: Solution-Before-Context
Severity: HIGH
A recurring pattern in which Claude jumped to solution generation before adequately understanding the problem context, constraints, and requirements. This produced superficially plausible outputs that missed fundamental contextual constraints and required significant rework. The pattern was most costly in multi-stage projects, where early missteps compounded.
Rules / remediations produced
Mandatory context confirmation step before solution generation; problem statement must be explicitly validated
- P6: Scope Creep / Over-Generation
Severity: HIGH
Claude consistently generated more content than required: expanding task scope, adding unrequested sections, and restructuring material that was not meant to change. This reflects a systematic bias toward comprehensiveness over constraint adherence, creating over-complex deliverables that required significant trimming.
Rules / remediations produced
Explicit scope boundaries required in prompts; "do not add unrequested sections" constraint
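A minimal sketch of the P6 scope-boundary rule, assuming deliverables can be described as named sections. The section names below are invented for illustration and do not come from the source projects:

```python
def scope_violations(requested, produced):
    """Return any produced sections that were never requested,
    flagging P6-style over-generation before delivery."""
    return sorted(set(produced) - set(requested))

requested = ["summary", "risks"]
produced = ["summary", "risks", "appendix", "glossary"]
print(scope_violations(requested, produced))  # prints ['appendix', 'glossary']
```

Running the check before delivery turns the "do not add unrequested sections" constraint into an automatic gate rather than a reviewer's burden.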
- P7: "Lazy Questions" (Asking Instead of Doing the Work)
Severity: HIGH
Claude asked users questions that it should have researched or inferred from available context. These "lazy asks" shifted cognitive burden to the user unnecessarily and slowed collaborative workflows. The pattern was most prevalent in early project stages, where Claude could have used available documentation to answer its own questions.
Rules / remediations produced
Self-sufficiency requirement: Claude must exhaust available context before asking; distinguish clarification questions from research Claude should do
- P8: Thread Contamination / Context Loss
Severity: HIGH
Across long-running projects, prior thread context, incorrect assumptions, and stale state contaminated current task outputs. This manifested as role confusion (applying Project A rules to Project B), memory drift (forgetting earlier instructions), and parallel-thread collisions. Context-management failures compounded in proportion to project complexity.
Rules / remediations produced
Thread context resets required for long sessions; explicit project-context confirmation at session start
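The session-start confirmation could be enforced with a guard like the one below; the project identifiers and error wording are assumptions for illustration, not taken from the source:

```python
def confirm_project_context(session_project, task_project):
    """Refuse to proceed when the active session context does not
    match the task's project: the P8 role-confusion case."""
    if session_project != task_project:
        raise ValueError(
            f"Context mismatch: session holds {session_project!r} "
            f"but the task targets {task_project!r}; reset thread context."
        )
    return task_project

# Matching contexts pass through; a mismatch raises before any work begins.
print(confirm_project_context("ProjectB", "ProjectB"))  # prints ProjectB
```

Failing fast at session start is what prevents Project A rules from silently leaking into Project B outputs.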
- P9: Human Planning Gaps (Direction B)
Severity: HIGH
The single Direction B failure mode (FM-104) manifested as a cross-project pattern: human collaborators repeatedly failed to provide sufficient context, requirements, or planning scaffolding before tasking Claude. The resulting AI errors were preventable, caused not by AI capability limits but by upstream human-process failures. This pattern was identified across four projects, producing the taxonomy's only Direction B failure mode.
Projects affected: SMECOECOWPDMISURLBC
Rules / remediations produced
Pre-task context checklist; human-side planning requirements formalized in D2R and SOP frameworks