Beyond Technical Debt: How AI Coding Assistants Created "Comprehension Debt" in Our Indie Game

Community Article Published October 30, 2025

TL;DR: We used AI coding assistants to build a game prototype in 3 months with zero budget. The good news? AI helped us build things way beyond our skill level. The bad news? We built things way beyond our skill level. We call this "comprehension debt" - and it might be AI's biggest challenge for junior developers.


The Setup: Three Developers, Zero Dollars, Big Dreams

My team and I set out to build The Worm's Memoirs, a 2D point-and-click narrative game about childhood trauma. We had:

  • 3 people: An artist, a sound designer, and me (junior programmer)
  • 3 months: February to April 2025
  • $0 budget: Completely bootstrapped
  • Part-time: 10-15 hours/week each, distributed across time zones

Traditional game development methodologies (Agile, Scrum) assume you have full-time co-located teams with experienced leads. We had none of that. So we turned to AI coding assistants: GitHub Copilot, Claude, ChatGPT.

Note: This work is currently under peer review at DiGRA 2026 and FDG 2026 conferences.


What We Did: The CIGDI Framework

We developed what we call the CIGDI Framework (Co-Intelligence Game Development Ideation) - basically, a workflow for junior devs using AI tools without getting completely wrecked.

The 7 Stages:

  1. Research (AI-Assisted): AI helps filter technical docs and tutorials
  2. Ideation: Brainstorm with AI, humans filter for feasibility
  3. Prototyping (AI-Assisted): AI generates code scaffolds
  4. Playtest (Human-Only): Team tests everything
  5. Review: Analyze what broke and why
  6. Action: Update docs and priorities
  7. Integration: Merge working stuff into main build

The 2 Critical Human Checkpoints:

Decision Point 1 - Priority Criteria: Humans decide what features to build (AI suggests, we decide)

Decision Point 2 - Timeboxing: Humans set deadlines (prevents endless iteration)

Our Ethical Boundaries:

  • NO AI for: Art, music, narrative design (core creative work)
  • YES AI for: Code scaffolding, documentation, research, boilerplate

We wanted to keep human creativity central and not displace creative workers.


The Data: We Tracked Everything

Over 3 months, we documented:

  • 157 Jira tasks across 12+ sprints
  • 333 GitHub commits showing our code evolution
  • 13+ Miro boards for visual planning
  • 8 team reflection sessions (recorded and analyzed)
  • Emotional documentation: Memes, diary entries, frustration logs

Yeah, we even tracked our feelings. Turns out that was important.


The Good: AI Superpowers We Gained

1. Knowledge Access Democratization

Before: "How do I implement a state machine for point-and-click interactions?" → Hours of Stack Overflow, outdated Unity forums, confusing docs.

After: Ask AI, get working example in 30 seconds with explanation.

This is huge for self-taught developers without CS degrees.
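To make "working example in 30 seconds" concrete, here is a minimal sketch of the kind of state machine an assistant might hand back. This is a hypothetical Python stand-in (our actual project used Unity), and every name in it is invented for illustration:

```python
# Sketch of a point-and-click interaction state machine.
# Hypothetical example - not the code an AI actually generated for us.

class InteractionStateMachine:
    """Tracks whether the player is idle, hovering over an object, or inspecting it."""

    # Table of valid (current_state, event) -> next_state transitions.
    TRANSITIONS = {
        ("idle", "hover_object"): "hovering",
        ("hovering", "leave_object"): "idle",
        ("hovering", "click"): "inspecting",
        ("inspecting", "close"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        """Advance to the next state if the (state, event) pair is valid; otherwise stay put."""
        next_state = self.TRANSITIONS.get((self.state, event))
        if next_state is not None:
            self.state = next_state
        return self.state


sm = InteractionStateMachine()
sm.handle("hover_object")  # state becomes "hovering"
sm.handle("click")         # state becomes "inspecting"
```

The table-driven transitions are the part worth understanding before shipping: invalid events are silently ignored, which is convenient until you need to debug why a click "did nothing."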

2. Reduced Cognitive Load

AI handled all the boilerplate code, API documentation lookup, and syntax checking. I could focus on game logic instead of remembering whether Python uses append() or push().

3. Accelerated Prototyping

What traditionally took days to implement, we prototyped in hours. More iteration = more experimentation = better game design.

The dopamine hits from rapid progress were real. 🚀


The Bad: Enter "Comprehension Debt"

Here's where things got interesting (and scary).

What is Comprehension Debt?

Comprehension debt is when you build systems more sophisticated than your skill level can maintain.

Traditional technical debt: "We wrote messy code to ship fast, now it's hard to maintain."

Comprehension debt: "AI wrote clean, sophisticated code we don't fully understand, now it's impossible to maintain."

How It Happened

AI didn't just give us working code - it gave us professionally architected code using patterns we'd never learned:

  • Observer patterns
  • Command patterns
  • Singleton with dependency injection
  • State machines we couldn't debug

The code worked. That was the problem.

When bugs appeared or requirements changed, we couldn't fix the code ourselves. We didn't understand its internal logic. We had to go back to the AI and ask it to debug our own codebase.
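For readers who, like us, had never studied these patterns: here is a minimal Observer sketch, the sort of "clean" decoupled architecture an assistant produces by default. This is a hypothetical Python stand-in, not our actual inventory code:

```python
# Minimal Observer pattern sketch - hypothetical stand-in, not our real code.
# The subject broadcasts events; listeners react without the subject knowing who they are.

class InventoryEvents:
    """Subject: notifies all subscribed listeners when an item is picked up."""

    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        """Register a callable to be invoked on every item pickup."""
        self._listeners.append(callback)

    def notify(self, item):
        """Broadcast the picked-up item to every listener."""
        for callback in self._listeners:
            callback(item)


picked_up = []
events = InventoryEvents()
events.subscribe(picked_up.append)    # stand-in for a UI listener
events.subscribe(lambda item: None)   # stand-in for an audio listener
events.notify("rusty key")            # picked_up now contains "rusty key"
```

The decoupling is exactly what makes it hard for a novice to debug: when a pickup misbehaves, nothing in `notify()` tells you which of the anonymous listeners is responsible.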

The Paradox

Experienced Developer:
  AI suggestion → "Oh, this uses Observer pattern" → Understands → Can modify

Junior Developer (us):
  AI suggestion → Looks good, tests pass → Ships it → Bug appears → ???

We were left with working systems we only partially understood. Fragile and dependent.


The Emotional Rollercoaster

Our emotional documentation revealed a pattern:

Week 1-2: 🎉 "AI is amazing! We're building so fast!"

Week 3-4: 🤔 "Wait, how does this inventory system work again?"

Week 5: 😰 "Everything is breaking and we don't know why" (Sprint 5 Overwhelm Crisis)

Week 6-12: 😅😰😅😰 Oscillating between productivity euphoria and incompetence anxiety

From our reflection docs:

"AI suggested too much work for our level. We can't keep up."

From our meme collection:

[Image: Drowning person] "Me trying to implement AI's 'simple' inventory system"


The Critical Question: Learning Ladder or Dependency Trap?

Learning Ladder Theory: Exposure to sophisticated code gradually builds expertise (like Vygotsky's Zone of Proximal Development)

Dependency Trap Theory: Repeated reliance on AI prevents independent skill development

Our 3-month data: ¯\_(ツ)_/¯ Too early to tell.

We saw evidence of both:

  • ✅ Some AI explanations improved our understanding of patterns
  • ❌ We kept asking AI for the same help without learning

Who Benefits? Who Gets Left Behind?

This raises important questions about democratization:

The Promise:

AI lowers barriers to game development! Self-taught developers without CS degrees can build functional games! More diverse voices in game narratives!

The Reality:

Two-tiered system emerging:

Tier 1 (Experienced devs): Use AI to accelerate work they already understand

  • Know when AI is wrong
  • Can verify outputs
  • Learn from AI suggestions
  • Get faster

Tier 2 (Novice devs): Depend on AI for work they can't independently maintain

  • Don't know when AI is wrong
  • Can't fully verify outputs
  • Copy without understanding
  • Get fragile

For resource-constrained developers (especially from underrepresented backgrounds): Comprehension debt might be an acceptable trade-off. Shipping an imperfect game > never shipping.

But this should be a conscious choice, not accidental.


Our "Trust But Verify" Protocol

We developed verification rules after painful experiences:

1. Understand Before Implementing

Rule: If you can't explain the code to a teammate, don't use it.

Example: AI suggested a complex reactive UI system. Looked cool. We copied it. Broke everything. Reverted to simple button clicks we understood.

2. Verify Against Current Sources

Rule: AI has knowledge cutoffs. Check if APIs/frameworks are current.

Example: AI suggested a deprecated Unity API that would have introduced a security vulnerability. We caught it only because we checked the current docs.
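One cheap mechanical check that supports this rule is to escalate deprecation warnings to errors while exercising AI-suggested code. This is a hedged Python sketch of the idea (Unity/C# projects would instead lean on compiler warnings from `[Obsolete]` attributes); `old_api` is an invented example function:

```python
import warnings


def call_checked(fn, *args, **kwargs):
    """Run an AI-suggested call, but turn any DeprecationWarning into a hard error."""
    with warnings.catch_warnings():
        warnings.simplefilter("error", DeprecationWarning)
        return fn(*args, **kwargs)


def old_api():
    """Hypothetical stand-in for a deprecated call an assistant might suggest."""
    warnings.warn("old_api() is deprecated, use something current", DeprecationWarning)
    return 42


try:
    call_checked(old_api)
except DeprecationWarning:
    print("deprecated call caught - check the current docs before shipping")
```

Running suspect code through a wrapper like this during playtesting surfaces stale suggestions immediately, instead of letting them rot silently in the build.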

3. Evaluate Creative Fit

Rule: Does this serve YOUR vision or just what AI thinks is "good practice"?

Example: AI wanted to add complex inventory management. Our game didn't need it. We said no.

4. Document Decisions AND Emotions

Rule: Track not just what you did, but how you felt.

Why: Emotional patterns revealed when we were over-relying on AI vs. learning from it.


Practical Advice for Junior Developers Using AI

✅ DO:

  1. Use AI for boilerplate and research - Great for reducing grunt work
  2. Set explicit boundaries - Decide what AI can/can't touch
  3. Force yourself to explain code - If you can't, don't ship it
  4. Track your emotional state - Oscillating between euphoria/anxiety? Red flag.
  5. Have human decision checkpoints - Don't let AI drive scope

❌ DON'T:

  1. Copy-paste without understanding - Comprehension debt accumulates
  2. Let AI define your scope - It will suggest way too much
  3. Skip verification - AI makes confident mistakes
  4. Ignore your gut - If something feels too complex, it probably is
  5. Rush - Timebox and accept "not done" as an outcome

What We Shipped

Despite comprehension debt, we shipped a working prototype:

  • 5 playable scenes
  • Point-and-click mechanics
  • Narrative branching
  • Custom art and sound (no AI!)
  • All in 3 months, $0 budget

Was it fragile? Yes.
Did we understand all the code? No.
Could we have built it without AI? Also no.


Open Questions for the Research Community

  1. Longitudinal studies needed: Does comprehension debt resolve over time (learning ladder) or compound (dependency trap)?

  2. Measurement frameworks: How do we quantify comprehension debt? Code complexity vs. developer expertise gap?

  3. Intervention studies: What verification protocols actually work for novice developers?

  4. Equity analysis: Who benefits most from AI coding assistants? What new barriers emerge?

  5. Educational implications: Should CS education teach "AI-assisted development" as a skill?


Conclusion: It's Complicated

AI coding assistants gave us superpowers. They also created dependencies we're still working through.

For junior developers:

  • AI can help you ship when traditional paths are inaccessible
  • But comprehension debt is real and creates fragility
  • Be deliberate about what you use AI for
  • Verify everything, even when (especially when) it looks right

For the AI/ML community:

  • Comprehension debt is a design challenge, not user error
  • "Working code" ≠ "maintainable code" for the skill level
  • Junior developers need different tools than experts
  • Consider: How do we scaffold learning instead of replacing it?

For the game industry:

  • AI democratizes access but creates new barriers
  • Resource-constrained teams face different risks than studios
  • More research needed on long-term skill development impacts

Our Data & Framework

We're committed to open research. Our framework documentation, reflection logs, and analysis will be available after conference review.



References & Further Reading

Key Papers:

  • Prather et al. (2024): "The Widening Gap: Benefits and Harms of Generative AI for Novice Programmers"
  • Perry et al. (2023): "Do Users Write More Insecure Code with AI Assistants?"
  • Crowston & Bolici (2025): "Deskilling and Upskilling with AI Systems"

Related Work:

  • Politowski et al. (2021): Game Industry Problems (analyzed 927 postmortems)
  • Kazemitabaar et al. (2023): AI Code Generators & Novice Learners

Acknowledgments

Thanks to our team members [anonymized for review], RMIT MAGI program, and everyone who playtested our messy prototype.

Special thanks to the AI tools we used (and struggled with): GitHub Copilot, Claude, ChatGPT. You taught us a lot - including what we don't know.


This research is part of my Master's thesis at RMIT University's Master of Animation, Games & Interactivity (MAGI) program. Currently under review at DiGRA 2026 (Extended Abstract) and FDG 2026 (Full Paper).

Have thoughts on comprehension debt? Experienced it yourself? Let's discuss in the comments! 💬
