
Who Reviews the AI’s Code When the Senior Leaves?

The sustainability question nobody’s asking about AI-augmented development

A friend read kerber.ai yesterday and sent me this:

“Senior devs win at AI coding, not because of better prompts, but because they know what good looks like.” (translated)

I completely agree, but there’s another side to this coin; I don’t think these people will accept that role over time. At first, yes. But then junior developers or product owners take over the baton. And what happens then to maintaining that (good enough) high quality? (translated)

He’s right. And it’s a question I haven’t seen anyone address in the AI-coding hype cycle.

The Hidden Assumption

Everyone talks about 10x productivity. Senior devs guiding AI. Human-in-the-loop quality control.

But there’s an assumption baked in: that seniors will stay forever.

They won’t.

Senior developers don’t want to spend their careers reviewing AI-generated code. It’s tedious. It doesn’t build new skills. Frankly, if you’re good enough to review AI output, you’re good enough to do more interesting work.

So what happens when they leave?

The Obvious Problem

Here’s the trajectory I see playing out:

  1. Year 1: Senior dev + AI = magic. Fast shipping, high quality.
  2. Year 2: Senior gets bored reviewing AI code. Wants architecture work, new challenges.
  3. Year 3: Senior leaves. Junior or PM inherits the “AI supervision” role.
  4. Year 4: Quality slowly degrades. Technical debt accumulates invisibly.
  5. Year 5: “Why is everything broken?”

The junior doesn’t know what good looks like. They can’t catch the subtle architectural mistakes AI makes. They approve PRs because the tests pass, not because the code is right.

Three Partial Solutions

1. Expertise-as-a-Service

This is what we’re building at kerber.ai.

Companies don’t need to retain expensive seniors full-time. They rent the judgment. 1-2 hours per week of senior guidance, not 40 hours of babysitting.

One senior expert can oversee multiple projects. You get the quality control without the headcount.

The senior stays engaged because they’re solving interesting problems across different codebases—not stuck reviewing the same repo forever.

2. Knowledge That Persists

Every code review should become documentation.

When I review AI-generated code, I don’t just approve or reject. I document why. Which patterns to follow. What to avoid. What “good” looks like for this specific codebase.

Architectural Decision Records (ADRs). Pattern libraries. Test suites that encode quality standards.
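A test suite that encodes quality standards can be very lightweight. Here’s a minimal sketch, in Python, of the idea: a reviewer’s architectural judgment (“UI code must not import the database layer directly”) captured as an automated check that outlives the reviewer. The rule, module names, and helper are hypothetical, purely for illustration.

```python
import re

# Hypothetical layering rule: modules in the "ui" layer
# may not import anything from the "db" layer.
FORBIDDEN = {"ui": ["db"]}

def violations(module_layer: str, source: str) -> list[str]:
    """Return the imports in `source` that break the layering rule
    for code living in `module_layer`."""
    banned = FORBIDDEN.get(module_layer, [])
    found = []
    for line in source.splitlines():
        # Match both "import x.y" and "from x.y import z" forms.
        m = re.match(r"\s*(?:from|import)\s+([\w.]+)", line)
        if m and m.group(1).split(".")[0] in banned:
            found.append(m.group(1))
    return found

# The reviewer's judgment, encoded as assertions that future
# contributors (human or AI) must keep satisfying:
assert violations("ui", "from db.models import User") == ["db.models"]
assert violations("ui", "import requests") == []
```

Run in CI, a check like this rejects a PR for the same reason a departed senior would have, whether the code was written by a junior or generated by an AI.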

The goal: when I leave, the knowledge stays.

3. Faster Learning Loops

Here’s the optimistic take: juniors working with AI learn faster than any previous generation.

When AI explains its code in real-time, with context, the feedback loop is brutally short. A 2027 junior will have seen more code patterns than a 2017 senior.

The question is whether pattern recognition equals judgment. I’m not sure it does. But it helps.

The Real Answer

Honestly, I don’t think there’s a complete solution yet.

The AI-coding revolution is maybe 18 months old. We’re still figuring out the workflows, the tools, the mental models. The sustainability question is just starting to surface.

But here’s what I believe:

The companies that win will be the ones who treat this seriously now.

  • Build quality systems that outlive individuals
  • Document decisions, not just code
  • Create feedback loops that accelerate junior learning
  • Use external expertise strategically, not as a crutch

The goal isn’t for seniors to review AI forever. The goal is for senior knowledge to become sustainable.

We’re not there yet, but at least we’re asking the right questions.

Thanks to Peter Åkerström for the question that sparked this post.

If you’re thinking about AI-augmented development for your team, let’s talk.

Photo credit: Braden Collum
© Alex Kerber 2003 - 2026