
February 3, 2026

Anthropic Just Studied How AI Affects Learning to Code

The debate about whether AI helps coding is over. The real question is what it does to us.


Yesterday I wrote about the new playbook for juniors. Today, Anthropic dropped a study that puts hard numbers on something we’ve all been feeling.

Does using AI to code make you worse at understanding code?

Short answer: Yes, but also no. It depends entirely on how you use it.

The Study

Anthropic took 52 (mostly junior) developers and had them build features with Trio, a Python async-concurrency library none of them had used before. Half got AI assistance; half didn't.

The results:

Group      Quiz score    Time
No AI      67%           baseline
With AI    50%           ~2 min faster

AI users scored 17 percentage points lower on quizzes about code they'd written minutes earlier. The biggest gap? Debugging questions.

The Patterns

Low scorers: let the AI write everything. Fastest completion, worst understanding.

High scorers: asked why things worked before coding it themselves, or let the AI generate code and then asked follow-up questions until they understood it.

The difference isn’t AI vs no AI. It’s offloading vs augmenting.

What This Means for Teams

  1. Don’t ban AI. That’s like banning calculators.
  2. Create learning moments. Code reviews where juniors explain the AI-generated code.
  3. Test for understanding, not output. Can they debug it? Can they modify it?
  4. Expect to need more debugging skill on your team, not less. Someone still has to diagnose and fix AI-generated code.

The debate about whether AI helps with coding is over. The interesting question is what it does to us over time.

Thinking about this stuff too? Let's talk.