Cognitive Outsourcing
AI’s greatest threat to learning
Hey 👋
Welcome to May (named after the Roman goddess of growth). Today, we’re kicking off a new series on AI & education… strap in.
Big idea 🍉

AI is coming thick and fast. LLMs like ChatGPT are outperforming humans at an ever-increasing range of tasks, their adoption is spreading faster than any technology before, and they are the least intelligent they will ever be.
However, just because AI is powerful doesn't necessarily mean it's good for learning. Setting aside obvious issues around accuracy, bias, and privacy, the current generation of LLMs is optimised for helping users solve problems, not for helping users get better at solving problems.
As a result, when students use AI to help with their learning, they tend to produce better outcomes in the short term (e.g. write a superior essay, answer a question more comprehensively)... BUT this often comes at the expense of their learning in the long term.
This disruption strikes at one of the core axioms of education, namely: whoever does the thinking gets the learning.
When students use the current generation of LLMs as a crutch, it's the AI that is doing the important thinking... and as a result, our students get robbed of their intellectual growth (we've seen similar effects with overly helpful Teaching Assistants in the past).
This ‘cognitive outsourcing’ is arguably AI’s single greatest threat to learning. So where does this leave us?
Our teaching should be organised around maximising student thinking.
We should avoid assessing tasks where students have used AI as support.
If we are going to use AI to support learning, we should favour LLMs designed to act as a tutor rather than an assistant (such as LearnLM).
NOTE: All this applies to teachers too… if we repeatedly outsource important aspects of our thinking, we risk becoming de-skilled in those areas over time.
🎓 For more, check out this paper on how AI can boost performance but harm learning.
Summary
AI is powerful, but not necessarily always helpful for learning.
This is because ‘whoever does the thinking gets the learning’.
As such, we must be super careful about using AI to support and assess student learning (and to support teaching).
Little updates 🥕
Study exploring generative vs retrieval tasks → finds combo improves retention & comprehension more than either alone (regardless of order).
Paper testing learning with direct, unrelated, or no images → finds related images aid processing and recall, while unrelated images impair learning.
Meta-analysis of 30+ studies on peer feedback → suggests that supporting students as feedback givers improves feedback quality but not subject learning.
Report on teacher PD reform → argues that a coherent, evidence-based entitlement boosts teacher expertise and drives system improvement.
Get printable PDFs of snacks and much more → Show me Snacks PRO ✨
Enjoy the ride.
Peps 👊