How Neuroscience Completely Transformed My Approach to Artificial Intelligence

By Sarvesh Talele · Published April 24, 2026 · 12 min read · Source: Level Up Coding

What the illusion of competence taught me about daily programming practice

It is 11:47 PM and you have just solved your 347th algorithmic puzzle. The logic streams through your fingertips as if your motor cortex has internalised this pattern countless times before. You close your laptop feeling like a flawlessly optimised computational machine. Two weeks later, a coworker catches you over coffee and asks you to sketch the same concept on a whiteboard. You grab the marker, but your hand hangs in mid-air. Complete cognitive silence.

It happened so often that I eventually stopped blaming the difficulty of the questions and started questioning my entire approach. As someone who builds artificial neural networks, I had to admit that my own biological learning pipeline was badly misconfigured. The answer, when I finally went looking for it, had been established in cognitive psychology research for a full twenty years.

The Scientific Name for the Lie Your Brain Tells You

Researchers Jeffrey Karpicke and Henry Roediger at Washington University in St. Louis identified this phenomenon as the illusion of competence. When you rehearse information strictly within your short-term memory, your brain rapidly constructs temporary neural circuitry. This rapid formation makes mere recognition feel entirely indistinguishable from genuine mastery. The syntax looks familiar, so your brain confidently whispers that you know it. However, cognitive recognition and active recall operate on fundamentally different biological hardware.

In their 2006 study published in Psychological Science, Roediger and Karpicke compared students who merely reread material four times with students who read it once and then tested themselves three times. A week later, the testing group retained roughly 50 percent more of the material, despite spending less total time on it. The rereaders felt significantly more confident, but the testers held the actual knowledge. Most of us, when we practice programming, are simply rereaders with extra steps.

What Your Brain Actually Does While You Sleep

Your brain does not commit data to a permanent drive in real time. According to a 2019 review in Nature Neuroscience by Klinzing, Niethard, and Born, long-term memory formation occurs through an active systems consolidation process that runs primarily during sleep. During slow-wave sleep, the exact neurons that fired while you were learning ignite again in rapid, compressed replays. This mechanism gradually transfers structural patterns from your hippocampus to your neocortex for durable storage.

Here is the crucial element that completely changed my perspective on practice. The human brain is ruthlessly selective about what data it actually bothers to consolidate. Research conducted by James McGaugh at UC Irvine demonstrates that emotionally salient experiences encode much more strongly due to amygdala activation at the precise moment of learning. Attention fundamentally matters. Cognitive effort matters. Authentic intellectual struggle matters. The specific neurons that fire together during moments of genuine difficulty are the exact ones your sleeping brain rewires for the long haul.

Grinding through algorithms at midnight, half awake, blindly copying patterns you barely understand, is close to the worst possible input you could feed this biological system.

How Artificial Intelligence Deepens the Trap

Now introduce a large language model into this cognitive loop.

You describe an architectural problem. An AI assistant writes the elegant solution. You read it, nod in agreement, and immediately move on. Your prefrontal cortex registered the logical explanation, but your hippocampus encoded virtually nothing worth preserving. The next time that specific problem surfaces in a technical interview or on a production system, your cognitive retrieval fails.

A 2025 study from the MIT Media Lab led by Nataliya Kosmyna, appropriately titled “Your Brain on ChatGPT,” put 54 participants through essay writing tasks while recording brain activity with 32-channel EEG. It compared three groups: language model users, search engine users, and individuals relying entirely on their own brains. The AI group showed the weakest and least distributed neural connectivity patterns of the three cohorts. By the third session, many participants had devolved into simply pasting prompts and lightly editing the generated output. The researchers called this degrading effect “cognitive debt.”

The finding maps directly onto coding practice. When a generative tool writes the answer, your brain files watching under an entirely different biological process from doing, even when the resulting text on your screen looks identical.

The Week 350 Problems Humbled Me

I had solved around 350 problems across six months when a friend asked me to solve a medium-difficulty problem from two months earlier, out loud, without an IDE. I stumbled through the logic, got the structure wrong, and needed a hint I had not needed the first time around.

So I audited my own logs and attempted thirty problems from eight weeks prior. Perhaps a third of them felt cognitively fresh enough to genuinely rebuild from a blank file. The remaining algorithms felt like movies I had watched but could not accurately recount the plot of.

Completing 350 problems meant absolutely nothing. Having 100 problems permanently living in my long-term memory would have meant everything.

Two Scientific Techniques That Changed Everything

I rewrote my daily practice routine around two specific methodologies that cognitive psychologists have rigorously validated for decades.

The first technique is active recall. You must always place a question between yourself and the desired answer. Instead of asking my AI assistant to solve a complex dynamic programming problem, I ask it to describe the broad problem category. I close the browser tab and architect the solution from a completely blank text editor. If I fail, I study the knowledge gap, and the next day I attempt that identical problem again from scratch. A meta-analysis by Christopher Rowland, published in Psychological Bulletin in 2014 and covering 159 independent studies, found that retrieval practice produced an average benefit of half a standard deviation (d ≈ 0.5). That effect reliably moves a learner from the 50th percentile to roughly the 69th percentile of retention compared to passive study methods.
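To see where that percentile figure comes from, here is a two-line check in Python (my own illustration; the arithmetic is just the standard normal CDF evaluated at d = 0.5):

```python
# An effect of d = 0.5 shifts the average learner half a standard deviation
# up a normal distribution: from the 50th to roughly the 69th percentile.
from statistics import NormalDist

print(f"{NormalDist().cdf(0.5):.1%}")  # 69.1%
```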

The second technique is spaced repetition. A landmark meta-analysis by Cepeda and colleagues in Psychological Bulletin in 2006 suggested reviewing material at intervals of roughly 10 to 20 percent of your desired retention timeframe. To remember a concept for a full year, you review it every one to two months. To recall an algorithm reliably for a technical interview four weeks away, you review it every three to seven days. I currently maintain a rolling digital list of 100 core problems and revisit a small handful each morning before I allow myself to touch anything new.
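The arithmetic of the spacing rule fits in a few lines. A minimal sketch in Python (the function name and floor-division rounding are my own; only the 10 to 20 percent bounds come from the paper):

```python
# Spacing rule from Cepeda et al. (2006): review at intervals of roughly
# 10 to 20 percent of the timeframe you want to retain the material for.

def review_interval_days(retention_goal_days: int) -> tuple[int, int]:
    """Return the suggested (shortest, longest) review interval in days."""
    return max(1, retention_goal_days // 10), max(1, retention_goal_days // 5)

print(review_interval_days(365))  # (36, 73): every one to two months for a year
print(review_interval_days(28))   # (2, 5): every few days before a four-week interview
```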

How I Actually Use AI as an Engineer Today

Artificial intelligence is no longer my automated code generator during practice sessions. It is my dedicated cognitive tutor.

My most frequently used prompt these days resembles this structure: “Provide a medium difficulty sliding window problem. Do not show me the generated solution. If I am clearly stuck for six minutes, provide exactly one conceptual hint, then wait.”

The AI designs my curriculum. It selects the next logical problem, identifies the problem categories I have been subconsciously avoiding, and generates nuanced variations of the specific patterns I failed last week. But the heavy retrieval work stays entirely mine. That friction is the only thing that successfully writes to my long-term memory.

When I do need an AI to explain an advanced concept, I force an active cognitive step immediately afterwards. I read the explanation, close the tab, and write my own interpretation in a private note, using my own vocabulary and constructing my own example. This is essentially the Feynman Technique dressed in 2026 clothing. That deliberate effort of translation is exactly what the hippocampus notices and files away for good.

A Practical Framework You Can Try This Week

If you have been aggressively grinding through algorithms and quietly suspect you are fundamentally retaining less than your problem count suggests, try this specific protocol for fourteen days.

Select twenty problems from your weakest algorithmic category. Attempt to solve each one entirely cold. Write a two-line plain-English description of the underlying structural pattern in your notes. Twenty-four hours later, retry those identical twenty problems with zero reference material and no AI assistance. Note which structures you cannot rebuild; those are your true cognitive gaps. Work exclusively on those gaps for the following three days. After that, begin introducing new problems slowly, folding the original twenty back into your active rotation every third day.
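Laid out day by day, that protocol looks something like this (a sketch under my own reading of the steps above; the exact day boundaries are illustrative):

```python
# The fourteen-day protocol: cold attempts, a zero-reference retry,
# three days of gap work, then new problems with the original twenty
# folded back into rotation every third day.

def fourteen_day_plan() -> list[str]:
    plan = [
        "Day 1: attempt all 20 problems cold; write a 2-line pattern note for each",
        "Day 2: retry the same 20 with no references and no AI; flag every gap",
    ]
    for day in (3, 4, 5):
        plan.append(f"Day {day}: work only on the gaps flagged on day 2")
    for day in range(6, 15):
        task = "introduce new problems slowly"
        if (day - 6) % 3 == 0:  # every third day, fold the original 20 back in
            task += ", and re-solve the original 20 from a blank file"
        plan.append(f"Day {day}: {task}")
    return plan

for step in fourteen_day_plan():
    print(step)
```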

By the end of those two weeks, you will possess twenty robust problems living comfortably in durable long-term memory. That structural knowledge is worth far more than two hundred scattered problems decaying in short-term buffers you cannot access on demand.

The Quiet Truth About Learning in the Age of Automation

The greatest contribution of contemporary cognitive science is not just a better method for memorization. It is permission to take your time without guilt.

You are not trailing behind simply because you have only deeply mastered 100 problems. You are ahead because those 100 concepts are genuinely and permanently yours. Every time you resist the temptation to let an artificial intelligence finish your thought, you cast a vote for the future version of yourself: the version who will still be competent three months from now, standing in a quiet room with a blank whiteboard, no autocomplete, and an interviewer asking exactly why you made a particular architectural tradeoff.

The machine can effortlessly synthesize the syntax. Only your hippocampus can forge you into a true engineer.

A daily template for mastery

This is the routine I landed on after a lot of iteration. Seventy-five minutes, five days a week. Every block is designed around retrieval, not recognition.

Block 1 - Warm retrieval (10 min): Open a blank file. Rewrite, from memory, the pattern you learned yesterday. No notes, no AI, no editor autocomplete. If you cannot reconstruct it, flag it as a gap and move on. The act of failing to recall is itself how the brain marks something as worth deeper consolidation.

Block 2 - Spaced review (15 min): Pick three problems from your log: one from a week ago, one from three weeks ago, one from two months ago. Solve each from a blank editor, time-boxed to five minutes. Anything you cannot rebuild goes to the top of tomorrow’s review queue. This is the 10 to 20% spacing rule in practice; there is a small code sketch of this selection step after the routine.

Block 3 - One new problem, deeply (30 min): Solve one new problem cold for the first twenty minutes. No AI, no hints, even if you are stuck. Struggle is the signal your amygdala needs to tag this as worth keeping. Only after twenty minutes do you open Claude or Copilot, and only with this prompt: “I am stuck on X. Give me one hint that points me toward the pattern, not the solution.” After you solve it, close everything and rebuild it from scratch.

Block 4 - Active synthesis (10 min): Write, in your own words, a three-line note: (1) the pattern you used, (2) the trigger that tells you to reach for this pattern next time, (3) one sentence on what made this problem hard. This is where long-term memory actually gets built. You are not writing notes for later review. You are writing them so that the translation itself forces encoding.

Block 5 - One reflection question (10 min): End with a prompt to Claude or Copilot: “Here is the pattern I just learned. Give me three variations of this problem that look different on the surface but use the same underlying pattern. Do not show solutions.” Save these for tomorrow’s warm retrieval block.

The whole thing is built so that AI sets the curriculum and verifies the work, but every neuron that matters fires in your head, not on the screen.
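Block 2 is the only step that needs bookkeeping, so here is the selection logic as a minimal sketch (the log format, problem names, and closest-age heuristic are my own illustration, not a real tool):

```python
# Block 2 picker: from a solve log, choose the problem closest to one week
# old, one closest to three weeks old, and one closest to two months old.
from datetime import date, timedelta

# Hypothetical solve log: problem name -> date last solved from scratch.
solve_log = {
    "sliding-window-max": date.today() - timedelta(days=8),
    "coin-change": date.today() - timedelta(days=20),
    "course-schedule": date.today() - timedelta(days=63),
    "two-sum": date.today() - timedelta(days=2),
}

def pick_reviews(log: dict, targets=(7, 21, 60)) -> list:
    """For each target age in days, pick the logged problem of closest age."""
    today = date.today()
    return [
        min(log, key=lambda name: abs((today - log[name]).days - target))
        for target in targets
    ]

print(pick_reviews(solve_log))
# ['sliding-window-max', 'coin-change', 'course-schedule']
```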

Resources and further reading

The foundational research

Roediger, H. L., & Karpicke, J. D. (2006). Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention. Psychological Science. The study referenced in the rereading versus testing section. Short, readable, and the paper that started the modern active recall movement. PubMed abstract · SAGE full text

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed Practice in Verbal Recall Tasks: A Review and Quantitative Synthesis. Psychological Bulletin. The meta-analysis of 317 experiments that gives us the 10 to 20% spacing rule. Dense, but the tables alone are worth the read. PubMed abstract · Full PDF

Klinzing, J. G., Niethard, N., & Born, J. (2019). Mechanisms of systems memory consolidation during sleep. Nature Neuroscience. The review on how hippocampal replay during slow-wave sleep moves memories into neocortical storage. This is the paper to read if you want to understand what is actually happening while you sleep. Nature link

Kosmyna, N., et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt. The MIT Media Lab EEG study on LLM assisted writing. The one I referenced for the “weakest connectivity” finding. Long paper, but the abstract and Figure 1 tell most of the story. arXiv preprint · MIT Media Lab project page

Books worth owning

Make It Stick: The Science of Successful Learning by Peter Brown, Henry Roediger III, and Mark McDaniel. Co-written by one of the authors of the 2006 testing effect paper. This is the cleanest translation of cognitive science into practical technique you will find anywhere.

A Mind for Numbers by Barbara Oakley. The book version of her Coursera course. The chapter on illusions of competence is where most developers first meet this idea.

Ultralearning by Scott Young. A self-experimental take on aggressive retrieval-based learning, useful as a case study in what the principles look like pushed to the extreme.

Free practical tools

Learning How to Learn (Coursera) by Barbara Oakley and Terrence Sejnowski. One of the most popular online courses ever made. The illusion of competence idea that opens this article comes almost word-for-word from Week 2. Free to audit. Coursera link

NeetCode 150. A curated list of 150 problems covering every major pattern. Exactly the kind of depth-first curriculum this blog argues for. Build mastery here before chasing a 500 problem count. NeetCode 150

One thing to bookmark

The Feynman Technique, applied to code: pick a pattern you think you understand, open a blank document, and explain it in plain English as if to someone who has never programmed. The moment your sentence gets vague is the moment you found the gap. This is active recall in disguise, and it takes ten minutes a day.

Follow Sarvesh Talele here on Medium for practical, step-by-step AI guides that actually make a difference in your workflow.

