AI and the Erosion of Knowing
When we outsource skills to systems, what decays in us? On epistemic fragility in the age of AI
"The obstacle was always the path." Adapted from a Zen proverb
In the Renaissance, apprentices learned to paint by grinding pigments, mixing oils, stretching canvas, and copying masterworks line by line. In architecture, students spent years sketching by hand before they touched a computer. In math, the best way to understand a theorem is to try to prove it yourself—and fail.
Today, that slow accumulation of competence is being replaced by a faster rhythm.
Ask an AI to write a proof, generate a building design, produce an image, or answer a math question, and it will. But something essential gets skipped. And what gets skipped doesn't just disappear. It quietly erodes.
AI gives us output without process. The result: polished answers with no foundation.
The Eroding Scaffold
Learning isn’t just about information. It’s about structure, and about the path you take to build it. You don’t truly understand a concept until you’ve built the scaffold it rests on: step by step, skill by skill.
Mira’s Derivative
Take Mira, a student learning calculus. She's supposed to learn how to compute derivatives. The teacher explains the chain rule: when one function is nested inside another, you differentiate the outer function, keeping the inner one in place, then multiply by the derivative of the inner one. It's abstract, so Mira does what students do: she turns to an AI tutor.
She types:
“Find the derivative of sin(x² + 3x).”
The AI answers instantly:
cos(x² + 3x) · (2x + 3)
It even offers a short explanation. Mira copies it down and moves on.
What she didn’t do:
- Simplify the inner expression.
- Apply the rule mechanically.
- Make a mistake and figure it out.
- Internalize the rhythm of composition and change.
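Spelled out, the working she skipped is only four small moves:

Name the inner function: u = x² + 3x, so the whole thing is sin(u).
Differentiate the outer: d/du sin(u) = cos(u).
Differentiate the inner: du/dx = 2x + 3.
Multiply: cos(x² + 3x) · (2x + 3).

Doing those moves, badly at first and then fluently, is what the finished answer can never teach.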
Now fast-forward two months. Mira sees a problem she’s never encountered:
“Differentiate xˣ.”
She freezes. No familiar template. No AI available. No internal scaffold to fall back on.
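The problem does yield, but only to the moves she never practiced:

Set y = xˣ and take logarithms: ln y = x · ln x.
Differentiate both sides (chain rule on the left, product rule on the right): y′/y = ln x + 1.
Solve: y′ = xˣ · (ln x + 1).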
She reached the destination — but never built the path.
The skipped struggle was what encoded the concept.
Leo’s Essay
Now consider Leo, a college student writing an essay on political philosophy. He’s supposed to take a position on Hobbes’s Leviathan and argue whether absolute sovereignty is justified today.
Traditionally, Leo would:
- Clarify Hobbes’s argument,
- Develop a counterposition,
- Find textual evidence,
- Draft and revise a logical structure.
Instead, he types:
“Write a 5-paragraph essay arguing against Hobbes's justification of sovereign power.”
The AI delivers—fluent, plausible, even citing sources. Leo pastes it in, tweaks a few lines, and submits.
A week later, he’s asked:
“How would Hobbes respond to modern surveillance capitalism?”
He flounders. The structure was never his. The reasoning was never practiced. The scaffolding was never built.
He didn’t outsource writing. He outsourced thinking.
Teachers aren’t grading the prose. They’re grading the reasoning it reveals.
Art Without Sketching, Architecture Without Lines
In design and architecture, we’re seeing the same thing. AI can generate facades, floor plans, and renders in minutes. But without grounding in scale, structure, or constraint, designs become fragile—beautiful but unbuildable. The result is a facade that ignores sun direction, a floor plan that fails fire code, a portrait with six fingers.
In art, tools like Midjourney let users create stunning illustrations from a few words. But if you can’t draw, can you see? Can you revise? Can you critique? Can you tell what’s off?
Drawing is not just a means of production—it’s a way of learning to notice. Line by line, it teaches scale, proportion, balance, rhythm. Without that training, feedback becomes guesswork. Revision becomes roulette.
When the tool does the shaping, the human stops developing the eye.
And when the AI makes a mistake—one that’s subtle, structural, or compositional—there’s no foundation to catch it. You don’t just lose the sketch. You lose the ability to tell when something is wrong.
Oversight Collapse
AI doesn’t reason — it samples.
When a model like GPT writes a paragraph or answers a question, it isn’t deriving a conclusion from first principles. It’s drawing from a probability distribution — choosing what sounds most plausible based on past data.
The result? Output that feels fluent — but isn’t guaranteed to be correct.
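A toy sketch makes the distinction concrete. It is purely illustrative (invented numbers, not any real model's code): the point is only that the next word is selected for plausibility, and nothing in the selection step checks truth.

```python
import random

# Illustrative stand-in for a language model's next-token step.
# The model assigns each candidate continuation a probability based on how
# plausible it looks given the prior text, then samples from that
# distribution. Nothing in this step derives or verifies anything.
next_token_probs = {
    "therefore": 0.46,       # reads like a conclusion is coming
    "however": 0.31,
    "clearly": 0.15,
    "I am not sure": 0.08,   # honesty can simply be the unlikely option
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Fluent output is whatever the dice favor, not whatever is correct.
print(random.choices(tokens, weights=weights, k=1)[0])
```

Scale the dictionary up to a full vocabulary, with a neural network setting the weights from context, and you have roughly the shape of the real thing.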
That’s what makes AI mistakes dangerous. They’re not just wrong. They’re plausibly wrong — errors with the gloss of insight. And if we’ve skipped the scaffolding, we can’t tell the difference between coherence and truth.
This is the tipping point: we’re still “in the loop,” but we can no longer verify what we see. We’ve become fluent, but not competent. We look like we’re in control; really, we’re just along for the ride.
And the risk isn’t limited to math or logic. In “wicked” domains — ethics, design, law, writing — there may be no single right answer. What matters is the ability to justify, adapt, revise, and notice what doesn’t quite fit.
That capacity comes from friction — from having built the internal scaffold of reasoning.
AI gives us output. But it skips the reasoning. It removes the friction — and with it, the growth.
From Work to Erosion
In a recent essay, I argued that AI is reshaping work not by replacing entire jobs, but by separating judgment from execution: decisions from actions. Tools like GPT, Copilot, and dashboards take over the action-level tasks. What remains human is the ability to frame problems, make judgments, and verify outcomes.
In that example, Ada, a software engineer, wasn’t made obsolete. Her job changed. Execution was automated. Judgment was not.
But here’s the deeper risk: what if using AI to execute also erodes our ability to decide?
That’s the dynamic explored here. When AI lets us skip foundational steps—whether in calculus, writing, or design—it removes the very scaffolding that enables judgment to form.
At first, it feels like acceleration. But over time, it becomes erosion.
The erosion of action-level skill becomes the erosion of decision-level agency.
What Can Be Done
The goal isn’t to abandon AI. It’s to use it without losing ourselves.
That means:
- Choosing tools that show their work, not just their output.
- Practicing skills we no longer “need,” because they still underpin everything else.
- Teaching not just what to do, but how to decide, how to verify, and how to notice what’s missing.
What’s at stake isn’t just productivity. It’s agency.
Final Thought: The Skills We Skip Are Still Ours to Build
When we let AI do the scaffolding for us, we don’t just skip steps—we weaken the very structures that make thinking, reasoning, and creating possible.
The skills we skip don’t vanish. They decay. Quietly. Until we need them—and find we’ve forgotten how they worked.
So yes, use AI. But build the scaffold anyway.
Because the point isn’t just getting it right — it’s still knowing what right looks like.
That’s the kind of knowing worth protecting.