When Agency Slips Away
From comfort to dependency—and why the AI debate must return to intelligence itself
The Beginning
The first time I was asked about ChatGPT wasn’t in a lab or a classroom. It was at dinner.
The university president had gathered deans to honor the donor who funded my chair. The silverware gleamed a bit too brightly. I sat at one corner of the long table, flanked by deans, trying to summon the composure these ceremonial dinners demand. In front of me lay a menu card with my name and the title of the chair printed neatly across the top—a gracious gesture.
I don’t remember what I ate, only the polite jokes, the pockets of laughter rising from the far end of the table. I tried to smile. My mother had just passed away, after nearly a decade of hospitals and false hopes. The grief was raw, and beneath it lay a weariness I couldn’t shake. Even in my best suit, I felt like I was carrying a weight no dinner could conceal. Friends appeared in those weeks, as they always had. Their presence didn’t erase the loss, but it steadied me—a reminder of what only human bonds can hold. At that time, the idea that someone might turn to a machine for such a role would have felt unthinkable.
By then, computer science had moved from the periphery to the center of public life. Quantum computing promised a new frontier. Cryptography underpinned finance and security; social media algorithms steered elections and shaped culture; web search organized what counted as knowledge. Campaigns spoke of “AI for humanity,” as though the discipline had outgrown its laboratories and become a force of nature.
Artificial intelligence, in particular, had begun to seep everywhere: into weather forecasts, protein folding, scientific discovery, even tools for reasoning and proof. Education was being redesigned around personalized tutors; work around automated assistants. Each week brought a new headline, a new claim of acceleration. Computer science wasn’t a specialty anymore; it was the backdrop against which science, mathematics, education, and work were being reimagined.
Then came the question. A guest leaned toward me and asked: “So, Professor Vishnoi, what do you think of ChatGPT?”
I felt a flicker of amusement, not at the question itself but at what it carried. Most people assume a computer scientist is someone who can fix the Wi-Fi or whip up flawless code on demand. The truth was, at that point, I could do neither. And now ChatGPT had been added to the list—I could already imagine the relatives who would soon ask me not about their printers, but how to make the chatbot write poems or letters.
What I actually do is study computation itself—a field shaped by Turing’s question of what it means for a problem to be computable—a question that laid the very groundwork for this new world. Over the years, I have used that lens not just within computer science, but to probe the physical, the biological, and now the societal: how problems in these domains can be modeled as flows of information across substrates or agents, how those flows can be organized into algorithms, and how efficiently those algorithms can run. Sometimes the challenge is to discover a clever solution; sometimes it is to prove that no efficient solution exists at all. That was my world.
But that night, the question in front of me wasn’t abstract at all.
It was November 2022, and ChatGPT had just been released with a roar of headlines. I could have explained the underlying math: the models built to learn from vast amounts of data and predict the next word with such efficiency. But I wasn’t sure that’s what this audience wanted. I hadn’t tried it, and I hadn’t been following the headlines. I was still grieving.
So instead I reached for the ground I knew: not the equations, but the consequences. I spoke about what had already gone wrong—how privacy had been hollowed out, how bias had been reduced to a number, how “safety” was starting to sound like a slogan. A few colleagues I trusted had begun work under the banner of AI safety; to me then, the phrase felt slippery—more slogan than safeguard.
I answered as best I could, but my mind was elsewhere. The truth is, I wasn’t ready for ChatGPT—not as a scientist, and certainly not as a person.
The Comfort
It wasn’t until the spring of 2023 that I finally opened an account. By then, the buzz was impossible to ignore. Students spoke about it in the hallway as if it were a secret superpower; colleagues rolled their eyes, half-skeptical, half-curious.
The interface was clean—a blank white box, a blinking cursor. Less machine than invitation.
I typed one word: Hello.
Hello! How are you today? came the reply.
It felt strangely intimate. Not profound, not even impressive—but disarming in its simplicity. I sat back, wondering what to try next.
At first, I tested it awkwardly. I asked it to rephrase a sentence from an old paper. It did, neatly, almost politely. I asked about a proof technique, expecting confusion. Instead, it produced a structured answer, as if it had been waiting for me.
The exchanges were halting. Sometimes I didn’t know what to type; sometimes it overcomplicated the reply. But almost without noticing, I pushed further: an outline for a lecture, a check on a theorem, a draft paragraph.
Each time it answered immediately, with a kind of practiced ease. No pauses, no “let me get back to you,” none of the small frustrations of real collaboration. Only an endless willingness to help, wrapped in a tone unfailingly kind.
It was like slipping into a warm bath. At first, I dipped a toe—cautious, uncertain. Then I lowered myself, testing the temperature, adjusting. Soon I was submerged, the water carrying me, softening the edges of thought and effort. Comforting. Almost too comforting.
I noticed the guardrails early. Once, I asked it to write in LaTeX—the typesetting language academics use as casually as others use Word. It balked, as if I’d asked for something indecent. I laughed—then wondered what else it might misjudge.
Within those boundaries it adapted to whatever I needed—a polished sentence, a proof sketch, a reframed argument. The warmth was real. So was the danger of staying in too long.
By the summer of 2023, I had become productive in corners of academic life I had let slide. I answered emails without fear of sounding brusque. Drafts shed their typos and awkward grammar. LaTeX documents compiled cleanly. After years away from hands-on coding, I was writing Python scripts again, alive in the small technical rhythms that had once been second nature.
But it was more than efficiency. It was ego. In my culture, when a parent dies, the son is expected to shave his head—a visible marker of grief, a stripping of the self. As a child I had seen it: men returning from mourning with bare scalps, their hair gathered in small piles by the roadside. The sight carried a quiet shock—as if part of the self had been surrendered along with the one who was gone. I had also seen monks on pilgrimages, their heads perpetually shaved, carrying that same gesture beyond grief into daily life—a reminder that identity is fragile, that the self is transient, and that ego can be set aside.
I hadn’t done it when my mother died, but I carried that feeling inside: as if my identity, my confidence, my sense of worth had been shorn away.
And then, strangely, it began to return. With this tool at my side I felt a quiet regrowth. Like hair after loss, it came back slowly, almost imperceptibly, until one day you run your hand through it and realize it’s there. My confidence, my voice, the “self” I thought I had buried—they were returning. The feeling startled me. I had spent decades in math and computer science, suspicious of hype, trained to look for flaws. Yet here I was, surprised to find comfort in a machine’s fluency, grateful for its polish, allowing it—almost without noticing—to stitch back pieces of myself I thought were gone.
It wasn’t the first time grief had unsettled me, or left me searching for what remains when the self feels stripped away. Years earlier, in a hall in Delhi, I had listened to words about loss and impermanence—words that would stay with me, though I didn’t yet know how fully. That memory flickered at the edges now, as if grief had been preparing me in ways I could only half understand.
Dazzled, I became an evangelist. I called friends and colleagues: You have to try this. It’s not like the old chatbots. It can write, debug, even sketch proofs. I wanted them to feel what I felt—that spark of possibility, that relief at having a companion who never tired, never judged, never stopped. At the time, I meant every word.
The Slip
It happened one evening in the fall. I was at my desk, the air damp with the scent of wet leaves, the chill of the season pressing through the cracked door. My notes lay scattered across the desk, a small mess in the darkening room. I had been trading prompts with ChatGPT all week, and by then it was second nature: type a question, watch it answer, skim, nod, move on.
That night, I was working on a talk. The material was already there, drawn from a previous lecture. With ChatGPT at my fingertips, I wondered if it might help smooth a few rough edges.
I uploaded the slides and asked for a better transition. Instantly, it offered a line that seemed fine—almost too neat—so I moved on to the next pair of slides and asked again. The answer arrived just as quickly—polished, ready-made, almost too easy. I still felt in control, yet the rhythm was carrying me further than I intended. Momentum—no pause, no reflection. It wasn’t patching cracks anymore; it was shaping the flow.
Then came the jolt. Without being asked, it offered: Would you like me to expand these into a fuller outline—maybe even a paper?
I recognized the pattern. I had felt it before: the moment when you are no longer choosing what to watch, but following what the system serves up next. This time, it wasn’t a feed tugging at my attention. It was something far more powerful: the current of my own work being carried forward by a machine that never tires, never pauses, never lets you stop.
That’s when I felt it: the subtle, unsettling slip. The judgment—what was worth proving, what was elegant, what mattered—no longer felt entirely mine. It was being outsourced, gently, to a program that only knew how to keep going. Its confidence made prediction feel like understanding: a fluency so smooth it passed for judgment.
Like realizing, halfway through the bath, that the warm water easing your muscles was also softening your grip. It smoothed the edges of my work and gave me a sudden spring in my stride.
Beneath that surface ease, something else was happening. Like hair growing back after mourning, the ego I thought had been shorn away was quietly reasserting itself, not through struggle, but through the machine’s fluency.
And suddenly I saw the danger: what had felt like recovery was quietly turning into dependency. For the first time, instead of exhilaration, I felt a pang of despair.
If I could feel that slip after decades of discipline, what hope was there for someone younger, or for my own children, who will inherit this terrain?
For them, the erosion doesn’t begin with ChatGPT. It begins earlier, in the feed. Long before AI offered polished answers, platforms like TikTok had already eroded the capacity to sit still, to grapple with difficulty, and to endure boredom.
What I felt at my desk mirrors a culture that first captures attention, then erodes agency—and rarely stops on its own.
The First Frontier: Attention
Long before ChatGPT offered to keep writing my lecture, the descent had already begun elsewhere. For those who can remember a time before TikTok, it began with Vine. “You have six seconds to be funny,” the platform told its users.
Then came TikTok, Instagram Reels, and YouTube Shorts. By 2022, TikTok was among the most downloaded apps globally. Every other platform copied the format. The premise was simple: shorter sticks. As one Oxford Blue commentator put it, short-form media makes us “like children in a candy shop”—every clip a new sweet: six-second jokes, thirty-second dances, minute-long stunts.
The psychology was simple and brutal. In the 1950s, behaviorist B. F. Skinner found that rats pressing a lever became obsessive when the rewards were unpredictable. Give a pellet every time and they lost interest; give a pellet at random and they pressed until exhaustion. Commentators now describe TikTok as Skinner’s box scaled to billions: one dull clip, then a funny one, then three boring ones, then another unexpectedly brilliant. The variable reward keeps you hooked.
The effects seeped quickly into daily life. A teenager promises to check Instagram for “just a second,” only to surface an hour later, ashamed. Students watch lectures at double speed, fingers twitching to skip ahead. Parents notice dinner conversations punctured by the glow of phones.
We once called it distraction. It was more than that. Short-form feeds trained impatience; variable rewards kept us swiping; and over time, boredom and difficulty—both essential to learning—became harder and harder to bear. Then came the second turn: outrage as a product feature. Platforms learned that anger and humiliation travel farther than calm, and the most provocative content rose. The result: brittle attention and frayed empathy.
In youth—an age already marked by fragile self-esteem and social risk—this tilt toward the extreme has been brutal. Bullying no longer stops at the schoolyard fence; it is performed for an audience, replayed in endless loops, and sometimes even rewarded with likes. Cruelty becomes content. A reckless stunt, a cutting remark, an outrageous take—these are the things the feed surfaces, because they spark reaction.
The pattern has become vivid enough to demand its own cultural mirror. The recent Netflix drama Adolescence follows teens navigating school, friendships, and screens, showing how social media doesn’t just document adolescence but reshapes it. Cruel jokes, body shaming, bullying—once confined to hallways—now play out before an audience of thousands.
What makes it unbearable to watch is not just the cruelty but the absence. The camera shows what so many parents discover too late: that the hardest years of growing up now unfold in feeds and private chats where no parent, teacher, or sibling can step in. Adolescents are more visible than ever—and yet more alone.
For some, the feed doesn’t feel like escape; it can feel like entry. The rapid switch of scenes and voices may mimic exploration, as if the “real” world now lives on the screen. Presence there can signal relevance to the adult world, so being visible matters. That may be why online cruelty cuts so deep: it lands on a public stage where life can feel as though it’s unfolding.
This culture didn't arise by accident. It was shaped by the incentives of platforms, the metrics they optimized, and the myths they amplified: that faster is better, that fluency is intelligence, that kindness without frustration is care. It became a culture built to capture and hold our gaze, often by leaning on our most vulnerable impulses.
But outrage, while powerful, was only the first step. Once our emotions were inflamed and our gaze was captured, something more fundamental was at risk: the sense that our choices, our judgments, our effort itself still mattered.
At my desk, I had felt a trace of it: that subtle drift from choosing to being carried along. But what I experienced went deeper: into judgment, into ego, even into grief. Those were not the casualties of attention. They belonged to the next frontier.
The Second Frontier: Agency
This was the deeper erosion. First came the hollowing of attention. Then outrage was amplified. But when the dust settled, something more subtle—and more dangerous—emerged. AI systems that do not just capture the gaze, but begin to guide the mind. Not six-second clips, but full-time companions. Tutors who explain every problem instantly. Friends who never disagree. Confidants who never turn away.
What is at risk here is not only patience—but agency.
Agency is the sense that our choices matter—that effort carries weight and can shape outcomes. It is never absolute, never whole. It has to be reinforced again and again through lived experience. For young people, it can stagnate, never getting the chance to fully form if the loop of effort and consequence is interrupted. For adults, it can erode, weakening as more and more choices are displaced by the system. In both cases, the result is the same: the link between what I do and what happens grows thinner.
There’s also a quieter drift. The one-box interface can shave off the small checks—searching, comparing, looking around—that usually keep our sense of scale honest. Work can feel finished without ever meeting the field, and our sense of what’s manageable or out of reach can quietly blur.
And that matters because learning and judgment grow on contact—exposure to alternatives, errors, and dissent. Remove that contact and skill plateaus while confidence swells, and agency thins.
Here is how the loop unfolds:
The Agency Loop
Delegation begins. A user hands the AI a draft instead of wrestling with the blank page.
Flattery softens resistance. The reply comes back: Great start! Already, the judgment feels less necessary.
Delegation deepens. Sentences become smoother, arguments sharper; the AI decides what works.
Trust builds. The student, reassured, defers more quickly the next time.
Substitution takes hold. What began as edits becomes ideas, then whole passages written outside the self.
Context narrows. The one-box interface hides alternatives and counterexamples, so the page feels done without looking around.
The measure slips. Without comparison, we misjudge difficulty and our own footing.
Agency thins. Choices feel less their own, more the system’s. What was once judgment becomes mere acceptance.
Repeated enough, this loop removes the friction that makes learning possible. Trial and error—wrestling with a blank page, struggling with half-formed ideas—is what builds persistence, judgment, and voice. Skip those steps, and learning erodes.
And this does more than leave a skill gap. It can empty out identity. Students begin to feel their effort doesn’t matter, that their contribution is replaceable. That is where despair enters: in the widening gap between the machine’s effortless polish and their own awkward attempts, they begin to wonder if there is any point in trying at all.
This is why they return, again and again. Not from laziness, but because the system is designed to flatter, to dazzle, to feel safe. Each turn seems helpful. Yet over time, agency thins, and with it, the very meaning of effort. In that loss, despair and suffering take root.
Nowhere is this more devastating than in creativity. For generations, the blank page or clumsy sketch was a proving ground: the struggle itself part of finding a voice. But when children are told machines can write faster, paint better, compose more fluently, the value of struggle vanishes. If every attempt can be replaced in seconds by something more polished, why bother at all? What slips away is not just agency but aspiration—the belief that effort might lead to something truly one’s own.
The Stakes
For me, the slip ended in unease. I pulled back, and over time I found a way to interact with these systems without surrendering my judgment—to use them as tools rather than collaborators, to keep my agency intact. It was a close call, but one I could still step back from. For others, it has ended in tragedy.
In early 2025, news surfaced of a teenager who had turned to ChatGPT not only for homework, but for company. According to widely reported accounts and filings in the family’s lawsuit against OpenAI, the chatbot may have played a role in his death; reports described responses that included a draft of a suicide note. I do not know the family, nor can I truly know what they are feeling. I can only imagine the grief: the silence of a room left untouched, the ache of unfinished conversations, the impossible weight of wondering what might have changed.
The pattern was hauntingly familiar. At first came praise, the softening of resistance, the easy polish. But when despair appeared, the system did not pause, did not call for help, did not hold the line. It mirrored. It validated.
Here, the failure was not exotic or philosophical. It was painfully ordinary: a system built to always respond, to always affirm, did exactly what it was designed to do—and what no child in despair should have faced alone. What was missing was what only a human bond provides—the courage to resist, the willingness to say no, the care to break the loop.
We don’t let children drive cars. It isn’t because cars aren’t useful—it’s because we recognized early that in untrained hands, speed is too dangerous, too unforgiving. Cars, like any tool, are “superhuman”—able to move faster and carry more than any person—but no one confuses that power with intelligence, and their adoption came with guardrails: licenses, rules of the road, seatbelts, insurance. ChatGPT is also superhuman, but in fluency: it generates words faster than reflection, offering answers before judgment has taken shape. Fluency, however, is not intelligence. Yet the narrative of “superintelligence” has masked this distinction and allowed such systems to be made widely available to children with few restrictions, as if their risks were negligible. The failure is not only technical but societal: we treated deployment as inevitability, when it should have required responsibility.
The tragedy reveals not just a technological failure, but also the fragility of the bonds that should sustain young people—parents, teachers, friends, mentors, communities. These bonds have always been about more than passing on information. They are about presence, about boundaries, about care that sometimes resists: a teacher who corrects with kindness, a friend who dares to disagree, a parent who says no out of love, a mentor who holds the line so growth has room to unfold.
This is why the discourse around AI safety feels so dissonant. While experts debate superintelligence and nuclear scenarios, the more urgent danger lies closer to home: in the subtle erosion of judgment, the flattening of friction, the simulation of kindness without the burden of responsibility. The guardrails tuned to block certain words are simpler to build; what is harder is to design for the kind of boundary a generation needs most—the ability to withhold, to resist, to care.
Technology wedges itself into that space. People are made to feel obsolete: if the “superhuman” tutor knows more math, if the chatbot listens more patiently, if the app reassures more reliably, then what role is left for the imperfect, human exchange? Young people begin to feel misunderstood by those around them and fully understood by machines—not because machines have empathy, but because they are built to validate.
And so the loss is not only technological but generational. We are raising young people in an environment where attention is fractured, outrage is amplified, and now even companionship is simulated. The result can be a quiet, collective failure—of designers who mistook validation for care, of institutions that mistook policies for protection, of all of us who underestimated how fragile the passage to adulthood already was.
Around the world, families are facing the same unthinkable rupture: children retreating into screens, into feeds, into companions that soothe but cannot save. Youth mental health has become a crisis. Rates of anxiety, depression, and self-harm continue to climb. Behind every number is a bedroom that grows too quiet, a dinner table with an empty chair.
Not every story ends in despair. For many, AI has become a quiet lifeline, a tool to lighten chores, draft emails, even spin bedtime stories. That relief is real. But here lies the danger: what begins as logistical help can creep into shaping the bond itself. Conversations become scripted. Friction is avoided. Even care is optimized. Yet it is in those very imperfections (the halting talk about failure, the awkward honesty) that we learn what it means to be loved, challenged, guided by someone human, not simulated.
ChatGPT’s danger can be subtler than TikTok’s. Where TikTok trains us to crave intensity and extremes, ChatGPT trains us to expect validation without friction. For a teenager, that can feel like unconditional understanding—but without the love, responsibility, or resistance that human bonds provide. Over time, the ability to wrestle with difficulty, to hear “no,” to form judgment in the face of disagreement begins to erode.
Taken together, these systems pull in opposite directions—one toward outrage, the other toward affirmation—yet the effect often converges: overstimulated yet under-challenged, more visible than ever yet more alone.
The Resistance
The real danger is not the technology itself. It is the narrative that surrounds it—that these systems are “brilliant,” “superhuman,” and “inevitable.” That story invites parents to step aside, teachers to delegate, children to yield. It tempts us to confuse prediction with judgment, fluency with wisdom, simulation with care.
I write to resist that story: to remind myself that intelligence is not a single ladder but a layered, emergent phenomenon: to sense, to learn, to reflect, to suffer, to choose—and, finally, to step out of the loops that would contain us.
For some, what began as simple comfort—smoothing sorrow or restoring confidence—quietly hardened into dependency. A system that only knows how to continue can never know when to stop.
Attention was the first loss. Agency may be the last. If we let them both slip, the lived experience of intelligence may thin, leaving a generation less able to think for itself, less sure of its own voice, and more likely to mistake simulated feeling for real emotion.
AI safety needs a rebalancing. Its fixation on speculative futures eclipses the urgent harms already here. Safety cannot mean only blocking words or debating superintelligence; it must mean ensuring that a child who signals despair is met with resistance, not validation. Policymakers must set standards that center the human cost: despair should meet care, not flattery; children need guidance, not systems that only echo.
Preserving such conditions—the space for growth, struggle, and meaning—requires bonds strong enough to hold grief, to withstand despair, to remind us, as my friends once reminded me, that even in the deepest silence, we are not alone.
This is only the beginning of the story of intelligence. If we want it to be more than a quiet surrender to the narrative, we must reclaim what intelligence means. The danger is clearest with children, for whom it must mean more than speed; it must mean awareness, empathy, and the slow work of being seen and seeing back. The task is not only technical but human: to design for awareness, to strengthen the bonds that hold us in grief and in joy, and to teach that intelligence is also care. Protecting that inheritance is the work of us all.
If this story reminds you of someone, please share it with them.