नेति नेति
Not this, not this.
— Bṛhadāraṇyaka Upaniṣad
The Frame Before the Frame: A Prehistory of Discovery
Long before there were "scientists," there was science. Across every continent, humans developed knowledge systems grounded in experience, abstraction, and prediction—driven not merely by curiosity, but by a desire to transform patterns into principles, and observation into discovery. Farmers tracked solstices, sailors read stars, artisans perfected metallurgy, and physicians documented plant remedies. They built calendars, mapped cycles, and tested interventions—turning empirical insight into reliable knowledge.
From the oral sciences of Africa, which encoded botanical, medical, and ecological knowledge across generations, to the astronomical observatories of Mesoamerica, where priests tracked solstices, eclipses, and planetary motion with remarkable accuracy, early human civilizations sought more than survival. In Babylon, scribes logged celestial movements and built predictive models; in India, the architects of Vedic altars designed ritual structures whose proportions mirrored cosmic rhythms, embedding arithmetic and geometry into sacred form. Across these diverse cultures, discovery was not a separate enterprise—it was entwined with ritual, survival, and meaning. Yet the tools were recognizably scientific: systematic observation, abstraction, and the search for hidden order.
This was science before the name. And it reminds us that discovery has never belonged to any one civilization or era. Discovery is not intelligence itself, but one of its sharpest expressions—an act that turns perception into principle through a conceptual leap. While intelligence is broader and encompasses adaptation, inference, and learning in various forms (biological, cultural, and even mechanical), discovery marks those moments when something new is framed, not just found. [A future essay will take up this broader view of intelligence—and how discovery both draws from it and transcends it.]
Life forms learn, adapt, and even innovate. But it is humans who turned observation into explanation, explanation into abstraction, and abstraction into method. The rise of formal science brought mathematical structure and experiment, but it did not invent the impulse to understand—it gave it form, language, and reach.
And today, we stand at the edge of something unfamiliar: the possibility of lifeless discoveries. Artificial intelligence systems, built without awareness or curiosity, are beginning to surface patterns and propose explanations, sometimes in ways we do not fully understand. If science has long been a dialogue between the world and living minds, we are now entering a strange new phase: abstraction without awareness, discovery without a discoverer.
AI systems now assist in everything from modeling black holes to predicting protein structures to discovering symbolic equations. They parse vast datasets, detect regularities, and generate increasingly sophisticated outputs. Some claim they're not just accelerating research, but beginning to reshape science itself—perhaps even to discover.
But what truly counts as a scientific discovery? This essay examines that question. Building on my earlier essay, Can AI Know Infinity?, I argue that today’s AI excels at recognizing structure, but not at reframing it. It doesn't invent abstractions, ask better questions, or propose new ways of seeing. And that distinction—between fitting the world and reimagining it—is what separates tools of discovery from discovery itself.
That shift won’t come from more data or larger models alone. It will come when the process of discovery—of conceptual leap, reframing, and abstraction—is itself understood, modeled, and encoded in AI systems. That means more than training on outcomes. It means building systems that can interrogate assumptions, generate alternatives, and recognize when the right question hasn’t yet been asked.
Until then, AI remains a powerful extension of our methods—but not yet a partner in our capacity to know.
From Accident to Abstraction: The Anatomy of Discovery
Science has never followed a straight path. Some of its most profound advances began as accidents: fire tamed not by design, but by chance; fermentation discovered in spoiled food; magnetism noticed in peculiar stones. In ancient India, fermented tonics once dismissed as spoilage became staples of Ayurvedic medicine. In Japan, the mold that grew on rice—initially treated as rot—became the basis of saké brewing. Chinese seismographs, once mistaken for ornamental art, quietly recorded distant earthquakes. Maya astronomers, misread by colonial scholars as mystics, had charted Venus with uncanny precision.
What unites these moments is not the accident, but what followed. First came observation—someone noticed what didn’t fit. Then came the question: Why does this keep happening? What could explain it? That act of questioning marked the shift from event to evidence. From there came inference: patterns were abstracted, principles articulated, and new frameworks born.
Centuries later, when Alexander Fleming returned from vacation to find a petri dish where a stray mold was surrounded by a halo of dead bacteria, he didn’t discard it. He asked: What was killing the bacteria? That act of interpretation reframed failure as phenomenon, and led to the abstraction of antibiotics—a concept that would transform medicine.
In 1967, Jocelyn Bell Burnell noticed regular radio pulses in what should have been background noise. Rather than dismiss it as interference, she asked: What could cause such precision? Her insight revealed pulsars—rotating neutron stars whose very existence redefined the known boundaries of stellar life and death.
In each case, the path from accident to insight followed the same arc: a break in the pattern → attention → a generative question → a new abstraction. What mattered wasn’t the anomaly itself, but the decision to treat it as meaningful. Discovery didn’t begin with data. It began with doubt—and the courage to ask what the data might mean.
Even modern breakthroughs echo this path. In 2004, graphene—a one-atom-thick sheet of carbon—was isolated not with cutting-edge equipment, but with adhesive tape and graphite. What began as a playful experiment revealed a material with extraordinary strength and conductivity, earning a Nobel Prize and reshaping materials science. In 2012, CRISPR’s gene-editing potential was unlocked by tracing an odd pattern in bacterial genomes—repetitive sequences that had puzzled scientists since the 1990s. What was once dismissed as noise turned out to be an ancient immune system—and a tool for rewriting DNA with unprecedented precision.
This is the anatomy of discovery: not the stumble, but the interpretation. Not the data, but the decision to ask what it might mean.
Discovery doesn’t begin with data. It begins with doubt—and the courage to see meaning where others see noise.
What Counts as Scientific Discovery?
Discovery is not a fixed category. It resists sharp definitions, because it spans a spectrum: from observing the unexpected to formulating the ideas that reframe it. Philosophers of science have long debated this terrain—and today, so must we.
If we are to take seriously the claim that AI systems might discover, we must ask: what truly distinguishes scientific discovery?
It is not just recognizing patterns or optimizing predictions. It is a leap of thought—a shift in perspective that unifies the disconnected or reshapes the problem space itself.
Newton did not just describe motion—he invented the ideas of force and mass. Maxwell did not just model electricity—he revealed it as one face of a broader electromagnetic field. Einstein did not improve Newton’s equations—he redefined time and space.
What unites these moments is not computation, but conceptual reframing.
By contrast, today’s AI systems excel at finding structure within existing models. They classify, interpolate, compress. They identify anomalies—but only those defined within a frame set by others.
They do not ask: What if this frame is wrong?
That kind of question requires epistemic agency: the ability not just to work within a model, but to challenge, revise, or replace it—to see a new abstraction where none existed.
Until AI can interrogate its assumptions, introduce new variables, or discard an entire approach in favor of a more coherent one, we should hesitate to call its outputs discoveries in the full scientific sense.
Yes, AI is powerful. It expands our reach, accelerates our work. But science at its core does not begin with better answers.
It begins with better questions.
And yet, across history, the ability to ask deeper questions has often followed the invention of better tools.
Tools That Transformed — And Where AI Fits
Scientific progress has been shaped not only by theories, but by the instruments that expanded our ability to observe, measure, and imagine.
The telescope expanded our vision beyond Earth, revealing moons, galaxies, and the curvature of spacetime. The microscope opened up unseen biological worlds, from single cells to viral structures. The computer enabled the simulation of complex systems and large-scale data analysis, powering breakthroughs from genome-wide association studies to gravitational wave detection.
These tools didn’t merely speed up science—they reshaped it, expanding the frontier of what could be seen, modeled, and imagined, and changing the kinds of questions we could ask and the scale at which we could explore. But crucially, none of them framed those questions on their own. Each breakthrough still depended on human interpretation, creativity, and judgment.
Artificial intelligence belongs to this lineage. Today’s systems can compress structure, generate approximations, assist experimentation, and surface patterns invisible at human scale. They're already embedded across the sciences, accelerating discovery in ways both practical and profound.
But acceleration is not reframing. Tools that extend our senses or calculations still operate within the conceptual boundaries we set. The telescope revealed Jupiter’s moons—but it was Galileo’s interpretation that reshaped our model of the cosmos. The computer simulated atomic collisions—but it was human insight that gave us quantum theory.
This is the critical distinction when considering AI: not whether it is useful—it clearly is—but whether it is different. Does it mark a break with the tradition of instruments, or merely a new peak within it? So far, its strength lies in expanding the reach of science, not in redefining its foundations.
Whether AI will ever make that leap—from assistant to theorist—remains an open question.
But increasingly, we’re told it already has.
Today’s Claims About AI Discovery
In recent years, several AI achievements have been celebrated not just as accelerants to science, but as agents of discovery. Headlines proclaim that AlphaFold has “solved” a grand scientific challenge. Symbolic regression tools are said to have “rediscovered” laws from classical mechanics. Some suggest that AI will soon generate scientific theories entirely on its own.
These claims signal a shift in narrative: from AI as instrument to AI as discoverer. But this shift rests on a conflation—between solving well-defined problems and redefining what the problem even is.
To assess these claims, we must ask: What kind of intellectual work is being performed? What is being optimized—and what is being reimagined?
Let us examine the evidence, example by example.
AlphaFold: Precision Without Principle
In 2021, DeepMind’s AlphaFold was hailed as a watershed moment in computational biology. Trained on decades of structural data, it could predict the three-dimensional shapes of proteins from their amino acid sequences with astonishing accuracy—sometimes rivaling results from labor-intensive techniques like X-ray crystallography or cryo-electron microscopy. The accomplishment was so impactful that it formed part of the work cited in the 2024 Nobel Prize in Chemistry.
To appreciate the scale of the breakthrough, consider the scientific backdrop. For over 50 years, the “protein folding problem” was considered one of biology’s most difficult unsolved challenges. Proteins, made up of long chains of amino acids, fold into highly specific 3D shapes that determine their function. While the sequence is known, the forces that drive folding—hydrophobic collapse, hydrogen bonding, electrostatics—create a staggering combinatorial complexity. Predicting a protein’s structure from its sequence alone was considered nearly impossible.
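A standard back-of-the-envelope illustration gives a feel for that complexity (a version of Levinthal’s paradox; the numbers are illustrative assumptions, not measurements). Suppose each residue of a 100-residue chain can adopt just three backbone conformations, and the chain samples them at an optimistic rate of 10^13 per second. Then

$$3^{100} \approx 5 \times 10^{47}\ \text{conformations}, \qquad \frac{5 \times 10^{47}}{10^{13}\ \text{s}^{-1}} \approx 5 \times 10^{34}\ \text{s} \approx 10^{27}\ \text{years}.$$

That is vastly longer than the age of the universe, yet real proteins fold in milliseconds. Brute-force search was never a viable route to prediction.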
AlphaFold changed that. Its architecture combined deep learning with multiple sequence alignments, attention mechanisms, and physical constraints to model inter-residue distances and angles. The model learned from the Protein Data Bank (PDB), generalizing across protein families to predict new structures with unprecedented reliability. It wasn't just accurate—it was fast, scalable, and openly released, leading to a massive expansion of publicly available protein structures. Biologists now regularly use AlphaFold to study proteins that were previously inaccessible to structural analysis.
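To make "modeling inter-residue distances" concrete, here is a minimal toy sketch in Python (not AlphaFold's code; the coordinates are random stand-ins) of the distogram-style representation that structure-prediction models learn to predict: a binned matrix of pairwise C-alpha distances.

```python
import numpy as np

# Toy stand-in for C-alpha coordinates of a 50-residue chain.
# (AlphaFold learns to predict a distribution over such distances from
# sequence; here we simply compute the representation from made-up points.)
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=2.0, size=(50, 3)), axis=0)  # random-walk "backbone"

# Pairwise C-alpha distance matrix: dist[i, j] = ||coords[i] - coords[j]||
diff = coords[:, None, :] - coords[None, :, :]
dist = np.linalg.norm(diff, axis=-1)

# Bin distances into a "distogram": each residue pair gets a distance bin,
# mirroring the binned inter-residue distance targets used in training.
bins = np.linspace(2.0, 22.0, 64)        # 64 bins from 2 to 22 angstroms
distogram = np.digitize(dist, bins)      # (50, 50) matrix of bin indices

# Contact map: pairs closer than 8 angstroms, a common convention.
contacts = dist < 8.0
n_contacts = np.triu(contacts, k=1).sum()  # count each pair once, skip self-pairs
print(f"{n_contacts} residue pairs closer than 8 angstroms")
```

The hard part, of course, is what this sketch takes as given: predicting those distances from sequence alone, which is precisely the mapping AlphaFold learned.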
This achievement is historic. But to understand what kind of achievement it is, we must be clear on what AlphaFold did not do:
It did not propose a new physical theory of folding.
It did not uncover unknown principles of molecular dynamics.
It did not generate new concepts to unify disparate domains of biology.
It did not explain why proteins fold the way they do—or what governs exceptions.
It learned to predict—extremely well—within a well-understood framework. But it did not reshape the framework itself.
Even AlphaFold’s designers have acknowledged that their model “learns the shape,” not the process of folding. It can tell you the likely final structure, but not how the molecule gets there, what intermediates arise, or what pathways fail in disease. In this sense, AlphaFold’s knowledge is kinematic, not dynamic; descriptive, not causal.
And unlike a theory, its knowledge is not portable. Change the conditions—pH, pressure, solvent—or ask it to predict folding in non-natural environments, and the model has no explanation or extension to offer. It generalizes across known data, but it cannot extrapolate to unknown biology. There is no conceptual lens through which it interprets its own predictions.
To see the contrast, consider the 2013 Nobel Prize in Chemistry, awarded for pioneering work that combined quantum mechanics and classical physics to simulate chemical reactions in biomolecules. These multiscale models didn’t just predict molecular behavior—they explained it. They provided a theoretical framework that bridged physics and biology, revealing how enzymes catalyze reactions, how molecular motion drives folding, and how energy landscapes govern function. The breakthrough lay not in recognizing patterns, but in constructing a lens through which those patterns made sense—enabling generalizations across molecules, reactions, and conditions. It reframed how scientists understood the nature of biological activity itself.
AlphaFold is a transformative tool. It is to biology what the telescope was to astronomy—a way of seeing more than we could before. But just as the telescope did not discover the laws of planetary motion, AlphaFold did not discover the laws of folding. It revealed, not reframed.
The Nobel Prize recognized the model’s impact, not its theorizing. It deserves that recognition. But we should not confuse that with epistemic agency. AlphaFold solves a grand challenge. It does not redefine the terms of the challenge itself.
Drug Discovery: High Throughput, Low Hypothesis
Another area where AI has made rapid and celebrated inroads is drug discovery. Startups and pharmaceutical giants alike now use machine learning models to identify candidate compounds, predict biological activity, and optimize properties like solubility and toxicity. The process that once took years can now be compressed into weeks or days. But here too, we must ask: is this discovery—or acceleration?
AI models in this domain typically do three things:
Generate vast libraries of new molecular structures using generative models.
Predict ADMET properties—absorption, distribution, metabolism, excretion, and toxicity—using supervised learning.
Rank compounds based on predicted binding affinity to known protein targets.
These are powerful tools. They significantly reduce search space and cost. But epistemically, they operate within inherited biological frames:
The targets—the proteins or pathways of interest—are specified by humans.
The notion of what constitutes a “drug-like” molecule is grounded in prior chemistry.
The optimization metrics—binding scores, toxicity thresholds—are proxies, not causal models.
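To make the "proxies, not causal models" point concrete, here is a minimal sketch (assuming the open-source RDKit library; the molecules and thresholds are illustrative, and these simple descriptor rules stand in for the learned ADMET models described above):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Illustrative candidate molecules (SMILES chosen for the example).
candidates = {
    "aspirin":   "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine":  "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}

def proxy_score(smiles: str) -> float:
    """Score a molecule with simple drug-likeness proxies.

    The Lipinski-style bounds and the QED score below are proxies for
    "drug-likeness": they encode prior chemistry, not a causal model of
    why a molecule would succeed or fail as a drug.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0
    mw = Descriptors.MolWt(mol)      # molecular weight
    logp = Descriptors.MolLogP(mol)  # lipophilicity estimate
    if mw > 500 or logp > 5:         # hard filter: rough Lipinski-style bounds
        return 0.0
    return QED.qed(mol)              # composite drug-likeness score in [0, 1]

scores = {name: proxy_score(smi) for name, smi in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Everything that makes this ranking meaningful, including the targets, the notion of drug-likeness, and the thresholds, was decided before the first molecule was scored.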
AI excels at proposing answers, not at reformulating questions. It cannot tell us:
Why one compound activates a receptor while a near-identical one blocks it.
What new classes of bioactivity might exist beyond known pathways.
How to define a therapeutic interaction outside the ligand-receptor model.
And when a drug candidate fails in clinical trials—as many do—the model has no explanation. It cannot reason mechanistically about disease or intervention. There is no generative theory underlying its predictions—only patterns drawn from precedent.
In this sense, AI-driven drug discovery is a form of combinatorial optimization, not conceptual innovation. It navigates the map faster. It does not redraw the map.
The distinction matters. AI is revolutionizing pharmaceutical workflows. But it is not reframing biology. Its value lies in throughput—not theory.
Symbolic Regression and the Illusion of Newtonian Insight
One of the more audacious claims about AI in science is that it can rediscover the fundamental laws of physics. Tools like AI Feynman, Eureqa, and SciNet perform symbolic regression: given data, they search for compact equations that describe it well. Feed them a set of trajectories, and they may output F=ma. Given orbital data, they may recover Kepler’s third law.
At first glance, this seems astonishing. But the reality is more modest. These systems do not derive laws from first principles. They optimize over a library of mathematical expressions, selecting those that minimize prediction error while remaining simple. The “discovery” is a form of curve-fitting—sophisticated, but syntactic. This is very different from how Newtonian mechanics was actually forged.
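A minimal sketch makes the mechanics plain (a deliberately tiny expression library on synthetic data; real tools such as AI Feynman search far larger spaces, but the epistemic shape is the same):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "experimental" data: F = m * a, with a little noise.
m = rng.uniform(1, 10, size=200)
a = rng.uniform(-5, 5, size=200)
F = m * a + rng.normal(scale=0.01, size=200)

# A tiny library of candidate expressions over the given variables.
library = {
    "m + a": m + a,
    "m - a": m - a,
    "m * a": m * a,
    "m / a": m / a,
    "m**2":  m ** 2,
    "a**2":  a ** 2,
}

# "Discovery" here is just picking the lowest-error expression.
errors = {expr: np.mean((F - values) ** 2) for expr, values in library.items()}
best = min(errors, key=errors.get)
print(f"recovered law: F = {best}  (mse = {errors[best]:.4f})")
```

Note what the search is handed in advance: the variables m and a, the candidate operations, and the target F. The "law" it returns is the best fit within a frame specified entirely before the search began.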
The difference is not one of accuracy, but of epistemic depth:
AI starts with variables already defined. Scientists have historically invented new ones—like force, mass, or entropy—that reshaped the language of science itself.
AI selects from a predefined library of functions. Scientific pioneers created entirely new mathematical frameworks—like calculus, group theory, or tensor analysis—to describe unfamiliar phenomena.
AI minimizes error over training data. Scientific theories often arise from a drive to explain—to unify diverse observations under a single conceptual framework, even before the data fully demand it.
AI produces symbolic outputs with no internal semantics. Scientific theories embed meaning: they link abstract quantities to measurable reality and offer causal narratives, not just numerical fits.
AI generalizes within domains. Great scientific advances connect across domains—linking motion on Earth to motion in the heavens, or relating symmetry to conservation, as in Noether’s theorem.
These examples reveal the limit—not of AI’s “intelligence”, but of its orientation. It excels at extracting structure from data, but remains bounded by that data’s framing.
But scientific breakthroughs don’t just extract patterns—they rupture assumptions. They don’t emerge from better curve-fitting, but from asking: what if we need a new kind of curve?
To see what that rupture looks like, we must return to the history of conceptual leaps—when scientists didn’t just describe the world more precisely, but changed what the world was understood to be.
What, Then, Does Genuine Discovery Look Like?
If today’s AI shows how far pattern recognition can go, the history of science shows where it stops: at the edge of explanation. Discovery, at its most powerful, has never been just about matching patterns—it has been about changing the frame itself.
These moments didn’t emerge from optimizing within existing models. They arose by stepping outside them—by inventing new concepts, new lenses, and sometimes new mathematics.
Reframing at the Frontier: Canonical Examples
Noether — When Symmetry Becomes Law
In 1915, Emmy Noether transformed physics with a single insight: for every continuous symmetry in a physical system, there is a corresponding conservation law. This wasn’t an extension—it was a redefinition. Noether didn’t fit data to equations. She explained why the universe behaves as it does—embedding meaning into mathematics. Her work laid the foundation for modern field theory, not by extending existing knowledge, but by reframing what counted as an explanation.
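In its simplest Lagrangian form (a textbook rendering, not Noether’s original generality): if the action is unchanged under a continuous transformation $q \to q + \epsilon\,\delta q$, then the quantity

$$Q = \frac{\partial L}{\partial \dot{q}}\,\delta q$$

is conserved along every solution. Time-translation symmetry yields energy conservation; spatial translation yields momentum; rotational symmetry yields angular momentum.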
Maxwell — Unifying Forces through Abstraction
A few decades earlier, James Clerk Maxwell had shown that electricity and magnetism—once thought separate—were two aspects of a single phenomenon. His equations unified them, and in doing so, predicted the existence of electromagnetic waves—including light. This was not curve-fitting. It was conceptual synthesis: the idea that light itself was an electromagnetic wave reframed both optics and electromagnetism under one coherent framework. Maxwell didn't just describe nature. He revealed its hidden structure.
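In vacuum, Maxwell’s equations combine into a wave equation whose propagation speed is fixed entirely by two measured constants of electricity and magnetism:

$$c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^{8}\ \text{m/s}.$$

That this computed speed matched the measured speed of light was the synthesis itself: light had to be an electromagnetic wave.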
Einstein — Questioning the Frame Itself
Einstein’s theory of special relativity began with a conceptual tension: the laws of physics seemed incompatible with the constant speed of light. Rather than resolve this by tweaking equations, Einstein redefined the assumptions—space and time were no longer absolutes but relative to the observer. This wasn’t refinement. It was rupture. And it laid the foundation for general relativity, where gravity emerges not as a force, but as the curvature of spacetime. Einstein reframed the question, not just the answer.
Dirac — When Mathematics Predicts Matter
Paul Dirac sought to reconcile quantum mechanics with special relativity. In doing so, he formulated an equation so elegant it seemed to transcend known physics. His equation not only described the electron—it predicted the existence of antimatter. No one had seen a positron yet. But Dirac’s mathematics said it must exist. And four years later, experiment confirmed it. This was not interpolation. It was deduction from symmetry, guided by the aesthetic conviction that beautiful equations reflect physical truth. The data followed the math—not the other way around.
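The equation itself, in modern notation (natural units, with $\gamma^{\mu}$ the Dirac matrices):

$$\left(i\gamma^{\mu}\partial_{\mu} - m\right)\psi = 0.$$

Its solutions arrive in positive- and negative-energy pairs; rather than discard the negative-energy solutions as unphysical, Dirac reinterpreted them, and the positron followed.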
Feynman — When Every Path Matters
In the 1940s, Richard Feynman reimagined quantum mechanics by asking a strange question: what if every possible path a particle could take actually contributes to the outcome? His path integral formulation didn't improve accuracy. It changed the story. Instead of solving differential equations, he offered a new ontology—a picture of motion that embraced probability, interference, and multiplicity. AI systems today might solve Schrödinger’s equation faster than any human. But they wouldn’t invent a world where every possible path matters. It wasn’t a better answer—it was a stranger question.
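Schematically, the amplitude to travel from $x_i$ to $x_f$ becomes a sum over all paths, each weighted by a phase set by its classical action $S[x]$:

$$\langle x_f \,|\, x_i \rangle = \int \mathcal{D}[x(t)]\; e^{\,i S[x]/\hbar}.$$

Every path contributes; near the classical path the phases interfere constructively, which is why ordinary mechanics re-emerges in the limit $\hbar \to 0$.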
Beyond the Canon: Global Frames of Insight
But such reframing is not the sole domain of a few iconic figures. If we stop there, we risk mistaking a narrow lineage for the full story of discovery.
To speak of discovery only through the lens of celebrated individuals in Western science is to miss a deeper truth: the human capacity to reframe understanding is vast, plural, and unevenly acknowledged.
Long before calculus textbooks bore European names, mathematicians in Kerala had derived infinite series for trigonometric functions—centuries ahead of their formalization in the West. These weren't isolated tricks, but part of a coherent mathematical tradition rooted in observation, approximation, and abstraction.
During the Islamic Golden Age, scholars like Ibn al-Haytham laid the foundations of modern optics and the scientific method, treating vision not as a mystical force but as a geometry of light. His insistence on systematic experimentation reshaped epistemology itself—arguably centuries before Francis Bacon.
In China, astronomers compiled and refined meticulous records over generations, developing models to predict eclipses and planetary motion with striking empirical precision. These frameworks were not merely descriptive; they embodied a philosophy of harmony and regularity.
The Maya constructed intricate calendar systems based on celestial cycles, encoding astronomical knowledge into ritual, architecture, and timekeeping—an intellectual feat that blurred the line between science and cosmology.
And in more recent history, scientists like Chien-Shiung Wu, Rosalind Franklin, and Katherine Johnson reframed their fields. They didn’t just compute. They understood. They changed what was knowable.
These contributions aren’t footnotes to a canonical story. They are reminders that discovery has always been a global, collective, and often contested act of reframing—not just recognizing new patterns, but inventing new ways of thinking, seeing, and explaining.
Reclaiming the Full Spectrum of Discovery
Today’s AI systems are trained on historical data. But the historical canon is not neutral—it reflects centuries of exclusion, erasure, and epistemic narrowing. If we mistake pattern synthesis within that archive for discovery itself, we risk entrenching its blind spots.
To truly evaluate AI’s role in science, we must first expand our sense of what counts as science—and whose insights we count as discovery.
But this isn’t just an academic distinction. The stakes are real.
The Stakes: What We Risk by Misreading AI's Role
This is not a critique of AI’s utility in science. It is a critique of the claim that AI has already begun to discover—in the true, conceptual sense.
AI systems today operate within human-defined frames. They solve posed problems, optimize given objectives, and predict outcomes with striking accuracy.
The danger is not in what AI does, but in what we believe it’s doing.
If we confuse output with insight, we risk outsourcing the core of scientific inquiry: not just solving problems, but deciding what problems are worth solving. The threat is not that AI will replace scientists—but that we will begin to think like it: following outputs instead of challenging frames, optimizing within assumptions rather than questioning them.
Over time, this shift erodes something deeper: our capacity for epistemic agency—the ability to shape the very terms of inquiry. I explore this more fully in AI and the Erosion of Knowing, where I argue that over-reliance on AI can lead to long-term stagnation in human learning.
This isn’t merely a philosophical concern—it has real, structural consequences. It shapes:
How we fund research: Will we prioritize systems that find faster answers, or minds that ask deeper questions?
How we train scientists: Will education emphasize tool use over theory-building, data analysis over conceptual abstraction?
How we allocate credit and trust: Will we celebrate systems for discoveries they cannot yet make, or support the plural human processes that actually generate insight?
So if we truly hope for AI to contribute to scientific discovery—not just accelerate it—what would that require?
From Tool to Theorist: What Would It Take?
If AI is to move beyond being a powerful instrument and become a genuine contributor to scientific discovery, it must become an epistemic agent: capable of generating new concepts, new questions, and new explanatory frameworks.
This requires more than scaling models or training on larger datasets. It demands a shift in orientation—from interpolation to invention, from optimization to insight.
To ground this conversation, I propose five illustrative criteria that could signal meaningful steps toward discovery. The list isn’t exhaustive, nor are the benchmarks definitive. Some AI systems may already be inching toward these capacities. The goal is not to declare a hard boundary, but to clarify what kind of innovation we’re looking for—and what might distinguish true conceptual breakthroughs from increasingly fluent pattern recognition.
1. Conceptual Reframing. Can an AI system invent a new abstraction that reorganizes existing knowledge and reveals connections previously unseen?
Benchmark: Proposes a symmetry principle that unifies multiple physical laws, akin to Noether’s theorem. This would go beyond fitting known data—it would reframe how domains relate.
2. Hypothesis Generation Beyond Existing Models. Can it generate novel, falsifiable hypotheses that deviate from prevailing theories—and are later validated?
Benchmark: Suggests a plausible mechanism for dark energy, or a new foundational principle in quantum gravity, before empirical confirmation.
3. Model Breaking and Anomaly Seeking. Can it detect when a current model fails—not merely in fit, but in explanatory power—and formulate a coherent replacement?
Benchmark: Like Planck noticing the ultraviolet catastrophe and proposing energy quantization—not just flagging the anomaly, but resolving it with a new theoretical framework.
4. Cross-Domain Transfer. Can it draw conceptual bridges across fields that seem unrelated—mapping deep structures from one to another?
Benchmark: Discovers that principles from information theory can model metabolic efficiency, or that topological invariants explain ecological resilience.
5. Self-Interrogation of Assumptions. Can it recognize the limits of its own priors, revise its inference frameworks, and propose alternative models of explanation?
Benchmark: Detects bias in its internal models, critiques the structure of its own training data, and shifts its learning objective—all autonomously.
These are not speculative fantasies. They reflect what human scientists have done—often at great cost, and often in opposition to prevailing norms. If AI were to meet even one of these criteria in a genuine, unsupervised way, it would mark a shift: not just solving problems faster, but reframing what a problem is.
But without such framing, we risk mistaking every statistical regularity for scientific insight. We flatten the meaning of discovery and inflate the role of tools into that of theorists.
Why This Matters
We are at an inflection point. The narrative around AI in science is shifting—from tool to theorist, from assistant to author. If we fail to interrogate that shift, we risk misreading not only what AI is doing, but what science is for.
This matters because narratives shape priorities.
If we treat discovery as a computational achievement, we may erode the very conditions that enable it: deep conceptual labor, epistemic dissent, and plural traditions of knowing.
Institutions may fund faster science, but not deeper science. Educators may train students to prompt models, but not to question assumptions. Researchers may optimize within paradigms, but lose sight of what those paradigms exclude.
This is not a rejection of AI’s power. It is a reminder.
Discovery is not just about finding answers. It is about deciding which questions matter—and why.
That remains a human responsibility. For now.