AI Systems Are the New Institutions
How algorithms gained decision-making power—and what they replaced
In my previous post, I argued that AI systems—particularly those built on large models—are no longer just tools. They are becoming institutions.
Not in the legal sense. But in function.
They structure judgment, allocate opportunity, and shape legitimacy, just as courts, schools, or hiring boards once did.
This essay explains what that claim means in practice. Not through abstraction, but through concrete, recent events.
When I say "AI," I don’t mean a single system. I mean a distributed set of deployed models—language models, scoring tools, surveillance networks—that now act with institutional authority in many parts of life.
What makes something an institution?
Institutions are not just buildings or bureaucracies. They are systems of trust and authority.
They determine:
Who gets hired.
Who is heard.
Who is protected or punished.
And who gets left out.
They operate over time. They are accepted, contested, appealed to.
And now, AI systems are being inserted into exactly these roles.
From automation to judgment
A 2025 Australian study led by researchers at the University of Technology Sydney found that AI hiring systems widely used in the private sector were systematically discriminating against women, people with disabilities, and candidates with non-standard accents.
Some systems analyzed facial expressions during interviews, penalizing applicants who didn’t smile “appropriately.” Others struggled with speech patterns from non-native English speakers, or penalized pauses—often flagging neurodivergent candidates as "low energy."
Most troublingly, many of these systems were trained on U.S. resumes and corporate data, leading to cultural mismatches in Australian hiring. (source)
The AI didn’t just screen résumés. It functioned as a hiring gatekeeper, shaping who advanced and who disappeared from consideration.
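To make that gatekeeping concrete, here is a minimal, hypothetical sketch of how an automated interview scorer can work. Nothing in it comes from the study itself: the feature names, weights, and cutoff are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class InterviewSignals:
    smile_score: float        # 0 to 1, from a facial-expression model
    pause_seconds: float      # total silence detected in the answers
    speech_confidence: float  # 0 to 1, ASR confidence (drops for non-standard accents)

def score_candidate(s: InterviewSignals) -> float:
    # Illustrative weights: each proxy quietly encodes a judgment about
    # what a "good" candidate is supposed to look and sound like.
    return 0.5 * s.smile_score + 0.3 * s.speech_confidence - 0.02 * s.pause_seconds

CUTOFF = 0.55  # candidates below this line never reach a human

candidates = {
    "A": InterviewSignals(smile_score=0.9, pause_seconds=2, speech_confidence=0.95),
    "B": InterviewSignals(smile_score=0.4, pause_seconds=9, speech_confidence=0.90),  # flat affect, long pauses
    "C": InterviewSignals(smile_score=0.8, pause_seconds=3, speech_confidence=0.60),  # accent lowers ASR confidence
}

for name, signals in candidates.items():
    s = score_candidate(signals)
    print(name, round(s, 2), "advances" if s >= CUTOFF else "silently filtered out")
```

Candidates B and C never reach a recruiter. There is no rejection to argue with, just an absence from the shortlist.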
Prediction becomes policy
Los Angeles County recently launched a machine learning initiative called PREVENT to identify individuals most at risk of becoming homeless.
The system analyzes data across health records, public benefits, and eviction filings, assigns each person a risk score, and flags the highest-risk individuals for early intervention, offering rental assistance or job support before they lose housing. (source)
In a world of limited social services, that score can determine who gets help and who doesn’t. Prediction becomes a gateway.
The model becomes a triage system, quietly encoding institutional priorities.
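The triage dynamic is easy to sketch. Assume, hypothetically, that the program ranks people by predicted risk and funds only as many as this cycle's budget allows; the scores and capacity below are invented.

```python
# Hypothetical triage: rank by model risk score, serve only the top N.
# The capacity number, not the model, decides where help stops.
predicted_risk = {
    "person_1": 0.91,
    "person_2": 0.87,
    "person_3": 0.62,
    "person_4": 0.58,
}

CAPACITY = 2  # slots of rental assistance available this cycle

ranked = sorted(predicted_risk, key=predicted_risk.get, reverse=True)
helped, waitlisted = ranked[:CAPACITY], ranked[CAPACITY:]

print("offered assistance:", helped)
print("not reached:", waitlisted)
```

The institutional priority, serve the highest scores first, is encoded in a single sort; everyone below the capacity line simply waits.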
From oversight to automation
In 2024, New Orleans police used over 200 facial recognition cameras run by a private nonprofit to monitor and arrest suspects in real time—without public disclosure.
The system sent alerts with suspect identities and GPS locations directly to officers' phones, resulting in at least 34 arrests—some for nonviolent offenses. (source)
This surveillance violated a 2022 city ordinance, which limited facial recognition use to violent crime investigations. Yet the system kept running, effectively automating discretionary policing without accountability.
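Stripped down, the alerting pattern the reporting describes amounts to a threshold check. This is a hypothetical reconstruction, not the vendor's code: the threshold, identifiers, and notification function are all assumptions.

```python
MATCH_THRESHOLD = 0.80  # hypothetical similarity cutoff

def notify_officer(identity: str, camera_location: tuple) -> None:
    # Stand-in for the push alert sent to an officer's phone.
    print(f"ALERT: {identity} spotted near {camera_location}")

def on_camera_match(match_score: float, identity: str, camera_location: tuple) -> None:
    # No warrant, no ordinance check, no human review before dispatch:
    # one comparison against a similarity score triggers the alert.
    if match_score >= MATCH_THRESHOLD:
        notify_officer(identity, camera_location)

on_camera_match(0.83, "watchlist_entry_17", (29.9511, -90.0715))
```

Discretion that once sat with a detective is reduced to a single comparison.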
AI governs speech
In 2024, TikTok laid off hundreds of human moderators and leaned heavily on AI to enforce its content policies.
Over 150 million videos were removed in one quarter alone—more than 80% flagged automatically. (source)
Yet users have long noted that AI moderation lacks nuance, often taking down posts about trauma, race, or mental health from marginalized communities.
In response, creators began using “algospeak” (e.g., “unalive” instead of “suicide”) to evade takedowns.
When AI moderates, it doesn’t just enforce guidelines. It structures what can be said and what is suppressed.
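A deliberately crude sketch shows why "algospeak" works. Real moderation systems use trained classifiers rather than a word list, but the failure mode is similar: posts that name a sensitive topic directly get removed regardless of intent, while coded spellings sail through. The term list and example posts below are invented.

```python
FLAGGED_TERMS = {"suicide", "self-harm"}  # illustrative moderation list

def moderate(post: str) -> str:
    # A term match removes the post regardless of intent or context.
    words = post.lower().split()
    return "removed" if any(term in words for term in FLAGGED_TERMS) else "visible"

posts = [
    "resources for anyone struggling with suicide ideation",  # support post: taken down
    "my unalive thoughts got really bad this week",           # coded phrasing: stays up
]

for post in posts:
    print(moderate(post), "->", post)
```

The context that separates a support resource from harmful content never enters the decision, so creators route around the filter instead.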
And this is just the tip of the proverbial iceberg.
The power without the process
These systems don’t gain authority from constitutional law or public oversight.
They derive it from performance (or efficiency):
“It works.”
“It scales.”
“It predicts better than humans.”
But once adopted, their outputs become binding.
And there is often no one to appeal to.
The judgment is made before the human arrives.
And often, the human defers to it.
Why this matters
When institutions were flawed, we could appeal. Argue. Dispute. Reform.
With AI systems, we often don’t know why the decision was made. Or we mistake it for truth.
That’s what I mean when I say: AI systems are becoming institutions.
Not because they were designed to be—but because they function like ones.
They make judgments. They allocate resources. They shape legitimacy.
But unlike human institutions, they offer no explanation, no transparency, and no path for redress.
Interesting post. A few other things that might be worth diving deeper into:
I think we can agree that institutions are needed, and that much of the time (if not most of it) they must allocate scarce resources: money, attention, seats in a program or course, healthcare, you name it. After all, the entire discipline of economics deals with the allocation of scarce resources.
So in my mind, there are three fundamental questions here: (1) Can AI somehow allocate resources more efficiently? (2) Can it allocate them more fairly? And (3) what are the mechanisms of redress if someone disagrees with the allocation?
#1 is a pretty hard question in itself, because the notion of “efficiency” as it relates to human interactions inherently comes down to value judgments. That question is worth asking even before any efficiency argument is made in favor of AI.
#2 is interesting because arguably any system designed to allocate scarce resources will be biased in some way, almost by definition. And then we come to whether there is any way to objectively compare one bias with another. Say a human-only system overwhelmingly penalizes Black people while an AI system overwhelmingly penalizes women. Is one preferable to the other? Here I argue that the presence of a model is better than a human, because the bias is more likely to be systematic than random, and therefore addressable through rules and regulation. Credit scores and laws around fair lending are a great example here.
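One reason systematic bias is more tractable than scattered human bias is that it can be audited directly from the model's outputs. Below is a minimal sketch of a disparate-impact check in the spirit of the "four-fifths rule" used in US employment law; the decision data and group labels are invented.

```python
from collections import defaultdict

# Hypothetical model decisions: (group, was_selected)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, was_selected in decisions:
    total[group] += 1
    selected[group] += was_selected

rates = {g: selected[g] / total[g] for g in total}
impact_ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print("adverse impact ratio:", round(impact_ratio, 2))  # below 0.8 flags disparate impact
```

Because the same check can be run over every decision the model makes, the bias is visible and contestable in a way that thousands of individual human judgments rarely are.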
Mechanisms of redress arguably depend entirely on the institution, and institutions, at this point at least, are still overwhelmingly controlled by humans. Sure, there can theoretically be more mechanisms of redress with humans than with AI. But in practice, how many of the 98% of applicants rejected by top-tier universities actually get to appeal their decisions? Even before AI started keyword-scanning resumes, how many job applicants got to appeal theirs? Does the insertion of AI into these processes actually erode our right to redress? It conceivably might, if humans start to shrug and say, “well, the AI said you weren’t good enough,” but again, was this not happening even before AI? Would any of these decisions have changed if you had appealed your rejection, pre- or post-AI? So perhaps this again comes down to the bias aspect, and the fact that we must force explainability on our models.