Can AI Allocate Better Than Traditional Institutions?
Efficiency, fairness, and redress in the age of AI
A reader recently asked a deceptively simple question: If AI systems are becoming institutions, can they actually do better than the ones they’re replacing?
But before we go deeper into performance or fairness, we should ask something more basic: why are we turning to AI in the first place? The answer, more often than not, is convenience.
AI systems promise to reduce administrative burdens, scale evaluations, and produce consistent outputs. They help organizations avoid the messiness of subjective judgment, slow deliberation, and case-by-case exception-making. In short, they are attractive precisely because they make allocation easier—not necessarily better.
And this convenience comes with a cost: the smoother the system, the harder its workings are to question. When processes feel seamless, we stop noticing them, and we stop interrogating their logic.
For instance, Hilke Schellmann, in her book The Algorithm, documents how companies increasingly adopt AI hiring tools that screen applicants based on facial expressions, vocal tone, or inferred personality traits. One hiring manager put it plainly: “The system just gives us a score. We don’t really know what it’s based on, but it’s fast.”
Convenience here isn’t just a motive—it becomes a justification.
And that should worry us. Because at the heart of institutional power lies a fundamental challenge: the allocation of scarce resources. Who gets access to opportunity, support, attention, or trust? And how do we justify those decisions?
In the age of AI, this question surfaces along three dimensions: efficiency, fairness, and redress.
Efficiency: What Are We Optimizing For?
AI systems promise efficiency. They operate at scale, minimize friction, and outperform humans on many narrow tasks. But allocation in human systems is rarely narrow.
Efficiency only matters after we’ve decided what counts as success—and that’s not a technical choice, but a values-driven one.
In hiring: Are we optimizing for long-term retention? Short-term output? Cultural fit?
In education: Should we prioritize academic scores, future income, or equitable access?
In healthcare: Do we focus on recovery odds, or treat the most vulnerable first?
AI systems demand that these values be made precise—often mathematically. But in practice, goals are fuzzy, contested, and contextual.
In a forthcoming work on human–AI collaboration in job–worker matching, we show how even small shifts in a model’s objective—such as prioritizing average success probability versus robustness—can yield very different worker selections. What appears efficient under one criterion often privileges a particular performance profile, even when other candidates would succeed under uncertainty.
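To make this concrete, here is a deliberately tiny sketch. It is not the model from that paper, and the candidates and numbers are invented; it only shows how the choice of objective, on its own, changes who gets picked.

```python
# Toy example: each candidate has an estimated success probability under
# three hypothetical scenarios. Ranking by the average versus by the worst
# case selects different people, even though the candidates themselves
# have not changed.
candidates = {
    "A": [0.90, 0.40, 0.85],   # strong on average, fragile in one scenario
    "B": [0.70, 0.68, 0.72],   # unremarkable average, robust everywhere
    "C": [0.95, 0.35, 0.92],
    "D": [0.65, 0.66, 0.64],
}

def select(k, score):
    """Pick the top-k candidates under a given scoring rule."""
    return sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)[:k]

by_average = select(2, lambda probs: sum(probs) / len(probs))
by_worst_case = select(2, min)

print("Top 2 by average success probability:", by_average)    # ['C', 'A']
print("Top 2 by worst-case (robust) success:", by_worst_case)  # ['B', 'D']
```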
Efficiency, then, is not neutral. It is an encoding of what we choose to value.
Fairness: Comparing Biases
Even without explicit prejudice, decision systems can reflect and amplify structural inequities.
In some recent work, we examined how seemingly objective evaluations—when combined with limited information and institutional risk aversion—can systematically disadvantage certain groups. Not because of individual intent, but because of how noise, uncertainty, and caution interact at scale.
One insight is that institutions often set conservative thresholds to avoid mistakes. But when those thresholds are applied to noisier signals—say, from candidates with less conventional backgrounds—they lead to higher exclusion rates. The system penalizes variance, even when that variance carries valuable signal.
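A toy simulation makes the mechanism visible. This is my own illustration rather than the model from that work, and every number in it is arbitrary: two groups have identical true quality, one group's signal is noisier, and a cautious institution scores candidates by a pessimistic lower bound before applying a fixed cutoff.

```python
# Two groups with identical true quality; group B's signal is noisier.
# A risk-averse institution scores each candidate by a lower confidence
# bound (signal minus a penalty for uncertainty) and admits those above
# a fixed cutoff.
import random

random.seed(0)
N, CUTOFF, CAUTION = 100_000, 0.5, 1.0
NOISE = {"A": 0.1, "B": 0.4}   # group B: less conventional background, noisier signal

admit_rate = {}
for group, sigma in NOISE.items():
    admitted = 0
    for _ in range(N):
        quality = random.gauss(0.6, 0.2)           # same true quality distribution
        signal = quality + random.gauss(0, sigma)  # noisier measurement for group B
        score = signal - CAUTION * sigma           # conservative, variance-penalizing score
        admitted += score >= CUTOFF
    admit_rate[group] = admitted / N

print(admit_rate)  # roughly 0.50 for A versus 0.25 for B, despite identical quality
```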
These dynamics resemble well-known behavioral patterns. When people (or institutions) are unsure, they often choose what feels safest. But “safe” choices often align with familiar profiles—and familiarity isn’t the same as fairness.
This raises a crucial point: fairness isn’t a property of a model alone. It’s a function of the model, the environment it operates in, and the institutional choices around it.
If a human decision-maker is biased, that bias might be inconsistent, unconscious, or improvisational.
If an algorithm is biased, its behavior can (at least in theory) be audited, documented, and corrected.
That’s a potential advantage—if we build the infrastructure to make those corrections possible. Transparency, oversight, and public reasoning are essential. Without them, systematic bias becomes invisible and unchallengeable.
Redress: Who Do You Appeal To?
Institutions, when they work well, offer recourse. There are processes for appeals, complaints, or explanations. But even in traditional systems, these mechanisms are often limited or inaccessible.
How many job applicants have meaningfully appealed a rejection?
How many students have overturned an admissions decision?
How many people denied a loan had the tools to contest it?
AI doesn’t necessarily make redress worse. But it can make the absence of redress harder to see.
When people are told “the algorithm decided,” it can feel like the end of the conversation. There’s no name, no rationale, no one to reason with. Even those deploying the model might not fully understand its logic.
That’s not just a technical problem. It’s a civic one.
If individuals can’t understand or contest the systems that judge them, the result is often disillusionment or disengagement. In ongoing research, I’ve been exploring how people adjust their effort when outcomes feel opaque or unresponsive. When effort seems decoupled from reward, people respond accordingly. What looks like underperformance may simply be rational withdrawal from an illegible game.
And when these behaviors get fed back into training data or performance metrics, the system can end up learning that certain groups are less “engaged” or “qualified”—not because they are, but because they were excluded to begin with. Thus begins a loop of misinterpretation and misallocation.
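Here is a stylized version of that loop. The update rule and the numbers are mine, chosen only to show the shape of the dynamic, not drawn from the research above: both groups have the same ability, but the group that starts with a small selection gap rationally scales back effort, the metric reads that as lower quality, and the gap compounds.

```python
# Both groups have the same underlying ability. Group Y starts with a small
# selection gap (e.g. from the noisier-signal dynamic above). Effort follows
# expected reward, the system reads observed performance as ability, and it
# discounts groups that look less "engaged" than its benchmark.
ABILITY = 1.0
BENCHMARK = 0.5                  # what the system expects a "qualified" group to show
rate = {"X": 0.50, "Y": 0.40}    # initial selection rates

for rnd in range(1, 6):
    for group in rate:
        effort = rate[group]                  # rational response: effort tracks expected reward
        observed = ABILITY * effort           # the metric conflates effort with ability
        rate[group] *= observed / BENCHMARK   # the system downgrades "less engaged" groups
    print(f"round {rnd}: " + ", ".join(f"{g}={r:.2f}" for g, r in rate.items()))
```

In this toy run, X holds steady at 0.50 while Y collapses toward zero within a few rounds, even though ability is identical everywhere in the setup.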
Toward Institutional-Grade AI
If AI systems are going to play institutional roles, they need to meet institutional standards:
Clear articulation of goals and tradeoffs.
Ongoing monitoring of impact across groups (a sketch of one such check follows this list).
Transparent processes and avenues for appeal.
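On the monitoring point, the check itself can be mundane. A minimal sketch of the kind of audit involved: the 0.8 tolerance below echoes the familiar four-fifths rule of thumb from disparate-impact analysis, and both the metric and the tolerance are themselves institutional choices, not defaults I am recommending.

```python
# Compare selection rates across groups and flag large disparities.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += bool(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparity_flags(rates, tolerance=0.8):
    """Flag any group whose rate falls below `tolerance` times the highest rate."""
    best = max(rates.values())
    return {g: r / best < tolerance for g, r in rates.items()}

decisions = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
rates = selection_rates(decisions)
print(rates)                   # A is selected at about 0.67, B at about 0.33
print(disparity_flags(rates))  # B is flagged for review
```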
We’ve made it easy to scale decisions. But legitimacy is harder. And in matters of justice, it’s not the speed of a decision that matters—it’s whether anyone can speak back to it.
Further Reading
If you're curious about the ideas in this post—job-worker matching, emergence of institutional bias, feedback loops, and fairness in algorithmic systems—I've written about them more formally elsewhere. Feel free to reach out if you'd like links or want to discuss further.