Discussion about this post

Manu Sharma:

Interesting post. A few other things that might be worth diving deeper into:

I think we can agree that institutions are needed, and that much of the time (if not most of the time) they need to allocate scarce resources - money, attention, seats in a program or course, healthcare, you name it. After all, the entire discipline of economics deals with the allocation of scarce resources.

So in my mind, there are three fundamental questions here: (1) Can AI allocate resources more efficiently? (2) Can it allocate them more fairly? (3) What are the mechanisms of redress if someone disagrees with an allocation?

#1 is a hard question in itself, because the notion of “efficiency” as it applies to human interactions inherently comes down to value judgments: efficient at maximizing what, and for whom? That question needs an answer before any efficiency argument can be made in favor of AI.

#2 is interesting because arguably any system designed to allocate scarce resources will be biased in some way - almost by definition. The harder question is whether there is any way to objectively compare one bias against another. Say a human-only system overwhelmingly penalizes Black people while an AI system overwhelmingly penalizes women: is one preferable to the other? Here I argue that the presence of a model is better than a human, because the bias is more likely to be systematic rather than random - and a systematic bias can be measured and addressed with rules and regulation (see the sketch below). Credit scores and fair-lending laws are a great example here.
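To make the "systematic, therefore addressable" point concrete, here is a minimal sketch of auditing a fixed decision rule over a population. The model, threshold, and data are all hypothetical; the point is only that a deterministic rule can be replayed over every applicant, so its group-level disparity collapses to a single, regulatable number:

```python
# Hypothetical sketch: why a model's bias is auditable in a way ad-hoc
# human judgments are not. The same decision rule is applied to everyone,
# so we can replay it and measure group-level disparities directly.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def score(applicant):
    # Stand-in for any fixed model; what matters is that it is
    # deterministic, so the audit below is reproducible.
    return applicant["income"] * 0.6 + applicant["history"] * 0.4

applicants = [  # toy data, invented for illustration
    {"group": "A", "income": 0.9, "history": 0.8},
    {"group": "A", "income": 0.7, "history": 0.6},
    {"group": "B", "income": 0.8, "history": 0.5},
    {"group": "B", "income": 0.4, "history": 0.7},
]

THRESHOLD = 0.6  # hypothetical cutoff
decisions = [(a["group"], score(a) >= THRESHOLD) for a in applicants]
rates = approval_rates(decisions)
print(rates)  # {'A': 1.0, 'B': 0.5}
# Demographic parity difference: one number a regulator can set limits on.
print(max(rates.values()) - min(rates.values()))  # 0.5
```

A committee of human reviewers can't be replayed like this; the model can, which is what makes remedies like fair-lending audits workable.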

Mechanisms of redress arguably depend entirely on the institution - and institutions, at least for now, are still overwhelmingly controlled by humans. In theory there can be more mechanisms of redress with humans than with AI. But in practice, how many of the 98% of applicants rejected by top-tier universities actually get to appeal their decisions? Even before AI started keyword-scanning resumes, how many job applicants got to appeal theirs? Does the insertion of AI into these processes actually erode our right to redress? It conceivably might, if humans start to shrug and say “well, the AI said you weren’t good enough” - but was that not happening even before AI? Would any of these decisions change on appeal, pre- or post-AI? So perhaps this again comes down to the bias question, and to the fact that we must force explainability on our models (see the sketch below).
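To make "forcing explainability" concrete, here is a minimal sketch in the spirit of the adverse-action reason codes that fair-lending rules require. The weights, features, and threshold are all hypothetical; the point is that a simple scorecard decomposes every rejection into per-feature reasons, which is exactly what a rejected applicant needs to ground an appeal:

```python
# Hypothetical sketch of "forced explainability": a linear scorecard whose
# every rejection can be broken down into per-feature contributions,
# loosely modeled on adverse-action reason codes in fair lending.

WEIGHTS = {"income": 0.5, "history": 0.3, "debt_ratio": -0.4}  # assumed model
THRESHOLD = 0.55  # assumed cutoff

def explain(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approved" if total >= THRESHOLD else "rejected"
    # Reason codes: the features that hurt the score most, worst first.
    reasons = sorted(contributions, key=contributions.get)
    return decision, total, reasons

decision, total, reasons = explain(
    {"income": 0.6, "history": 0.7, "debt_ratio": 0.9}  # toy applicant
)
print(decision, round(total, 2))        # rejected 0.15
if decision == "rejected":
    print("top reasons:", reasons[:2])  # concrete grounds for an appeal
```

The appeal then has something to bite on: the applicant can contest a specific input ("my debt ratio is wrong") rather than an opaque verdict, which is a meaningfully stronger form of redress than "the AI said no."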

