Your hotel lobby needs renovations, but peak season—when the cash flows—is months away. Still, you need those upgrades in place before vacationers arrive, so you don't have to cut capacity or offer a subpar experience.
You apply for a bank loan: $100,000 will do the trick. Denied. You’re furious. Does it matter who denied you? What if it wasn’t a loan officer at the bank but rather an algorithm?
Research by Harvard Business School’s Elisabeth Paulson finds that people prefer bank managers over algorithms to make lending decisions. Among the study’s 9,000 participants, respondents were 4.3 percentage points more likely to choose a human decision-maker in a hypothetical loan scenario. The findings also suggest that participants care more about whether a decision-maker—person or algorithm—efficiently reduces collective risk than about whether its decisions are fair.
“Trying to understand how the public feels about algorithms depending on their features is an increasingly important topic, and there's really not much known about it right now,” Paulson says. She and coauthor Kirk Bansak, an assistant professor at the University of California, Berkeley, detail their findings in “Public Attitudes on Performance for Algorithmic and Human Decision-Makers,” published online by the journal PNAS Nexus in December 2024.
Questions of algorithmic decision-making have permeated society. Not only do algorithms determine which TikTok video plays next in a user’s feed, but they increasingly guide hiring, medical care, and security. But with every algorithm, society will need to contend with a broader question: Would we feel better about the result if a human were making the call? Sometimes, yes—but, sometimes, no.
Who should approve a bank loan?
Paulson and Bansak set out to understand how people evaluate algorithmic and human decision-makers, and whether their evaluation criteria differ based on who—or what—calls the shots. In addition to the bank loan scenario, they probed another high-stakes decision: whether to release a criminal defendant before their trial.
The researchers randomly presented pairs of decision-maker profiles—some human, some algorithmic—to their 9,000 participants and asked them to choose one to make loan or pretrial release decisions. Each profile included:
Data about the decision-maker’s efficiency in preventing risky outcomes, such as crimes committed after release or loan default rates.
The rate at which the decision-maker incorrectly denied White and minority defendants and applicants, comparing the rates to gauge fairness.
Paulson says that the results challenged her assumptions. She expected people to be more concerned about the fairness of algorithms, “given the ongoing academic and media engagement with issues of algorithmic fairness.”
However, the team found that participants were:
7.6 percentage points more likely to choose a person to judge pretrial release decisions.
4.3 percentage points more likely to pick a human bank manager to evaluate loans.
Fairness, defined as equal treatment across racial groups, was consistently the least important factor in respondents’ evaluations—regardless of whether they were judging human or algorithmic decision-makers. Instead, respondents overwhelmingly favored efficiency, keeping crime or default rates low.
Decision quality matters most
There were nuances, of course. While the sample generally leaned toward choosing humans for important decisions, a significant portion of respondents—about one-third—preferred algorithms, viewing them as fairer and more effective than humans.
Political differences and comfort with artificial intelligence mattered, too:
Republicans were nearly three times more likely than Democrats to prefer humans in the pretrial release scenarios.
Respondents who said they were pessimistic about the use of AI in society picked humans more than twice as often as optimists did.
Paulson says the most important finding, perhaps, was that superior performance matters most. If algorithms can show vast improvement over humans, people will likely prefer them.
"If you can show that on accuracy, you can actually do much better [with an algorithm], and that the other performance metrics don't suffer, then that's probably sufficient. But demonstrating that may not be easy.” Paulson says.
While algorithmic fairness is a top priority in public discussions of whether to turn over key decisions to algorithms, Paulson’s findings show that people prioritize effectiveness in decision-making, even above fairness.
For leaders trying to decide
To narrowly focus on beliefs about algorithms and humans, the researchers didn’t evaluate perceptions of people who use algorithms to help make high-stakes decisions, a question with many more potential variables. While the researchers didn’t set out to study questions about implementation, their findings offer considerations for organizations trying to adopt AI:
Not everyone prefers a human decision-maker. While respondents in Paulson’s study were more likely to choose a person, sizable segments were more open to letting an algorithm decide.
Ethical and legal questions still abound. The researchers acknowledge that in the real world, policymakers and companies can’t ignore questions about fairness. They will need to consider due process and data privacy, for example.
Trust and transparency are paramount. People ultimately care most about decision quality, so organizations will need to ensure—and ideally, demonstrate—that whatever system they use achieves those aims.
A human may have a better bedside manner than a detached robotic voice when breaking the news that your hotel lobby renovation loan has been denied—but it’s still been denied. If you then learn that the algorithm has a proven track record of outperforming human loan officers, would that make the news sting any less? Maybe not.
In the end, this research suggests that what matters most isn’t who—or what—makes the decision, but how well they make it.
Bansak, Kirk, and Elisabeth Paulson. "Public Attitudes on Performance for Algorithmic and Human Decision-Makers." PNAS Nexus 3, no. 12 (December 2024).

