Despite concerns that artificial intelligence might displace large swaths of workers, people say they are willing to let machines take over many tasks, and even entire occupations, especially if AI can do the work better, faster, and cheaper. Just a handful of professions are viewed as off-limits to automation, among them clergy members and childcare workers.
All told, the public supports automating roughly 30% of occupations based on current AI capabilities, according to a survey of 2,357 people who were asked about 940 occupations. What’s more, when people were prompted to imagine a more advanced version of AI that could outperform humans at a lower cost, support for automation nearly doubled, to 58% of occupations.
Such a sharp increase suggests that resistance to letting AI automate jobs stems largely from concerns over the technology’s capabilities, rather than from social or ethical objections to replacing human labor, says Harvard Business School Assistant Professor James Riley, who served as the principal researcher on the survey.
Meanwhile, respondents were ambivalent or morally opposed to automating a significant number of jobs, even if AI could do the work flawlessly and more cost-effectively.
“Some employees will resist for their own reasons,” Riley says. And even customers might balk if companies “run roughshod over core audiences who value their goods, services, and brand. So there’s certainly a risk to automating and augmenting jobs away.”
Before companies decide where and how to deploy AI to reap productivity gains, Riley says they should consider where and how much the public most values a “personal touch.” Regardless of whether AI can outperform a human, businesses risk turning off both employees and customers by letting the technology take over certain tasks.
Riley shares the study’s findings in a working paper, “Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market,” released in October and coauthored with Simon Friis, a postdoctoral fellow at the Laboratory for Innovation Science at the Digital Data Design Institute at Harvard.
People support using AI as a performance tool
It may seem logical that as AI’s capabilities improve, market forces are likely to drive widespread automation of work. Yet Riley and his coauthor highlight that history tells a different story, noting that economic incentives alone don’t always dictate the adoption of new technologies. The paper points to public resistance to genetically modified crops, nuclear power, and embryonic stem cell research as examples of potentially beneficial technologies that have been initially constrained by social, moral, and ethical concerns.
When it comes to AI, people have a greater appetite for using the technology as a collaborative tool to assist humans, rather than a replacement for them:
Some 94% of respondents favor using current-day AI to augment human work.
That percentage rises to nearly 96% when the survey describes a more advanced version of AI that may emerge in the future.
This suggests that most people are comfortable with the idea of technology helping humans perform work more quickly and efficiently.
“The fact that we can now use calculators to do longhand calculus or algebra means that instead of wasting our time trying to do what have become rudimentary calculations, we can turn our focus to higher-order problems,” Riley says.
“So, the AI optimist is thinking that we're not going to be suddenly replaced all at once and in such a way that none of us have any more purpose, meaning, or jobs. It's a question of whether we can do even more at our jobs with these tools than we would have been able to in the past.”
Which roles should remain human?
When it comes to AI replacing humans, however, people remain ambivalent about automating nearly 42% of professions. Respondents indicated that technical advances alone aren’t enough to win their support for automating many roles, from nuclear power reactor operators and sports medicine physicians to bakers. They feel a strong degree of moral repugnance toward automating a select few occupations—about 12%—including funeral directors, athletes, and artists.
“Respondents are implying that there’s this social and moral boundary marking what we as people, as a species, need to continue to do,” Riley says. “We’re going to need to have a sober conversation informed by some rigorous analyses around where these boundaries are and why.”
Survey respondents matched the demographics of the US adult population on age, gender, ethnicity, and political affiliation.
Leaders must find the ethical line
The survey didn’t ask whether respondents were comfortable with their own occupations being automated—and Riley notes that public sentiment about which roles are fair game for AI automation could, of course, change over time.
“It's easy for respondents to say all this hypothetically, but maybe people won't be as comfortable when it actually starts happening,” he says. “Because there will be more at stake once it’s no longer at a distance or a thought experiment, as opposed to right now.”
For now, Riley recommends that leaders trying to balance AI use with stakeholder trust should:
Prioritize performance in evaluating AI opportunities
Companies planning their shorter-term AI strategies should focus on areas where customers don’t hold strong moral objections and where AI’s performance is already proven, Riley suggests.
Consider AI norms in long-term planning
As companies scale their AI use, planning should go beyond technical constraints and performance to include social and organizational factors, including talent retention and demand-side risks.
Be transparent about AI use
As AI capabilities expand, businesses may want to extend AI into more areas, but Riley says they should openly communicate to stakeholders how exactly they’re applying the technology and why.
Know the risks of crossing lines
For some customers, human presence will be non-negotiable in certain tasks and roles. Executives in some industries should remain aware that automation could outright diminish the value of their products or services and alienate customers.
“There are certain types of products or market sectors where people do care about how something was made as much as the fact that it was made cheaper,” Riley says.
Friis, Simon, and James W. Riley. "Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market." Harvard Business School Working Paper, No. 26-017, October 2025. (These authors contributed equally to this work; author names are listed in alphabetical order.)
