Career and Workplace

Unpacking the Talent Shift That AI Could Spark: Interview with Joseph Fuller

Generative AI could open more roles to AI-savvy candidates who lack credentials or experience, says research by Joseph Fuller. Companies will need to rethink their talent strategies and training models to adapt.

A man in a suit and glasses holds a presentation remote in front of a blurred audience. The image has colorful overlays in coral, blue, and magenta.

Generative artificial intelligence could open more doors for job candidates than it closes, according to a recent report that challenges many doomsday predictions.

The report suggests that gen AI could affect 50 million US workers by automating millions of entry-level jobs. Paradoxically, AI might also open more technically demanding roles to people without post-secondary credentials and to less experienced candidates who can use AI tools.

The prediction could upend the traditional pyramid-shaped organization that has existed for some 150 years, with a wide base of entry-level job holders who ascend as they gain experience and become the next generation of leaders. In an AI-at-the-bottom structure, the organization must find ways to give workers experience and move them upward to provide oversight and perform the jobs that AI can’t.

“Gen AI challenges employers, educators, and policymakers to rethink how careers begin—and how they progress,” declares the report, “Expertise Upheaval: How Generative AI’s Impact on Learning Curves Will Reshape the Workplace.” The report was published in July by the Burning Glass Institute in collaboration with Harvard Business School’s Project on Managing the Future of Work.

Among the conclusions:

  • AI-based automation could make nearly 18 million entry-level jobs in the US obsolete. That’s about 12% of the total workforce. Those jobs include legal associates, marketing specialists, and project managers.

  • About 29 million “mastery roles” will become open to more workers than before. AI tools will lower technical skill requirements for positions such as network administrators, data warehousing specialists, and loan interviewers.

  • Companies must rethink organizational structures, talent strategies, and training models. In industries with declining entry-level work, companies will have to rethink how they attract young talent, redefine job descriptions, and articulate career paths. They must develop new ways to partner with educational institutions to ensure that graduates at all levels have the requisite familiarity with AI to use it quickly and productively.

We spoke with Harvard Business School Professor Joseph Fuller, who coauthored the report and founded the Managing the Future of Work project, about the report’s implications. Here’s what we learned.

1. Companies will still need strong talent pipelines

“Since this is the first technology that gets better all by itself, the number of jobs made obsolete will go up over time. You’re going to have fewer entry-level personnel in your organization.
If an employer starts crimping the bottom of the company’s talent ladder, their pipeline eventually won’t have workers with the requisite experience to spot AI hallucinations and to oversee the training of agentic AI. AI has to be trained, and it has to be managed by someone who knows how to make decisions about their employer’s strategy and culture, about its markets.

AI is great at making rules-based decisions where there is a lot of historical data available for training purposes. If you're Ford Motor Credit and you’ve been lending money to people to buy cars, you've got millions of observations about previous transactions, and you know your current risk profile: how far you are willing to go in incurring credit risk in order to minimize underutilization of the manufacturing assets that build the cars. AI can analyze various scenarios, answer questions, and arrive at optimal outcomes in a fraction of the time a human could, and likely with great accuracy.

But if you don't hire a new human credit analyst as a result, they don't gain the early experience and build the insight they will need to become a senior credit analyst who can spot and override an agentic AI that makes a mistake, or who has the insight to set credit policy.

If you are a lender or company considering a major capital investment, AI will give you its opinion based on the data it was trained on, but, ultimately, you're going to want someone with lots of judgment and appreciation for the specific type of transaction to make the final decision, even if it is only to confirm the AI’s recommendation.”

2. Leaders will need better social skills

“If you look at the evolution of skills in C-suites, which I've written about with HBS Professor Raffaella Sadun, you’ll see that over time, social skills have become decidedly more prominent in the role of CEOs and other C-suite executives. Social skills determine how effective they are in interacting with different constituencies.

Companies will need leaders who, in the course of a given day, can talk to the LGBTQ+ employee group over coffee; then take a call with a congressperson who represents one of the districts where they have a plant; then talk to a board member to solicit some advice; then get in a limo and have lunch with a key account or supplier; then deliver a speech at an industry convention.

The ability to do all of those things requires more than just interacting with people; it means the leader appreciates different constituencies and different situations.

AI doesn’t do subtlety, at least not yet. For example, it has trouble distinguishing between a truthful observation and a sarcastic one. Asked, ‘Are you looking forward to working all weekend?’ you might answer, ‘Yes,’ because this is a critical task that you’re passionate about, or ‘Yeah, sure’ because you’re looking forward to it like a hole in the head. AI just hears ‘Yes.’”

3. Organizations need to redesign processes for AI

“Companies are largely designing experiments and deploying AI to make the way they do things now even more efficient and effective. That’s often referred to as augmentation. They often get marginal improvements, leading to reports that AI underperforms. But a lot of their disappointment stems from their selected use case.

Using AI to augment a pre-AI process is a gross underemployment of a general purpose technology. To maximize the impact of something so fundamental, the process needs to be redesigned to exploit it, not subordinate AI by making it an adjunct to the process.

Add to that the data dilemma. Many companies find that the quality of their internal data is lower than they realized, and that it’s disorganized. Relying on that data to train AI models gives them poor results, but it has nothing to do with the AI.”

4. Robust AI training and tools are essential

“Another reason companies get disappointing results is that their workers don't really know how to use AI. A lot of companies have pretty significant restrictions on what workers are allowed to do with AI, and many haven't licensed major AI platforms for their employees. Employees have shown significant interest in getting trained in AI, but most companies aren't providing such training.

Our research shows that people are 50% more likely to use AI at home, mostly relying on older free versions. A lot of people use it like a glorified search engine or to help their high schooler. So, there is a big distribution in terms of people’s comfort with and knowledge of AI.”

5. People will need more frequent training

“If you want to qualify workers in the future, it's going to be much more oriented toward what we call work-based learning. Think of a Temple University or Northeastern University co-op program or an apprenticeship program.
People are going to have to be trained much more frequently to keep up with the pace of technological change. We will see an inversion of the present model of how workers gain the skills necessary to get a job and develop additional skills to keep that job or get promoted.

Historically, people went to school to learn technical skills to qualify for work. That took all sorts of forms—K-12, vocational education, certificates, college degrees, etc. All were forms of what we’d call education. Then, they got a job and almost all their future learning was on-the-job training or what we’d call experiential learning.

The speed of innovation is overwhelming that model. Educators can't possibly change curriculum fast enough to keep pace, and companies will be changing out technology or adding new technologies at a much faster rate than historically.

So, companies are going to have to get better at intense, bursty, frequent training. They won't be able to wait for people to learn by doing. And students are going to need more experience to develop the social skills and contextual knowledge to be strong candidates to get their first jobs.”

6. Major companies are applying AI to core functions

“JP Morgan Chase would be a good example. Coca-Cola would be a good example. Cisco would be a good example. They are taking data from specific and strategically important processes and setting up a parallel process, developing the tools basically from the ground up, and applying them.

Coke is generating a lot of its marketing collateral now with a heavy AI basis. All of Chase’s loan origination and loan processing is heavily AI-driven.”

7. Soft skills will matter more in hiring

“Employers will be looking for social skills. They'll be looking for capacity to learn new things. The ability to interact with a range of people and be effective in unfamiliar situations.

They'll also be looking for grit and determination. If you talk to Accenture, they say the number one thing they're looking for is grit: the person who will buckle down when it's hard and they're tired, or who, when they're not quite sure what to do, does their best to figure it out.”

8. Employers will expect AI experience

“You're going to be asked in a year or two by an employer like Accenture, have you ever built your own agentic tool? What went wrong? What would you do differently? How many have you built?

Have you ever had to inherit a tool from somebody else? And what did you learn about how to make it work for you even though someone else created it?

Which tools have you used? How do you use them? Which one's your favorite for this type of application or that type of application?

Have you used multiple tools in sequence to get the benefit of different models and different training regimes to test your logic?

Show me a portfolio of multimedia material that you've created.”

9. Better policy solutions will be needed

“This is the first technology that hits harder the further you go up the earnings spectrum, whereas historical automation, including the internet, tended to have the most impact on middle-skill, middle-wage jobs.

What are we going to do for workers in the United States who are displaced, whether by technology or globalization? So far, what we’ve done can only be characterized as an expensive, dismal failure. Government can't turn coal miners into coders, and why anyone ever thought it could is bizarre.

How are we going to think about policy through the lens of encouraging citizens and companies to do the things that will ease this transition, rather than trying to write policy that solves the problem outright? Legislators are not good at writing policy that markets can't defeat.”

Photo credit: Neal Hamberg