Strategy and Innovation

AI Trends for 2026: Building 'Change Fitness' and Balancing Trade-Offs

Many organizations will seek to gain more value from artificial intelligence in 2026. Harvard Business School faculty share five considerations for leaders planning to scale the technology.

Many business leaders have made strides in bringing artificial intelligence into their organizations. In 2026, many will seek to wring more value from the technology.

We asked Harvard Business School faculty what trends leaders might see in the coming year, especially as AI becomes omnipresent. Here’s what they said.

Their comments have been lightly edited for length and clarity.

Tsedal Neeley: Change fitness will become the AI differentiator

AI is no longer an experiment on the side; it’s rewiring how work gets done. And it is shifting from isolated tools that people can choose to adopt (or ignore) to platforms that sit at the center of workflows, decisions, and customer journeys.

When AI becomes a platform, it quietly sets the defaults: how information flows, who has access to what, and which options even show up on the screen. That raises the stakes of getting change wrong. AI means change, over and over again, and in the absence of change fitness, individuals, teams, and organizations will continue to struggle.

What is change fitness?

Change fitness is the capacity to metabolize significant and ongoing change. At the individual level, it shows up as curiosity, experimentation, and comfort working in human–machine workflows. At the team level, it means new collaboration patterns, role clarity, and decision rights that match an AI-driven context.

At the organizational level, it requires modern data foundations, thoughtful governance, and leaders who treat AI as a transformation of work, not just a software rollout. At minimum, everyone needs a 30% digital and AI mindset—enough fluency to use tools, ask good questions, interpret outputs, and redesign work.

The leadership imperative for 2026 is clear: make change fitness a core capability, not an afterthought. Invest in broad AI literacy, redesign workflows (not just jobs), and reward learning speed and outcomes.

Tsedal Neeley is the Naylor Fitzhugh Professor of Business Administration and senior associate dean and chair of the MBA program.

Jon M. Jachimowicz: Will AI make work less meaningful?

We’ve been hearing that 2026 is the year AI becomes mainstream: that more organizational processes will use AI, and that it will become part of the core workflow for a substantial number of employees.

A lot of work is currently being done on the first-order effects, meaning: How do people work with AI? How do we design AI so people can work more efficiently? But in 2026, we also need to start thinking very carefully about the second-order effects: How does AI change my experience of work and its meaning to me?

We're going to experience the first glimpses of what the future looks like when work becomes less meaningful because of AI.

Take customer service, for example. In the past, if you took a United flight and your bag was late, you would talk to a United employee and they could help make it right. And that employee would go home that day and say, “I was really able to help someone.” But now, these conversations are largely handled by AI chatbots—and far fewer United employees get to directly experience how their work can positively impact others.

AI has the potential to distance workers from the beneficiaries of their work. Employees may be less able to see the impact of what they do, and that could drain how meaningful their work feels to them.

Potential efficiency losses

Many people will be surprised by the efficiency losses that may be incurred as a consequence. We know from a large body of research that when people experience their work as deeply meaningful, they are willing to invest more of themselves in it. When AI makes work less meaningful, people may become less willing to invest that effort. That means you’re getting less out of each employee.

So, if AI makes work 20% more productive but 20% less meaningful, what is the actual net benefit of implementing AI? And on a philosophical level: Do we want to live and work in a world that is more efficient but less meaningful?
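
As a back-of-the-envelope sketch, the arithmetic can net out near zero. The figures below are purely illustrative, and the assumption that effort falls one-for-one with meaningfulness is an assumption for the sake of the example, not a research finding:

# Illustrative Python sketch; the numbers and the effort-meaning link are assumptions.
# Suppose output = productivity per unit of effort * effort invested, and effort
# drops in proportion to how meaningful the work feels.
productivity_gain = 1.20   # AI makes each unit of effort 20% more productive
meaning_factor = 0.80      # work feels 20% less meaningful, so effort drops 20%

net_output = productivity_gain * meaning_factor
print(f"net output vs. baseline: {net_output:.2f}")  # 0.96, a 4% net loss under these assumptions

If the link between meaning and effort is weaker, the net can still be positive; the point is simply that the second-order effect belongs in the calculation.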

People have been very excited about the implementation of AI, and a lot of work has focused on understanding how we can use AI to make our work more efficient. But we need to start paying attention to the ways in which AI may also negatively shape the experience of work, affecting both organizational outcomes and the human experience of employees.

Jon M. Jachimowicz is an assistant professor in the Organizational Behavior Unit.

Jacqueline Ng Lane: Leaders will need to balance hidden trade-offs

Whether executives realize it or not, AI is already embedded in the decisions they make every day. The applications are ubiquitous, and growing.

As organizations layer multiple AI systems into their workflows, a critical question emerges: Does the order in which you deploy different AI tools matter? The answer is yes, and it has implications for the types of opportunities that rise to the top.

Each AI type used for decision-making has distinct capabilities:

  • Predictive AI, built on supervised machine learning, excels at pattern recognition, basing forecasts on historical data. It quantifies risks, identifies gaps, and assesses alignment with proven benchmarks.

  • Generative AI, powered by large language models, synthesizes knowledge to surface connections and evaluate novelty.

In a recent field experiment, we asked people to evaluate innovations using two AI recommendation systems—predictive and generative—in different orders. When evaluators received predictive AI recommendations first, they selected solutions with higher innovation scores. When evaluators received generative AI recommendations first, the solutions they picked varied more widely.

The trade-off is real

You can't maximize both simultaneously, and in innovation, we care about both. Average quality matters because solutions must satisfy stakeholder needs. Variety matters because breakthroughs often emerge from the tails—the atypical ideas many evaluators miss.
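
To make the tension concrete, here is a minimal sketch of how optimizing for average quality versus variety yields different portfolios of ideas. The scores and selection rules are made up for illustration; this is not the field experiment's data or method:

# Toy illustration only: hypothetical innovation scores and simplified selection rules.
import random
import statistics

random.seed(0)
pool = [random.gauss(60, 15) for _ in range(200)]  # hypothetical innovation scores

# Strategy A: pick the 10 highest-scoring ideas (maximizes average quality).
top_picks = sorted(pool, reverse=True)[:10]

# Strategy B: pick 10 ideas spread across the whole distribution (preserves variety).
spread_picks = sorted(pool)[::20]

for name, picks in [("top-score portfolio", top_picks), ("spread portfolio", spread_picks)]:
    print(name, "mean:", round(statistics.mean(picks), 1),
          "variance:", round(statistics.variance(picks), 1))
# The top-score portfolio has the higher mean but little variance; the spread
# portfolio gives up average quality to keep the tails, where breakthroughs hide.

Which portfolio is "better" depends on whether the goal is sustaining or breakthrough innovation, which is why the orchestration choices below are strategic rather than purely technical.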

These findings speak to concerns about AI homogenizing creative output. Our research suggests the issue is how organizations orchestrate AI tools and whether evaluators remain cognitively engaged.

In 2026, executives should:

  • Organize AI around strategy. Sequence predictive AI first if your goal is sustaining innovation, as in aerospace or medical devices. Put generative AI first if R&D or emerging markets are your focus.

  • Prioritize cognitive engagement, not just efficiency. AI’s format shapes outcomes as much as its recommendations do. Dynamic interfaces (such as chatbots) increase exploration but may reduce quality. Static interfaces (such as fixed explanatory content) improve outcomes but may narrow the solution space.

  • Consider AI orchestration a portfolio decision. Just as financial portfolios balance risk and return, innovation portfolios balance mean and variance.

  • Shift from process optimization to process design. Most AI implementations aim to improve existing workflows. Using AI to fundamentally reorganize how decisions happen is the emerging frontier.

Jacqueline Ng Lane is an assistant professor in the Technology and Operations Management Unit and a co-principal investigator of the Laboratory for Innovation Science at the Digital Data Design Institute at Harvard.

N. Louis Shipley: Entrepreneurial competition in AI remains fierce

As we enter 2026, we are in the midst of a massive AI buildout. This creates enormous opportunities for entrepreneurs and investors.

Clearly, AI is transforming the economy. It’s easy, and in fact natural, to get caught up in the euphoria. For entrepreneurs, it is critical to ensure your AI startup solves a real problem by addressing verifiable customer pain. The barriers to developing new products have dropped considerably, and products can now be built at record speed.

Do your homework

That doesn’t mean you can skip the homework of understanding whether your product addresses real customer needs. Conduct a structured interview process that asks potential prospects to rate the importance of solving this particular problem on a scale of one to 10.

Listen to what potential customers are not saying. Investors need to be aware that there is a record number of competitors for every AI product idea entrepreneurs have, so they must be as diligent as founders in uncovering and understanding real customer needs.

N. Louis Shipley is a senior lecturer in the Entrepreneurial Management Unit and author of the forthcoming book, Unlikely Entrepreneurs: Wins, Losses, and Crucial Lessons on Building Great Companies.

David Fubini: Relationships will still matter

As AI reshapes professional service firms, one truth is already clear: Whatever the magnitude of disruption, advantage will flow to those closest to their clients. Large language models and autonomous agents are absorbing more analytical and modeling tasks, so differentiation will shift from technical firepower to human judgment, insight, and the ability to build meaningful relationships.

A trusted adviser is defined not by flawless analyses, but by how they help clients navigate uncertainty, make trade-offs, and design practical paths forward. This demands pattern recognition, the ability to interpret ambiguity, and a commitment to serving in ways that genuinely advance the client’s goals.

AI can process information, but it cannot sense the room or understand a leader’s burden. If anything, the rise of AI elevates the importance of human work behind genuine client impact. Clients remember who helped them through a hard moment, who brought clarity when the stakes were high, and who showed up even when there was no commercial gain. In an AI world, delivering real value is not an add-on, but the defining purpose of the role.

These capabilities cannot be automated or accelerated. They come from apprenticeship, lived experience, and long-term client commitment. As AI makes expertise more accessible, it heightens the premium on advisers who can translate insight into action and help leaders achieve outcomes that matter. The firms—and individuals—that thrive will be those that treat client impact not as a transaction, but as a craft constructed patiently over time.

David Fubini is a senior lecturer in the Organizational Behavior Unit.
