Managing the Business

Are You Ready for AI Security Threats? Time to Act

AI is accelerating cyberattacks, and most leaders aren't ready. Hise Gibson explains why traditional risk prevention strategies aren't enough and offers a practical playbook for preparing for the next breach.


The average AI-enabled data breach now costs organizations $4.88 million, a figure that does not account for reputational damage, regulatory penalties, or the cascading operational failures that follow. Yet the breach itself is no longer the greatest risk. The greatest risk is leaders who never saw it coming.

Picture this: A viral video explodes across social media, showing a former president declaring a national emergency. The voice is flawless. The setting looks authentic. The message is urgent. But it’s all a lie. By the time anyone realizes it’s a deepfake, the markets have crashed, global alliances have shifted, and public trust in the very foundation of government has been shattered.

AI isn’t simply speeding up attacks; it’s making the world radically less predictable, harder to comprehend, and exponentially more dangerous.

This isn’t a distant threat; it’s already here. In 2022, a convincing artificial intelligence-generated video of Ukrainian President Volodymyr Zelensky purportedly ordering his troops to surrender circulated widely online, despite Ukrainian officials immediately denouncing it as a fabrication.

The technology needed to create these videos once demanded Hollywood budgets but now runs on everyday laptops, which means threats are not only arriving faster but are also more likely to fool wide swaths of viewers.

The fallout for organizations is seismic, and most business leaders aren’t ready.

AI-enabled cyberattacks are rising

Cyberattacks on public-facing software and system applications rose by 44% in just one year, and many of those attacks were driven by AI-enabled vulnerabilities, a 2026 IBM study found. These attacks learn, adapt, and evolve in real time. They test defenses, find flaws, and change their strategies, all without any help from people.

Meanwhile, according to Accenture’s State of Cybersecurity Resilience 2025 report, 77% of executives don't trust their companies’ ability to deal with AI-driven threats. This gap between how quickly threats can occur and how ready organizations are to manage them is one of the biggest strategic weaknesses of our time.

The question isn't whether your business will be attacked by AI. The real question is whether its leaders are ready to act when it happens.

Why a new framework is necessary

Leaders have long used the VUCA framework (volatile, uncertain, complex, and ambiguous) to describe difficult business situations, but it no longer captures what we face in the world of AI and cybersecurity.

We are now in a BANI (brittle, anxious, nonlinear, incomprehensible) environment, which calls for a very different kind of leadership response. VUCA told leaders to prepare. BANI tells them that preparation itself may not be enough, because the threats no longer follow predictable patterns, escalate at human speed, or announce themselves in advance. Here’s a closer look at the landscape:

  • Brittle: Our systems look strong but can fail catastrophically under intense stress. A single point of failure can take down an entire organization in minutes. In June 2017, for example, the NotPetya malware spread rapidly through the network of TNT Express, taking down 40,000 computers and 10,000 servers in less than 40 minutes and bringing the company’s global operations to a halt.

  • Anxious: Leaders often struggle to make decisions because they have too many options, not enough information, and outcomes that are hard to predict. The need to act quickly and the need to act wisely are at odds. A lot of executives freeze up or, worse, hand off decisions they should be making to other people.

  • Nonlinear: Traditional risk models assume that big problems have big warning signs, a principle known as proportionality. But with AI-enabled attacks, small issues can have big effects; a single stolen password or a minor configuration mistake can cause damage across the entire company.

  • Incomprehensible: The decision-making logic of the AI systems we use is difficult to explain, and we’re attempting to defend ourselves against AI attacks using methods that are tough to understand. Modern AI's "black box" nature makes it hard for traditional governance structures to deal with the risks it poses.

Managing the challenges of AI

Leaders are grappling with a number of AI-related tensions that are tough to manage: We want AI to be quick, but not careless. We want it to be robust, but not brittle. We need to scale it across the company, but in a way that is safe and ethical. Every function that AI enables can also open an avenue of attack. Every gain in efficiency also deepens a business's dependence on the technology. Every decision made by a machine also raises the question of who is responsible.


No leader can eliminate these tensions entirely; that's impossible. But the leaders who manage them well will be the ones who build organizations that perform despite the challenges.

A guide for leaders

To survive in a world that is fragile, anxious, nonlinear, and hard to understand, we need more than just better technology. We need a new approach to leadership. I created a framework called ACTS because taking action is the first step to thriving. Leaders should:

Assume a breach is inevitable

When it comes to cybersecurity, the worst thing you can do is put your guard down and believe your defenses are working. Every business should plan for the fact that there will always be some level of compromise. This means using zero-trust architectures, splitting networks to limit damage, maintaining manual backups for critical tasks, and practicing crisis scenarios regularly.
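The zero-trust posture described above can be sketched in code. The sketch below is a minimal, hypothetical illustration (the roles, segments, and policy table are invented for this example, not drawn from any real product): every request is denied by default and allowed only when identity, device posture, and network segment all check out, so a single compromised credential cannot roam the whole network.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str          # identity of the requester
    device_compliant: bool  # device posture check (patched, managed, etc.)
    segment: str            # network segment the request originates from

# Illustrative segmentation policy: which segments each role may reach.
# In a real deployment this would come from a policy engine, not a dict.
SEGMENT_POLICY = {
    "finance_analyst": {"finance"},
    "it_admin": {"finance", "operations", "it"},
}

def authorize(req: AccessRequest, target_segment: str) -> bool:
    """Deny by default; allow only when every check passes."""
    allowed = SEGMENT_POLICY.get(req.user_role, set())
    return req.device_compliant and target_segment in allowed

# A compliant admin can cross segments; a non-compliant device cannot.
print(authorize(AccessRequest("it_admin", True, "it"), "finance"))        # True
print(authorize(AccessRequest("it_admin", False, "it"), "finance"))       # False
print(authorize(AccessRequest("finance_analyst", True, "finance"), "it")) # False
```

The design choice that matters here is the empty-set default: an unknown role gets no access at all, which is the "assume a breach" mindset expressed as policy.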

FedEx was able to handle the NotPetya attack and sustained smaller losses than other companies because the organization had trained for this exact possibility. Its leaders, many of whom had military training, stayed calm and made smart decisions even when things got tough. They were ready with manual workarounds.

On the flip side, in September 2023, MGM Resorts International suffered a ransomware attack that shut down casino floors, hotel reservations, and digital room keys across multiple properties. The attack began with a 10-minute phone call from someone impersonating an MGM employee. Within days, MGM had lost an estimated $100 million in revenue and faced weeks of operational disruption, not because their technology failed, but because their leaders had not rehearsed for the moment when the technology would be unavailable.

Cultivate AI fluency at every leadership level

AI fluency can't reside only in the IT department anymore. Every leader in the company needs to know how AI systems work, where they add value, and where they pose risks. Leaders must not only learn how to use AI themselves, but should also develop reverse mentoring programs in which junior employees, who are often more fluent in AI, teach senior leaders. And they should hire people who are adaptable and willing to learn, not just people with the right skills.

Tie every AI investment to core operations

Many companies run AI pilots that don't scale, or they make great demos that don't have long-term impacts. This needs to stop. Every AI project should have clear ROI frameworks, meet real business needs, and link to measurable results.
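A clear ROI framework can be as simple as comparing an AI project's measurable benefit against its full cost. The sketch below uses the standard ROI formula with hypothetical figures invented purely for illustration, not taken from any real project:

```python
def ai_project_roi(annual_benefit: float, annual_cost: float) -> float:
    """Simple ROI: net benefit expressed as a fraction of cost."""
    return (annual_benefit - annual_cost) / annual_cost

# Hypothetical example: a pilot saving $750K a year against $500K
# in licensing, compute, and staffing costs.
roi = ai_project_roi(annual_benefit=750_000, annual_cost=500_000)
print(f"{roi:.0%}")  # 50%
```

The point is less the arithmetic than the discipline: a pilot that cannot name its `annual_benefit` in a measurable business result is the kind of great demo that never scales.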

Strengthen governance

Companies need to create ethical guidelines and form AI governance councils that include representatives from several departments across the business, not just IT. These groups need to regularly test for fairness and bias. And decision-makers should establish, before a breach occurs rather than afterward, who is responsible when AI systems go wrong.

Four questions for your next board meeting

Use these four questions as a readiness diagnostic before your next board meeting.

  1. Can your business run for 48 hours without digital systems? Do you have manual processes in place to keep serving customers and completing critical tasks if a major attack takes down your infrastructure?

  2. Have your top leaders completed the necessary training in AI security and ethics? This shouldn’t be just a webinar they clicked through, but real lessons on how these systems work and where they go wrong.

  3. Is your AI deployment plan based on business resilience, rather than only growth? Are the systems you're building making you stronger or weaker when the going gets tough?

  4. Can your people still make good choices when data is unavailable? If the dashboards suddenly go down and the models aren't available, does your leadership have the skills and processes to make decisions based on experience?

If you can't say yes to all of these questions, now is the time to ACT. The leaders who treat AI security as a fiduciary obligation, not an IT line item, will be the ones still standing when the next attack comes. And it will come. The only question is whether you are ready to ACT.

Illustration by Ariana Cohen-Halberstam for HBS Working Knowledge. Photos from Adobe Stock/Grispb, Adobe Stock/issaronow, and Adobe Stock/Dennis.
