The Hard Work of Failure Analysis

We all should learn from failure—but it's difficult to do so objectively. In this excerpt from "Failing to Learn and Learning to Fail (Intelligently)," published in the journal Long Range Planning, HBS professor Amy Edmondson and coauthor Mark Cannon offer a process for analyzing what went wrong.

It hardly needs to be said that organizations cannot learn from failures if people do not discuss and analyze them. Yet this remains an important insight: the learning that is potentially available may not be realized unless failures are thoughtfully analyzed and discussed. For example, for Kaiser [Permanente's] Dr. [Kim] Adcock, it is not enough just to know that a particular physician is making more than the acceptable number of errors [in misread x-rays]. Unless a deeper analysis of the nature of the radiologist's errors is conducted, it is difficult to learn what needs to be corrected. On a larger scale, the U.S. Army is known for conducting After Action Reviews that enable participants to analyze, discuss, and learn from both the successes and failures of a variety of military initiatives. Similarly, hospitals use "Morbidity and Mortality" (M&M) conferences (in which physicians convene to discuss significant mistakes or unexpected deaths) as a forum for identifying, discussing, and learning from failures. Such analysis can be effective only if people speak up openly about what they know and others listen, enabling a new understanding of what happened to emerge in the assembled group. Many of these vehicles for analysis address only substantial failures, however, rather than identifying and learning from smaller ones.

An example of effective failure analysis is the meticulous, painstaking investigation that goes into understanding the crash of an airliner. Hundreds of hours may go into gathering and analyzing data to sort out exactly what happened and what can be learned. Compare this kind of analysis to what takes place in most organizations after a failure.

As noted above, social systems tend to discourage this kind of analysis. First, individuals experience negative emotions when examining their own failures, and this can chip away at self-confidence and self-esteem. Most people prefer to put past mistakes behind them rather than revisit and unpack them for greater understanding.

Second, conducting an analysis of a failure requires a spirit of inquiry and openness, patience, and a tolerance for ambiguity. However, most managers admire and are rewarded for decisiveness, efficiency, and action rather than for deep reflection and painstaking analysis.

Third, psychologists have spent decades documenting the heuristics, biases, and errors that reduce the accuracy of human perception, sense-making, estimation, and attribution.1 These can hinder the human ability to analyze failure effectively.

People tend to be more comfortable attending to evidence that enables them to believe what they want to believe, denying responsibility for failures, and attributing the problem to others or to "the system." Most would prefer to move on to something more pleasant. Rigorous analysis of failure requires that people, at least temporarily, put aside these tendencies to explore unpleasant truths and take personal responsibility. Evidence of this problem is provided by a study of a large European telecoms company, which revealed that very little learning occurred from a set of large and small failures over a period of twenty years. Instead of realistic and thorough analysis, managers tended to offer ready rationalizations for the failures. Specifically, managers attributed large failures to uncontrollable events outside the organization (e.g., the economy) and to the intervention of outsiders. Small failures were interpreted as flukes, the natural outcomes of experimentation, or as illustrations of the folly of not adhering strictly to the company's core beliefs.2

Similarly, in our field research we have observed failed consulting relationships in which the consultants simply blamed the failure on the client, concluding that the client was not really committed to change, or that the client was defensive or difficult. By contrast, a few highly learning-oriented consultants were able to engage in discussion and analysis that involved raising questions about how they themselves had contributed to the problem. In these analytic sessions, the consultants raised questions such as "Are there things I said or did that contributed to the defensiveness of the client?", "Was my presentation of ideas and arguments clear and persuasive?", or "Did my analysis fall short in some way that led the client to have legitimate doubts?" Raising such questions increases the chances that the consultants will learn something useful from the failed relationship, but it requires genuine personal curiosity about what the answers might be. Blaming the client is much simpler, more comfortable, and more common.3

Recent research in the hospital setting by [A. L.] Tucker and Edmondson shows that health care organizations typically fail to analyze failures or make changes even when people are well aware of them. Whether the failures are medical errors or simply problems in the work process, few hospitals dig deeply enough to understand and capture the potential learning. Most organizations lack the processes, resources, and incentives to bring multiple perspectives and multiple minds together to analyze carefully what went wrong and how to prevent similar failures in the future.

Formal processes or forums for discussing and analyzing failures, and for applying their lessons elsewhere in the organization, are thus needed to ensure that effective analysis and learning from failure occur. Such groups are most effective when people have technical skills, expertise in analysis, and diverse views, allowing them to brainstorm and explore different interpretations of a failure's causes and consequences. Because this usually involves the potential for conflict that can escalate, people skilled in interpersonal or group process, or expert outside facilitators, can help keep the process productive.

Skills for managing a group analysis of failure in a spirit of inquiry, along with a sufficient understanding of the scientific method, are also essential inputs to learning from failure as an organization. Without a structure of rigorous analysis and deep probing, individuals tend to leap prematurely to unfounded conclusions and misunderstand complicated problems. Some understanding of system dynamics, the ability to see patterns, statistical process control, and group dynamics can be very helpful.4 To illustrate how this works in real organizations, we review a few case study examples below.

Examples Of Systematically Analyzing Failure

Edmondson et al. report how Julie Morath, the Chief Operating Officer at the Minneapolis Children's Hospital, implemented processes and forums for the effective analysis of failures, both large and small. She bolstered her own technical knowledge of how to probe more deeply into the causes of failure in hospitals by attending the Executive Sessions on Medical Errors and Patient Safety at Harvard University, which emphasized that, rather than being the fault of a single individual, medical errors tend to have multiple, systemic causes. In addition, she made structural changes within the organization to create a context in which failure could be identified, analyzed, and learned from.

To create a forum for learning from failure, Morath developed a Patient Safety Steering Committee (PSSC). Not only was the PSSC proactive in seeking to identify failures, it ensured that all failures were subject to analysis so that learning could take place. For example, the PSSC determined that "Focused Event Studies" would be conducted not only after serious medical accidents but even after much smaller scale errors or "near misses." These formal studies were forums designed explicitly for the purpose of learning from mistakes by probing deeply into their causes. In addition, cross-functional teams, known as "Safety Action Teams," spontaneously formed in certain clinical areas to understand better how failures occurred, thereby proactively improving medical safety. One clinical group developed something they called a "Good Catch Log" to record information that might be useful in better understanding and reducing medical errors. Other teams in the hospital quickly followed their example, finding the idea compelling and practical.

In the pharmaceutical industry, about 90 percent of newly developed drugs fail in the experimental stage, so drug companies have plenty of opportunities to analyze failure. Firms that are creative in analyzing failure benefit in two ways. First, analyzing a failed drug sometimes reveals a viable alternative use. For example, Pfizer's Viagra was originally designed to be a treatment for angina, a painful heart condition. Similarly, Eli Lilly discovered that a failed contraceptive drug could treat osteoporosis and thus developed its one-billion-dollar-a-year drug, Evista, while Strattera, a failed antidepressant, was found to be an effective treatment for attention deficit/hyperactivity disorder.

Second, a deep probing analysis can sometimes save an apparently failed drug for its original purposes, as is seen in the case of Eli Lilly's Alimta. After this experimental chemotherapy drug failed clinical trials, the company was ready to give up. The doctor conducting the failed Alimta trials, however, decided to dig more deeply into the failure—utilizing a mathematician whose job at Lilly was explicitly to investigate failures. Together they discovered that the patients who suffered negative effects from Alimta typically had a deficiency in folic acid. Further investigation demonstrated that simply giving patients folic acid along with Alimta solved the problem, thereby rescuing a drug that the organization was ready to discard.5

Failure analysis can reach beyond the company walls to include customers. Systematic analysis of small failures in the form of customer breakdowns was instituted at Xerox using a network-based system called Eureka. By capturing and sharing 30,000 repair tips, Xerox saves an estimated $100 million a year through service operations efficiencies. The Eureka analysis also provides important information for new product design.6

Analyzing Employee And Customer Defections To Capture The Lessons

Organizations can also engage outside sources of technical assistance to help build their ability to analyze their own failures. For example, Frederick Reichheld at Bain and Company has demonstrated the value of a deep, probing analysis of failure in the areas of customer and employee defections. In one instance, the fact that most customers who defected from a particular bank gave "interest rates" as the reason for switching seemed to suggest that the bank's interest rates were not competitive. However, further investigation demonstrated that there were no significant differences in interest rates across the banks. Careful probing through interviews indicated that many customers defected because they were irritated at having been aggressively solicited for a bank-provided credit card, only to have their applications turned down. A superficial analysis of customer defection would have led to the conclusion that the bank's interest rates were not competitive. A deeper analysis led to an alternate conclusion: The bank's marketing department needed to do a better job of screening in advance the customers to whom it promoted such cards.

The importance of such analysis is also illustrated by employee turnover at another company, where managers became concerned when they observed high turnover among salespeople and conducted an investigation. Many of the departing employees gave "working too many hours" as the reason for their defection. Initially, it appeared that the turnover might not have been such a bad thing: after all, who needs employees who are not committed to working hard? However, further data collection revealed that many of the employees who quit were among the company's most successful salespeople, and had subsequently found jobs that required, on average, 20 percent fewer hours. Once again, deeper probing and analysis yielded a truer understanding of the situation.7

Benefits Of Analyzing Failure

In addition to the technical aspects of systematic analysis, discussing failures has important social and organizational benefits. First, discussion provides an opportunity for others who may not have been directly involved in the failure to learn from it. Second, others may bring new perspectives and insights that deepen the analysis and help to counteract self-serving biases that may color the perceptions of those most directly involved in the failure. After experiencing failure, people typically attribute too much blame to other people and to forces beyond their control. If this tendency goes unchecked, it reduces an organization's ability to mine the key learning that could come from the experience.

Lastly, the value of the learning that might result from analyzing and discussing simple mistakes is often overlooked. Many scientific discoveries have resulted from researchers who were attentive to simple mistakes in the lab. For example, researchers in one of the early German polymer labs occasionally made the mistake of leaving a Bunsen burner lit over the weekend. Upon discovering this mistake on Monday mornings, the chemists simply discarded the overcooked results and went on with their day. Ten years later, a chemist in a polymer lab at DuPont made the same mistake. Rather than simply discarding the result, however, the DuPont chemist analyzed it and discovered that the fibers had congealed. This discovery was the first step toward the invention of nylon. Had the German chemists paid similar attention to their minor failure, they might have had a decade's head start on nylon, potentially dominating the market for years.8

These first two sections have dealt with inadvertent failures. If a firm can identify and analyze such failures, and then learn from them, it may be able to retrieve some value from what has otherwise been a negative "result." But failure need not always be considered from a "defensive" viewpoint. Our third section describes an "offensive" approach to learning from failure—deliberate experimentation. The three activities presented in this article—identifying failure, analyzing failure, and deliberate experimentation—are not intended to be viewed as a sequential three-step process, but rather as (reasonably) independent competencies for learning from failure. They can be sensibly examined alongside each other, since each is easily inhibited by social and technical factors.
