Organizations may think they have solved the problem of learning from failure once they have put a lessons-learned database in place. But the research of Cannon and Edmondson indicates that managers greatly underestimate the difficulties involved in learning from failure.
They define failure as “a deviation from expected and desired results” (p. 4). By that definition, failures include not only disastrous events like the Columbia shuttle disaster and Enron, but also the many small failures that occur every day. A major conclusion Cannon and Edmondson draw is that attending to small failures can head off those catastrophic failures.
…one of the tragedies in organizational learning is that catastrophic failures are often preceded by smaller failures that were not identified as failures worthy of examination and learning. In fact, these small failures are often the key “early warning signs” that could provide the wake-up call needed to avert disaster down the road (p. 9).
A number of industries have taken that philosophy to heart. The FAA has developed practices that make it a model in learning from failure: pilots can report near misses without fear of retribution, and major accidents are thoroughly investigated. Patient safety work in hospitals, which began in earnest in the early 1990s, drew heavily on what had been learned about flight safety and has since saved countless lives. If you are a patient in a clinic or hospital and find yourself being asked for your birth date over and over again, that is patient safety at work – preventing errors such as a technician putting the wrong name on your tubes of blood. The nuclear industry has also put practices in place, for example tracking anything that is even slightly out of the ordinary, quickly investigating, and reporting the results to the entire system.
But these industries are far from the norm. In most organizations failures are hidden or downplayed as one-off incidents. There is pressure to “move on” rather than “dwell” on past mistakes. In part this results from the very human desire to be highly regarded by peers and bosses, but in part it reflects people's recognition that success is rewarded while failure often blocks promotion. Added to that are the very strong negative emotions all of us experience when we have to confront our own failure. These factors are so powerful that without a very intentional and concerted effort, learning from failure simply doesn't happen.
What is needed is a clear model of what it takes to learn from failure, and that is what Cannon and Edmondson provide in their article “Failing to Learn and Learning to Fail.” They identify three organizational activities that need to be in place if an organization is going to take advantage of this valuable knowledge:
• A systematic way of identifying failures
• A process for analyzing failures, and
• A culture that promotes deliberate experimentation
I discuss each of these practices below, providing examples from Cannon and Edmondson and adding examples from my own work.
One of the challenges in identifying small failures is making them visible enough to be recognized as failures. There is no problem identifying catastrophic failures – they are well publicized. But if the goal is to identify the small failures that are precursors to those catastrophes, then a well-thought-out system has to be in place. As the following example from Cannon and Edmondson shows, it often takes someone with good research skills to format data so that others can see it.
Cannon and Edmondson note that the inaccessibility of data is one of the issues that prevent learning from mistakes. In many organizations, making failure visible enough to be addressed requires someone with the research and analytic skills to ferret it out. Learning from failure may also require a courageous choice about what data to collect; the eradication of smallpox accomplished by the World Health Organization (WHO), begun in 1965, provides a useful example of that.

X-ray Errors
Across the medical profession there is a 10–15% failure rate in reading mammograms accurately, even among experts. So the discovery that a reader has missed a tumor is not viewed as a topic worth spending time on. This well-known error rate is an example of the small failures that need attention yet are easy to dismiss as “just the way things are.”
When Dr. Kim Adcock became radiology chief at Kaiser Permanente Colorado, she used HMO records to identify failures and to produce systematic feedback, including bar charts and graphs, for each individual x-ray reader. The graphs allowed readers to become aware of their own error rates and, more important, to return to a specific x-ray and investigate why they missed a tumor – allowing them to learn not to make the same mistake again.
Without Dr. Adcock’s skill in putting the data together in a usable format, the failures were largely invisible. An x-ray reader might never even realize he or she had made an error, since the patient’s next visit could be months away and an altogether different reader might look at that x-ray.
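As a loose illustration of the kind of aggregation behind such feedback reports, the sketch below computes a per-reader miss rate from a list of read outcomes. Everything here (the record format and the miss_rates function) is invented for illustration; the article does not describe the actual Kaiser system at this level of detail.

```python
# Hypothetical sketch: turning individual read outcomes into per-reader
# miss rates, the raw material for feedback charts like Dr. Adcock's.
from collections import defaultdict

def miss_rates(reads):
    """reads: list of (reader, missed) pairs, where missed is True
    if a later review found the reader missed a tumor."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for reader, missed in reads:
        totals[reader] += 1
        if missed:
            misses[reader] += 1
    # Fraction of reads each reader got wrong
    return {r: misses[r] / totals[r] for r in totals}

reads = [("A", False), ("A", True), ("A", False),
         ("B", False), ("B", False)]
print(miss_rates(reads))  # reader A missed 1 of 3, reader B 0 of 2
```

The point of the aggregation is the one the article makes: until someone computes and charts these rates, each individual miss stays invisible to the reader who made it.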
Eradication of Smallpox
Before its eradication, 10 million people a year were afflicted with the disease; 20–40% of them died, and many of the rest were left scarred and sometimes blind. Thanks to WHO’s ten-year vaccination campaign, the last case of smallpox occurred in Somalia in 1977 – smallpox had been eradicated. The story of that success is a story of learning from both failures and successes, and of being willing to change strategies in light of what was learned.
An important part of that learning resulted from the choices WHO made about what data to collect during the long vaccination campaign. It would have advantaged WHO politically to track the number of vaccinations given and the number of areas free of smallpox. Instead, WHO chose to track trends in the incidence of smallpox. Those data were openly shared with field workers around the globe. WHO considered the field workers co-researchers and so felt a responsibility to continually provide them accurate and useful data with which to guide their own vaccination programs and research. Such reports were frequently embarrassing both to WHO and to the affected countries, yet they allowed WHO to learn in a way that would not have been possible with data chosen for their public relations value.
The second organizational activity necessary for learning from failures is the ability to analyze them. Cannon and Edmondson explain that organizations tend to avoid such analysis. They may request that someone conduct a study, but studies are of little use unless the organization creates a forum for in-depth discussion of the failure.
And those forums need facilitators who have the skills to create an atmosphere of psychological safety in which a spirit of inquiry and openness can flourish. As Cannon and Edmondson note:
Decades of research by organizational learning pioneer Chris Argyris has demonstrated that people in disagreement rarely ask each other the kind of sincere questions that are necessary for them to learn from each other. People also try to force their views on the other party rather than educating the other party by providing the underlying reasoning behind their views. (p. 28)
Edmondson’s work on patient safety is particularly relevant for me because of my own research and project work in this area. The following report, written by the facilitator of a failure-analysis meeting in one of those projects, illustrates that the facilitation skills that create psychological safety can indeed be learned. The skills she references were based on Argyris’ Model II skills, which are intended to make organizational learning possible. The report is a firsthand account written by St Anthony’s Director of Patient Safety following a meeting (which we referred to as sensemaking) she facilitated to look at a small failure that had been reported in the hospital.
We held a sensemaking session on a medication error that originated in the pharmacy. A patient was given a blood pressure medication that the physician had put on hold. The patient had come from the regular floor to our rehab unit, where the medication was wrongly administered. No harm occurred because of this error.
Before the sensemaking session occurred, I, as the Director of Patient Safety, along with the Director of Pharmacy conducted a Root Cause Analysis on the error and based on that developed a causal tree.
We were particularly interested in this error because we were in the process of implementing bar coding throughout the hospital. The unit where the error occurred was trialing the bar-coding system, so both the pharmacy and the rehab unit had put new processes for medication orders in place.
The sensemaking meeting brought ten people together, including the Director of Pharmacy, two pharmacy technicians, and two pharmacists, one of whom had been involved in the error. We also had the nursing unit director from the floor where the error occurred, the VP of administration, the management engineer, and the nursing informaticist, who had expertise in bar coding. In addition, two nurses from the floor had agreed to come but at the last minute were not able to do so. The meeting lasted about an hour and a half and was held toward the end of the shift to allow some people to come in early and others to stay after their shift.
I facilitated the sensemaking session, having participated in the earlier sensemaking training. At the beginning of the meeting I attempted to set a tone of openness. I explained that the intent of the sensemaking session was to educate ourselves on how we do things and more specifically how this error had occurred and I acknowledged that we all make errors including myself. I recalled an error I had made and asked if the others, whom we had arranged in a circle, would be willing to share a personal experience they had with an error and to say something about what it meant to them. Nearly everyone in the circle did so and this beginning proved valuable in demonstrating to everyone present that we were all on equal footing. The beginning also served to introduce those present to each other, which was important because one of our barriers to fixing medication errors has been that people from the floor don’t know many of the people from pharmacy.
I had printed the events from the causal tree on individual sheets of paper, and these were placed one at a time on a wall chart. I explained to the group that this was just our rendition of what had occurred and that I was looking for their input on the correctness of the items and their sequence. As I placed each component on the tree, the group talked about that item, providing feedback, correcting, and adding new items. I used post-it notes to write additional information or steps. Having the items on individual sheets that could be posted and changed proved much less threatening than using PowerPoint. It emphasized the tentativeness of what we had already produced, illustrating that it was changeable.
The discussion evolved slowly as the group began to open up. As the meeting continued it became clear to me that the members were really trying to make sense of each other’s reasoning and contributions. Rather than all the questions coming from me, they began to ask questions of each other. For example, one pharmacist asked, “How do you know what medication you’ve put in?” The other pharmacist responded, “I check them off.” The first pharmacist was surprised because he had not been checking them off. Moreover, one of the techs explained that he thought the check meant that the pharmacist had already done a double check, so he had not been re-checking those items. So even the pharmacists were discovering differences in how they did things, and that often led to a discussion about which practice was safer.
One of the ways this meeting seemed different from a typical Root Cause Analysis I have conducted was that the sensemaking meeting had a tone of curiosity rather than an urgency to get the form completed and the meeting over with. I found myself exploring more in this meeting, trying to get at staff’s understanding rather than just getting the facts. I found myself honestly curious about what happened and did not feel compelled to make judgments. In this meeting we primarily tried to understand what had happened and to reflect that understanding in the causal tree. In the end I felt great satisfaction knowing that the discussion did indeed go beyond the surface of the event. The causal tree we ended up with included several new items.
In two subsequent meetings we created an action plan to address the issues we had uncovered in the sensemaking meeting, and based on the same incident we conducted four subsequent Failure Modes and Effects Analysis meetings. So the sensemaking meeting proved a rich source of understanding for our continuing improvement work. The safety culture re-survey conducted in the pharmacy showed a positive change, which was due in part to the conversation we had in the sensemaking meeting.
This report illustrates for me the value of a systematic process for analyzing mistakes. It also illustrates the care that needs to be taken with the tone of the meeting, the medium used to convey the concepts, the facilitator’s skills, and even how the room is arranged.
The third of Cannon and Edmondson’s three practices is deliberate experimentation. They call it “learning to fail intelligently” – a deliberate strategy to promote innovation and improvement. This may seem counterintuitive because experimentation necessarily produces failures. But a practice of experimentation requires a shift in thinking: from seeing failure as something to avoid or punish to seeing failure as a way to learn. As Lewis Thomas writes in The Medusa and the Snail:
Mistakes are at the very base of human thought, embedded there, feeding the structure like root nodules... We are built to make mistakes, coded for error.... The lower animals do not have this splendid freedom. They are limited, most of them, to absolute infallibility. (p. 29)
Cannon and Edmondson cite research showing that organizations that experiment effectively are likely to be more innovative, productive, and successful than those that do not take such risks.
A well-known example is IDEO, the award-winning design firm. It uses many slogans to encourage experimentation, including “Fail often in order to succeed.” In the early stages of designing a new product, IDEO encourages many experiments, knowing that most of them will not work but also that much will be learned from those failures.
Another celebrated example is 3M, which has encouraged deliberate experimentation in a culture tolerant of mistakes. “At 3M failures are seen as a necessary step in a larger process of developing successful, innovative products.” (p. 21) The company backs this up with incentives and policies that encourage deliberate experimentation.
The Defense Intelligence Agency (DIA) Knowledge Lab developed a process to encourage innovation and experimentation called Crossing Boundaries. You can read the full description in an earlier post, but to summarize briefly: Crossing Boundaries is a system in which many ideas are encouraged and support is provided to make those ideas happen.
Crossing Boundaries takes place in the largest auditorium at DIA with the Director up front. The meeting starts with an impressive chart showing how many ideas have been offered and how many have resulted in change – 47% at last count. As soon as the accounting is over, employees from across DIA stand up, one at a time, and offer the Director solutions, not problems – and only solutions they themselves are willing to take personal responsibility for implementing. The Director listens to the ideas and responds to each one, sometimes with questions, sometimes with comments. From 5 to 20 solutions are offered at each monthly meeting. Immediately following the meeting, a group of coaches gets in touch with each idea submitter to assist them – to help write a business case, open doors, or connect them to others with a similar idea.

Crossing Boundaries illustrates Cannon and Edmondson’s caution that, “Deliberate experimentation requires that people not just assume their views are correct, but actually put their ideas to the test and design (even very informal, small) experiments in which their view can be disconfirmed.” (p. 20)
It is possible for organizations to learn from failure, but accomplishing that requires more than admonitions from senior leaders about the need to do so. It requires 1) a way to systematically collect data on small failures, 2) systems for jointly analyzing those data, and 3) encouragement of deliberate experimentation that produces failures from which the organization can learn.