In the mid-2010s, much was made of Silicon Valley’s “fail fast” culture. Start-up tech companies seemed to operate largely on a philosophy of encouraging failure through a rapid trial-and-error process that would quickly determine the viability of products and allow companies to kill projects before losing too much money. Perhaps no company personified this approach better than Facebook, now Meta Platforms, which operated with the principle of “Move fast and break things.”
But does this “fail fast” approach really work? And is it healthy?
To be sure, there are many examples of technological innovations that came only after repeated failure. Thomas Edison famously failed hundreds of times before building a light bulb that worked.
Of course, for all these eventual successes, there are many more failures—some that died at the idea stage, others that made it all the way to release and cost companies revenue and brand reputation. The Museum of Failure, a collection of over 200 failed products and services from around the world, showcases some of these products.
Failure is part of business—and part of life! However, failure has consequences, and the size and impact of those consequences can vary widely. For example, if a software company rolls out a new online platform that no one adopts, the only real consequence may be to the company’s bottom line and its ability to employ people. But if a medical device company rolls out a new product that ends up hurting patients, not only will the company suffer—human beings could suffer actual harm as well.
Somewhere between failure for failure’s sake and absolute perfection is a better approach. Here are four ways to encourage a healthy failure culture in your organization.
1. Understand the different kinds of failure.
Amy Edmondson distinguishes different types and contexts for failure in her book Right Kind of Wrong: The Science of Failing Well. In her framework, failures can be basic, complex, or intelligent; they can exist in consistent, variable, or novel contexts.
Much of the time, context and type are well-aligned. Consistent contexts, such as assembly lines, are predictable and well understood; there, basic failures are the most common type. A variable context is one with well-developed knowledge, such as an operating room, where the unexpected can still occur and contribute to a complex failure. In the novel context of a laboratory, where knowledge is limited and still emerging, a high degree of uncertainty makes intelligent failure the expected type.
Understanding the different types and contexts of failure can help leaders and team members reduce opportunities for failure where necessary and take more risks in contexts where failure can drive innovation or advance knowledge.
2. Cultivate psychological safety.
In an environment with high psychological safety, people don’t fear rejection for being wrong or making mistakes. Edmondson says, “My research has shown that psychologically safe environments help teams avoid preventable failures. They also help them pursue intelligent ones.”
Too many organizations see psychological safety as an excuse or permission structure for lax performance—if people aren’t accountable for their mistakes, the thinking goes, how can leaders trust they’ll do their best work?
Edmondson suggests this thinking comes from a false dichotomy. “A culture that makes it safe to talk about failure can coexist with high standards,” she writes. “Psychological safety isn’t synonymous with ‘anything goes.’”
An environment without psychological safety can actually produce bigger, more harmful mistakes. When people fear failure too much, they are more likely to make mistakes and then cover them up out of fear of rejection or retribution. Psychological safety encourages them to perform well and to be accountable when something goes wrong.
3. Create systems to mitigate or control failure.
Once leaders understand the different types of failure, they can create systems and processes that help avoid, mitigate, and control failures.
In a consistent context, such as dispensing medications in a hospital, systems can help minimize the opportunities for error. Nurses can use visual cues, such as a high-visibility vest, to indicate they are concentrating on gathering medications. The hospital can ask nurses to verify medication, dosage, and patient before administering higher-risk medications.
Simulations can help highly skilled professionals learn how to respond in variable contexts. For example, no one wants airline pilots to fail, but pilots should know how to react in an emergency. In a flight simulator, they can practice responding to the unexpected in a safe environment and learn to mitigate the damage from failure.
4. Know when to quit.
Novel contexts are the ones most commonly associated with a “fail fast, fail often” culture. In a research and development setting or an experimental laboratory, intelligent failure is an expected by-product of experimentation and innovation.
Silicon Valley is well-known for its emphasis on experimentation and intelligent failure, but do these “fail fast” cultures know when to quit?
Astro Teller, CEO of X, the moonshot factory (a division of Alphabet, Google’s parent company), describes an approach he calls “monkeys and pedestals.” If someone wants to train a monkey to recite Shakespeare while standing on a pedestal, he says, the hard part isn’t building the pedestal—it’s training the monkey. But because bosses want to see results, too many teams rush to build the metaphorical pedestal while ignoring the bigger, more intractable problem. That way, the team can point to the pedestal as progress.
One team within the moonshot factory developed a way to turn seawater into carbon-neutral fuel. However, after two years of research and development, the team recognized that costs made the project infeasible. They killed the project and freed up resources for other moonshots.
In some organizations, devoting that much time and energy to something that didn’t pan out could be a career-ender; in others, teams might keep building pedestals or trying to train an untrainable monkey, making no progress until the funds run out. The Alphabet team operated in a system that encouraged intelligent failure but not the pursuit of a hopeless cause.
A healthy approach to failure lies somewhere between a “fail fast, fail often” philosophy and a rigid “failure is not an option” philosophy. With a proper balance of psychological safety, reasonable failure mitigation, and intelligent failure where appropriate, leaders and teams can face inevitable mistakes and failures with confidence that they’ll recover and thrive in the long run.
Self-check:
- Do people in our organization feel like they can safely fail? Why or why not?
- What is one way we could improve systems to mitigate basic failures?
- How could we encourage more transparency around failure?