A few months ago, my team was gearing up to launch a new automation tool for case assignment - a project that was key to our roadmap, and one we had poured enormous energy into. We spent weeks identifying scenarios, testing in a dev environment, and reflecting on potential issues. We wrote crystal-clear documentation and worked with a rockstar team of developers, testers, and communicators. We thought we had every base covered. But when launch day arrived, chaos ensued. The tool hit snags that never showed up in testing - edge cases we hadn’t anticipated. Worse, some team members seemed blindsided by the changes, despite our efforts to keep everyone in the loop. It was a classic “complex failure,” as Amy C. Edmondson describes in her book Right Kind of Wrong: The Science of Failing Well.
Complex failures, Edmondson explains, aren’t the result of
one person’s mistake or a single oversight. They happen in intricate systems
where multiple factors - technology, human behavior, and unexpected variables -
collide in ways we didn’t foresee. Our tool’s launch wasn’t derailed by
laziness or lack of effort; it was a tangle of technical surprises and
communication gaps. Here’s what I learned from the experience, guided by
Edmondson’s insights, and actionable steps to navigate complex failures in your
own work.
Key Learnings from Complex Failure
Even the Best Plans Can’t Catch Everything
- We
thought our testing was exhaustive, but real-world conditions revealed
edge cases we hadn’t seen in the dev environment. Edmondson notes that
complex failures often stem from “unique combinations of needs, people,
and problems” that don’t show up in controlled settings. Our mistake
wasn’t neglecting testing - it was assuming our tests mirrored reality
perfectly. Real users, real data, and real systems always bring surprises.
Communication Needs Constant Reinforcement
- Despite
our clear documentation and updates, some team members were still caught
off guard. Edmondson’s research, including her work on psychological
safety, shows that information doesn’t always stick unless it’s actively
reinforced. We’d shared the plan, but we didn’t account for how busy
schedules or differing priorities might dilute its impact. People need
repeated, clear signals to stay aligned.
Subtle Warning Signs Are Easy to Overlook
- Looking back, there were hints of trouble - like
a vague question in a meeting about “unusual cases.” Edmondson emphasizes that
complex failures are often preceded by “subtle warning signs” we miss when
we’re focused on execution. We were so confident in our prep that we didn’t dig
deeper into those fleeting concerns.
Teamwork Is the Key to Recovery
- No one person could’ve fixed this launch alone.
Edmondson stresses that complex failures require collaborative problem-solving
to untangle. Once we realized things were going south, our team rallied - developers
helped prioritize key issues, communicators sent urgent updates, and managers
triggered the rollback plan (sketched below). That collective effort turned a
disaster into a manageable hiccup.
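What made recovery fast was that the rollback was a configuration flip, not an emergency redeploy. Here’s a minimal sketch of that kind of kill switch - not our actual implementation; the flag name and the assignment functions are hypothetical stand-ins:

```python
# Minimal sketch of a rollback "kill switch" for an automated case-assignment
# tool. Every name here is a hypothetical stand-in; the point is that recovery
# is a config flip, not an emergency redeploy.
import logging
import os

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("case-assignment")

def automated_assignment(case: dict) -> str:
    """Stand-in for the new automation path."""
    return f"auto-assigned case {case['id']} to {case.get('team', 'triage')}"

def manual_assignment_queue(case: dict) -> str:
    """Stand-in for the pre-launch manual process we could fall back to."""
    return f"case {case['id']} queued for manual assignment"

def assign_case(case: dict) -> str:
    """Use the new path only while the kill switch is on."""
    # Read the flag at call time so flipping it takes effect immediately.
    if os.environ.get("AUTO_ASSIGNMENT_ENABLED", "false").lower() == "true":
        try:
            return automated_assignment(case)
        except Exception:
            # Any surprise in the new path degrades gracefully to the old one.
            logger.exception("Automated assignment failed; falling back")
    return manual_assignment_queue(case)

print(assign_case({"id": 42}))  # manual queue unless the flag is set
```

Because the flag is read on every call, flipping one environment variable reroutes new cases to the old manual queue within seconds.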
Action Points to Navigate Complex Failure
Here’s how to handle complex failures, inspired by
Edmondson’s framework and our launch gone wrong:
Conduct a Thorough Post-Mortem
- When things derail, pause for an “after-action
review,” as Edmondson suggests. Gather your team to map out what happened: What
broke? Why didn’t we see it coming? For us, this meant identifying the gap
between our test environment and live conditions, plus the communication
breakdowns. This helps you pinpoint systemic issues, not just surface errors.
Test Beyond the Controlled Environment
- Real-world conditions are messier than dev
environments. Edmondson advises anticipating variability in complex systems.
Next time, simulate live conditions more closely - think stress tests with real
user data or beta rollouts to catch edge cases. Deliberately invent improbable
test cases to mimic unexpected scenarios; one cheap way to generate them is
sketched below.
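Property-based testing is one inexpensive way to manufacture those improbable cases. Here’s a minimal sketch using the Hypothesis library, where `route_case` is a hypothetical stand-in for real assignment logic:

```python
# Sketch of property-based testing with the Hypothesis library (run with
# pytest). `route_case` is a hypothetical stand-in for real assignment logic.
from hypothesis import given, strategies as st

def route_case(subject: str, priority: int) -> str:
    """Hypothetical routing rule standing in for the real tool."""
    if priority >= 3 and "escalation" in subject.lower():
        return "tier-2"
    return "tier-1"

# Hypothesis generates the messy inputs nobody writes by hand - empty strings,
# odd unicode, extreme integers - and shrinks any failure to a minimal case.
@given(subject=st.text(), priority=st.integers())
def test_route_case_always_returns_a_known_queue(subject, priority):
    assert route_case(subject, priority) in {"tier-1", "tier-2"}
```

Hypothesis hammers the function with generated edge cases and reports a minimal failing example - exactly the kind of input our dev-environment suite never contained.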
Reinforce Communication Relentlessly
- Don’t
assume one email or meeting is enough. Edmondson’s work on team dynamics
shows that people need consistent, multi-channel updates to stay aligned.
Try pre-launch briefings, visual reminders (like dashboards), or quick
check-ins to confirm everyone’s on the same page. We now use a dedicated Teams
chat channel for real-time communication; a sketch of automating those posts follows.
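If the channel has an incoming-webhook connector, those repeated signals can even come from the tooling itself. A minimal sketch, with a placeholder URL and the simple text payload that incoming webhooks accept:

```python
# Minimal sketch of pushing a status update into a Teams channel through an
# incoming-webhook connector. The URL is a placeholder; incoming webhooks
# accept a simple {"text": ...} JSON payload.
import requests

WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/placeholder"

def post_status(message: str) -> None:
    """Send a short, repeated signal so alignment doesn't hinge on one email."""
    resp = requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
    resp.raise_for_status()  # surface delivery failures instead of hiding them

post_status("Case-assignment rollout: 25% migrated, no new edge cases today.")
```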
Foster Psychological Safety for Early Warnings
- Create
a culture where people feel safe raising concerns, even small ones.
Edmondson’s research shows that psychological safety helps teams catch
issues early. Encourage questions like, “What are we missing?” or “What
feels off?” In our case, a tester’s hesitation or a double-check might’ve
flagged those edge cases if we’d made space for it.
Share Learnings to Build Resilience
- Turn
failures into collective wisdom. Edmondson advocates for a “learning
culture” where insights are documented and shared. After our launch, we updated
our communication templates and test scenarios so the next teams start with those
lessons baked in. Share your
lessons via team docs, wikis, or quick debriefs to prevent repeat
failures.
Failing Well Is a Superpower
Our automation tool launch was a humbling reminder that even
the best-laid plans can falter in complex systems. But as Edmondson writes in Right Kind of Wrong, failing well means
leaning into the mess to learn and improve. That rocky launch didn’t define us –
it pushed us to tighten our testing, communicate better, and build a stronger
team.
The next time you face a complex failure – whether it’s a
project implosion or a personal plan gone awry – take a breath, rally your
team, and dig for the lessons. What’s a complex failure you’ve tackled, and
what did it teach you? Drop your story in the comments – I’d love to hear it.
For more on turning failure into growth, check out Amy C.
Edmondson’s “Right Kind of Wrong: The Science of Failing Well” at Amazon.