There's a ritual that follows almost every major institutional failure. A crisis surfaces — a hospital error, a corporate scandal, a government program that collapsed, a school where kids keep falling through the cracks. People are angry, and rightly so. And then, almost immediately, a name appears. A supervisor who missed a warning sign. A manager who didn't follow protocol. A low-level employee who made the wrong call on a bad day.
The name absorbs the anger. There's an investigation. Sometimes there's a termination, a resignation, an apology. And then things go back to the way they were.
I've watched this cycle repeat so many times across so many different institutions that I've stopped thinking of it as a failure of accountability. I've started thinking of it as a feature. The ritual of blaming individuals for systemic problems doesn't just fail to fix anything — it actively prevents the fixing from happening. And I think most institutions know this, even if no one says it out loud.
What "Systemic Failure Personalization" Actually Means
The term sounds academic, but the idea is simple. Systemic failure personalization happens when an organization (or the public, or the press) attributes an outcome that was produced by structural conditions to the choices of a specific person or small group of people.
The structural conditions might be: understaffing, misaligned incentives, inadequate training, conflicting policies, an impossible workload, a culture that punishes speaking up, or a design that almost guarantees the error someone eventually made. None of those things have a face. They can't be fired. They don't make for a satisfying press conference. So instead, we find the person who was closest to the failure at the moment it became visible — and we make them the story.
What's important to understand is that the individuals involved are often genuinely at fault in some narrow sense. The nurse did deviate from protocol. The trader did take on too much risk. The inspector did skip a step. I'm not arguing that people bear no responsibility for their actions. I'm arguing that when the system reliably produces a certain kind of error, and we respond by removing the person who made that error this time, we've done essentially nothing to prevent the next person from making the same one.
That's the tell. When the same failure keeps happening to different people, in different locations, at different times — that's not a pattern of individual bad judgment. That's a system doing what it was built to do.
Why We Default to the Person
The pull toward individual blame is deep, and in my view it comes from a few places at once.
The first is cognitive. Decades of social psychology research on the fundamental attribution error show that people consistently overestimate the role of individual character and underestimate the role of situational factors when explaining behavior. When something goes wrong, we reach for who before we reach for what. It's a wiring issue. Lee Ross, the Stanford psychologist who named the fundamental attribution error in 1977, found the bias so persistent that it operates even when people are explicitly told about the situational constraints on someone's behavior. We discount the situation. We stick with the person.
The second is institutional. Organizations have a powerful incentive to locate failure in an individual rather than in their own design. A structural problem implies the organization needs to change — which is expensive, slow, politically difficult, and requires leadership to admit that something they built or tolerated was wrong. Blaming a person is cheaper and faster and lets the institution present itself as having "addressed the issue" without altering anything fundamental. Research on organizational incident reporting has found that blame-focused responses to failure are significantly correlated with underreporting in subsequent periods — meaning the cleanup itself creates new risk by silencing the signals that might otherwise surface the next problem.
The third is cultural. We have a deep investment in the idea that outcomes track choices. If bad things happen because of bad choices, then we can protect ourselves by making good ones. If bad things happen because of structural conditions we didn't design and can't control, that's much harder to sit with. Individual blame is, among other things, a comfort mechanism — a way of maintaining the belief that the world is navigable, that effort and virtue are protective.
I don't think any of these tendencies are stupid or malicious. They're understandable. But they add up to a systematic bias, and that bias has real costs.
What Gets Lost in the Blame
A 2016 study published in BMJ Quality & Safety analyzed incident reports from healthcare systems across multiple countries and found that organizations that responded to adverse events primarily through staff discipline had measurably worse safety outcomes than those that used root-cause analysis focused on systemic factors. The researchers estimated that blame-based responses delayed structural improvements by an average of two to four years per incident type — years in which the same class of error continued to occur.
The aviation industry figured this out before healthcare did. After a series of catastrophic crashes in the 1970s and 1980s traced to cockpit communication failures, the industry developed Crew Resource Management — a set of protocols explicitly designed around the insight that human error in high-stakes environments is largely predictable and system-shaped. Non-punitive incident reporting arrived in the same era: the FAA launched a confidential reporting program in 1975, and in 1976 its administration passed to NASA as the Aviation Safety Reporting System (ASRS), precisely so pilots could report their own errors to a body with no enforcement power and no reason to blame them. The data that came in was staggering: near-misses were happening constantly, and most of them followed recognizable structural patterns. Over the following three decades, commercial aviation's fatal accident rate dropped by roughly 65%. The system got safer not because pilots became more virtuous, but because the system got redesigned around what human beings actually do under pressure.
Healthcare, by contrast, still loses an estimated 250,000 patients per year in the United States to preventable medical errors — a figure from a landmark 2016 Johns Hopkins study — which would make medical error the third leading cause of death. The barrier most often cited in patient safety research is a blame culture that discourages reporting and root-cause analysis in favor of individual accountability.
The comparison is worth sitting with.
| Industry | Dominant Response to Failure | Reporting Culture | Outcome Trend |
|---|---|---|---|
| Commercial Aviation | Systemic (CRM, ASRS) | Non-punitive, anonymous | Fatal accident rate down ~65% since 1980 |
| Healthcare (US) | Primarily Individual Blame | Blame-averse, underreporting | ~250,000 preventable deaths/year (est.) |
| Financial Services | Mixed (individual + regulatory) | Variable, often suppressed | Repeated systemic crises (2001, 2008, 2023) |
| Nuclear Power | Systemic (defense-in-depth) | Non-punitive, mandatory | Extremely low operational incident rate |
The pattern isn't subtle. The industries that treat failure as a system problem worth understanding tend to get safer over time. The industries that treat failure as a people problem worth punishing tend to keep having the same failures.
How to Recognize It When It Happens
Systemic failure personalization doesn't always look like a scapegoat. Sometimes it's dressed up as accountability, performance management, or professional standards. Here are the structural signals I've learned to watch for.
The "bad apple" framing. When an institution describes a failure as the result of "one bad actor" or "a small number of individuals who didn't live up to our standards," and the failure involved dozens of people across multiple teams or locations — the framing is almost certainly wrong. Bad apples require an explanation for why the barrel kept producing them.
The removal without redesign. If the response to a failure is a personnel change, and nothing else changes — no policy review, no incentive restructuring, no process redesign — the institution is betting that this person was uniquely problematic and that the next person will be better. That bet is almost always wrong.
The repeated failure signature. When the same type of error appears repeatedly, across different people and time periods, that is the system telling you something about itself. The error has a fingerprint. The person's name changes; the fingerprint doesn't. (This is the one signal concrete enough to check mechanically; a minimal sketch follows this list.)
The downstream silencing. After a blame-focused response, do people in that organization start speaking up more about problems — or less? If less, the "accountability" response has made the system more opaque and more dangerous, not less.
Accountability concentrated at the bottom. In most institutionalized blame cycles, the person who absorbs consequence is significantly lower in the organizational hierarchy than the person who designed the policy, set the staffing level, or created the culture. This is a reliable signal that something structural is being laundered through an individual who happened to be visible at the moment of failure.
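Here is that sketch. Everything in it is a stand-in: the record fields, the thresholds, and the error-type labels are all hypothetical, invented for illustration. The logic is the point. An error type that keeps recurring across distinct people and sites is a property of the system, not of the names attached to it.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class Incident:
    """Hypothetical incident record; real systems will have richer fields."""
    error_type: str   # e.g. "wrong-dose", "skipped-inspection-step"
    person: str       # whoever was closest to the failure when it surfaced
    site: str         # where it surfaced
    occurred: date

def systemic_signatures(incidents, min_people=3, min_sites=2):
    """Flag error types that recur across distinct people and sites.

    The thresholds are illustrative. The test is simple: if the same
    error keeps appearing with different names attached, treat it as a
    feature of the system rather than of the individuals.
    """
    people, sites = defaultdict(set), defaultdict(set)
    for inc in incidents:
        people[inc.error_type].add(inc.person)
        sites[inc.error_type].add(inc.site)
    return [
        etype for etype in people
        if len(people[etype]) >= min_people and len(sites[etype]) >= min_sites
    ]

reports = [
    Incident("wrong-dose", "nurse A", "ward 1", date(2024, 1, 5)),
    Incident("wrong-dose", "nurse B", "ward 3", date(2024, 2, 11)),
    Incident("wrong-dose", "nurse C", "ward 1", date(2024, 4, 2)),
    Incident("late-filing", "clerk D", "office 2", date(2024, 3, 9)),
]
print(systemic_signatures(reports))  # ['wrong-dose']: same fingerprint, changing names
```

Nothing in the sketch is sophisticated, which is part of the argument: the information needed to distinguish a systemic fingerprint from an individual lapse is usually already sitting in the incident log. What an organization does with that distinction is the cultural question.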
The Accountability Question
Here's where it gets genuinely difficult, because I don't think the answer to systemic failure personalization is to stop holding individuals accountable. That would be its own kind of distortion. People do sometimes act with genuine negligence, or bad faith, or cruelty. Those things are real, and they warrant real responses.
But I think there's a meaningful distinction between accountability that is oriented toward understanding what happened and preventing it from happening again, and accountability that is primarily oriented toward assigning blame and producing a satisfying resolution. The first is additive — it generates information, it changes conditions, it makes future failure less likely. The second is mostly theatrical. It serves the institution's need to appear responsive without requiring it to actually change.
The hardest version of this question comes up around senior leaders. When a CEO presides over a company whose culture produced a catastrophic outcome, the instinct is often to protect them — they weren't in the room, they didn't make the specific decision, the problem was downstream. But the culture was theirs to shape. The incentive structure was theirs to set. The signals were available to them. I think accountability for systemic conditions belongs, in large part, to the people who had the power and responsibility to change those conditions. That is rarely the person at the bottom of the hierarchy who got fired.
What Systemic Thinking Actually Requires
Looking at a system honestly — rather than looking for a person to blame — requires a few things that institutions tend to find uncomfortable.
It requires admitting that the problem was predictable, which often means admitting it should have been addressed sooner. It requires examining the incentives that produced the behavior, which usually means examining the organization's own priorities. It requires creating safe channels for reporting, which means giving up some of the control that comes from suppressing bad news. And it requires being willing to change things that are expensive or politically inconvenient to change.
None of this is technically difficult. The tools exist — root-cause analysis, human factors engineering, anonymous reporting systems, systemic incentive audits. They are not mysteries. What they require is a genuine preference for understanding over blame, and that's a cultural and leadership question more than a methodological one.
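As one small illustration of how ordinary these tools are, here is a sketch of the intake step of an ASRS-style anonymous reporting channel. All of it is assumed for illustration; the field names and the choice of retained fields are hypothetical. The one design idea it encodes is real, though: identity is stripped at intake, so the stored record cannot later be repurposed for discipline, no matter who gains access to it.

```python
import uuid

# Hypothetical raw report as submitted; every field name here is an assumption.
raw_report = {
    "reporter_name": "J. Doe",
    "employee_id": "E-4471",
    "unit": "night shift, ward 3",
    "error_type": "wrong-dose",
    "narrative": "Interrupted during med pass; two near-identical vials stored side by side.",
}

# Only the fields needed for pattern analysis survive intake.
# A production system would also generalize quasi-identifiers like "unit".
ANALYSIS_FIELDS = ("unit", "error_type", "narrative")

def deidentify(report):
    """Strip identity at intake rather than trusting downstream access controls."""
    record = {field: report[field] for field in ANALYSIS_FIELDS}
    record["report_id"] = str(uuid.uuid4())  # random id, not traceable to a person
    return record

print(deidentify(raw_report))
```

The design choice worth noticing is where the anonymization happens: at the moment of intake, not as a policy promise layered on top of an identifiable record. That is the same structural move ASRS made by housing pilots' reports at NASA rather than at the regulator.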
I think the organizations that actually get better over time are the ones that have developed something like intellectual honesty about failure — a genuine curiosity about why things went wrong that is stronger than the impulse to find someone to blame for it. Those organizations are not the norm. But they exist, and the gap between how they perform over time and how blame-focused organizations perform is hard to explain any other way.
The Deeper Pattern
What I keep coming back to is this: the way an institution responds to failure is one of the clearest windows into what it actually values, as opposed to what it says it values. An institution that says it values safety, or quality, or its people — and then responds to every failure by identifying a person to blame — has given you a useful piece of information. It values the appearance of accountability more than the substance of it.
This doesn't make the people running these institutions bad. It makes them human, caught in their own structural pressures — the pressure to respond visibly, to protect the institution's reputation, to avoid the expense and disruption of genuine change. The same bias that makes them point at individuals operates on them too. They are inside the same system.
But the pattern has costs. Real ones — in lives, in money, in organizational capacity. And I think the people who sit at the top of institutions have a specific responsibility to resist the pull toward comfortable blame, even when it's politically easier, even when there's a press conference to get through, even when the board is asking for someone's head.
The question worth asking — the one that tends not to get asked — is simple: did this failure tell us something about how we're built? If the honest answer is yes, then removing a person and calling it solved is not accountability. It's a deferral.
Last updated: 2026-04-28
Jared Clark is the founder of PatternThink, where he writes about the hidden structural patterns that shape institutions, organizations, and human systems. Read more at patternthink.com.