There are people on the Internet who claim that what we see as Reality is actually a giant Simulation, and some days it seems like they have a point. Would random chance in real life have given us the entertaining string of disasters we've experienced so reliably this spring, or should we assume that it's a plot device dreamed up by some intergalactic blogger and content creator with an offbeat sense of humor? Since my purpose in this blog is not to tackle the Big Metaphysical Questions, I'll leave this one unanswered, remarking only that our record of calamities over the last few months has been strikingly consistent.
A lot of my posts since January have been related, in one way or another, to the tribulations of Boeing, who seem to have dominated the headlines for some time now in spite of themselves. But of course that's not all that has been going on. Also back in January, the electric grid in the Province of Alberta came close to shutting down, seemingly because, … (checks notes) … it got too cold. (I discuss this event here.) Then in another extreme weather event that did not repeat the Alberta experience but somehow rhymed with it, massive hailstorms in central Texas three weeks ago destroyed thousands of solar panels.* And perhaps the most dramatic recent catastrophe (upstaging even Alaska Airlines flight 1282) took place early Tuesday morning a week ago, when a massive container vessel outbound from Baltimore Harbor collided with one of the supports of the Francis Scott Key Bridge—and demolished the bridge.
It should go without saying that tragedies like this are devastating. If there is any way to find a silver lining around clouds this dark, it is that by analyzing what went wrong we can often learn how to prevent similar catastrophes in the future.
Sometimes this analysis can rely on straightforward data collection about the environment in which the planned operation will take place. Historical records could offer information, for example, on the likelihood of cold weather in Alberta in January, or the risk of hail in central Texas. But often the question is more difficult. For example, the Dali (the container vessel in Baltimore Harbor) appears to have suffered some kind of power failure just before the accident, one that could have made it impossible to steer the ship. I'm sure there was some kind of planned protocol for how to handle a power failure; there was probably an emergency backup power supply available. But how much time did it take to activate the backup power? Did the advance planning take account of the possibility that the power would go out when the ship was in such a position that even a minute or two without steering could mean catastrophe? At this point I don't have any information to answer that question. But I can easily imagine that the answer might be "No, we assumed that five minutes [for example] would be plenty fast enough" … and I can also imagine that back when the planning was done, that might have sounded reasonable! Today we would evaluate the same question differently, but only because we have seen an accident where seconds counted.**
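To make the timing question concrete, here is a purely illustrative back-of-the-envelope sketch in Python. The speed and the candidate delays are round numbers I've assumed for the sake of the arithmetic, not data about the Dali or its procedures:

```python
# Back-of-the-envelope sketch: how far does a ship travel while waiting
# for backup power?  All figures below are hypothetical round numbers
# chosen for illustration, not data about the Dali.

KNOT_IN_M_PER_S = 0.514  # one knot is roughly 0.514 metres per second


def uncontrolled_travel_m(speed_knots: float, delay_seconds: float) -> float:
    """Distance covered (in metres) with no steering, assuming the ship
    simply keeps moving at its current speed for the whole delay."""
    return speed_knots * KNOT_IN_M_PER_S * delay_seconds


if __name__ == "__main__":
    speed = 8.0  # an assumed harbor transit speed, in knots
    for delay in (30, 60, 120, 300):  # assumed seconds until steering returns
        distance = uncontrolled_travel_m(speed, delay)
        print(f"{delay:>4} s without steering ≈ {distance:,.0f} m of uncontrolled travel")
```

Even with these made-up numbers, the point comes through: at eight knots, a single minute without steering is roughly a quarter of a kilometre of uncontrolled travel, and five minutes is well over a kilometre. A protocol that looks comfortably fast on paper can be far too slow when the ship happens to be lined up on a bridge pier.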
So it turns out that analyzing catastrophes is a hard thing to do. In particular, it is important to recognize that even when we can collect all the data, there are huge innate biases we have to overcome in order to understand what the data are telling us. Two important ones are the Hindsight Bias and the Outcome Bias.
The Hindsight Bias means that when we already know the outcome, we exaggerate (in retrospect) how easily we could have seen it coming at the time. This is why, when people refight battles like Gettysburg or Waterloo in tabletop games, the side that lost historically often ends up winning. Once you know what stratagems your opponent could use to win (because they are part of the historical record), it becomes easier to block them.
The Outcome Bias means that when we already know the outcome, we judge the decisions that people made in the moment by how far they contributed to the outcome. So if someone took steps in the middle of a crisis which looked logical at the time but ultimately made things worse, retrospectively we insist that he's an idiot and that it was his "bungling" that caused the disaster. We ignore the fact that his actions looked logical at the time, for reasons that must have made sense—and therefore, if it happens again, somebody else will probably do the exact same thing. By blaming the outcome on one person's alleged "stupidity" we lose the opportunity to prevent a recurrence.
If you can spare half an hour, there's a YouTube video (see the link below) that explains these biases elegantly. It traces the history of the nuclear accident at Three Mile Island on March 28, 1979. The narrator walks us through exactly what happened, and why it turned out so badly. And then the narrator turns around to show us that the whole story he just told is misleading! It turns out that Hindsight Bias and Outcome Bias are fundamentally baked into the way we tell the story of any disaster. And if we allow ourselves to be misled by them, we can never make improvements to prevent the next accident.
The basic lessons are ones you've heard from me before—most critically, that human error is never a root cause but always a symptom. (See also here, here, and here.) But the video does a clear and elegant job of unpacking them and laying them out. And even though we all know how the story is going to end, the narrator makes it gripping. Find a free half hour, and watch it.
__________
* I have seen multiple posts on Twitter insisting that this happened again a week later, but the weather websites I've cross-checked disagree. See for example this news report, which showcases a tweet dating the storm to March 24, whereas the text of the article itself dates it to March 15.
** Again, to be clear, I have no genuine information at all about the disaster planning aboard the Dali. I am reconstructing an entirely hypothetical situation, to show how our judgements about past decisions can be affected by our experience in the present.