Thursday, July 6, 2023

What makes a risk "acceptable"?

A few weeks ago, Etienne Nichols published an extended post on LinkedIn about risk. Nichols works in the medical device industry, so risk is a critical topic for him. But of course everything we do has some kind of risk—even getting out of bed in the morning! At what point do we decide that the risk is "acceptable" so that we can move ahead?

Unsurprisingly there is a standard that discusses this, and Nichols references it. ISO 14971 specifically covers the "Application of risk management to medical devices," and Nichols quotes from it at length. First he points out that, with respect to any kind of harm, Risk = Probability × Severity. But what is harm? In the context of medical devices, you'd expect that to mean harm to the patient, and of course it includes that among other things. But it turns out that the actual definition is a lot broader. According to ISO 14971, harm means damage or injury to human health, or damage to property, or damage to the environment. All of these have to be considered in a complete risk analysis.
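
(Just to make the arithmetic concrete, here is a minimal, purely illustrative sketch of that Probability × Severity idea, using made-up 1-to-5 scales and an arbitrary acceptability cutoff. None of the category names or numbers come from ISO 14971 or from Nichols; a real risk analysis would derive its scales and acceptance criteria from the standard and from the device in question.)

    # Illustrative only: hypothetical 1-5 ordinal scales and an arbitrary
    # acceptability threshold. None of these values come from ISO 14971.
    PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}
    SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4, "catastrophic": 5}

    ACCEPTABILITY_THRESHOLD = 8  # made-up cutoff, for illustration only

    def risk_score(probability: str, severity: str) -> int:
        """Combine the probability and severity of a harm into one ordinal score."""
        return PROBABILITY[probability] * SEVERITY[severity]

    def needs_further_justification(probability: str, severity: str) -> bool:
        """Flag risks above the (hypothetical) threshold for reduction or benefit review."""
        return risk_score(probability, severity) > ACCEPTABILITY_THRESHOLD

    # Example: an "occasional" harm of "serious" severity scores 3 * 3 = 9,
    # which this toy scheme would flag for further work.
    print(needs_further_justification("occasional", "serious"))  # True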

Then what makes a risk acceptable? That has to be the benefit enabled by taking the risk, and this too is defined broadly. Nichols quotes the following clarification: "Benefits can include positive impact on clinical outcome, the patient's quality of life, outcomes related to diagnosis, positive impact from diagnostic devices on clinical outcomes, or positive impact on public health." This is a broad list, but it has to be: at the very least it has to cover as much ground as the word harm does. What is more, we all know that sometimes the benefits of an action land in a whole different area from its risks. If I drive my car to work, the risks are around things like the morning's traffic or the chance of an accident, but the benefits relate to my job, my coworkers, my customers, and my income. There is no simple arithmetical way to calculate that the benefits outweigh the risks, yet plenty of people drive to work every day and are satisfied that it's the right thing to do. There must be some kind of intuitive "calculation" behind that decision, but it's not an arithmetical one.

In the end, Nichols concludes, "No risk is acceptable without the presence of some benefit. But when the stakes are high enough and the benefit is great enough, there's no risk that could be unacceptable.... [W]hen it's all said and done, the acceptability or unacceptability of a risk boils down to the benefit." And this, too, is logical. We all know that in wartime, soldiers risk their lives to take a hill or a position; but if by so doing they can help end the war, that benefit is great enough to make any risk worth it.


There is one way that this balance between risk and benefit can go badly wrong. That's when the parties who face the risk and the parties who decide on the action and reap the benefit are different people. This is called moral hazard, and its consequences are always bad.

For example, let's pretend that I sell a device that doesn't work. [I don't really.] Then I make money but the patients suffer. If I get away with it, what's to stop me doing it again? As long as I don't suffer any consequences, I might deem the patients' risks to be "acceptable."*

In the real world, this is why manufacturers are legally liable for their products—to prevent just that kind of fecklessness by giving the manufacturers some kind of skin in the game. ("Skin in the game" is more or less the conceptual opposite of moral hazard.**) And in the exact same vein, Nichols is careful to point out that ISO 14971 states "...this subclause [ISO 14971, A.2.7.4] cannot be used to weigh residual risks against economic advantages or business advantages (i.e. for business decision making)."

It's an important qualification.

But so long as that qualification is understood and in place, it's the benefit that makes the risk acceptable. Nichols makes a good argument for this point.

__________

* Compare also Charlie King's protest song, "Acceptable Risks," which treats the exact same concept in a different context. The refrain runs, "But they told me it was safe, and they swore that was true / They said, The risks are all acceptable—acceptable to who?"  

** Therefore, if you ever find a case where a manufacturer is shielded from legal liability for its products, you should assume that moral hazard is a risk, at least in principle.

2 comments:

  1. This is an excellent discussion of a difficult topic, which @Etienne Nichols kicked off and our Blogger has continued. The Technical Committee that authored ISO 14971 had long and extensive discussions on the topic. One thing they found was the lack of a definition of "benefit" from any of the regulators or medical device standards, so 14971 now has one in 3.2, partially discussed above. (There is a definition of "clinical benefit" in the MDR, though our discussion is broader than clinical benefit.) There are extensive discussions of the topic in a couple of other places not identified here, the first being the Introduction to ISO 14971:2019, which sets the context for the risk and benefit discussion, talking about clinical procedures and the particular patient who may be considering the device. Further discussion is in Clause 7.4 of ISO TR 24971:2020, adding even more to the topic.

    Another issue brought up is the definition of harm, which appears to be incorrectly stated. ISO 14971, 3.3, states harm to be "injury or damage to the health of people, or damage to property or the environment". The definition was changed by removing the word "physical" before injury in the 2019 version, to accommodate cyber and data security harm and emotional or any other harm that the patient or affected family may suffer. Additionally, harm to bystanders is to be considered, such as someone nearby when an incident such as a release of energy or substances by a device occurs, leading to an injury.

    Finally, we cannot discuss benefit-risk without pointing to the four FDA device guidance documents on benefit-risk: one covers PMA and De Novo submissions, one 510(k) submissions, one uncertainty of data, and a final one Product Availability, Compliance and Enforcement. These are the only guidance documents available anywhere on this topic, and they have some examples that are useful. For anyone involved in this area, these are a MUST READ.

    Replies
    1. Thank you so much for this detailed clarification, and pointers to further reading! I am also grateful for the correction of my quotation.

