• J-dawg 6 days ago

    I am reminded of this story of a medical reversal: https://www.badscience.net/2011/03/when-ethics-committees-ki...

    > This is not an abstract problem. Here is one example. For years in A&E, patients with serious head injury were often treated with steroids, in the reasonable belief that this would reduce swelling, and so reduce crushing damage to the brain, inside the fixed-volume box of your skull.

    > Researchers wanted to randomise unconscious patients to receive steroids, or no steroids, instantly in A&E, to find out which was best. This was called the CRASH trial, and it was a famously hard fought battle with ethics committees, even though both treatments – steroids, or no steroids – were in widespread, routine use. Finally, when approval was granted, it turned out that steroids were killing patients.

    > This was an extraordinary piece of work. At the end of the trial, where the head injuries were pretty bad (a quarter of the people died), it turned out there were two and a half extra deaths for every one hundred people treated with steroids.

  • e40 6 days ago

    > in the reasonable belief that this would reduce swelling

    It's amazing to me that they wouldn't track this from the beginning, and do the experiment, to vet the idea. I mean, they went all in on it, why not give it to 50% of the head injuries and see?

  • kbenson 5 days ago

    Possibly because if you make an experiment out of it, then there are ethical concerns you might have to consider (or worry about other people throwing in your face), whereas if you just do it because of your stated belief that it would help, then that specific problem is no longer a concern.

    It's impossible to rationally consider all the myriad decisions we must make in the modern day, so people rely on rules of thumb that work in most cases (or that we hope work in most cases). For example, "don't perform medical experiments on people that may cause them more harm than not experimenting." Unfortunately, we often start relying on the rule of thumb as the reason itself, instead of as a heuristic that lets us short-circuit rigorous rational reasoning, and then we end up with people using those rules of thumb to deny, or criticize, actions where they make no sense.

  • e40 5 days ago

    It seems like part of the world is going in the direction of verify don't assume, and the other half has decided that their gut always has the right answer. It's a strange time to be alive.

  • AstralStorm 5 days ago

    Except some of those won't be. We're making life-or-death decisions here (or at least decisions about productive years of life). There is always a cost, and this cost will come back to bite you sooner or later.

    The highest standard for decision making is definitely not "going with your gut". I wonder if anyone is actually claiming this, or if you just like to build strawmen. Typical reasons to adopt a practice are politics and marketing, then legacy, and only then effectiveness...

  • jessriedel 6 days ago

    I'm first in line to criticize unnecessary medicine based on flimsy evidence, but in principle there's nothing wrong with medical reversals. As you begin to collect data about anything, your best guess for the truth will be bouncing around wildly and only slowly settle down as the data accumulates. "Do no harm" is ok as a rule of thumb to counteract the natural tendency for doctors to err on the side of over-medicating (and to account for our expectation that a random intervention is net negative), but it's silly to use as an inflexible principle. If you're so cautious that you never recommend harmful interventions, you're missing out on plenty of interventions that are positive in expectation.
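    The point about early estimates bouncing around can be sketched with a toy simulation (the numbers are invented for illustration, not from any trial): as noisy observations accumulate, the running best guess swings widely at first and only slowly settles near the true effect, so some reversals are exactly what honest updating predicts.

    ```python
    import random

    # Hypothetical setup: a treatment with a true effect of +0.1,
    # observed one noisy patient outcome at a time.
    random.seed(0)
    true_effect = 0.1
    total = 0.0
    estimates = []
    for n in range(1, 1001):
        total += true_effect + random.gauss(0, 1)  # one noisy observation
        estimates.append(total / n)                # running mean = current best guess

    # The first 20 best guesses swing far more than the last 20.
    early_spread = max(estimates[:20]) - min(estimates[:20])
    late_spread = max(estimates[-20:]) - min(estimates[-20:])
    print(early_spread, late_spread)
    ```

    Any practice adopted while the estimate was still in its early, wide-swinging phase has a decent chance of being "reversed" later, with no one having reasoned badly.
    
    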

    Do any of these studies of "medical reversals" attempt to estimate how often reversals should be made ideally?

  • najarvg 6 days ago

    I think you may be missing the larger point the authors are trying to make; it's more nuanced. For every item on a treatment pathway, ask whether there is peer-reviewed evidence backed by a randomized clinical trial (where possible) to support it. If not, consider doing one. For evidence of harms done by medical reversal, see the other thread, and consider reading the book "Ending Medical Reversal" by one of the authors, Adam Cifu. Also consider following them on Twitter if this topic is of interest (Adam Cifu, Vinay Prasad et al.).

  • jessriedel 6 days ago

    Your first two sentences aren't supported by the rest of your comment because I'm not rejecting the entire article. (I don't disagree with any of that other stuff you wrote.) I'm just criticizing the implicit assumption that medical reversals are necessarily evidence of a mistake.

    Indeed, from the second paragraph (emphasis mine):

    > Medical reversals are a subset of low-value medical practices and are defined as practices that have been found, through randomized controlled trials, to be no better than a prior or lesser standard of care (Prasad et al., 2013; Prasad et al., 2011).

    That is, the authors assert that something is low value if it is later proven to not work.

  • the_af 6 days ago

    Trying to understand here: is the medical reversal itself which causes harm, or is the harm caused by all the time spent pursuing the wrong treatment before the reversal?

  • DoreenMichele 6 days ago

    That will depend on the treatment in question. In some cases, the practice is harmful per se. In other cases, it's not harmful per se, but will prevent a better treatment option from being deployed.

  • AstralStorm 5 days ago

    Finally, it might prevent people from supplementing their care with additional practices that are of value, due to cost, even when it doesn't directly preclude them.

  • caycep 6 days ago

    To put it in statistical terms: is it more appropriate to look at these as "reversals", or as typical Bayesian learning as our understanding of anatomy and pathophysiology improves over time?

  • jessriedel 5 days ago

    (Assuming you're asking for clarification/confirmation.) Well, some will represent bad/unwise practices, and some will be prudent Bayesian corrections. My prior is that most cases of reversal are the former, but you'd actually have to analyze the past evidence and the decision-making procedure to tell.

  • J-dawg 6 days ago

    I suppose if you end up reversing an established thing, you'd better have a damn good reason why you were doing that thing in the first place.

    Why give any treatment that hasn't been through a randomised trial? (Unless, of course, you are giving it as part of a trial)

  • jessriedel 6 days ago

    Because sometimes we have good reasons to believe something works even without an RCT, and it would be unethical to withhold such treatment. (Not to mention that there is a continuum of RCT quality, so it's unclear what would even count as "proven".)

    The classic tongue-in-cheek example is parachutes:

    > Objectives: To determine whether parachutes are effective in preventing major trauma related to gravitational challenge.

    > Design: Systematic review of randomised controlled trials.

    > Main outcome measure: Death or major trauma, defined as an injury severity score > 15.

    > Results: We were unable to identify any randomised controlled trials of parachute intervention.

    > Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.

    https://www.bmj.com/content/327/7429/1459

  • pmyteh 6 days ago

    This year's Christmas BMJ actually contained an RCT of parachute use when falling out of aeroplanes. It found no effect.

    It turns out that study design can also be a problem, even with 'gold standard' designs like the RCT...

    https://www.bmj.com/content/363/bmj.k5094

  • scott_s 6 days ago

    This is why David Gorski and Steven Novella advocate for Science Based Medicine: https://www.painscience.com/articles/ebm-vs-sbm.php Basically, it advocates incorporating scientific principles into reasoning about medical practices.

  • ChrisSD 6 days ago

    IMHO, mostly due to history and some caution. A thoroughly oversimplified history would go something like this:

    In days past, a lot of medicine was based on what seemed to work or what ought to work. Unless the effect is really obviously disastrous, it's easy to believe an intervention is helping even if it's not. Medical research was of course done, but doctors mostly went with their gut or whatever article they'd recently read in a journal and liked the sound of.

    It's only since the late 80's and through the 90's that Evidence Based Medicine really started to get going. Since then doctors have (sometimes slowly) come round to the idea. However all this research takes time. What do you do about those interventions that haven't been tested yet?

    You could stop doing anything that hasn't been thoroughly researched, but that risks letting people die or suffer unnecessarily.

  • DoreenMichele 6 days ago

    It also risks malpractice suits. Over-testing and over-prescribing are rooted in part in doctors trying to cover their own ass and avoid being sued, plus having a good defense in the event they do get sued.

  • AstralStorm 5 days ago

    You do know that ineffective practice also risks malpractice suits if it's identified? Even worse if, by trying random approaches (face it, an untested experimental medicine is mostly random), you end up harming the patient.

  • the_af 6 days ago

    I'm not sure I understand the terminology. If "medical reversal" means -- apologies if I misunderstood -- "stopping a medical treatment which hasn't been shown to be helpful, or one that is not backed by evidence", why is the reversal itself the problem?

    Shouldn't the focus be on applying treatments backed by robust evidence in the first place, rather than on medical reversals? Of course the two are connected, but the wording seems odd. Medical reversals seem to be a symptom of the problem, not the root cause.

    Of course, I might be misunderstanding the terminology.

    edit: I've read the abstract and it seems I am indeed misunderstanding the definition, but for the life of me I cannot understand what "medical reversal" means in layman words.

  • YeGoblynQueenne 6 days ago

    I was wondering the same thing. I found this definition:

    > Medical reversal occurs when a new clinical trial — superior to predecessors by virtue of better controls, design, size, or endpoints — contradicts current clinical practice. In recent years, we have witnessed several instances of medical reversal. Famous examples include the class 1C anti-arrhythmics post-myocardial infarction (contradicted by the CAST trial) or routine stenting for stable coronary disease (contradicted by the COURAGE trial).

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3238324/

    The article I link to shares an author with the paper originally posted, way above. Shame that this author seems to have lost the skill of ensuring that terms are properly defined.

  • hannob 6 days ago

    I think you're not misunderstanding, it's exactly what the term "Medical Reversal" is trying to capture. I.e. of course it's only a symptom, but it's something that can be measured to show there's a problem early on.

  • scott_s 6 days ago

    My layman's definition of medical reversal: a current medical practice that is worse than what we did before.

  • the_af 6 days ago

    Ah, thanks! So indeed I was mistaken. I misunderstood it to mean "reverting an unproven/inconclusive treatment" when it actually means "conducting an unproven/inconclusive treatment".

    I was confused because "reversal" sounds to me as the act of ceasing to do something, i.e. "reverting" the treatment (if you're a programmer: I thought of "reversal" as in "reverting a mistaken commit using git"). I now see it's a technical term which means the opposite!

    Now it all makes sense.

  • scott_s 6 days ago

    Hmm. Sorry, no, that's not my understanding. I think the phenomenon of discovery is distinct from the changing of behavior. I believe it is the discovery itself that is called a "medical reversal." I think the submitted paper defines it in relatively plain terms: "Medical reversal occurs when a new clinical trial — superior to predecessors by virtue of better controls, design, size, or endpoints — contradicts current clinical practice."

    I think an important difference is that current clinical practice is not necessarily thought to be "unproven/inconclusive." Rather, I think people think it has a solid foundation, but better investigation reveals that not to be true.

  • FPGAhacker 6 days ago

    Sometimes the trial itself is an ethical dilemma. Sometimes it's impossible to control for, because a placebo that effectively mimics the side effects of the treatment is not feasible.

  • DoreenMichele 6 days ago

    Someone on HN once left a comment that made it really clear why it is better to "let x guilty men go free than incarcerate one innocent man." A big part of their point: if you put an innocent man in prison, you are still letting a guilty man go free. The real killer is still out there. And you aren't even looking for him because you have announced "case closed."

    So this is similar to part of the problem with low value or harmful medical practices proliferating. If you are doing x, you probably won't be doing y. Its use actively excludes the use of better therapies in most cases.

    But, worse, biological processes are complicated and there can be critical windows of time for x to happen. If people are ignorant of such a window and how to use it effectively, some people will have dramatically better outcomes than others in a way that promotes the all-too-common perception that it's just random. For medical issues, this can be literally life or death.

    Furthermore, use of low value procedures pollutes the data with lousy outcomes. If you don't identify that x treatment is the culprit, then the perception that patients with x condition have yadda prognosis proliferates. This actively promotes poor outcomes by encouraging doctors and patients alike to accept a poor outcome as the norm and to be expected for your condition.

    Additionally, once a practice proliferates, it tends to persist. It becomes a habit. Habits are hard to break.

    And doctors are people. Most people want to do something, anything rather than doing nothing. For a doctor, doing something, anything is probably less likely to get them sued for malpractice than taking a wait-and-see approach, even if waiting is the wiser move. It's going to be harder to defend the choice to do nothing if it goes to court. It flies in the face of how the human mind works.

    It takes substantial education, wisdom and self restraint to do nothing when the problem is your responsibility to fix. Even if you know that's currently the best course of action, it is all too easy to cave in the face of social pressure, especially if you have reason to believe that not going along to get along may come with substantial penalties (like a malpractice lawsuit).

    To my mind, the following linked article is related to that last point, but I also wrote it and I've had four hours of sleep. Apologies if it seems unrelated:

    https://raisingfutureadults.blogspot.com/2019/01/the-hand-li...

  • colechristensen 6 days ago

    >want to do something, anything

    The culture could be changed away from the expectation that most encounters with a doctor result in an action. The problem is that visiting a doctor costs hundreds of dollars for a short amount of time. It doesn't matter if you have insurance or free universal health care; it still costs hundreds of dollars, regardless of the layers of abstraction you put on top of the billing.

    I personally wish encounters with doctors resulted in more tests or other data gathering (and hopefully that data made available de-identified for analysis)

  • ncmncm 6 days ago

    This work seems to assume that "randomized, controlled trials" necessarily produce correct and meaningful results, that such a trial is necessarily meaningfully possible at all, and that, where either of these is false, the treatment is valueless.

    But a randomized, controlled trial can produce meaningful results only where just one malady is being treated. The DSM is full of diagnoses that lump together a whole family of pathologies with (sometimes only superficially) similar symptoms, but entirely different causes. This is especially notable in psychiatry, but far from unique; for an extreme example, cancers.

    The reason such trials produce bad results is that there is no way to know which patients have the particular pathology whose cause is addressed by the treatment under test, without actually administering it to see.

    Actually performing such a trial, with an effective treatment agent, tends to produce strong results for a few patients, and null or actually harmful results in the rest. Nothing is wrong with the treatment, when applied to the patients who should get it, but the trial fails to produce a positive result.
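    The dilution argument above can be sketched with a toy simulation (all numbers are invented for illustration): suppose a treatment gives a strong benefit only to the minority of patients whose specific pathology it targets, while everyone in the trial shares the same diagnosis. The pooled estimate then lands far below the true subgroup effect.

    ```python
    import random

    # Hypothetical numbers: +2.0 benefit for the 10% of patients with the
    # targeted pathology, no benefit for the other 90% under the same diagnosis.
    random.seed(1)

    def outcome(treated, has_target_pathology):
        effect = 2.0 if (treated and has_target_pathology) else 0.0
        return effect + random.gauss(0, 1)  # noisy individual outcome

    n = 10_000
    treat, control = [], []
    for _ in range(n):
        treat.append(outcome(True, random.random() < 0.10))
        control.append(outcome(False, random.random() < 0.10))

    # Pooled estimate is roughly a tenth of the true subgroup effect (2.0).
    pooled_effect = sum(treat) / n - sum(control) / n
    print(pooled_effect)
    ```

    In a realistically sized trial, such a diluted effect can easily be indistinguishable from zero, even though nothing is wrong with the treatment for the patients who should get it.
    
    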

    Confusing a bad trial with a bad treatment should be an error made only by ignorant observers, but it is all too commonly seen in apparently respectable media.

  • AstralStorm 5 days ago

    > Actually performing such a trial, with an effective treatment agent, tends to produce strong results for a few patients, and null or actually harmful results in the rest. Nothing is wrong with the treatment, when applied to the patients who should get it, but the trial fails to produce a positive result.

    Do you have any actual data to support this assertion, or is it just a weak sophism, same as used to support ineffective "integrative medicine" practices?

  • ncmncm 5 days ago

    What sort of data do you imagine would convince you? Or is this just a reflexive rejection of change?

    Insisting on data that demonstrates invalid RCTs while assuming that DSM diagnoses precisely distinguish causes, on the basis of no data at all, puts the cart before the horse.

  • Gatsky 6 days ago

    Searching for the scientific interest here... this is a list of trials that changed medical practice. The senior author has previously published such a list and written a book about it. What does this article add?