Will defining the value of spine care change our practice behaviors?

By Ray M. Baker, MD

How readily will providers incorporate new scientific and economic proof of spine treatments (or lack of proof) into their practices?

As we embark on a quest to define and prove the value (both scientific and economic) of spine treatments, we must first understand whether it will meaningfully impact the everyday practice of spine care. What good are strategies to improve quality of care and prove value—RCTs, comparative effectiveness research (CER), registries—if the average practitioner ignores the results? As Churchill once said, “However beautiful the strategy, you should occasionally look at the results.”

To some, this question might seem ludicrous. Given strong data supporting or refuting a given treatment, we skilled scientists and clinicians unquestionably change our practices accordingly. To imply otherwise is insulting. Yet the poor diffusion of medical information into general practice is well documented. Fetal monitoring offers an example that is less controversial to spine care providers. Randomized controlled trials in the 1980s, and systematic reviews of these RCTs in the 1990s, concluded that electronic fetal monitoring was no better than intermittent auscultation. Specifically, electronic fetal monitoring did not decrease the percentage of babies born with metabolic acidosis, low Apgar scores, or requiring admission to neonatal intensive care units. It did, however, increase rates of labor augmentation, epidural anesthesia, instrumental delivery and caesarean section. Almost 20 years later, 85% of all US deliveries utilize electronic fetal monitoring. Why?

Psychologists point to the fact that physicians generally feel immune from the forces that dictate everyday behavior. We believe ourselves to be smarter, more ethical and more altruistic than others. We are highly educated and trained in the scientific method. We are above reproach. If there is a problem, it is with other, less respected colleagues, not us. We are said to be in denial.

Others say the opposite—it is physicians’ altruism that paradoxically leads to seemingly irrational behavior. We want so badly to help others that we tend to perform treatments that we intellectually know are likely to fail. We have trouble saying no when the patient is sitting in front of us in the office.

In the end, it is probably a mixture of many things. Mark Twain had it right when he said, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

Why is it so hard for us to change practice behavior, even in the face of seemingly insurmountable evidence?

The following string of teasers is meant only to pique your interest in the topic of change. I have purposely omitted malpractice from the discussion. Although certainly a factor, it deserves its own, unique forum.

  • People value direct experience over knowledge. One article was cleverly entitled, “The tree of experience in the forest of information: Overweighing experience relative to observed information.” Because of its personal relevance, direct experience influences decision-making more strongly than objective information.
    • Rates of HIV in Thailand and teenage smoking in the US increased during periods of intensive educational campaigns.
  • Imprinting: People tend to “anchor” to their first understanding of a subject. Once anchored, it is difficult to overcome beliefs through education.
    • This partially explains the extremely slow adoption of Hibiclens-alcohol over povidone-iodine for surgical preps despite compelling evidence.
  • Relativity: The tendency to estimate the value of things according to how they compare with other items. Similar to optical illusions, our minds can be tricked by cognitive illusions in interpreting our environment.
    • Studies show that a worker who accepts a $70,000-a-year job offer at a company where she is at the top of the wage scale will actually be happier than a worker who accepts a $75,000 job offer for the same job in a company that places her at the bottom of the wage scale.
    • Unfortunately, we cannot educate our way out of cognitive illusions. External “cognitive rulers” are needed to overcome relativity. This explains why physicians learn better when benchmarked against others, and why measuring outcomes from our own patient data is more instructive than journal clubs or more formal venues.
  • We emulate respected peers and use their behavior to help govern our boundaries for what is acceptable and unacceptable.
    • Most people cheat (bend the rules) a little. The extent of this cheating depends less on risk versus gain and more on the individual’s internal moral compass. While most people cheat only a small amount, cheating increases markedly if they see others in their “circle of influence” cheat. Examples in medical circles include not following known “best practices,” “up-coding,” etc.
    • What is learned in training from respected mentors can overrule new information that proves those methods outdated.
  • People use either social or market-driven behaviors, depending on the situation. When both are in play (a friend asks you to help him move for $25), the social system is more fragile and yields to market forces.
    • Physicians see themselves as primarily either altruistic or entrepreneurial. Coupled with the emulation of respected peers, and with relativity, this can lead to a dramatic change in a whole medical community (the McAllen, TX effect).
  • Large-scale change often only requires modification of important behaviors (actions) in a few key individuals.
    • HIV rates in Thailand decreased rapidly only after educating trusted prostitute “leaders,” who then stressed the importance of condoms to other prostitutes. Influencing behaviors within a very small group effected massive change.
    • We need to study “positive deviance”—individuals with superior outcomes—and emulate that behavior. What are they doing differently than everyone else?
  • Taking something away or discounting it leads us to devalue the object.
    • Patients paying full price for a cold medicine had fewer symptoms than those who took an identical medication but were told it was a low-cost generic.
    • This is why we cannot convince patients that they do not need an MRI with one week of low back pain. MRIs are now seen as an entitlement.
  • Simple is better. We avoid complex decision-making when possible. When faced with a complex intellectual or ethical decision, we tend to default to an established path.
    • In a study where doctors were informed on the day of hip replacement surgery that a patient had not tried one accepted alternative, the surgery was usually cancelled. If the doctor was informed that the patient had not tried two accepted treatments, the surgeon usually went forward with the surgery.

Many of these examples are extrapolated from Dan Ariely’s work and illustrate what he calls “predictably irrational” behavior. Humans predictably follow irrational behavioral patterns. NASS, as your spine society, has a responsibility to learn more about these behavior patterns, how they affect our lives and our ability to practice, and how we can overcome them. We need to look beyond the strategy to the results.

Join this Discussion

We started this blog to foster conversations among members about defining and measuring value, and changing practice behaviors. We hope you will share some thoughts here with others and find some solutions along the way. Please join the discussion by commenting below.