Bombshell news out of Oregon today: a large-scale randomized controlled trial (RCT) of what happens to people when they gain Medicaid eligibility shows no impact on objective measures of health. Utilization went up, out-of-pocket expenditure went down, and the frequency of depression diagnoses was lower. But on the three important health measures they checked that we can measure objectively--glycated hemoglobin, a measure of blood sugar levels; blood pressure; and cholesterol levels--there was no significant improvement.
I know: sounds boring. Glycated hemoglobin! I might as well be one of the adults on Charlie Brown going wawawawawawa . . . and you fell asleep, didn't you?
But this is huge news if you care about health care policy--and given the huge national experiment we're about to embark on, you'd better. Bear with me.
Some of the news reports I've seen so far are somewhat underselling just how major these results are.
"Study: Medicaid reduces financial hardship, doesn’t quickly improve physical health" says the Washington Post.
The Associated Press headline reads "Study: Depression rates for uninsured dropped with Medicaid coverage"
At the New York Times, it's "Study Finds Expanded Medicaid Increases Health Care Use"
I think Slate is closer to the mark, though a bit, well, Slate-ish: "Bad News for Obamacare: A new study suggests universal health care makes people happier but not healthier."
This study is a big, big deal. Let me explain why.
The Health Wonk's Conundrum: It's Damned Hard to Study the Effects of Being Uninsured
We think we know that giving people more access to doctors at lower cost makes them healthier--it's only intuitive, right? Otherwise, why are we spending so much time agonizing over whether to pay extra every month for the PPO?
But finding something intuitively plausible isn't actually the same as knowing it.
Until 2011, there had been only one RCT of health insurance. Most studies of health insurance are panel studies. You take years' worth of data from the same people, like census surveys, or the National Health and Nutrition Examination Survey. You start with year one and divide people into groups: those who have insurance, and those who don't. And then you look at what happens to them over time. There are some problems with this, of course. For one thing, those people didn't necessarily stay uninsured; most spells without insurance last only a few months. And the surveys don't necessarily have all the data you'd ideally like to look at to see the effect of insurance on health. So these studies have tended to focus on mortality, because it's a piece of data that virtually every survey collects, and it's fairly unambiguous. We all agree on what it means to be "dead". And also, that being dead is definitely very bad.
Most of you have probably heard the statistic that being uninsured kills 18,000 people a year. Or maybe it's 27,000. Those figures come from an Institute of Medicine report (later updated by the Urban Institute) that was drawn from these sorts of studies.
The biggest problem these studies have is that people who have health insurance are different from people who don't--and not just because they don't have a pretty white card from Aetna.
Some of the differences we know about: the uninsured are poorer, more likely to be unemployed or marginally employed, and to be single, and to be immigrants, and so forth. And being poor, and unemployed, and from another country, are all themselves correlated with dying sooner.
Then there are the differences that we don't know about--that don't show up in any of the big panel surveys. For example, say you have very poor impulse control. You are probably more likely than the average citizen to end up uninsured, because you are probably more likely than the average citizen to end up divorced, fired, and with a prison record. You are probably also more likely than the average citizen to drive without your seatbelt, ride your motorcycle helmetless while sipping a beer, or subsist on nothing but pork rinds and Mountain Dew for weeks at a time. When you finally drive that Harley into a tree, your death will raise the mortality rate for the uninsured, even though it would have taken something much stronger than an insurance policy to keep you alive. This is a problem that social scientists call "omitted variable bias", and it plagues the hell out of most studies of health insurance.
To give you a taste of what I mean: panel studies not infrequently show that putting people on Medicare or Medicaid kills them. No, I'm not kidding. According to some of the studies (the same ones that show being uninsured kills), people on Medicare and Medicaid have higher mortality rates than people with no insurance at all, even after you control for things like age, and smoking.
Okay, you can tell a story where Medicaid is actually killing people: maybe only bad doctors take Medicaid, and going to a bad doctor is worse than going to no doctor at all. (In health-wonk jargon, this is known as "iatrogenic mortality"). But really, how likely is this? The authors of the studies have explained--convincingly!--that people on Medicaid are more likely to be poor, and have substance abuse problems, and all manner of other issues that make them die sooner, so that what you were seeing was not actually the effect of being on Medicaid, but the effect of being the kind of person who goes on Medicaid.
The problem is, the same sort of problems plague the data about the uninsured. Ultimately, there's a danger that we're doing an intuition gut check: it doesn't make sense that Medicaid would kill you, while it makes perfect sense that being uninsured would, so we look for reasons that Medicaid patients are different . . . and then stop before we get to the uninsured. Oh, there are some observational studies to support those beliefs, but there's always the risk that a heavy dose of "that just doesn't make any sense" is creeping into our analysis.
Meanwhile, some of the observational studies don't even show any benefit from insurance. For example, one study that looked at old people before and immediately after the establishment of Medicare showed that putting the entire over-65 population on health insurance produced zero detectable improvement in their mortality rates. And the biggest observational study of all, with over 600,000 people in it, showed no mortality benefit from insurance. This was not done by some conservative crank: the author was Richard Kronick, a UC San Diego professor who was an advisor to the Clinton Administration.
So does health insurance save your life, or not? Verdict of observational studies: hard to tell. Looking at the sea of data, we can't rule out the possibility that putting more people on insurance will save 0.0 lives. We can't even rule out the possibility that Obamacare will result in net deaths. (I'm not saying I think Obamacare will kill people. I'm just saying that this possibility is within the bounds of what at least some solid observational studies have found. Which just points out the issues with observational studies.)
The above is a summary of what I wrote in a 2010 article for The Atlantic, in which I pointed out that despite the confidence with which statistics like "44,000 people a year die from lack of health insurance", we had no idea how many people, were dying, or even if that number was greater than zero. But I confess that the new Oregon results surprised even me. I said at the end of the article that I thought it unlikely that health insurance had zero impact on outcomes--though I also said that if it was this hard to detect, I thought that the effect was probably small. But though I triggered quite a bit of outrage from Obamacare supporters, the article was mostly about the difficulty of doing these kinds of studies, not an actual argument that there was clearly no benefit at all to being insured.
Enter the RCT
Ideally, what you want in order to test the effect is not survey data, but a randomized controlled study: divide people into two groups, and give one of those groups insurance, while the other group stays uninsured. As you can imagine, this is hard to do. For one thing, you'd probably have trouble getting people to stay in the control group once you put them there. For another, it's going to be pretty expensive to insure a bunch of people just so you can see how many of them get sick.
Luckily for us, in 1971, RAND went ahead and did it anyway. Well, close. They took thousands of families and divided them up into five groups. Group one got totally free care. Groups two through four got "traditional" (for the time) insurance plans with various degrees of cost sharing, ranging from 20% to 95% (meaning that in Group 4, only 5% of your average expenses were paid). The cost-sharing plans also had a cap on the percentage of your income that you'd have to pay out of pocket. Group 5 got an HMO. Then they looked to see what differences emerged in health outcomes.
Shocker: none did.
Oh, there were a couple of small effects: slight improvement in hypertension control among the poor; one on dental, also among poor people; and some on vision (translation: when they were cheap, more people updated their eyeglass prescriptions). But otherwise, no detectable effect. People who had more generous coverage consumed more health care. But they weren't healthier. In fact, the people who had less generous coverage reported being less worried about their health and taking less sick time, presumably because they weren't going to the doctor to find things to worry about.
Conservatives like to call this the "gold standard" for health care research. Liberals, understandably, do not. They point out that it's just one study. It's old, too--the study ran between 1971 and 1982, when the health care system was arguably quite different. And they didn't actually compare the effects of insurance to no insurance; they just compared the effects of basic insurance to the effects of more generous insurance. All of these are perfectly fair critiques.
Enter the Oregon health study.
In 2008, the state of Oregon gave us a beautiful natural experiment. They put together enough money to add 10,000 new people to Medicaid. Unfortunately, that was many fewer people than actually wanted to join. So they held a lottery.
Health care experts put on their prettiest bonnets and their best silk aprons and began dancing around the maypole, singing "Oh frabjous day! Calloo! Callay! An actual RCT!" Then a bunch of very smart ones snapped into gear and started recruiting participants for a giant, gorgeous study. Ultimately they signed up about 6,400 of the 10,000 people who'd won the Oregon lottery, and a control group of about 5,800 who had entered, but not managed to secure a Medicaid slot. America was getting its first large national trial to really test the question: how much does putting someone on Medicaid improve their health?
In 2011, the researchers released the results from the first year of participation. As I wrote at the time, it was a bit of a Rorschach test. Folks who had supported Obamacare viewed it as proof that Medicaid is down there in the trenches, saving lives. I saw the results as more ambiguous:
. . . it pretty much confirms what has come to be my view of the evidence on the impact of insurance: you see a very clear impact on utilization, including a handful of recommended preventative screenings, as well as hospitalizations and other treatments. You see a moderately strong effect on both patient and provider finances: fewer medical bills sent to collections, and lower self-reported financial strain from medical costs. And people like being insured, so various self-reported measures rise. The rest is more ambiguous.
For example, the strongest impact on health that they find is that self-reported health status rises by a modest-but-still-significant 0.2 standard deviations: reported depression goes down, while the people who won the lottery were more likely to say that they were in good, very good, or excellent health. This rules out the theory that people who have more contact with the health system might feel less healthy because their doctor gives them more things to be paranoid about, but as Finkelstein et al. note, it doesn't quite show that they're actually healthier. Indeed, about two-thirds of the improvement in self-reported physical health comes almost immediately, before people had a chance to consume much in the way of health services; this suggests that the effect may be psychological rather than the result of any improvement in their physical well-being. As the authors say, "Overall, the evidence suggests that people feel better off due to insurance, but with the current data it is difficult to determine the fundamental drivers of this improvement."
Meanwhile, while the measures of utilization are strong, the sort of "quality" measures that people often want Medicare to use for reimbursements are considerably less promising:
First, we examined hospital utilization for seven conditions of interest and of reasonably high prevalence in our population: heart disease, diabetes, skin infections, mental disorders, alcohol and substance abuse, back problems, and pneumonia. We found a statistically significant increase in utilization (both extensive and total) only for heart disease. We also explored the impact of health insurance on the quality of outpatient care (admissions for ambulatory care sensitive conditions) and three measures of quality of care for inpatient care (not having an adverse patient safety event, not being readmitted within 30 days of discharge, and quality of hospital). We were unable to reject the null of no effects on either outpatient or inpatient quality, although our confidence intervals are extremely wide and do not allow us to rule out quantitatively large effects. Finally, we examined whether insurance was associated with a change in the proportion of patients going to public vs. private hospitals and were unable to detect any substantive or statistically significant differences.
As for lower mortality--economist-speak for "confirms that Medicaid does, indeed, save lives"--the authors didn't find any such thing. I quote: "Panel A shows that we do not detect any statistically significant improvement in survival probability." Mortality and related metrics such as life expectancy are the easiest things to measure--we can all pretty much agree on who is dead, and that death is generally pretty bad. It's also, for obvious reasons, the easiest emotional appeal for proponents of a program.
But death is not that common in the under-65 population, so unless Medicaid has a really large effect on mortality, it's not going to show up in a one-year study.
Part of the problem is that in the first year of results, they hadn't been able to look at what you might call objective measures of health, like heart attacks, strokes, or amputations. They had looked at mortality, but had not seen a significant effect. So while most of the outlets that reported these results said that they showed "Medicaid improves health outcomes", all the study could actually report was that people said they felt healthier--subjective measures of health. Objective measures were due in the year two study, which was supposed to be out last summer. However, Obamacare's supporters had every reason to be optimistic that the objective measures would look good, too. Why would people say they felt better if they weren't?
The Data Speaks
Sadly, for some reason the results of the second year study were delayed. That's a real pity, because many voters going to the polls last November, and governors considering whether to do the Obamacare Medicaid expansion this spring, would probably have liked to have had this data sooner.
Here's the net result:
We found no significant effect of Medicaid coverage on the prevalence or diagnosis of hypertension or high cholesterol levels or on the use of medication for these conditions. Medicaid coverage significantly increased the probability of a diagnosis of diabetes and the use of diabetes medication, but we observed no significant effect on average glycated hemoglobin levels or on the percentage of participants with levels of 6.5% or higher. Medicaid coverage decreased the probability of a positive screening for depression (−9.15 percentage points; 95% confidence interval, −16.70 to −1.60; P=0.02), increased the use of many preventive services, and nearly eliminated catastrophic out-of-pocket medical expenditures.
No statistically significant treatment effect on any objective measure: not blood pressure. Not glycated hemoglobin. Not cholesterol. There was, on the other hand, a substantial decrease in reported depression. But this result is kind of weird, because it's not coupled with a statistically significant increase in the use of anti-depressants. So it's not clear exactly what effect Medicaid is having. I'm not throwing this out: depression's a big problem, and this seems to be a big effect. I'm just not sure what to make of it. Does the mere fact of knowing you have Medicaid make you less depressed?
For that matter, I'm not sure what the policy implication is. If you wanted a program to cure depression, Medicaid is probably not what you'd design.
So back to cholesterol and blood pressure, which are exactly the sort of thing Medicaid is supposed to take care of. What does this study tell us? That Medicaid--or health insurance more generally--is useless at curing physical disease?
Not quite. Oh, to be sure, Slate is right that this is not good news for Obamacare--which, you may recall, got half of its coverage expansion by putting more people into Medicaid.
There's been a bit of revisionist history going on recently about what, exactly, its supporters were expecting from Obamacare--apparently we always knew it wasn't going to "bend the cost curve", or lower health insurance premiums, or necessarily even reduce the deficit, and now it appears that we also weren't expecting it to produce large, measurable improvements in blood pressure, cholesterol, or blood sugar control either. In fact, maybe what we were always expecting was a $1 trillion program to treat mild depression.
Well, that's not how I remember it; as I remember it, Obamacare was going to save tens of thousands of lives every year. And it's hard for me to look at this study and see the kinds of numbers that save tens of thousands of lives every year.
You can squint hard at the data and say, well, sure, the effects weren't statistically significant, but there was some improvement! Much such squinting has been going on. But if there had been a slight, not-statistically-significant decline in the health of the Medicaid participants, I'm skeptical that many--or any--of our squinters would have been touting the probative power of those sorts of small effects. As someone I was talking to earlier noted, "It's got huge confidence intervals" is not normally the sort of thing you hear when arguing that a study supports your thesis. Our intuitions about health care, not the data, are doing a lot of heavy lifting here.
When you do an RCT with more than 12,000 people in it, and your defense of your hypothesis is that maybe the study just didn't have enough power, what you're actually saying is "the beneficial effects are probably pretty small". Note that we're talking about a study the size of a pretty good Phase III trial for Lipitor, Caduet, or Avandia--some of the leading new drugs for treating high cholesterol, hypertension, and diabetes. Of course, to be fair, those trials enroll only people with the disease they're targeting, so you should get more statistical power--but then, to also be fair, many of those studies have many fewer than 12,000 participants and still achieve statistical significance.
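To see why sample size and effect size interact this way, here's a rough power sketch using the standard normal approximation for a two-proportion test. Every prevalence and effect number below is hypothetical, chosen purely to illustrate the mechanics, not drawn from the Oregon data:

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p_control, p_treated, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two proportions."""
    p_bar = (p_control + p_treated) / 2
    # Standard error under the null (pooled) and under the alternative
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = sqrt(p_control * (1 - p_control) / n_per_arm
                  + p_treated * (1 - p_treated) / n_per_arm)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    effect = abs(p_treated - p_control)
    # Probability the observed difference clears the significance threshold
    return 1 - NormalDist().cdf((z_crit * se_null - effect) / se_alt)

# A large effect (16% -> 8% with some bad marker) is easy to detect:
print(round(power_two_proportions(0.16, 0.08, 6000), 2))   # ~1.0
# A small effect (16% -> 15%) usually is not, even with 6,000 per arm:
print(round(power_two_proportions(0.16, 0.15, 6000), 2))   # ~0.33
```

The point of the sketch: with roughly 6,000 people per arm, a study will reliably catch a halving of some bad outcome, but will miss a one-percentage-point improvement about two times out of three.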
And as Katherine Baicker, a lead researcher on the Oregon study, noted back in 2011, "people who signed up are pretty sick". Yet the study failed to find statistically significant improvement on the three targets associated with the most common chronic diseases. This, mind you, is the stuff that we're very good at treating, and which we're pretty sure has a direct and beneficial effect on health.
Hypertension meds and insulin/home blood sugar monitoring are not quite up there with antibiotics as a 20th century medical miracle, but they're in the Top 5. By one estimate I've seen, hypertension control has cut the death rate from stroke in half, and from heart attacks by a third. And as medical interventions go, it's easy: low cost, low effort, fairly minimal side effects. The hardest part is to convince patients to keep taking the pills, because hypertension has no painful symptoms until oops! major cardiovascular event.
Statins have more side effects, and diabetes control is much more onerous. Still, I'd have expected to get more power out of a study that treated 6,400 sick, poor people who previously had no insurance.
Another way to look at it is to think about the "number needed to treat"--which is to say, how many people do you need to treat to avoid some bad outcome? The estimate I've seen is that you need to control severe hypertension in 29 patients to prevent a single stroke in a five-year period, and if the hypertension is less severe, that NNT rises as high as 118. According to the Oregon study, an extra 85 hypertensives got their blood pressure into the normal range, compared to the control group. So by putting 6,400 people onto Medicaid, we may have prevented as many as three strokes every five years, or as few as none.
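The arithmetic in that paragraph is simple enough to sketch directly; the NNT figures (29 and 118) and the 85 newly controlled hypertensives are the estimates quoted above:

```python
# Back-of-the-envelope: strokes prevented over five years, given the number
# of newly controlled hypertensives and a range of NNT estimates.

newly_controlled = 85   # extra patients whose blood pressure reached the normal range

# NNT = patients you must keep controlled for five years to prevent one stroke
nnt_severe = 29         # severe hypertension
nnt_mild = 118          # milder hypertension

best_case = newly_controlled / nnt_severe    # if all 85 were severe cases
worst_case = newly_controlled / nnt_mild     # if all 85 were mild cases

print(f"Best case:  ~{best_case:.1f} strokes prevented per five years")
print(f"Worst case: ~{worst_case:.1f} strokes prevented per five years")
```

That is, roughly three strokes at best, and less than one at worst--out of 6,400 people enrolled.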
It's worth noting, as Josh Barro pointed out on Twitter, that the average diastolic blood pressure of the Medicaid patients declined by only 0.81 points. This does not suggest that all 85 were achieving very large declines in their blood pressure. Either a few people got a lot of control, or a lot of people dropped a few points below a somewhat arbitrary cutoff line. By which I mean that cutting your blood pressure from 111 to 109 does not drastically decrease your stroke risk, but it will move you from "hypertensive" to "reasonably well controlled".
But that doesn't mean Medicaid has no effect on health. It means that Medicaid had no statistically significant effect on three major health markers during a two-year study. Those are related, but not the same. And in fact, all three markers moved in the right direction. They just weren't big enough to rule out the possibility that this was just random noise in the underlying data. I'd say this suggests that it's more likely than not that there is some effect--but also, more likely than not that this effect is small.
I don't want to understate it either, however; these are the major chronic diseases we should be expecting Medicaid to help. I saw it suggested on Twitter that perhaps much larger effects on cancer, Parkinson's, and Alzheimer's would take much longer to detect. In fact, it would take decades, since the majority of the new Medicaid patients were under the age of 50, and the average age of onset for those diseases is, respectively, 67, 60, and 72. Treatment effects would take even longer to discover, but unfortunately, we wouldn't actually discover them, since the majority of people with those diseases are treated on Medicare already. There's no control group to compare them with.
And even if these design issues somehow withered away, there would still be arguing, because in slow-progressing diseases, it is easy to confuse early detection with prolonging life. If you've got a disease that's bound to kill you by the age of 85, and I discover it at 72 instead of 84, I've dramatically increased your "survival rate", but I haven't actually helped you survive. This has been a common refrain from critics of the American health care system when the system's boosters point out that America does really splendidly on cancer survival rates.
But What About Non-Health Measures?
And finally we come to the item I discussed above: financial protection. That effect seems to be large. About 5% of the control group, or 300 people, reported having "catastrophic" medical expenses of more than 30% of their income. Less than 1% of the Medicaid group did. That's quite statistically significant, and I imagine that the folks who had their income protected found it quite personally significant too. On average, the Medicaid group spent about $215, or 40%, less than the control group. They weren't totally shielded from health care costs; over 40% of them still reported having medical debt, and about 10% said they'd borrowed money to cover medical expenses. But that compares to 57% and 25% in the control group.
As I said in 2011, I find this entirely plausible. Government health insurance unquestionably functions as income protection. What's less clear is whether it functions as health protection. Unfortunately, "paying for someone's health care means they have more money" is not a really interesting or surprising result. It's an important result, and it's lovely to have data. But I can't think that anyone, even the most ardent opponents of Obamacare, really thought that the Oregon study would find anything else. This merely gives us exact numbers.
Here's what is interesting: somehow, the control group is doing a pretty good job of getting access to health care, even without the safety net . . . or if you prefer, they're doing about as badly as the folks on Medicaid.
In other words, another way to say "the study was underpowered" is to say "the difference between the insured and the uninsured is so small that we can't pick it up in a study of 12,000 people."* This tells us one of two things. Either people without insurance are doing an okay job of getting treatment for all the major chronic diseases--which is startling, because as you may recall, one of the main reasons that we needed Obamacare was all the poor uninsured people who can't control their blood pressure or diabetes. Or the treatment Medicaid patients get for their chronic diseases doesn't do them much good.
Even if you think that Medicaid has larger effects than we're seeing here, I think you also have to acknowledge that many of the uninsured seem to be surprisingly good at accessing the health system, if not paying for it. At least on the markers that the study looked for, the majority of them--even the majority of the diabetics, hypertensives, and hypercholesterolemics--are doing about as well as their counterparts on Medicaid. They maybe don't feel as good about it, but from the outside, they're not that much different.
This was pretty much the thesis that Richard Kronick offered, when his observational study (much to his surprise!) suggested that there was no adverse mortality risk to going without insurance. Maybe those without insurance, he said, were simply finding a way to get at least basic care. Maybe in a costly and financially risky way, but still getting it.
If that's true, though, here's the question we have to ask: is Medicaid, or Obamacare, the program that we would design to solve these problems? We might think that they'd better be solved with free mental health clinics, or cash.
I'm not sure that if you'd waved this study at the American public two years ago, they'd have said, "Yup, this makes me want to put 16 million new people into Medicaid, and enact a giant new regulatory apparatus to force everyone else in the country to buy insurance." But of course, we didn't have this study. Instead, we heard that 150,000 uninsured people had died between 2000 and 2006. Or maybe more. With the implication that if we just passed this new law, we'd save a similar number of lives in the future.
Which is one reason why the reaction to this study from Obamacare's supporters has frankly been a bit disappointing. Not because I expected them to fall on their knees and say, "Oh my God, national health care was a terrible mistake!" Even if I thought that was the obviously correct actual response, well, I've met people before, and that's not how they act.
But at this point, the only two large-scale randomized controlled trials that we have done on the benefits of paying for people's health care have both come back showing surprisingly small effects. In 2011, when the first results came out of Oregon, that was not what Obamacare's supporters were predicting. They were predicting that the second phase of the Oregon study would show large, significant effects on basic health measures like blood pressure, blood sugar, and cholesterol control. It didn't. There's really no other way to put it.
A good Bayesian--and aren't most of us supposed to be good Bayesians these days?--should be updating in light of this new information. Given this result, what is the likelihood that Obamacare will have a positive impact on the average health of Americans? Every one of us, for or against, should be revising that probability downwards. I'm not saying that you have to revise it to zero; I certainly haven't. But however high it was yesterday, it should be somewhat lower today.
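To make that updating concrete, here is a toy Bayes'-rule sketch. Every number in it is hypothetical, chosen only to illustrate the mechanics of the update, not estimated from the study:

```python
# Toy Bayesian update: how a null result should shift your belief that
# Medicaid produces a large health effect. All probabilities are made up
# for illustration.

def posterior(prior, p_null_given_effect, p_null_given_no_effect):
    """P(large effect | null result), via Bayes' rule."""
    numerator = p_null_given_effect * prior
    denominator = numerator + p_null_given_no_effect * (1 - prior)
    return numerator / denominator

prior = 0.80                 # you start quite confident in a large effect
p_null_if_effect = 0.20      # an underpowered study could still miss a real effect
p_null_if_no_effect = 0.90   # a null result is very likely if there is none

print(round(posterior(prior, p_null_if_effect, p_null_if_no_effect), 2))  # ~0.47
```

Even granting the study a generous chance of missing a real effect, a believer who started at 80% confidence should end up closer to a coin flip. The direction of the update is forced; only its size is up for debate.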
That was not, let us say, the tone of much of the commentary I read. The financial effects tended to be punched up at the top even though they are the least surprising or interesting result. Depression also ranked high. The health effects often, er, less so--unless it was to explain why actually, these are surprisingly great.
And in fairness, it is only one year. Sadly, my understanding is that we aren't going to get any more years, because Oregon eventually found the rest of the money they needed to add more people to Medicaid. I mean, I'm not sad about that, but the lost data . . . well, you know what I mean. All of the strong beneficial effects could show up on cancer or congestive heart failure, and we'll never know. Or this could be a fluke. The confidence intervals absolutely include the possibility of a strong positive result, and it may just be bad luck that this time, we got an abnormally poor sample.
Nonetheless. This is what we have. It's one of two major RCTs that have ever been done on insurance. And like the first one, it doesn't show a significant effect. That is huge news. Not good news--obviously, it's much nicer if giving people money to pay for health care makes them obviously much healthier. But big.
And it's actually bigger, and more important than Obamacare. We should all be revising our priors about how much health insurance--or at least Medicaid--really promotes health. What this really tells us is how little we know about health care, and making people healthy--and how often data can confound even our most powerful intuitions.
* (added 4/2, for clarity)