Hospitals and health care companies are increasingly tapping experimental artificial intelligence tools to improve medical care or make it more cost-effective.

At best, that technology has the potential to make it easier to detect and diagnose diseases, streamline care, and even eliminate some forms of bias in the health care system. But if it’s not designed and deployed carefully, AI could also perpetuate existing biases or even exacerbate their impact.

“Badly built algorithms can create biases, but well-built algorithms can actually undo the human biases that are in the system,” Sendhil Mullainathan, a computational and behavioral science researcher at the University of Chicago’s Booth School of Business, told STAT’s Shraddha Chakradhar at the STAT Health Tech Summit this month.


Mullainathan also spoke with STAT about the importance of communication in developing AI tools, the data used to train algorithms, and how AI could improve care. This conversation has been lightly edited and condensed.

Tell us about what happens when a health algorithm doesn’t work the way it’s designed to.


This story is, I think, a really interesting one. … Sometimes AI is sprinkled around as if it were a magic fairy dust. And I think this story is one that I would keep in the back of my head whenever you hear the phrase AI. So this was a project at Google a few years back. This team had built an algorithm to take chest X-rays and identify disease from the chest X-ray. … And a friend of mine at that time had been working on interpreting what algorithms see. So this algorithm had really good performance and they were super excited. And this friend of mine who works at Google … they had reached out to him and said, “Oh, so can we use your technique? We’re curious, what is the algorithm looking at?”

And when they did that on a bunch of X-rays, they noticed it was looking in a particular region. They were like, well, this is odd. So they zoomed in and as they zoomed in and clarified what was going on in that region, what they saw were these pen marks. And they’re like, “What is this doing here?” And it turned out that in their dataset, when radiologists noticed something interesting, they would put pen marks there. And the algorithm wasn’t identifying disease so much as it was identifying pen marks.

And you can see how, in the data they had, they had what appeared to be a good-performing algorithm, because pen marks were associated with disease. But it’s unlikely to do that anywhere else, especially if what you were thinking was, “Let’s automate radiologists out of there.” So in a perverse way, the algorithm was seeing something quite different from what, by all accounts, it appeared to be seeing.

Why does it matter that the algorithm caught the pen marks?

Well, it’s very close to “correlation doesn’t equal causation.” What you think the algorithm is doing is detecting disease. Instead, it’s detecting pen marks. So let’s suppose we had taken this algorithm and then deployed it somewhere else. And suppose we had said, “Wow, this thing does as well as radiologists. Let’s get the radiologists out of the system.” OK. But there are no more pen marks now. And it’s even worse: this dataset happened to have pen marks, but there are lots of systems where they don’t put pen marks. Algorithms pick up on correlations in the very narrow dataset that they’re given. But those correlations don’t necessarily hold outside of the context in which they’re trained. So the real challenge in these data is finding the correlation, or the signal, that is going to hold outside of the very narrow training context. And what’s particularly weird about this example, and I just want to pause on this, is that many people think of the problem in algorithms as being something computational, or that it needs some fancy technique. But you’ll notice here it was almost just a communication problem.

Every radiologist knew they put pen marks there. Something was broken in the human communication system: when the data was handed over, nobody said, “By the way, every X-ray that has disease also has pen marks, and every X-ray that doesn’t, doesn’t have pen marks. So you might want to watch out for that.”
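To make that failure concrete, here is a minimal sketch in Python using entirely invented synthetic data, not anything from the Google project: a toy classifier looks excellent while a spurious “pen mark” feature tracks the label, then falls apart in a setting where the marks are gone.

```python
# Toy illustration of a model latching onto a spurious shortcut feature.
# All data here is simulated; "pen_mark" is a stand-in for the annotation
# described in the interview, not a real imaging feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# "True" disease status and a weakly informative image-derived feature.
disease = rng.integers(0, 2, size=n)
image_signal = disease + rng.normal(0, 2.0, size=n)

# In the training hospital, radiologists marked diseased films with a pen,
# so this flag is almost perfectly correlated with the label.
pen_mark = disease.copy()

X_train = np.column_stack([image_signal, pen_mark])
model = LogisticRegression().fit(X_train, disease)
print("training-hospital accuracy:", model.score(X_train, disease))   # near 1.0

# At a new hospital nobody marks the films, so the shortcut disappears
# and the model is left with only the weak image signal.
disease_new = rng.integers(0, 2, size=n)
image_new = disease_new + rng.normal(0, 2.0, size=n)
X_new = np.column_stack([image_new, np.zeros(n)])
print("new-hospital accuracy:", model.score(X_new, disease_new))      # far worse
```

The point of the sketch is only that a correlation which holds in one narrow dataset can evaporate at deployment, exactly the gap between training performance and real-world performance described above.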

Perhaps the best example of your work emerged around this time last year, when you found that a commonly used hospital algorithm was working to perpetuate racial bias among patients in that hospital system. Can you talk to us about that?

This is one of the strangest pieces of research I’ve ever worked on, just because you rarely have a chance to do something that’s on this scale. So this is a category of algorithm that, you know, depending on how you count, either 60 million or 100-and-something million patients are exposed to. So it’s care coordination programs [and] you’re trying to decide which patients should be put into them. Lots of health systems buy an algorithm that will take their data and rank patients according to how much care they’ll need in the coming year.


And the way these algorithms work is they’re just predictive — they say, “Hey, based on everything I know about you, how much care do you tend to use? And if you look like you’re going to use a lot of care, let’s put you in these expensive care coordination programs.” They make a lot of sense. And so what we did is we said, “OK, let’s take these programs and let’s look at how they do.” … They do a good job of finding the people who need a lot of care. And that’s why people buy them.

But the surprise came, as you alluded to, in the racial element of it, when you looked at how well they did for whites versus Blacks. What you found is that at the same level of illness, Blacks were ranked much lower than whites, to the extent that if you were to equalize, you would more than double the number of Blacks being put into these programs. So it’s as if, for the same level of sickness, Blacks were given a much lower score.

It’s tempting, when you think of algorithms being biased, to imagine there’s something nefarious going on. But when you dug into this, what happened was another fairly simple, but very consequential, communication error. So I keep using the words “find the people who need lots of care.” So what does that mean, care? If we unpack it, there are two ways we could define care.

We could go to your data. Look at your claims and say, “Oh, here’s a person who we’ve spent a lot of money on.” Now, that’s the easiest data to get, because that’s claims data. You can also go and look at care and say, “Oh, this is a person who ends up being very sick.” … These two are used interchangeably in health care a lot. It’s like health as measured by expenditures or health as measured by physiological state. It so happened the algorithm was trained on health as measured by dollars.

Now, here’s a tragic fact in the United States: physical health and health as measured by dollars don’t relate to each other in the same way for Blacks and whites. At every level of sickness, we spend more on whites than we do on Blacks. So what the algorithm was actually asked to find is expensive patients. Expensive patients are disproportionately white patients, because we spend more on them. So this subtle miscommunication crept in, and there were six algorithms of this variety, and it apparently crept into all of them. So it’s not as if these people were stupid. It’s not as if the answer is just to get better data scientists or a better team.
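As a rough illustration of that label choice, here is a toy simulation of my own; the numbers, including the spending gap, and the variable names are invented for illustration and are not the study’s data or code. Training a score to predict dollars, in a world where less is spent on one group at the same level of sickness, gives equally sick patients in that group lower scores, which is exactly what an audit at matched illness levels reveals.

```python
# Toy simulation: a risk score trained on cost inherits a spending gap.
# Group 1 stands in for Black patients, group 0 for white patients; the
# 30% spending gap is an assumption for this sketch, not a measured figure.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true underlying sickness

# At the same illness, less is spent on group 1; both prior-year utilization
# (the model's input) and next-year cost (the training label) carry that gap.
spend_factor = np.where(group == 1, 0.7, 1.0)
prior_cost = illness * spend_factor * 1000 + rng.normal(0, 200, size=n)
next_cost = illness * spend_factor * 1000 + rng.normal(0, 200, size=n)

# Cost-trained score: predict next-year dollars from claims history.
# Note race itself is never a feature; the gap flows in through the claims.
X = prior_cost.reshape(-1, 1)
score = LinearRegression().fit(X, next_cost).predict(X)

# Audit at matched sickness: among the sickest quartile, compare mean scores.
sick = illness > np.quantile(illness, 0.75)
print("mean score, sick & group 0:", score[sick & (group == 0)].mean())
print("mean score, sick & group 1:", score[sick & (group == 1)].mean())
# Equally sick group-1 patients get lower scores, so fewer of them clear any
# fixed cutoff for enrollment in the care-coordination program.
```

Swapping the training label from dollars to a measure of physiological health removes the gap in this toy setup, which mirrors the remedy the researchers proposed.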

I think what’s happening is we’re learning how to take our understanding of a problem and put it into code, and we’re recognizing that the code is very fragile: the exact data we use makes a difference, and the exact variable we use makes a difference. And we haven’t yet learned the ability to convert the problem in our heads into an AI-ready problem in a way that doesn’t create new problems. … So this is not a negative statement; it’s part of this learning process.

How does one develop an AI engine as a closed-loop system to detect those biases and improve its accuracy? And how do explicit and implicit biases creep into this design?

It is true that a poorly built algorithm will end up embodying the biases that we have as humans. That’s what we see here, because costs are a biased function of health: we trained on costs and we got this problem. And I think the first line of defense we have against all of this is just to check.

… I actually think what we’re starting to see in other areas is that badly built algorithms can create biases, but well-built algorithms can actually undo the human biases that are in the system. So actually, algorithms are a remarkable remedy for ourselves. And one of the things that’s missing is that when people talk about algorithmic bias, they’re looking at the algorithm and the creators of the algorithm. But they’re forgetting that in many cases that algorithm is a substitute for, or an aide to, a human. A much older literature on bias is about human bias, and human bias is much bigger, much more intractable, and very hard to change.

And the nice thing about algorithms is that they sit in a box and we can look at their behavior, we can tweak them, we can keep working on them. I can’t go in and tweak what’s inside a doctor’s head. … So algorithms [that are] poorly designed really are quite a big problem. But they actually offer this amazing opportunity for us that if we’re careful, we actually can do a lot more good things with them.

Can you give us an example of when AI can be beneficial?

I don’t know if you’ve ever read this book “The Diving Bell and the Butterfly.”

This is about this guy who, to make a long story short, was trapped in his body. The only muscle he could move was his eyelid, so he could blink. Think of how horrific that is: you can’t move anything else but your eye. But he wrote this entire book through a sequence of blinks. They had a whole code …

So why am I telling you this? There is now work over the last 10 years that has said: wait, for people in that position, we can put EEG electrodes on their head, actually read out the brain signal, and build an algorithm that translates that signal consistently into, for example, where on a keyboard they are trying to look. Which allows people to basically type with their minds, which is like science fiction. And it is something that seems unbelievable …

You can do stuff like this because we have such well-trained algorithms now to convert EEG signals into something. So that’s just an example of the kind of amazing, magical things that we can do. And the reason all of these things can be done is that algorithms can find signals in things that we as humans don’t even know are there. Take an ECG. Hundreds of thousands of people, even in a small hospital, have ECGs every year. We know how to look for eight, 12, 15 things in them. But that is an entire readout of the electrical activity of the heart.

There is a ton more signal for heart function in that than the human mind can detect. So I think the optimism is we now have this unbelievably powerful tool for finding signals in things that we didn’t even know how to look for with our eyes. But conversely, we have this downside, which is that we have to learn how to use this tool.
