
During Monday’s grilling of Mark Zuckerberg in a Senate hearing that lasted for more than three hours, Illinois Sen. Dick Durbin asked the Facebook founder and CEO if he would be comfortable sharing the name of the hotel he was staying at in Washington. The question caught Zuckerberg off guard.

“Umm. … Uh. No,” Zuckerberg eventually answered, after thinking about it for some time.


As this question illustrates, the recent firestorm over Facebook’s involvement in the Cambridge Analytica scandal is less about the data breach and more about the right to privacy and the limits to that right. It also highlights the ethical conflicts of implementing artificial intelligence without thoroughly considering societal norms. This scandal holds some important bioethics lessons for health care leaders who are building machine learning and artificial intelligence models for clinical decision-making.

The biggest bioethical challenge in building these models is how we prevent algorithms from imitating human biases in decision-making. Last year, ProPublica reported how Facebook algorithms did not prevent discriminatory practices by housing advertisers who were excluding users by race, or by recruiting agencies who were excluding users on the basis of age.

The problem behind these shocking algorithmic decisions is that they reflect human biases that are ingrained in the data used to build them. It is reasonable to think that similar biases — whether they are racial or genetic — could make their way into clinical decision-making algorithms.


For example, if an algorithm is repeatedly fed information about a certain disease among black patients without context, it may incorrectly conclude that the disease is more common among blacks, leading to racial bias. Schizophrenia, for instance, is overdiagnosed in black patients, often without accounting for contextual factors such as the fact that racial and ethnic minorities in the United States are less likely than whites to seek mental health treatment. So, when black patients present with symptoms of mental illness, algorithms may incorrectly assume — as medical textbooks have for decades — that schizophrenia is more common in black patients than it really is. Similarly, if an algorithm is built to analyze risks or clinical outcomes from genetic tests, its conclusions will be racially biased if genetic data from minority patient populations are scarce.
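To make the mechanism concrete, here is a minimal sketch in Python, using entirely simulated data and a simple scikit-learn model (nothing here reflects a real clinical dataset or deployed system), of how inflated diagnosis labels for one group are faithfully reproduced by a model trained on them:

```python
# A toy illustration with simulated data: the true disease prevalence is
# identical in both groups, but the recorded diagnoses used as training
# labels are inflated for one group, and the model learns that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)          # 0 or 1, illustrative groups only
true_disease = rng.random(n) < 0.01         # same true prevalence in both groups
overdiagnosed = (group == 1) & (rng.random(n) < 0.02)   # extra labels for group 1
label = (true_disease | overdiagnosed).astype(int)

model = LogisticRegression().fit(group.reshape(-1, 1), label)

print("Predicted risk, group 0:", round(model.predict_proba([[0]])[0, 1], 3))
print("Predicted risk, group 1:", round(model.predict_proba([[1]])[0, 1], 3))
# The model reports a higher "risk" for group 1 even though, by construction,
# the true prevalence is the same in both groups.
```

Even in this toy setting, the higher predicted risk for one group comes entirely from the biased labels, not from any real difference in how common the disease is.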

Some of this is already happening: In 2015, a study that used data from the landmark Framingham Heart Study to predict risk of cardiovascular events led to mixed and biased risk calculations among black and Hispanic patients, mainly because the Framingham study included mostly white individuals.

A report in the journal Science revealed that artificial intelligence may actually amplify implicit racial and gender biases by picking up deeply ingrained prejudices concealed within language patterns. The paper illustrated that a statistical machine-learning model trained on standard text from the World Wide Web associated the words “female” and “woman” with professions in arts and humanities and the words “male” and “man” with engineering professions.
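The underlying measurement is simple to sketch. The following Python snippet, written in the spirit of the association tests described in that paper, compares how close gendered words sit to profession words in an embedding space; the tiny three-dimensional vectors are invented purely for illustration, whereas the actual study used embeddings learned from large web-text corpora:

```python
# A toy association test: which gendered word does a profession word sit
# closer to in embedding space? The 3-dimensional vectors are invented for
# illustration; the study used embeddings learned from web-scale text.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "woman":    np.array([0.9, 0.1, 0.3]),
    "man":      np.array([0.1, 0.9, 0.3]),
    "poet":     np.array([0.8, 0.2, 0.4]),
    "engineer": np.array([0.2, 0.8, 0.4]),
}

for profession in ("poet", "engineer"):
    gap = (cosine(embeddings["woman"], embeddings[profession])
           - cosine(embeddings["man"], embeddings[profession]))
    print(f"{profession}: 'woman' association minus 'man' association = {gap:+.2f}")
# With real embeddings trained on web text, these gaps line up with the
# stereotyped associations the Science paper reported.
```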

Algorithms are also prone to unethical clinical decision-making. Just as Facebook’s algorithms are built to generate maximum revenue from advertisements, clinical decision-making algorithms can be built in ways that maximize profit over optimal treatment for patients, for example by overprescribing certain drugs or ordering unnecessary imaging studies.

So far, the private sector has mostly been involved in designing algorithms for clinical decision support, but bioethics training programs should take a cue from the Facebook scandal and develop curricula that reflect trends in health care delivery to better inform these algorithms. As a physician-bioethicist, I am unable to find platforms or guidelines that can help me navigate this space.

To be sure, incorporating big data and machine learning into digital platforms that assist in clinical decision-making holds great promise — from reducing disparities in health care to addressing the burnout so many physicians face from administrative tasks. However, leaders in the medical research community and health care field need to play proactive roles in shaping these innovations.

While big data and machine learning may appear to be threats, they also present unique opportunities to counteract biases. With some data experts calling for an international artificial intelligence watchdog to prevent discrimination against vulnerable communities by automated computer systems, we can start by taking small steps. Clinical decision-making systems should be built to detect — and address — known biases. These ethical concerns need to be incorporated early on in the development process.
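What such a check might look like in practice is not mysterious. Here is a rough Python sketch, with hypothetical data and an arbitrary threshold, of a bias audit that compares a model’s true-positive rate across patient groups and flags large gaps for human review before the system is deployed:

```python
# A rough bias audit: compare the model's true-positive rate across patient
# groups on a held-out set and flag large gaps for human review. The data,
# group labels, and 5-percentage-point threshold are illustrative assumptions.
import numpy as np

def true_positive_rate(y_true, y_pred):
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 1)) if positives.any() else float("nan")

def audit_by_group(y_true, y_pred, group, max_gap=0.05):
    rates = {g: true_positive_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > max_gap

# Hypothetical model outputs on a validation set.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates, gap, flagged = audit_by_group(y_true, y_pred, group)
print(rates, f"gap = {gap:.2f}", "flag for review" if flagged else "ok")
```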

Another advantage of these systems is that, unlike humans, algorithms can be designed not to lie.

Initiatives that require clinical decision-making algorithms to account for known biases may seem complicated to develop, and in many cases they are. But if we are to move forward with the use of artificial intelligence to help physicians and other care providers make clinical decisions, ensuring that human flaws do not permeate automated clinical decision-making is a responsibility we cannot shy away from.

Junaid Nabi, M.D., is a nonprofit executive and medical journalist, a fellow in bioethics at Harvard Medical School, and a New Voices Fellow at the Aspen Institute.