
Artificial intelligence has the potential to transform health care. It can enable health care professionals to analyze health data quickly and precisely, and lead to better detection, treatment, and prevention of a multitude of physical and mental health issues.

Artificial intelligence, integrated with virtual care interventions such as telemedicine and digital health, is playing a vital role in responding to Covid-19. Penn Medicine, for example, has designed a Covid-19 chatbot to stratify patients and facilitate triage. Penn is also using machine learning to identify patients at risk for sepsis.


The University of California, San Diego, health system is using machine learning to augment analysis of chest X-rays for pneumonia, helping to identify patients likely to have Covid-19 complications. The U.S. Veterans Health Administration is piloting an AI tool to predict Covid-19 outcomes such as length of hospitalization and death. Mass General Brigham developed a Covid-19 screener chatbot to rapidly stratify sick patients, facilitate triage of patients to appropriate care settings, and alleviate the workload on contact centers.

One problem with artificial intelligence is that its ability to interpret data relies on processes that are not transparent, making it difficult to verify and trust output from AI systems. Its use in health care therefore raises fundamental ethical issues that must be addressed to avoid harming patients, creating liability for health care providers, and undermining public trust in these technologies.

AI-based health care tools have, for example, been observed to replicate racial, socioeconomic, and gender bias. Even when algorithms are free of structural bias, data interpreted by algorithms may contain bias that is replicated in clinical recommendations. Although algorithmic bias is not unique to predictive artificial intelligence, AI tools are capable of amplifying these biases and compounding existing health care inequalities.


Most patients aren’t aware of the extent to which AI-based health care tools are capable of mining and drawing conclusions from health and non-health data, including sources patients believe to be confidential, such as data from their electronic health records, genomic data, and information about chemical and environmental exposures. If AI predictions about health are included in a patient’s electronic record, anyone with access to that record could discriminate on the basis of speculative forecasts about mental health, cognitive decline, cancer risk, opioid abuse, and more.

The implications for patient safety, privacy, and provider and patient engagement are profound. For example, patients may limit their use of patient portals or personal medical records, or even stop using connected health devices that collect sensor and patient-generated data.

These issues have already outpaced the current legal landscape. The Health Insurance Portability and Accountability Act (HIPAA), which requires patient consent for disclosure of certain medical information, does not apply to commercial entities that are not health care providers or insurers. AI developers, for example, are generally not considered business associates under HIPAA and thus are not bound by the law. The Americans with Disabilities Act does not prohibit discrimination based on future medical problems, and no law prohibits decision-making on the basis of non-genetic predictive data, such as decisions made using predictive analytics and AI.

As health care systems increasingly adopt AI technologies, data governance structures must evolve to ensure that ethical principles are applied to all clinical, information technology, education, and research endeavors. A data governance framework based on the following 10 steps can help health care systems embrace artificial intelligence applications in ways that reduce ethical risks to patients, providers, and payers. This approach can also enhance public trust and transform patient and provider experiences by improving patient satisfaction scores, building better relationships between patients and providers, activating patients, and improving self-management of chronic care.

Establish ethics-based governing principles. Artificial intelligence initiatives should adhere to key overarching principles to ensure these efforts are shaped and implemented in an ethical way. At a minimum, these principles should affirm:

  • The technology does no harm. AI developers should exercise reasonable judgement and maintain responsibility for the life cycle of AI algorithms and systems, and for the health care outcomes they produce, through rigorous testing and calibration, empathy for patients, and a deep understanding of the implications of the recommendations those algorithms generate. AI developers should sign a code of conduct pledge.
  • The initiative is designed and developed using transparent protocols, auditable methodologies, and metadata.
  • AI tools collect and process patient data in ways that reduce biases against population groups based on race, ethnicity, gender, and the like.
  • Patients are apprised of the known risks and benefits of AI technologies so they can make informed medical decisions about their care.

Establish a digital ethics steering committee. Health care systems should operationalize AI strategy through a digital ethics steering committee composed of the chief data officer, chief privacy officer, chief information officer, chief health informatics officer, chief risk officer, and chief ethics officer. These individuals and their teams are best positioned to engage the intertwined issues of privacy, data, ethics, and technology. Health care organizations should consider establishing these C-level positions if they don’t exist already.

Convene diverse focus groups. Focus groups that include stakeholders from the diverse groups from whom datasets may be collected are essential to reducing algorithmic bias. Focus groups may include patients, patient advocates, providers, researchers, educators, and policymakers. They can contribute to requirements, human-centered design, and design reviews; identify training data biases early and often; and participate in acceptance testing.

Subject algorithms to peer review. Rigorous peer review processes are essential to exposing and addressing blind spots and weaknesses in AI models. In particular, AI tools that will be applied to data involving race or gender should be peer-reviewed and validated to avoid compounding bias. Peer reviewers may include internal and external care providers, researchers, educators, and diverse groups of data scientists other than AI algorithm developers. Algorithms must be designed for explainability and interpretability. It is imperative that diversity be promoted among AI development teams.
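One simple interpretability check a peer reviewer might run is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below is purely illustrative; the toy risk model, feature names, and synthetic data are all invented for demonstration.

```python
import random
import statistics

def risk_model(age, blood_pressure, noise_feature):
    # Toy model: depends on age and blood pressure, ignores noise_feature.
    return 0.02 * age + 0.01 * blood_pressure

def mean_abs_error(rows, labels):
    preds = [risk_model(*r) for r in rows]
    return statistics.mean(abs(p - y) for p, y in zip(preds, labels))

def permutation_importance(rows, labels, col, seed=0):
    # Importance = increase in error after shuffling one feature column.
    rng = random.Random(seed)
    baseline = mean_abs_error(rows, labels)
    shuffled = [list(r) for r in rows]
    column = [r[col] for r in shuffled]
    rng.shuffle(column)
    for r, v in zip(shuffled, column):
        r[col] = v
    return mean_abs_error(shuffled, labels) - baseline

# Synthetic cohort: (age, blood pressure, irrelevant noise feature).
rng = random.Random(1)
rows = [(rng.uniform(30, 80), rng.uniform(90, 160), rng.random()) for _ in range(200)]
labels = [0.02 * a + 0.01 * bp + rng.gauss(0, 0.05) for a, bp, _ in rows]

for i, name in enumerate(["age", "blood_pressure", "noise_feature"]):
    print(name, round(permutation_importance(rows, labels, i), 4))
```

A reviewer seeing near-zero importance for a clinically critical variable, or high importance for a proxy of race or gender, has a concrete finding to escalate.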

Conduct AI model simulations. Simulation models that test scenarios in which AI tools are susceptible to bias will mitigate risk and improve confidence.
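A bias-focused simulation might work like this: generate synthetic cohorts in which one subgroup's scores are systematically depressed (as can happen when training data under-represents that group), then check whether a fixed screening threshold misses more truly high-risk patients in that subgroup. This is a hedged sketch; the score distributions, group labels, and threshold are assumptions chosen for illustration.

```python
import random

def screen(score, threshold=0.5):
    return score >= threshold  # flag as high risk

def false_negative_rate(cases, threshold):
    # cases: list of (score, truly_high_risk) tuples
    positives = [s for s, y in cases if y]
    if not positives:
        return 0.0
    missed = sum(1 for s in positives if not screen(s, threshold))
    return missed / len(positives)

def simulate(seed=0, n=1000, shift=-0.1, threshold=0.5):
    # Group B's scores are shifted downward, simulating a model that was
    # trained on data under-representing that group.
    rng = random.Random(seed)
    def cohort(score_shift):
        cases = []
        for _ in range(n):
            truly_high = rng.random() < 0.3
            base = rng.gauss(0.7 if truly_high else 0.3, 0.15)
            cases.append((base + score_shift, truly_high))
        return cases
    return {
        "group_A_fnr": false_negative_rate(cohort(0.0), threshold),
        "group_B_fnr": false_negative_rate(cohort(shift), threshold),
    }

print(simulate())  # group B's false-negative rate is visibly higher
```

The gap between the two false-negative rates is exactly the kind of quantified disparity a digital ethics committee can act on before deployment.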

Develop clinician-focused guidance for interpreting AI results. Physicians must be equipped to give appropriate weight to artificial intelligence tools in ways that preserve their clinical judgement. AI technologies should be designed and implemented in ways that augment, rather than replace, professional clinical judgement.

Develop external change communication and training strategies. As applications of AI in health care evolve, a clear messaging strategy is important to ensuring that patients understand the key benefits and risks of health care AI and that providers can communicate them clearly and coherently. A robust training plan must also underscore ethical and clinical nuances that arise among patients and care providers when using AI-based tools. An effective communication and training strategy complemented with behavioral economics will increase the adoption of AI tools.

Maintain a log of tests. A repository of test plans, methodologies, and results will facilitate identification of positive and negative trends in artificial intelligence technologies. This repository can be leveraged to improve comprehension of and confidence in AI outputs.
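Such a repository can start small. The sketch below shows a minimal in-memory structure for recording test runs and querying a metric's trend over time; the field names are assumptions rather than a standard schema, and a production system would use a database with access controls.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TestRecord:
    run_date: date
    model: str
    methodology: str   # e.g. "holdout", "subgroup audit", "simulation"
    metric: str        # e.g. "AUROC", "false_negative_rate"
    value: float
    passed: bool

@dataclass
class TestLog:
    records: list = field(default_factory=list)

    def add(self, record: TestRecord):
        self.records.append(record)

    def trend(self, model: str, metric: str):
        # Chronological metric values for one model, to spot decay or drift.
        rows = [r for r in self.records if r.model == model and r.metric == metric]
        return [r.value for r in sorted(rows, key=lambda r: r.run_date)]

log = TestLog()
log.add(TestRecord(date(2021, 1, 1), "sepsis-risk", "holdout", "AUROC", 0.88, True))
log.add(TestRecord(date(2021, 4, 1), "sepsis-risk", "holdout", "AUROC", 0.84, True))
log.add(TestRecord(date(2021, 7, 1), "sepsis-risk", "holdout", "AUROC", 0.79, False))
print(log.trend("sepsis-risk", "AUROC"))  # a declining trend worth investigating
```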

Test algorithms in controlled experiments that are blinded and randomized. Rigorous testing, complemented by randomized controlled experiments, is an effective method of isolating and reducing or eliminating sources of bias.
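The core mechanics of such an experiment, randomization plus blinding, can be sketched simply: patients are randomly assigned to arms, and outcomes are tallied under coded labels so assessors cannot tell which arm is AI-assisted until the analysis is locked. The arm names and outcome probabilities below are invented for illustration.

```python
import random

def run_trial(n=2000, seed=0, p_outcome={"arm_X": 0.12, "arm_Y": 0.12}):
    rng = random.Random(seed)
    # Blinding: the mapping from coded labels to real arms is held separately
    # and revealed only after the analysis plan is locked.
    key = {"arm_X": "ai_assisted", "arm_Y": "standard_care"}
    counts = {"arm_X": [0, 0], "arm_Y": [0, 0]}  # [patients, adverse outcomes]
    for _ in range(n):
        arm = rng.choice(["arm_X", "arm_Y"])     # randomization step
        counts[arm][0] += 1
        if rng.random() < p_outcome[arm]:        # simulated patient outcome
            counts[arm][1] += 1
    rates = {arm: events / total for arm, (total, events) in counts.items()}
    return key, rates

key, rates = run_trial()
print(rates)  # unblind with `key` only after the comparison is complete
```

Randomization balances confounders across arms, and blinding prevents assessors' expectations about AI from coloring outcome measurement, which is precisely how structural sources of bias are isolated.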

Continuously monitor algorithmic decision processes. AI, by its nature, is always evolving; consequently, algorithmic decision processes must be monitored, assessed, and refined continuously.
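A minimal monitoring primitive compares recent model scores against a frozen baseline window and raises a flag when their distribution shifts. The tolerances below are arbitrary assumptions; production monitoring would use richer statistics such as the population stability index.

```python
import statistics

def drift_alert(baseline, recent, mean_tol=0.1, stdev_tol=0.1):
    # Flag when the mean or spread of recent scores diverges from baseline.
    mean_shift = abs(statistics.mean(recent) - statistics.mean(baseline))
    stdev_shift = abs(statistics.pstdev(recent) - statistics.pstdev(baseline))
    return mean_shift > mean_tol or stdev_shift > stdev_tol

baseline = [0.30, 0.35, 0.32, 0.28, 0.31, 0.33, 0.29, 0.34]
stable   = [0.31, 0.33, 0.30, 0.32, 0.29, 0.34, 0.31, 0.30]
drifted  = [0.48, 0.52, 0.47, 0.50, 0.49, 0.51, 0.46, 0.53]

print(drift_alert(baseline, stable))   # False: distribution unchanged
print(drift_alert(baseline, drifted))  # True: scores have shifted upward
```

An alert like this does not diagnose the cause, but it tells the governance team when a deployed model no longer behaves as it did at validation time and needs to be reassessed.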

As AI transforms health care, these 10 steps can help health care systems prepare a governance framework capable of executing AI initiatives on an enterprise scale in a manner that reduces ethical risks to patients, enhances public trust, affirms health equity and inclusivity, transforms patient experiences, drives digital health initiatives, and improves the reliability of AI technologies.

At a minimum, transparency must be required of AI developers and vendors. Access to patient data should be granted only to those who need it as part of job role, and this data must be encrypted when it is sent externally. Patients and providers — working with regulators from the Department of Health and Human Services and the Food and Drug Administration — must advocate for HIPAA protections and compliance requirements from AI developers, consultants, information technology and digital health companies, and system integrators. Patients should also demand informed consent and transparency of data in algorithms, AI tools, and clinical decision support systems.

Ethical principles have become the bedrock of AI strategy in organizations such as the Department of Defense. They should provide an equally solid foundation for all health care organizations that use artificial intelligence.

Satish Gattadahalli is director of digital health and health informatics at Grant Thornton Public Sector.

Editor’s note: A version of this article was previously published by Healthcare IT News.
