
As the role of artificial intelligence grows in medicine, one of the leading concerns is that algorithmic tools will perpetuate disparities in care. Because AIs are trained on health records reflecting current standards of care, they could end up parroting bias baked into the medical system if not carefully designed. And if algorithms aren't trained and tested on data from diverse populations, they could be less effective when used to guide care for poorly represented subsets of patients.

So some AI development groups are tackling that problem head on, training and testing their algorithms on diverse patient data to ensure they work for a wide range of patients — long before they're deployed in the wild.
