A few years ago, when data from testing chemicals on thousands of animals were made public, a team of toxicologists and computer wizards noticed something alarming. They saw that the same substances were being squirted into rabbits’ eyes and rodents’ mouths again and again to figure out how toxic they might be. Two chemicals, for example, had been tested more than 90 times each, and 69 had been tried more than 45 times.

That represented an enormous amount of waste and unnecessary suffering. But it also opened up an opportunity. Animal tests are considered the gold standard for determining how toxic a substance might be to people. With these data, the team, led by a Johns Hopkins University scientist, could see that animal experiments, when repeated, often produced disparate results. And they thought their computer model could do better — and reduce the number of animals needed to evaluate chemicals.
