
A few years ago, when data from testing chemicals on thousands of animals were made public, a team of toxicologists and computer wizards noticed something alarming. They saw that the same substances were being squirted into rabbits’ eyes and rodents’ mouths again and again to figure out how toxic they might be. Two chemicals, for example, had been tested more than 90 times each, and 69 chemicals had been tested more than 45 times.
That represented an enormous amount of waste and unnecessary suffering. But it also opened up an opportunity. Animal tests are considered the gold standard for determining how toxic a substance might be to people. With these data, the team led by a Johns Hopkins University scientist could see that animal experiments, when repeated, often produced disparate results. And they thought their computer model could do better — and reduce the number of animals needed for evaluating chemicals.
On Wednesday, in the journal Toxicological Sciences, that team reported that their new computer program won out over the animal tests. It predicts the dangers of a new chemical based on how similar it is to previously tested chemicals.
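To make the idea concrete, here is a minimal sketch of similarity-based ("read-across") prediction in Python. It is not the Johns Hopkins team's actual program or data: the chemicals, their feature "fingerprints," and the nearest-neighbor voting scheme below are all hypothetical, chosen only to illustrate the general principle of inferring a new chemical's toxicity from its most similar previously tested neighbors.

```python
# Illustrative read-across sketch (hypothetical data, not the published model).
# Each chemical is represented as a set of structural feature IDs; a new
# chemical is classified by a majority vote of its most similar tested neighbors.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def predict_toxicity(query: set, tested: dict, k: int = 3) -> bool:
    """Majority vote over the k previously tested chemicals most similar to `query`."""
    ranked = sorted(tested.values(),
                    key=lambda rec: tanimoto(query, rec[0]),
                    reverse=True)
    votes = [is_toxic for _, is_toxic in ranked[:k]]
    return sum(votes) > k / 2

# Hypothetical record of past animal tests: name -> (structural features, was it toxic?)
tested_chemicals = {
    "chem_A": ({1, 4, 7, 9}, True),
    "chem_B": ({1, 4, 8}, True),
    "chem_C": ({2, 3, 5}, False),
    "chem_D": ({2, 5, 6}, False),
    "chem_E": ({1, 7, 9, 10}, True),
}

new_chemical = {1, 4, 9}  # features of an untested chemical
print(predict_toxicity(new_chemical, tested_chemicals))  # -> True
```

In practice, systems like this work from much richer chemical fingerprints and far larger databases of past test results, but the underlying bet is the same one described above: chemicals that look alike tend to behave alike.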