The idea that sticky brain plaques cause Alzheimer’s disease began as an interesting hypothesis and eventually became drug industry dogma. Now, after a string of clinical trial failures, that hypothesis looks less credible than ever.
But how did nearly two decades of failure not convince the brightest minds in pharma that it was time to move on?
The answers are a little complicated.
Join us for a brief journey through the history of Alzheimer’s research to understand why the amyloid hypothesis — the one that suggests targeting those plaques could treat the disease — proved so persistent in the face of almost constant disappointment.
We start with Alois Alzheimer’s turn-of-the-century discovery that such plaques were present in the brain of a woman with early-onset dementia. Then we cut to the 1990s, when the drug industry advanced its first big amyloid idea: a vaccine that would turn the body’s defenses against the offending plaque.
That vaccine, of course, would fail, just like the amyloid-directed pills and antibodies that followed. But, to many scientists, each successive disappointment lit the way to a slightly better approach, one that might finally slow cognitive decline and yield a blockbuster drug.
Now, with the latest and arguably best amyloid-targeting therapy deemed a failure, the industry may at last be moving away from that approach to Alzheimer's, having learned a series of hard lessons along the way.