
This is one of our periodic Five Year Watch columns, examining whether and why predictions of scientific progress were accurate, or hype.

In February 2011, a failed diabetes drug appeared to be getting a second chance. The University of Alberta’s Jack Jhamandas was studying the drug, called AC253, as a potential Alzheimer’s treatment. And he told the Edmonton Journal that if studies in mice went well, trials in humans could be underway in about five years.

Some evidence pointed toward a link between diabetes and Alzheimer’s. For instance, a protein found in the malfunctioning pancreases of those with diabetes, known as amylin, was similar to one found in the brains of those with Alzheimer’s called amyloid. AC253 blocked the action of amylin, so what if it could also block the action of amyloid? It turned out that it did, at least in human brain cells grown in the lab. That set the stage for further tests, first in lab animals and then in humans.

But, for patients, has the five-year prediction held up? Well, only in the sense that it keeps getting remade, so it hasn’t had a chance to be wrong.


In 2012, Jhamandas predicted the same five-year timeline for potential trials of AC253 in a University of Alberta press release, meaning the horizon had been pushed back to 2017, although he “stressed that further testing needs to be done before such trials can occur.” And at the beginning of 2013, Jhamandas told Radio Canada International the same thing, extending the date to the beginning of 2018.

But as of today, no such trial has been registered in either Canada or the United States, which would be a requirement for any human studies. Jhamandas tells STAT that he and his team have continued to work on AC253 and related compounds, such as pramlintide, which is already on the market for diabetes, as potential Alzheimer’s treatments. One need: better versions of AC253 that will actually reach the brain when injected.


But, Jhamandas acknowledges, “We are still some ways from a trial but the work … that we are currently engaged in are requisite steps before such trials can be undertaken.” A much more modest assessment than “five years from now” — and a much more realistic one.

The point here is not that researchers should be held accountable for missing estimated deadlines, or that they should never make such predictions lest they prove wrong — and, thus, subject to ridicule. And we understand why scientists want to have a digestible answer to the journalist’s inevitable question of “When will this help humanity?” (Those answers, by the way, can be a signal to funders to stick with the program for a few more years, since salvation is just around the corner.)

But as the case of AC253 illustrates, arbitrary and unrealistic forecasts that keep shifting not only make researchers look like bad prognosticators, they make them seem like poor judges of the quality of their work, too. Is it too much to ask reporters to stop asking, and scientists to stop answering when they do?