The marketers of “brain-training” games have long drummed up sales by pointing to data that show that their products make you smarter or ward off cognitive decline.
But a cleverly designed new study from researchers at George Mason University offers the best evidence yet that there’s a serious flaw in much of that research. The problem: People may get a mental boost because they expect to do better, not because the games actually work.
“This study seems to strongly support our concerns that brain-training effects might be nothing more than placebo effects,” said Walter Boot, a cognitive psychologist at Florida State University who was not involved in the research.
The new study, published Monday in the Proceedings of the National Academy of Sciences, flipped the script of most brain-training research. The 50 study participants played the same memory-boosting game, but were recruited in two different ways: either from a flyer blaring the buzzwords “brain training and cognitive enhancement” or one that simply invited them to “participate in a study” without any mention of brain training.
The former group of 25 people saw a 5-to-10-point IQ boost after playing the game; the latter showed no cognitive improvement. And that recruiting bias was not just hypothetical: After conducting their study, the George Mason researchers examined published research on cognitive training and found that 17 of the 19 studies surveyed had recruited participants in a way that may have biased their outcomes.
Industry under fire
The findings may be seen as another blow to a brain-training industry already under fire. The company behind the most high-profile brand of brain-training games, Lumosity, agreed in January to a $2 million settlement with the Federal Trade Commission for making deceptive claims about the health effects of its games. The money is being doled out as refunds to Lumosity’s customers.
“It’s really necessary for researchers to in some way account for this [placebo effect] moving forward because this could contaminate your effects,” said Cyrus Foroughi, a cognitive scientist and the study’s lead author.
There are also questions about how long any real cognitive benefits might last. And it’s unclear how much they actually matter: Cognitive improvement from the games is often measured by how well people perform abstract tasks in a university lab, which may not carry over to real-world skills such as driving a car or remembering a grocery list.
But it’s the placebo effect in particular that scientists have long suspected of clouding much of what they are seeing in the brain-training field.
Adam Gazzaley, a cognitive neuroscientist at the University of California, San Francisco, said the findings about the role of the placebo effect “confirm what a lot of us have always assumed.”
But Gazzaley, an adviser to a startup that’s trying to get approval from the Food and Drug Administration for a “prescription” video game based on his prototype, is known for an unusually rigorous approach to game development. And he sees a bright side in the growing evidence that people’s expectations can influence their cognitive performance, so long as researchers measure and account for it in their study design.
“We view the fact that people think that they can use a training program to improve themselves as a positive,” he said, “because it increases motivation and depth of engagement in the training.”