BBC “Brain Training” Experiment: the Good, the Bad, the Ugly
You may already have read the hundreds of media articles today titled “brain training doesn’t work” and similar, based on the BBC “Brain Test Britain” experiment.
Once more, claims seem to go beyond the science backing them up … except that in this case it is the researchers, not the developers, who are responsible.
Let’s recap what we learned today.
The Good Science
The study showed that putting together a variety of brain games in one website and asking people who happened to show up to play around for a grand total of 3–4 hours over 6 weeks (10 minutes, 3 times a week, for 6 weeks) didn’t result in meaningful improvements in cognitive functioning. This is useful information for consumers, because there are in fact websites and companies making claims based on similar approaches without supporting evidence. And this is precisely the reason SharpBrains exists: to help both consumers (through our book) and organizations (through our report) make informed decisions. The paper only included people under 60, which is surprising, but the finding is still useful to know.
A TIME article summarizes the lack of transfer well:
“But the improvement had nothing to do with the interim brain-training, says study co-author Jessica Grahn of the Cognition and Brain Sciences Unit in Cambridge. Grahn says the results confirm what she and other neuroscientists have long suspected: people who practice a certain mental task — for instance, remembering a series of numbers in sequence, a popular brain-teaser used by many video games — improve dramatically on that task, but the improvement does not carry over to cognitive function in general.”
The Bad Science
The study, which was not a gold standard clinical trial, contained obvious flaws both in methodology and in interpretation, as some neuroscientists have started to point out. Back to the TIME article:
“Klingberg (note: Torkel Klingberg is a cognitive neuroscientist who has published multiple scientific studies on the benefits of brain training, and founded a company on the basis of that published work)…criticizes the design of the study and points to two factors that may have skewed the results.
On average the study volunteers completed 24 training sessions, each about 10 minutes long — for a total of three hours spent on different tasks over six weeks. “The amount of training was low,” says Klingberg. “Ours and others’ research suggests that 8 to 12 hours of training on one specific test is needed to get a [general improvement in cognition].”
Second, he notes that the participants were asked to complete their training by logging onto the BBC Lab UK website from home. “There was no quality control. Asking subjects to sit at home and do tests online, perhaps with the TV on or other distractions around, is likely to result in bad quality of the training and unreliable outcome measures. Noisy data often gives negative findings,” Klingberg says.”
More remarkably, even a critic of brain training programs had the following to say in this Nature article:
“I really worry about this study — I think it’s flawed,” says Peter Snyder, a neurologist who studies ageing at Brown University’s Alpert Medical School in Providence, Rhode Island.
…But he says that most commercial programs are aimed at adults well over 60 who fear that their memory and mental sharpness are slipping. “You have to compare apples to apples,” says Snyder. An older test group, he adds, would have a lower mean starting score and more variability in performance, leaving more room for training to cause meaningful improvement. “You may have more of an ability to see an effect if you’re not trying to create a supernormal effect in a healthy person,” he says.
Second, the “dosage” was small, Snyder said. The participants were asked to train for at least 10 minutes a day, three times a week, for at least six weeks. That adds up to a minimum of only three hours over the study period, which seemed modest to Snyder.
Update (04/26): I just found this comment by Michael Valenzuela, responding to the Nature article:
“In our meta-analysis of cognitive brain training RCTs in healthy elderly*, doses of active training ranged from 10 hours to 45 hours, with an average dosage of 33 hours. Overall, the effect was significant and robust.
The minimum cited total dose in the BBC study was 3 hours (10 mins three times a week for 6 weeks), and an average number of sessions is given as 23.86 and 28.39 for the two experimental groups. What was the average duration of each session? This information is not provided, nor controlled for, so let us assume 20 minutes per session, leading to an average total active training dose of 9.5 hours.
The BBC study therefore did not trial a sufficient dose of brain training, leaving aside the issue of the quality of training.
This study was seriously flawed and its conclusions are invalid.
*Valenzuela M., Sachdev P. Can cognitive exercise prevent the onset of dementia? A systematic review of clinical trials with longitudinal follow up. American Journal of Geriatric Psychiatry 2009 17:179–187.”
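For readers who want to double-check the dosage numbers that keep coming up in this post, here is a minimal sketch (in Python) of the arithmetic, under Valenzuela’s own assumption of roughly 20 minutes per session, since session length is not reported in the paper:

```python
# Rough dosage arithmetic behind the critique above.
# Assumption (not reported in the BBC paper): each session lasted roughly 20 minutes,
# the same figure Valenzuela uses in his comment.
MINUTES_PER_SESSION = 20
AVG_SESSIONS_PER_GROUP = [23.86, 28.39]  # average sessions reported for the two experimental groups
MIN_DOSE_HOURS = (10 * 3 * 6) / 60       # protocol minimum: 10 min x 3 times/week x 6 weeks = 3 hours

# Doses in the RCTs reviewed by Valenzuela & Sachdev (2009), as cited above.
META_ANALYSIS_RANGE_HOURS = (10, 45)
META_ANALYSIS_MEAN_HOURS = 33

for sessions in AVG_SESSIONS_PER_GROUP:
    estimated_hours = sessions * MINUTES_PER_SESSION / 60
    print(f"{sessions} sessions x {MINUTES_PER_SESSION} min/session = about {estimated_hours:.1f} hours of training")

print(f"Protocol minimum dose: {MIN_DOSE_HOURS:.1f} hours")
print(f"Doses in the meta-analysis: {META_ANALYSIS_RANGE_HOURS[0]}-{META_ANALYSIS_RANGE_HOURS[1]} hours (mean {META_ANALYSIS_MEAN_HOURS} hours)")
```

Even under that generous assumption, both experimental groups land below the bottom of the 10–45 hour range in the trials that did find effects, which is the core of the dose critique.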
The Ugly Logic
Let’s analyze by analogy. Aren’t the BBC-sponsored researchers basing their extremely broad claims on this type of faulty logic?
- We have decided to design and manufacture our first car ever
- Oops, our car doesn’t work
- Therefore, cars DON’T work, CAN’T work, and WON’T work
- Therefore, ALL car manufacturers are stealing your money.
- Case closed, let’s all continue riding horses.
Klingberg points this out too, stressing to TIME that the study “draws a large conclusion from a single negative finding” and that it is “incorrect to generalize from one specific training study to cognitive training in general.”
Posit Science aimed to debunk the debunker (I have been critical of several of Posit Science’s marketing claims in the past, but in this case I agree with what they are saying):
“This is a surprising study methodology,” said Dr. Henry Mahncke, VP Research at Posit Science. “It would be like concluding that there are no compounds to fight bacteria because the compound you tested was sugar and not penicillin.”
We do need serious science and analysis on the value and limitations of scalable approaches to cognitive assessment, training and retraining. There are very promising published examples of methodologies that seem to work (which the BBC study design not only ignored but somehow managed to directly contradict), mixed with many claims not supported by evidence. What concerns me is that this study may not only confuse the public even more, but also stifle the much-needed innovation that would ensure we are better equipped over the next 5–10 years than we are today to meet the demands of an aging society in a rapidly changing world.
Resources:
- You can read the full paper Here (opens free 5‑page PDF)
- TIME article: Here
- Nature article: Here
- Wall Street Journal article: Here
- My email to BBC six months ago: Here (we gave the BBC full access to the January SharpBrains Summit on Technology for Cognitive Health and Performance; they chose not to engage)
- Book The SharpBrains Guide to Brain Fitness: Here
Thanks, Alvaro–very helpful clarification.
Alvaro, you raise some very valid points regarding the BBC experiment. Bernard Croisile, Chief Science Officer of Scientific Brain Training / HAPPYneuron, makes additional points on the construction and interpretation of the experiment. They can be read at http://www.brainfitnessforlife.com. — Laura
Thanks, very interesting. I’m interested in designing apps and games that can have a transferable effect (www.timestableclock.com, for example). While I appreciate the attempts to bring science to a wider audience, I wish they wouldn’t make it sound like they were doing a valid experiment, and would instead explain the limitations and make clear that something like this can only be a demonstration.
I went to a non-computer cognitive training workshop about 5 years ago and have never forgotten what the psychologist (Feuerstein) said: “If we don’t believe we can change brain behavior, then why do we teach?”
My experience with a computer-based, intensive, auditory training program was that it enhanced the auditory system momentarily, but the positive effects didn’t last over time. There’s definitely room for advancement in the ‘brain training’ world.
Hello everyone, thank you for your comments.
Let’s hope something good comes out of all of this, such as more clarity into what “brain training” is and isn’t.
We have decided to publish online close to the full content of our book and consumer guide The SharpBrains Guide to Brain Fitness:
https://sharpbrains.com/resources/
I agree. The show’s logic is ugly. And stupid. It’s like saying that since most diets don’t work, all the research on healthy eating is just a bunch of baloney. But then again, it’s a TV show. And on TV, stupid logic works.
So what do we need to do? Dumb down brain fitness? I don’t think so… But if you are going to sell brain fitness to a TV audience, you need to use visual and emotional hooks to deliver the message.
Testing brain/cognitive training products is important. However, in science, one can’t conclude that an effect (e.g., cognitive improvement through software use) is impossible or unlikely simply because it was not observed in one study (even if the sample size is very large). Owen and colleagues could refer to the Wikipedia article on the “null hypothesis”, which most researchers understand. The range of possible cognitive mechanisms, strategies and skills that one can potentially train through software, and the range of ways in which one could train them, are so broad as to call for much more theorizing, software development and empirical testing of the effects of such software. (And of course many cognitive training effects have already been documented.)
I’ve tried brain training on-line and I’ve tried it on a Nintendo DS and notice considerable improvements. Try meditating. That’s REAL brain training.
Excellent points. But why didn’t the study draw from the literature and use brain training programs that have been scientifically demonstrated to have a broad transfer effect — such as the dual n‑back (e.g. http://www.highiqpro.com/high-iq-pro/scientific-basis-of-software)? It’s not just a dose problem but a task problem. Single n‑back training gains have NOT been found to transfer to fluid intelligence gains, while the dual n‑back has. This kind of specificity is important.
Yes, the BBC basically muddied the waters for consumers (even more, yes) by labeling as “brain training” something that has nothing to do with the cognitive training that has enabled transfer in previous studies. If they wanted to debunk Nintendo, well then, test Nintendo.
Cognitive training has been identified in the recent NIH independent panel report as the only protective factor against cognitive decline based on the highest degree of evidence. Cognitive engagement overall and physical activities are also protective, based on lower quality evidence.
It would seem as if the BBC may be inadvertently contributing to the cognitive decline of its viewers/readers who trust its news and programs without critical judgment.