Authors: Develop digital games to improve brain function and well-being (UW-Madison News):
“Neuroscientists should help to develop compelling digital games that boost brain function and improve well-being, say two professors specializing in the field in a commentary article published in the science journal Nature. In the Feb. 28 issue, the two — Daphne Bavelier of the University of Rochester and Richard J. Davidson of the University of Wisconsin-Madison — urge game designers and brain scientists to work together to design new games that train the brain, producing positive effects on behavior, such as decreasing anxiety, sharpening attention and improving empathy.”
To Learn More:
Below you can find the full transcript of our engaging Q&A session yesterday on holistic brain health with clinical neuropsychologist Dr. Paul Nussbaum, author of Save Your Brain. You can learn more about the full Brain Fitness Q&A Series here.
Perhaps one of the best exchanges was: Read the rest of this entry »
When you think of how the PC has altered the fabric of society, permitting instant access to information and automating processes beyond our wildest dreams, it is instructive to consider that much of this progress was driven by Moore’s law. Halving the size of semiconductor components roughly every 18 months catalysed an exponential acceleration in computing performance.
Why is this story relevant to modern neuroscience and the workings of the brain? Because transformative technological progress arises out of choice and the actions of individuals who see potential for change, and we may well be on the verge of such progress. Read the rest of this entry »
By: Dr. Pascale Michelon
Have you read the cover story of the New Scientist this week: Mental muscle: six ways to boost your brain?
The article, which includes good information on brain food, the value of meditation, etc., starts by saying that: “Brain training doesn’t work, but there are lots of other ways to give your grey matter a quick boost.” Further in the article you can read “… brain training software has now been consigned to the shelf of technologies that failed to live up to expectations.”
Such claims are based on the one study widely publicized earlier this year: the BBC “brain training” experiment, published by Owen et al. (2010) in Nature.
What happened to the scientific rigor associated with the New Scientist?
As expressed in one of our previous posts: “Once more, claims seem to go beyond the science backing them up … except that in this case it is the researchers, not the developers, who are responsible.” (See BBC “Brain Training” Experiment: the Good, the Bad, the Ugly).
Read our two previous posts to get to the heart of the BBC study and what it really means. As Alvaro Fernandez and Dr. Zelinski explore the potential scientific flaws of the study, they both point out that there are very promising published examples of brain training methodologies that seem to work.
BBC “Brain Training” Experiment: the Good, the Bad, the Ugly
Scientific critique of BBC/ Nature Brain Training Experiment
By: Dr. Elizabeth Zelinski
There has been quite a bit of comment about the Owen et al. study in Nature, available online on April 20, 2010. A quick synopsis of the study is that the BBC show Bang Goes the Theory worked with the study authors to provide a test of the hypothesis that commercially available brain training programs transfer to general cognitive abilities. The conclusion was that, despite improvements on the trained tasks, “no evidence was found for transfer effects to untrained tasks, even when those tasks were cognitively closely related.”
The study was conducted through the show’s web site. Of 52,617 participants who registered, approximately 20% (11,430) completed full participation in the study, which consisted of two benchmarking assessments 6 weeks apart with variants of neuropsychological tests, and at least two training sessions. People were randomly assigned to one of three groups that were asked to train for about 10 min a day, three times a week, over the 6-week period, though they could train either more or less frequently.

One of the two experimental groups was a “brain training” group that completed tasks including simple arithmetic, finding missing pieces, matching symbols to a target, ordering rotating numbers by numerical value, updating, and memory for items. Most of the training sessions were 90 sec each; the rotating numbers task was 3 min. These activities are similar to those used in “edutainment” programs that can be played online or with a handheld device. The other experimental group was trained on reasoning tasks that involved identifying relative weights of objects based on a visual “seesaw”, selecting the “odd” item in a concept-formation type task, a task involving thinking through the effects of one action on current and future states, and three planning tasks: drawing a continuous line around a grid while ascertaining that the line will not hinder later moves, a version of the Tower of Hanoi task, and a tile-sliding game. The control group spent time answering questions about obscure facts and organizing them chronologically, using any available online resource.

Results indicated that the two experimental groups performed better than the control group on only one outcome test, of grammatical reasoning; there were no differences between either experimental group and the controls on the remaining tests. The experimental groups had improved on the trained tasks but not on the transfer tasks.
Although some news reports suggest that these findings are definitive, there are a number of concerns, many of which have to do with whether the findings have been overgeneralized to all forms of brain training because only a few tests were used. Second, there have been questions raised about the amount of time allocated to training and the issue of testing in the home environment. The study reported Read the rest of this entry »
By: Alvaro Fernandez
Update (04/20/10): after reading the full BBC study in Nature, I wrote the article titled BBC “Brain Training” Experiment: the Good, the Bad, the Ugly, saying that “you probably saw the hundreds of media articles titled “brain training doesn’t work”, based on a BBC experiment. Once more, claims seem to go beyond the science backing them up … except that in this case it is the researchers, not the developers, who are responsible.” You can keep reading the full updated article here.
Below is what I originally wrote before the paper itself was available.
Tomorrow we’ll probably witness a lot of media coverage about an experiment run by the BBC in the UK, to be published in Nature, on whether “brain training” works.
The paper is still embargoed, so we cannot comment on it, but what I can do is share fragments of my email to a BBC reporter from six months ago, discussing my impressions of what they had announced as the ultimate test of whether “brain training” works.
Again, these were purely my impressions based on limited public information. Once we can comment on the published paper we’ll be able to provide a more informed perspective.
Here are some of my thoughts, based on my external perception of your test:
- I agree with many of the premises for the test
- But “Does brain training really work?” is a highly misleading frame: the obvious answer is that, yes, it works as a category. If not, do you mean people can’t learn? Meditate? Go through cognitive therapy? Cognitive retraining? Increase working memory and other brain functions? All of these are established beyond doubt through dozens of well-controlled studies where the intervention effect a) goes beyond placebo, and b) remains once training is over. The 2009 report I sent you includes 10 Research Executive Briefs by leading scientists who reference published papers in high-quality journals. None evaluates Nintendo — but should they be ignored, as a group?
- Now, the key questions are: “what specific brain training are we talking about?”, “work for what?” and “work for whom?”. That’s where we could help educate consumers to separate hope from hype.
- …Right now you are inventing your own “brain game”, and the only thing you will test is whether that specific “brain game” you have developed “works” or not (it is not clear what outcome measures you have). I wouldn’t dare to manufacture my own car from scratch and then claim, based on the results, that “cars” do or don’t work.
- I couldn’t agree more with “brain training that is good for one person might not be good for you”, since one of “brain training’s” properties (both a strength and a weakness) is its highly targeted nature. The implication? We need better assessments to pinpoint bottlenecks and direct the appropriate intervention, and consumers need better education and information to know what is a waste of time and money and what may be worthwhile. Yet your test seems to fully ignore this, and tests whether the same thing is good for everyone… you may be throwing out the baby with the bathwater…”
(Will link to paper once published). Related articles:
By: Alvaro Fernandez
Welcome to the 16th edition of Scientia Pro Publica, the blog carnival that celebrates the best science, nature and medical writing published in the blogosphere within the past 60 days.
What are some of the fascinating topics you can explore and discuss with this group of bloggers?
Science & Us
The Evolving Mind: What’s the point of daydreaming?
Generally Thinking: What is the brain impact of different types of meditation (focused, open monitoring, compassion)?
The Emotion Machine: Can blogging help you control your environment and manage stress?
Greater Good Magazine: Want to live longer and better?
Collective Imagination: Can you share a powerful uncanny experience?
Science & Friends
Lab Rat: Pros and Cons of having amphibian skin?
Science in Paradise: Do sharks get cancer?
Mauka to Makai: Can bunnies offer new light on what comes after Viagra, how to deal with nuclear feces, and new sources of electricity?
Kind of Curious: Did dinosaurs migrate, dead or alive?
Migrations: Do beliefs about evolution affect one’s ability to appreciate birding?
Science & Society
Science & Soul: Can we reverse corn monoculture trends?
Genomics Law Report: If a Direct-To-Consumer genomics company goes bankrupt, what happens to your data? does HIPAA cover it?
And this concludes today’s edition. Kelsey will host next edition (December 7th) at Mauka to Makai; you can submit posts using this handy form. And if you’re interested in hosting Scientia at your blog, contact Grrlscientist!