What’s in it for me: Ten critical questions to navigate media coverage of the latest scientific findings

Here at Greater Good, we cover research into social and emotional well-being, and we try to help people apply findings to their personal and professional lives. We are well aware that our business is a tricky one.

Summarizing scientific studies and applying them to people’s lives isn’t just difficult for the obvious reasons, like understanding and then explaining scientific jargon or methods to non-specialists. It’s also the case that context gets lost when we translate findings into stories, tips, and tools for a more meaningful life, especially when we push it all through the nuance-squashing machine of the Internet. Many people never read past the headlines, which intrinsically aim to overgeneralize and provoke interest. Because our articles can never be as comprehensive as the original studies, they almost always omit some crucial caveats, such as limitations acknowledged by the researchers. To get those, you need access to the studies themselves.

And it’s very common for findings to seem to contradict each other. For example, we recently covered an experiment that suggests stress reduces empathy, after having previously discussed other research suggesting that stress-prone people can be more empathic. Some readers asked: Which one is correct? (You’ll find my answer here.)

But probably the most important missing piece is the future. That may sound like a funny thing to say, but, in fact, a new study is not worth the PDF it’s printed on until its findings are replicated and validated by other studies that haven’t yet happened. An experiment is merely interesting until time and testing turn its finding into a fact.

Scientists know this, and they are trained to react very skeptically to every new paper. They also expect to be greeted with skepticism when they present findings. Trust is good, but science isn’t about trust. It’s about verification.

However, journalists like me, and members of the general public, are often prone to treat every new study as though it represents the last word on the question addressed. This particular issue was highlighted last week by (wait for it) a new study that tried to reproduce 100 prior psychological studies to see if their findings held up. The result of the three-year initiative is chilling: the team, led by University of Virginia psychologist Brian Nosek, got the same results in only 36 percent of the experiments they replicated. This has led to some predictably provocative, overgeneralizing headlines implying that we shouldn’t take psychology seriously.

I don’t agree.

Despite all the mistakes and overblown claims and criticism and contradictions and arguments, or perhaps because of them, our knowledge of human brains and minds has expanded dramatically during the past century. Psychology and neuroscience have documented phenomena like cognitive dissonance, identified many of the brain structures that support our emotions, and proved the placebo effect and other dimensions of the mind-body connection, among other findings that have been tested over and over again.

These discoveries have helped us understand and treat the true causes of many illnesses. I’ve heard it argued that rising rates of diagnoses of mental illness constitute evidence that psychology is failing, but in fact the opposite is true: we’re seeing more and better diagnoses of problems that would have compelled previous generations to dismiss people as “stupid” or “crazy” or “hyper” or “blue.” The important thing to bear in mind is that it took a very, very long time for science to come to these insights and treatments, following much trial and error.

Science isn’t a faith, but rather a method that takes time to unfold. That’s why it’s equally wrong to uncritically embrace everything you read, including what you are reading on this page.

Given the complexities and ambiguities of the scientific endeavor, is it possible for a non-scientist to strike a balance between wholesale dismissal and uncritical belief? Are there red flags to look for when you read about a study on a site like Greater Good or in a popular self-help book? If you do read one of the actual studies, how should you, as a non-scientist, gauge its credibility?

I drew on my own experience as a science journalist and surveyed my colleagues here at the UC Berkeley Greater Good Science Center. We came up with 10 questions you might ask when you read about the latest scientific findings. These are also questions we ask ourselves before we cover a study.

1. Did the study appear in a peer-reviewed journal?

Peer review (submitting papers to other experts for independent review before acceptance) remains one of the best ways we have for ascertaining the basic seriousness of the study, and many scientists describe peer review as a truly humbling crucible. If a study didn’t go through this process, for whatever reason, it should be taken with a much bigger grain of salt.

2. Who was studied, where?

Animal experiments tell scientists a lot, but their applicability to our daily human lives is limited. Similarly, if researchers only studied men, the conclusions might not be relevant to women, and vice versa.

This was actually a huge problem with Nosek’s effort to replicate other people’s experiments. In trying to replicate one German study, for example, the team had to use different maps (ones that would be familiar to University of Virginia students) and change a scale measuring aggression to reflect American norms. This kind of variance could explain the different results. It may also suggest the limits of generalizing the results from one study to other populations not included within that study.

As a matter of approach, readers must remember that many psychological studies rely on WEIRD (Western, educated, industrialized, rich, and democratic) samples, mainly college students, which creates a built-in bias in the discipline’s conclusions. Does that mean you should dismiss Western psychology? Of course not. It’s just the equivalent of a “Caution” or “Yield” sign on the road to understanding.

3. How big was the sample?

In general, the more participants in a study, the more valid its results. That said, a large sample is sometimes impossible or even undesirable for certain kinds of studies. This is especially true in expensive neuroscience experiments involving functional magnetic resonance imaging (fMRI) scans.

And many mindfulness studies have scanned the brains of people with many thousands of hours of meditation experience, a relatively small group. Even in those cases, however, a study that looks at 30 experienced meditators is probably more solid than a similar one that scanned the brains of only 15.
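To see why sample size matters so much, here is a minimal simulation sketch in Python (the effect size, noise level, and sample sizes are made-up illustrations, not data from any real study): every simulated study measures the same true effect, but the smaller the sample, the more wildly the estimates swing from one study to the next.

```python
import random
import statistics

random.seed(42)

def simulate_study(n, true_effect=0.5, noise_sd=1.0):
    """One hypothetical study: n participants, each score = true effect + noise."""
    scores = [random.gauss(true_effect, noise_sd) for _ in range(n)]
    return statistics.mean(scores)

# Run 1,000 hypothetical studies at each sample size and compare how widely
# their estimates scatter around the true effect of +0.50.
for n in (15, 30, 100):
    estimates = [simulate_study(n) for _ in range(1000)]
    print(f"n={n:3d}: estimates range from {min(estimates):+.2f} to {max(estimates):+.2f}")
```

In this toy simulation, studies of 15 people can land near zero or at double the true effect by chance alone, while studies of 100 cluster far more tightly. That spread is exactly why a single small study deserves extra caution.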

4. Did the researchers control for key differences?

Diversity and gender balance aren’t necessarily virtues in a research study; it’s actually a good thing when a study population is as homogeneous as possible, because it allows the researchers to limit the number of differences that might affect the result. A good researcher tries to compare apples to apples, and to control for as many differences as possible in her analysis.

5. Was there a control group?

One of the first things to look for in the methodology is whether the sample was randomized and involved a control group; this is especially important if a study is to suggest that a certain variable might actually cause a specific outcome, rather than just be correlated with it (see the next point).

For example, were some in the sample randomly assigned a specific meditation practice while others weren’t? If the sample is large enough, randomized trials can produce solid conclusions. But sometimes a study will not have a control group because it’s ethically impossible. (Would people still divert a trolley to kill one person in order to save five lives, if their decision killed a real person, instead of just being a thought experiment? We’ll never know for sure!)

The conclusions may still provide some insight, but they need to be kept in perspective.
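For readers curious what “randomly assigned” means mechanically, here is a minimal sketch (the participant pool, the group sizes, and the meditation example are all hypothetical): random assignment simply shuffles the pool before splitting it, so pre-existing differences get spread roughly evenly across both groups, leaving the intervention as the main thing that differs between them.

```python
import random

random.seed(7)

# Hypothetical pool of 40 volunteers; the names are placeholders.
participants = [f"participant_{i}" for i in range(1, 41)]

# Random assignment: shuffle the pool, then split it down the middle.
random.shuffle(participants)
treatment = participants[:20]  # e.g., assigned a specific meditation practice
control = participants[20:]    # e.g., wait-listed, receiving no intervention

print(f"{len(treatment)} assigned to treatment, {len(control)} to control")
```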

6. Did the researchers establish causality, correlation, dependence, or some other kind of relationship?

I often hear “Correlation is not causation” shouted as a kind of battle cry to try to discredit a study. But correlation (the degree to which two or more measurements seem to change at the same time) is important, and it is one step in eventually finding causation, that is, establishing that a change in one variable directly triggers a change in another.

The important thing is to correctly identify the relationship.
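A classic way to see the difference is a confounding variable: two measurements can track each other closely without either causing the other. Here is a minimal sketch (the ice-cream-and-swimming-accidents example and all the numbers are hypothetical, chosen only to make the pattern visible):

```python
import random
import statistics

random.seed(1)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# Hypothetical confounder: hot weather drives both ice cream sales and
# swimming accidents, but neither causes the other.
temperature = [random.gauss(25, 5) for _ in range(500)]
ice_cream = [2.0 * t + random.gauss(0, 3) for t in temperature]
accidents = [0.5 * t + random.gauss(0, 2) for t in temperature]

print(f"r(ice cream, accidents) = {pearson_r(ice_cream, accidents):.2f}")  # high, yet no causal link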

7. Is the journalist, or even the scientist, overstating the result?

Language that suggests a fact is “proven” by one study, or that promotes one solution for all people, is most likely overstating the case. Sweeping generalizations of any kind often indicate a lack of humility that should be a red flag to readers. A study may very well “suggest” a certain conclusion, but it rarely, if ever, “proves” it.

This is why we use a lot of cautious, hedging language in Greater Good, like “might” or “implies.”

8. Is there any conflict of interest suggested by the funding or the researchers’ affiliations?

A recent study found that you could drink lots of sugary beverages without fear of getting fat, as long as you exercised. The funder? Coca-Cola, which eagerly promoted the results. This doesn’t mean the results are wrong. But it does suggest you should seek a second opinion.

9. Does the researcher seem to have an agenda?

Readers could understandably be skeptical of mindfulness meditation studies promoted by practicing Buddhists or experiments on the value of prayer conducted by Christians. Again, it doesn’t automatically mean that the conclusions are wrong. It does, however, raise the bar for peer review and replication. For example, it took hundreds of experiments before we could begin saying with confidence that mindfulness can indeed reduce stress.

10. Do the researchers acknowledge limitations and entertain alternative explanations?

Is the study focused on only one side of the story, or only one interpretation of the data? Has it failed to consider or refute alternative explanations? Do the researchers demonstrate awareness of which questions their methods can and cannot answer?

I summarize my personal stance as a non-scientist toward scientific findings as this: Curious, but skeptical. I take it all seriously, and I take it all with a grain of salt. I judge it against my experience, knowing that my experience creates bias. I try to cultivate humility, doubt, and patience. I don’t always succeed; when I fail, I try to admit fault and forgive myself. My own understanding is imperfect, and I remind myself that one study is only one step in understanding. Above all, I try to bear in mind that science is a process, and that conclusions always raise more questions for us to answer.

Jeremy Adam Smith is the producer and editor of Greater Good, an online magazine based at UC Berkeley that highlights groundbreaking scientific research into the roots of compassion and altruism. He is also the author or coeditor of four books, including The Daddy Shift, Are We Born Racist?, and The Compassionate Instinct. Before joining the GGSC, Jeremy was a 2010-11 John S. Knight Journalism Fellow at Stanford University. Published here courtesy of Greater Good.



