Serious Games: Developing a Research Agenda for Educational Games and Simulations

(Editor’s Note: the recent trade book Computer Games and Instruction brings together the leading edge perspectives of over a dozen scientists in the area of videogames and learning, including a very insightful analysis, excerpted below, by Harvard’s Chris Dede. Please pay attention to his thoughts on scalability below, and enjoy!)

The research overview provided by Tobias, Fletcher, and Dai (this volume) is very helpful in summarizing studies to date on various dimensions of educational games and simulations. The next challenge for the field is to move beyond isolated research in which each group of investigators uses an idiosyncratic set of definitions, conceptual frameworks, and methods. Instead, to make further progress, we as scholars should adopt common research strategies and models, not only to ensure a higher standard of rigor, but also to enable studies that complement each other in what they explore. As this book documents, we now know enough as a research community to undertake collective scholarship that subdivides the overall task of understanding the strengths and limits of games and simulations for teaching and learning. Further, through a continuously evolving research agenda we can identify for funders and other stakeholders an ongoing assessment of which types of studies are most likely to yield valuable insights, given the current state of knowledge.

Research agendas include both conceptual frameworks for classifying research and prescriptive statements about methodological rigor. (For an example of a research agenda outside of gaming and simulation – in online professional development – see Dede, Ketelhut, Whitehouse, Breit, & McCloskey, 2009.) In addition, research agendas rest on tacit assumptions that are often left unstated but are better made explicit, as discussed below. In this chapter, to inform a research agenda for educational games and simulations, I offer thoughts about fundamental assumptions and a conceptual framework that includes prescriptive heuristics about quality. In doing so, my purpose is not to propose what the research agenda should be – that is a complex task best done by a group of people with complementary knowledge and perspectives – but to start a dialogue about what such an agenda might include and how it might best be formulated.

Fundamental Assumptions

My thoughts about a research agenda for educational games and simulations are based on five fundamental assumptions. I take the trouble to articulate these assumptions because the beliefs and values that underlie a research agenda often are the most important decisions made in its formulation. Moving beyond “stealth” assumptions about quality to explicit agreements and understandings is central to developing scholarship that does not incorporate many of the problems that beset typical educational research, such as irrelevance and faulty methods (Shavelson & Towne, 2002). My five assumptions posit that any research agenda should focus on usable knowledge; collective research; what works, when, for whom; more than a straightforward comparison of the innovation to standard practice; and innovations that can be implemented at scale. By “at scale,” I mean that innovators can adapt the products of research for effective usage across a wide range of contexts, many of which do not have an ideal, full set of conditions for successful implementation.

Usable Knowledge

My first assumption is that any research agenda should focus on “usable knowledge”: insights gleaned from research that can be applied to inform practice and policy. I believe in defining research agendas in such a way that scholars not only build sophisticated theories and applied understandings, but also disseminate this knowledge in a manner that helps stakeholders access, interpret, and apply these insights. It is important to note that the process of creating and sharing usable knowledge is best accomplished by a community of researchers, practitioners, and policymakers, as opposed to scholars developing independent findings for other stakeholders to consume. As the chapters in this volume document, in the case of gaming and simulation for learning these stakeholders include K-12 teachers and administrators, higher education, business and industry, medicine and the health sciences, the military, and a vast unorganized group of people who desire better ways of informal learning. A community of researchers, practitioners, and policymakers may also better accomplish collective theory building than now occurs with fragmented creation and distribution of scholarly findings.

As Stokes describes in his book, Pasteur’s Quadrant (1997), usable knowledge begins with persistent problems in practice and policy, rather than with intellectual curiosity. (This is not to disparage purely basic research, but to indicate its limits in immediately producing usable knowledge.) In my experience, too often educational games and simulations are developed because they are “cool” or “fun”; they are solutions looking for problems (“build it and they will come”). If we are to gain the respect and collaboration of practitioners and policymakers, the majority of our research agenda must focus on how games and simulations can aid in resolving perennial educational problems and issues, giving policymakers and practitioners vital leverage in addressing troubling, widespread issues (Carlson & Wilmot, 2006). Stokes makes a compelling case that usable knowledge is a preeminently valuable form of research investment, and Lagemann (2002) makes the case that this strategy is very important for educational improvement.

Collective Research

My second assumption is that, even though individual studies of creative “outlier” approaches are important, collective research is vital for the further evolution of our field. Fully understanding a complex educational intervention involving gaming and simulation and effective across a wide range of contexts may require multiple studies along its various dimensions, each scholarly endeavor led by a group that specializes in the methods best suited to answering research questions along that dimension. Using such a distributed research strategy among collaborating investigators, funders could create portfolios in which various studies cover different portions of this sophisticated scholarly territory, with complementary research outcomes enabling full coverage and collective theory-building. Further, once the efficacy of an intervention is determined via exploratory research, a single large study with a complex treatment is of greater value for research than multiple small studies of individual simple interventions, none of which has the statistical power to determine the nuanced interaction effects described next. [For example, a researcher who wishes to detect a small difference between two independent sample means (e.g., treatment and control) at a significance level of 0.05 requires a sample size of 393 or more students in each group (Cohen, 1992).]
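
That 393-per-group figure follows from a standard statistical power calculation. As a rough illustration (a minimal sketch in Python, not from Dede’s chapter, assuming a two-sided test, 80% power, and Cohen’s “small” standardized effect size d = 0.2 under the normal approximation):

```python
# Sketch of the power calculation behind Cohen's (1992) figure of 393 per group.
# Assumptions (not stated in the excerpt): two-sided test at alpha = 0.05,
# 80% power, and a "small" standardized effect size d = 0.2.
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a standardized mean difference d."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to desired power
    return 2 * ((z_alpha + z_beta) / d) ** 2

print(math.ceil(n_per_group(0.2)))  # -> 393 students in each group
```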

As an example of steps towards collective research on a single intervention in educational gaming and simulation, in their literature review Tobias et al. (this volume) document several studies by different investigators on the Space Fortress videogame. While a common conceptual framework did not necessarily guide these studies, presumably each set of investigators built to some extent on prior research. Beyond a common conceptual framework, developing shared meanings for terms is central to truly collective scholarship. For example, “transfer” is a term that has a variety of meanings. In their research review, Tobias et al. (this volume) group studies of transfer, but which of those studies used older definitions of this term and which used emerging formulations (Mestre, 2002; Schwartz, Sears, & Bransford, 2005)? The use of wikis is helpful in teams of investigators evolving a common terminology and conceptual framework, as evidenced by the Pittsburgh Science of Learning Center’s wiki on theory development (http://www.learnlab.org/research/wiki/index.php/Main_Page).

What Works

My third assumption is that a research agenda should center on what works, when, for whom, going beyond whether or not some educational game or simulation “is effective” in some universal manner (Kozma, 1994; Means, 2006). Learning is a human activity quite diverse in its manifestations from person to person (Dede, 2008). Consider three activities in which all humans engage: sleeping, eating, and bonding. One can arrange these on a continuum from simple to complex, with sleeping towards the simple end of the continuum, eating in the middle, and bonding on the complex side of this scale. People sleep in roughly similar ways, but individuals like to eat different foods and often seek out a range of quite disparate cuisines. Bonding as a human activity is more complex still: people bond to pets, to sports teams, and to individuals of the same gender and of the other gender; fostering bonding and understanding its nature are incredibly complicated activities. Educational research strongly suggests that individual learning is as diverse and as complex as bonding, or certainly as eating. Yet theories of learning and philosophies about how to use interactive media for education tend to treat learning like sleeping, as an activity relatively invariant across people, subject areas, and educational objectives. That is, behaviorists, cognitivists, constructivists, and those who espouse “situated learning” all argue that, well implemented, their approach to instruction works for all learners (Dede, 2008).

As a consequence, many educational designers and scholars seek the single best medium for learning, as if such a universal tool could exist. For example, a universal method for developing instruction is the goal of “instructional systems design” (Dick & Carey, 1996). Similar to every other form of educational technology, some see gaming and simulation as universally optimal, a “silver bullet” for education’s woes (Salen, 2008). As Larry Cuban documents in his book, Oversold and Underused (2001), in successive generations pundits have espoused as “magical” media the radio, the television, the computer, the Internet, and now laptops, gaming, blogging, and podcasting (to name just a few). The weakness in this position is the tacit assumption, pervasive in most discussions of educational technology research, that instructional media are “one size fits all” rather than enabling an ecology of pedagogies to empower the many different ways people learn.

No learning medium is a technology like fire, where one only has to stand near it to get a benefit from it. Knowledge does not intrinsically radiate from educational games and simulations, infusing students with learning as fires infuse their onlookers with heat. However, various theoretical perspectives (e.g., cognitive science, social constructivism, instructional systems design) can provide insights on how to configure these interactive media to aid various aspects of learning, such as visual representation, student engagement, and the collection of assessment data. Determining whether and how each instructional medium can best enhance some aspect of a particular pedagogy is as sensible instrumentally as developing a range of tools (e.g., screwdriver, hammer, saw, wrench) that aid a carpenter’s ability to construct artifacts.

Further, numerous studies document that no optimal pedagogy – or instructional medium – is effective across every subject matter (Shulman, 1986; Becher, 1987; Lampert, 2001). As one example of research on subject-specific pedagogy, David Garvin (2003) documents that the Harvard Law School, Business School, and Medical School have each strongly influenced how their particular profession is taught, by espousing and modeling sophisticated “case-method” instruction. Garvin’s findings show that what each of these fields means by case-method pedagogy is quite different and that those dissimilarities are shaped by the particular content and skills professionals in that type of practice must master. Thus, the nature of the content and skills to be learned shapes the type of instruction to use, just as the developmental level of the student influences what teaching methods will work well. No educational approach, including gaming and simulation, is universally effective; the best way to invest in learning technologies is a research agenda that includes the effects of the curriculum, the context, and students’ and teachers’ characteristics in determining which aspects of educational games and simulations work, when, for whom, and under what conditions for success.

Treatment Effects

My fourth assumption is that, even though summative evaluations are important, the scholarly focus in the research agenda should expand well beyond the “is there a significant difference in outcome between this intervention and standard practice?” studies that comprise many of the publications in the Tobias et al. review (this volume). A vast literature exists documenting the “no significant difference” outcomes characteristic of many such studies (Russell, 1999). Beyond flaws in research design and analytic methods, frequent reasons for lack of a significant treatment effect include an intervention too short in duration to expect a substantial impact, or a sample so small that, for lack of statistical power, even a large effect size could not be detected. The use of measures inadequate to detect the significant differences that are occurring is another common problem; for example, paper-and-pencil item-based tests are flawed in their measurement of sophisticated thinking skills, such as scientific inquiry (Resnick & Resnick, 1992; Quellmalz & Haertel, 2004; National Research Council, 2006; Clarke & Dede, in press). Further, even when all these problems are overcome, often the population in the study is narrow, the teacher characteristics are optimal, or the context is unrepresentative; each of these generates major threats to generalizability.
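
To make the small-sample point concrete, the same normal approximation used in the earlier sketch can be inverted to show the smallest effect a study can reliably detect at a given group size (again a sketch with hypothetical sample sizes, not figures from the chapter):

```python
# Sketch: smallest standardized effect size d detectable with 80% power,
# inverting the two-sample formula d = (z_alpha + z_beta) * sqrt(2/n).
# The group sizes below are hypothetical, chosen only for illustration.
import math
from scipy.stats import norm

def min_detectable_effect(n, alpha=0.05, power=0.80):
    """Smallest standardized mean difference detectable with n students per group."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * math.sqrt(2 / n)

for n in (25, 100, 400):
    print(n, round(min_detectable_effect(n), 2))
# 25 -> 0.79 (only large effects detectable), 100 -> 0.40, 400 -> 0.20
```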

In fact, many of these studies are summative evaluations masquerading as research. There is nothing wrong with developing an intervention and conducting a summative evaluation of its overall impact under typical conditions and with representative populations for its potential use. Design heuristics from evaluations of successful innovations are often useful (Connolly, Stansfield, & Hainey, 2009). Further, evaluating the efficacy of a treatment before conducting elaborate research studies of its relative effectiveness across multiple types of contexts is important in making wise allocations of resources. However, evaluation studies are a poor place to stop in research on an innovation and should be only a small part of a research agenda, not the preponderance of work, as they typically do not contribute much to theory and do not provide nuanced understandings of what works, when, for whom, and under what conditions.

Scalability

My fifth assumption is that a research agenda for educational gaming and simulation should privilege studies of interventions that can be implemented at scale. Scale is not purely a matter of economic common sense, such as avoiding designs that require students in every classroom to have access to a game development company to build what they design, or simulations that involve high ratios of instructors to learners. Research has documented that in education, unlike other sectors of society, the scaling of successful instructional programs from a few settings to widespread use across a range of contexts is very difficult even for innovations that are economically and logistically practical (Dede, Honan, & Peters, 2005).

In fact, research findings typically show substantial influence of contextual variables (e.g., the teacher’s content preparation, students’ self-efficacy, prior academic achievement) in shaping the desirability, practicality, and effectiveness of educational interventions (Barab & Luehmann, 2003; Schneider & McDonald, 2007). Therefore, achieving scale in education requires designs that can flexibly adapt to effective use in a wide variety of contexts across a spectrum of learners and teachers. Clarke and Dede (2009) document the application of a five-dimensional framework for scaling up to the implementation of the River City multi-user virtual environment for middle school science:

  • Depth: evaluation and design-based research to understand and enhance causes of effectiveness
  • Sustainability: “robust design” to enable adapting to inhospitable contexts
  • Spread: modifying to retain effectiveness while reducing resources and expertise required
  • Shift: moving beyond “brand” to support users as co-evaluators, co-designers, and co-scalers
  • Evolution: learning from users’ adaptations to rethink the innovation’s design model

This is not to argue that research agendas should not include studies of unscalable interventions – such research can aid with design and help evolve theory – but I believe that the bulk of a research agenda, to produce usable knowledge, should focus on innovations that can scale. As the research review by Tobias et al. (this volume) documents, educational games and simulations in general offer desirable affordances for implementation at scale.

I offer these assumptions not as “truths,” but as propositions to be debated in the course of formulating a research agenda for educational gaming and simulation. Others may wish to modify assumptions, to add assumptions to this list, or even to argue that a research agenda should not make any assumptions about what constitutes quality. My point is that any attempt to develop a research agenda should make its underlying beliefs and values explicit, because these are central to determining its conceptual framework.

Chris Dede is the Wirth Professor in Learning Technologies at Harvard Graduate School of Education. The excerpt above is part of his chapter Developing a Research Agenda for Educational Games and Simulations in the book Computer Games and Instruction, published in 2011 by Information Age Publishing.

–> To Learn More and Order Book via publisher (offers discounts): click on Computer Games and Instruction

–> To Learn More and Order Book via Amazon.com: click on Computer Games and Instruction

Book Description: There is intense interest in computer games. A total of 65 percent of all American households play computer games, and sales of such games increased 22.9 percent last year. The average amount of game playing time was found to be 13.2 hours per week. The popularity and market success of games is evident from both the increased earnings from games, over $7 billion in 2005, and from the fact that over 200 academic institutions worldwide now offer game-related programs of study.

In view of the intense interest in computer games, educators and trainers in business, industry, the government, and the military would like to use computer games to improve the delivery of instruction. Computer Games and Instruction is intended for these educators and trainers. It reviews the research evidence supporting the use of computer games for instruction, and also reviews the history of games in general, in education, and in the military. In addition, chapters examine gender differences in game use, and the implications of games for use by lower socio-economic students, for students’ reading, and for contemporary theories of instruction. Finally, well-known scholars of games respond to the evidence reviewed.

TABLE OF CONTENTS
Preface.

SECTION I: INTRODUCTION TO COMPUTER GAMES. Introduction, Sigmund Tobias and J. D. Fletcher. Searching For the Fun in Learning: A Historical Perspective on the Evolution of Educational Video Games, Alex Games and Kurt D. Squire. Using Video Games as Educational Tools in Healthcare, Janis A. Cannon-Bowers, Clint Bowers, and Katelyn Procci. After the Revolution: Game-Informed Training in the U.S. Military, Ralph Ernest Chatham. Multi-User Games and Learning: A Review of the Research, Jonathon Richter and Daniel Livingstone.

SECTION II: REVIEW OF THE LITERATURE AND REACTIONS. Review of Research on Computer Games, Sigmund Tobias, J. D. Fletcher, David Yun Dai, and Alexander P. Wind. Reflections on Empirical Evidence on Games and Learning, James Paul Gee. Developing a Research Agenda for Educational Games and Simulations, Chris Dede. Comments on Research Comparing Games to Other Instructional Methods, Marc Prensky.

SECTION III: COMPUTER GAME ISSUES. Multimedia Learning and Games, Richard E. Mayer. Action Game Play as a Tool to Enhance Perception, Attention and Cognition, Ashley F. Anderson and Daphne Bavelier. Developing an Electronic Game for Vocabulary Learning: A Case Study, Michael L. Kamil and Cheryl Taitague. Instructional Support in Games, Henny Leemkuil and Ton de Jong. Implications of Constructivism for the Design and Use of Serious Games, Jamie R. Kirkley, Thomas M. Duffy, Sonny E. Kirkley, and Deborah L. H. Kremer. Implications of Game Use for Explicit Instruction, Putai Jin and Renae Low. Cost Analysis in Assessing Games for Learning, J. D. Fletcher. Using Computer Games to Teach Adult Learners Problem Solving, Joan (Yuan-Chung) Lang and Harold F. O’Neil. Gender and Gaming, Elisabeth R. Hayes. Computer Games and Opportunity to Learn: Implications for Teaching Students from Low Socioeconomic Backgrounds, David Yun Dai and Alexander P. Wind.

SECTION IV: EVALUATION AND SUMMING UP. Stealth Assessment in Computer-Based Games to Support Learning, Valerie J. Shute. Computer Games, Present and Future, Sigmund Tobias and J. D. Fletcher.
