
      The potential of video games as a pedagogical tool

          Editorial


          Abstract

          When I was seven years old, my parents bought my brother and me a Nintendo Entertainment System, which they would eventually refer to as “The Idiot Box.” There was an implicit assumption (one that persists in much of the general public today) that video games were simply a toy and that nothing of real substance could be gained from them. Considering this, and the prevalence of video games in society (59% of Americans play video games; Entertainment Software Association, 2014), numerous questions have been raised about the long-term effects of regular use. While the media focus is generally on potential negative effects (see Ferguson and Kilburn, 2009, 2010; Anderson et al., 2010; Bushman et al., 2010; Rowell, 2010 for debate on violent video games causing aggression in gamers), there is also evidence to suggest a range of potential positive effects of video games. It should be noted that when “video games” are referenced in this article, it is specifically with regard to commercial, “for fun” games and not games designed with educational purposes or cognitive training in mind. This article argues that video games can be a useful pedagogical tool for educators at all levels of academia.

          Let's consider a simple anecdotal example of how video games may encourage children to learn large amounts of information. Pokémon is a popular children's game in which the player has to fight and collect various creatures, called “Pokémon.” In the original Pokémon game, there were 150 Pokémon that could be found. Each Pokémon has a name, a “type” (like water type or fire type), a weakness and a strength (fire types are weak against water types, but strong against grass types), and a stage of its evolution (some Pokémon can turn into a new one if you use them enough).
Roughly speaking, this gives us 5 distinct pieces of information per Pokémon, and multiplying this by 150 gives 750 distinct units of information contained in the list of all the Pokémon in the first Pokémon game. Now let's consider another list of information that is considered more educationally important: the periodic table of the elements. Within the periodic table, each element is defined by a symbol, a name, an atomic number and weight, a phase at room temperature, and its classification as a metal, metalloid, or nonmetal. That comes to 6 distinct units of information per cell. Multiplying this by 118, the total number of elements, returns 708 distinct units of information in the periodic table. What makes this interesting is that there are a large number of young children (and adults, for that matter) who can recite, from memory, much of the information contained in this list of Pokémon, but it would likely be difficult to find the same number of people who could do that for the periodic table. The point is that the medium of video games appears to have the potential to facilitate significant learning.

As it turns out, video games are effectively designed to be learning machines (Gee, 2003, 2005). In fact, many games start off with a simple tutorial level that teaches the player the basic mechanics of the game. Throughout the game, the strategy and tactics needed to complete tasks become more complex, while the teaching method gradually switches from an explicit tutorial to an experience-based process. Essentially, games teach the player the skills needed to critically evaluate any situation within the game and determine the best course of action. Starcraft 2 (as well as its predecessors Starcraft and Starcraft: Brood War) is what is called a “Real Time Strategy” game. Players must obtain various resources and use them to purchase buildings and fighting units. Then, they must fight and defeat their enemy.
To play this game successfully, players must manage their time and resources more efficiently than their opponents, as each building and unit has a different cost, purpose, and build time. Additionally, much like Pokémon, units have strengths and weaknesses. A player has to constantly update their strategies throughout the game based on their interactions with the other players, requiring significant planning and critical thinking skills that are honed by players over time. In fact, while Starcraft beginners make fewer than 100 actions (or decisions) per minute, professionals can make over 400 per minute (Lewis et al., 2011; Latham et al., 2013).

Despite evident in-game learning, one concern is that video games can only be used to teach players about game-related information and not about educationally relevant material. However, recent research in the field of digital game-based learning (Prensky, 2003, 2005; Pivec, 2007; Hwang and Wu, 2012; also see Young et al., 2012) suggests that this is not the case. Squire (2005) conducted a qualitative case study on a secondary school history class in which he had students play a historical simulation game called Civilization 3 with the intention of having the students learn about history from playing the game. It should be noted that this game was not designed specifically as an educational game (see Girard et al. (2013) for a review of the effect of intentionally educational games). In this game, students take control of a civilization (like the Aztecs or the French, for example) and progress through history, developing technology, engaging in simulated warfare and diplomacy, and managing the economies of their empires. Squire (2005) reported that playing this game for educational purposes seemed to benefit students who struggled with traditional education, though students who typically performed well preferred traditional teaching methods.
Here is a brief excerpt of an interview with one of the students in this study (Squire, 2005):

“Interviewer: Who do you think invented the alphabet before you played this game?
Marvin: The English, because back then they were the classiest and smartest.
Interviewer: Now who do you think invented the alphabet?
Marvin: Probably the Egyptians with the hieroglyphics. It was the first writing to be done.”

When you play Civilization 3, you have to develop technology along a set path. For example, you are required to invent the alphabet before you invent other things, like formal mathematics. It is therefore possible for this simple video game mechanic to teach players that the alphabet had to be invented in the first place, and that it was invented before other things. In this way, developing science in the game gives students a better idea of when things occurred in history.

Another way in which video games are learning machines (Gee, 2003, 2005) is that they are highly motivating and can therefore induce higher student engagement compared to traditional teaching methods. According to Hamlen (2013), “despite assumptions that children play video games to avoid mental stimulation, children are actually motivated by the challenge and thinking required by video games.” Squire (2005) reported that Marvin voluntarily, and without provocation, spent time learning more about history from the “civelopedia,” which provides players with historically accurate information about their chosen civilizations and other aspects of the game. Improved student engagement using video games was also demonstrated by Stansbury and Munro (2013; Stansbury et al., 2014), who supplemented an undergraduate-level behavioral statistics lecture by having students play the game Dance Dance Revolution (DDR) to generate scores that would be used as dependent variables while teaching the students about factorial research designs.
They found, based on a pre-test/post-test comparison, that students who played DDR as part of their lecture showed a greater increase in content knowledge compared to students who received a traditional lecture on the same topic. Even though the inherent content of DDR was not educationally relevant, clever pedagogical use of the game had the desired effect of increasing student engagement with the lecture content.

Learning to learn

There are also potential positive effects of video games that could have a less direct influence on learning and education. Harlow (1949; also see Green and Bavelier, 2008) coined the term “learning to learn,” which refers to “the process of developing skills that facilitate learning in other contexts…” (Bisoglio et al., 2014). Critically, there is evidence that video game training can influence numerous skills and abilities that are crucial to the learning process. For example, Kühn et al. (2014) trained 48 non-video game players (M age = 24.1 years; SD = 3.8) on Super Mario 64 for 30 min a day for 2 months. Super Mario 64 is a game in which the player must explore the game world, fight monsters, solve puzzles, and collect stars to progress. Gray matter volume was measured pre- and post-training, and it was found that the volume of gray matter in the right dorsolateral prefrontal cortex was significantly increased post-training. There is evidence to suggest that an increase in cortical volume due to video game training is related to improvements in the concomitant cognitive functions of that brain region (Basak et al., 2011; Voss et al., 2012).
The dorsolateral prefrontal cortex has been frequently implicated in executive functions—including working memory (Goldman-Rakic, 1995; Bechara et al., 1998; Levy and Goldman-Rakic, 2000; Petrides, 2000; Curtis and D'Esposito, 2003), inhibitory control (Knight et al., 1999; MacDonald et al., 2000; Ridderinkhof et al., 2004), and attentional shifting (Nagahama et al., 2001; Kondo et al., 2004)—all of which are arguably critical to the learning process in an educational setting. There is also a wealth of behavioral evidence that video game training influences cognitive abilities, including executive function (Maclin et al., 2011; Mathewson et al., 2012; Strobach et al., 2012; Anguera et al., 2013), spatial attention (Green and Bavelier, 2003, 2006a,b, 2007; Feng et al., 2007; Dye et al., 2009; Hubert-Wallander et al., 2011), selective attention (Wu et al., 2012; Belchior et al., 2013; Wu and Spence, 2013), distractor processing (Mishra et al., 2011; Krishnan et al., 2013), and attentional capture (West et al., 2008; Chisholm et al., 2010). There is even evidence to suggest that video gamers generate more robust internal representations of visual information (Green and Bavelier, 2004; Karle and James, 2011; Sungur and Boduroglu, 2012). However, a caveat is that, due to inconsistent methodologies across research in the field, there is still debate over whether there is a causal relationship between playing video games and improved cognition (Boot et al., 2008, 2011, 2013; Boot and Simons, 2012; Schubert and Strobach, 2012; Kristjánsson, 2013; Latham et al., 2013; Bisoglio et al., 2014). That being said, the evidence overall appears to be trending toward a causal relationship.

In addition to the cognitive skills necessary for learning, appropriate social skills are necessary for success in education because education in general is a highly social experience.
It should be noted that although cognitive and social skills are separated in this article for the sake of clarity, they are deeply interrelated abilities in practice. Being a student (or a teacher, for that matter) requires constant interaction and communication with other people. This interaction can take the form of a lecture, group assignments, study groups, or even general emotional and social support. An often overlooked aspect of video games is that they, too, can be a highly social experience. Despite this, a common misconception about video gamers is that they shun and avoid social engagement (Jenkins, 2005). Consider that many modern video games are multiplayer online games. For example, World of Warcraft is what is known as a massively multiplayer online role playing game (MMORPG). In this game, to defeat enemies and to progress through the game, players must interact and coordinate with each other. In some cases, the execution of a sophisticated strategy involving up to 40 players is required to win a battle. It has been suggested that MMORPGs can provide a medium through which one can learn and practice social skills (Ducheneaut and Moore, 2004, 2005; Yee, 2006; Zhong, 2011). Jang and Ryu (2011) analyzed survey data from 300 Korean MMORPG players (M age = 25.4 years; SD = 5.9) about their gaming habits and their leadership experiences (both online and offline). The results revealed a positive correlation between online and offline leadership experiences (each measured using a different leadership questionnaire). While correlations only allow for a limited interpretation, there is converging evidence to suggest that video game play may be a causal, or at least influencing, factor.
Yee (2003), based on a survey analysis of 2804 MMORPG players (age not reported), reported that almost half of all subjects felt that they had improved their leadership skills, defined by four subcategories, at least a little as a result of their gaming experiences: Mediation (55.2%), Motivation (48.4%), Persuasion (43.8%), and Leadership (50.3%). Interestingly, being in a leadership or management role in real life did not seem to affect the rate at which subjects reported improved leadership skills, suggesting that playing MMORPGs could have social benefits for a wide range of people. Additionally, competitive online team games provide an excellent medium for enhancing social skills, particularly teamwork and collaboration. A game mentioned earlier, Starcraft 2, has a highly developed competitive culture and multiplayer community (as do games like League of Legends and DOTA 2). Poling (2013) conducted a study in which he taught an entire course using Starcraft 2 as the primary mode of instruction. According to Poling (2013), “the StarCraft 2 course encouraged learners to create new knowledge by synthesizing what they know and learn in the game world with how they can apply those skills and concepts to their real-life professional world. It used StarCraft 2 as a digital sandbox where they had to learn to work well with others in order to succeed.” Based on interviews with three participants, it was expressed that they felt they had enhanced their knowledge regarding “collaboration, teamwork, and leadership” as a result of the course. Poling (2013) reported that “one of the most important lessons they learned from both the in-person and online collaboration processes was that collaboration is essentially about managing human relationships, maintaining effective communication, and learning how to work with others despite their differences.” Video games have also been shown to influence pro-social behaviors.
Greitemeyer and Osswald (2010) conducted a study in which they had subjects (M age = 21.81 years; SD not reported; university students) play one of four games. In the interest of brevity, I will focus on only two. One of those games was Lemmings, chosen because the researchers deemed it a pro-social game. In this game, lemmings drop from an entrance and walk forward unless there is something in their path, in which case they turn around. If the player does nothing, they will walk off the edge of cliffs and die. The purpose of the game is to guide the lemmings safely to the exits using a limited number of skills. It was deemed pro-social because the focus of the game is helping the lemmings survive. The game Tetris was used as a socially neutral game. In this game, players must arrange geometric shapes into horizontal lines to earn points. Players were asked to play their respective game for 8 min. After playing, they were presented with various social situations, and their responses were recorded. The subjects were unaware that these situations were part of the study. In one study, the experimenter knocked some pencils onto the floor while talking to the subject. In another (which used a different pro-social game), a male confederate entered the room and verbally and physically harassed a female experimenter as if he were an ex-boyfriend who couldn't accept that their relationship was over. In both cases, the subjects who played the pro-social game were more likely to help the experimenter than the subjects who played the neutral game. The point is that video games can have a positive influence on social behaviors (Durkin and Barber, 2002; Lenhart et al., 2008; Dalisay et al., in press), at least when played in moderation (Przybylski, 2014).
Pro-social video games like Lemmings can foster altruism, which could encourage students to help each other when one of them is struggling in class, and multiplayer games like World of Warcraft or Starcraft 2 can foster the social skills needed to coordinate and cooperate with other people, which could enhance students' ability to engage in group assignments.

Video games have significant potential as a pedagogical tool. In order to begin to explore this potential, common preconceptions about what video games are and what value they hold must be re-evaluated. In general, there are two rules of thumb that one should keep in mind when considering how to use video games in an educational context. First and foremost, it matters which games are played. Puzzle games like Portal may improve problem solving skills, but do nothing to improve social skills or attentional processing. Multiplayer team games like League of Legends may improve the ability to communicate and cooperate with groups, but may not affect interest in learning more about school work. Historical simulation games like the Civilization series might enhance a student's motivation to learn more about history, but have no influence over executive functions. It is therefore important to pick and choose the games you may want to use carefully. Second, video games cannot replace a teacher or a curriculum, but judicious use of the appropriate games can complement an educational program.

Conflict of interest statement

The author is a scientific consultant for a company that specializes in cognitive training video games.


          Most cited references (54)


          Cellular basis of working memory


            Video game training enhances cognitive control in older adults

Cognitive control is defined by a set of neural processes that allow us to interact with our complex environment in a goal-directed manner [1]. Humans regularly challenge these control processes when attempting to simultaneously accomplish multiple goals (i.e., multitasking), generating interference as the result of fundamental information processing limitations [2]. It is clear that multitasking behavior has become ubiquitous in today's technologically-dense world [3], and substantial evidence has accrued regarding multitasking difficulties and cognitive control deficits in our aging population [4]. Here we show that multitasking performance, as assessed with a custom-designed 3-D video game (NeuroRacer), exhibits a linear age-related decline from 20–79 years of age. By playing an adaptive version of NeuroRacer in multitasking training mode, older adults (60–85 y.o.) reduced multitasking costs compared to both an active control group and a no-contact control group, attaining levels beyond those of untrained 20-year-olds, with gains persisting for six months. Furthermore, age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by multitasking training (i.e., enhanced midline frontal theta power and frontal-posterior theta coherence). Critically, this training resulted in performance benefits that extended to untrained cognitive control abilities (i.e., enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting the training-induced boost in sustained attention and preservation of multitasking improvement six months later.
These findings highlight the robust plasticity of the prefrontal cognitive control system in the aging brain, and provide the first evidence of how a custom-designed video game can be used to assess cognitive abilities across the lifespan, evaluate underlying neural mechanisms, and serve as a powerful tool for cognitive enhancement.

In a first experiment, we evaluated multitasking performance across the adult lifespan. 174 participants spanning six decades of life (ages 20–79; ~30 individuals per decade) played a diagnostic version of NeuroRacer to measure their perceptual discrimination ability (‘sign task’) with and without a concurrent visuomotor tracking task (‘driving task’; see Supplementary Materials for details of NeuroRacer). Performance was evaluated using two distinct game conditions: 1) ‘Sign Only’: respond as rapidly as possible to the appearance of a sign only when a green circle was present, and 2) ‘Sign & Drive’: simultaneously perform the sign task while maintaining a car in the center of a winding road using a joystick (i.e., ‘drive’; see Figure 1a). Perceptual discrimination performance was evaluated using the signal detection metric of discriminability (d'). A ‘cost’ index was used to assess multitasking performance by calculating the percentage change in d' from ‘Sign Only’ to ‘Sign & Drive’, such that greater cost (i.e., a more negative % cost) indicates increased interference when simultaneously engaging in the two tasks (see Methods Summary). Prior to the assessment of multitasking costs, an adaptive staircase algorithm was used to determine the difficulty levels of the game at which each participant performed the perceptual discrimination and visuomotor tracking tasks in isolation at ~80% accuracy. These levels were then used to set the parameters of the component tasks in the multitasking condition, so that each individual played the game at a customized challenge level.
This assured that comparisons would inform differences in the ability to multitask, and not merely reflect disparities in component skills (see Methods, Supplementary Figures 1 & 2, and Supplementary Materials for more details). Multitasking performance diminished significantly across the adult lifespan in a linear fashion (i.e., increasing cost; see Figure 2a and Supplementary Table 1), with the only significant difference in cost between adjacent decades being the increase from the 20s (−26.7% cost) to the 30s (−38.6% cost). This deterioration in multitasking performance is consistent with the pattern of performance decline across the lifespan observed for fluid cognitive abilities, such as reasoning [5] and working memory [6]. Thus, using NeuroRacer as a performance assessment tool, we replicated previously evidenced age-related multitasking deficits [7,8], and revealed that multitasking performance declines linearly as we advance in age beyond our twenties.

In a second experiment, we explored whether older adults who trained by playing NeuroRacer in multitasking mode would exhibit improvements in their multitasking performance on the game [9,10] (i.e., diminished NeuroRacer costs). Critically, we also assessed whether this training transferred to enhancements in their cognitive control abilities [11] beyond those attained by participants who trained on the component tasks in isolation. In designing the multitasking training version of NeuroRacer, steps were taken to maintain both equivalent difficulty and engagement in the component tasks to assure a prolonged multitasking challenge throughout the training period: difficulty was maintained using an adaptive staircase algorithm to independently adjust the difficulty of the ‘sign’ and ‘driving’ tasks following each 3-min run based on task performance, and balanced task engagement was motivated by rewards given only when both component tasks improved beyond 80% on a given run.
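The two performance measures used throughout, d' and the multitasking cost index, follow standard definitions: d' = z(hit rate) − z(false-alarm rate), and cost is the percentage change in d' from the single to the dual task. A minimal Python sketch of both, with illustrative function names (the paper's exact corrections for hit or false-alarm rates of 0 or 1 are not specified, so none are applied here):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection discriminability: z(hits) - z(false alarms).
    Standard definition; rates of exactly 0 or 1 are undefined and
    would need a correction before calling this."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

def multitasking_cost(d_dual: float, d_single: float) -> float:
    """Percentage change in d' from 'Sign Only' (single) to
    'Sign & Drive' (dual); more negative = greater interference."""
    return (d_dual - d_single) / d_single * 100

# Example: a participant whose d' drops from 3.0 (Sign Only)
# to 2.0 (Sign & Drive) shows a multitasking cost of about -33%.
```

This matches the cost formula restated in the Methods Summary; published studies typically clamp extreme hit/false-alarm rates before computing z, which is omitted here for brevity.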
We assessed the impact of training with NeuroRacer in a longitudinal experiment that involved randomly assigning 46 naïve older adults (60–85 yrs; M = 67.1 ± 4.2 yrs) to one of three groups: Multitasking Training (MTT; n=16), Singletask Training (STT; n=15) as an active control, or No-Contact Control (NCC; n=15). Training involved playing NeuroRacer on a laptop at home for 1 hour a day, 3 times a week, for 4 weeks (12 total hours of training), with all groups returning for a 1-month post-training and a 6-month follow-up assessment (Figure 1b). The MTT group played the ‘Sign & Drive’ condition exclusively during the training period, while the STT group divided their time between a ‘Sign Only’ and a ‘Drive Only’ condition, and so was matched for all factors except the presence of interference. In addition to a battery of cognitive control tests used to assess the breadth of training benefits (see Supplementary Table 2), the neural basis of training effects was evaluated using electroencephalography (EEG) recorded at pre- and post-training visits while participants performed a neural assessment version of NeuroRacer. Analysis showed that only the MTT group's multitasking performance index significantly improved from pre- (−64.2% cost) to post-training (−16.2% cost; Figure 2b), thus supporting the role of interference during game play as a key mechanistic feature of the training approach. In addition, although cost reduction was observed only in the MTT group, equivalent improvement in component task skills was exhibited by both STT and MTT (see Supplementary Figures 4 and 5). This indicates that enhanced multitasking ability was not solely the result of enhanced component skills, but a function of learning to resolve interference generated by the two tasks when performed concurrently.
Moreover, the d' cost improvement following training was not the result of a task tradeoff, as driving performance costs also diminished for the MTT group from pre- to post-training (see Supplementary Materials). Notably, in the MTT group the multitasking performance gains remained stable 6 months after training without booster sessions (at 6 months: −21.9% cost). Interestingly, the MTT group's post-training cost improved significantly beyond the cost level attained by a group of 20-year-olds who played a single session of NeuroRacer (−36.7% cost; Experiment 3), with effect sizes of .50–1.0 (using Cohen's d, see Methods) for both cognitive control performance and neural measures versus either control group. The sustained multitasking cost reduction over time and evidence of generalizability to untrained cognitive control abilities provide optimism for the use of an adaptive, interference-rich video game approach as a therapeutic tool for the diverse populations that suffer from cognitive control deficits (e.g., ADHD, depression, dementia). These findings stress the importance of a targeted training approach, as reinforced by a recent study that observed a distinct lack of transfer following non-specific online cognitive exercises [30]. In conclusion, we provide evidence of how a custom-designed video game targeting impaired neural processes in a population can be used to diagnose deficits, assess underlying neural mechanisms, and enhance cognitive abilities.

Methods Summary

All participants had normal or corrected vision, no history of neurological, psychiatric, or vascular disease, and were not taking any psychotropic or hypertension medications. In addition, they were considered ‘non-gamers’ given that they played less than 2 hours of any type of video game per month. For NeuroRacer, each participant used their left thumb for tracking and their right index finger for responding to signs on a Logitech (Logitech, USA) gamepad controller.
Participants engaged in three 3-minute runs of each condition in a randomized fashion. Signs were randomly presented in the same position over the fixation cross for 400 msec every 2, 2.5, or 3 seconds, with the speed of driving dissociated from sign presentation parameters. The multitasking cost index was calculated as follows: [(‘Sign & Drive’ performance − ‘Sign Only’ performance) / ‘Sign Only’ performance] * 100. EEG data for 1 MTT post-training participant and 1 STT pre-training participant were corrupted during acquisition. 2 MTT participants, 2 STT participants, and 4 NCC participants were unable to return to complete their 6-month follow-up assessments. Critically, no between-group differences were observed for neuropsychological assessments (p = .52) or pre-training data involving: i) NeuroRacer thresholding for both Road (p = .57) and Sign (p = .43), ii) NeuroRacer component task performance (p > .10 for each task), iii) NeuroRacer multitasking costs (p = .63), iv) any of the cognitive tests (all ANOVAs at pre-training: p ≥ .26), v) ERSP power for either condition (p ≥ .12), and vi) coherence for either condition (p ≥ .54).

Methods

Participants

All participants were recruited through online and newspaper advertisements. For Experiment 1, 185 (90 male) healthy, right-handed individuals consented to participate according to procedures approved by the University of California, San Francisco. For Experiments 2 & 3, 60 (33 male) older adult individuals and 18 (9 male) young adult individuals participated without having been a part of Experiment 1 (see Supplementary Table 3 for demographic descriptions and Supplementary Figure 9 for Experiment 2 participant enrollment).
Participants who were unable to perform the tasks, as indicated by tracking performance below 15% (6 individuals from Experiment 1, 8 individuals from Experiment 2) or a false positive rate greater than 70% (5 individuals from Experiment 1, 6 individuals from Experiment 2) during any one visit or across more than 4 individual training sessions, were excluded.

Thresholding

Prior to engaging in NeuroRacer, participants underwent an adaptive thresholding procedure for discrimination (nine 120-sec runs) and tracking ability (twelve 60-sec runs) to determine a ‘sign’ and ‘drive’ level at which each participant would perform at ~80% accuracy (see Supplementary Figures 1 & 2). Having individuals engage each condition in their own ‘space’ following thresholding procedures facilitated a fairer comparison across ages and abilities. This procedure is frequently omitted in other studies, which leads to difficulty interpreting performance differences (especially in multitasking) as being the result of differences in interference processing rather than differences in component task skills. For the perceptual discrimination thresholding, each participant's performance for a given run was determined by calculating a proportion correct score involving: i) correct responses to targets, ii) correct avoidance of non-targets, iii) late responses to targets, and iv) responses to non-targets. At the end of each run, if this score was greater than 82.5%, the subsequent run would be played at a higher level, which had a correspondingly shorter time window for responses to targets. More specifically, the adaptive algorithm would make proportional level changes depending upon participants' performance relative to this ~80% median, such that each 1.75% increment away from the median corresponded with a change in level (see Supplementary Figure 1a). Thus, a 90% performance would lead to a 40-msec reduction in the time window, while a 55% (or less) performance would lead to a 100-msec lengthening of said window.
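The proportional level-change rule for the sign task can be sketched as follows. The ~7 ms-per-level step is inferred from the two worked examples in the text (90% gives 10/1.75 ≈ 5.7 levels ≈ 40 ms; 55% gives 25/1.75 ≈ 14.3 levels ≈ 100 ms) and, like the names below, is an assumption rather than a published constant:

```python
MEDIAN_PCT = 80.0      # target performance median (~80%)
PCT_PER_LEVEL = 1.75   # each 1.75% away from the median = one level
MS_PER_LEVEL = 7.0     # inferred: 1.75% per level * 4 ms per % = 7 ms

def next_response_window(window_ms: float, score_pct: float) -> float:
    """Proportional staircase for the sign-task response window:
    performance above the ~80% median shortens the window (harder),
    performance below lengthens it (easier). Scores are floored at
    55%, matching the '55% (or less) -> 100 ms lengthening' example."""
    clamped = max(score_pct, 55.0)
    levels = (clamped - MEDIAN_PCT) / PCT_PER_LEVEL
    return window_ms - levels * MS_PER_LEVEL

# A 90% run shortens a 400 ms window by ~40 ms; a 55% (or worse)
# run lengthens it by ~100 ms.
```

The same proportional rule would apply to road-speed levels with a .58% increment per level, per the following paragraph of the methods.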
Thresholding parameters for road levels followed a similar pattern, with each 0.58% increment away from the same median corresponding with a change in level (see Supplementary Figure 1b). These parameters were chosen following extensive pilot testing to: (1) minimize the number of trial runs until convergence was reached and (2) minimize convergence instability, while (3) maximizing sampling resolution of user performance. The first 3 driving thresholding blocks were considered practice to familiarize participants with the driving portion of the task and were not analyzed. A regression over the 9 thresholding runs in each case was computed to select the ideal time window and road speed to promote ~80% accuracy on each distraction-free task throughout the experiment (see Supplementary Figure 2). All participants began the thresholding procedures at the same road (level 20) and sign (level 29) levels.

Conditions

Following the driving and sign thresholding procedures, participants performed 5 different three-minute ‘missions’, with each mission performed three times in a pseudo-randomized fashion. In addition to the ‘Sign Only’, ‘Drive Only’, and ‘Sign & Drive’ conditions, participants also performed a ‘Sign With Road’ condition, in which the car was placed on ‘auto-pilot’ for the duration of the run and participants responded only to the signs, and a ‘Drive With Signs’ condition, in which participants were told to ignore the signs and continue to drive as accurately as possible. Data from these two conditions are not presented here. Feedback was given at the end of each run as the proportion correct of all signs presented for the perceptual discrimination task (although we used the signal detection metric of discriminability (d′)31 to calculate our ‘Cost’ index throughout the study), and as the percentage of time spent on the road (see Supplementary Figure 10).
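The regression step admits more than one implementation; a minimal sketch, assuming per-run accuracy is fit as a linear function of level and solved for the level predicted to yield ~80% (the fitting choice is an assumption, not taken from the original code):

```python
import numpy as np

def level_for_target(levels, accuracies, target=0.80):
    """Fit accuracy as a linear function of level across the thresholding
    runs, then solve for the level predicted to give the target accuracy
    (~80% here). One plausible reading of the regression described in the
    text; the original fitting procedure is not specified in detail."""
    slope, intercept = np.polyfit(levels, accuracies, 1)
    return (target - intercept) / slope
```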
Prior to the start of each subsequent run, participants were informed which condition would be engaged next and made aware of how many experimental runs remained. Including thresholding, the testing session encompassed 75 min of gameplay.

NeuroRacer training and testing protocol

For Experiment 1, participants were seated in a quiet room in front of an Apple MacBook Pro 5.3 laptop computer at an approximate distance of 65 cm from the 15" screen. For Experiments 2 and 3, participants were seated in a dark room with the screen ~100 cm from the participant. All training participants trained at their homes using an Apple MacBook Pro 5.3 laptop computer while sitting ~60 cm from the screen (see Supplementary Figure 11a). For Experiment 1, each perceptual discrimination-based experimental run (180 sec) contained 36 relevant targets (green circles) and 36 lures (green, blue, and red pentagons and squares). For Experiments 2 & 3, the sign ratio was changed to 24/48. Prior to training, each participant was given a tutorial demonstrating how to turn on the laptop, properly set up the joystick, and navigate to the experiment; was shown what the first day of training would involve and how to interpret the feedback provided; and was encouraged to find a quiet environment in their home for their training sessions. If requested by the participant, a lab member would visit the participant at their home to help set up the computer and instruct training. In addition, to encourage/assess compliance and hold participants to a reasonable schedule, participants were asked to plan their training days & times with the experimenter for the entire training period and enter this information into a shared calendar. Each participant (regardless of group) was informed that their training protocol was designed to train cognitive control faculties, using the same dialogue to avoid expectancy differences between groups.
There was no contact between participants of different groups, and they were encouraged to avoid discussing their training protocol with others to avoid potentially biasing participants in the other groups. Each day of training, participants were shown a visualization of a map that represented their ‘training journey’, to provide a sense of accomplishment following each training session (Supplementary Figure 11b). They were also shown a brief video that reminded them how to hold the controller, which buttons to use, their previous level(s) reached, and what the target would be that day for the perceptual discrimination condition. In addition, the laptop’s built-in video camera was activated (indicated by a green light) for the duration of each training run, providing i) visual assessment of task engagement, ii) motivation for participants to be compliant with the training task instructions, and iii) information about any run where performance was dramatically poorer than others. Participants were discouraged from playing 2 days in a row and encouraged to play at the same time of day. MTT participants were reminded that an optimal training experience depended upon doing well on both the sign and drive tasks without sacrificing performance on one for the other. While the STT group were provided a ‘Driving’ or ‘Sign’ score following each training run, the MTT group were also provided an ‘Overall’ score following each run as a composite of performance on both tasks (see Supplementary Figures 5 and 11). Following the completion of every 4th run, participants were rewarded with a ‘fun fact’ screen regarding basic human physiology (http://faculty.washington.edu/chudler/ffacts.html) before beginning their subsequent training run.
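The text does not specify how the MTT group's ‘Overall’ score combined the two sub-scores; a minimal sketch, assuming an unweighted mean purely for illustration:

```python
def overall_score(sign_score, drive_score):
    # Assumption: the 'Overall' feedback score is modeled here as the
    # unweighted mean of the sign and drive sub-scores. The actual
    # composite used in NeuroRacer is not specified in the text.
    return (sign_score + drive_score) / 2.0
```

An unweighted mean is the simplest composite that penalizes sacrificing one task for the other, which is the behavior the feedback was meant to discourage.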
To assess whether training was a ‘fun’ experience, participants in each training group rated the training experience on their final visit to the laboratory on a scale of 1 (minimally) to 10 (maximally) (MTT: 6.5 ± 2.2; STT: 6.9 ± 2.4; t = .65, p = .52). Critically, training groups did not differ on their initial thresholding values for either Road (F(2,45) = .58, p = .57) or Sign (F(2,45) = .87, p = .43). Each laptop was configured to transmit NeuroRacer performance data wirelessly to our secure lab server using Dropbox® as each run was completed. This facilitated monitoring for compliance and data integrity in near real time: participants were contacted if i) they failed to complete all 20 training runs on a scheduled training day, ii) ‘Sign Only’ and ‘Drive Only’ performance suggested that a problem had occurred within a given training session, or iii) a designated training day was missed. Individuals without wireless internet in their home were instructed to visit an open wireless internet location (e.g., coffee shop, public library) at least once a week to transfer data; if this was not an option, researchers arranged weekly home visits to acquire the data. All participants were contacted via email and/or phone on a weekly basis to encourage and discuss their training; similarly, in the event of any questions regarding the training procedures, participants could contact the research staff via phone and email. Pre- and Post-training evaluations involving cognitive testing and NeuroRacer EEG took place across 3 different days (appointment and individual test order were counterbalanced), with all sessions completed within approximately a week (total number of days to complete all Pre-training testing: 6.5 ± 2.2; Post-training testing: 6.1 ± 1.5). Participants returned for their first Post-training cognitive assessments 2.0 ± 2.2 days following their final training session.
While scheduled for 6 months after their final testing session, the 6-month follow-up visits actually occurred on average 7.6 ± 1.1 months afterwards owing to difficulties in maintaining (and rescheduling) these distant appointments. Critically, no group differences were present on any of these time-of-testing measures (p > .18 for each comparison).

Cognitive Battery

The cognitive battery (see Supplementary Table 2) consisted of tasks spanning different cognitive control domains: sustained attention (TOVA; see Supplementary Figure 12a), working memory (delayed recognition; see Supplementary Figure 12b), visual working memory capacity (see Supplementary Figure 13), dual-tasking (see Supplementary Figure 14), useful field of view (UFOV; see Supplementary Figure 15), and two control tasks of basic motor response and speed of processing (stimulus detection task, digit symbol substitution task; see Supplementary Figure 16). Using the analysis metrics regularly reported for each measure, we performed a mixed-model ANOVA of Group (3: MTT, STT, NCC) X Session (2: Pre, Post) X Cognitive test (11; see Supplementary Table 2), and observed a significant 3-way interaction (F(20, 400) = 2.12, p = .004), indicating that training had selective benefits across group and test. To interrogate this interaction, each cognitive test was analyzed separately with Session X Group ANOVAs to isolate those measures that changed significantly following training. We also present the p-value associated with the ANCOVAs for each measure in Supplementary Table 2 (dependent measure = Post-training performance, covariate = Pre-training performance), which showed a similar pattern of effects to most of the 2-way ANOVAs.
The ANCOVA approach is considered more suitable when the primary outcome of interest is post-test performance that is not conditional on, or predictable from, pre-test performance, as opposed to characterizing gains achieved relative to Pre-training performance (e.g., group X session interaction(s))32; however, both are appropriate statistical tools that have been used to assess cognitive training outcomes27,33 (see Supplementary Figure 17 for an example).

EEG Recordings and Eye Movements

Neural data were recorded using an Active Two head cap (Cortech-Solutions) with a BioSemi ActiveTwo 64-channel EEG acquisition system in conjunction with BioSemi ActiView software (Cortech-Solutions). Signals were amplified and digitized at 1024 Hz with 16-bit resolution. Anti-aliasing filters were used and data were band-pass filtered between 0.01–100 Hz during data acquisition. For each EEG recording session, the NeuroRacer code was modified to flash a 1 × 1 inch white box for 10 msec at one of the corners of the stimulus presentation monitor upon the appearance of a sign. A photodiode (http://www.gtec.at/Products/Hardware-and-Accessories/g.TRIGbox-Specs-Features) captured this change in luminance to facilitate precise time-locking of the neural activity associated with each sign event. During the experiment, these corners were covered with tape to prevent participants from being distracted by the flashing light. To ensure that any training effects were not due to changes in eye movement, electrooculographic data were analyzed as described by Berry and colleagues34. Using this approach, vertical (VEOG = FP2 − IEOG electrodes) and horizontal (HEOG = REOG − LEOG electrodes) difference waves were calculated from the raw data and baseline-corrected to the mean prestimulus activity. The magnitude of eye movement was computed as (VEOG² + HEOG²)^(1/2).
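The magnitude computation above reduces to a few lines; a sketch assuming the VEOG and HEOG difference waves have already been derived from the raw electrodes:

```python
import numpy as np

def baseline_correct(wave, prestim):
    """Subtract the mean prestimulus activity (prestim is a slice of
    sample indices covering the prestimulus window)."""
    return wave - wave[prestim].mean()

def eye_movement_magnitude(veog, heog):
    """Magnitude of eye movement from baseline-corrected vertical and
    horizontal EOG difference waves: (VEOG^2 + HEOG^2)^(1/2)."""
    return np.sqrt(veog**2 + heog**2)
```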
The variance in the magnitude of eye movement was computed across trials, and the mean variance was examined from −200 to 1000 msec post-stimulus onset. The variance was compared i) between sessions for each group’s performance on the ‘Sign & Drive’ and ‘Sign Only’ conditions, ii) between groups at each session for each condition, and iii) between younger and older adults on each condition. We used two-tailed t-tests, uncorrected for multiple comparisons, at every msec time point, to be as conservative as possible. There was no session difference for any group on the ‘Sign Only’ condition (p > .05 for each comparison); similarly, there were no differences for the MTT or NCC groups on the ‘Sign & Drive’ condition (p > .30 for each comparison), with the STT group showing more variance following training (p = .01). With respect to Experiment 3, there were also no age differences on either condition (p > .45 for each comparison). This indicates that the training effects observed were not due to learned eye movements, and that the age effects observed were not a function of age-related differences in eye movements.

EEG analysis

Preprocessing was conducted using Analyzer software (Brain Vision, LLC) and then exported to EEGLAB35 for event-related spectral perturbation (ERSP) analyses. ERSP is a powerful approach for identifying stable features in a spontaneous EEG spectrum that are induced by experimental events, and it has been used to successfully isolate markers of cognitive control36,37. We selected this approach because we felt that a measure in the frequency domain would be more stable than other metrics given the dynamic environment of NeuroRacer. Blinks and eye-movement artifacts were removed through an independent component analysis (ICA), as were epochs with excessive peak-to-peak deflections (±100 µV).
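The across-trial variance and uncorrected pointwise comparisons described above might be sketched as follows; the array shapes and function names are assumptions:

```python
import numpy as np
from scipy import stats

def trial_variance(magnitudes):
    """magnitudes: (n_trials, n_samples) eye-movement magnitude traces.
    Returns the per-timepoint variance across trials."""
    return magnitudes.var(axis=0, ddof=1)

def pointwise_ttest(group_a, group_b):
    """group_a/group_b: (n_subjects, n_samples) variance time series.
    Two-tailed t-tests at every time point, uncorrected for multiple
    comparisons (as in the text)."""
    return stats.ttest_ind(group_a, group_b, axis=0).pvalue
```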
Given the use of d′, which takes into account performance on every trial, we collapsed across all trial types for all subsequent analyses. Epochs from −1000 to +1000 msec were created for ERSP total power analysis (evoked power + induced power), with theta band activity analyzed by resolving 4–100 Hz activity using a complex Morlet wavelet in EEGLAB and referenced to a −900 to −700 msec pre-stimulus baseline (thus, relative power in dB). Assessment of the ‘Sign & Drive’ ERSP data in 40 msec time bins, collapsing across all older adult participants and experimental sessions, revealed the onset of peak midline frontal activity to be between 360–400 msec post-stimulus, and so all neural findings were evaluated within this time window for the older adults (see Supplementary Figure 7 for these topographies). For younger adults, peak theta activity occurred between 280–320 msec, and so for across-group comparisons, data from this time window were used for younger adults. The cognitive aging literature has demonstrated delayed neural processing in older adults using EEG38,39. For example, Zanto and colleagues38 demonstrated that older adults show patterns of selective processing similar to those of younger adults, but with a time shift toward delayed processing with aging. For the data generated in this study, presented topographically in Supplementary Figure 7, it was clear that the peak of the midline frontal theta was delayed in older versus younger adults. To fairly assess whether there was a difference in power, it was necessary to select different comparison windows in an unbiased, data-driven manner for each group. Coherence data for each channel were first filtered in multiple pass bands using a two-way, zero-phase-lag, finite impulse response filter (eegfilt.m function in the EEGLAB toolbox) to prevent phase distortion.
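The study used EEGLAB's wavelet routines; as a rough stand-in, a generic complex Morlet decomposition with dB baseline correction can be sketched (the 7-cycle wavelet and its normalization are assumptions, not the study's exact parameters):

```python
import numpy as np

def morlet(freq, srate, n_cycles=7):
    """Complex Morlet wavelet at a given frequency (Hz).
    n_cycles controls the time/frequency trade-off (assumed value)."""
    sd = n_cycles / (2 * np.pi * freq)            # Gaussian SD in seconds
    t = np.arange(-4 * sd, 4 * sd, 1 / srate)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sd**2))

def relative_power_db(signal, freq, srate, baseline_idx):
    """Single-trial time-frequency power, expressed in dB relative to a
    baseline window (the study used −900 to −700 msec prestimulus)."""
    analytic = np.convolve(signal, morlet(freq, srate), mode='same')
    power = np.abs(analytic)**2
    return 10 * np.log10(power / power[baseline_idx].mean())
```

Because the output is a ratio to the baseline power, the wavelet's amplitude normalization cancels out of the dB measure.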
We then applied a Hilbert transform to each of these time series (hilbert.m function), yielding results equivalent to sliding-window FFT and wavelet approaches40 and giving a complex time series h_x[n] = a_x[n] exp(iφ_x[n]), where a_x[n] and φ_x[n] are the instantaneous amplitudes and phases, respectively. The phase time series φ_x assumes values within (−π, π] radians with a cosine phase, such that π radians corresponds to the trough and 0 radians to the peak. To compute the PLV for theta phase, for example, we extract instantaneous theta phases φ_θ[n] by taking the angle of h_θ[n]. Event-related phase time series are then extracted and, for each time point, the mean vector length R_θ[n] is calculated across trials (circ_r.m function in the CircStats toolbox)41. This mean vector length represents the degree of PLV, where R = 1 reflects perfect phase-locking across trials and a value of 0 reflects perfectly randomly distributed phases. These PLVs were controlled for individual state differences at each session by baseline-correcting each individual’s PLVs using their −200 to 0 msec period (thus, a relative PLV score was calculated for each subject).

Statistical analyses

Mixed-model ANOVAs with i) decade of life (Experiment 1), ii) training group (Experiment 2), or iii) age (Experiment 3) as the between-group factor were used for all behavioral and neural comparisons, with planned follow-up t-tests and the Greenhouse–Geisser correction utilized where appropriate. One-tailed t-tests were utilized to interrogate group differences for all transfer measures, given our a priori hypothesis about the direction of results following multitask training. All effect size values were calculated using Cohen’s d42 and corrected for small-sample bias using the Hedges and Olkin approach43. The neural–behavioral correlations presented included only those MTT participants who demonstrated increased midline frontal theta power following training (14/15 participants).
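The PLV computation maps directly onto a few lines of numerical code; a sketch in which scipy's hilbert stands in for hilbert.m and the mean vector length is computed directly rather than via circ_r.m:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials):
    """trials: (n_trials, n_samples) array of band-pass-filtered data.
    Returns the per-timepoint mean vector length R across trials:
    R = 1 for perfect phase-locking, ~0 for random phases.
    (Baseline correction against the -200 to 0 msec window, as in the
    text, would be applied to the returned trace afterwards.)"""
    phases = np.angle(hilbert(trials, axis=1))      # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```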
For statistical analyses, we created 1 frontal and 3 posterior composite electrodes of interest (EOIs) from the averages of the following electrodes: AFz, Fz, FPz, AF3, and AF4 (medial frontal); PO8, P8, and P10 (right posterior); PO7, P7, and P9 (left posterior); and POz, Oz, O1, O2, and Iz (central posterior), with PLVs calculated for each frontal–posterior EOI combination separately. For the coherence data, the factor of posterior EOI location (3) was modeled in the ANOVA, but showed neither a main effect nor an interaction with the other factors.

Supplementary Material 1
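The small-sample correction cited in the statistical analyses (Hedges & Olkin) is commonly implemented as the multiplicative factor below; a sketch assuming the pooled-standard-deviation form of Cohen's d:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d between two independent samples, pooled SD form."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                      (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

def hedges_g(x, y):
    """Small-sample bias correction (Hedges & Olkin approximation):
    g = d * (1 - 3 / (4N - 9)), with N the total sample size."""
    n = len(x) + len(y)
    return cohens_d(x, y) * (1 - 3 / (4 * n - 9))
```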
                Author and article information

                Contributors
                Journal
                Front Psychol
                Front Psychol
                Front. Psychol.
                Frontiers in Psychology
                Frontiers Media S.A.
                1664-1078
                30 September 2014
                2014
                : 5
                : 1109
                Affiliations
                School of Psychology, University of Birmingham Birmingham, UK
                Author notes

                This article was submitted to Educational Psychology, a section of the journal Frontiers in Psychology.

                Edited by: Clare Wood, Coventry University, UK

                Reviewed by: Kevin Durkin, University of Strathclyde, UK

                Article
                10.3389/fpsyg.2014.01109
                4179712
                4f19dee7-cf50-4008-9b67-aaef953ff252
                Copyright © 2014 Ashinoff.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                : 15 July 2014
                : 12 September 2014
                Page count
                Figures: 0, Tables: 0, Equations: 0, References: 76, Pages: 5, Words: 5110
                Categories
                Psychology
                Opinion Article

                Clinical Psychology & Psychiatry
                video games,educational technology,cognitive training,plasticity and learning,games for learning,training-induced changes,training effects
