
      Harnessing the neuroplastic potential of the human brain & the future of cognitive rehabilitation

      Editorial


          Abstract

Neuroplasticity is the remarkable ability of the brain that allows us to learn and adapt to our environment. Many studies have now shown that plasticity is retained throughout the lifespan from infancy to very old age (Merzenich et al., 1991; Merzenich and DeCharms, 1996; Greenwood and Parasuraman, 2010; May, 2011; Bavelier et al., 2012). Enriching life experiences, including literacy, prolonged engagement in the arts, sciences and music, meditation and aerobic physical activities, have all been shown to engender positive neuroplasticity that boosts cognitive function and/or prevents cognitive loss (Vance et al., 2010; Hayes et al., 2013; Matta Mello Portugal et al., 2013; Newberg et al., 2013; Zatorre, 2013). Unfortunately, just as enriching experiences generate positive plasticity, negative plasticity ensues in impoverished settings. For instance, many studies now show that low socio-economic, resource-poor environments, which are associated with stress, violence and abuse within families and communities, have detrimental effects on cognition and neural function (D'Angiulli et al., 2012; McEwen and Morrison, 2013). As cognitive neuroscientists we observe both positive and negative aspects of plasticity in neural systems: in functional changes of neural activations, neural oscillations and strength of connectivity between brain regions; in structural changes in gray matter volume and white matter integrity; and, importantly, in the relationship between such neuroplastic changes and concomitant cognitive/behavioral changes. As we come to understand various facets of plasticity, it further drives the quest to develop new activities and interventions that engender maximal positive plasticity in selectively targeted neural systems; we envision such activities will in turn generate "far transfer of benefit" to generalized cognition and thereby improve the human condition.
In today's modern technological and internet-connected era, individuals are increasingly engaging with cognitive training software to improve cognitive function. In fact, over the past 10–15 years, several companies have become established proponents and marketers of such software, transforming it into a multi-million dollar industry with exponential projected future growth. The fact that this technology is easily accessible over the internet in the home setting, and at low cost, has facilitated its mass adoption. Scientifically, however, not all "brain training" is created equal. All too often, basic cognitive neuroscience experimental paradigms are embedded in commercial "brain training" approaches with add-on visual graphic skins that attempt to maximize user engagement; a process known as gamification. Although these experimental paradigms were originally developed to understand cognition, that does not mean that they are also the best tools to engender positive neuroplasticity. It is no surprise, then, that some scientific investigations have found that generic brain training approaches yield no positive cognitive outcomes (Owen et al., 2010). However, a blanket statement that all cognitive training is ineffective is also unfair. In recent years, development and evaluation of cognitive training approaches in many labs, including our own, has revealed evidence for positive neuroplasticity, as well as for transfer of benefit to untrained cognitive abilities (Tallal et al., 1996; Temple et al., 2003; Stevens et al., 2008; Smith et al., 2009; Ball et al., 2010; Berry et al., 2010; Anderson et al., 2013; Anguera et al., 2013; Mishra et al., 2013; Wolinsky et al., 2013). Furthermore, in two of our training studies we find neurobehavioral correlations that relate on-task neuroplasticity to broader improvements in untrained aspects of cognition.
Other researchers have also reported positive findings and transfer of training effects to untrained cognitive abilities in the context of custom-designed working memory exercises (Klingberg, 2010; Rutledge et al., 2012), task-switching training (Karbach and Kray, 2009), as well as for a specific genre of commercially available games, i.e., action video games (Bavelier et al., 2012) (although it is difficult to make strong recommendations about many off-the-shelf games given concerns over violent content). From these studies we are coming to understand some of the design principles that may govern the development of effective neuroplasticity-targeted training, as well as the scientific evaluation methods that can be used to provide convincing proof of the efficacy of the training intervention. Here, we summarize some of these principles that have emerged from two of our published training studies that now inform the development and evaluation of our next generation of training tools. In our first training study in older adults, we simply trained visual perceptual discrimination of Gabor patches that had built-in directed motion animation (Berry et al., 2010). Ten hours of training improved on-task perception relative to performance changes in a non-training (no-contact) control group. Interestingly, the training also benefitted delayed-recognition working memory of an untrained motion direction task. Not only was working memory performance improved, electroencephalography (EEG) neural recordings showed that training evoked more efficient sensory encoding of the stimuli, which correlated with the working memory performance gains. 
This finding that 10 h of simple perceptual training engendered transfer of benefits to working memory aligns with recent understanding that perceptual training improves signal-to-noise contrast, which then leads to refined encoding at multiple neural scales and hence, at least some degree of generalized cognitive benefits (Vinogradov et al., 2012). We are now gaining an appreciation that the observed gains in our perceptual training study, and in similar studies performed by other labs, some of which have shown long-lasting cognitive benefits (Willis et al., 2006; Rebok et al., 2014), may be mediated by two fundamental design elements that drive neuroplasticity: (1) training incorporated continuous performance feedback at multiple levels of game play, providing repeated cycles of reward to the user; and (2) training was adaptive to the trainee's in-the-moment game performance, i.e., adaptivity was incorporated using psychophysical staircase functions that enhance training challenge in response to accurate performance and reduce it for inaccurate performance. The up-down step ratio in such staircases is often chosen to maintain overall task challenge at 75–85%, at which point the user is optimally engaged but not frustrated. Thus, continuous performance feedback rewards and adaptive task challenge uniquely personalize the training to the cognitive capacity of each individual, and allow abilities to improve over time. Overall, we have found these features to be critically important in generating positive neuroplasticity and cognitive benefit. Note, it is important to realize that casual game software is often not designed to provide the optimal dose of repetitive rewards, nor to incorporate adaptive progressions specifically targeted to the cognitive domains that may be deficient in a given population cohort.
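The staircase logic described above can be sketched in a few lines. This is a generic weighted one-up/one-down rule, assumed for illustration rather than taken from any of the cited training programs; with an up:down step ratio of 1:4, accuracy converges near 80%, inside the 75–85% engagement band mentioned in the text.

```python
# Minimal sketch of a weighted one-up/one-down adaptive staircase (illustrative,
# not the authors' exact implementation). At equilibrium the expected level
# movement is zero: p_correct * up_step == (1 - p_correct) * down_step,
# so up_step=1, down_step=4 converges near p_correct = 4/5 = 80% accuracy.

def update_level(level: int, correct: bool, up_step: int = 1,
                 down_step: int = 4, min_level: int = 1) -> int:
    """Raise difficulty after a correct trial; lower it faster after an error."""
    if correct:
        return level + up_step
    return max(min_level, level - down_step)

# Example: a short run of trials nudges difficulty up on hits, sharply down on a miss.
level = 10
for outcome in [True, True, True, False, True]:
    level = update_level(level, outcome)  # 11, 12, 13, 9, 10
```

Because errors pull the level down four times faster than hits push it up, a trainee hovering near the equilibrium level is held at roughly the target challenge without ever being parked at an impossibly hard setting.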
These factors, along with the heterogeneity of tested populations, very small training doses spread over multiple cognitive exercises, and the use of assessment measures that are too insensitive to detect training-related benefits in the tested population, may all contribute to a failure to observe positive impact (e.g., Owen et al., 2010). While reward cycles and adaptive progressions are key components of software design, it is equally important to tailor these game mechanics toward improving specific deficits observed in a population cohort. For instance, Anguera et al. (2013) showed that deficient cognitive control abilities, such as working memory and sustained attention, in healthy older adults can be enhanced by specifically training on a multitasking performance-adaptive and rewarding video game, "NeuroRacer." "NeuroRacer" implements visual discrimination training in a go/no-go task for colored shape targets, with the added demand of simultaneously driving on a virtual road. "NeuroRacer" evidenced extensive gains such that healthy older adults, who multitasked 175% worse than younger adults on a first assessment, achieved post-training performance levels on the game itself that surpassed those of young adults by 44%. Importantly, training on "NeuroRacer" transferred to untrained measures of sustained attention and working memory in the setting of interference, with EEG-based neural recordings showing that plasticity of midline frontal theta (mf theta) neural oscillations may be a mediator of these cognitive improvements. While we have tested some aspects of sound game design, as described, other aspects of high-level video games may contribute to their success, and we look ahead to assessing these empirically.
For example, immersion, fun, real-world features, continuous performance feedback, 3D environments, virtual reality, and high levels of art, story, and music facilitate sustained performance and better compliance, as well as the deeper engagement that we suspect maximally harnesses plasticity. Evaluation of the influence of these features on training effectiveness requires careful scientific study design. For this, the "NeuroRacer" study adopted a rigorous three-armed randomized controlled design. In addition to the multitasking training group, the study included an active single-task training control, as well as a no-contact control group. The single-task training control performed the exact same visual discrimination and driving tasks as the training group, except that task engagement was not concurrent. This active control directly tested our hypothesis that only training in a setting that stresses cognitive control via a high-interference environment would show significant cognitive gains. Outcomes of the "NeuroRacer" multitasking training were not achieved in the active control group or in the no-contact group, the latter being critical for assessing practice effects due to repeated evaluations. Thus, the "NeuroRacer" study highlighted that rigorous scientific evaluation of a cognitive training approach requires appropriate control groups, and often more than one control group, especially if we want to understand the underlying mechanisms of training effectiveness. Indeed, longitudinal data collection is arduous, but without randomized, controlled, and single/double-blinded enrollments, we cannot convince ourselves of the significance of the results of new interventions. This is especially appropriate for healthy populations, while single-arm feasibility trials do remain informative as a first pass in cognitively impaired populations.
In addition, we should also implement expectation bias measures for all participants, which confirm that all study groups anticipate the same level of influence of their assigned intervention on the outcome measures, thus assuring appropriate placebo control (Boot et al., 2013). Finally, adequately powered large-sample studies and investigations that measure sustainability of the cognitive gains and underlying neuroplasticity in yearly follow-ups are rare and need to be performed more often to address the long-term efficacy of cognitive interventions. Such rigor is convention in pharmaceutical clinical trials, and its adoption for video game testing, along with safety evaluations that detect potential side effects such as game addiction, would promote a path toward FDA approvals and medical prescription of such technologies. Equipped with our growing understanding of how to design cognitive training approaches to target plasticity in specific neural circuits, we are now embarking on the development of the next generation of training technologies. We envision these advances to include combining behavior-digital closed loops, which link behavioral performance metrics to adaptive modulations of a training task on a digital platform, with neuro-digital closed loops, which link neural performance measures to adaptive game mechanics. For example, the "NeuroRacer" training study discovered that neuroplasticity of midline frontal theta (enhanced mf theta post-training) is a key neural factor that correlates with transferred cognitive gains. In order to test whether mf theta plasticity is truly causal in enabling improved cognition, we are now developing neuro-digital closed loops that directly target mf theta activity. More specifically, technological development is being directed at real-time EEG-based recordings that occur simultaneously with the cognitive task training (Delorme et al., 2011; Makeig et al., 2012; Kothe and Makeig, 2013).
The goal is for these measurements to be event-locked to task stimuli, account for ocular and muscle-related artifacts, and use source localization algorithms (Mullen et al., 2013) so that they can be directly integrated in the game environment to guide reward feedback to the user and adaptivity of task challenge in real time. We hypothesize that using neural performance as the driver for task adaptivity will generate more rapid, efficient and specific circuit plasticity than is currently obtained using behavior-adaptive cognitive training approaches. This hypothesis is borne out by data showing that single-trial behavioral performance is predicted by neural measures such as mf theta oscillations preceding the behavior. Thus the neuro-digital closed loop offers the potential to selectively train and refine the bottleneck neural processes that govern the final behavioral outcome. Importantly, by directly embedding task-related neural activity in a closed loop, this approach can provide missing causal evidence between neuroplasticity and cognitive benefits. This line of investigation is especially promising in light of accumulating scientific evidence of the value of conventional neurofeedback approaches (Gruzelier, 2013; Wang and Hsieh, 2013; Arns et al., 2014), which also create a neuro-digital closed loop, albeit driven by ongoing scalp EEG oscillations as opposed to task-related neural processes as we envision. We are aware that unlike traditional cognitive training, a neuro-digital closed loop approach is not feasible as a mainstay in the home setting at present. Yet, with rapid developments in mobile EEG technology (Stopczynski et al., 2013), as well as advances in the real-time computational power available on consumer devices such as laptops and tablets, we expect that deployment in the home environment will be a reality within a few years.
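As a toy illustration of the neuro-digital closed loop concept, the sketch below estimates theta-band (4–7 Hz) power from a simulated one-second EEG window and feeds it into a difficulty update. The sampling rate, threshold, band-power method, and function names are all illustrative assumptions, not the system described here; a real pipeline would additionally need event locking, artifact rejection, and source localization, as noted above.

```python
# Hypothetical neuro-digital closed loop sketch: theta power -> task challenge.
import cmath, math

FS = 256          # assumed EEG sampling rate (Hz)
N = 256           # one-second analysis window (1 Hz frequency resolution)

def band_power(window, lo_hz, hi_hz, fs=FS):
    """Summed per-bin power in [lo_hz, hi_hz], via a direct DFT over those bins."""
    n = len(window)
    total = 0.0
    for k in range(int(lo_hz * n / fs), int(hi_hz * n / fs) + 1):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(window))
        total += (2 * abs(coeff) / n) ** 2 / 2  # power of a real sinusoid at bin k
    return total

def adapt_challenge(level, theta_power, target=0.2):
    """Toy policy: strong theta engagement -> raise challenge; weak -> lower it."""
    return level + 1 if theta_power > target else max(1, level - 1)

# Simulated 6 Hz "theta" oscillation dominates the 4-7 Hz band of this window.
eeg = [math.sin(2 * math.pi * 6 * t / FS) for t in range(N)]
theta = band_power(eeg, 4, 7)
```

The design point is that the adaptivity signal comes from a neural measure rather than from button-press accuracy, which is the distinction the text draws between behavior-digital and neuro-digital loops.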
Neuro-digital closed loops are also an exciting way to achieve personalized therapeutics, as each feedback loop is customized to the individual user's neural capacities in the moment. While here we have provided a simple example of a closed loop tied to task-related mf theta activation, one can conceive of more sophisticated neural targets, including frontal-posterior effective connectivity based on task interaction dynamics. Further advances in this field are expected as neuroscientists collaborate with neural engineers, who have predominantly focused related efforts on neuroprosthetic development (Borton et al., 2013). Neuro-engineers have designed efficient closed loop decoding algorithms for brain-machine interfaces in animal model systems, and these techniques are now ripe for adoption in humans (Carmena, 2013). Finally, especially beneficial for clinical populations that exhibit weakened neural responsivity, another intriguing step will be the integration of neuro-digital closed loop systems with transcranial electrical current stimulation or even deep brain stimulation technologies (Coffman et al., 2013), which may provide a needed plasticity boost to impaired brain regions. To achieve the goals of our field and fully harness the potential of neuroplasticity for cognitive benefit, we look forward to continued technological development, such as neuro-digital closed loops, and their integration with emerging design principles of cognitive training games. These technologies, validated using randomized, controlled scientific evaluation methodologies, will generate new understanding of how to translate cognitive neuroscience discoveries into new educational tools for healthy populations and mental healthcare interventions for neuropsychiatric populations in need of cognitive remediation.

Conflict of interest statement

Jyoti Mishra is a part-time scientist at the Brain Plasticity Institute, PositScience, a company that develops cognitive training software.
Adam Gazzaley is co-founder and chief science advisor of Akili Interactive Labs, a company that develops cognitive training software. Jyoti Mishra and Adam Gazzaley have a patent pending for “Methods of Suppressing Irrelevant Stimuli.” Adam Gazzaley has a patent pending for a game-based cognitive training intervention: “Enhancing cognition in the presence of distraction and/or interruption.”


          Most cited references (32)


          Video game training enhances cognitive control in older adults

Cognitive control is defined by a set of neural processes that allow us to interact with our complex environment in a goal-directed manner [1]. Humans regularly challenge these control processes when attempting to simultaneously accomplish multiple goals (i.e., multitasking), generating interference as the result of fundamental information processing limitations [2]. It is clear that multitasking behavior has become ubiquitous in today's technologically-dense world [3], and substantial evidence has accrued regarding multitasking difficulties and cognitive control deficits in our aging population [4]. Here we show that multitasking performance, as assessed with a custom-designed 3-D video game (NeuroRacer), exhibits a linear age-related decline from 20–79 years of age. By playing an adaptive version of NeuroRacer in multitasking training mode, older adults (60–85 y.o.) reduced multitasking costs compared to both an active control group and a no-contact control group, attaining levels beyond those of untrained 20-year-olds, with gains persisting for six months. Furthermore, age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by multitasking training (i.e., enhanced midline frontal theta power and frontal-posterior theta coherence). Critically, this training resulted in performance benefits that extended to untrained cognitive control abilities (i.e., enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting the training-induced boost in sustained attention and preservation of multitasking improvement six months later.
These findings highlight the robust plasticity of the prefrontal cognitive control system in the aging brain, and provide the first evidence of how a custom-designed video game can be used to assess cognitive abilities across the lifespan, evaluate underlying neural mechanisms and serve as a powerful tool for cognitive enhancement. In a first experiment, we evaluated multitasking performance across the adult lifespan. 174 participants spanning six decades of life (ages 20–79; ~30 individuals per decade) played a diagnostic version of NeuroRacer to measure their perceptual discrimination ability (‘sign task’) with and without a concurrent visuomotor tracking task (‘driving task’; see Supplementary Materials for details of NeuroRacer). Performance was evaluated using two distinct game conditions: (1) ‘Sign Only’: respond as rapidly as possible to the appearance of a sign only when a green circle was present, and (2) ‘Sign & Drive’: simultaneously perform the sign task while maintaining a car in the center of a winding road using a joystick (i.e., ‘drive’; see Figure 1a). Perceptual discrimination performance was evaluated using the signal detection metric of discriminability (d'). A ‘cost’ index was used to assess multitasking performance by calculating the percentage change in d' from ‘Sign Only’ to ‘Sign & Drive’, such that greater cost (i.e., a more negative % cost) indicates increased interference when simultaneously engaging in the two tasks (see Methods Summary). Prior to the assessment of multitasking costs, an adaptive staircase algorithm was used to determine the difficulty levels of the game at which each participant performed the perceptual discrimination and visuomotor tracking tasks in isolation at ~80% accuracy. These levels were then used to set the parameters of the component tasks in the multitasking condition, so that each individual played the game at a customized challenge level.
This assured that comparisons would inform differences in the ability to multitask, and not merely reflect disparities in component skills (see Methods, Supplementary Figures 1 & 2, and Supplementary Materials for more details). Multitasking performance diminished significantly across the adult lifespan in a linear fashion (i.e., increasing cost; see Figure 2a and Supplementary Table 1), with the only significant difference in cost between adjacent decades being the increase from the 20s (−26.7% cost) to the 30s (−38.6% cost). This deterioration in multitasking performance is consistent with the pattern of performance decline across the lifespan observed for fluid cognitive abilities, such as reasoning [5] and working memory [6]. Thus, using NeuroRacer as a performance assessment tool we replicated previously evidenced age-related multitasking deficits [7,8], and revealed that multitasking performance declines linearly as we advance in age beyond our twenties. In a second experiment, we explored whether older adults who trained by playing NeuroRacer in multitasking mode would exhibit improvements in their multitasking performance on the game [9,10] (i.e., diminished NeuroRacer costs). Critically, we also assessed whether this training transferred to enhancements in their cognitive control abilities [11] beyond those attained by participants who trained on the component tasks in isolation. In designing the multitasking training version of NeuroRacer, steps were taken to maintain both equivalent difficulty and engagement in the component tasks to assure a prolonged multitasking challenge throughout the training period: difficulty was maintained using an adaptive staircase algorithm to independently adjust the difficulty of the ‘sign’ and ‘driving’ tasks following each 3-min run based on task performance, and balanced task engagement was motivated by rewards given only when both component tasks improved beyond 80% on a given run.
We assessed the impact of training with NeuroRacer in a longitudinal experiment that involved randomly assigning 46 naïve older adults (60–85 yrs; mean 67.1 ± 4.2 yrs) to one of three groups: Multitasking Training (MTT; n=16), Singletask Training (STT; n=15) as an active control, or No-Contact Control (NCC; n=15). Training involved playing NeuroRacer on a laptop at home for 1 hour a day, 3 times a week, for 4 weeks (12 total hours of training), with all groups returning for a 1-month Post-training and a 6-month follow-up assessment (Figure 1b). The MTT group played the ‘Sign & Drive’ condition exclusively during the training period, while the STT group divided their time between a ‘Sign Only’ and a ‘Drive Only’ condition, and so were matched for all factors except the presence of interference. In addition to a battery of cognitive control tests used to assess the breadth of training benefits (see Supplementary Table 2), the neural basis of training effects was evaluated using electroencephalography (EEG) recorded at Pre- and Post-training visits while participants performed a neural assessment version of NeuroRacer. Analysis showed that only the MTT group's multitasking performance index significantly improved from Pre- (−64.2% cost) to Post-training (−16.2% cost; Figure 2b), thus supporting the role of interference during game play as a key mechanistic feature of the training approach. In addition, although cost reduction was observed only in the MTT group, equivalent improvement in component task skills was exhibited by both STT and MTT (see Supplementary Figures 4 and 5). This indicates that enhanced multitasking ability was not solely the result of enhanced component skills, but a function of learning to resolve interference generated by the two tasks when performed concurrently.
Moreover, the d' cost improvement following training was not the result of a task tradeoff, as driving performance costs also diminished for the MTT group from Pre- to Post-training (see Supplementary Materials). Notably, in the MTT group the multitasking performance gains remained stable 6 months after training without booster sessions (at 6 months: −21.9% cost). Interestingly, the MTT group's Post-training cost improved significantly beyond the cost level attained by a group of 20-year-olds who played a single session of NeuroRacer (−36.7% cost; Experiment 3), with effect sizes of .50–1.0 (using Cohen's d, see Methods) for both cognitive control performance and neural measures versus either control group. The sustained multitasking cost reduction over time and evidence of generalizability to untrained cognitive control abilities provide optimism for the use of an adaptive, interference-rich, video game approach as a therapeutic tool for the diverse populations that suffer from cognitive control deficits (e.g., ADHD, depression, dementia). These findings stress the importance of a targeted training approach, as reinforced by a recent study that observed a distinct lack of transfer following non-specific online cognitive exercises [30]. In conclusion, we provide evidence of how a custom-designed video game targeting impaired neural processes in a population can be used to diagnose deficits, assess underlying neural mechanisms, and enhance cognitive abilities.

Methods Summary

All participants had normal or corrected vision, no history of neurological, psychiatric, or vascular disease, and were not taking any psychotropic or hypertension medications. In addition, they were considered ‘non-gamers’ given that they played less than 2 hours of any type of video game per month. For NeuroRacer, each participant used their left thumb for tracking and their right index finger for responding to signs on a Logitech (Logitech, USA) gamepad controller.
Participants engaged in three 3-minute runs of each condition in a randomized fashion. Signs were randomly presented in the same position over the fixation cross for 400 msec every 2, 2.5, or 3 seconds, with the speed of driving dissociated from sign presentation parameters. The multitasking cost index was calculated as follows: [(‘Sign & Drive’ performance − ‘Sign Only’ performance) / ‘Sign Only’ performance] * 100. EEG data for 1 MTT Post-training participant and 1 STT Pre-training participant were corrupted during acquisition. 2 MTT participants, 2 STT participants, and 4 NCC participants were unable to return to complete their 6-month follow-up assessments. Critically, no between-group differences were observed for neuropsychological assessments (p = .52) or Pre-training data involving: i) NeuroRacer thresholding for both Road (p = .57) and Sign (p = .43), ii) NeuroRacer component task performance (p > .10 for each task), iii) NeuroRacer multitasking costs (p = .63), iv) any of the cognitive tests (all ANOVAs at Pre-training: p ≥ .26), v) ERSP power for either condition (p ≥ .12), and vi) coherence for either condition (p ≥ .54).

Methods

Participants

All participants were recruited through online and newspaper advertisements. For Experiment 1, 185 (90 male) healthy, right-handed individuals consented to participate according to procedures approved by the University of California at San Francisco. For Experiments 2 & 3, 60 (33 male) older adults and 18 (9 male) young adults participated without having been a part of Experiment 1 (see Supplementary Table 3 for demographic descriptions and Supplementary Figure 9 for Experiment 2 participant enrollment).
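The cost formula just given, together with the discriminability metric d' used throughout, can be sketched as follows. The hit and false-alarm rates in the example are invented for illustration only.

```python
# Sketch of the two metrics defined in the Methods Summary:
#   d' = Z(hit rate) - Z(false-alarm rate)   (signal detection discriminability)
#   cost = [(Sign & Drive - Sign Only) / Sign Only] * 100
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Discriminability index; Z is the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def multitasking_cost(d_single: float, d_dual: float) -> float:
    """Percentage change in d'; a more negative value means more interference."""
    return (d_dual - d_single) / d_single * 100

# Illustrative (made-up) rates: discrimination degrades under dual-task load.
d_single = d_prime(0.90, 0.10)   # 'Sign Only'
d_dual = d_prime(0.75, 0.20)     # 'Sign & Drive'
cost = multitasking_cost(d_single, d_dual)  # negative: a multitasking cost
```

With these rates the single-task d' is about 2.56, the dual-task d' about 1.52, giving a cost of roughly −41%, in the same ballpark as the group costs reported above.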
Participants who were unable to perform the tasks, as indicated by tracking performance below 15% (6 individuals from Experiment 1, 8 individuals from Experiment 2), or a false positive rate greater than 70% (5 individuals from Experiment 1, 6 individuals from Experiment 2) during any one visit or across more than 4 individual training sessions, were excluded.

Thresholding

Prior to engaging in NeuroRacer, participants underwent an adaptive thresholding procedure for discrimination (nine 120-sec runs) and tracking ability (twelve 60-sec runs) to determine a ‘sign’ and ‘drive’ level at which each participant would perform at ~80% accuracy (see Supplementary Figures 1 & 2). Having individuals engage each condition in their own ‘space’ following thresholding procedures facilitated a fairer comparison across ages and abilities. This procedure is frequently omitted in other studies, which leads to difficulty interpreting performance differences (especially in multitasking) as being the result of differences in interference processing rather than differences in component task skills. For the perceptual discrimination thresholding, each participant's performance for a given run was determined by calculating a proportion correct score involving: i) correctly responding to targets, ii) correctly avoiding non-targets, iii) late responses to targets, and iv) responding to non-targets. At the end of each run, if this score was greater than 82.5%, the subsequent run would be played at a higher level, which had a correspondingly shorter time window for responses to targets. More specifically, the adaptive algorithm made proportional level changes depending upon the participant's deviation from this ~80% median, such that each 1.75% increment away from the median corresponded with a change in level (see Supplementary Figure 1a). Thus, a 90% performance would lead to a 40 msec reduction in the time window, while a 55% (or less) performance would lead to a 100 msec lengthening of said window.
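The proportional level-change rule can be sketched as below. Note the text does not state a per-level time-window step directly; the ~7 ms value used here is inferred from its two worked examples (90% → 40 ms shorter; 55% → 100 ms longer) and should be treated as an assumption.

```python
# Sketch of the proportional sign-thresholding rule: each 1.75% of accuracy
# away from the ~80% median moves difficulty by one level. The 7 ms response-
# window change per level is an inferred assumption, chosen because it
# reproduces both worked examples in the text.

MEDIAN = 80.0        # target accuracy (%)
STEP_PCT = 1.75      # accuracy increment per difficulty level
MS_PER_LEVEL = 7.0   # assumed response-window change per level (ms)

def window_change_ms(score_pct: float) -> float:
    """Negative = shorten the response window (harder); positive = lengthen."""
    levels = (score_pct - MEDIAN) / STEP_PCT
    return -levels * MS_PER_LEVEL
```

Under this assumption, a 90% run yields a 40 ms reduction ((90 − 80)/1.75 ≈ 5.7 levels × 7 ms) and a 55% run a 100 ms lengthening, matching the figures quoted above.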
Thresholding parameters for road levels followed a similar pattern, with each .58% increment away from the same median corresponding with a change in level (see Supplementary Figure 1b). These parameters were chosen following extensive pilot testing to: (1) minimize the number of trial runs until convergence was reached, (2) minimize convergence instability, and (3) maximize sampling resolution of user performance. The first 3 driving thresholding blocks were considered practice to familiarize participants with the driving portion of the task and were not analyzed. A regression over the 9 thresholding runs in each case was computed to select the ideal time window and road speed to promote a level of ~80% accuracy on each distraction-free task throughout the experiment (see Supplementary Figure 2). All participants began the thresholding procedures at the same road (level 20) and sign (level 29) levels.

Conditions

Following the driving and sign thresholding procedures, participants performed 5 different three-minute 'missions', with each mission performed three times in a pseudo-randomized fashion. In addition to the ‘Sign Only’, ‘Drive Only’, and ‘Sign & Drive’ conditions, participants also performed a 'Sign With Road' condition, where the car was placed on 'auto pilot' for the duration of the run and participants responded to the signs, and a ‘Drive With Signs’ condition, where participants were told to ignore the signs that appeared and continue to drive as accurately as possible. Data from these two conditions are not presented here. Feedback was given at the end of each run as the proportion correct for all signs presented in the perceptual discrimination task (although we used the signal detection metric of discriminability (d') [31] to calculate our ‘Cost’ index throughout the study), and the percentage of time spent on the road (see Supplementary Figure 10).
Prior to the start of each subsequent run, participants were informed which condition would be engaged next and how many experimental runs remained. Including thresholding, the testing session encompassed 75 min of gameplay.

NeuroRacer training and testing protocol

For Experiment 1, participants were seated in a quiet room in front of an Apple MacBook Pro 5.3 laptop computer at an approximate distance of 65 cm from the 15" screen. For Experiments 2 and 3, participants were seated in a dark room with the screen ~100 cm away. All training participants trained at home using an Apple MacBook Pro 5.3 laptop computer while sitting ~60 cm from the screen (see Supplementary Figure 11a). For Experiment 1, each perceptual discrimination-based experimental run (180 sec) contained 36 relevant targets (green circles) and 36 lures (green, blue, and red pentagons and squares). For Experiments 2 & 3, the sign ratio was changed to 24/48. Prior to training, each participant was given a tutorial demonstrating how to turn on the laptop, properly set up the joystick, and navigate to the experiment; was shown what the 1st day of training would be like and how to interpret the feedback provided; and was encouraged to find a quiet environment at home for the training sessions. If requested by the participant, a lab member would visit the participant's home to help set up the computer and instruct training. In addition, to encourage/assess compliance and hold participants to a reasonable schedule, participants were asked to plan their training days & times with the experimenter for the entire training period and enter this information into a shared calendar. Each participant (regardless of group) was informed that their training protocol was designed to train cognitive control faculties, using the same dialogue to avoid expectancy differences between groups.
There was no contact between participants of different groups, and participants were encouraged not to discuss their training protocol with others, to avoid biasing participants in the other groups. On each day of training, participants were shown a visualization of a map representing their 'training journey', to provide a sense of accomplishment following each training session (Supplementary Figure 11b). They were also shown a brief video reminding them how to hold the controller, which buttons to use, their previous level(s) reached, and what the target would be that day for the perceptual discrimination condition. In addition, the laptop's built-in video camera was activated (indicated by a green light) for the duration of each training run, providing i) visual assessment of task engagement, ii) motivation for participants to comply with the training task instructions, and iii) information about any run where performance was dramatically poorer than others. Participants were discouraged from playing 2 days in a row and encouraged to play at the same time of day. MTT participants were reminded that an optimal training experience depended upon doing well on both the sign and drive tasks without sacrificing performance on one for the other. While the STT group were provided a 'Driving' or 'Sign' score following each training run, the MTT group were also provided an 'Overall' score following each run as a composite of performance on both tasks (see Supplementary Figures 5 and 11). Following the completion of every 4th run, participants were rewarded with a 'fun fact' screen regarding basic human physiology (http://faculty.washington.edu/chudler/ffacts.html) before beginning their subsequent training run.
To assess whether training was a 'fun' experience, participants in each training group rated the experience on their final visit to the laboratory on a scale of 1 (minimally) to 10 (maximally) (MTT: 6.5 ± 2.2; STT: 6.9 ± 2.4; t = .65, p = .52). Critically, the training groups did not differ in their initial thresholding values for either Road (F(2,45) = .58, p = .57) or Sign (F(2,45) = .87, p = .43). Each laptop was configured to transmit NeuroRacer performance data wirelessly to our secure lab server via DropBox® as each run was completed. This facilitated near-real-time monitoring of compliance and data integrity: participants were contacted if i) they failed to complete all 20 training runs on a scheduled training day, ii) 'Sign Only' and 'Drive Only' performance suggested that a problem had occurred within a given training session, or iii) a designated training day was missed. Individuals without wireless internet at home were instructed to visit an open wireless internet location (e.g., coffee shop, public library) at least once a week to transfer data; if this was not an option, researchers arranged weekly home visits to acquire the data. All participants were contacted via email and/or phone on a weekly basis to encourage and discuss their training; similarly, participants could contact the research staff via phone and email with any questions regarding the training procedures. Pre- and Post-training evaluations involving cognitive testing and NeuroRacer EEG took place across 3 different days (appointment and individual test order were counterbalanced), with all sessions completed within approximately one week (total number of days to complete all Pre-training testing: 6.5 ± 2.2; Post-training testing: 6.1 ± 1.5). Participants returned for their 1st Post-training cognitive assessments 2.0 ± 2.2 days after their final training session.
While scheduled for 6 months after the final testing session, the 6-month follow-up visits actually occurred on average 7.6 ± 1.1 months afterwards, due to difficulties in maintaining (and rescheduling) these distant appointments. Critically, no group differences were present for any of these time-of-testing measures (F .18 for each comparison).

Cognitive Battery

The cognitive battery (see Supplementary Table 2) consisted of tasks spanning different cognitive control domains: sustained attention (TOVA; see Supplementary Figure 12a), working memory (delayed recognition; see Supplementary Figure 12b), visual working memory capacity (see Supplementary Figure 13), dual-tasking (see Supplementary Figure 14), useful field of view (UFOV; see Supplementary Figure 15), and two control tasks of basic motor response and speed of processing (stimulus detection task, digit symbol substitution task; see Supplementary Figure 16). Using the analysis metrics regularly reported for each measure, we performed a mixed-model ANOVA of Group (3: MTT, STT, NCC) × Session (2: Pre, Post) × Cognitive test (11; see Supplementary Table 2), and observed a significant 3-way interaction (F(20, 400) = 2.12, p = .004), indicating that training benefits were selective across group and test. To interrogate this interaction, each cognitive test was analyzed separately with Session × Group ANOVAs to isolate those measures that changed significantly following training. We also present the p-value associated with the ANCOVAs for each measure in Supplementary Table 2 (dependent measure = Post-training performance, covariate = Pre-training performance), which showed a similar pattern of effects to most of the 2-way ANOVAs.
The ANCOVA approach is considered more suitable when the primary outcome of interest is post-test performance that is not conditional on/predictable from pre-test performance, as opposed to characterizing gains from Pre-training performance (e.g., group × session interaction(s))32; however, both are appropriate statistical tools that have been used to assess cognitive training outcomes27,33 (see Supplementary Figure 17 for an example).

EEG Recordings and Eye Movements

Neural data were recorded using an Active Two head cap (Cortech Solutions) with a BioSemi ActiveTwo 64-channel EEG acquisition system in conjunction with BioSemi ActiView software (Cortech Solutions). Signals were amplified and digitized at 1024 Hz with 16-bit resolution. Anti-aliasing filters were used, and data were band-pass filtered between 0.01–100 Hz during acquisition. For each EEG recording session, the NeuroRacer code was modified to flash a 1 × 1" white box for 10 msec at one of the corners of the stimulus presentation monitor upon the appearance of a sign. A photodiode (http://www.gtec.at/Products/Hardware-and-Accessories/g.TRIGbox-Specs-Features) captured this change in luminance to facilitate precise time-locking of the neural activity associated with each sign event. During the experiment, these corners were covered with tape to prevent participants from being distracted by the flashing light. To ensure that any training effects were not due to changes in eye movement, electrooculographic data were analyzed as described by Berry and colleagues34. Using this approach, vertical (VEOG = FP2 − IEOG electrodes) and horizontal (HEOG = REOG − LEOG electrodes) difference waves were calculated from the raw data and baseline-corrected to the mean prestimulus activity. The magnitude of eye movement was computed as √(VEOG² + HEOG²).
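The eye-movement magnitude measure above, and the trial-variance summary described next, can be sketched as follows; the array shapes (trials × samples) and the mask-based window selection are illustrative choices, not details from the Methods:

```python
import numpy as np

def eye_movement_magnitude(veog, heog):
    """Combined EOG magnitude per time point: sqrt(VEOG^2 + HEOG^2).

    veog, heog: baseline-corrected difference waves, e.g. arrays of
    shape (n_trials, n_samples). Returns a same-shaped magnitude array.
    """
    return np.sqrt(np.asarray(veog) ** 2 + np.asarray(heog) ** 2)

def mean_trial_variance(magnitude, window_mask):
    """Variance of the magnitude across trials at each sample, averaged
    over an analysis window (e.g. -200 to 1000 msec post-stimulus)."""
    var_per_sample = magnitude.var(axis=0)   # across-trial variance
    return var_per_sample[window_mask].mean()
```

A 3 µV vertical and 4 µV horizontal deflection combine to a 5 µV magnitude, as expected from the Pythagorean form.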
The variance in the magnitude of eye movement was computed across trials, and the mean variance was examined from −200 to 1000 msec post-stimulus onset. The variance was compared i) between sessions for each group's performance on the 'Sign & Drive' and 'Sign Only' conditions, ii) between groups at each session for each condition, and iii) between young and older adults on each condition. We used two-tailed t-tests, uncorrected for multiple comparisons, at every msec time point, to be as conservative as possible. There was no session difference for any group on the 'Sign Only' condition (p > .05 for each group comparison); similarly, there were no differences for the MTT or NCC groups on the 'Sign & Drive' condition (p > .30 for each comparison), with the STT group showing more variance following training (p = .01). With respect to Experiment 3, there were also no age differences on either condition (p > .45 for each comparison). This indicates that the training effects observed were not due to learned eye movements, and that the age effects observed were likewise not a function of age-related differences in eye movements.

EEG analysis

Preprocessing was conducted using Analyzer software (Brain Vision, LLC), and data were then exported to EEGLAB35 for event-related spectral perturbation (ERSP) analyses. ERSP is a powerful approach for identifying stable features in a spontaneous EEG spectrum that are induced by experimental events, and it has been used successfully to isolate markers of cognitive control36,37. We selected this approach because we felt that a measure in the frequency domain would be more stable than other metrics given the dynamic environment of NeuroRacer. Blinks and eye-movement artifacts were removed through an independent components analysis (ICA), as were epochs with excessive peak-to-peak deflections (±100 µV).
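A minimal sketch of the peak-to-peak epoch rejection, interpreting the ±100 µV criterion as a 100 µV peak-to-peak limit per channel (one plausible reading of the Methods; the ICA-based blink removal is assumed to have run first):

```python
import numpy as np

def reject_epochs(epochs, threshold_uv=100.0):
    """Drop epochs whose peak-to-peak deflection exceeds the threshold.

    epochs: array of shape (n_epochs, n_channels, n_samples), in microvolts.
    An epoch is rejected if ANY channel exceeds the peak-to-peak limit.
    Returns the retained epochs.
    """
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)  # per epoch, per channel
    keep = (ptp <= threshold_uv).all(axis=-1)        # all channels within limit
    return epochs[keep]
```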
Given the use of d′, which takes into account performance on every trial, we collapsed across all trial types for all subsequent analyses. Epochs of −1000 to +1000 msec were created for ERSP total power analysis (evoked power + induced power), with theta band activity analyzed by resolving 4–100 Hz activity using a complex Morlet wavelet in EEGLAB and referencing to a −900 to −700 msec pre-stimulus baseline (thus relative power, in dB). Assessment of the 'Sign & Drive' ERSP data in 40 msec time bins, collapsing across all older adult participants and experimental sessions, revealed the onset of peak midline frontal activity to be between 360–400 msec post-stimulus, so all neural findings for the older adults were evaluated within this time window (see Supplementary Figure 7 for these topographies). For younger adults, peak theta activity occurred between 280–320 msec, so data from this time window were used for younger adults in across-group comparisons. The cognitive aging literature has demonstrated delayed neural processing in older adults using EEG38,39. For example, Zanto and colleagues38 demonstrated that older adults show patterns of selective processing similar to those of younger adults, but shifted to delayed processing with aging. For the data generated in this study, presented topographically in Supplementary Figure 7, it was clear that the peak of the midline frontal theta was delayed in older versus younger adults. To assess fairly whether there was a difference in power, it was necessary to select different comparison windows in an unbiased, data-driven manner for each group. Coherence data for each channel were first filtered in multiple pass bands using a two-way, zero-phase-lag, finite impulse response filter (eegfilt.m function in the EEGLAB toolbox) to prevent phase distortion.
We then applied a Hilbert transform to each of these time series (hilbert.m function), yielding results equivalent to sliding-window FFT and wavelet approaches40 and giving a complex time series h_x[n] = a_x[n] exp(iφ_x[n]), where a_x[n] and φ_x[n] are the instantaneous amplitudes and phases, respectively. The phase time series φ_x assumes values within (−π, π] radians with a cosine phase, such that π radians corresponds to the trough and 0 radians to the peak. To compute PLV for theta phase, for example, we extract instantaneous theta phases φ_θ[n] by taking the angle of h_θ[n]. Event-related phase time series are then extracted and, for each time point, the mean vector length R_θ[n] is calculated across trials (circ_r.m function in the CircStats toolbox)41. This mean vector length represents the degree of phase-locking, where an R of 1 reflects perfect phase-locking across trials and a value of 0 reflects perfectly randomly distributed phases. These PLVs were controlled for individual state differences at each session by baseline-correcting each individual's PLVs using their −200 to 0 msec period (thus, a relative PLV score was calculated for each subject).

Statistical analyses

Mixed-model ANOVAs with i) decade of life (Experiment 1), ii) training group (Experiment 2), or iii) age (Experiment 3) as the between-group factor were used for all behavioral and neural comparisons, with planned follow-up t-tests and the Greenhouse-Geisser correction utilized where appropriate. One-tailed t-tests were utilized to interrogate group differences for all transfer measures, given our a priori hypothesis of the direction of results following multitask training. All effect size values were calculated using Cohen's d42 and corrected for small sample bias using the Hedges and Olkin43 approach. The neural-behavioral correlations presented included only those MTT participants who demonstrated increased midline frontal theta power following training (14/15 participants).
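The effect-size correction cited above (Cohen's d with the Hedges & Olkin small-sample adjustment) can be sketched as follows; the pooled-SD form of d is a standard convention assumed here, as the Methods do not spell out the variant used:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d with a pooled standard deviation across two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Hedges & Olkin small-sample bias correction: g = J * d,
    with J = 1 - 3 / (4*(n1 + n2) - 9)."""
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return j * d
```

With two groups of 10, a raw d of 1.0 shrinks to g ≈ 0.958; the correction matters most at small n and vanishes as samples grow.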
For statistical analyses, we created 1 frontal and 3 posterior composite electrodes of interest (EOIs) from the average of the following electrodes: AFz, Fz, FPz, AF3, and AF4 (medial frontal); PO8, P8, and P10 (right posterior); PO7, P7, and P9 (left posterior); and POz, Oz, O1, O2, and Iz (central posterior), with PLVs calculated for each frontal-posterior EOI combination separately. For the coherence data, the factor of posterior EOI location (3) was modeled in the ANOVA but showed neither a main effect nor an interaction with the other factors.
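The PLV pipeline described earlier (band-pass filter → Hilbert transform → mean vector length across trials) was run in MATLAB/EEGLAB; an equivalent sketch is shown below, with the analytic signal computed via FFT rather than hilbert.m, and baseline correction over a pre-stimulus window left to the caller:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the standard Hilbert-transform construction:
    zero negative frequencies, double positive ones)."""
    x = np.asarray(x, dtype=float)
    n = x.shape[-1]
    spectrum = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h, axis=-1)

def phase_locking_value(trials):
    """Mean vector length of instantaneous phase across trials.

    trials: array (n_trials, n_samples) of band-pass-filtered data.
    Returns R[n] per time point: 1 = perfect phase-locking across
    trials, 0 = perfectly random phases.
    """
    phases = np.angle(analytic_signal(trials))        # instantaneous phase
    return np.abs(np.exp(1j * phases).mean(axis=0))   # mean resultant length
```

Identical trials yield R = 1 at every sample, and two trials in exact antiphase cancel to R = 0, matching the interpretation of the mean vector length in the text.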

            Long-term effects of cognitive training on everyday functional outcomes in older adults.

            Cognitive training has been shown to improve cognitive abilities in older adults but the effects of cognitive training on everyday function have not been demonstrated. To determine the effects of cognitive training on daily function and durability of training on cognitive abilities. Five-year follow-up of a randomized controlled single-blind trial with 4 treatment groups. A volunteer sample of 2832 persons (mean age, 73.6 years; 26% black), living independently in 6 US cities, was recruited from senior housing, community centers, and hospitals and clinics. The study was conducted between April 1998 and December 2004. Five-year follow-up was completed in 67% of the sample. Ten-session training for memory (verbal episodic memory), reasoning (inductive reasoning), or speed of processing (visual search and identification); 4-session booster training at 11 and 35 months after training in a random sample of those who completed training. Self-reported and performance-based measures of daily function and cognitive abilities. The reasoning group reported significantly less difficulty in the instrumental activities of daily living (IADL) than the control group (effect size, 0.29; 99% confidence interval [CI], 0.03-0.55). Neither speed of processing training (effect size, 0.26; 99% CI, -0.002 to 0.51) nor memory training (effect size, 0.20; 99% CI, -0.06 to 0.46) had a significant effect on IADL. The booster training for the speed of processing group, but not for the other 2 groups, showed a significant effect on the performance-based functional measure of everyday speed of processing (effect size, 0.30; 99% CI, 0.08-0.52). No booster effects were seen for any of the groups for everyday problem-solving or self-reported difficulty in IADL. 
Each intervention maintained effects on its specific targeted cognitive ability through 5 years (memory: effect size, 0.23 [99% CI, 0.11-0.35]; reasoning: effect size, 0.26 [99% CI, 0.17-0.35]; speed of processing: effect size, 0.76 [99% CI, 0.62-0.90]). Booster training produced additional improvement with the reasoning intervention for reasoning performance (effect size, 0.28; 99% CI, 0.12-0.43) and the speed of processing intervention for speed of processing performance (effect size, 0.85; 99% CI, 0.61-1.09). Reasoning training resulted in less functional decline in self-reported IADL. Compared with the control group, cognitive training resulted in improved cognitive abilities specific to the abilities trained that continued 5 years after the initiation of the intervention. clinicaltrials.gov Identifier: NCT00298558.

              The brain on stress: vulnerability and plasticity of the prefrontal cortex over the life course.

              The prefrontal cortex (PFC) is involved in working memory and self-regulatory and goal-directed behaviors and displays remarkable structural and functional plasticity over the life course. Neural circuitry, molecular profiles, and neurochemistry can be changed by experiences, which influence behavior as well as neuroendocrine and autonomic function. Such effects have a particular impact during infancy and in adolescence. Behavioral stress affects both the structure and function of PFC, though such effects are not necessarily permanent, as young animals show remarkable neuronal resilience if the stress is discontinued. During aging, neurons within the PFC become less resilient to stress. There are also sex differences in the PFC response to stressors. While such stress and sex hormone-related alterations occur in regions mediating the highest levels of cognitive function and self-regulatory control, the fact that they are not necessarily permanent has implications for future behavior-based therapies that harness neural plasticity for recovery.

                Author and article information

                Contributors
                Journal
                Frontiers in Human Neuroscience (Front. Hum. Neurosci.)
                Frontiers Media S.A.
                ISSN: 1662-5161
                11 April 2014
                2014
                Volume 8, Article 218
                Affiliations
                Department of Neurology, Physiology and Psychiatry, University of California San Francisco, San Francisco, CA, USA
                Author notes

                This article was submitted to the journal Frontiers in Human Neuroscience.

                Edited by: Guido P. H. Band, Leiden University, Netherlands

                Reviewed by: Geert Van Boxtel, Tilburg University, Netherlands

                Article
                DOI: 10.3389/fnhum.2014.00218
                PMCID: PMC3990041
                PMID: 24782745
                Copyright © 2014 Mishra and Gazzaley.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 30 December 2013; Accepted: 27 March 2014
                Page count
                Figures: 0, Tables: 0, Equations: 0, References: 41, Pages: 4, Words: 3799
                Categories
                Neuroscience
                Opinion Article

                Neurosciences
                cognitive training, neurotherapeutics, cognitive control, neuroplasticity, closed loop
