      Interdisciplinary Tensions When Developing Digital Interventions Supporting Individuals With ADHD




Introduction

Attention Deficit/Hyperactivity Disorder (ADHD) is the most prevalent childhood psychiatric condition, with a worldwide prevalence estimate between 5% and 7.2% (1, 2); in the United States it affects nearly 9.4% of children aged 2–17 years (3). Individuals with ADHD display symptoms of inattention (most notably, they are easily distracted and have trouble sustaining attention for prolonged periods), hyperactive/impulsive behaviors, and difficulty regulating their bodies and emotions (4). Behavioral interventions are promising approaches to improving control of attention and impulsivity (5), especially when delivered through technology, which has shown promising results in supporting children and adults with ADHD [see recent reviews (6–8)]. Our collaborative team of experts in psychology, psychiatry, computer science, and human-computer interaction (HCI) recently published a book on Digital Health Interventions (DHI) for individuals with ADHD and related difficulties (6), and two review papers (7, 8) in which we focused on two domains of scientific inquiry: design and computing, which includes computer and information sciences, HCI, and related fields; and clinical, which includes medical and psychological fields. Our analysis observed tensions between these two fields around research project lifecycles, requirements and design methods, implementation, evaluation methods, and measurement. Blandford et al. (9) described these contrasts in practice between HCI and health, noting how such tensions complicate translation across fields. These difficulties ultimately limit adoption by clinicians, patients, and families, and many innovative technologies consequently fail to make a demonstrable impact on health outcomes.
This opinion article aims to draw, from our experiences alongside our recent literature reviews, the interdisciplinary tensions that arise when developing and studying DHI for ADHD, to specify recommendations and build a multidisciplinary agenda that will improve the quality and impact of DHI.

Life Cycles

DHIs for ADHD support diagnosis, assessment, and interventions that target attention, social-emotional skills, self-regulation, motor skills, and academic and vocational skills. Despite a shared interest in these challenges and goals, researchers in the different fields tend to follow different research lifecycles. HCI researchers typically follow a user-centered design approach that involves identifying user needs, understanding the context of use, and designing digital tools iteratively and collaboratively (10). This approach involves users in all stages of development, from early design to prototyping to full system development and user studies. For example, Sonne et al. (11) conducted a contextual inquiry to design an initial version of MOBERO, a mobile application supporting families that include children with ADHD during morning and bedtime routines. They piloted the tool with two families to gather more requirements and evolve MOBERO's functionality. Using the eventual stable DHI, they conducted a deployment study with 13 families that provided evidence of the usability and potential effectiveness of MOBERO (12). In clinical research, the lifecycle should start with development based on a well-known or evidence-based theory and a hypothesized mechanism of action, followed by pilot testing, randomized controlled trials (RCTs), and subsequent implementation studies (13). In a recent mapping review of ADHD and DHI (8), although we identified 51 studies involving DHI for ADHD, only 12 reported RCTs examining DHI outcomes in children or adolescents with ADHD.
None of the products developed or studied appeared to have reached the stage of implementation research. One of the RCTs included in our review examined outcomes from Plan-It Commander (14), an internet-based serious game for children with ADHD that builds on theories of self-regulation (15), social cognition (16), and learning (17) to teach time management, planning and organizing, and prosocial skills. The game was evaluated in a 20-week RCT with 182 children (aged 8–10 years) with ADHD (14), in which parents and teachers reported improvements in social skills surrounding gameplay, but reports of planning and organizing skills did not differ significantly between groups (18). With a mixed team of clinical and computational scientists, we have embraced an approach that blends user-centered design with clinical research methods. Applying self-regulation theory and evidence-based interventions for ADHD (19), we designed an app to assist parents in supporting the behavioral goals of their child with ADHD and to promote the use of self-regulation strategies in youth with ADHD. To understand the application of theory to practice through DHI, we engage user-centered design methods, including co-design with children with ADHD and early user testing and engagement with caregivers (parents and teachers) (20). We are now testing CoolCraig, a mobile and smartwatch application that supports a token economy and zones-of-regulation strategies in a family setting, which resulted from this blend of theoretical and empirical design work (21). We will continue to iterate on this system with the ultimate goal of creating a stable version for an RCT and eventual translation to clinical and educational practice.

Requirements and Design Methods

Clinical and computational approaches to DHI design for ADHD involve end-users to some degree. However, how their input is considered during the design process varies greatly across projects and fields.
Additionally, over time, all the fields in this study appear to be moving toward an ethos of greater inclusion, which can be a difficult shift in norms and culture within and across disciplines. HCI researchers frequently use field-based and contextual design methods, including ethnographic approaches, to understand the needs and practices of people with ADHD and related stakeholders. Thus, HCI researchers must develop strategies for engaging individuals with ADHD during these activities, especially when working with children. For example, Fekete and Lucero (22) found that considering the needs, preferences, and desires of children with ADHD, in tandem with a structured environment and scaffolds, can motivate them to actively participate in the co-design of DHI. These kinds of efforts can help HCI researchers as well as user experience professionals, therapists, teachers, and even parents to center the needs and interests of children with ADHD in their projects. In clinical research, designs translate current theories into digital interventions. For example, the first FDA-approved video game for treating children with ADHD (23) was developed following the fundamentals of NeuroRacer, a video game designed to support multitasking in older adults (24). NeuroRacer was adapted into a mobile DHI for children with ADHD that was evaluated first in a proof-of-concept study (25) and then in an RCT with 857 patients (26). In this case, the clinicians selected theories that, in their experience, had the most potential to support the expected clinical outcomes. To balance the inclusion of evidence-based clinical knowledge and the lived experiences of people with ADHD, interdisciplinary teams must develop innovative strategies to ensure attention is paid to all types of expertise.
In our research, we balance those approaches by selecting theories with ADHD experts and conducting qualitative research with individuals with ADHD and clinicians, so that both have the chance to be co-designers of the digital intervention.

Implementation

In computational fields, fundable and publishable research implementations often must include some innovation in software or hardware. Therefore, it was not surprising that many papers in our past reviews (6) proposed algorithms to assess ADHD using different machine learning approaches to classify brain activity (27–40). Similarly, papers often contributed to the scholarly discourse by developing novel prototypes (41–44) whose efficacy had not been demonstrated. The prototype itself, sometimes accompanied by empirical evidence about usability, is considered a contribution in the HCI field (45), including protocols for data privacy and analysis of data gathered from input devices. On the clinical side, the term implementation is used for the final step, "when a complex intervention (incorporating digital technologies) has been fully tested" (9). Thus, before the implementation stage, at least efficacy RCTs should be conducted to fully test the DHI. Most of these types of studies for ADHD use commercially available devices such as mobile phones (46) or personal computers (14, 18) with software or systems that are primarily "off-the-shelf" [e.g., neurofeedback training (47–49)]. Using commercially available systems allows long, complex interventions to test efficacy once usability and safety have already been determined. As in our design research approach, we again take a hybrid approach, relying on commercially available devices such as the iPhone and Apple Watch while including custom-developed applications. In this case, we seek to evaluate the use, adoption, and potential efficacy of novel designs and systems as implemented on so-called "off-the-shelf" devices.
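To make the custom-application layer concrete, the token economy strategy mentioned earlier can be sketched as a minimal ledger: children earn tokens for completed behavioral goals and exchange them for caregiver-defined rewards. This is an illustrative sketch only; the class and method names are ours and do not reflect CoolCraig's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TokenEconomy:
    """Illustrative token-economy ledger (hypothetical, not CoolCraig's code):
    tokens are earned for completed goals and spent on rewards."""
    balance: int = 0
    log: list = field(default_factory=list)

    def complete_goal(self, goal: str, tokens: int) -> None:
        # Award tokens when a caregiver confirms a goal was met.
        self.balance += tokens
        self.log.append(("earn", goal, tokens))

    def redeem(self, reward: str, cost: int) -> bool:
        # Exchange tokens for a reward; refuse if the balance is too low.
        if cost > self.balance:
            return False
        self.balance -= cost
        self.log.append(("spend", reward, cost))
        return True
```

In practice, a design like this would be distributed across the phone (caregiver-facing goal confirmation) and the watch (child-facing balance display), which is one reason usability testing with both parties matters.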
Evaluation Methodologies

The "gold standard" for evaluating a DHI in a clinical field is the RCT (50). Such study designs may compare control or waitlist conditions with experimental conditions. They involve extensive planning, with a finalized digital tool and intervention prior to the trial. Given the stability of design required, RCTs tend to include commercially available applications or devices [e.g., ACTIVATE™ (51–53); RoboMemo (54)] and use standardized assessments with well-established validity and high reliability to assess outcomes. Moreover, the inclusion and exclusion criteria typically require participants to exhibit clinically significant symptoms of ADHD. When studying technological tools focused on diagnosing or assessing ADHD, researchers conduct the diagnosis using standard clinical assessment approaches to determine whether participants meet the diagnostic criteria for the disorder (55, 56) and compare the tool under investigation with these clinical measures. These studies usually require robust, well-diagnosed samples of at least 50 to 100 participants. In HCI and related fields, a formative evaluation testing usability, usefulness, acceptability, and user experience can be conducted even with a small number of participants [e.g., (57, 58)] and sometimes "in the wild" [e.g., (11, 59, 60)]. The inclusion and exclusion criteria, despite often being as strict as those in clinical fields, are frequently not well-described in publications [e.g., (61, 62)]. Formal diagnostic assessment is often not conducted or required for participation in these studies. These differences in approach draw out two clear tensions in DHI research more broadly. In any given research study, a focus on adoption and usability will identify approaches that end users would engage with but may not provide as much evidence for efficacy.
A focus on clinically verified approaches will likely mean that the intervention is efficacious, but research participants must use the tools at a certain dosage to measure that efficacy and will have been either required or incentivized to do so as part of the research study. It is extremely difficult, if not impossible, in a single research study to measure both whether and how people will use a tool and its effects when used properly. Currently, in our research, we are conducting a formative evaluation with a small number of participants, following a more HCI-style approach. However, we also use standardized assessments for pre- and post-evaluations in keeping with clinical research standards, with the aim of moving toward an RCT to examine efficacy.

Discussion: Recommendations

Our research (design, literature, empirical, and technical) raises important questions about creating DHI that are valid, efficacious, and accepted by end-users, including both people with ADHD and the clinicians who might recommend or prescribe them. At the same time, it raises questions about developing innovative technologies that are also stable enough to withstand clinical-quality evaluations. As improved software engineering, Artificial Intelligence (AI), and design techniques allow for more rapid prototyping of stable yet innovative tools, we can now conduct clinical studies quickly while still engaging in iterative and interactive design approaches. Combining empirically based theories of ADHD with contextual design enriches the understanding of requirements. Co-design with people with ADHD and traditional "experts" leads to better and more inclusive design. However, researchers must carefully engage these groups, sometimes separately, to ensure that all voices are heard and a variety of views are taken into account. Although "implementation" has a very different meaning in the two fields (implement the design solution vs.
implement the DHI in the long term), the better an HCI implementation is done, the more likely it is that clinicians will take up the solution in practice. As an emergent multidisciplinary field, researchers working on DHI for ADHD should commit to describing participants (samples) in consistent ways and to providing details about software and hardware implementation and the context of use. Likewise, researchers must commit to creating usable and appealing DHI as a goal equally important to creating clinically efficacious, evidence-based tools. Accomplishing both requires a commitment, upfront, to the resources, time, and effort required (63). Publication standards must also allow greater flexibility in multidisciplinary approaches, such that researchers can engage communities around both initial design probes and the longer and larger studies leading to an RCT. Indeed, a spectrum of approaches must be applauded, not simply allowed. As technologies change rapidly and family contexts develop as children grow, this flexibility is essential when considering DHI for ADHD. Ultimately, literature searches, publication standards, and dissemination norms must allow HCI researchers to learn more about clinical theory and practice, and clinical researchers to engage with and appreciate the iterative design approaches of HCI. Interdisciplinary and diverse teams are needed to create innovative DHIs that translate ideas and prototypes into commercially available products. While we have focused in this article on the fields from which our interdisciplinary team originates, we recognize that a broad interdisciplinary approach across more fields would be ideal for truly innovative but also saleable and sustainable research-tested approaches to ADHD more broadly. Moreover, those teams need financial support to conduct pilot testing at the early stages of technology development, and costs increase once they conduct clinical trials.
Consideration must also be given to the tension between the time it takes for technology to be updated (or become obsolete) and the time required for interdisciplinary teams to obtain sufficient funding for testing. By the time teams receive sufficient grant support, substantial modifications to the study design may already be required. Obtaining support for interdisciplinary research also poses its own challenges; at least in the United States, one institution focuses on funding "clinical research" (the National Institutes of Health: NIH) and another on investing in non-medical fields (the National Science Foundation: NSF). In recent years, efforts to bridge this divide have been developed and used to a certain degree (e.g., calls from NIH for DHIs and collaborative grant opportunities from the NIH and NSF). However, further systemic changes are needed to develop DHIs not only for ADHD but also for other groups that could benefit from them.

Author Contributions

FC, GH, and KL contributed to the conception of the paper. FC and EM organized the information and wrote the first draft of the manuscript. FC, GH, SS, MN, and KL wrote sections of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.

Funding

Research reported in this paper was supported by AHRQ under award number 1R21HS026058, a Jacobs Foundation Advanced Research Fellowship, and the Jacobs Foundation CERES network.

Author Disclaimer

The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding organizations.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.


Most cited references (63)


          ADHD prevalence estimates across three decades: an updated systematic review and meta-regression analysis.

Previous studies have identified significant variability in attention-deficit/hyperactivity disorder (ADHD) prevalence estimates worldwide, largely explained by methodological procedures. However, increasing rates of ADHD diagnosis and treatment throughout the past few decades have fuelled concerns about whether the true prevalence of the disorder has increased over time. We updated the two most comprehensive systematic reviews on ADHD prevalence available in the literature. Meta-regression analyses were conducted to test the effect of year of study in the context of both methodological variables that determined variability in ADHD prevalence (diagnostic criteria, impairment criterion and source of information), and the geographical location of studies. We identified 154 original studies and included 135 in the multivariate analysis. Methodological procedures investigated were significantly associated with heterogeneity of studies. Geographical location and year of study were not associated with variability in ADHD prevalence estimates. Confirming previous findings, variability in ADHD prevalence estimates is mostly explained by methodological characteristics of the studies. In the past three decades, there has been no evidence to suggest an increase in the number of children in the community who meet criteria for ADHD when standardized diagnostic procedures are followed.

            Video game training enhances cognitive control in older adults

Cognitive control is defined by a set of neural processes that allow us to interact with our complex environment in a goal-directed manner (1). Humans regularly challenge these control processes when attempting to simultaneously accomplish multiple goals (i.e., multitasking), generating interference as the result of fundamental information processing limitations (2). It is clear that multitasking behavior has become ubiquitous in today's technologically dense world (3), and substantial evidence has accrued regarding multitasking difficulties and cognitive control deficits in our aging population (4). Here we show that multitasking performance, as assessed with a custom-designed 3-D video game (NeuroRacer), exhibits a linear age-related decline from 20 to 79 years of age. By playing an adaptive version of NeuroRacer in multitasking training mode, older adults (60–85 years old) reduced multitasking costs compared to both an active control group and a no-contact control group, attaining levels beyond those of untrained 20-year-olds, with gains persisting for six months. Furthermore, age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by multitasking training (i.e., enhanced midline frontal theta power and frontal-posterior theta coherence). Critically, this training resulted in performance benefits that extended to untrained cognitive control abilities (i.e., enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting the training-induced boost in sustained attention and the preservation of multitasking improvement six months later.
These findings highlight the robust plasticity of the prefrontal cognitive control system in the aging brain, and provide the first evidence of how a custom-designed video game can be used to assess cognitive abilities across the lifespan, evaluate underlying neural mechanisms, and serve as a powerful tool for cognitive enhancement. In a first experiment, we evaluated multitasking performance across the adult lifespan. 174 participants spanning six decades of life (ages 20–79; ~30 individuals per decade) played a diagnostic version of NeuroRacer to measure their perceptual discrimination ability ('sign task') with and without a concurrent visuomotor tracking task ('driving task'; see Supplementary Materials for details of NeuroRacer). Performance was evaluated using two distinct game conditions: 1) 'Sign Only': respond as rapidly as possible to the appearance of a sign only when a green circle was present, and 2) 'Sign & Drive': simultaneously perform the sign task while maintaining a car in the center of a winding road using a joystick (i.e., 'drive'; see Figure 1a). Perceptual discrimination performance was evaluated using the signal detection metric of discriminability (d'). A 'cost' index was used to assess multitasking performance by calculating the percentage change in d' from 'Sign Only' to 'Sign & Drive', such that a greater cost (i.e., a more negative % cost) indicates increased interference when simultaneously engaging in the two tasks (see Methods Summary). Prior to the assessment of multitasking costs, an adaptive staircase algorithm was used to determine the difficulty levels at which each participant performed the perceptual discrimination and visuomotor tracking tasks in isolation at ~80% accuracy. These levels were then used to set the parameters of the component tasks in the multitasking condition, so that each individual played the game at a customized challenge level.
This assured that comparisons would inform differences in the ability to multitask, and not merely reflect disparities in component skills (see Methods, Supplementary Figures 1 & 2, and Supplementary Materials for more details). Multitasking performance diminished significantly across the adult lifespan in a linear fashion (i.e., increasing cost; see Figure 2a and Supplementary Table 1), with the only significant difference in cost between adjacent decades being the increase from the 20s (−26.7% cost) to the 30s (−38.6% cost). This deterioration in multitasking performance is consistent with the pattern of performance decline across the lifespan observed for fluid cognitive abilities, such as reasoning (5) and working memory (6). Thus, using NeuroRacer as a performance assessment tool, we replicated previously evidenced age-related multitasking deficits (7, 8), and revealed that multitasking performance declines linearly as we advance in age beyond our twenties. In a second experiment, we explored whether older adults who trained by playing NeuroRacer in multitasking mode would exhibit improvements in their multitasking performance on the game (9, 10) (i.e., diminished NeuroRacer costs). Critically, we also assessed whether this training transferred to enhancements in their cognitive control abilities (11) beyond those attained by participants who trained on the component tasks in isolation. In designing the multitasking training version of NeuroRacer, steps were taken to maintain both equivalent difficulty and engagement in the component tasks to assure a prolonged multitasking challenge throughout the training period: difficulty was maintained using an adaptive staircase algorithm to independently adjust the difficulty of the 'sign' and 'driving' tasks following each 3-min run based on task performance, and balanced task engagement was motivated by rewards given only when both component tasks improved beyond 80% on a given run.
We assessed the impact of training with NeuroRacer in a longitudinal experiment that involved randomly assigning 46 naïve older adults (60–85 years; 67.1 ± 4.2 years) to one of three groups: Multitasking Training (MTT; n=16), Single-task Training (STT; n=15) as an active control, or No-Contact Control (NCC; n=15). Training involved playing NeuroRacer on a laptop at home for 1 hour a day, 3 times a week, for 4 weeks (12 total hours of training), with all groups returning for a 1-month Post-training and a 6-month follow-up assessment (Figure 1b). The MTT group played the 'Sign & Drive' condition exclusively during the training period, while the STT group divided their time between a 'Sign Only' and a 'Drive Only' condition, and so was matched for all factors except the presence of interference. In addition to a battery of cognitive control tests used to assess the breadth of training benefits (see Supplementary Table 2), the neural basis of training effects was evaluated using electroencephalography (EEG) recorded at Pre- and Post-training visits while participants performed a neural assessment version of NeuroRacer. Analysis showed that only the MTT group's multitasking performance index significantly improved from Pre- (−64.2% cost) to Post-training (−16.2% cost; Figure 2b), thus supporting the role of interference during game play as a key mechanistic feature of the training approach. In addition, although cost reduction was observed only in the MTT group, equivalent improvement in component task skills was exhibited by both STT and MTT (see Supplementary Figures 4 and 5). This indicates that enhanced multitasking ability was not solely the result of enhanced component skills, but a function of learning to resolve interference generated by the two tasks when performed concurrently.
Moreover, the d' cost improvement following training was not the result of a task tradeoff, as driving performance costs also diminished for the MTT group from Pre- to Post-training (see Supplementary Materials). Notably, in the MTT group the multitasking performance gains remained stable 6 months after training without booster sessions (at 6 months: −21.9% cost). Interestingly, the MTT group's Post-training cost improved significantly beyond the cost level attained by a group of 20-year-olds who played a single session of NeuroRacer (−36.7% cost; Experiment 3), with effect sizes of .50–1.0 (using Cohen's d, see Methods) for both cognitive control performance and neural measures versus either control group. The sustained multitasking cost reduction over time and the evidence of generalizability to untrained cognitive control abilities provide optimism for the use of an adaptive, interference-rich video game approach as a therapeutic tool for the diverse populations that suffer from cognitive control deficits (e.g., ADHD, depression, dementia). These findings stress the importance of a targeted training approach, as reinforced by a recent study that observed a distinct lack of transfer following non-specific online cognitive exercises (30). In conclusion, we provide evidence of how a custom-designed video game targeting impaired neural processes in a population can be used to diagnose deficits, assess underlying neural mechanisms, and enhance cognitive abilities.

Methods Summary

All participants had normal or corrected vision, no history of neurological, psychiatric, or vascular disease, and were not taking any psychotropic or hypertension medications. In addition, they were considered 'non-gamers', given that they played less than 2 hours of any type of video game per month. For NeuroRacer, each participant used their left thumb for tracking and their right index finger for responding to signs on a Logitech (Logitech, USA) gamepad controller.
Participants engaged in three 3-minute runs of each condition in a randomized fashion. Signs were randomly presented in the same position over the fixation cross for 400 msec every 2, 2.5, or 3 seconds, with the speed of driving dissociated from sign presentation parameters. The multitasking cost index was calculated as follows: [('Sign & Drive' performance − 'Sign Only' performance) / 'Sign Only' performance] × 100. EEG data for 1 MTT Post-training participant and 1 STT Pre-training participant were corrupted during acquisition. 2 MTT participants, 2 STT participants, and 4 NCC participants were unable to return to complete their 6-month follow-up assessments. Critically, no between-group differences were observed for neuropsychological assessments (p = .52) or Pre-training data involving: i) NeuroRacer thresholding for both Road (p = .57) and Sign (p = .43), ii) NeuroRacer component task performance (p > .10 for each task), iii) NeuroRacer multitasking costs (p = .63), iv) any of the cognitive tests (all ANOVAs at Pre-training: p ≥ .26), v) ERSP power for either condition (p ≥ .12), and vi) coherence for either condition (p ≥ .54).

Methods

Participants

All participants were recruited through online and newspaper advertisements. For Experiment 1, 185 (90 male) healthy, right-handed individuals consented to participate according to procedures approved by the University of California, San Francisco. For Experiments 2 & 3, 60 (33 male) older adults and 18 (9 male) young adults participated without having been a part of Experiment 1 (see Supplementary Table 3 for demographic descriptions and Supplementary Figure 9 for Experiment 2 participant enrollment).
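The cost index above, together with the d' metric it operates on, can be sketched in Python. The d' formula (z-transformed hit rate minus z-transformed false-alarm rate) is the standard signal-detection definition rather than code from the study, and the function names are ours:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection discriminability: z(hit rate) - z(false-alarm rate).
    Rates must be strictly between 0 and 1 for the inverse CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def multitasking_cost(d_single: float, d_multi: float) -> float:
    """Percentage change in d' from 'Sign Only' to 'Sign & Drive'.
    A more negative cost indicates greater interference."""
    return (d_multi - d_single) / d_single * 100.0
```

For example, a participant whose d' drops from 2.0 in 'Sign Only' to 1.2 in 'Sign & Drive' has a −40% multitasking cost.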
Participants who were unable to perform the tasks, as indicated by tracking performance below 15% (6 individuals from Experiment 1, 8 individuals from Experiment 2) or a false positive rate greater than 70% (5 individuals from Experiment 1, 6 individuals from Experiment 2) during any one visit or across more than 4 individual training sessions, were excluded.

Thresholding

Prior to engaging in NeuroRacer, participants underwent an adaptive thresholding procedure for discrimination (nine 120-sec runs) and tracking ability (twelve 60-sec runs) to determine a 'sign' and 'drive' level at which each participant would perform at ~80% accuracy (see Supplementary Figures 1 & 2). Having individuals engage each condition in their own 'space' following thresholding procedures facilitated a fairer comparison across ages and abilities. This procedure is frequently omitted in other studies, which leads to difficulty interpreting performance differences (especially in multitasking) as being the result of differences in interference processing rather than differences in component task skills. For the perceptual discrimination thresholding, each participant's performance for a given run was determined by calculating a proportion-correct score involving: i) correct responses to targets, ii) correctly avoided non-targets, iii) late responses to targets, and iv) responses to non-targets. At the end of each run, if this score was greater than 82.5%, the subsequent run would be played at a higher level, which had a correspondingly shorter time window for responses to targets. More specifically, the adaptive algorithm made proportional level changes depending upon the participant's deviation from this ~80% median, such that each 1.75% increment away from the median corresponded with a change in level (see Supplementary Figure 1a). Thus, a 90% performance would lead to a 40-msec reduction in the time window, while a 55% (or lower) performance would lead to a 100-msec lengthening of that window.
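The level-adjustment rule described above can be sketched as follows. The 1.75% step and the ~80% target come from the text; the milliseconds-per-level constant is our assumption chosen to roughly reproduce the two worked examples (90% → ~40 ms shorter, 55% → ~100 ms longer), not a published parameter:

```python
def adjust_response_window(window_ms: float, score: float,
                           median: float = 0.80, step: float = 0.0175,
                           ms_per_level: float = 7.0) -> float:
    """Adaptive staircase sketch: each `step` (1.75%) of deviation from the
    ~80% target moves difficulty one level. Higher scores shorten the
    response window; lower scores lengthen it.
    `ms_per_level` is an illustrative assumption, not a value from the paper."""
    delta_levels = round((score - median) / step)
    return window_ms - delta_levels * ms_per_level
```

With these assumed constants, a 90% run shortens a 400-ms window by 42 ms and a 55% run lengthens it by 98 ms, close to the figures quoted in the text.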
Thresholding parameters for road levels followed a similar pattern, with each 0.58% increment away from the same median corresponding to a change of one level (see Supplementary Figure 1b). These parameters were chosen following extensive pilot testing to: (1) minimize the number of trial runs until convergence was reached and (2) minimize convergence instability, while (3) maximizing the sampling resolution of user performance. The first 3 driving thresholding blocks were considered practice to familiarize participants with the driving portion of the task and were not analyzed. A regression over the 9 thresholding runs in each case was computed to select the time window and road speed expected to promote ~80% accuracy on each distraction-free task throughout the experiment (see Supplementary Figure 2). All participants began the thresholding procedures at the same road (level 20) and sign (level 29) levels.

Conditions

Following the driving and sign thresholding procedures, participants performed 5 different three-minute 'missions', with each mission performed three times in a pseudo-randomized fashion. In addition to the 'Sign Only', 'Drive Only', and 'Sign & Drive' conditions, participants also performed a 'Sign With Road' condition, in which the car was placed on 'auto-pilot' for the duration of the run and participants responded only to the signs, and a 'Drive With Signs' condition, in which participants were told to ignore the signs and continue to drive as accurately as possible. Data from these two conditions are not presented here. Feedback was given at the end of each run as the proportion correct across all signs presented for the perceptual discrimination task (although we used the signal detection metric of discriminability (d′) 31 to calculate our 'Cost' index throughout the study) and the percentage of time spent on the road (see Supplementary Figure 10).
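The level-selection regression over the thresholding runs described above could be sketched as a simple linear fit of accuracy against level, solved for the level expected to yield the ~80% target. This is illustrative only; the published procedure is described no more specifically than "a regression over the 9 thresholding runs", so the linear form and function name are assumptions.

```python
import numpy as np

def select_level(levels, accuracies, target=80.0):
    """Fit accuracy = a*level + b across the thresholding runs, then solve
    for the level expected to yield the target accuracy (~80%)."""
    a, b = np.polyfit(levels, accuracies, 1)  # slope, intercept
    return (target - b) / a
```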
Prior to the start of each subsequent run, participants were informed which condition they would engage in next and how many experimental runs remained. Including thresholding, the testing session encompassed 75 min of gameplay.

NeuroRacer training and testing protocol

For Experiment 1, participants were seated in a quiet room in front of an Apple MacBook Pro 5.3 laptop computer at an approximate distance of 65 cm from the 15" screen. For Experiments 2 and 3, participants were seated in a dark room with the screen ~100 cm away. All training participants trained at their homes using an Apple MacBook Pro 5.3 laptop computer while sitting ~60 cm from the screen (see Supplementary Figure 11a). For Experiment 1, each perceptual discrimination-based experimental run (180 sec) contained 36 relevant targets (green circles) and 36 lures (green, blue, and red pentagons and squares). For Experiments 2 & 3, the sign ratio was changed to 24/48. Prior to training, each participant was given a tutorial demonstrating how to turn on the laptop, properly set up the joystick, and navigate to the experiment; was shown what the first day of training would be like and how to interpret the feedback provided; and was encouraged to find a quiet environment at home for the training sessions. If requested by the participant, a lab member would visit the participant's home to help set up the computer and instruct training. In addition, to encourage and assess compliance and hold participants to a reasonable schedule, participants were asked to plan their training days and times with the experimenter for the entire training period and enter this information into a shared calendar. Each participant (regardless of group) was informed that their training protocol was designed to train cognitive control faculties, using the same dialogue to avoid expectancy differences between groups.
There was no contact between participants of different groups, and participants were encouraged to avoid discussing their training protocol with others so as not to bias participants in the other groups. Each day of training, participants were shown a visualization of a map representing their 'training journey' to provide a sense of accomplishment following each training session (Supplementary Figure 11b). They were also shown a brief video reminding them how to hold the controller, which buttons to use, their previous level(s) reached, and what the target would be that day for the perceptual discrimination condition. In addition, the laptop's built-in video camera was activated (indicated by a green light) for the duration of each run, providing i) visual assessment of task engagement, ii) motivation for participants to comply with the training task instructions, and iii) information about any run where performance was dramatically poorer than others. Participants were discouraged from playing 2 days in a row and encouraged to play at the same time of day. MTT participants were reminded that an optimal training experience depended upon doing well on both their sign and drive performance without sacrificing performance on one task for the other. While the STT group was provided a 'Driving' or 'Sign' score following each training run, the MTT group was also provided an 'Overall' score following each run as a composite of performance on both tasks (see Supplementary Figures 5 and 11). Following the completion of every 4th run, participants were rewarded with a 'fun fact' screen regarding basic human physiology (http://faculty.washington.edu/chudler/ffacts.html) before beginning their subsequent training run.
To assess whether training was a 'fun' experience, participants in each training group rated the training experience on their final visit to the laboratory on a scale of 1 (minimally) to 10 (maximally) (MTT: 6.5 ± 2.2; STT: 6.9 ± 2.4; t = .65, p = .52). Critically, training groups did not differ on their initial thresholding values for either Road (F(2,45) = .58, p = .57) or Sign (F(2,45) = .87, p = .43). Each laptop was configured to transmit NeuroRacer performance data wirelessly to our secure lab server using Dropbox® as each run was completed. This facilitated near-real-time monitoring of compliance and data integrity, as participants would be contacted if i) they failed to complete all 20 training runs on a scheduled training day, ii) 'Sign Only' and 'Drive Only' performance suggested that a problem had occurred within a given training session, or iii) a designated training day was missed. Individuals without wireless internet at home were instructed to visit an open wireless internet location (e.g., coffee shop, public library) at least once a week to transfer data; if this was not an option, researchers arranged weekly home visits to acquire the data. All participants were contacted via email and/or phone on a weekly basis to encourage and discuss their training; similarly, in the event of any questions regarding the training procedures, participants were able to contact the research staff via phone and email. Pre- and Post-training evaluations involving cognitive testing and NeuroRacer EEG took place across 3 different days (appointment and individual test order were counterbalanced), with all sessions completed approximately within the span of a week (total number of days to complete all Pre-training testing: 6.5 ± 2.2; Post-training testing: 6.1 ± 1.5). Participants returned for their first Post-training cognitive assessments 2.0 ± 2.2 days following their final training session.
While scheduled for 6 months after their final testing session, the 6-month follow-up visits actually occurred on average 7.6 ± 1.1 months afterwards due to difficulties in maintaining (and rescheduling) these distant appointments. Critically, no group differences were present on any of these time-of-testing measures (p > .18 for each comparison).

Cognitive Battery

The cognitive battery (see Supplementary Table 2) consisted of tasks spanning different cognitive control domains: sustained attention (TOVA; see Supplementary Figure 12a), working memory (delayed recognition; see Supplementary Figure 12b), visual working memory capacity (see Supplementary Figure 13), dual-tasking (see Supplementary Figure 14), useful field of view (UFOV; see Supplementary Figure 15), and two control tasks of basic motor response and speed of processing (stimulus detection task, digit symbol substitution task; see Supplementary Figure 16). Using the analysis metrics regularly reported for each measure, we performed a mixed-model ANOVA of Group (3: MTT, STT, NCC) × Session (2: Pre, Post) × Cognitive test (11; see Supplementary Table 2) and observed a significant 3-way interaction (F(20, 400) = 2.12, p = .004), indicating that training had selective benefits across group and test. To interrogate this interaction, each cognitive test was analyzed separately with Session × Group ANOVAs to isolate those measures that changed significantly following training. We also present the p-value associated with the ANCOVAs for each measure in Supplementary Table 2 (dependent measure = Post-training performance, covariate = Pre-training performance), which showed a pattern of effects similar to most of the 2-way ANOVAs.
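The per-measure ANCOVA described above (Post-training performance as the dependent measure, Pre-training performance as the covariate, Group as the factor) can be sketched as a general linear model comparison. This is a minimal numpy sketch of a one-way ANCOVA F-test, not the authors' statistics package; the function name and the two-or-more-groups assumption are mine.

```python
import numpy as np

def ancova_group_f(pre, post, group):
    """F-test for a group effect on post-test scores, adjusting for pre-test.
    Compares a full model (intercept + pre + group dummies) against a reduced
    model (intercept + pre); assumes at least two groups."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    levels = sorted(set(group))
    n = len(post)
    X_reduced = np.column_stack([np.ones(n), pre])
    dummies = np.column_stack(
        [[g == lvl for g in group] for lvl in levels[1:]]).astype(float)
    X_full = np.column_stack([X_reduced, dummies])

    def rss(X):
        beta = np.linalg.lstsq(X, post, rcond=None)[0]
        return np.sum((post - X @ beta) ** 2)

    df1 = len(levels) - 1            # group effect
    df2 = n - X_full.shape[1]        # residual
    F = ((rss(X_reduced) - rss(X_full)) / df1) / (rss(X_full) / df2)
    return F, df1, df2
```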
The ANCOVA approach is considered more suitable when the primary outcome of interest is post-test performance not conditional on (or predictable from) pre-test performance, as opposed to characterizing gains achieved from Pre-training performance (e.g., Group × Session interactions) 32 ; however, both are appropriate statistical tools that have been used to assess cognitive training outcomes 27,33 (see Supplementary Figure 17 as an example).

EEG Recordings and Eye Movements

Neural data were recorded using an Active Two head cap (Cortech Solutions) with a BioSemi ActiveTwo 64-channel EEG acquisition system in conjunction with BioSemi ActiView software (Cortech Solutions). Signals were amplified and digitized at 1024 Hz with 16-bit resolution. Anti-aliasing filters were used, and data were band-pass filtered between 0.01–100 Hz during acquisition. For each EEG recording session, the NeuroRacer code was modified to flash a 1 × 1 inch white box for 10 msec at one of the corners of the stimulus presentation monitor upon the appearance of a sign. A photodiode (http://www.gtec.at/Products/Hardware-and-Accessories/g.TRIGbox-Specs-Features) captured this change in luminance to facilitate precise time-locking of the neural activity associated with each sign event. During the experiment, these corners were covered with tape to prevent participants from being distracted by the flashing light. To ensure that any training effects were not due to changes in eye movement, electrooculographic data were analyzed as described by Berry and colleagues 34 . Using this approach, vertical (VEOG = FP2 − IEOG electrodes) and horizontal (HEOG = REOG − LEOG electrodes) difference waves were calculated from the raw data and baseline-corrected to the mean prestimulus activity. The magnitude of eye movement was computed as √(VEOG² + HEOG²).
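The eye-movement magnitude and its across-trial variance described above reduce to a few lines of numpy. The function names are mine; the formulas follow the text directly.

```python
import numpy as np

def eye_movement_magnitude(veog, heog):
    """Eye-movement magnitude from baseline-corrected vertical and horizontal
    EOG difference waves: sqrt(VEOG^2 + HEOG^2)."""
    return np.sqrt(np.asarray(veog) ** 2 + np.asarray(heog) ** 2)

def across_trial_variance(magnitudes):
    """Variance of the magnitude across trials at each time point.
    magnitudes: array of shape (trials, timepoints)."""
    return np.var(np.asarray(magnitudes, float), axis=0)
```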
The variance in the magnitude of eye movement was computed across trials, and the mean variance was specifically examined from −200 to 1000 msec post-stimulus onset. The variance was compared i) between sessions for each group's performance on the 'Sign & Drive' and 'Sign Only' conditions, ii) between groups at each session for each condition, and iii) between younger and older adults on each condition. We used two-tailed t-tests, uncorrected for multiple comparisons, at every msec time point, to be as conservative as possible. There was no session difference for any group on the 'Sign Only' condition (p > .05 for each group comparison); similarly, there were no differences for the MTT or NCC groups on the 'Sign & Drive' condition (p > .30 for each comparison), with the STT group showing more variance following training (p = .01). With respect to Experiment 3, there were also no age differences on either condition (p > .45 for each comparison). This indicates that the training effects observed were not due to learned eye movements, and that the age effects observed were not a function of age-related differences in eye movements.

EEG analysis

Preprocessing was conducted using Analyzer software (Brain Vision, LLC) and then exported to EEGLAB 35 for event-related spectral perturbation (ERSP) analyses. ERSP is a powerful approach to identifying stable features in a spontaneous EEG spectrum induced by experimental events, and it has been used successfully to isolate markers of cognitive control 36,37 . We selected this approach because we felt that a measure in the frequency domain would be more stable than other metrics given the dynamic environment of NeuroRacer. Blinks and eye-movement artifacts were removed through independent components analysis (ICA), as were epochs with excessive peak-to-peak deflections (±100 µV).
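The uncorrected per-timepoint comparison described earlier can be sketched as a vectorized independent-samples t statistic computed at every sample. This is a minimal illustration (equal-variance Student's t in plain numpy), not the authors' analysis code.

```python
import numpy as np

def pointwise_t(group_a, group_b):
    """Independent-samples t statistic at every time point, applied
    uncorrected at each msec sample. Inputs: arrays of shape
    (subjects, timepoints); returns a t value per timepoint."""
    a = np.asarray(group_a, float)
    b = np.asarray(group_b, float)
    na, nb = a.shape[0], b.shape[0]
    # pooled variance per timepoint (equal-variance assumption)
    sp2 = ((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1)) / (na + nb - 2)
    return (a.mean(0) - b.mean(0)) / np.sqrt(sp2 * (1 / na + 1 / nb))
```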
Given the use of d′, which takes into account performance on every trial, we collapsed across all trial types for all subsequent analyses. Epochs of −1000 to +1000 msec were created for ERSP total power analysis (evoked power + induced power), with theta-band activity analyzed by resolving 4–100 Hz activity using a complex Morlet wavelet in EEGLAB and referenced to a −900 to −700 msec pre-stimulus baseline (thus relative power, in dB). Assessment of the 'Sign & Drive' ERSP data in 40 msec time bins, collapsing across all older adult participants and experimental sessions, revealed the onset of peak midline frontal activity to be between 360–400 msec post-stimulus, and so all neural findings were evaluated within this time window for the older adults (see Supplementary Figure 7 for these topographies). For younger adults, peak theta activity occurred between 280–320 msec, and so for across-group comparisons, data from this time window were used for the younger adults. The cognitive aging literature has demonstrated delayed neural processing in older adults using EEG 38,39 . For example, Zanto and colleagues 38 demonstrated that older adults show patterns of selective processing similar to younger adults, but shifted to delayed processing with aging. For the data generated in this study, presented topographically in Supplementary Figure 7, it was clear that the peak of the midline frontal theta was delayed in older versus younger adults. To fairly assess whether there was a difference in power, it was necessary to select different comparison windows in an unbiased, data-driven manner for each group. Coherence data for each channel were first filtered in multiple pass bands using a two-way, zero phase-lag, finite impulse response filter (eegfilt.m function in the EEGLAB toolbox) to prevent phase distortion.
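The dB baseline correction described above (power referenced to the mean of a pre-stimulus window) is a standard operation that can be sketched directly; the function name and argument layout are mine.

```python
import numpy as np

def baseline_db(power, baseline_idx):
    """Relative power in dB against the mean of a pre-stimulus baseline
    window (e.g., the samples spanning -900 to -700 msec). power: array
    whose last axis is time; baseline_idx: slice selecting baseline samples."""
    power = np.asarray(power, float)
    base = power[..., baseline_idx].mean(axis=-1, keepdims=True)
    return 10 * np.log10(power / base)
```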
We then applied a Hilbert transform to each of these time series (hilbert.m function), yielding results equivalent to sliding-window FFT and wavelet approaches 40 and giving a complex time series, h_x[n] = a_x[n] exp(iφ_x[n]), where a_x[n] and φ_x[n] are the instantaneous amplitude and phase, respectively. The phase time series φ_x assumes values within (−π, π] radians, with a cosine phase such that π radians corresponds to the trough and 0 radians to the peak. To compute the PLV for theta phase, for example, we extract instantaneous theta phases φ_θ[n] by taking the angle of h_θ[n]. Event-related phase time series are then extracted and, for each time point, the mean vector length R_θ[n] is calculated across trials (circ_r.m function in the CircStats toolbox) 41 . This mean vector length represents the degree of phase locking, where an R of 1 reflects perfect phase-locking across trials and a value of 0 reflects perfectly randomly distributed phases. These PLVs were controlled for individual state differences at each session by baseline-correcting each individual's PLVs using their −200 to 0 msec period (thus, a relative PLV score was calculated for each subject).

Statistical analyses

Mixed-model ANOVAs with i) decade of life (Experiment 1), ii) training group (Experiment 2), or iii) age (Experiment 3) as the between-group factor were used for all behavioral and neural comparisons, with planned follow-up t-tests and the Greenhouse-Geisser correction utilized where appropriate. One-tailed t-tests were utilized to interrogate group differences for all transfer measures given our a priori hypothesis of the direction of results following multitask training. All effect size values were calculated using Cohen's d 42 and corrected for small sample bias using the Hedges and Olkin 43 approach. The neural-behavioral correlations presented included only those MTT participants who demonstrated increased midline frontal theta power following training (14/15 participants).
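The mean vector length underlying the PLV described above reduces to the modulus of the trial-averaged unit phase vectors. This numpy sketch mirrors what circ_r.m computes for phase angles (the function name is mine; phase extraction via the Hilbert transform is assumed to have happened upstream).

```python
import numpy as np

def plv(phases):
    """Phase-locking value across trials at each time point: the length of
    the mean resultant vector of instantaneous phases. 1 = perfect phase
    locking across trials; 0 = uniformly random phases.
    phases: array of shape (trials, timepoints), in radians."""
    phases = np.asarray(phases, float)
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```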
For statistical analyses, we created 1 frontal and 3 posterior composite electrodes of interest (EOIs) from the average of the following electrodes: AFz, Fz, FPz, AF3, and AF4 (medial-frontal); PO8, P8, and P10 (right-posterior); PO7, P7, and P9 (left-posterior); and POz, Oz, O1, O2, and Iz (central-posterior), with PLVs calculated for each frontal-posterior EOI combination separately. For the coherence data, the factor of posterior EOI location (3) was modeled in the ANOVA but showed neither a main effect nor an interaction with the other factors.
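The composite EOIs above are simple channel averages. This sketch groups the named electrodes and averages them; the dictionary layout and function name are mine, and real caps would map channel names to data rows via the recording montage.

```python
import numpy as np

# Composite electrodes of interest, as listed in the text.
EOIS = {
    "medial_frontal": ["AFz", "Fz", "FPz", "AF3", "AF4"],
    "right_posterior": ["PO8", "P8", "P10"],
    "left_posterior": ["PO7", "P7", "P9"],
    "central_posterior": ["POz", "Oz", "O1", "O2", "Iz"],
}

def composite_eoi(data, channel_names, eoi):
    """Average the channels belonging to one EOI composite.
    data: array of shape (channels, timepoints); channel_names: labels
    matching axis 0; eoi: a key of EOIS."""
    idx = [channel_names.index(ch) for ch in EOIS[eoi]]
    return np.asarray(data, float)[idx].mean(axis=0)
```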

Referenced article: Prevalence of attention-deficit/hyperactivity disorder: a systematic review and meta-analysis.

Abstract: Overdiagnosis and underdiagnosis of attention-deficit/hyperactivity disorder (ADHD) are widely debated, fueled by variations in prevalence estimates across countries, time, and broadening diagnostic criteria. We conducted a meta-analysis to: establish a benchmark pooled prevalence for ADHD; examine whether estimates have increased with publication of different editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM); and explore the effect of study features on prevalence.

Author and article information

Journal: Frontiers in Digital Health (Front. Digit. Health), Frontiers Media S.A.
Published: 12 May 2022
Volume: 4; Article: 876039

[1] Fowler School of Engineering, Chapman University, Orange, CA, United States
[2] Graduate School of Education, University of California, Riverside, Riverside, CA, United States
[3] Pediatrics Department, University of California, Irvine, Irvine, CA, United States
[4] Department of Psychiatry and Neuroscience, University of California, Riverside, Riverside, CA, United States
[5] Informatics Department, University of California, Irvine, Irvine, CA, United States

Edited by: Anders Nordahl-Hansen, Østfold University College, Norway

Reviewed by: Nigel Newbutt, University of Florida, United States; Stian Orm, Western Norway University of Applied Sciences, Norway

*Correspondence: Franceli L. Cibrian, cibrian@chapman.edu

This article was submitted to Health Informatics, a section of the journal Frontiers in Digital Health.

†These authors have contributed equally to this work and share senior authorship.

Copyright © 2022 Cibrian, Monteiro, Schuck, Nelson, Hayes and Lakes.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Received: 15 February 2022; Accepted: 28 March 2022
Page count: Figures: 0, Tables: 0, Equations: 0, References: 63, Pages: 6, Words: 4924
Specialty section: Digital Health

Keywords: digital health intervention, ADHD, mental health, human-computer interaction, development

