Corrigendum: Cognitive training with casual video games: points to consider
- 1Department of Psychology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- 2Department of Psychology, University of Iowa, Iowa City, IA, USA
- 3Brain Plasticity Institute, San Francisco, CA, USA
- 4Department of Psychology, Vanderbilt University, Nashville, TN, USA
- 5Digital Artefacts LLC, Iowa City, IA, USA
Brain training programs have proliferated in recent years, with claims that video games or computer-based tasks can broadly enhance cognitive function. However, benefits are commonly seen only in trained tasks. Limited assessment of generalized improvement and the questionable practicality of laboratory exercises complicate interpretation and application of findings. In this study, we addressed these issues by using active control groups, training tasks that more closely resemble real-world demands, and multiple tests to determine transfer of training. We examined whether casual video games can broadly improve cognition, and selected training games from a study of the relationship between game performance and cognitive abilities. A total of 209 young adults were randomized into a working memory–reasoning group, an adaptive working memory–reasoning group, an active control game group, and a no-contact control group. Before and after 15 h of training, participants completed tests of reasoning, working memory, attention, episodic memory, and perceptual speed, as well as self-report measures of executive function, game experience, perceived improvement, knowledge of brain training research, and game play outside the laboratory. Participants improved on the training games, but transfer to untrained tasks was limited. No group showed gains in reasoning, working memory, episodic memory, or perceptual speed, but the working memory–reasoning groups improved in divided attention, as indexed by better performance in an attention-demanding game, a decreased attentional blink, and smaller trail-making costs. Perceived improvements did not differ across training groups, and participants with low reasoning ability at baseline showed larger gains. Although there are important caveats, our study sheds light on the mixed effects in the training and transfer literature and offers a novel and potentially practical training approach. Still, more research is needed to determine the real-world benefits of computer programs such as casual games.
Introduction
What does it mean to “train your brain”? “Brain training games” have increased in popularity over the last decade, accompanied by findings, and even stronger claims, that computer-based tasks of working memory and attention can broadly improve cognition (Jaeggi et al., 2008; Sternberg, 2008; Karbach and Kray, 2009; Klingberg, 2010; Morrison and Chein, 2011). However, there is often insufficient evidence to support these claims, with many pilot experiments1 and studies showing improved performance on trained tasks but limited transfer to unpracticed tasks (Willis and Schaie, 1986; Ball et al., 2002; Green and Bavelier, 2003; Willis et al., 2006; Ackerman et al., 2010; Boot et al., 2010; Owen et al., 2010; Mackey et al., 2011; Lee et al., 2012). Some training programs are plagued by replication failures (Boot et al., 2008; Owen et al., 2010; Chooi and Thompson, 2012; Redick et al., 2012; Shipstead et al., 2012; Kundu et al., 2013; Thompson et al., 2013) and by methodological issues, including the use of only single tests of transfer for each cognitive ability, placebo effects, and the lack of appropriate active control groups (Boot et al., 2011, 2013). Many programs are also costly, and “games” based on laboratory tasks pose implementation concerns in terms of motivation, adherence, and task specialization.
In this study, we use a variety of casual video games, validated by their quantitative association with cognitive constructs, to train aspects of cognitive function such as reasoning ability, working memory, and attentional control. In the validation study (Baniqued et al., 2013), we used a combination of cognitive task analysis, correlational analyses, and structural equation modeling to identify casual games that were most highly associated with well-studied tasks of working memory and reasoning or fluid intelligence. Casual games are relatively easy to learn, widely and freely available on the web and on handheld devices, and can be completed in short periods of time, although they still involve a wide array of cognitive skills, complex rules, and challenging objectives. Unlike laboratory-based “games” that train cognitive abilities in more sterile or controlled paradigms, video games demand execution of skills in an integrated or more externally valid environment. For example, multitasking and working memory abilities are tapped in a game (Sushi Go Round) that involves juggling between learning and preparing different recipes correctly, ordering ingredients to keep up with demand, and cleaning the tables to make way for new customers, whereas the laboratory-based dual n-back paradigm requires participants to remember pairs of auditory and visual stimuli in sequence, with predictable order (n-back), timing, and identity of stimuli.
In addition to the richness of the game environments, the novelty and challenge of playing multiple games, akin to athletic “cross-training” (Mackey et al., 2011), may better promote maximal engagement and gains in cognitive abilities (Green and Bavelier, 2008; Holmes et al., 2009; Schmiedek et al., 2010; Bavelier et al., 2012; Brehmer et al., 2012). The overarching goal of training endeavors is to maintain or improve everyday functioning, so programs should aim to prepare an individual for a variety of challenges. Moreover, skill acquisition research has long shown that training programs that are variable and adaptive, promote cognitive flexibility, and discourage task-specific mastery lead to greater and broader learning (Schmidt and Bjork, 1992; Kramer et al., 1995, 1999). We cannot directly evaluate these concepts in the current study, but they provide a general rationale for the approach of using multiple games to improve cognition.
Training games were selected based on a quantitative analysis of the relationship between game performance and specific cognitive abilities (Baniqued et al., 2013). In the current study, a total of 209 young adults were randomized into four groups: (1) WM-REAS 1, a group that trained on four games (one adaptive across sessions) that heavily tapped working memory and reasoning ability, (2) WM-REAS 2, another working memory–reasoning group that employed four games that were all adaptive across sessions to maximally challenge performance, (3) an active control group that trained on four games (one adaptive across sessions) that did not heavily tap working memory and reasoning, as well as a (4) no-contact control group to better assess practice effects. The WM-REAS groups played a mix of working memory and reasoning games, as validation experiments (Baniqued et al., 2013) showed that these games highly correlated with tests of reasoning and working memory, with little differentiation between the degree of correlation with the two constructs – an unsurprising finding given the integrative nature of the games and the demonstrated relationship between working memory and reasoning abilities (Carpenter et al., 1990; Colom et al., 2004; Kane et al., 2004; Unsworth and Engle, 2006; Jaeggi et al., 2008; Salthouse and Pink, 2008; Conway and Getz, 2010).
In the initial validation study, principal component analysis (PCA) of the games also showed that the WM-REAS 1 games clustered together. This further confirmed that they tapped similar skills, consistent with an a priori cognitive task analysis of the casual games. Moreover, structural equation modeling showed that fluid intelligence best predicted performance on most of the games, and that fluid intelligence and working memory accounted for the most variance in the WM-REAS 1 games, at 27 and 14%, respectively (Baniqued et al., 2013). Not surprisingly, correlation coefficients between WM-REAS 1 games and working memory and reasoning tasks were 0.5–0.6 at the composite level and 0.3–0.5 at the individual task level, all significant at p < 0.001. Meanwhile, the active control games did not cluster together and were the least correlated with working memory and reasoning measures, with individual game-by-task correlations ranging from non-significant to a maximum of around 0.25. Because not all of the WM-REAS 1 games could be implemented to be adaptive across sessions (a limitation due to third-party sourcing of the games), we ran a similar validation study on additional games that could be made adaptive across sessions. We identified those that showed comparably robust relationships with the same working memory and reasoning tasks used to evaluate the WM-REAS 1 games. These additional across-session adaptive games were used for the WM-REAS 2 group (for more detail, see Supplementary Methods2). Given the comparable results in the second validation study, the WM-REAS 1 and WM-REAS 2 games differed mainly in their adaptive component. Three out of the four WM-REAS 1 games were not adaptive across sessions and may be more susceptible to automaticity or increased reliance on task-specific mastery, and thus may not maximally engage working memory and reasoning skills that can better generalize to other paradigms.
That is, we hypothesized that the WM-REAS groups would show greater improvements in cognition than the active and no-contact control groups, and that the WM-REAS 2 group might show the larger gains because complex skills are continually challenged for the duration of training.
To address issues in interpreting training and transfer effects, we employed comparable training groups as described above, multiple tests of each cognitive ability, and a post-experiment survey that assessed perceived improvement and inquired about game play outside of the laboratory. The inclusion of a non-WM-REAS active control group was important for assessing whether differential expectations regarding the skills tapped during training may influence performance on the transfer tasks, akin to a placebo effect (Boot et al., 2011, 2013). We also aimed to shed light on the mixed results in the cognitive training literature by discussing our results in the context of previous findings, taking into account video game and laboratory-based experiments, as well as examining individual differences that may have implications for the efficacy of game training.
To summarize, our main predictions consisted of the following: (1) WM-REAS training, given its demand on complex skills, will broadly improve cognition, (2) Individuals lower in cognitive ability (as indexed by a composite measure of reasoning tasks) will show the greatest gains from WM-REAS training, and (3) Given the integrative nature of casual games, improvement expectations will not differ between the WM-REAS and active control groups, thus making a stronger case for the utility of casual game training.
Materials and Methods
Participants
Participants were recruited from the Champaign-Urbana community through flyers, newspaper ads, and online postings advertising participation in a “cognitive training study.” Applicants were first screened via email with a questionnaire that surveyed basic demographic information (e.g., sex, education, English language proficiency) and time spent playing video and board games. To mask the purpose of the game questions, these items were embedded among other lifestyle and activity questions that included the Godin Leisure-Time Exercise Questionnaire (Godin and Shephard, 1997). Applicants who were not excluded based on the survey then completed a phone interview to check for medical and non-medical conditions that might affect neuropsychological testing. Although we focus only on the behavioral effects in this paper, we also collected brain scans for the study and thus screened for safety in a magnetic resonance imaging (MRI) environment. Eligible participants (1) were right-handed, (2) were between the ages of 18 and 30, (3) had normal or corrected-to-normal vision, (4) had no major medical conditions, (5) reported no non-removable metal on their body that might present a safety hazard in the MRI or affect image quality, and (6) reported playing video and board games for 3 h or less per week in the last 6 months. A total of 209 young adults completed the study (see Table 1 for information on excluded participants and other basic demographic information). All participants signed an informed consent form approved by the University of Illinois Institutional Review Board. Upon study completion, participants were paid $15 an hour for laboratory visits. Participants who dropped out or were disqualified after the first testing session were paid $7.50 an hour. Due to the scale of the study and the multitude of tasks administered, detailed procedures can be found in a supplementary document at http://lbc.beckman.illinois.edu/pdfs/CasualGames_SuppMethods.pdf
Study Design
All participants underwent three cognitive testing sessions and an MRI session in a fixed session and task order (Table 2). Participants were randomly assigned to one of four groups: working memory and reasoning games (WM-REAS 1), adaptive working memory and reasoning games (WM-REAS 2), active control casual games that did not correlate with working memory and reasoning, or a no-contact control group (Table 3). Lab personnel were not blind to group assignment. Participants assigned to the training groups completed training sessions two to three times per week, for a total of 10 sessions. During each training session, four games were played in random order, with each game played for ~20 min. After training was completed for the training groups, or after a comparable amount of time had elapsed for the no-contact control group, participants completed the same testing sessions in reverse session order.
Cognitive Assessment
Assessments administered before and after training were grouped into five categories: perceptual speed, reasoning/fluid intelligence (gF), working memory, episodic memory, and attentional control (selective visual attention, divided attention). Additionally, participants played two casual video games (one reasoning, one attention) that were not used as training games in any of the groups. Participants also completed the Behavior Rating Inventory of Executive Function Adult Version (Roth et al., 2005). Below is a brief description of each task, with more details in Table 2. At the very last testing session, participants were asked about study expectations and gaming experience in more detail. If participants reported in this post-experiment questionnaire that they played the testing or training games outside the laboratory, or were active video game players, their data were discarded from all analyses. If a participant had 0% accuracy (except for Attentional Blink), a negative d-prime score (where applicable), or scored more than four standard deviations below the mean on a task (mean and standard deviation computed separately for each session), their data were excluded from training-related analyses of that task only. If the outlier data identified using these methods came from the post-testing session, that participant’s pre-testing score was still used in the pre-test PCA.
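The exclusion rules above amount to a simple per-task, per-session filter. The following is a minimal sketch of that logic for illustration only; the function name and input layout are our assumptions, not the authors' analysis code:

```python
def flag_exclusions(scores, accuracies=None, dprimes=None):
    """Flag participants' task data for exclusion from training-related analyses.

    Sketch of the rules described above (hypothetical helper, not the
    original code): a participant's data for a task are excluded if they
    had 0% accuracy, a negative d-prime (where applicable), or a score
    more than four standard deviations below the session mean.
    All inputs are per-participant lists for one task and one session.
    """
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    cutoff = mean - 4 * sd  # more than 4 SD below the session mean
    flags = [s < cutoff for s in scores]
    if accuracies is not None:
        flags = [f or a == 0.0 for f, a in zip(flags, accuracies)]
    if dprimes is not None:
        flags = [f or d < 0.0 for f, d in zip(flags, dprimes)]
    return flags
```

An excluded post-test score would still leave that participant's pre-test score usable for the pre-test PCA, as noted above.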
Reasoning, Episodic Memory, and Perceptual Speed
With the exception of matrix reasoning, all tasks for these three constructs were taken from the Virginia Cognitive Aging Project (Salthouse and Ferrer-Caja, 2003; Salthouse, 2004, 2005, 2010). These tasks have been extensively and uniformly used, so only brief descriptions are provided below.
Word recall. Participants listen to lists of words and recall the words in any order.
Logical memory. Participants listen to stories and recall the stories in detail.
Paired associates. Participants remember word pairs and recall the second word in the pair.
Digit-symbol coding. Participants write the corresponding symbol for each digit using a coding table for reference.
Letter comparison and pattern comparison. Participants determine whether a pair of patterns or letter combinations are the same or different.
Form boards. Participants choose shapes that will exactly fill a certain space.
Spatial relations. Participants identify the three-dimensional object that would result from folding a given two-dimensional figure.
Paper folding. Participants identify the resulting pattern of holes from a sequence of folds and a punch through the folded sheet.
Shipley abstract. Participants identify the missing stimuli in a progressive sequence of letters, words, or numbers.
Letter sets. Participants see five patterns and identify the pattern that does not match the others.
Matrix reasoning. The Raven’s Progressive Matrices task was modified for a functional MRI paradigm and was largely based on a relational reasoning task used in Crone et al. (2009). Participants viewed a 3 × 3 matrix containing patterns in all but one cell and chose the best pattern out of three options to identify the missing piece. They solved two types of problems: control trials in which no integration was required across rows and columns, and reasoning trials that required integration of information across cells.
Working Memory
Visual short-term memory. An array of four shapes briefly appeared on the screen. After a delay, a shape appeared and participants had to decide whether this stimulus was in the original array. The experiment consisted of three blocks with targets varying in color, shape, and conjunctions of color and shape in each block, respectively.
N-back. Participants viewed a sequence of centrally presented letters. For each letter, participants were instructed to determine if the letter was the same as the previous letter (first block), the same as the letter two back (second block), or the same as the letter three back (third block).
Spatial working memory. On each trial, a configuration of two, three, or four black dots was presented on the screen. After a brief delay, a red dot appeared and participants were instructed to determine if the red dot was in the same position as one of the black dots presented earlier in that trial.
Running span. Participants are presented a sequence of letters and are instructed to remember the last n items presented.
Symmetry span. Participants performed symmetry judgments while remembering a sequence of red squares within a matrix. Participants were asked to recall the order and locations of the previously presented sequence.
Attentional Control
Task switching. Participants were asked to determine whether a number was odd or even, or whether it was higher or lower than five. The background color (blue or pink) determined the task to be performed. Participants completed two single task blocks and then a mixed task block where the task varied unpredictably across trials.
Attentional blink. Participants viewed sequences of rapidly presented black letters. In each sequence, a white letter appeared (location in sequence varied between trials) and on 50% of trials, a black “X” followed the white letter at varying lags. During the critical condition, participants were asked to identify the white letter and whether or not an X was presented.
Trail making. Participants first connected numbers distributed across a sheet of paper by drawing a line between numbers in ascending order. Participants then connected numbers and letters in alternating and ascending order on a second sheet.
Attention network test. Participants responded to the direction of a central arrow that pointed in the same (congruent) or opposite direction (incongruent) as four other adjacent arrows (two on each side). On some trials, warning cues appeared at the center of screen or at the location of the upcoming arrows. The task was adapted for the MRI environment, following procedures detailed in Fan et al. (2002).
Color stroop. Participants viewed a sequence of words and were asked to determine the color of the word. Three trial types were randomly presented: congruent (e.g., word “red” in red ink), neutral (e.g., word “dog” in red ink), or incongruent (e.g., word “red” in blue ink).
Casual Video Games Used for Assessment
Dodge. Participants aim to avoid enemy missiles that are actively chasing the ship under their control. Participants earn points and pass levels by guiding missiles into enemies.
Bloxorz. Participants rotate and move a rectangular block around a maze while avoiding falling off the platform. Levels are passed when the block reaches a target hole on the maze.
Self-Report Instruments
Behavior rating inventory of executive function by PAR™. Participants indicated the frequency with which they experienced a variety of executive function problems (never, sometimes, or often). The questionnaire included several dimensions: Inhibit, Shift, Emotional Control, Self-Monitor, Initiate, Working Memory, Plan/Organize, Organization of Materials, and Task Monitor.
Post-experiment questionnaire. Participants completed a form that inquired about gameplay and lifestyle history as well as their experience in the study. In one section (hereafter referred to as perceived improvement questions), they were asked to rate whether they felt that participation in the study changed the following functions: overall intelligence, short-term or working memory, long-term memory, ability to pay attention or focus, ability to pay attention to multiple things at once (divided attention), hand-eye or visuomotor coordination, perception, vision or visual acuity, problem-solving ability, multi-tasking ability, reasoning ability, academic performance, spatial visualization ability, emotional regulation, and productivity at work or school, or tendency to procrastinate. Participants were also asked to give feedback and elaborate on strategies used in the training games, report whether they played any assessment or training games outside the lab (with no penalty to their participation in the study), and answer other questions on the nature of their knowledge and experience with video games.
Casual Games Used for Training
The WM-REAS 1 training group was formed using games that were highly correlated with performance on working memory and reasoning tasks, and the active control training group was composed of games that were not highly correlated with working memory and reasoning tasks (Baniqued et al., 2013). After about 20 participants were run in each of these two groups, we included an adaptive reasoning training (WM-REAS 2) group and a no-contact control group. The WM-REAS 2 group played games that also showed high and comparable correlations (as the WM-REAS 1 games) with working memory and reasoning tasks3. Unlike the first two training groups where adaptiveness in three out of the four games was only within session (exceptions: Silversphere in WM-REAS 1 and Alphattack in active control), participants in the WM-REAS 2 group started on the level that they ended on in the previous session, such that the games were adaptive across sessions. Games were embedded and played on a research portal designed for the study by Digital Artefacts4. Table 3 contains brief descriptions of each game played by the groups. After the first, fifth, and last training sessions, training participants were asked to answer the following questions for each game, rating their answers on a scale of 1–10 (1 = least, 5 = neutral, 10 = greatest): (1) How much did you enjoy/like each game, (2) How engaging was each game, (3) How demanding/effortful was each game, and (4) How motivated were you to achieve the highest possible score on each game?
Results
We first analyze the training games to determine practice-related improvement across the 10 sessions of training. We also assess whether the training groups differed in their experience with their respective games. In the next section, we determine whether game training transfers to untrained tasks by comparing performance on the pre- and post-assessment tasks, first at the construct level and then at the individual task level to determine the consistency of the effects. Since transfer to untrained tasks may vary depending on initial cognitive ability, we also investigate the effect of baseline fluid intelligence (reasoning) ability on transfer. We then examine whether perceived improvement in cognitive abilities differs across the training groups, which would prompt a re-analysis of the transfer effects to take expectations into account. Finally, we analyze other variables that may affect the effectiveness of training.
Practice Effects
Game Performance Across Sessions
All groups improved on their respective training games, regardless of whether the games were adaptive across sessions. If participants completed the last level of an across-session adaptive game, they started back at level one; for analysis purposes, the data for these subsequent sessions were replaced with the maximum score or level. A repeated measures ANOVA with session as a within-subjects factor (10 time points) was conducted for the primary measure of each game. The practice effects were robust, with significant main effects of session at p < 0.001 for all games. In games like Sushi Go Round, where participants started at level one in each session and thus the highest level completed plateaued over time, participants improved in other aspects of the game, such as the total number of customers served. Group averages are plotted in Figure 1, with scores divided by the maximum average score of each game for ease of presentation.
FIGURE 1. Mean training game performance as a function of group and session. Group average scores at each session, normalized by each game’s maximum average score: WM-REAS 1: Silversphere = 18.861, TwoThree = 20.265, Sushi-Go-Round = 5785.429, Digital Switch = 8.161; WM-REAS 2: Silversphere = 19.421, Gude Balls = 14.474, Aengie Quest = 20.526, Block Drop = 52.385; Active Control: Alphattack = 51.867, Music Catch = 4032358.24, Crashdown = 6.667, Enigmata = 4.069. Dashed lines indicate games that were adaptive across sessions. Error bars represent ±SEM.
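The normalization described in the caption divides each game's group-average score at every session by that game's maximum average across sessions, so every curve peaks at 1. A minimal sketch of this rescaling (our illustration; not the original plotting code):

```python
def normalize_by_max_average(session_averages):
    """Scale a game's group-average scores so the best session equals 1.0.

    `session_averages` holds one group-average score per training session
    (10 values in this study); each is divided by the maximum average,
    mirroring the normalization used for Figure 1.
    """
    peak = max(session_averages)
    return [score / peak for score in session_averages]
```

For example, Silversphere in WM-REAS 1 would be divided by its maximum average of 18.861 from the caption above.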
Game Experience Across Sessions
The four feedback questions of enjoyment, engagement, motivation, and effort were entered separately into repeated measures ANOVAs with group as between-subjects factor and time (training sessions 1, 5, and 10) as within-subjects factor. Ratings for each question were averaged across the four games played by each participant. Results are summarized in Figure 2.
FIGURE 2. Training game feedback as a function of group and session. Feedback regarding game enjoyment, motivation, engagement, and effort were collected during the first, fifth, and last training sessions. Feedback scale: 1 = least, 5 = neutral, 10 = greatest. Error bars represent ±SEM.
For enjoyment, there was no group × time interaction, and no main effects of group and time. For engagement, there was no main effect of group, and no group × time interaction, but a main effect of time where engagement decreased across sessions [F(2,216) = 7.389, p = 0.001, ηp² = 0.064]. For motivation, there was no group × time interaction, but a main effect of time [F(2,222) = 5.026, p = 0.007, ηp² = 0.043] with decreased motivation over sessions, and a main effect of group [F(2,111) = 6.035, p = 0.003, ηp² = 0.098], with lower motivation for the WM-REAS 2 group compared to the WM-REAS 1 and active control groups (ps < 0.05). For effort, there was no main effect of time, but a main effect of group [F(2,111) = 3.339, p = 0.045, ηp² = 0.054], where effort ratings were higher for the WM-REAS 2 group compared to the active control group (p = 0.017). The WM-REAS groups did not differ from each other, and the WM-REAS 1 group did not differ from the active control group. The group × time interaction was significant [F(4,222) = 2.913, p = 0.022, ηp² = 0.050], with effort ratings for WM-REAS 2 peaking at the fifth session compared to the first-session peak for WM-REAS 1. When taking into account only the first and last sessions, the group × time interaction was not significant [F(2,115) = 2.364, p = 0.099, ηp² = 0.039]. Overall, the feedback questions indicated that the three training groups were comparable in their experience of the games, although the WM-REAS 2 group reported lower motivation overall and higher effort, but only at mid-training, likely due to the greater demand of the adaptive games.
Qualitative feedback regarding strategies and overall experience for each game can be found at http://lbc.beckman.illinois.edu/pdfs/CasualGames_SuppAnalyses.pdf.
Transfer of Training
Composite-Level Analyses
To ascertain whether game training had any general effect on cognitive abilities and to better address the issues of measurement error and multiple comparisons, we performed analyses at the construct level using composite scores derived by averaging standardized improvement scores [(post-test − pre-test)/standard deviation of pre-test, collapsed across groups] from related tasks. These task groupings were confirmed by a PCA on the pre-test data. Despite the smaller sample size (n = 116, using all subjects with baseline data for each task) and the addition of several measures, the PCA was comparable with the previous validation study (Baniqued et al., 2013), with seven interpretable components that in combination explained 57% of the variance (Table 4): reasoning or fluid intelligence (Matrix Reasoning, Paper Folding, Form Boards, Spatial Relations, Letter Sets, Shipley Abstract, Bloxorz), perceptual speed (Digit Symbol, Pattern Comparison, Letter Comparison), episodic memory (Word Recall, Logical Memory, Paired Associates), ANT-visual attention (ANT alerting, orienting effects), divided attention (Dodge, Attentional Blink, Trail Making), and two working memory components [N-back, Spatial WM, Visual short-term memory (STM), Running Span, Symmetry Span], with a notable separation between simpler (Component 6: Spatial WM, N-back, Visual STM) and more complex (Component 7: Symmetry Span, Running Span) working memory tasks. We also reran the PCA without the ANT measures and the results were similar, with interpretable components of fluid intelligence, perceptual speed, episodic memory, divided attention, and working memory.
Because of the smaller PCA sample size and for ease of interpretation, only tasks that were consistent with previous literature were included in the component score calculations (e.g., WM measures that loaded highly onto the first component were excluded from the gF composite score). Given the overlap of simple and complex WM measures in Components 1, 6, and 7, we combined the simple and complex WM measures into one composite score.
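The composite calculation described above, standardizing each task's gain by the pooled pre-test standard deviation and then averaging across a component's tasks, can be sketched as follows. This is our illustration of the stated formula, not the authors' code, and the input layout is an assumption:

```python
def composite_gain(pre_by_task, post_by_task):
    """Composite gain score for one component (e.g., divided attention).

    `pre_by_task` and `post_by_task` each hold one list per task, with one
    score per participant (all groups pooled). Each task's improvement is
    standardized as (post - pre) / SD(pre), with the SD taken over the
    pooled pre-test scores, then standardized gains are averaged across
    tasks to yield one composite score per participant.
    """
    n_participants = len(pre_by_task[0])
    totals = [0.0] * n_participants
    for pre, post in zip(pre_by_task, post_by_task):
        mean = sum(pre) / len(pre)
        sd = (sum((x - mean) ** 2 for x in pre) / (len(pre) - 1)) ** 0.5
        for i in range(n_participants):
            totals[i] += (post[i] - pre[i]) / sd
    return [total / len(pre_by_task) for total in totals]
```

Standardizing against the pooled pre-test spread keeps all groups on a common scale, so group differences in the composites reflect differential improvement rather than differences in task units.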
We conducted ANOVAs on the composite gain scores with group as a between-subjects factor and found a significant group effect for divided attention [F(3,166) = 5.613, p = 0.001, ηp² = 0.092], with higher gain scores for both WM-REAS training groups (Figure 3). No group effects were found for fluid intelligence [F(3,166) = 0.667, p = 0.573, ηp² = 0.012], perceptual speed [F(3,166) = 0.316, p = 0.814, ηp² = 0.006], episodic memory [F(3,166) = 0.637, p = 0.592, ηp² = 0.011], ANT-visual attention [F(3,154) = 0.468, p = 0.705, ηp² = 0.009] and working memory [F(3,166) = 1.388, p = 0.248, ηp² = 0.024].
ANOVAs on composite scores that included all tasks with loadings of greater than 0.30 yielded similar results. The ANT composite also yielded a non-significant result when the alerting and orienting effects were summed with equal positive weight. The results were also similar for a re-analysis without the no-contact control group; training effects were only found in divided attention [F(2,124) = 6.676, p = 0.002, ηp² = 0.097].
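For a one-way between-subjects design like these composite analyses, the F statistic and partial eta squared reduce to simple sums of squares, with ηp² = SS_between/(SS_between + SS_within). A pure-Python sketch for illustration (not the authors' analysis code):

```python
def one_way_anova(groups):
    """One-way between-subjects ANOVA, e.g., on composite gain scores.

    `groups` is a list of lists, one list of gain scores per group.
    Returns the F statistic, its degrees of freedom (df1, df2), and
    partial eta squared, which for a one-way design equals
    SS_between / (SS_between + SS_within).
    """
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df1 = len(groups) - 1           # groups minus one
    df2 = n_total - len(groups)     # participants minus groups
    f_stat = (ss_between / df1) / (ss_within / df2)
    eta_p2 = ss_between / (ss_between + ss_within)
    return f_stat, (df1, df2), eta_p2
```

The p value would then be obtained from the F distribution with (df1, df2) degrees of freedom, for example via scipy.stats.f.sf.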
Task-Level Analyses
To check whether the groups performed equivalently at pre-testing, one-way ANOVAs with group as between-subjects factor (all four groups) were conducted for all pre-test primary measures reported in Table 5. At baseline, group differences were only found in Trail Making measures (p = 0.039 for Trails B–A, p = 0.063 for Trail B). None of the other measures differed among groups at pre-testing (ps > 0.13).
To evaluate transfer of training, repeated measures ANOVAs were performed for each task, with time as a within-subjects factor and group as a between-subjects factor. The ANOVAs were rerun without the no-contact control group and the results were similar, although the effects described below were less robust and at times no longer significant at p < 0.05. For brevity, results for analyses with and without the no-contact control group are shown in Table 5.
Significant group × time interactions at p < 0.05 were found in Dodge, Attentional Blink and Trail-Making, which were also the three tasks that made up the divided attention composite. Post hoc tests revealed that both WM-REAS groups reached higher levels of Dodge at post-test (time effect p < 0.001 for both groups), while only the WM-REAS 1 group showed a reduced Trails cost at post-test (p < 0.01).
Because the Trail Making measures showed significant group differences at baseline, driven by longer Trail B times for WM-REAS 1 and WM-REAS 2, we excluded the pre- and post-test data of the subjects with the two longest baseline times in each WM-REAS group (which were also the four highest times overall across all groups) so that group mean values were comparable at baseline. These data points were not outliers as identified by the methods described earlier. One-way ANOVAs on the subset of data confirmed that the groups were no longer significantly different at baseline. After excluding the longest times, the results were similar to the analysis with all subjects (Table 5), with the Trails B–A group × time interaction still significant [F(2,126) = 3.373, p = 0.020, ηp² = 0.061].
The magnitude of the attentional blink was smaller at post-test for the WM-REAS 2 (p < 0.001) and no-contact control (p < 0.01) groups. Since the pattern of results is complex (see footnote 5), we also analyzed lag 2 and lag 8 separately. The group × time interaction for lag 8 was driven by increased performance at post-test for the active control group (p < 0.001). For lag 2, the time effect was significant for the no-contact control (p = 0.002), WM-REAS 1 (p = 0.026), and WM-REAS 2 (p < 0.001) groups. Taken together, the results for lag 2, lag 8, and the difference effect (lag 8 – lag 2) suggest that the reduced blink effect is only reliable in the WM-REAS 2 group.
Baseline Reasoning Ability and Transfer: Composite-Level Analysis
To determine whether training may be more or selectively effective for those with lower abilities at initial testing, we correlated transfer gains with baseline reasoning or gF ability (pre-training composite of Matrix Reasoning, Paper Folding, Form Boards, Spatial Relations, Letter Sets, Shipley Abstract), which provides an estimate of general mental ability (Gray and Thompson, 2004).
Pre-training gF correlated negatively with gains in divided attention, such that participants with lower baseline gF showed larger gains from training. This correlation was significant only for the WM-REAS 1 (r = -0.327, p = 0.032) and WM-REAS 2 (r = -0.333, p = 0.036) groups.
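The r values reported here are plain Pearson correlations between the baseline gF composite and each gain composite. For illustration only (hypothetical data mirroring the reported direction, not the study's), the computation can be sketched as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pattern: lower baseline gF goes with larger divided
# attention gains, so r comes out negative
baseline_gf = [-1.0, -0.5, 0.0, 0.5, 1.0]
da_gain = [0.8, 0.5, 0.3, 0.1, -0.2]
r = pearson_r(baseline_gf, da_gain)
```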
An ANCOVA on the divided attention gain composite with the three training groups as a between-subjects factor and baseline gF as a covariate revealed a significant effect of training after controlling for baseline gF [F(2,123) = 5.509, p = 0.005, ηp² = 0.082], with larger gains for the WM-REAS groups. Baseline gF was confirmed to have an effect on divided attention gain [F(1,123) = 6.113, p = 0.015, ηp² = 0.047]. To confirm the lack of transfer in other abilities, ANCOVAs with baseline gF as a covariate were also conducted on the other composites. The findings were consistent with the previous analyses, as no group effects were found.
To test the robustness of the divided attention gains in the WM-REAS groups, we reran the composite-level ANCOVAs after excluding the highest performers (upper quartile) in each group and still found a significant group effect in divided attention (and not in the other cognitive abilities), with higher gains in the WM-REAS groups. This was true in analyses with [F(3,124) = 5.554, p = 0.001, ηp² = 0.118] and without the no-contact control group [F(2,92) = 6.199, p = 0.003, ηp² = 0.119].
Pre-training gF also correlated with gains in reasoning for the WM-REAS 1 (r = -0.320, p = 0.036), active control (r = -0.299, p = 0.049), and no-contact control (r = -0.440, p = 0.003) groups. Pre-training gF also correlated with gains in perceptual speed (r = 0.360, p = 0.018), but only for the WM-REAS 1 group.
Perceived Improvement
Post-Experiment Survey
Compared to the no-contact control group (12.5%), a greater percentage of participants in the three training groups reported that the study changed the way they performed their daily activities “in a good way” [χ2(3) = 10.010, p = 0.018; WM-REAS 1 = 33.3%, WM-REAS 2 = 43.6%, active control = 37.2%]. There was no difference between training groups when the no-contact control group was excluded from the chi-square analysis [χ2(2) = 0.917, p = 0.639]. Due to the low frequency of responses in the “Yes, but not in a good way” category (WM-REAS 1 = 2, WM-REAS 2 = 1, active control = 1, no-contact = 0), we excluded this option from the chi-square tests.
All groups reported that their overall skill at video games was higher at post-test [F(1,164) = 217.620, p < 0.001, ηp² = 0.570], but a group × session interaction [F(3,164) = 4.802, p = 0.003, ηp² = 0.081] revealed that the training groups rated themselves significantly higher than the no-contact control group. There was, however, no difference among the three training groups in perceived video game skill after training [F(2,125) = 0.070, p = 0.933, ηp² = 0.001].
Due to an experimenter error that resulted in a change in instructions when a web-based form of the survey was administered, for the perceived improvement questions we present statistics only for subjects who received the same electronic version of the post-experiment survey (although all subjects are included in Figure 4 to show the general pattern of results). In the initial written survey, completed by 22 of 44 subjects in WM-REAS 1 and 16 of 44 subjects in the active control group, participants checked a box to indicate whether the study changed that particular ability, and then rated the extent of the change (1 = very poorly, 10 = very desirably). In the web-based survey, each item required an answer; that is, participants had to rate change in the ability on a scale of 1–10, which introduced ambiguity, as an answer of 1 could now be interpreted as either no change or negative change.
FIGURE 4. Perceived improvement as a function of group. Average responses for each group, including data from WM-REAS 1 and active control participants who did not receive a web-based version of the post-experiment survey. Error bars represent ±SEM.
Separate question ANOVAs revealed a significant group effect at p < 0.05 for working memory [F(3,126) = 2.765, p = 0.045], hand-eye or visuomotor coordination [F(3,126) = 5.332, p = 0.002], multitasking [F(3,126) = 6.714, p < 0.001], problem-solving [F(3,126) = 2.944, p = 0.036], reasoning [F(3,126) = 3.730, p = 0.013], and academic performance [F(3,126) = 4.530, p = 0.005], with higher ratings in general for the training groups compared to the no-contact control group. When the perceived improvement questions were analyzed without the no-contact control group, however, only the group effects for multitasking [F(2,88) = 6.300, p = 0.003] and academic performance [F(2,87) = 3.305, p = 0.041] remained significant, although none of the post hoc comparisons between groups were significant at p < 0.05.
Behavioral Rating Inventory of Executive Function
Repeated measures ANOVA revealed a significant group × time interaction only for the Shift index (problems transitioning between activities, strategies, or situations), both when the no-contact control group was included in the analyses [F(3,141) = 3.995, p = 0.009, ηp² = 0.078] and when it was not [F(2,94) = 5.129, p = 0.008, ηp² = 0.098]. Paired t-tests revealed that this was due to an increase in Shift problems for the WM-REAS 1 group, although this effect must be interpreted cautiously, as the WM-REAS 1 group also had a lower mean Shift score at pre-test than the other groups, and only 21 subjects in this group completed the questionnaire at both time points. Given the limited range of answers (never, sometimes, often) and the relatively weak task effects, it is possible that the BRIEF questionnaire could not adequately measure subtle changes or differences between groups. Overall, the BRIEF results are consistent with the perceived improvement findings, where the majority of participants reported little to no improvement in cognitive functions or daily activities.
Exploratory Analysis: Other Individual Differences and Transfer Gain
We found that initial reasoning/gF ability predicted gains in divided attention, so we went a step further and conducted an exploratory analysis of other individual differences that may influence the effectiveness of WM-REAS casual game training. A few studies have found that training-related transfer is predicted by the amount of improvement in the trained tasks, such that greater “responders” show greater transfer (Jaeggi et al., 2011, 2013). We examined this in the current study by correlating transfer gain composite scores with a training gain composite score. For each individual in each training group, we calculated the difference between performance in the later sessions (9, 10) and performance in the early sessions (1, 2). This difference score was then divided by the standard deviation of performance in the early sessions (within each group). Standardized scores for the four games were then averaged to form a training gain composite score. Correlations conducted separately for each training group did not reveal any significant relationship between training gain and transfer gains, even after controlling for baseline game performance.
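The training gain composite described above can be sketched as follows (a minimal illustration with hypothetical per-session scores; the within-group SD scaling follows the description in the text):

```python
import statistics

def training_gain(early_sessions, late_sessions):
    """Per-subject gain: mean of the late sessions (9, 10) minus mean of the
    early sessions (1, 2), divided by the SD of early-session means
    within the group."""
    early = [sum(s) / len(s) for s in early_sessions]
    late = [sum(s) / len(s) for s in late_sessions]
    sd_early = statistics.stdev(early)
    return [(b - a) / sd_early for a, b in zip(early, late)]

def training_composite(per_game_gains):
    """Average each subject's standardized gains across the trained games."""
    return [sum(subject) / len(subject) for subject in zip(*per_game_gains)]

# Hypothetical scores: three subjects, sessions (1, 2) and (9, 10), one game
game1 = training_gain([[10, 10], [12, 12], [14, 14]],
                      [[13, 13], [15, 15], [17, 17]])
```

In the study this composite was then correlated with each transfer gain composite, separately for each training group.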
Mixed results from previous studies, coupled with small sample sizes and population demographic differences, suggest the contribution of other factors such as gender, motivation, and other pre-existing abilities to training effectiveness (Jaeggi et al., 2013). Thus, for the WM-REAS groups, correlations were conducted between each transfer gain composite score and the following factors: gender, game play habits (only <3 h/week; combined modalities), training game experience (enjoyment, engagement, motivation, and effort after the fifth and last training sessions), bilingualism, exercise (Godin Leisure-Time questionnaire), and time spent watching television/movies, sleeping, reading books/magazines/newspapers, surfing the web, on social network sites, meditating, in nature, learning a new language, and learning a new instrument. Given the within-group and exploratory nature of this analysis, we only report correlations that were significant at p < 0.01.
For the WM-REAS 1 group, more time on social network sites (r = 0.458, p = 0.002) correlated with higher divided attention gains, and more time spent reading correlated with gains in fluid intelligence (r = 0.461, p = 0.002).
For the WM-REAS 2 group, game effort at mid-training correlated with gains in divided attention (r = 0.443, p = 0.008), such that greater effort was associated with larger gains. There was also a correlation between time spent sleeping and gains in ANT-visual attention (r = 0.470, p = 0.004).
We did not find significant correlations with the other factors, which may be due to the lack of variability or lack of representation in certain conditions (e.g., maximum of less than 3 h weekly video game play), especially given the predominantly collegiate make-up of the sample.
Discussion
We examined whether widely available casual video games can broadly improve cognition by demonstrating transfer to untrained tasks. In our relatively sizeable sample (approximately 40 participants in each group), we found that while participants improved on the trained games, transfer to untrained tasks was limited. Playing casual video games for 15 h did not improve most aspects of cognition, but playing working memory and reasoning casual games improved divided attention, with some caveats to be noted. As several of the training tasks involve working memory and reasoning demands in fast-paced situations, and given our findings of higher divided attention gains for those with lower initial reasoning ability, we also provide a link between the working memory and action video game training literatures.
Effects of WM-REAS Training on Cognitive Function
Groups trained on working memory and reasoning games improved in a composite measure of divided attention. All three tasks used for the divided attention score (Dodge, Attentional Blink, Trail Making) involve paying attention to multiple targets, with little demand on maintaining internal representations of stimuli. Multi-object tracking demands were also part of the active control games (Enigmata, Alphattack, Crashdown, MusicCatch), but it is likely that the lack of reasoning or planning demands in the games led to a more passive strategy as participants only reacted to objects as they appeared on the screen. Indeed, participant feedback for the active control games contained more statements about psychomotor strategies such as clicking as quickly as possible in response to stimuli. On the other hand, the WM-REAS groups practiced a mix of speeded and non-speeded tasks, with the speeded tasks (Silversphere, Sushi-Go-Round, DigiSwitch, TwoThree, Gude Balls) requiring both planning ahead and attention to multiple stimuli on the screen. The additional management demands in the WM-REAS games may have better developed divided attention skills as coordinated execution of multiple elements was critical to success in many games.
In the initial game validation study (Baniqued et al., 2013), fluid intelligence best predicted performance on multi-object tracking games such as Dodge, with Dodge performance also significantly correlating with performance on reasoning games (and not just attention or multiple-object tracking games). These findings can be taken as evidence of near transfer when taking into account the previously demonstrated relationship between Dodge and reasoning ability, and relatively far transfer given the dissimilar surface features of the trained games and transfer tasks such as Dodge. Such transfer to untrained paradigms bolsters the idea that the complex and more externally valid environment found in strategy-heavy video games may provide a more useful and practical platform for developing cognitive skills (Green et al., 2010).
These results are consistent with findings that playing strategy-demanding time-limited games can enhance attention skills (Green and Bavelier, 2003, 2006a,b, 2007; Basak et al., 2008; Hubert-Wallander et al., 2011a; Glass et al., 2013; Oei and Patterson, 2013). More strikingly, our findings parallel research (Bavelier et al., 2012) showing that active video game players perform better in a variety of attention-demanding tasks, including the attention blink and multiple-object tracking paradigms. We did not find improvements in the Attention Network Test, but this is not entirely unexpected in the context of other findings that active video game players do not show benefits for exogenous attention (Hubert-Wallander et al., 2011b). It is especially interesting that despite also playing fast-paced and attention-demanding games, the active control group did not improve to the level of the participants who practiced games with greater reasoning and working memory demands.
Working memory capacity has repeatedly been shown to correlate with attention abilities, with findings that capacity can predict the magnitude of the attentional blink (Arnell and Stubitz, 2010). We did not find increases in working memory capacity or fluid intelligence, but it is plausible that such changes in higher-level abilities evolve more slowly than changes in lower-level attention abilities, following the developmental trajectory of processing speed, working memory, and fluid intelligence (Fry and Hale, 1996; Kail, 2007; Coyle et al., 2011). Alternatively, it may be that, at least in young adults, training abilities such as working memory does not improve capacity per se, but rather lower-level attention or information processing mechanisms that overlap or are common elements across reasoning, working memory, and other attentional control paradigms (Thorndike, 1913). In fact, Kundu et al. (2013) found that while dual n-back training did not improve fluid intelligence or complex working memory span, training improved “efficiency of stimulus processing,” as indexed by improvements in visual search and short-term memory. A growing number of studies find that training on a single adaptive working memory task does not transfer to working memory capacity or fluid intelligence (Chooi and Thompson, 2012; Redick et al., 2012; Lilienthal et al., 2013; Thompson et al., 2013), and studies that do find transfer observe it in attention measures (Chein and Morrison, 2010; Kundu et al., 2013; Oelhafen et al., 2013). On the other hand, it is also worth mentioning that a greater variety of training tasks may be more effective for demonstrating transfer to higher-level abilities. A study that trained participants on multiple working memory tasks for an average of 30 h over 12 weeks resulted in gains in several measures of reasoning, although the sample size in that study was relatively small (Jaušovec and Jaušovec, 2012), and transfer to other cognitive domains such as attention was not assessed.
While the pattern of transfer results depends on the nature of the training tasks, overall the evidence points to working memory training as weakly beneficial for fluid intelligence, but promising in terms of enhancing attention skills.
A common difficulty in intervention studies is employing appropriate control groups to address placebo effects. We attempted to overcome this here by using multiple training groups and measuring performance expectations after training. Despite all training groups reporting equivalent increases in perceived video game skill, only the reasoning groups improved in Dodge performance. This is especially interesting given that the active control games emphasized processing speed and tracking multiple objects on the screen. We found a group effect in multitasking expectations; however, the pairwise comparisons between training groups were not significant. Moreover, training feedback showed that the groups were generally comparable in enjoyment, engagement, motivation, and effort. The WM-REAS 2 group reported less motivation overall and slightly greater effort at mid-training, which is likely due to the greater demands of the across-session adaptive games. Such reported challenge or difficulty could be argued to account for the transfer results, though this does not explain why the WM-REAS 1 group also demonstrated transfer even without differences in perceived effort or motivation during training. It is likely that in the context of this experiment, where individuals are paid simply for playing games, motivation does not play a significant role in determining training effectiveness.
Although we cannot determine whether only a subset of WM-REAS games led to the effects in the reasoning groups, we can infer that playing a variety of reasoning games promoted more generalizable skills as opposed to task mastery. Taatgen (2013) makes a compelling argument that tasks such as working memory and task switching promote the development of “proactive” control that encourages endogenous preparation. As several of the WM-REAS games and strategy video games involve fast-paced decision making, endogenous preparation likely comes into play such that sequences of actions are planned ahead of time and deployed quickly at the right moment. Conversely, it can be argued that the active control games promoted more “reactive” control that is not measurable in the cognitive abilities we tested. Taatgen further argues that executive function training improves “skill” and not “capacity,” which echoes a sentiment articulated by Luck and Vogel (2013) that greater working memory capacity may not lead to better problem-solving, but that individuals who can flexibly develop strategies to enhance performance may more ably execute working memory and other related tasks (Kirby and Lawson, 1983). Participants in the WM-REAS groups completed a variety of challenges in the WM-REAS games and practiced these problem-solving skills (with many self-reports of “trying out new combinations, strategies”) under demanding and, on some occasions, extremely time-limited conditions. This idea of enhanced decision-making under high load is also a main explanation for why playing fast-paced action games leads to improvement in attention-demanding tasks (Hubert-Wallander et al., 2011a; Mishra et al., 2011). In this regard, our findings are in line with previous research and extend the literature by showing that game-related improvement in attention skills may also result from non-violent gaming environments.
This study was conducted with healthy young adults, which limits the extension of these results to other populations. However, the correlation between divided attention transfer gain and baseline reasoning, selected as a proxy for general ability (Gray and Thompson, 2004), suggests that these kinds of protocols may be more useful in populations that have more to gain from training, such as children or older adults who experience age-related cognitive decline. This relationship between pre-existing differences in cognitive ability and training efficacy also offers an explanation for the mixed results in training studies. As most working memory training studies have relatively small sample sizes (for a review, see Morrison and Chein, 2011), individual differences may enhance or obscure any effects of training on a subset of participants.
Limitations and Future Directions
We acknowledge that other factors such as improvement expectations may influence the transfer results. However, due to the ambiguity of the scale in the perceived improvement questions, we could not reliably factor in expectations in the statistical analyses. Nonetheless, it is interesting to note that the training groups did not significantly differ in perceived improvement, and that the WM-REAS groups improved in divided attention, an ability where their expectations did not differ from the active control group. Although we found a group difference in perceived multitasking improvement, which can be taken as related to divided attention, the post hoc comparisons were not significant. Moreover, no improvements were found in Task Switching or in Symmetry Span, both of which clearly involved managing multiple tasks.
It should also be noted that the divided attention composite includes tasks that are not as extensively validated as the tasks used to estimate reasoning, perceptual speed, and episodic memory abilities. Nonetheless, similarities among Dodge, Attentional Blink, and Trail Making were confirmed by a PCA, giving us more confidence in the divided attention score. We also revisited the validation study and found correlations between the Dodge, Attentional Blink, and Trail Making measures. The tasks may also be sensitive to practice effects, although all groups performed the same tests and no improvements were found in the control groups. Nonetheless, this training approach needs to be re-examined with a more extensive battery of attentional control tasks to shed light on why benefits were not observed in tasks such as Symmetry Span, which also involved divided attention, albeit in the form of shifting from one task to another. The tasks that showed transfer involved distributing attention across objects in space (Trail Making, Dodge) or across a narrow time frame, as in the Attentional Blink, but this needs further evaluation.
It can also be argued that the improvement in the WM-REAS groups was due to a change in strategy when performing the assessments. This is worthwhile to explore in future studies since working memory–reasoning tasks may not improve divided attention per se, but planning or reasoning abilities that may be best observed or manifested in such “divided attention” tasks. It may also be the case that despite their high correlations with working memory and reasoning, the WM-REAS games demanded other skills for successful gameplay over the course of training, with a shift of emphasis from reasoning to divided attention skills as participants gained mastery of the games. Indeed, the degree to which reasoning ability predicts performance has been shown to change, with declining influence at later points of skill acquisition (Ackerman, 1988; Quiroga et al., 2009, 2011).
Ceiling performance and practice effects due to the lack of alternate versions for six of the seven fluid intelligence tasks (including Bloxorz) may contribute to the null effect in fluid intelligence, although note that gains were also not observed in the matrix reasoning task used in the MRI scanner, which presented unique items at pre- and post-testing, with lower post-test performance overall due to task design (Table 5). This null finding is consistent with decades-old literature showing that fluid intelligence is relatively stable in adulthood (Jensen, 1969; though with age-related decreases) and further challenges the claim that cognitive training can lead to improvement in this ability (Jaeggi et al., 2008; Sternberg, 2008). However, it is conceivable that the game training period in the current study was too short to train such an ability, and that more hours of practice may result in stronger and broader effects on cognition. Some participants also reached ceiling performance in the training games, so it would be useful to test whether playing more demanding games can lead to transfer to the higher-level abilities of working memory and reasoning. In a recent experiment, Glass et al. (2013) found increases in cognitive flexibility following 40 h of real-time strategy game play (StarCraft) that emphasized a variety of skills including reasoning, working memory, and rapid switching in an adaptive and integrated setting.
Real-world measures of divided attention are needed to verify whether playing working memory and reasoning casual games can transfer to useful skills in daily life. Moreover, we did not conduct follow-up retention tests, so it is not known whether benefits persist beyond the training period. It is to be expected, however, in the same way as physical exercise, that continued practice is essential to maintaining or reaping intervention-related benefits.
Other interventions have been shown to improve cognition, and we provide modest evidence that playing casual games is one possible means to improve attention skills. The relatively non-violent nature of casual games compared to first-person shooter games also minimizes concerns regarding the negative effects of video game play. Nevertheless, with the aggressive marketing of brain games and the liberal application of preliminary training results, we caution against using video games or other computer-based programs as a sole or primary approach to improving brain function, particularly if it leads to a more sedentary lifestyle or in the words of Weis and Cerankosky (2010) “displace(s) activities that might have greater educational value.” Activities such as physical exercise have repeatedly been shown to benefit not only overall physical health, but also neurocognitive function (Hillman et al., 2008; Voss et al., 2013). Future studies should investigate the effects of combined and synergistic interventions to elucidate the ways in which activities may commonly and differentially change brain function. The goal of this line of research is not simply to evaluate the efficacy of interventions or the superiority of one over another, but to identify several avenues that promote a better quality of life, as a program that works for a certain population may not be suitable for another.
Author Contributions
All the authors contributed to designing the study. Pauline L. Baniqued and Michael B. Kranz supervised data collection and analyzed the data. Pauline L. Baniqued wrote the first draft of the manuscript with help from Michael B. Kranz for the section “Materials and Methods.” All the authors revised the manuscript.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
The Office of Naval Research supported this study (grant no. N000140710903). Pauline L. Baniqued was supported by a National Science Foundation Neuroengineering IGERT Fellowship (grant no. 0903622). The authors are grateful to Jay Zhang and Damien Clarke for modifying Dodge and Bloxorz, respectively, for testing purposes in this study. The authors thank Silvia Bunge and Carter Wendelken for assistance with the matrix reasoning task, Kalina Christoff and colleagues for providing some of the reasoning stimuli, Randall Engle and colleagues for the operation span tasks, and Timothy Salthouse and colleagues for the fluid intelligence, episodic memory, and perceptual speed tasks. The authors express their deep gratitude to Anya Knecht, Andrew Lewis, Natalie Henry, Robert Weisshappel, Matthew King, Jason Steinberg, Chelsea Ohler, Simon Cho, Sarah Chen, Aubrey Lutz, Melek Mourad, Courtney Allen, Anna Bengtson, Ali Golshani, Sarah Banducci, Aki Nikolaidis, Aldis Sipolins, Cher-Wee Ang, and the dozens of undergraduates and lab volunteers for their help in data collection. Although the imaging data is not reported in this paper, we are also very grateful to Nancy Dodge, Holly Tracy, Chelsea Wong, Rochelle Yambert, Brad Sutton, and Ryan Larsen for their help in the design and collection of imaging data.
Footnotes
- ^For an example, see http://hcp.lumosity.com/research/bibliography
- ^http://lbc.beckman.illinois.edu/pdfs/CasualGames_SuppMethods.pdf
- ^http://lbc.beckman.illinois.edu/pdfs/CasualGames_SuppMethods.pdf
- ^http://research.cognitiveme.com
- ^http://lbc.beckman.illinois.edu/pdfs/CasualGames_SuppAnalyses.pdf
References
Ackerman, P. L. (1988). Determinants of individual differences during skill acquisition: cognitive abilities and information processing. J. Exp. Psychol. Gen. 117, 288–318. doi: 10.1037/0096-3445.117.3.288
Ackerman, P. L., Kanfer, R., and Calderwood, C. (2010). Use it or lose it? Wii brain exercise practice and reading for domain knowledge. Psychol. Aging 25, 753–766. doi: 10.1037/a0019277
Arnell, K. M., and Stubitz, S. M. (2010). Attentional blink magnitude is predicted by the ability to keep irrelevant material out of working memory. Psychol. Res. 74, 457–467. doi: 10.1007/s00426-009-0265-8
Ball, K., Berch, D. B., Helmers, K. F., Jobe, J. B., Leveck, M. D., Marsiske, M., et al. (2002). Effects of cognitive training interventions with older adults: a randomized controlled trial. JAMA 288, 2271–2281. doi: 10.1001/jama.288.18.2271
Baniqued, P. L., Lee, H., Voss, M. W., Basak, C., Cosman, J. D., DeSouza, S., et al. (2013). Selling points: what cognitive abilities are tapped by casual video games? Acta Psychol. 142, 74–86. doi: 10.1016/j.actpsy.2012.11.009
Basak, C., Boot, W. R., Voss, M. W., and Kramer, A. F. (2008). Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychol. Aging 23, 765–777. doi: 10.1037/a0013494
Bavelier, D., Green, C. S., Pouget, A., and Schrater, P. (2012). Brain plasticity through the life span: learning to learn and action video games. Annu. Rev. Neurosci. 35, 391–416. doi: 10.1146/annurev-neuro-060909-152832
Bennett, G., Seashore, H., and Wesman, A. (1997). Differential Aptitude Test. San Antonio, TX: The Psychological Corporation.
Boot, W. R., Basak, C., Erickson, K. I., Neider, M., Simons, D. J., Fabiani, M., et al. (2010). Transfer of skill engendered by complex task training under conditions of variable priority. Acta Psychol. 135, 349–357. doi: 10.1016/j.actpsy.2010.09.005
Boot, W. R., Blakely, D. P., and Simons, D. J. (2011). Do action video games improve perception and cognition? Front. Psychol. 2:226. doi: 10.3389/fpsyg.2011.00226
Boot, W. R., Kramer, A. F., Simons, D. J., Fabiani, M., and Gratton, G. (2008). The effects of video game playing on attention, memory, and executive control. Acta Psychol. 129, 387–398. doi: 10.1016/j.actpsy.2008.09.005
Boot, W. R., Simons, D. J., Stothart, C., and Stutts, C. (2013). The pervasive problem with placebos in psychology: why active control groups are not sufficient to rule out placebo effects. Perspect. Psychol. Sci. 8, 445–454. doi: 10.1177/1745691613491271
Brehmer, Y., Westerberg, H., and Bäckman, L. (2012). Working-memory training in younger and older adults: training gains, transfer, and maintenance. Front. Hum. Neurosci. 6:63. doi: 10.3389/fnhum.2012.00063
Broadway, J. M., and Engle, R. W. (2010). Validating running memory span: measurement of working memory capacity and links with fluid intelligence. Behav. Res. Methods 42, 563–570. doi: 10.3758/BRM.42.2.563
Carpenter, P. A., Just, M. A., and Shell, P. (1990). What one intelligence test measures: a theoretical account of the processing in the Raven Progressive Matrices test. Psychol. Rev. 97, 404–431. doi: 10.1037/0033-295X.97.3.404
Chein, J. M., and Morrison, A. B. (2010). Expanding the mind’s workspace: training and transfer effects with a complex working memory span task. Psychon. Bull. Rev. 17, 193–199. doi: 10.3758/PBR.17.2.193
Chooi, W., and Thompson, L. A. (2012). Working memory training does not improve intelligence in healthy young adults. Intelligence 40, 531–542. doi: 10.1016/j.intell.2012.07.004
Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., and Kyllonen, P. C. (2004). Working memory is (almost) perfectly predicted by g. Intelligence 32, 277–296. doi: 10.1016/j.intell.2003.12.002
Conway, A. R., and Getz, S. J. (2010). Cognitive ability: does working memory training enhance intelligence? Curr. Biol. 20, R362–R364. doi: 10.1016/j.cub.2010.03.001
Coyle, T. R., Pillow, D. R., Snyder, A. C., and Kochunov, P. (2011). Processing speed mediates the development of general intelligence (g) in adolescence. Psychol. Sci. 22, 1265–1269. doi: 10.1177/0956797611418243
Crone, E. A., Wendelken, C., Van Leijenhorst, L., Honomichl, R. D., Christoff, K., and Bunge, S. A. (2009). Neurocognitive development of relational reasoning. Dev. Sci. 12, 55–66. doi: 10.1111/j.1467-7687.2008.00743.x
Ekstrom, R. B., French, J. W., Harman, H. H., and Dermen, D. (1976). Manual for Kit of Factor-referenced Cognitive Tests. Princeton, NJ: Educational Testing Service.
Erickson, K. I., Voss, M. W., Prakash, R. S., Basak, C., Szabo, A., Chaddock, L., et al. (2011). Exercise training increases size of hippocampus and improves memory. Proc. Natl. Acad. Sci. U.S.A. 108, 3017–3022. doi: 10.1073/pnas.1015950108
Fan, J., McCandliss, B. D., Sommer, T., Raz, A., and Posner, M. I. (2002). Testing the efficiency and independence of attentional networks. J. Cogn. Neurosci. 14, 340–347. doi: 10.1162/089892902317361886
Fry, A. F., and Hale, S. (1996). Processing speed, working memory, and fluid intelligence: evidence for a developmental cascade. Psychol. Sci. 7, 237–241. doi: 10.1111/j.1467-9280.1996.tb00366.x
Glass, B. D., Maddox, W. T., and Love, B. C. (2013). Real-time strategy game training: emergence of a cognitive flexibility trait. PLoS ONE 8:e70350. doi: 10.1371/journal.pone.0070350
Godin, G., and Shephard, R. (1997). Godin leisure-time exercise questionnaire. Med. Sci. Sports Exerc. 29, S36. doi: 10.1097/00005768-199706001-00009
Gray, J. R., and Thompson, P. M. (2004). Neurobiology of intelligence: science and ethics. Nat. Rev. Neurosci. 5, 471–482. doi: 10.1038/nrn1405
Green, C. S., and Bavelier, D. (2003). Action video game modifies visual selective attention. Nature 423, 534–537. doi: 10.1038/nature01647
Green, C. S., and Bavelier, D. (2006a). Effect of action video games on the spatial distribution of visuospatial attention. J. Exp. Psychol. Hum. Percept. Perform. 32, 1465–1478. doi: 10.1037/0096-1523.32.6.1465
Green, C. S., and Bavelier, D. (2006b). Enumeration versus multiple object tracking: the case of action video game players. Cognition 101, 217–245. doi: 10.1016/j.cognition.2005.10.004
Green, C. S., and Bavelier, D. (2007). Action-video-game experience alters the spatial resolution of vision. Psychol. Sci. 18, 88–94. doi: 10.1111/j.1467-9280.2007.01853.x
Green, C. S., and Bavelier, D. (2008). Exercising your brain: a review of human brain plasticity and training-induced learning. Psychol. Aging 23, 692–701. doi: 10.1037/a0014345
Green, C. S., Pouget, A., and Bavelier, D. (2010). Improved probabilistic inference as a general learning mechanism with action video games. Curr. Biol. 20, 1573–1579. doi: 10.1016/j.cub.2010.07.040
Hillman, C. H., Erickson, K. I., and Kramer, A. F. (2008). Be smart, exercise your heart: exercise effects on brain and cognition. Nat. Rev. Neurosci. 9, 58–65. doi: 10.1038/nrn2298
Holmes, J., Gathercole, S. E., and Dunning, D. L. (2009). Adaptive training leads to sustained enhancement of poor working memory in children. Dev. Sci. 12, F9–F15. doi: 10.1111/j.1467-7687.2009.00848.x
Hubert-Wallander, B., Green, C. S., and Bavelier, D. (2011a). Stretching the limits of visual attention: the case of action video games. Wiley Interdiscip. Rev. Cogn. Sci. 2, 222–230. doi: 10.1002/wcs.116
Hubert-Wallander, B., Green, C. S., Sugarman, M., and Bavelier, D. (2011b). Changes in search rate but not in the dynamics of exogenous attention in action videogame players. Atten. Percept. Psychophys. 73, 2399–2412. doi: 10.3758/s13414-011-0194-7
Jaeggi, S. M., Buschkuehl, M., Jonides, J., and Perrig, W. J. (2008). Improving fluid intelligence with training on working memory. Proc. Natl. Acad. Sci. U.S.A. 105, 6829–6833. doi: 10.1073/pnas.0801268105
Jaeggi, S. M., Buschkuehl, M., Jonides, J., and Shah, P. (2011). Short- and long-term benefits of cognitive training. Proc. Natl. Acad. Sci. U.S.A. 108, 10081–10086. doi: 10.1073/pnas.1103228108
Jaeggi, S. M., Buschkuehl, M., Shah, P., and Jonides, J. (2013). The role of individual differences in cognitive training and transfer. Mem. Cogn. doi: 10.3758/s13421-013-0364-z [Epub ahead of print].
Jaušovec, N., and Jaušovec, K. (2012). Working memory training: improving intelligence – changing brain activity. Brain Cogn. 79, 96–106. doi: 10.1016/j.bandc.2012.02.007
Jensen, A. R. (1969). How much can we boost IQ and scholastic achievement? Harv. Educ. Rev. 39, 1–123.
Kail, R. V. (2007). Longitudinal evidence that increases in processing speed and working memory enhance children’s reasoning. Psychol. Sci. 18, 312–313. doi: 10.1111/j.1467-9280.2007.01895.x
Kane, M. J., Conway, A. R., Miura, T. K., and Colflesh, G. J. (2007). Working memory, attention control, and the N-back task: a question of construct validity. J. Exp. Psychol. Learn. Mem. Cogn. 33, 615. doi: 10.1037/0278-7393.33.3.615
Kane, M. J., Hambrick, D. Z., Tuholski, S. W., Wilhelm, O., Payne, T. W., and Engle, R. W. (2004). The generality of working memory capacity: a latent-variable approach to verbal and visuospatial memory span and reasoning. J. Exp. Psychol. Gen. 133, 189–217. doi: 10.1037/0096-3445.133.2.189
Karbach, J., and Kray, J. (2009). How useful is executive control training? age differences in near and far transfer of task-switching training. Dev. Sci. 12, 978–990. doi: 10.1111/j.1467-7687.2009.00846.x
Kirby, J. R., and Lawson, M. J. (1983). Effects of strategy training on progressive matrices performance. Contemp. Educ. Psychol. 8, 127–140. doi: 10.1016/0361-476X(83)90004-8
Kirchner, W. K. (1958). Age differences in short-term retention of rapidly changing information. J. Exp. Psychol. 55, 352–358. doi: 10.1037/h0043688
Klingberg, T. (2010). Training and plasticity of working memory. Trends Cogn. Sci. 14, 317–324. doi: 10.1016/j.tics.2010.05.002
Kramer, A. F., Hahn, S., and Gopher, D. (1999). Task coordination and aging: explorations of executive control processes in the task switching paradigm. Acta Psychol. 101, 339–378. doi: 10.1016/S0001-6918(99)00011-6
Kramer, A. F., Larish, J. F., and Strayer, D. L. (1995). Training for attentional control in dual task settings: a comparison of young and old adults. J. Exp. Psychol. Appl. 1, 50. doi: 10.1037/1076-898X.1.1.50
Kundu, B., Sutterer, D. W., Emrich, S. M., and Postle, B. R. (2013). Strengthened effective connectivity underlies transfer of working memory training to tests of short-term memory and attention. J. Neurosci. 33, 8705–8715. doi: 10.1523/JNEUROSCI.5565-12.2013
Lee, H., Boot, W. R., Basak, C., Voss, M. W., Prakash, R. S., Neider, M., et al. (2012). Performance gains from directed training do not transfer to untrained tasks. Acta Psychol. 139, 146–158. doi: 10.1016/j.actpsy.2011.11.003
Lilienthal, L., Tamez, E., Shelton, J. T., Myerson, J., and Hale, S. (2013). Dual n-back training increases the capacity of the focus of attention. Psychon. Bull. Rev. 20, 135–141. doi: 10.3758/s13423-012-0335-6
Luck, S. J., and Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature 390, 279–281. doi: 10.1038/36846
Luck, S. J., and Vogel, E. K. (2013). Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends Cogn. Sci. 17, 391–400. doi: 10.1016/j.tics.2013.06.006
Mackey, A. P., Hill, S. S., Stone, S. I., and Bunge, S. A. (2011). Differential effects of reasoning and speed training in children. Dev. Sci. 14, 582–590. doi: 10.1111/j.1467-7687.2010.01005.x
Mishra, J., Zinni, M., Bavelier, D., and Hillyard, S. A. (2011). Neural basis of superior performance of action videogame players in an attention-demanding task. J. Neurosci. 31, 992–998. doi: 10.1523/JNEUROSCI.4834-10.2011
Morrison, A. B., and Chein, J. M. (2011). Does working memory training work? the promise and challenges of enhancing cognition by training working memory. Psychon. Bull. Rev. 18, 46–60. doi: 10.3758/s13423-010-0034-0
Oei, A. C., and Patterson, M. D. (2013). Enhancing cognition with video games: a multiple game training study. PLoS ONE 8:e58546. doi: 10.1371/journal.pone.0058546
Oelhafen, S., Nikolaidis, A., Padovani, T., Blaser, D., Koenig, T., and Perrig, W. J. (2013). Increased parietal activity after training of interference control. Neuropsychologia 51, 2781–2790. doi: 10.1016/j.neuropsychologia.2013.08.012
Owen, A. M., Hampshire, A., Grahn, J. A., Stenton, R., Dajani, S., Burns, A. S., et al. (2010). Putting brain training to the test. Nature 465, 775–778. doi: 10.1038/nature09042
Pashler, H. (2000). “Task switching and multitask performance,” in Control of Cognitive Processes: Attention and Performance XVIII, eds S. Monsell and J. Driver (Cambridge, MA: MIT Press), 277–309.
Quiroga, M. A., Román, F. J., Catalán, A., Rodríguez, H., Ruiz, J., Herranz, M., et al. (2011). Videogame performance (not always) requires intelligence. Int. J. Online Pedagog. Course Des. 1, 18–32. doi: 10.4018/ijopcd.2011070102
Quiroga, M. A., Herranz, M., Gómez-Abad, M., Kebir, M., Ruiz, J., and Colom, R. (2009). Video-games: do they require general intelligence? Comput. Educ. 53, 414–418. doi: 10.1016/j.compedu.2009.02.017
Raymond, J. E., Shapiro, K. L., and Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: an attentional blink? J. Exp. Psychol. Hum. Percept. Perform. 18, 849–860. doi: 10.1037/0096-1523.18.3.849
Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., et al. (2012). No evidence of intelligence improvement after working memory training: a randomized, placebo-controlled study. J. Exp. Psychol. Gen. 142, 359–379. doi: 10.1037/a0029082
Reitan, R. M. (1958). Validity of the trail making test as an indicator of organic brain damage. Percept. Mot. Skills 8, 271–276. doi: 10.2466/pms.1958.8.3.271
Roth, R. M., Isquith, P. K., and Gioia, G. A. (2005). BRIEF-A: Behavior Rating Inventory of Executive Function – Adult Version: Professional Manual. Lutz, FL: Psychological Assessment Resources.
Salthouse, T. A. (2004). Localizing age-related individual differences in a hierarchical structure. Intelligence 32, 541–561. doi: 10.1016/j.intell.2004.07.003
Salthouse, T. A. (2005). Relations between cognitive abilities and measures of executive functioning. Neuropsychology 19, 532–545. doi: 10.1037/0894-4105.19.4.532
Salthouse, T. A. (2010). Influence of age on practice effects in longitudinal neurocognitive change. Neuropsychology 24, 563–572. doi: 10.1037/a0019026
Salthouse, T. A., and Babcock, R. L. (1991). Decomposing adult age differences in working memory. Dev. Psychol. 27, 763. doi: 10.1037/0012-1649.27.5.763
Salthouse, T. A., and Ferrer-Caja, E. (2003). What needs to be explained to account for age-related effects on multiple cognitive variables? Psychol. Aging 18, 91–110. doi: 10.1037/0882-7974.18.1.91
Salthouse, T. A., Fristoe, N., and Rhee, S. H. (1996). How localized are age-related effects on neuropsychological measures? Neuropsychology 10, 272. doi: 10.1037/0894-4105.10.2.272
Salthouse, T. A., and Pink, J. E. (2008). Why is working memory related to fluid intelligence? Psychon. Bull. Rev. 15, 364–371. doi: 10.3758/PBR.15.2.364
Schmidt, R. A., and Bjork, R. A. (1992). New conceptualizations of practice: common principles in three paradigms suggest new concepts for training. Psychol. Sci. 3, 207–217. doi: 10.1111/j.1467-9280.1992.tb00029.x
Schmiedek, F., Lövdén, M., and Lindenberger, U. (2010). Hundred days of cognitive training enhance broad cognitive abilities in adulthood: findings from the COGITO study. Front. Aging Neurosci. 2:27. doi: 10.3389/fnagi.2010.00027
Shipstead, Z., Redick, T. S., and Engle, R. W. (2012). Is working memory training effective? Psychol. Bull. 138, 628–654. doi: 10.1037/a0027473
Sternberg, R. J. (2008). Increasing fluid intelligence is possible after all. Proc. Natl. Acad. Sci. U.S.A. 105, 6791–6792. doi: 10.1073/pnas.0803396105
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. J. Exp. Psychol. 18, 643. doi: 10.1037/h0054651
Stroop, J. R. (1992). Studies of interference in serial verbal reactions. J. Exp. Psychol. Gen. 121, 15–23. doi: 10.1037/0096-3445.121.1.15
Taatgen, N. A. (2013). The nature and transfer of cognitive skills. Psychol. Rev. 120, 439–471. doi: 10.1037/a0033138
Thompson, T. W., Waskom, M. L., Garel, K. A., Cardenas-Iniguez, C., Reynolds, G. O., Winter, R., et al. (2013). Failure of working memory training to enhance cognition or intelligence. PLoS ONE 8:e63614. doi: 10.1371/journal.pone.0063614
Thorndike, E. L. (1913). The Psychology of Learning. New York: Teachers College, Columbia University.
Unsworth, N., and Engle, R. W. (2006). Simple and complex memory spans and their relation to fluid abilities: evidence from list-length effects. J. Mem. Lang. 54, 68–80. doi: 10.1016/j.jml.2005.06.003
Voss, M. W., Vivar, C., Kramer, A. F., and van Praag, H. (2013). Bridging animal and human models of exercise-induced brain plasticity. Trends Cogn. Sci. 17, 525–544. doi: 10.1016/j.tics.2013.08.001
Wechsler, D. (1997a). WAIS-III: Wechsler Adult Intelligence Scale. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1997b). Wechsler Memory Scale (WMS-III). San Antonio, TX: The Psychological Corporation.
Weis, R., and Cerankosky, B. C. (2010). Effects of video-game ownership on young boys’ academic and behavioral functioning: a randomized, controlled study. Psychol. Sci. 21, 463–470. doi: 10.1177/0956797610362670
Willis, S. L., and Schaie, K. W. (1986). Training the elderly on the ability factors of spatial orientation and inductive reasoning. Psychol. Aging 1, 239–247. doi: 10.1037/0882-7974.1.3.239
Willis, S. L., Tennstedt, S. L., Marsiske, M., Ball, K., Elias, J., Koepke, K. M., et al. (2006). Long-term effects of cognitive training on everyday functional outcomes in older adults. JAMA 296, 2805–2814. doi: 10.1001/jama.296.23.2805
Keywords: attention, working memory, reasoning, fluid intelligence, video games, cognitive training, casual games, transfer of training
Citation: Baniqued PL, Kranz MB, Voss MW, Lee H, Cosman JD, Severson J and Kramer AF (2014) Cognitive training with casual video games: points to consider. Front. Psychol. 4:1010. doi: 10.3389/fpsyg.2013.01010
Received: 29 September 2013; Accepted: 17 December 2013;
Published online: 07 January 2014.
Edited by:
Bernhard Hommel, Leiden University, Netherlands
Copyright © 2014 Baniqued, Kranz, Voss, Lee, Cosman, Severson and Kramer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Pauline L. Baniqued, Department of Psychology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, 405 North Mathews Avenue, Urbana, IL 61801, USA e-mail: banique1@illinois.edu