MINI REVIEW article

Front. Psychol., 28 October 2021
Sec. Developmental Psychology
This article is part of the Research Topic Empirical Research at a Distance: New Methods for Developmental Science

Remote Research Methods: Considerations for Work With Children

  • Cognitive Development Lab, Department of Psychology, University of Colorado, Colorado Springs, Colorado Springs, CO, United States

The growing shift to online research provides numerous potential opportunities, including greater sample diversity and more efficient data collection. While online methods and recruitment platforms have gained popularity in research with adults, there is relatively little guidance on best practices for how to conduct remote research with children. The current review discusses how to conduct remote behavioral research with children and adolescents using moderated (i.e., real-time interactions between the experimenter and child) and unmoderated (i.e., independent completion of study without experimenter interaction) methods. We examine considerations regarding sample diversity and provide recommendations on implementing remote research with children, including discussions about remote software, study design, and data quality. These recommendations can promote the use of remote research amongst developmental psychologists by contributing to our knowledge of effective online research practices and helping to build standardized guidelines when working with children.

Introduction

Researchers have grown increasingly interested in collecting data using online or remote methodologies. Remote research provides several benefits, such as the potential for quicker data collection and the inclusion of more diverse participant samples (Buhrmester et al., 2011; Dworkin et al., 2016). However, remote methods may also present unique challenges, including difficulties transferring in-person studies to remote formats and the potential for lower quality data due to less direct control over the environmental setting (Bridges et al., 2020; Chmielewski and Kucker, 2020). Previous research using remote methods has mainly been conducted with adults (Paolacci and Chandler, 2014; Lee et al., 2018), and we have a limited understanding of how to best implement remote methodologies with developmental populations (Sheskin et al., 2020). Research conducted with children can differ substantially from research conducted with adults, for example in instructions and task design (Barker and Weller, 2003). Therefore, it is important to develop appropriate remote research practices that apply to developmental populations. Below we explore remote research methodologies with typically developing child and adolescent populations, focusing on behavioral research in cognitive psychology.

Before assessing the use of remote methods, it is important to note that remote research can be conducted in multiple formats. Unmoderated remote studies utilize online software that allows participants to complete a study individually, at any time, without the presence of a researcher. In contrast, moderated remote studies take place virtually such that researchers interact directly with participants through virtual meeting platforms (e.g., Zoom) and lead participants through the study procedure in real-time. We will include general considerations regarding both remote unmoderated and moderated methods to help investigators understand the benefits and drawbacks of each format.

Benefits of Remote Research and Collecting Representative Samples

Diverse samples are critical to developmental research given the large amount of variability that occurs within developmental processes, including cognitive skills (Rowley and Camacho, 2015; Nielsen et al., 2017). Furthermore, developmental processes may be susceptible to environmental effects and vary as a function of ethnicity, socioeconomic status (SES), and geographical location (Bradley and Corwyn, 2002; McCulloch, 2006; Quintana et al., 2006). However, psychological research tends to collect data from homogeneous or non-representative samples (Rowley and Camacho, 2015), and this may occur, in part, because most academic research uses in-person or lab-based studies. In-person studies may limit the diversity of research samples due to geographical, temporal, and fiscal restrictions (Rowley and Camacho, 2015; Nielsen et al., 2017; Rhodes et al., 2020). Importantly, remote methods have the potential to overcome some of these limitations by removing the time and costs associated with traveling to a physical research location and allowing individuals to participate at any time (in unmoderated studies). Consistent with this idea, some research shows that both adult and child samples collected through remote research have greater racial and geographical diversity than those from in-person studies (Birnbaum, 2004; Rhodes et al., 2020).

Despite the potential to increase sample diversity through remote research methods, diversity may still be limited for multiple reasons. For example, internet access, computer access, and technological literacy, which are frequently required to participate in remote studies, are often raised as critical barriers to participation (Kraut et al., 2004; Scott et al., 2017; Grootswagers, 2020). Furthermore, although remote research may decrease the need for travel, having children participate in remote studies may still be time-consuming for parents. For example, parents may need to respond to scheduling emails, complete prescreening forms or questionnaires, and provide consent or help during the study session. Thus, although remote studies have the potential to increase diversity, there are still limiting factors, and future research is needed to determine whether the use of remote research can successfully increase diversity in child samples.

Implementing Remote Research Studies

Research with children is generally considered more challenging than research with adults because tasks need to be adapted to appropriately match children’s language, comprehension, and executive function abilities, and children tend to be more subject to fatigue during participation (Fargas-Malet et al., 2010; Rollins and Riggins, 2021). As with in-person research, it is important to include engaging, meaningful, and easy-to-understand task content (Fargas-Malet et al., 2010; Nussenbaum et al., 2020) when conducting remote research with children. Furthermore, although researchers can remotely collect physiological measures, including eye movements (Scott et al., 2017), most remote work collects behavioral responses. There are some instances when remote research may not be possible, such as when special equipment (e.g., EEG) or highly controlled environmental contexts are required. Below we outline considerations regarding software, experimental design, and data quality for remote behavioral research with typically developing children.

Remote Technology

Remote behavioral research typically requires software that participants can interact with on their own devices (e.g., mobile phones, tablets, laptops, or computers). Several software programs and online platforms exist to aid researchers in remote data collection (see Table 1). A complete summary of available software is beyond the scope of the current review. We suggest researchers examine available software to select the one that best fits their needs. For example, some programs are available through an internet browser (e.g., Qualtrics, Gorilla Experiment Builder), while other programs may require participants to install software on specific operating systems or devices (e.g., E-Prime Go). Online software may also vary in its flexibility to implement research designs. For example, Qualtrics is commonly used to collect survey responses but has limited functions for complex coding (e.g., randomization based on multiple variables).
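When a platform cannot cross and randomize several design variables on its own, one workaround is to pre-generate condition lists offline and upload them. The sketch below illustrates this in Python; the factor names, number of repeats, and counterbalancing scheme are hypothetical examples rather than a procedure described in this review.

```python
import itertools
import random

# Hypothetical within-subject factors that a survey platform may struggle to
# cross and randomize on its own; names are illustrative only.
STIMULUS_TYPES = ["scene", "fractal"]
DELAYS = ["immediate", "delayed"]

def make_trial_list(participant_id, n_repeats=4):
    """Cross all factor levels, repeat, and shuffle with a reproducible seed."""
    rng = random.Random(participant_id)  # seed by participant for reproducibility
    cells = list(itertools.product(STIMULUS_TYPES, DELAYS)) * n_repeats
    rng.shuffle(cells)
    # Simple between-subjects counterbalancing of response mapping.
    response_side = "left-first" if participant_id % 2 == 0 else "right-first"
    return [
        {"trial": i + 1, "stimulus_type": s, "delay": d, "response_side": response_side}
        for i, (s, d) in enumerate(cells)
    ]

if __name__ == "__main__":
    for row in make_trial_list(participant_id=7)[:4]:
        print(row)
```

Pre-generating lists in this way keeps the randomization logic transparent and reproducible even when the delivery platform only supports simple survey flows.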

Table 1. Comparison of remote software.

Researchers who work with children and adolescents should also consider participants’ developmental capabilities regarding technology use when designing remote studies. Although more research is needed on children’s evolving technological skills, direct observations of children’s interactions with technology show that toddlers (Geist, 2012) and infants as young as 15 months old are able to tap on touch screen devices (Zack et al., 2009; Ziemer et al., 2021). Preschoolers can engage in more complex touch actions such as drag-and-drop (Vatavu et al., 2015). Furthermore, both direct observations and parental reports suggest that 2.5-year-olds begin to use mouse or keyboard input and 5-year-olds begin to develop basic typing skills, with substantial improvements throughout middle childhood (Read et al., 2001; Calvert et al., 2005; Donker and Reitsma, 2007; Kiefer et al., 2015). Therefore, researchers must adopt technological methods that can accommodate the fine-motor skills of their participants, such as using mobile devices or tablets to collect touch input when working with younger children. Researchers should also consider using software that enables video/audio recordings (e.g., Gorilla) or using video conference programs (e.g., Zoom) to collect verbal rather than typed responses from younger participants. Furthermore, children’s previous experience with technological devices can also impact research findings (Couse and Chen, 2010; Jin and Lin, 2021), suggesting that researchers should measure children’s familiarity with technology as a potential covariate.

Differences in hardware, software, and response modality may also impact the precision and accuracy of display times, location of stimuli, or response times (Chetverikov and Upravitelev, 2016; Poth et al., 2018). Although remote research software has relatively minimal display and reaction time delays (<100 ms) (Anwyl-Irvine et al., 2020; Bridges et al., 2020), variability exists between browsers, operating systems, and hardware (Garaizar and Reips, 2019; Bridges et al., 2020). Additionally, certain input types (e.g., touch) can result in faster reaction times compared to other input modalities (e.g., mouse) (Woodward et al., 2017; Ross-Sheehy et al., 2021), suggesting it is important to control for input type when measuring reaction times. Critically, general findings may replicate across study methods, with recent research suggesting that response time patterns in children ages 4–12 are similar across remote and in-person studies (Nussenbaum et al., 2020; Ross-Sheehy et al., 2021). Overall, researchers who need highly precise stimuli presentation or response times should instruct participants to use a particular setup during study sessions (e.g., Chrome browser and keyboard), calibrate programs to adjust for the type of operating system and device used, and use within-subjects comparisons or controls (Bridges et al., 2020).
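To make the within-subjects control concrete, the following sketch standardizes reaction times within each participant so that condition effects are evaluated against each child’s own baseline rather than across devices. It assumes a tidy trial-level table with hypothetical column names (participant, condition, rt_ms, input); it is an illustration, not the analysis used in the studies cited above.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with the child's ID, the
# experimental condition, the raw reaction time in ms, and the input modality
# the family happened to use.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":   ["A", "A", "B", "B", "A", "A", "B", "B"],
    "rt_ms":       [612, 655, 701, 740, 498, 530, 560, 585],
    "input":       ["mouse"] * 4 + ["touch"] * 4,
})

# Within-participant z-scoring removes stable device/offset differences, so
# condition effects are compared relative to each child's own baseline.
trials["rt_z"] = trials.groupby("participant")["rt_ms"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0)
)

# Average the standardized RTs per condition within participants and keep the
# input modality so it can be inspected or modeled as a covariate.
summary = (
    trials.groupby(["participant", "input", "condition"])["rt_z"]
    .mean()
    .unstack("condition")
    .assign(diff_B_minus_A=lambda d: d["B"] - d["A"])
)
print(summary)
```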

Study Design and Data Quality

Researchers have less control over the experimental environment during remote research, potentially lowering data quality. Remote methods can differ from in-person research in terms of participant engagement (Dandurand et al., 2008), response honesty (Shapiro et al., 2013), and susceptibility to scammers (Dennis et al., 2018). We discuss these factors below and include recommendations on how to overcome some of these challenges.

Task Considerations and Instructions

Remote studies may result in fewer participant-researcher interactions, especially in unmoderated remote research. Although this may be less of a concern in research with adults, the cognitive skills required to independently guide oneself through a task, including self-regulation and language abilities, develop substantially throughout childhood (Montroy et al., 2016; Skibbe et al., 2019). Additionally, infants through preschoolers learn better from in-person interactions than pre-recorded videos (DeLoache et al., 2010; Myers et al., 2017). However, social exchanges that occur virtually in real time (e.g., video chatting) have been shown to be effective even for young children’s learning (Strouse and Samson, 2021). Therefore, moderated remote methods, where virtual participant-researcher interactions occur, may be especially appropriate with younger children. However, unmoderated methods are still possible when additional considerations are used, such as comprehensive instructions, comprehension checks, and parental involvement (Oppenheimer et al., 2009; Kannass et al., 2010; Scott et al., 2017). Furthermore, developmental differences in reading ability can be lessened by using age-appropriate, prerecorded instructions.

Parental involvement may increase during remote relative to in-person studies. For example, parents need to be able to operate and troubleshoot the technological software used for remote research. Because of this, we suggest using browser-based platforms and limiting the use of special software that requires local downloads (see Table 1). Furthermore, we suggest that prior to the study session, researchers provide parents with step-by-step instructions on how to use software (see https://osf.io/wahky/ for guides on using Zoom from our lab) and information on what type of hardware can be used (e.g., mobile phones, tablets, laptops). Critically, due to COVID-19, adults’ technological literacy (Sari, 2021) and children’s time spent interacting with technology have increased (Drouin et al., 2020; Ribner et al., 2021). These changes have likely made it easier for parents and children to implement basic functions in video conferencing platforms (e.g., video/audio communication and screen sharing) and other software. However, we recommend that researchers add approximately 10 min of additional time during study sessions to troubleshoot any technological issues and prepare to reschedule sessions if needed.

Researchers may also want to intentionally direct parental involvement during data collection. Parental support and scaffolding can be helpful, especially when working with younger children. Recent research shows that during remote sessions having parents input responses for children ages 4–10 results in similar findings as in-person studies (Ross-Sheehy et al., 2021), providing some evidence that parental involvement can be used successfully during remote research. However, researchers may often want to prevent unwanted parental involvement (e.g., additional unmonitored explanations, biasing of responses) or require children to input their own responses, especially if accurate response times are needed. As children learn to communicate independently, they may be less likely to need parental intervention, with research suggesting children as young as 4 years of age can independently input their responses during remote research (Vales et al., 2021). To limit parental involvement during data collection, researchers can read instructions to children or use pre-recorded audio or videos (Rhodes et al., 2020). During moderated sessions, researchers could also share their screen and input children’s responses or have children share their screen and monitor children’s behaviors while children input their own responses. We also recommend that researchers communicate to parents the importance of children’s independent responses. Additionally, we suggest researchers collect feedback from both children and parents on any issues that may have come up during the study, such as cheating or asking for parental help.

Increasing Attention and Motivation

Lack of participant attention during remote research, including increased distractions and decreased motivation, can lower data quality (Zwarun and Hall, 2014; Finley and Penningroth, 2015). Participants are also more likely to experience distractions in natural settings outside of a research laboratory, and these distractions can lead to different findings than those observed during lab-based studies (Kane et al., 2007; Varao-Sousa et al., 2018). Furthermore, children and adolescents have greater difficulty ignoring irrelevant information (Davidson et al., 2006; Garon et al., 2008), and therefore environmental distractions may be more likely to impact remote research with developmental populations. In addition to distractions, participants may be less motivated during remote studies and may rapidly complete tasks or provide unvaried answers (Litman et al., 2015; Ahler et al., 2020).

Several methods have been found to reduce participant inattention during remote research with adults. Attention checks, including trap questions (e.g., regardless of your true preference select “Movies”), can be used to flag inattentive participants (Liu et al., 2009; Hunt and Scheetz, 2018). Comprehension checks (e.g., what are your instructions for this task?) can also be used to help researchers ensure that participants understand and are attentive to the task. Researchers can then use predetermined criteria for removing participants based on responses to these questions to improve overall data quality (Dworkin et al., 2016; Jensen-Doss et al., 2021). When working with children, trap questions (e.g., answer this question by pressing the blue button) and comprehension checks (e.g., select the option that shows what you will be doing in the study) that require specific age-appropriate responses can also be included to assess and remove inattentive participants. Moderated studies with children may be inherently more engaging and therefore less susceptible to low levels of attention and motivation (Dandurand et al., 2008), but researchers should still directly monitor, address, and note participant attention. Additionally, shorter, engaging tasks may improve attention during remote research, including the use of animations, child-friendly stimuli, and frequent breaks (Barker and Weller, 2003; Rhodes et al., 2020).
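As an illustration of applying predetermined exclusion criteria, the minimal sketch below flags participants who fail a trap question or fall below a comprehension-check threshold. The column names and cutoffs are hypothetical and would need to be preregistered and adapted to the specific task.

```python
import pandas as pd

# Hypothetical check responses: whether the child pressed the blue button on a
# trap trial and how many of three picture-based comprehension checks were
# answered correctly.
responses = pd.DataFrame({
    "participant": [101, 102, 103],
    "trap_passed": [True, False, True],
    "comprehension_correct": [3, 1, 3],
})

# Predetermined (ideally preregistered) cutoff.
MIN_COMPREHENSION = 2

responses["exclude"] = (~responses["trap_passed"]) | (
    responses["comprehension_correct"] < MIN_COMPREHENSION
)
print(responses[["participant", "exclude"]])
```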

Limiting Cheating

Another concern that can affect data quality is the honesty of participants’ responses. Participants may be more likely to answer dishonestly on tasks completed in the absence of researcher supervision (Lilleholt et al., 2020). The percentage of adults that cheat during online studies can vary (e.g., ranging from 24 to 41% according to Clifford and Jerit, 2014), but research suggests that most adult participants answer honestly when encouraged to do so (Corrigan-Gibbs et al., 2015). However, cheating behaviors may differ when working with developmental populations, with research suggesting younger children ages 8–10 cheat more frequently than older children ages 11–16 during in-person studies (Evans and Lee, 2011; Ding et al., 2014).

Several methods have been shown to decrease cheating behaviors. Simple interventions such as honesty reminders (e.g., “please answer honestly”) and honesty checks were found to decrease cheating behaviors in adults (Clifford and Jerit, 2014; Corrigan-Gibbs et al., 2015) and children (Heyman et al., 2015), and these types of interventions can easily be included in either moderated or unmoderated remote research. During unmoderated sessions, researchers could also minimize cheating by recording participants or taking periodic video captures of participants. During moderated sessions, researchers can monitor participants via video and screen-sharing, and verbally intervene if cheating behaviors are observed. In our own remote research, we have found that nearly all families consent to video recording (>99%) during moderated sessions, suggesting video monitoring is a potentially feasible solution to help mitigate cheating (see https://osf.io/hrp4y/ for our consent documents). Finally, task designs may need to be altered to mitigate cheating, particularly during memory tasks during which cheating can easily occur (e.g., writing down to-be-learned material). To minimize cheating, memory researchers can avoid stimuli that can easily be labeled and instead use abstract, similar, or difficult to label stimuli (e.g., scenes, fractals), limit encoding time and require participants to complete an additional task during encoding (e.g., mouse-click on the presented stimuli), or use incidental encoding designs where participants are unaware that an upcoming memory test will occur.
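The memory-task safeguards described above can be expressed as a simple trial configuration, sketched below with hypothetical stimulus names and durations: abstract images, a brief fixed exposure, a required orienting click during encoding, and no forewarning of the memory test (incidental encoding).

```python
import random

# Hypothetical abstract stimuli (e.g., fractal images) that are hard to label
# or write down verbatim.
ABSTRACT_STIMULI = [f"fractal_{i:02d}.png" for i in range(1, 13)]

def build_encoding_phase(seed=0, exposure_ms=1500):
    """Return encoding-phase trial parameters designed to make copying impractical."""
    rng = random.Random(seed)
    order = ABSTRACT_STIMULI[:]
    rng.shuffle(order)
    return [
        {
            "image": img,
            "exposure_ms": exposure_ms,      # brief, uniform encoding time
            "require_click_on_image": True,  # orienting task during encoding
            "test_forewarned": False,        # incidental encoding: no warning of the test
        }
        for img in order
    ]

if __name__ == "__main__":
    print(build_encoding_phase()[:2])
```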

Avoiding Bots and Scammers

Remote studies with minimal researcher interaction may be at risk for compromised data quality due to information-security threats (Teitcher et al., 2015). Previous research with adults has highlighted information-security issues and offers potential solutions (Ahler et al., 2020; Chmielewski and Kucker, 2020). For example, automated computer program responses (i.e., bots) tend to differ from human responses and consist of atypical text formats, grammatically incorrect text responses, or responses that directly copy prompt text (Chmielewski and Kucker, 2020). Therefore, bots are relatively easy to flag and remove. Implementing bot checks (e.g., captchas) and including participant screening questions can also decrease bots (Jones et al., 2015; Kennedy et al., 2020). However, scammers may be particularly problematic for remote developmental research, especially during unmoderated designs. Scammers often fabricate responses to receive compensation (Chandler and Paolacci, 2017), including falsely claiming to be of a key demographic (e.g., an adult claiming to be a child). To alleviate some of these issues, researchers can utilize prescreening questions and check responses for consistency, such as asking about a child’s age repeatedly and in multiple formats (e.g., DOB, numeric age) or requesting specific information relevant to identifying the targeted population (e.g., asking a parent to describe a recent moment they were proud of their child) (Jones et al., 2015). Email requests to participate in a research study can also be monitored for potential scammers. Strange email addresses, rapidly incoming email inquiries, and inquiries consisting of unusual responses (grammatical errors, copied text, etc.) may further indicate potential scammers. Ultimately, moderated studies may be the most effective at reducing scammers as direct participant-researcher interactions can easily ensure human participants are completing the study.
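One lightweight way to implement the consistency checks described above is sketched below: the script compares a reported numeric age against the age implied by the date of birth and flags empty or prompt-copying open-ended responses. The field names and tolerance are hypothetical and would be adapted to a given prescreening form.

```python
from datetime import date

def age_from_dob(dob, today):
    """Compute whole-year age from date of birth."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def flag_entry(entry, today=date(2021, 10, 1)):
    flags = []
    # 1. Reported numeric age should match the age implied by the DOB.
    if abs(age_from_dob(entry["child_dob"], today) - entry["child_age"]) > 1:
        flags.append("age/DOB mismatch")
    # 2. Open-ended parent responses that are empty or merely copy the prompt are suspicious.
    if entry["proud_moment_text"].strip().lower() in ("", entry["prompt_text"].strip().lower()):
        flags.append("copied or empty open-ended response")
    return flags

entry = {
    "child_dob": date(2013, 5, 2),
    "child_age": 8,
    "prompt_text": "Describe a recent moment you were proud of your child.",
    "proud_moment_text": "She learned to ride her bike without training wheels.",
}
print(flag_entry(entry))  # -> [] (no flags for this consistent entry)
```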

Conclusion

As remote research becomes more common, understanding its benefits and limitations is increasingly important. Above, we outlined several considerations for implementing remote research with children and adolescents, including information about participant samples, remote technologies, study design, and data quality. Although future work is needed to better understand how remote research differs between children and adults, and which methods are most effective for children, the provided recommendations contribute to building a guideline for effective and reliable remote research with developmental populations.

Author Contributions

MS contributed to the development and writing of the manuscript. MM contributed to the literature review process and editing of the manuscript. DS contributed to the development and revision of the manuscript. All authors read and approved the submitted version of the manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ahler, D. J., Roush, C. E., and Sood, G. (2020). The Micro-Task Market for Lemons: Data Quality on Amazon’s Mechanical Turk. Meeting of the Midwest Political Science Association. Available online at: http://www.gsood.com/research/papers/turk.pdf

Anwyl-Irvine, A., Dalmaijer, E. S., Hodges, N., and Evershed, J. K. (2020). Realistic precision and accuracy of online experiment platforms, web browsers, and devices. Behav. Res. Methods 53, 1407–1425. doi: 10.3758/s13428-020-01501-5

Barker, J., and Weller, S. (2003). “Is it fun?” developing children centered research methods. Int. J. Sociol. Soc. Pol. 23, 33–58. doi: 10.1108/01443330310790435

Birnbaum, M. H. (2004). Human research and data collection via the internet. Annu. Rev. Psychol. 55, 803–832. doi: 10.1146/annurev.psych.55.090902.141601

Bradley, R. H., and Corwyn, R. F. (2002). Socioeconomic status and child development. Annu. Rev. Psychol. 53, 371–399. doi: 10.1146/annurev.psych.53.100901.135233

Bridges, D., Pitiot, A., MacAskill, M. R., and Peirce, J. W. (2020). The timing mega-study: comparing a range of experiment generators, both lab-based and online. PeerJ 8, 1–29. doi: 10.7717/peerj.9414

Buhrmester, M., Kwang, T., and Gosling, S. D. (2011). Amazon’s mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect. Psychol. Sci. 6, 3–5. doi: 10.1177/1745691610393980

Calvert, S. L., Rideout, V. J., Woolard, J. L., Barr, R. F., and Strouse, G. A. (2005). Age, ethnicity, and socioeconomic patterns in early computer use: a national survey. Am. Behav. Sci. 48, 590–607. doi: 10.1177/0002764204271508

Chandler, J. J., and Paolacci, G. (2017). Lie for a dime: when most prescreening responses are honest but most study participants are impostors. Soc. Psychol. Personal. Sci. 8, 500–508. doi: 10.1177/1948550617698203

Chetverikov, A., and Upravitelev, P. (2016). Online versus offline: the Web as a medium for response time data collection. Behav. Res. Methods 48, 1086–1099. doi: 10.3758/s13428-015-0632-x

Chmielewski, M., and Kucker, S. C. (2020). An MTurk Crisis? Shifts in data quality and the impact on study results. Soc. Psychol. Personal. Sci. 11, 464–473. doi: 10.1177/1948550619875149

Clifford, S., and Jerit, J. (2014). Is there a cost to convenience? An experimental comparison of data quality in laboratory and online studies. J. Exp. Polit. Sci. 1, 120–131. doi: 10.1017/xps.2014.5

Corrigan-Gibbs, H., Gupta, N., Northcutt, C., Cutrell, E., and Thies, W. (2015). Deterring cheating in online environments. ACM Trans. Comput.-Hum. Interact. 22, 1–23. doi: 10.1145/2810239

Couse, L. J., and Chen, D. W. (2010). A tablet computer for young children? Exploring its viability for early childhood education. J. Res. Technol. Educ. 43, 75–98. doi: 10.1080/15391523.2010.10782562

Dandurand, F., Shultz, T. R., and Onishi, K. H. (2008). Comparing online and lab methods in a problem-solving experiment. Behav. Res. Methods 40, 428–434. doi: 10.3758/BRM.40.2.428

Davidson, M. C., Amso, D., Anderson, L. C., and Diamond, A. (2006). Development of cognitive control and executive functions from 4 to 13 years: evidence from manipulations of memory, inhibition, and task switching. Neuropsychologia 44, 2037–2078. doi: 10.1016/j.neuropsychologia.2006.02.006

DeLoache, J. S., Chiong, C., Sherman, K., Islam, N., Vanderborght, M., Troseth, G. L., et al. (2010). Do babies learn from baby media? Psychol. Sci. 21, 1570–1574. doi: 10.1177/0956797610384145

Dennis, S., Goodson, B. M., and Pearson, C. (2018). Online worker fraud and evolving threats to the integrity of MTurk data: a discussion of virtual private servers and the limitations of IP-based screening procedures. Behav. Res. Account. 2019, 1–55. doi: 10.2139/ssrn.3233954

Ding, X. P., Omrin, D. S., Evans, A. D., Fu, G., Chen, G., and Lee, K. (2014). Elementary school children’s cheating behavior and its cognitive correlates. J. Exp. Child Psychol. 121, 85–95. doi: 10.1016/j.jecp.2013.12.005

Donker, A., and Reitsma, P. (2007). Young children’s ability to use a computer mouse. Comput. Educ. 48, 602–617. doi: 10.1016/j.compedu.2005.05.001

Drouin, M., McDaniel, B. T., Pater, J., and Toscos, T. (2020). How parents and their children used social media and technology at the beginning of the COVID-19 pandemic and associations with anxiety. Cyberpsychol. Behav. Soc. Network. 23, 727–736. doi: 10.1089/cyber.2020.0284

Dworkin, J., Hessel, H., Gliske, K., and Rudi, J. H. (2016). A comparison of three online recruitment strategies for engaging parents. Fam. Relat. 65, 550–561. doi: 10.1111/fare.12206

Evans, A. D., and Lee, K. (2011). Verbal deception from late childhood to middle adolescence and its relation to executive functioning skills. Dev. Psychol. 47, 1108–1116. doi: 10.1037/a0023425

Fargas-Malet, M., McSherry, D., Larkin, E., and Robinson, C. (2010). Research with children: methodological issues and innovative techniques. J. Early Childhood Res. 8, 175–192. doi: 10.1177/1476718x09345412

Finley, A. J., and Penningroth, S. L. (2015). “Online versus in-lab: pros and cons of an online prospective memory experiment,” in Advances in Psychology Research, Vol. 113, eds A. M. Columbus (Hauppauge, NY: Nova Science Publishers, Inc.), 135–162.

Garaizar, P., and Reips, U. D. (2019). Best practices: two Web-browser-based methods for stimulus presentation in behavioral experiments with high-resolution timing requirements. Behav. Res. Methods 51, 1441–1453. doi: 10.3758/s13428-018-1126-4

Garon, N., Bryson, S. E., and Smith, I. M. (2008). Executive function in preschoolers: a review using an integrative framework. Psychol. Bull. 134, 31–60. doi: 10.1037/0033-2909.134.1.31

Geist, E. A. (2012). A qualitative examination of two year-olds interaction with tablet based interactive technology. J. Instruct. Psychol. 39, 26–35. https://link.gale.com/apps/doc/A303641377/HRCA?u=colosprings&sid=summon&xid=ae8c6fbe.

Grootswagers, T. (2020). A primer on running human behavioural experiments online. Behav. Res. Methods 52, 2283–2286. doi: 10.3758/s13428-020-01395-3

Heyman, G. D., Fu, G., Lin, J., Qian, M. K., and Lee, K. (2015). Eliciting promises from children reduces cheating. J. Exp. Child Psychol. 139, 242–248. doi: 10.1016/j.jecp.2015.04.013

Hunt, N. C., and Scheetz, A. M. (2018). Using MTurk to distribute a survey or experiment: methodological considerations. J. Inform. Syst. 33, 43–65. doi: 10.2308/isys-52021

Jensen-Doss, A., Patel, Z. S., Casline, E., Mora Ringle, V. A., and Timpano, K. R. (2021). Using mechanical turk to study parents and children: an examination of data quality and representativeness. J. Clin. Child Adolescent Psychol. [Online ahead of print] 1–15. doi: 10.1080/15374416.2020.1815205

Jin, Y. R., and Lin, L. Y. (2021). Relationship between touchscreen tablet usage time and attention performance in young children. J. Res. Technol. Educ. 1–10. doi: 10.1080/15391523.2021.1891995

Jones, M. S., House, L. A., and Gao, Z. (2015). Respondent screening and revealed preference axioms: testing quarantining methods for enhanced data quality in web panel surveys. Public Opin. Q. 79, 687–709. doi: 10.1093/poq/nfv015

Kane, M. J., Brown, L. H., McVay, J. C., Silvia, P. J., Myin-Germeys, I., and Kwapil, T. R. (2007). For whom the mind wanders, and when: an experience-sampling study of working memory and executive control in daily life. Psychol. Sci. 18, 614–621. doi: 10.1111/j.1467-9280.2007.01948.x

Kannass, K. N., Colombo, J., and Wyss, N. (2010). Now, pay attention! the effects of instruction on children’s attention. J. Cogn. Dev. 11, 509–532. doi: 10.1080/15248372.2010.516418

Kennedy, R., Clifford, S., Burleigh, T., Waggoner, P. D., Jewell, R., and Winter, N. J. G. (2020). The shape of and solutions to the MTurk quality crisis. Political Sci. Res. Methods 8, 614–629. doi: 10.1017/psrm.2020.6

Kiefer, M., Schuler, S., Mayer, C., Trumpp, N. M., Hille, K., and Sachse, S. (2015). Handwriting or Typewriting? The influence of pen- or keyboard-based writing training on reading and writing performance in preschool children. Adv. Cogn. Psychol. 11, 136–146. doi: 10.5709/acp-0178-7

Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J., and Couper, M. (2004). Psychological research online: report of board of scientific affairs’ advisory group on the conduct of research on the internet. Am. Psychol. 59, 105–117. doi: 10.1037/0003-066X.59.2.105

Lee, Y. S., Seo, Y. W., and Siemsen, E. (2018). Running behavioral operations experiments using Amazon’s mechanical turk. Product. Operat. Manag. 27, 973–989. doi: 10.1111/poms.12841

Lilleholt, L., Schild, C., and Zettler, I. (2020). Not all computerized cheating tasks are equal: a comparison of computerized and non-computerized versions of a cheating task. J. Econ. Psychol. 78:102270. doi: 10.1016/j.joep.2020.102270

Litman, L., Robinson, J., and Rosenzweig, C. (2015). The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk. Behav. Res. Methods 47, 519–528. doi: 10.3758/s13428-014-0483-x

Liu, D., Sabbagh, M. A., Gehring, W. J., and Wellman, H. M. (2009). Neural correlates of children’s theory of mind development. Child Dev. 80, 318–326. doi: 10.1111/j.1467-8624.2009.01262.x

McCulloch, A. (2006). Variation in children’s cognitive and behavioural adjustment between different types of place in the British National Child Development Study. Soc. Sci. Med. 62, 1865–1879. doi: 10.1016/j.socscimed.2005.08.048

Montroy, J. J., Bowles, R. P., Skibbe, L. E., McClelland, M. M., and Morrison, F. J. (2016). The development of self-regulation across early childhood. Dev. Psychol. 52, 1744–1762. doi: 10.1037/dev0000159

Myers, L. J., LeWitt, R. B., Gallo, R. E., and Maselli, N. M. (2017). Baby FaceTime: can toddlers learn from online video chat? Dev. Sci. 20:e12430. doi: 10.1111/desc.12430

Nielsen, M., Haun, D., Kärtner, J., and Legare, C. H. (2017). The persistent sampling bias in developmental psychology: a call to action. J. Exp. Child Psychol. 162, 31–38. doi: 10.1016/j.jecp.2017.04.017

Nussenbaum, K., Scheuplein, M., Phaneuf, C. V., Evans, M. D., and Hartley, C. A. (2020). Moving developmental research online: comparing in-lab and web-based studies of model-based reinforcement learning. Collabra: Psychol. 6, 17213. doi: 10.1525/collabra.17213

Oppenheimer, D. M., Meyvis, T., and Davidenko, N. (2009). Instructional manipulation checks: detecting satisficing to increase statistical power. J. Exp. Soc. Psychol. 45, 867–872. doi: 10.1016/j.jesp.2009.03.009

Paolacci, G., and Chandler, J. (2014). Inside the turk: understanding mechanical turk as a participant pool. Curr. Direct. Psychol. Sci. 23, 184–188. doi: 10.1177/0963721414531598

Poth, C. H., Foerster, R. M., Behler, C., Schwanecke, U., Schneider, W. X., and Botsch, M. (2018). Ultrahigh temporal resolution of visual presentation using gaming monitors and G-Sync. Behav. Res. Methods 50, 26–38. doi: 10.3758/s13428-017-1003-6

Quintana, S. M., Chao, R. K., Cross, W. E., Hughes, D., Gall, S. N., Aboud, F. E., et al. (2006). Race, ethnicity, and culture in child development: contemporary research and future directions. Child Dev. 77, 1129–1141. doi: 10.1111/j.1467-8624.2006.00951

Read, J., MacFarlane, S., and Casey, C. (2001) “Measuring the usability of text input methods for children,” in People and Computers XV—Interaction Without Frontiers, eds A. Blandford, J. Vanderdonckt, and P. Gray (London: Springer). doi: 10.1007/978-1-4471-0353-0_35

Rhodes, M., Rizzo, M. T., Foster-Hanson, E., Moty, K., Leshin, R. A., Wang, M., et al. (2020). Advancing developmental science via unmoderated remote research with children. J. Cogn. Dev. 21, 477–493.

Ribner, A. D., Coulanges, L., Friedman, S., and Libertus, M. E. (2021). Screen time in the COVID Era: international trends of increasing use among 3- to 7-Year-Old Children. J. Pediatr. doi: 10.1016/j.jpeds.2021.08.06

Rollins, L., and Riggins, T. (2021). Adapting event-related potential research paradigms for children: considerations from research on the development of recognition memory. Dev. Psychobiol. 63:e22159. doi: 10.1002/dev.22159

Ross-Sheehy, S., Reynolds, E., and Eschman, B. (2021). Unsupervised online assessment of visual working memory in 4- to 10-Year-Old Children: array size influences capacity estimates and task performance. Front. Psychol. 12:692228. doi: 10.3389/fpsyg.2021.692228

Rowley, S. J., and Camacho, T. C. (2015). Increasing diversity in cognitive developmental research: issues and solutions. J. Cogn. Dev. 16, 683–692. doi: 10.1080/15248372.2014.976224

Sari, M. K. (2021). The impacts of Covid-19 pandemy in term of technology literacy usage on students learning experience. J. Sos. Humaniora JSH 43–51. doi: 10.12962/j24433527.v0i0.8348

Scott, K., Chu, J., and Schulz, L. (2017). Lookit (Part 2): assessing the viability of online developmental research, results from three case studies. Open Mind 1, 15–29. doi: 10.1162/opmi_a_00001

Shapiro, D. N., Chandler, J., and Mueller, P. A. (2013). Using mechanical turk to study clinical populations. Clin. Psychol. Sci. 1, 213–220. doi: 10.1177/2167702612469015

Sheskin, M., Scott, K., Mills, C. M., Bergelson, E., Bonawitz, E., Spelke, E. S., et al. (2020). Online developmental science to foster innovation, access, and impact. Trends Cogn. Sci. 24, 675–678. doi: 10.1016/j.tics.2020.06.004

Skibbe, L. E., Montroy, J. J., Bowles, R. P., and Morrison, F. J. (2019). Self-regulation and the development of literacy and language achievement from preschool through second grade. Early Childhood Res. Q. 46, 240–251. doi: 10.1016/j.ecresq.2018.02.005

Strouse, G. A., and Samson, J. E. (2021). Learning from video: a meta-analysis of the video deficit in children ages 0 to 6 years. Child Dev. 92, e20–e38. doi: 10.1111/cdev.13429

Teitcher, J. E. F., Bockting, W. O., Bauermeister, J. A., Hoefer, C. J., Miner, M. H., and Klitzman, R. L. (2015). Detecting, preventing, and responding to “Fraudsters” in internet research: ethics and tradeoffs. J. Law Med. Ethics 43, 116–133. doi: 10.1111/jlme.12200

Vales, C., Wu, C., Torrance, J., Shannon, H., States, S. L., and Fisher, A. V. (2021). Research at a distance: replicating semantic differentiation effects using remote data collection with children participants. Front. Psychol. 12:697550. doi: 10.3389/fpsyg.2021.697550

Varao-Sousa, T. L., Smilek, D., and Kingstone, A. (2018). In the lab and in the wild: how distraction and mind wandering affect attention and memory. Cogn. Res.: Principles Implications 3:42. doi: 10.1186/s41235-018-0137-0

Vatavu, R. D., Cramariuc, G., and Schipor, D. M. (2015). Touch interaction for children aged 3 to 6 years: experimental findings and relationship to motor skills. Int. J. Hum. Comput. Stud. 74, 54–76. doi: 10.1016/j.ijhcs.2014.10.007

Woodward, J., Shaw, A., Aloba, A., Jain, A., Ruiz, J., and Anthony, L. (2017). “Tablets, tabletops, and smartphones: cross-platform comparisons of children’s touchscreen interactions,” in Proceedings of the 19th ACM International Conference on Multimodal Interaction, (New York, NY: ACM), 5–14.

Zack, E., Barr, R., Gerhardstein, P., Dickerson, K., and Meltzoff, A. N. (2009). Infant imitation from television using novel touch screen technology. Br. J. Dev. Psychol. 27, 13–26. doi: 10.1348/026151008x334700

Ziemer, C. J., Wyss, S., and Rhinehart, K. (2021). The origins of touchscreen competence: examining infants’ exploration of touchscreens. Infant Behav. Dev. 64:101609.

Zwarun, L., and Hall, A. (2014). What’s going on? Age, distraction, and multitasking during online survey taking. Comput. Hum. Behav. 41, 236–244. doi: 10.1016/j.chb.2014.09.041

Keywords: remote research, remote research design, remote research software, development, children

Citation: Shields MM, McGinnis MN and Selmeczy D (2021) Remote Research Methods: Considerations for Work With Children. Front. Psychol. 12:703706. doi: 10.3389/fpsyg.2021.703706

Received: 30 April 2021; Accepted: 27 September 2021;
Published: 28 October 2021.

Edited by:

Natasha Kirkham, Birkbeck, University of London, United Kingdom

Reviewed by:

Mary M. Flaherty, University of Illinois at Urbana-Champaign, United States
Naomi Sweller, Macquarie University, Australia

Copyright © 2021 Shields, McGinnis and Selmeczy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Michelle M. Shields, mgarci24@uccs.edu; Diana Selmeczy, Diana.Selmeczy@uccs.edu
