PERSPECTIVE article

Front. Educ., 18 January 2021
Sec. Digital Education

A Framework for Applying the Learning Sciences to MOOC Design

Darcy G. Gordon* and Mary Ellen Wiltrout*

  • Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, United States

Evidence-based practices grounded in the learning sciences provide an opportunity for improved learning experiences in traditional in-person, hybrid, and online environments. We advocate specifically that large-scale, online learning experiences, such as Massive Open Online Courses (MOOCs), benefit from applications of the learning sciences. To that end, we describe how and why we use specific learning science practices in a biochemistry MOOC, intending to contribute to the discussion about the quality of online learning experiences and to lower the barrier for other practitioners seeking a framework for implementing evidence-based course design. We believe that the application of the learning sciences makes online learning experiences more rigorous and effective, and that practitioners should optimize the use of these strategies through well-designed tests in specific contexts.

Introduction

The quality of learning experiences across the 13,500 Massive Open Online Courses (MOOCs) offered since 2012 (Shah, 2019) is heterogeneous. There are calls to assess the design of MOOCs, establish criteria for development, and standardize quality assurance measures (Alario-Hoyos et al., 2014a,b; Kopp and Lackner, 2014; Yousef et al., 2014; Doherty et al., 2015; Lee et al., 2016). Best practices for teaching and learning rooted in the learning sciences advance scientific teaching methods through the adoption of active learning (Handelsman et al., 2004; Freeman et al., 2014) and promote efficient and effective ways to study (Miyatsu et al., 2018). Digital learning designers translate many of these best practices to online learning environments. Our aim is to document this translation from the perspective of MOOC developers by presenting a framework for applying evidence-based strategies in a biochemistry MOOC and advocating for testing the effectiveness of instructional practices in specific online contexts.

As practitioners concerned with maximizing learning gains in an understudied yet widely used non-traditional learning environment (Veletsianos and Shepherdson, 2016), we are agnostic about adhering to strategies from any one conceptual framework of educational research over another. Digital learning designers should have all possible tools at their disposal for creating engaging and effective learning experiences. To this end, we take a systematic, eclectic instructional design approach (Yanchar and Gabbitas, 2011; Honebein and Sink, 2012) to learning engineering, drawing upon practices rooted in behaviorism, cognitivism, social learning, and constructivism to address the challenges of online learning at scale.

Here we document practices supported by the learning sciences that we readily implement in an online learning environment, in order to contribute to conversations in the community of practice about evidence-based approaches to MOOC design. We hope these conversations lower the barrier for other developers of online and hybrid experiences to apply these methods. We found the following practices both robust from a theoretical standpoint and practical to apply in our course design. We outline the evidence-based practices applied to course assets organized by function (pre-instruction, multimedia content, formative assessment, supporting resources, and summative assessment), although many of the practices overlap in their usage. This article is an exposition on why and how to use the learning sciences in a specific context, highlighting a few strategies as a model for others; it is neither an exhaustive review of these practices nor a formal assessment of their efficacy.

Course Overview

The biochemistry MOOC has a hierarchical structure composed mainly of eight topic units (like chapters in a book), each broken down into one or two learning sequences (analogous to sections of each chapter). The learning sequences contain a series of course assets (Figure 1, colorful inner nodes) informed by evidence-based principles (Figure 1, gray outer nodes). These course assets are designed to prime the learner for the material (pre-instructional strategies; Figure 1, green nodes), convey foundational information (multimedia content; Figure 1, yellow node), build understanding (formative assessments; Figure 1, blue nodes), and support learning (supporting resources; Figure 1, purple nodes). In addition to the topic units and other supplemental material, the course ends with a comprehensive exam (summative assessment; Figure 1, orange node).

Figure 1. A mapping of evidence-based practices stemming from the learning sciences (gray outer nodes) to course assets (colorful inner nodes). The course assets are color-coded by function: green = pre-instruction, yellow = multimedia, purple = supporting resources, blue = formative assessments, and orange = summative assessment. To explore this figure as an interactive visualization, please visit https://web.mit.edu/mitxbio/learning_sciences.html.

Evidence-Based Practices Employed

Pre-Instruction

To prime our MOOC learners for the introduction of new material in each learning sequence, we employ two pre-instructional techniques: learning objectives and advance organizers (Figure 1, green nodes). At the beginning of each learning sequence, we outline one or two big-picture goals as well as several measurable objectives the learner should meet by engaging with the sequence material. By outlining the goals and objectives that we used to develop and align the assessments during course development, we clearly communicate our expectations for the learner and promote transparency in the course (Hartley and Davies, 1976). We know from a survey of our hybrid learning experiences that students find these goals and objectives useful for studying, even when not explicitly directed to use them in this way. We also use a type of advance organizer (Weisberg, 1970; Hartley and Davies, 1976; Meng and Patty, 1991; Bui and McDaniel, 2015) in the form of image handouts. Presenting key images ahead of instruction gives learners an opportunity to preview the material, scaffold their note-taking, and make deeper connections during the video segments. These handouts contain a series of images that appear in the sequences, each coupled with an open-ended yet focused question (Wylie and Chi, 2014) designed to make connections between what each image represents and the learning sequence topic (Supplementary Figure 1). We do not formally assess the coupled questions, but rather use them to prompt self-explanation. Learners answer the questions in their own words, revealing their own thought processes and understanding, which promotes metacognitive processes (Chi et al., 1994; Aleven and Koedinger, 2002; Ainsworth and Th Loizou, 2003).

Multimedia Content

To connect learners' prior knowledge to more advanced concepts in biochemistry, we offer a series of videos and images designed to maximize engagement and minimize extraneous cognitive load (Sweller, 1994, 2005; Sweller et al., 2019) (Figure 1, yellow node). We filmed in a live classroom, where the professor discusses the material unscripted while writing on a chalkboard. Students may prefer the professor writing on a board over presenting slides because the method is more interactive, the pace of the presentation is more accessible, and the information is streamlined (Armour et al., 2016). There is also evidence that unscripted lectures may be more engaging to learners than scripted deliveries (Thornton et al., 2017). This method of multimedia content delivery meets the principles of multimedia learning that promote generative processing (personalization, voice, and embodiment) as well as some that manage essential processing (modality and segmenting) (Clark and Mayer, 2016; Mayer, 2017). Brief animations of dynamic, process-oriented concepts and complex structures are often embedded within the videos, overlaid on the active discussion in class (Supplementary Video 1). These animations are consistent in style and color to familiarize learners with the representation of specific concepts (pre-training) (Mayer et al., 2002). We also took care to adhere to the remaining multimedia learning principles (coherence, signaling, redundancy, and contiguity) while maintaining scientific accuracy, to promote a deeper understanding of the core concepts represented and to keep cognitive load manageable for learners (Sweller, 2005; Clark and Mayer, 2016; Mayer, 2017).

Formative Assessments

We include over 600 formative assessment questions to guide learners through the process of constructing a foundational understanding of biochemistry and transferring that understanding to novel contexts (Figure 1, blue nodes). All assessments are graded automatically, including multiple-choice, checkbox, numerical-input, and text-input question types. To gauge understanding of the new material introduced in each video segment, we intersperse test yourself questions (TYs) between videos. We include concrete examples in TYs to help learners generalize abstract ideas and transfer knowledge to different contexts (Paivio et al., 1994; Weinstein et al., 2018). These concrete examples also serve the dual purpose of personally connecting the learner to the material. Our examples often have relevance to current research, medical applications, or everyday life, which can help promote intrinsic motivation to learn (Ambrose et al., 2010). One such example is a TY in the pH and buffers unit, where we ask learners to calculate the pH of a homemade buttermilk substitute given the volume of lemon juice and milk, the amount of citric acid in lemon juice, and the dissociation constant of citric acid. This question goes beyond a simple calculation and engages the learner in how biochemistry applies to daily living.
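To illustrate the kind of reasoning this TY demands, here is a minimal sketch of the calculation in Python. All quantities (the volumes, the citric acid content, and the use of only the first dissociation constant) are hypothetical stand-ins rather than the course's actual values, and the milk's own buffering capacity is ignored for simplicity.

# A minimal sketch of the buttermilk-substitute TY calculation.
# All numbers are illustrative stand-ins for the actual question's values.

import math

lemon_juice_ml = 15.0         # hypothetical volume of lemon juice
milk_ml = 240.0               # hypothetical volume of milk
citric_acid_g_per_ml = 0.048  # assumed citric acid content of lemon juice
citric_acid_mw = 192.12       # molar mass of citric acid (g/mol)
ka1 = 7.4e-4                  # first dissociation constant of citric acid

total_volume_l = (lemon_juice_ml + milk_ml) / 1000.0
conc_acid = (lemon_juice_ml * citric_acid_g_per_ml / citric_acid_mw) / total_volume_l

# Treat citric acid as a weak monoprotic acid: Ka = x**2 / (C - x),
# so x = (-Ka + sqrt(Ka**2 + 4 * Ka * C)) / 2, where x = [H+].
h_conc = (-ka1 + math.sqrt(ka1**2 + 4 * ka1 * conc_acid)) / 2
print(f"pH = {-math.log10(h_conc):.2f}")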

Engaging learners is a challenge for most formative assessments, so we constructed problem sets (PSs) at the end of every unit to generate interest and to prompt learners to think deeply about what they learned and apply their knowledge to new contexts. We take inspiration from narrative-centered learning in gaming (Mott et al., 1999) to leverage the great appeal of storytelling in education. The PS questions are connected by an ongoing story about taking biochemistry and engaging in research as an undergraduate, casting the learners as characters in the narrative. The assessments that compose the PSs reference objectives covered throughout the associated learning sequence(s), and as such represent both spaced retrieval and interleaving practices (Karpicke and Roediger, 2007; Roediger and Butler, 2011; Birnbaum et al., 2013). Retrieval practices such as these, and the resulting testing effect, are useful techniques for studying and retaining information on longer timescales (Roediger and Karpicke, 2006; Rowland, 2014). The PS questions also contain scaffolding that breaks complex questions into component parts, offering opportunities for the learners to adapt their approaches based on the immediate feedback given (Reiser, 2004). One example of scaffolding in the course tasks the learners with a series of questions about a mathematical description of cooperative binding. First, we ask learners to identify the correct mathematical transformations of their raw data to set up the calculation. Then the learners select the correct mathematical expression to fit the data and perform calculations to generate a model from the transformed data and the selected expression. Finally, learners must interpret the output of the model by explaining the biochemical logic of their results (a sketch of this workflow follows below). Learners show less confusion, indicated by fewer discussion forum posts about challenging assessments, when we break assessments into stepwise processes.
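As one way to picture this scaffold, the following sketch walks through the three steps using the Hill equation as the fitted expression; the choice of equation and the synthetic data are our illustrative assumptions, not the course's actual problem or dataset.

# Illustrative sketch of the scaffolded cooperative-binding analysis.
# The Hill equation and the synthetic data are assumptions for this example.

import numpy as np

# Step 1: transform the raw data for a Hill plot; plotting
# log10(theta / (1 - theta)) against log10[L] linearizes cooperative binding.
ligand = np.array([1e-7, 3e-7, 1e-6, 3e-6, 1e-5, 3e-5, 1e-4])  # [L] in M
theta = np.array([0.02, 0.08, 0.30, 0.62, 0.88, 0.97, 0.99])   # fractional saturation
x = np.log10(ligand)
y = np.log10(theta / (1 - theta))

# Step 2: fit the linearized expression y = n * x - n * log10(Kd).
n_hill, intercept = np.polyfit(x, y, 1)
kd = 10 ** (-intercept / n_hill)

# Step 3: interpret the model: a Hill coefficient n > 1 indicates
# positive cooperativity, as in hemoglobin-oxygen binding.
print(f"n = {n_hill:.2f}, apparent Kd = {kd:.2e} M")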

To give learners real-time feedback on whether they answered a formative assessment correctly, and to provide an opportunity to reassess and adapt their strategy before answering again, every formative assessment in the MOOC is graded immediately for correctness. Moreover, since previous work demonstrates the benefit of formative feedback (Bangert-Drowns and Morgan, 1991; Moreno, 2004; Nicol and Macfarlane-Dick, 2006; Hattie and Timperley, 2007; Shute, 2008), we offer detailed and specific feedback that clarifies attributes of the question target, the larger topic at hand, or the relationships between correct and incorrect responses (Shute, 2008). In addition to learning more about the correct answer, or why other options are incorrect, the learner sees the process of arriving at the correct solution through a worked example for some question types. Worked examples guide the learner step-by-step to the solution from an expert's point of view, with the intention of facilitating the transfer of knowledge and skills to similar questions (Atkinson et al., 2000).
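A platform-neutral sketch of this pattern appears below: each answer option is paired with targeted feedback so a response can be graded and explained immediately. The question, options, and feedback text are hypothetical, and a real MOOC platform would supply its own grading interface.

# Hypothetical, platform-neutral sketch of immediate, response-specific
# feedback for an auto-graded multiple-choice question.

QUESTION = {
    "prompt": "Which interaction primarily stabilizes an alpha helix?",
    "correct": "b",
    "feedback": {
        "a": "Incorrect. Disulfide bonds are covalent and contribute to "
             "tertiary structure, not regular backbone geometry.",
        "b": "Correct! Hydrogen bonds between the backbone C=O of residue i "
             "and the N-H of residue i+4 define the alpha helix.",
        "c": "Incorrect. Side-chain ionic interactions may occur, but they "
             "do not define helical secondary structure.",
    },
}

def grade(choice):
    """Return correctness plus targeted feedback for one learner response."""
    is_correct = choice == QUESTION["correct"]
    return is_correct, QUESTION["feedback"].get(choice, "Unrecognized option.")

# Immediate feedback lets the learner reassess and answer again right away.
print(grade("a"))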

Supporting Resources

We include a number of resources meant to supplement learning, although they are not required and do not count toward course progress (Figure 1, purple nodes). A separate section of the MOOC holds study resources, which include text, graphics, videos, and links that detail specific concepts, skills, and techniques that the learners should have as prerequisite knowledge. These study resources are linked explicitly in the sections of the course where their reference could be useful. For example, in the enzyme catalysis learning sequence, we link relevant TYs to the organic chemistry study resources for a refresher on functional groups and reaction terminology. By embedding references to material that can function as a review, we attempt to activate learners' prior knowledge, a foundational step in facilitating lasting learning (Ambrose et al., 2010).

We also include optional molecular viewer activities. We use molecular visualization software that structural biochemists employ in their research to help learners view and manipulate protein structural models from deposited data. These molecular viewer activities are thus an example of authentic learning, where learners apply their newly learned biochemistry skills in a context shared with professional scientists (Herrington et al., 2004; Herrington and Kervin, 2007; Lombardi, 2007; Oliver et al., 2007). The connection of educational materials to real-life applications is related to expert-level thinking (Semsar et al., 2011), and may help adult learners see themselves as belonging in scientific fields (Rossiter, 2007). Although these activities exploring protein structure are consequential for learning, the assessments associated with them do not influence the learners' grades. Low-stakes tasks such as these facilitate trying something intellectually risky and relieve some pressure to perform on assessments, which may also support intrinsic motivation and self-esteem (Nicol and Macfarlane-Dick, 2006). In this context, the optional molecular viewer assignments reduce the pressure to perform while engaging with a potentially unfamiliar tool.
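As one illustration of working with deposited structural data, a few lines of scripting suffice to fetch and summarize a Protein Data Bank entry. This sketch uses Biopython rather than the course's viewer, and the PDB code is an arbitrary example, so both are assumptions for illustration only.

# Illustrative only: fetching and summarizing a deposited structure with
# Biopython. The PDB code (1MBN, sperm whale myoglobin) is an arbitrary
# example, not necessarily one used in the course.

from Bio.PDB import PDBList, MMCIFParser

pdb_list = PDBList()
path = pdb_list.retrieve_pdb_file("1MBN", pdir=".", file_format="mmCif")

parser = MMCIFParser(QUIET=True)
structure = parser.get_structure("myoglobin", path)

# Count chains and residues as a first manipulation of the model.
chains = list(structure.get_chains())
n_residues = sum(1 for _ in structure.get_residues())
print(f"{len(chains)} chain(s), {n_residues} residues")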

At the bottom of each page in the course (excluding summative assessments), we include the opportunity to engage in the discussion forum. Discussion forum posts offer a way for learners to engage personally with each other and the instructors over the course material. Learners are encouraged to introduce themselves and their motivations for taking the MOOC, which we hope connects the material to personal values they hold. By articulating the value that the MOOC offers for them, learners may feel more motivated to sustain participation (Canning et al., 2018). The discussion forum is also a place where learners can have informal conversations about the material, ask for help from staff or peers, and respond to each other's questions and comments. Engaging with other learners and staff, and helping each other build an understanding of course material, can contribute to academic achievement and satisfaction (Jung et al., 2002; Kellogg et al., 2014).

Summative Assessment

The final, summative assessment of the biochemistry MOOC is a competency exam (CE) (Figure 1, orange node). We designed the CE to test the majority of learning objectives introduced in the course by requiring learners to engage in scientific thinking, synthesize concepts, and transfer knowledge to new contexts. We use the hierarchy of cognitive processing outlined by Bloom's taxonomy (Krathwohl, 2002; Krathwohl and Anderson, 2009) to guide the creation of all assessments. In the taxonomy's terms, many CE questions, like PS questions, require application, analysis, and evaluation. Both CE and PS questions are more cognitively demanding to answer than TYs, which rely more heavily on recall.

Discussion

We used a systematic, eclectic instructional design approach to drive our design decisions in a biochemistry MOOC. This approach largely follows from our unique perspectives as biology PhDs with pedagogical experience. Our roles in course development involve learning engineering (Wilcox et al., 2016), which lies at the intersection of subject-matter expertise, educational technology, instructional and graphic design, educational research, data analytics, teaching, and project management. Although familiar with the learning sciences literature, we are free from the constraints, set by tenure tracks or grants, of specializing in one area of educational research. This freedom allows us to maintain a practitioner's point of view, draw inspiration from different frameworks of how learning works, and collectively deploy an amalgamation of strategies that enhance the design of our learning experiences.

As scientists, we value evidence-based practices and the empirical approach the learning sciences offer. As developers of digital learning experiences, we understand that there are orchestration constraints in MOOC development that present challenges to implementing the best practices for teaching and learning outlined in the literature. There is a great need to test educational design decisions in context and to assess the relevant variables in successful implementation (Moir, 2018). These evaluations of design should be both formal, as randomized controlled trials, the standard for testing the effects of interventions in education (US Department of Education; Institute of Education Sciences; National Center for Education Evaluation and Regional Assistance, 2003), and informal, through the iterative revisions necessary to keep MOOCs current and rigorous. Testing in specific contexts is needed, furthermore, because what best promotes engagement and learning differs across institutions and courses. For example, the relationship between video length and learner engagement is a popular topic because of the heavy reliance on multimedia in many MOOCs. In one study, researchers recommend that videos be <6 min to maximize engagement (Guo et al., 2014); however, our own research indicates that video length is not a significant determinant of engagement in another of our biology MOOCs (Thornton et al., 2017). This exemplifies the need to test different strategies, collect learner data, and make evidence-based design decisions, informed by implementation research, that are relevant to each course.
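In practice, such an in-context check can be small. The sketch below tests whether video length predicts engagement in a course's own logs; the file name, column names, and engagement metric (median fraction of each video watched) are hypothetical assumptions about the data, not our actual pipeline.

# Hypothetical sketch: does video length predict engagement in our own data?
# File name, columns, and the engagement metric are illustrative assumptions.

import pandas as pd
from scipy import stats

events = pd.read_csv("video_events.csv")  # one row per learner video view

per_video = (
    events.assign(frac_watched=events["seconds_watched"] / events["video_length_s"])
          .groupby("video_id")
          .agg(length_s=("video_length_s", "first"),
               engagement=("frac_watched", "median"))
)

# Correlate length with engagement for this course's videos.
r, p = stats.pearsonr(per_video["length_s"], per_video["engagement"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")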

Application of the learning sciences to hybrid course and MOOC design provides a strong foundation of evidence-based practices that one can optimize for different online learning experiences. Making design decisions grounded in scientific evidence is a crucial first step. Equally important is the dissemination of these decision-making processes and the subsequent evaluation of their implementation. By documenting the process of incorporating and testing applications of the learning sciences, we can collectively enrich the community of practice for digital learning designers while providing a more robust experience for learners. We see this perspective as a step toward inciting conversations and actions around evidence-based design efforts in online educational environments.

Author Contributions

DG and MW contributed to the conception and scope of the perspective. DG wrote the initial draft of the manuscript, and both authors contributed to manuscript revision and read and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2020.500481/full#supplementary-material

Supplementary Figure 1. An example of an image handout from the protein structure learning sequence highlighting essential images complete with focused, open-ended concept questions.

Supplementary Video 1. A clip of a video segment detailing the steps in calcium ion transportation, where we cut back and forth between the professor explaining each step to a live classroom and an animation of this dynamic process added in post-production.

References

Ainsworth, S., and Th Loizou, A. (2003). The effects of self-explaining when learning with text or diagrams. Cogn. Sci. 27, 669–681. doi: 10.1207/s15516709cog2704_5

Alario-Hoyos, C., Pérez-Sanagustín, M., Cormier, D., and Delgado-Kloos, C. (2014a). Proposal for a conceptual framework for educators to describe and design MOOCs. J. Univers. Comput. Sci. 20, 6–23. doi: 10.3217/jucs-020-01-0006

Alario-Hoyos, C., Pérez-Sanagustín, M., Kloos, C. D., and Muñoz-Merino, P. J. (2014b). “Recommendations for the design and deployment of MOOCs: insights about the MOOC digital education of the future deployed in MiríadaX,” in Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality - TEEM'14 (Salamanca: ACM Press), 403–408. doi: 10.1145/2669711.2669931

Aleven, V. A. W. M. M., and Koedinger, K. R. (2002). An effective metacognitive strategy: learning by doing and explaining with a computer-based cognitive tutor. Cogn. Sci. 26, 147–179. doi: 10.1207/s15516709cog2602_1

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., and Norman, M. K. (2010). How Learning Works: Seven Research-Based Principles for Smart Teaching. Hoboken, NJ: John Wiley and Sons.

Armour, C., Schneid, S. D., and Brandl, K. (2016). Writing on the board as students' preferred teaching modality in a physiology course. Adv. Physiol. Educ. 40, 229–233. doi: 10.1152/advan.00130.2015

Atkinson, R. K., Derry, S. J., Renkl, A., and Wortham, D. (2000). Learning from examples: instructional principles from the worked examples research. Rev. Educ. Res. 70, 181–214. doi: 10.3102/00346543070002181

Bangert-Drowns, R. L., and Morgan, M. (1991). The instructional effect of feedback in test-like events. Rev. Educ. Res. 61, 213–238. doi: 10.3102/00346543061002213

Birnbaum, M. S., Kornell, N., Bjork, E. L., and Bjork, R. A. (2013). Why interleaving enhances inductive learning: the roles of discrimination and retrieval. Mem. Cognit. 41, 392–402. doi: 10.3758/s13421-012-0272-7

Bui, D. C., and McDaniel, M. A. (2015). Enhancing learning during lecture note-taking using outlines and illustrative diagrams. J. Appl. Res. Mem. Cogn. 4, 129–135. doi: 10.1016/j.jarmac.2015.03.002

Canning, E. A., Harackiewicz, J. M., Priniski, S. J., Hecht, C. A., Tibbetts, Y., and Hyde, J. S. (2018). Improving performance and retention in introductory biology with a utility-value intervention. J. Educ. Psychol. 110, 834–849. doi: 10.1037/edu0000244

Chi, M. T. H., Leeuw, N. D., Chiu, M.-H., and Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cogn. Sci. 18, 439–477. doi: 10.1207/s15516709cog1803_3

Clark, R. C., and Mayer, R. E. (2016). e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. Hoboken, NJ: John Wiley and Sons. doi: 10.1002/9781119239086

Doherty, I., Harbutt, D., and Sharma, N. (2015). Designing and developing a MOOC. Med. Sci. Educ. 25, 177–181. doi: 10.1007/s40670-015-0123-9

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., et al. (2014). Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. U. S. A. 111, 8410–8415. doi: 10.1073/pnas.1319030111

Guo, P. J., Kim, J., and Rubin, R. (2014). “How video production affects student engagement: an empirical study of MOOC videos,” in Proceedings of ACM Conference on Learning at Scale (L@S) (Atlanta, GA), 4–5. doi: 10.1145/2556325.2566239

Handelsman, J., Ebert-May, D., Beichner, R., Bruns, P., Chang, A., DeHaan, R., et al. (2004). Education: scientific teaching. Science 304, 521–522. doi: 10.1126/science.1096022

Hartley, J., and Davies, I. K. (1976). Preinstructional strategies: the role of pretests, behavioral objectives, overviews and advance organizers. Rev. Educ. Res. 46:28. doi: 10.3102/00346543046002239

Hattie, J., and Timperley, H. (2007). The power of feedback. Rev. Educ. Res. 77, 81–112. doi: 10.3102/003465430298487

Herrington, J., and Kervin, L. (2007). Authentic learning supported by technology: ten suggestions and cases of integration in classrooms. Educ. Media Int. 44, 219–236. doi: 10.1080/09523980701491666

Herrington, J., Reeves, T. C., Oliver, R., and Woo, Y. (2004). Designing authentic activities in web-based courses. J. Comput. High. Educ. 16, 3–29. doi: 10.1007/BF02960280

Honebein, P. C., and Sink, D. L. (2012). The practice of eclectic instructional design. Perform. Improv. 51, 26–31. doi: 10.1002/pfi.21312

Jung, I., Choi, S., Lim, C., and Leem, J. (2002). Effects of different types of interaction on learning achievement, satisfaction and participation in Web-based instruction. Innov. Educ. Teach. Int. 39, 153–162. doi: 10.1080/14703290252934603

Karpicke, J. D., and Roediger, H. L. (2007). Expanding retrieval practice promotes short-term retention, but equally spaced retrieval enhances long-term retention. J. Exp. Psychol. Learn. Mem. Cogn. 33, 704–719. doi: 10.1037/0278-7393.33.4.704

Kellogg, S., Booth, S., and Oliver, K. (2014). A social network perspective on peer supported learning in MOOCs for educators. Int. Rev. Res. Open Distrib. Learn. 15:5. doi: 10.19173/irrodl.v15i5.1852

Kopp, M., and Lackner, E. (2014). “Do MOOCS need a special instructional design?” in Proc. EDULEARN14 Conf. (Barcelona), 11.

Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: an overview. Theory Pract. 41, 212–218. doi: 10.1207/s15430421tip4104_2

Krathwohl, D. R., and Anderson, L. W. (2009). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Harlow: Longman.

Lee, G., Keum, S., Kim, M., and Choi, Y. (2016). A study on the development of a MOOC design model. Educ. Technol. Int. 17, 1–37. Available online at: http://kset.or.kr/eti_ojs/index.php/instruction/article/view/69

Lombardi, M. M. (2007). Authentic learning for the 21st century: an overview. Educ. Learn. Initiat. 1, 1–12. Available online at: https://library.educause.edu/resources/2007/1/authentic-learning-for-the-21st-century-an-overview

Mayer, R. E. (2017). Using multimedia for e-learning: multimedia for e-learning. J. Comput. Assist. Learn. 33, 403–423. doi: 10.1111/jcal.12197

Mayer, R. E., Mathias, A., and Wetzell, K. (2002). Fostering understanding of multimedia messages through pre-training: evidence for a two-stage theory of mental model construction. J. Exp. Psychol. Appl. 8, 147–154. doi: 10.1037/1076-898X.8.3.147

Meng, K., and Patty, D. (1991). Field dependence and contextual organizers. J. Educ. Res. 84, 183–189. doi: 10.1080/00220671.1991.10886013

Miyatsu, T., Nguyen, K., and McDaniel, M. A. (2018). Five popular study strategies: their pitfalls and optimal implementations. Perspect. Psychol. Sci. 13, 390–407. doi: 10.1177/1745691617710510

Moir, T. (2018). Why is implementation science important for intervention design and evaluation within educational settings? Front. Educ. 3:61. doi: 10.3389/feduc.2018.00061

Moreno, R. (2004). Decreasing cognitive load for novice students: effects of explanatory versus corrective feedback in discovery-based multimedia. Instr. Sci. 32, 99–113. doi: 10.1023/B:TRUC.0000021811.66966.1d

Mott, B. W., Callaway, C. B., Zettlemoyer, L. S., Lee, S. Y., and Lester, J. C. (1999). “Towards narrative-centered learning environments,” in Proc. 1999 AAAI Fall Symp. Narrat. Intell. (Menlo Park), 78–82.

Nicol, D. J., and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud. High. Educ. 31, 199–218. doi: 10.1080/03075070600572090

Oliver, R., Herrington, A., Herrington, J., and Reeves, T. C. (2007). Representing authentic learning designs supporting the development of online communities of learners. J. Learn. Des. 2, 1–21. doi: 10.5204/jld.v2i2.36

Paivio, A., Walsh, M., and Bons, T. (1994). Concreteness effects on memory: when and why? J. Exp. Psychol. Learn. Mem. Cogn. 20, 1196–1204. doi: 10.1037/0278-7393.20.5.1196

Reiser, B. J. (2004). Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J. Learn. Sci. 13, 273–304. doi: 10.1207/s15327809jls1303_2

Roediger, H. L., and Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends Cogn. Sci. 15, 20–27. doi: 10.1016/j.tics.2010.09.003

Roediger, H. L., and Karpicke, J. D. (2006). The power of testing memory: basic research and implications for educational practice. Perspect. Psychol. Sci. 1, 181–210. doi: 10.1111/j.1745-6916.2006.00012.x

Rossiter, M. (2007). Possible selves: an adult education perspective. New Dir. Adult Contin. Educ. 114, 5–15. doi: 10.1002/ace.252

Rowland, C. A. (2014). The effect of testing versus restudy on retention: a meta-analytic review of the testing effect. Psychol. Bull. 140, 1432–1463. doi: 10.1037/a0037559

Semsar, K., Knight, J. K., Birol, G., and Smith, M. K. (2011). The Colorado Learning Attitudes about Science Survey (CLASS) for use in biology. CBE—Life Sci. Educ. 10, 268–278. doi: 10.1187/cbe.10-10-0133

Shah, D. (2019). Class Central's top 100 MOOCs of all time (2019 edition). Cl. Cent. Available online at: https://www.classcentral.com/report/top-moocs-2019-edition/ (accessed September 16, 2019).

Shute, V. J. (2008). Focus on formative feedback. Rev. Educ. Res. 78, 153–189. doi: 10.3102/0034654307313795

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learn. Instr. 4, 295–312. doi: 10.1016/0959-4752(94)90003-5

Sweller, J. (2005). “Implications of cognitive load theory for multimedia learning,” in The Cambridge Handbook of Multimedia Learning, ed R. E. Mayer (New York, NY: Cambridge University Press), 19–30. doi: 10.1017/CBO9780511816819.003

Sweller, J., van Merriënboer, J. J. G., and Paas, F. (2019). Cognitive architecture and instructional design: 20 Years Later. Educ. Psychol. Rev. 31, 261–292. doi: 10.1007/s10648-019-09465-5

Thornton, S., Riley, C., and Wiltrout, M. E. (2017). “Criteria for video engagement in a biology MOOC,” in Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale - L@S'17 (Cambridge, MA: ACM Press), 291–294. doi: 10.1145/3051457.3054007

US Department of Education; Institute of Education Sciences; National Center for Education Evaluation and Regional Assistance (2003). Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide. Washington, DC: Coalition for Evidence-Based Policy.

Veletsianos, G., and Shepherdson, P. (2016). A systematic analysis and synthesis of the empirical MOOC literature published in 2013–2015. Int. Rev. Res. Open Distrib. Learn. 17:2. doi: 10.19173/irrodl.v17i2.2448

Weinstein, Y., Madan, C. R., and Sumeracki, M. A. (2018). Teaching the science of learning. Cogn. Res. Princ. Implic. 3:2. doi: 10.1186/s41235-017-0087-y

Weisberg, J. S. (1970). The use of visual advance organizers for learning earth science concepts. J. Res. Sci. Teach. 7, 161–165. doi: 10.1002/tea.3660070212

Wilcox, K. E., Sarma, S., and Lippel, P. H. (2016). Online Education: A Catalyst for Higher Education Reforms. Cambridge, MA: Massachusetts Institute of Technology.

Wylie, R., and Chi, M. T. H. (2014). “The self-explanation principle in multimedia learning,” in The Cambridge Handbook of Multimedia Learning, ed R. E. Mayer (New York, NY: Cambridge University Press), 413–432. doi: 10.1017/CBO9781139547369.021

Yanchar, S. C., and Gabbitas, B. W. (2011). Between eclecticism and orthodoxy in instructional design. Educ. Technol. Res. Dev. 59, 383–398. doi: 10.1007/s11423-010-9180-3

Yousef, A. M. F., Chatti, M. A., Schroeder, U., and Wosnitza, M. (2014). “What drives a successful MOOC? An empirical examination of criteria to assure design quality of MOOCs,” in 2014 IEEE 14th International Conference on Advanced Learning Technologies (Athens: IEEE), 44–48. doi: 10.1109/ICALT.2014.23

Keywords: learning sciences, online education, instructional design, digital learning, learning engineering, MOOC

Citation: Gordon DG and Wiltrout ME (2021) A Framework for Applying the Learning Sciences to MOOC Design. Front. Educ. 5:500481. doi: 10.3389/feduc.2020.500481

Received: 25 September 2019; Accepted: 17 December 2020;
Published: 18 January 2021.

Edited by:

Leman Figen Gul, Istanbul Technical University, Turkey

Reviewed by:

Adamantios Koumpis, University of Passau, Germany
Clifford A. Shaffer, Virginia Tech, United States

Copyright © 2021 Gordon and Wiltrout. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Darcy G. Gordon, dggordon@mit.edu; Mary Ellen Wiltrout, mew27@mit.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.