Crossing Disciplinary Boundaries to Improve Technology-Rich Learning Environments


by Susanne P. Lajoie & Eric Poitras - 2017

Background: The capacity of instructional technologies to personalize instruction has progressively improved over the last decade, in conjunction with changes in learning theories that dictate what, when, and how to support learners.

Focus of Study: This paper reviews several technology-rich learning environments that are investigated by members of the Learning Environments Across Disciplines partnership, including Newton’s Playground, the War of 1812 iHistory tours, Crystal Island, BioWorld, and MetaTutor. The adaptive capabilities of these systems are discussed in terms of the metaphors of using computers as cognitive, metacognitive, and affective tools.

Research Design: Researchers rely on convergent methodologies to collect data via multiple modalities to gain a better understanding of what learners know, feel, and understand. The design guidelines of these learning environments are used to situate this understanding as a means to generalize best practices in personalizing instruction.

Conclusions: The findings of these investigations have significant implications for the metaphor of using technology as a tool to augment our thinking. The challenge is now to broaden learning theories while taking into consideration the social and emotional perspective of learning, as well as to leverage recent advances in learning analytics and data-mining techniques to iteratively improve the design of technology-rich learning environments.



There is a long and rich history of adaptive technologies designed to improve teaching and learning. Rather than review this history, this paper addresses specific technology tools created to support learning. Learning tools can be as commonplace as the alphabet and numerical systems created to represent, communicate, and process information (Nickerson, 2005). The printing press and the World Wide Web were both considered technological innovations for distributing and sharing information (Lajoie, 2007; Lesgold, 2000). Cognitive tools are technology tools that allow learners to engage in higher-order thinking skills by supporting lower-level skills, offering opportunities for generating, testing, and evaluating hypotheses in problem-solving contexts and scaffolding memory and metacognitive skills (Jonassen & Reeves, 1996; Lajoie, 2005; Salomon, Perkins, & Globerson, 1991).

This paper describes technology-rich learning environments (TREs) that use a combination of tools to support learners. TREs have an instructional purpose and are guided by learning theories to support learners in achieving the goals of instruction (Lajoie & Azevedo, 2006). The affordances of TREs include opportunities for learners to interact with instructional materials, to receive feedback through the structure of the environment and/or from human or computer agents that scaffold the learner, and to encounter adaptive challenges that sustain attention and keep learners engaged. Our challenge is to create adaptive learning environments that teach 21st-century skills and accurately assess student mastery of these skills, a skillset that includes, but is not limited to, critical thinking and problem solving, creativity and innovation, collaboration, self-direction, and digital literacy.

COGNITIVE TOOLS: VARIATIONS ON A THEME

Metaphors have been used to guide and describe theory and research pertaining to learning and instruction. Mayer’s (1996) seminal article described the evolution of learning theories, moving from behaviorist to information-processing to constructivist approaches to describe learning, and instructional practices based on such theories. The metaphor guiding the behaviorist movement was that of response strengthening (the law of exercise; see Thorndike, 1911/1965), where learning is strengthened through stimulus–response methods; this results in instruction that consists of drills and practice. The information-processing metaphor proposes intervening variables where information is encoded, structured, stored, and retrieved by the learner (Neisser, 1967). From an instructional perspective, Mayer explained that the information-processing metaphor is sometimes seen as knowledge transmission through media such as lectures and books. However, he further explained that the concept of knowledge acquisition grew with the introduction of the computer in the 1950s, when the leap was made to the mind as a computer where information is received and transformed in some manner (Lachman, Lachman, & Butterfield, 1979). In other words, knowledge is not simply received but constructed in some fashion. Constructivism remains the current learning metaphor, in which learners make sense of the information they receive by constructing their own knowledge through guided discovery, discussion, etc. In essence, learners are situated in meaningful learning experiences where they learn through building their own understanding (Greeno, 1998). However, this metaphor has been stretched to include the social-constructivist notion of learning, in which we learn from others by sharing multiple perspectives (Anderson, Greeno, Reder, & Simon, 2000; Clancey, 1997; von Glasersfeld, 1995; Vosniadou, 2007).

Metaphors for learning with technology are also used to guide research on adaptive technologies. The likelihood that technology will foster learning is greatly increased when cognitive theories guide the design of technology for instructional purposes (Lajoie, 2005). The cognitive tools metaphor is used to describe how technology can support learning by helping learners accomplish cognitive tasks (Jonassen & Reeves, 1996; Lajoie, 2000; Lajoie & Derry, 1993; Perkins, 1985; Salomon et al., 1991). Cognitive tools can be used to assist memory, problem-solving, decision-making, metacognition, etc. Computers can serve as intellectual partners (Salomon et al., 1991) by helping learners accomplish tasks through the sharing of information. Sharing, in this context, means that the computer can assist the learner in some way to solve problems. Intelligent tutoring systems and the advent of pedagogical agents that simulate tutorial discourse demonstrate how far the notion of computers as partners has evolved. These tutors and pedagogical agents now serve learners by providing assistance in the context of their learning. The field of computer-supported collaborative learning also reflects the shared partnerships between groups of learners and technology tools. Cognitive tools, be they simulations, games, or intelligent adaptive systems, can be created to help learners generate and test hypotheses in the context of complex problem-solving, which ultimately helps them construct new knowledge and practice the application of knowledge in the context of meaningful activities.

The cognitive tools metaphor has evolved together with theories of learning; the last three decades have seen different questions driving the design of adaptive technology. In the early ’90s, researchers were preoccupied with the question: “Can we model human problem-solving using technology?” Three camps formed: the modelers, the non-modelers, and a middle camp (Derry & Lajoie, 1993). Researchers in the modeling camp studied performance to see how experts differed from novices. They would then develop intelligent tutoring systems that used models of learning to automatically diagnose errors and adapt levels of feedback based on an individual’s performance. The non-modelers thought it impossible for computers to diagnose all human errors and envisioned the use of technology as a cognitive tool to situate experiences for learners in authentic contexts. Instead of modeling the learner, they used technology to support the social experience of scaffolding, a pedagogical process whereby more knowledgeable others help learners perform tasks that they cannot do by themselves (Wood, Bruner, & Ross, 1976). Scaffolding lets learners realize their potential: assistance is provided when needed and slowly removed as learning occurs (Collins, Brown, & Newman, 1989; Lajoie, 2005; Pea, 2004; Vygotsky, 1978). Finally, the middle camp combined cognitive apprenticeship, constructivist learning, and cognitive tools with computer-based student modeling. This last camp held that computers can and should serve a cognitive mentorship function, providing scaffolding without giving over control of the learning/assessment process to those using the system.

In the following decade, as learning theories reflected constructivist designs, researchers started to consider the influence of both the individual and the social context of learning, asking, “Who should we model, the individual, the group, or both?” The debate resulted in an evolution of theory in which both the individual and social construction of knowledge were taken into consideration (Anderson et al., 2000). Researchers demonstrated the value of modeling both individual knowledge construction and learning in social situations through the use of technology. Computers, human tutors, and peers were considered as assisting in the modeling of knowledge and scaffolding learners in the context of problem-solving.

This decade we face a different question, pushing the cognitive tools metaphor further. The question is, “What should we model?” Research on learning and affect is becoming more connected; consequently, there is more interest in adapting instruction to differences in learners’ cognitive skills and affective inclinations to help them reach their potential. Adaptive tools for cognition (how we think, remember, decide, perceive, understand, and use knowledge) need to be examined in conjunction with how affect (enjoyment, interest, achievement, emotion, etc.) is impacted by specific learning situations. Affect can increase or decrease learning and retention, depending on the context. Positive affect is usually associated with engagement and interest in learning, whereas negative affect can lead to disengagement. In addressing the affective component of learning, it is important to recognize how affect influences decision-making and how individuals are engaged as they interact with new technologies. The current metaphor for learning with technology must extend the computers-as-cognitive-tools metaphor by contributing to theories and methodologies that model both the cognitive and the affective processes that lead to effective learning and engagement for individuals and groups of learners.

In the following section, we discuss how expanded views of learning with technology have led to more effective designs for adaptive learning environments. In line with contemporary theories of learning and instruction, the metaphor of computers designed as tools to augment human cognition has broadened in terms of the breadth and depth of constructs that are targeted, including affective and motivational activities that impact learning as well as self-regulatory processes that mediate this relationship.

BROADENING THE SCOPE OF THE METAPHOR: AFFECTIVE TOOLS FOR LEARNING

The adage “Where there’s a will, there’s a way” is historically grounded in psychological theories (James, 1899) in which the will to do something was linked to action. There is an obvious connection between the will to act and the motivation to learn that pertains to where the locus of control of the learning situation resides (Bandura, 1977). Pekrun’s (2006) control–value theory expanded on these assumptions, stating that when individuals feel in control of their situation and value the activity, more positive learning outcomes can occur. However, there is a complex relationship between learner control and appropriate levels of challenge. Lepper (1988) cautioned that unconstrained learner control might lead to the selection of activities that are too challenging, which could lead to failure; alternatively, selecting tasks that are too easy can lead to boredom. Volition, the will to do something, plays a strong part in self-regulated learning processes that require the control of monitoring, evaluating, and revising one’s learning strategies (Corno, 2001). Azevedo and Feyzi-Behnagh (2010) cautioned that dysregulation may occur in TRE situations that are too open-ended, because learners may not have the self-regulated learning skills necessary to distinguish what is important for learning in such situations (Hmelo-Silver, Duncan, & Chinn, 2007; Kirschner, Sweller, & Clark, 2006).

When expanding our ideas about how to develop affective tools for learning, we need to identify how adaptive learning environments provide learners with motivationally appropriate learning environments (Du Boulay et al., 2010; Woolf et al., 2009) that contribute to positive emotions and engage students in persisting in their learning. As students learn with technology, the relationships among emotion, effort, persistence, and learning can be approached in a more temporal manner, in that student emotions can be monitored dynamically during learning. Analyzing emotion in the context of learning with TREs can help determine the precursors to both positive and negative learning situations.

DEEPENING THE SCOPE OF THE METAPHOR: METACOGNITIVE TOOLS FOR LEARNING

The field of metacognition and self-regulated learning continues to evolve, and part of this evolution has been to provide operational definitions of constructs in this area that others can agree to (Dinsmore, Alexander, & Loughlin, 2008; Lajoie, 2008). A recent handbook on metacognition and learning technologies (Azevedo & Aleven, 2013) demonstrated how far researchers have broadened the metacognitive tools metaphor to assist learners using technology. As we start to examine self and other regulation more broadly, careful considerations of operational definitions and methodologies are needed to calibrate the research. Furthermore, new links must be made to the social-emotional variables that influence metacognition in individuals and groups of learners.

Metacognition refers to thinking about one’s own thinking (Flavell, 1979). Both metacognitive knowledge and regulatory mechanisms are needed to help one determine what one knows and understands (Baker & Brown, 1984). Self-regulation can be applied to many contexts and involves cognitions, behaviors, emotions, and motivations (Bandura, 1977; Loyens, Magda, & Rikers, 2008). Individuals use cognitive and metacognitive regulatory processes to plan, perform, and maintain their desired objectives (Volet, Vauras, Khosa, & Iiskala, 2013). When self-regulation is applied specifically to learning, we speak of self-regulated learning (SRL) (Corno & Mandinach, 1983; Zimmerman, 1986). SRL refers to monitoring and controlling one’s own learning (Dinsmore et al., 2008; Lajoie, 2008; Pintrich, 2000, 2004; Zimmerman & Schunk, 2001). Self-regulated learners can actively manage their own learning cognitively, motivationally, and behaviorally (Azevedo, 2009; Winne & Perry, 2000; Zimmerman, 2008). SRL is a recursive process that occurs at all stages of a learning episode. Some refer to SRL as an event that unfolds dynamically, where individual SRL processes fluctuate in frequency during the learning task (Azevedo, Moos, Johnson, & Chauncey, 2010).

ASSESSING LEARNING AND ENGAGEMENT THROUGH AFFECTIVE AND METACOGNITIVE TOOLS

The Learning Environments Across Disciplines (LEADS) Partnership was created to advance our understanding of how theories can influence the design of better learning opportunities with TREs (see http://www.leadspartnership.ca). In particular, LEADS members expand our examination of the metaphor of using computers as tools to include the co-occurrence of affect and metacognition while learning to determine their influence on student outcomes. Members use an interdisciplinary approach to design and assess learning and engagement with TREs. Educators, psychologists, computer scientists, engineers, physicians, and historians work in tandem to examine the complex relationships among cognition, metacognition, behavior, motivation, and affect while learning with technology. This interdisciplinary approach allows us to pose new theoretical frameworks as well as to test methodological innovations in providing evidence of learning and engagement with TREs. Convergent methodologies are used to identify how students think and feel in these contexts. Such methods include computational analyses, machine learning, semantic analysis, and physiological and behavioral indices. The TREs include simulations, intelligent tutoring systems, multimedia learning environments, agent-based systems, augmented reality systems, and serious games. We provide a few examples of how TREs can determine when learners are engaged and happy, as opposed to bored and angry, while learning. Our goal is to discover how best to tailor the learning experience to the cognitive and affective needs of the learner. Examples of specific TREs that use this integrated approach are described in terms of the theories and methods used to provide evidence of what learners know, feel, and understand.

New assessment designs and methods are needed to examine the ways in which students learn using TREs. In particular, advanced technologies for learning are able to address 21st-century skills (e.g., inquiry, problem solving, and communication) in ways that were not conceivable with standardized achievement tests. Shute, Leighton, Jang, and Chu (2016) described how technology is changing assessment due to its dynamic and adaptive nature. Most importantly, they discussed the need for more attention to the intersection of cognition and emotional variables, and better attention to the types of tools that promote and evaluate learning. In doing so, the tools embedded in these learning environments are capable of adapting themselves to the dynamic nature of learning and engagement, as students fluctuate from a state of confusion to enjoyment, or to the use of ineffective and effective learning strategies.

In order for a technology to be adaptive, it must monitor and update a student’s model of performance or proficiency. Individuals differ in their levels and types of competencies and the rate at which they make improvements over time. Adaptive feedback helps learners along a learning trajectory by using learner profiles to reveal areas where they need assistance. Innovative forms of assessment lead to informed decisions regarding adaptive feedback.

Adaptive instructional systems implement a systematic process that consists of four steps: (a) capturing information about the learner, (b) analyzing learner interactions through a model of learner characteristics in relation to the domain, (c) selecting the appropriate instructional content and resources, and (d) delivering the content to the learner (Shute & Zapata-Rivera, 2012). The analytical function of the learner model can be further classified in terms of processes conducted at both the macro and micro levels (VanLehn, 2006). At the macro level, a representation of the path toward competency within the domain is updated for each task, with the aim of selecting the next task that is most appropriate for the learner. At the micro level, instructional materials such as hints and feedback are delivered to the learner on the basis of a learner model that is repeatedly updated over the duration of task performance.
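To make the four-step cycle concrete, the following Python sketch is purely illustrative: the function names, the moving-average update rule, and the difficulty values are our own assumptions, not the mechanics of any system cited in this paper. It captures an interaction, updates a simple learner model (micro level), and selects the next task closest to the learner's estimated mastery (macro level).

```python
# Purely illustrative sketch of the four-step cycle: (a) capture an
# interaction, (b) analyze it against a learner model, (c) select the
# next task, (d) deliver content. All names and numbers are assumptions.

def capture(event_log, skill, correct):
    """(a) Record one learner interaction."""
    event_log.append((skill, correct))
    return event_log

def analyze(model, skill, correct, rate=0.2):
    """(b) Micro level: nudge the mastery estimate for the skill toward
    1.0 on a correct response and toward 0.0 on an error."""
    prior = model.get(skill, 0.5)
    model[skill] = prior + rate * ((1.0 if correct else 0.0) - prior)
    return model

def select(model, tasks):
    """(c) Macro level: choose the task whose difficulty lies closest to
    the learner's estimated mastery of its target skill (an 'optimal
    challenge'). tasks maps name -> (skill, difficulty in [0, 1])."""
    return min(tasks, key=lambda t: abs(tasks[t][1] - model.get(tasks[t][0], 0.5)))

def deliver(task):
    """(d) Hand the chosen task off to the learner-facing interface."""
    return f"Next task: {task}"
```

Running one pass of the cycle with a hypothetical "forces" skill: after one correct response the mastery estimate rises from the 0.5 prior to 0.6, so a task of difficulty 0.7 is preferred over one of difficulty 0.3.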

Adaptive technologies generally use at least one form of assessment that is embedded in the learning environment. Embedded assessments are both dynamic and diagnostic (Lajoie & Lesgold, 1992) and can lead to better learning opportunities. An important development in assessment design is the use of learner evidence collected by the system to build complex multidimensional profiles that can be extracted from the data (Shute et al., 2016). Evidence of cognitive, metacognitive, and affective signatures can be identified both separately and in combination, so that we can see when a learner is engaged, disengaged, reflecting, or not reflecting on his or her own learning. Furthermore, many researchers are using a combination of assessments to build a more robust and nuanced assessment of learner profiles during specific learning situations. One such assessment is referred to as stealth assessment (Shute, 2008), whereby learners are assessed during “game play” in a seamless manner, not realizing that they are being assessed at all. We describe this approach in the following section on Newton’s Playground, a physics game in which learners are assessed in the context of playing a game and learning physics.
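As a hypothetical illustration of how separate cognitive, metacognitive, and affective signatures might be combined into one multidimensional profile, the sketch below assumes each signal stream has already been scaled to [0, 1]; the state labels and the 0.5 threshold are invented for exposition and do not come from any of the systems discussed here.

```python
# Invented illustration: bundle three signal streams into a profile,
# then label the learner's current state (engaged/reflecting or not).
# Signals are assumed pre-scaled to [0, 1]; threshold is arbitrary.

def profile(cognitive, metacognitive, affective):
    """Bundle the three signal streams into one learner profile."""
    return {"cognitive": cognitive,
            "metacognitive": metacognitive,
            "affective": affective}

def classify(p, threshold=0.5):
    """Label the current state from the combined profile."""
    engaged = p["affective"] >= threshold
    reflecting = p["metacognitive"] >= threshold
    if engaged and reflecting:
        return "engaged and reflecting"
    if engaged:
        return "engaged but not reflecting"
    if reflecting:
        return "reflecting but disengaged"
    return "disengaged"
```

In a real system each of the three inputs would itself be the output of a separate assessment pipeline (e.g., log-file analysis, physiological sensing, self-report), which is where the convergent-methods work described above comes in.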

LEARNING PHYSICS THROUGH GAME PLAY: NEWTON’S PLAYGROUND

As we stated earlier, learning through interaction is one of the most meaningful types of learning, since learners are situated in an active environment doing something rather than simply absorbing transmitted material. Shute, Ventura, and Kim (2013) created Newton’s Playground (NP) as a game in which learners play with physics concepts by interacting with materials to test their theories about gravity and force. Players create physical objects that are meaningful or of interest to them (e.g., dinosaurs or balloons) to solve physics problems involving force and motion. These personally meaningful objects replace the more typical ones used in physics problems (e.g., levers, ramps, and pendulums); when drawn and tested in NP, they obey the basic rules of physics (i.e., Newton's three laws of motion). The student-constructed objects must work within the confines of physical laws. In this way, learners test their hypotheses about the laws of motion by playing with objects.

NP is adaptive, in the sense that it captures and analyzes learner interactions and responds to learner actions by generating a simulation that animates the objects that students have created to solve physics problems. Students receive visual feedback from NP in the form of an animation that illustrates how close they are to solving the physics problem. Figure 1 illustrates a student using a dinosaur tail as a ramp; a ball travels down to a propelling lever, where force is applied to see whether the ball can be released upward to reach the intended target, which is the upper platform. Stealth assessment of learners occurs as they play the game. Data are collected during the course of gameplay, and ongoing assessments are used to generate appropriate levels of feedback based on the learner activity (Shute, 2008). Stealth assessment is evidence-centered (Mislevy, Steinberg, Breyer, Almond, & Johnson, 2002): Inferences are made about learner competencies through actions that are recorded and updated in TRE profiles (Conati, Gertner, & VanLehn, 2002). Shute examined how learning competencies are acquired in the context of games so that they can be exploited to support learning processes and outcomes (e.g., causal reasoning, creative problem-solving, or physics understanding). Games can then be designed to sustain attention by providing optimal challenges to learners. As we stated earlier, appropriate motivational designs are needed to keep learners engaged in the learning process. Optimal learning challenges are those that hover at the boundary of a student’s competence (Cordova & Lepper, 1996; Gee, 2005; Vygotsky, 1978).


Figure 1. Student work in Newton’s Playground (compliments of V. Shute)
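The evidence-centered logic of updating a competency estimate from each recorded action can be illustrated with a toy Bayesian update; the slip and guess parameters below are invented for illustration and are not values used in Newton's Playground.

```python
# Toy Bayesian update in the spirit of evidence-centered assessment:
# each recorded game action raises or lowers the probability that the
# learner holds a competency. Parameters are invented assumptions.

def update_competency(p_mastery, correct, slip=0.1, guess=0.2):
    """Posterior P(mastery | one observed action), via Bayes' rule.
    slip = P(incorrect action | mastery); guess = P(correct | no mastery)."""
    if correct:
        like_m, like_n = 1.0 - slip, guess
    else:
        like_m, like_n = slip, 1.0 - guess
    numer = like_m * p_mastery
    return numer / (numer + like_n * (1.0 - p_mastery))
```

Starting from an uninformative prior of 0.5, a single successful action raises the estimate to roughly 0.82 under these parameters; each further action accumulates evidence in the same way, which is what lets the assessment stay invisible inside ordinary gameplay.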

Some TREs can be thought of as one-stop shopping, where one environment can provide the learner with all of the materials needed to enhance learning. For instance, Shute et al. (2016) described how TREs can provide opportunities for problem-solving; information searches; discriminating among and synthesizing multiple information streams and data sources; planning, modifying, and re-executing strategies; hypothesizing about consequences of best actions; testing ideas; receiving feedback directly; perseverance; cognitive flexibility; creativity; and coordinating and collaborating with others. The challenge is how to analyze the different data sources to build robust learner profiles and provide appropriate levels of feedback based on such profiles. Shute is extending her research with NP to include different types of adaptation, including selection of appropriate instructional content and resources in response to the identification of different levels of student competencies. The different disciplines that work together in accomplishing this system are psychology, education, physics, and computer science.

In Shute’s work we see how student engagement is addressed by having students create and design their own physics simulations and interact with their creations. Serious games provide high levels of control and specific goals and rules; they are adaptive, challenging, and suspenseful, since we cannot predict their outcomes (Shute & Ke, 2012). Each TRE is created with specific instructional goals and assumptions about learning. In the next section we describe an example of how mobile technologies can support informal learning opportunities by providing just-in-time knowledge to answer specific questions when needed. Researchers are beginning to explore how mobile devices can serve as augmented reality applications to help individuals construct new knowledge. We explore an example of how learners can experience the past by playing with augmented reality applications on mobile devices.

A WALK THROUGH TIME: HISTORY AT YOUR FINGERTIPS THROUGH AUGMENTED REALITY

Historical events, by definition, are events that occurred in the past. It is difficult to recreate a past experience, but one way to make history come to life is to situate learners in contexts that bring meaning to past events. As a case example, we describe a collection of augmented reality applications that are designed to help people learn by interacting with historical artifacts. The War of 1812 suite of iHistoryTour applications, designed by K. Kee (Kee & Darbyson, 2011), is intended to help individuals experience history as they take augmented walking tours across Niagara-on-the-Lake and Queenston. The suite includes Niagara 1812: Return of the Fenian Shadow and Queenston 1812: The Bomber’s Plot. The applications adapt to individuals based on their locations and interactions with the app. Location-based augmented reality narratives appear based on the visitors’ locations near heritage sites. The War of 1812 applications rely on real-time GPS tracking to determine the visitors’ progress throughout the tour and tailor the instructional content accordingly. The instruction includes interactive game-like puzzles and problems to solve to engage visitors in learning about the War of 1812. Quest mode leads visitors through specific locations while helping them to investigate age-old mysteries, decode puzzles, and learn about historic figures.
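Location-based triggering of this kind can be sketched as follows; the site coordinates and the 50-metre radius are illustrative assumptions, not implementation details of the iHistoryTour applications.

```python
import math

# Hypothetical sketch of location-based triggering: when a real-time GPS
# fix places the visitor within a radius of a heritage site, the matching
# narrative segment would be delivered. Coordinates are invented.

SITES = {
    "Laura Secord homestead": (43.1626, -79.0530),  # invented coordinates
    "Mackenzie Printery": (43.1640, -79.0495),      # invented coordinates
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def triggered_site(lat, lon, radius_m=50.0):
    """Return the heritage site the visitor is standing at, if any."""
    for name, (slat, slon) in SITES.items():
        if haversine_m(lat, lon, slat, slon) <= radius_m:
            return name
    return None
```

A production application would poll the device's GPS at intervals and, on a trigger, load the narrative, puzzle, or historical sources associated with that location.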

For example, visitors to the village of Queenston, Ontario learn about the history of one of the most famous battles of the War of 1812. The augmented reality application guides visitors as they explore historical heritage sites and presents them with location-based investigations, e.g., “Who bombed the Sir Isaac Brock monument in 1840?” The plot unfolds around five locations: the Laura Secord homestead, the Queenston Baptist Church, the Wall remains, History Alley, and the Mackenzie Printery. Visitors can acquire factual information regarding each location, as well as details surrounding the Battle of Queenston Heights and the suspects allegedly involved in the bombing of the monument. In the process, visitors reason on the basis of historical sources, including pictorial evidence (e.g., a portrait of Laura Secord) and written documents (e.g., a declaration written by Colonel James Morreau). The application incorporates game-based features to demonstrate how historians evaluate the credibility of sources by comparing these sources and analyzing characteristics of artifacts. The plot unfolds around evidence collected at each location. Visitors use their evidence to reason about the causes of the bombing, and they select the most likely suspect based on their evidence. Once they select a suspect, the application tailors the feedback to illustrate an actual historian’s reasoning about the causes of the event. Visitors learn about the importance of reasoning on the basis of credible sources and how to rule out potential explanations for historical events.

In collaboration with co-investigators from the LEADS partnership, researchers from the digital humanities work together with psychologists, educators, and computer scientists to apply existing methods and develop and evaluate new measures to gather convergent evidence of visitor learning and engagement in the context of both actual and simulated tours (Poitras, Kee, Lajoie, & Cataldo, 2013). Guided tours in the field can be virtually recreated in a laboratory setting through the use of 360-degree panoramic photos of the relevant locations displayed on a large, touch-sensitive screen. Users of the application can thus navigate to and from each location while changing the orientation of their view. The laboratory setting provides opportunities for collecting data that capture the visitors’ experiences at multiple time points throughout the tour, including sensors that capture behaviors (portable eye-tracking and behavioral coding application), affect (galvanic skin response and structured self-report questionnaire), and cognition and metacognition (audio records of verbal discourse). These measures allow researchers to study how visitors allocate their attention to areas of interest in the interface with respect to specific locations featured in the tour, as well as to appraise changes in enjoyment and boredom before, during, and after each tour location. The value added of an interdisciplinary approach to augmented reality applications, to quote Kevin Kee, a digital humanist and LEADS member, is that “while humanists are increasingly engaged in the development of augmented reality applications to communicate culture, rigorous testing for user engagement and learning with these applications is less common. Our research suggests that design and development by humanists should be coupled with evaluation by researchers of educational psychology [and informed by their theories of affect, learning, and instruction]” (K. Kee, personal communication, May 3, 2013).

A pilot study was conducted on how participants used the MTL Urban Museum, a location-based augmented reality application developed by the McCord Museum. This study evaluated the benefits of using the convergent methods mentioned above to study learning and affect in users of augmented reality applications, such as the Queenston 1812 application (Harley, Poitras, Jarrell, Lajoie, & Duffy, 2015). The application relies on GPS-based location markers to deliver historical photographs of the appearance and history of buildings across the campus. The aim of the study was to evaluate the ability of visitors to contextualize the past by comparing the past and present locations to highlight noticeable differences in the appearances of buildings, monuments, and modes of transportation. Researchers transcribed and coded the verbal transcripts to characterize the discursive strategies that were utilized by the tour guide to prompt visitors to articulate these differences. The results of the analysis of the dialogue between the visitor and guide suggested that the visitors outlined differences in building appearances, transportation modes, and the Roddick Gates themselves. The involvement of the guide led to an improvement in visitors’ ability to identify differences between the past and present, resulting in an average of 3.08 features identified with the guide, compared to 1.77 without assistance. The data obtained from the eye-tracking equipment confirmed that visitors carefully attended to the material, shifting their gaze from the application to the virtual environment on an average of 48 occasions. Self-reported levels of enjoyment of the tour, the guide, and the learning activity were consistently high before and after visiting this location, while levels of boredom were found to be low. 
Using convergent methods to document learning and enjoyment in augmented reality applications can help identify what is learned, where feedback is needed, and whether participants enjoy the experience.

In the following section we describe an immersive game-based adaptive learning system, Crystal Island, designed by James Lester and his interdisciplinary team of computer scientists, science education researchers, educational psychologists, literacy specialists, and digital artists. Science students engage in a realistic investigation of a mysterious disease that is spreading through an island, thereby learning about scientific concepts while performing inquiries into the problem.

BEING A SCIENTIST: CRYSTAL ISLAND

Crystal Island immerses students in a virtual world where they act as scientists. This game-based learning environment is designed to support learning of microbiology and scientific literacy through complex narratives (Rowe, Shores, Mott, & Lester, 2011) combined with context-sensitive feedback and tailored explanations. Learners are given meaningful tasks in which they play the protagonist and investigate the identity and source of an infectious disease that is spreading throughout the island. The game allows learners to explore the island while gathering evidence about relevant diseases, forming and testing hypotheses, and recording their findings. Learners also engage with virtual members of the scientific team, test potential sources of the disease in a laboratory, and present their findings, diagnosis, and treatment plan. The scientific problem-solving skills pertain to building a scientific argument, grounded in the evidence collected, about which disease has spread through the island. Crystal Island serves as both a learning and a research platform, enabling the collection of student interaction data that allows for investigating automated forms of tutorial planning (Lee, Rowe, Mott, & Lester, 2014), learning goal recognition (Ha, Rowe, Mott, & Lester, 2011), and affect recognition (Sabourin, Mott, & Lester, 2013). More precisely, this means that different forms and types of adaptation can be created to respond to individual learners. Lester designs computational models of interactive narrative that adapt the story experience in response to the learners’ actions and thereby tailor story elements to a learner’s preferences. He does this with a structured machine learning approach: large data sets of student interactions yield a training corpus from which computational models of interactive narrative and of the virtual agents are automatically induced (Lee et al., 2014).
In one study, Rowe and Lester (2015) compared a reinforcement learning-based adaptation condition with a random adaptation condition and found that the reinforcement-learning version improved problem-solving processes among middle school students. In other research with middle and high school students (McQuiggan, Rowe, & Lester, 2008), comparisons were made between students whose virtual agent responded to their emotions and students whose agent did not. In the responsive condition, the virtual agent responded empathetically to the learner’s emotion self-reports, and students in this empathy-responsive condition reported feeling more presence in Crystal Island. Presence in virtual worlds is often reported as a proxy for engagement.
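The reinforcement learning-based adaptation described above can be sketched in miniature. The fragment below is illustrative only: the states, actions, and rewards are invented and far simpler than the models Lester's team induces from large interaction corpora, but it shows the core idea of treating logged student interactions as a training corpus from which to learn which narrative adaptation to select in a given story state.

```python
from collections import defaultdict

# Hypothetical (state, action, reward, next_state) tuples distilled from
# logged student interactions; the reward is a proxy for learning gains.
logged_episodes = [
    ("early_story", "hint_dialogue", 0.4, "mid_story"),
    ("early_story", "plot_reveal", 0.1, "mid_story"),
    ("mid_story", "lab_prompt", 0.7, "late_story"),
    ("mid_story", "plot_reveal", 0.2, "late_story"),
]

ALPHA, GAMMA = 0.1, 0.9
q = defaultdict(float)  # learned values over (state, action) pairs

def actions(state):
    # Actions observed in the corpus for a given story state.
    return [a for (s, a) in {(s, a) for (s, a, _, _) in logged_episodes} if s == state]

def train(passes=200):
    # Batch (offline) Q-learning over the logged corpus.
    for _ in range(passes):
        for s, a, r, s2 in logged_episodes:
            best_next = max((q[(s2, a2)] for a2 in actions(s2)), default=0.0)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])

def adapt(state):
    # Select the narrative adaptation with the highest learned value.
    return max(actions(state), key=lambda a: q[(state, a)])

train()
```

After training on this toy corpus, `adapt` favors the adaptation associated with better downstream outcomes in each state, which is the sense in which such a policy "improves problem-solving processes" relative to random adaptation.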

According to Lester, conducting research on learning in TREs is an inherently interdisciplinary enterprise: “The diversity of perspectives provided by educational psychologists, curriculum specialists, and computer scientists is essential for effectively investigating next-generation learning because we must understand (a) the principles of learning, (b) the pedagogies that support it, and (c) the design of adaptive learning technologies that are informed by these principles and pedagogies” (J. Lester, personal communication, December 5, 2014). Rafael Calvo states that “even the most human-centered computer researcher will bring perspectives to the problem that are very different to those of a psychologist or educator. Each one has a set of tools and research method that is unique to her discipline” (R. Calvo, personal communication, November 29, 2014).

Lajoie and her team worked with educators, psychologists, physicians, psychometricians, and computer scientists to design BioWorld, an adaptive technology that supports medical students in their reasoning about patient cases. Supporting learning in real-world contexts requires an interdisciplinary approach to achieve validity with stakeholders.

LEARNING THROUGH DELIBERATE PRACTICE OF DIAGNOSTIC REASONING: BIOWORLD

Medical students learn in apprenticeship settings where they see patients and are tutored by physicians at the patient’s bedside. However, this type of learning is restricted by time limitations and case opportunity. Medical students do not always see a patient case to its conclusion, because it may take some time to manage a patient, and they see only those cases that present to the ward during their rotations. BioWorld (Lajoie, 2009) provides medical students with a deliberate practice environment (Ericsson, Krampe, & Tesch-Römer, 1993) to solve virtual patient cases by practicing their diagnostic reasoning skills in a simulated setting: collecting patient symptoms, running diagnostic tests, formulating and revising diagnoses, and searching for information in online libraries. BioWorld adapts to students by capturing and analyzing their interactions with the system. Expert feedback is provided to help students in the context of their diagnostic reasoning. However, since diagnostic reasoning is an ill-structured problem-solving task, the challenge is to model multiple paths to attaining the correct solution (Lajoie, 2003, 2009). Although medical experts reach consensus on a diagnosis, they reach their conclusions via different routes. Nonetheless, these same experts justify plausible hypotheses on the basis of common evidence items, including patient symptoms and lab-test information (Gauthier & Lajoie, 2014). Based on modeling expert evidence, BioWorld traces the actions students take and the processes they use to solve a case (patient history, diagnostic tests, and library searches) and provides micro-level scaffolding in the context of their solutions. More macro-level feedback is provided at the conclusion of the case, when BioWorld presents a visualization that compares the student solution to an averaged set of expert actions taken to solve the case.
The visualization provides an opportunity for novices to self-reflect on their own approaches to resolving the problem.
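The end-of-case comparison between a student's solution path and the aggregated expert model can be illustrated with a small sketch. The case, evidence items, and function names below are hypothetical, not BioWorld's actual implementation; the point is only the overlay logic of matching a student's evidence trail against an expert-consensus set.

```python
# Hypothetical expert-consensus evidence for one virtual patient case:
# the items most experts cited to justify the final diagnosis.
EXPERT_EVIDENCE = {"fatigue", "polyuria", "blood glucose test", "HbA1c test"}

def overlay_report(student_actions):
    """Compare a student's evidence trail to the expert set, in the spirit
    of the end-of-case visualization described above."""
    student = set(student_actions)
    return {
        "matched": sorted(student & EXPERT_EVIDENCE),   # evidence both used
        "missed": sorted(EXPERT_EVIDENCE - student),    # expert items skipped
        "extra": sorted(student - EXPERT_EVIDENCE),     # student-only actions
        "overlap": round(len(student & EXPERT_EVIDENCE) / len(EXPERT_EVIDENCE), 2),
    }

report = overlay_report(["fatigue", "polyuria", "chest x-ray", "blood glucose test"])
```

A report of this shape makes the reflection step concrete: the "missed" items indicate where feedback should direct the novice's attention, while "extra" items flag actions experts did not find diagnostic.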

Lajoie and colleagues have explored the relationship between medical students’ diagnostic reasoning proficiency in BioWorld and their use of expert feedback, self-regulated learning (SRL), use of online library tools, and accuracy of written case summaries. Proficiency was related to better use of feedback (Lajoie, Poitras, et al., 2013), more SRL behavior, and greater use of online library tools to supplement knowledge (Lajoie, Naismith, et al., 2013; Poitras, Jarrell, Doleck, & Lajoie, 2014). The novice–expert overlay models are also useful when capturing linguistic features from written case summaries to determine differences in reasoning that can help indicate where feedback content may need to be tailored (Poitras, Doleck, & Lajoie, 2014). The relationship between motivational and affective constructs while learning with BioWorld has been explored in terms of the impact of goal orientations and affective reactions on the attention given to feedback (Lajoie, Naismith, et al., 2013; Naismith, 2013). Student attention to feedback in BioWorld is mediated by a number of factors, including learning performance, achievement-goal orientation, feedback emotions, and how the feedback is displayed (Naismith, 2013; Naismith & Lajoie, 2014). Understanding the relative contribution of each of these factors is important for the effective design and implementation of computer-based feedback to support diagnostic reasoning.

Lajoie (2014) comments that interdisciplinary teams can expand the depth of assessment of learning trajectories. As multiple forms of learner data are collected, psychometricians may look at convergent data to build theoretical models of learner profiles that can be used to predict performance. Latent class analysis of variables has the potential to discover individual-based dynamics and identify distinct learning patterns attributable to specific TRE components that ultimately can contribute to the development of profile-based design interventions in the TRE to individualize and maximize learning (Jang, Wagner, & Xu, 2014). According to Jacqueline Leighton, a LEADS member and educational psychologist who specializes in assessment, “interdisciplinary approaches to assessment design ensure that all relevant factors—human, environmental, and social—are considered in the process of collecting the most accurate data about how individuals learn. In addition, such approaches force us to continually challenge and diversify our existing practices for the purpose of designing the most innovative assessments possible” (J. Leighton, personal communication, December 4, 2014).

In the final example we explore how a TRE, MetaTutor, can adapt feedback to promote self-regulated learning about anatomical systems. MetaTutor was designed by Roger Azevedo and his colleagues. Their approach integrates conceptual and theoretical frameworks and models from the cognitive, educational, learning, and affective sciences in its design. With computer scientists, they have created pedagogical agents that model, scaffold, and foster cognitive and metacognitive processes in the context of learning anatomy.

SCAFFOLDING SCIENCE COMPREHENSION BY HELPING LEARNERS KNOW WHAT THEY KNOW: METATUTOR

Some TREs are open-ended: Students can explore any and all avenues that interest them in the context of their learning. Some learners do not perform well in open-ended computer-based environments because they cannot determine where to direct their cognitive resources. Research has shown that learners often fail to engage in regulatory processes, which can lead to decreases in learning and performance (Azevedo & Feyzi-Behnagh, 2010). MetaTutor was designed to assist learners in developing their self-regulatory skills in the context of learning about complex biological systems in an adaptive multi-agent hypermedia learning environment (Azevedo, Johnson, Chauncey, & Burkett, 2010; Azevedo, Moos, Greene, Winters, & Cromley, 2008). Azevedo and his colleagues based their design of MetaTutor on empirical research that examined the effectiveness of specific self-regulatory processes while interacting with human tutors; as a result, the design of MetaTutor includes pedagogical agents that emulate human–tutor interactions in an adaptive manner (see Azevedo et al., 2008). MetaTutor allows learners to interact with several pedagogical agents while learning about science. The agents capture learner interactions and use production rules to assess the cognitive and metacognitive processes learners use within MetaTutor. These rules are triggered by both learner- and system-initiated actions that determine the type of adaptive feedback provided to the learner. By analyzing the student model, the agents can determine, for example, that a student has exceeded a time threshold on an irrelevant diagram without making any overt metacognitive judgment (e.g., a content evaluation); depending on specific behavioral and contextual factors during learning, the system then triggers an appropriate pedagogical action on behalf of the metacognitive agent.
These rules are used to provide learners with appropriate scaffolding in terms of setting goals for their learning, monitoring their progress, using learning strategies to better achieve these goals, and handling task difficulties and demands. These self-regulatory processes involve cognitive, metacognitive, motivational, and affective activities that unfold during learning. Learners engage in these regulatory processes to transform information about biological systems from multiple types of representation.
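A production rule of this kind can be sketched as follows. The thresholds, trace fields, and action names are invented for illustration; MetaTutor's actual rule base is far richer, but the condition-action structure is the same: conditions over the learner trace fire a pedagogical act.

```python
# Illustrative threshold for time spent on a single content page.
THRESHOLD_SECS = 90

def fire_rules(trace):
    """Return the pedagogical actions whose conditions match the learner trace."""
    actions = []
    # Rule 1: long dwell on a subgoal-irrelevant page with no overt
    # metacognitive judgment -> the metacognitive agent prompts one.
    if (trace["seconds_on_page"] > THRESHOLD_SECS
            and not trace["page_relevant_to_subgoal"]
            and not trace["metacognitive_judgments"]):
        actions.append("prompt_content_evaluation")
    # Rule 2: a weak embedded-quiz result -> scaffold a strategy change.
    if trace.get("quiz_score", 1.0) < 0.6:
        actions.append("suggest_strategy_change")
    return actions

trace = {"seconds_on_page": 120, "page_relevant_to_subgoal": False,
         "metacognitive_judgments": [], "quiz_score": 0.5}
```

Running `fire_rules(trace)` on this trace fires both rules; a trace with a short dwell time, a recorded judgment, and a strong quiz score fires none, which is how scaffolding stays contingent on learner behavior.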

Before learning with MetaTutor, learners have the benefit of instructional videos in which self-regulatory skills are modeled, a discrimination task in which learners recognize appropriate and inappropriate uses of these strategies, and a detection task in which they identify the onset of a self-regulatory strategy. During the actual learning task, the learners have the benefit of pedagogical agents, menus to select and organize the subject matter and learning subgoals, and various representations of the science content. The pedagogical agents provide feedback within the tutorial dialogue to support learners in selecting appropriate goals, judging their degree of understanding in an accurate manner, and using effective learning strategies. The learners use a palette to engage in specific metacognitive monitoring and control activities.

MetaTutor records user interactions for the purpose of providing adaptive feedback on the deployment of self-regulatory processes. When they detect patterns in learner behaviors, the pedagogical agents may prompt learners to make a metacognitive monitoring judgment. Once the learners are prompted to monitor their learning, a brief quiz is administered to tailor the content of the feedback to the individual needs of different learners. This computational model is evaluated and iteratively revised through the collection of converging data gained from multiple sources, including self-report measures of self-regulatory processes as well as online measures such as concurrent think-aloud protocols, tutorial dialogues, physiological signals, eye-tracking data, and facial data of emotional reactions. These types of data complement the findings obtained from the log-file record of user interactions, including note-taking behaviors, drawing, and selections, as well as performance on embedded quizzes. These sources of data enable researchers to characterize the deployment of self-regulatory processes by examining the temporal aspects of specific processes; navigational paths across hypermedia content; fixation patterns across interface elements (Azevedo, Johnson, et al., 2010; Azevedo & Witherspoon, 2009); patterns of learner interactions (Bouchet, Azevedo, Kinnebrew, & Biswas, 2012); and the onset of emotional reactions (Harley, Bouchet, Hussain, Azevedo, & Calvo, 2015).

Empirical research using MetaTutor has demonstrated how and when specific self-regulatory agents assist learners in using such skills to learn about the circulatory system in a hypermedia environment. More specifically, researchers have identified specific precursors to the use of self-regulatory skills when learning with this advanced technology.

LEARNING AND INSTRUCTION: WHAT ARE THE AFFORDANCES OF TECHNOLOGY?

The underlying mechanisms and design principles that dictate how learning and engagement are enhanced in the context of these TREs have been the focus of continued research during the last decade. Although generalizable guidelines are still a matter of considerable debate, most learning scientists now agree that such learning environments promote learning and engagement when instruction (a) provides opportunities for one-on-one tutoring with pedagogical agents, (b) externalizes models of proficiency in task performance, (c) enables sustained practice in performing meaningful tasks, and (d) supports learners while they become more autonomous and self-directed.

LEARNING FROM HUMAN AND COMPUTER TUTORS OR AGENTS

There is no “one size fits all” when it comes to instruction. Individuals differ in their aptitudes and in their motivational profiles, and consequently have different preferences for learning. It is therefore no surprise that one-on-one tutoring with good tutors is the most effective form of instruction, because tutors can adapt their instruction to learners’ needs. Bloom (1984) reported that students involved in one-to-one tutoring with human tutors performed at around the 98th percentile, two standard deviations above those in traditional classrooms. Tutors help learners by establishing models of competency within specific domains that can help those less competent become more proficient (Lajoie, 2003; Pellegrino, Chudowsky, & Glaser, 2001).
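The equivalence between Bloom's two-standard-deviation effect and the 98th percentile follows from the normal distribution, and can be checked with a quick computation of the standard normal CDF:

```python
import math

def normal_cdf(z):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A score two standard deviations above the mean of a normal distribution
# lands near the 98th percentile, matching Bloom's "2 sigma" finding.
percentile = round(100 * normal_cdf(2.0), 1)  # ≈ 97.7
```

The exact value, about 97.7, is what Bloom rounded to "around the 98th percentile."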

Research using intelligent tutoring systems has shown improvements over typical classroom instruction in several disciplines, including mathematics, i.e., cognitive tutors (Koedinger & Corbett, 2006); computer science, i.e., constraint-based tutors (Mitrovic, Mayo, Suraweera, & Martin, 2001); microbiology, i.e., narrative-based tutors (Rowe et al., 2011); and physics and computer literacy, i.e., dialogue-based tutors (Graesser, VanLehn, Rose, Jordan, & Harter, 2001). Targeted approaches to scaffolding specific SRL processes have also been conducted using animated pedagogical agents embedded in TREs for science learners (Azevedo, Cromley, Moos, Greene, & Winters, 2011). Multi-pedagogical agent environments provide feedback on specific learning skills that provide advantages similar to those of one-on-one tutoring.

LEARNING FROM MODELS OF PERFORMANCE AND COMPETENCE

Studies of expertise demonstrate that proficiency is achieved through deliberate practice of target skills with feedback (Ericsson et al., 1993). TREs can help learners deliberately practice skills with appropriate scaffolding—a pedagogical process experts use to help learners perform tasks they cannot do by themselves. Scaffolding can be a cognitive support for problem-solving or a motivational support to help learners realize their potential (D’Mello & Graesser, 2012). As students become competent and confident, scaffolding is gradually faded so that they become independent learners (Azevedo & Hadwin, 2005; Lajoie, 2005; Pea, 2004). When activities are too difficult for learners to do alone, experts (human or computer) can model skills to help learners perform a task efficiently (Bandura, 1977; Lajoie, 2007; Zimmerman, 2008). Using technology that assists learners along their individual learning trajectories with appropriate scaffolds can help them achieve learning goals that they would be incapable of reaching without such assistance. Several mechanisms are used to establish such models. One is to identify the cognitive competencies needed to solve a specific task, so that the learner’s performance on the task can be compared to the competencies that need to be acquired. Once competencies are identified, appropriate scaffolds can be determined; understanding superior human scaffolding allows better models of computer scaffolding to be established. TREs can be designed with different models of human and computer tutoring.

LEARNING THROUGH INTERACTION

Technology can provide learners with opportunities to interact with real-world problems and can serve as a learning partner that forms the basis for interaction. Learning partnerships involve interaction with others (e.g., tutors, teachers, real or virtual peers, computers, books, and media). Learners’ experiences can be supported by technology and situated in the context in which they are working, including the people, the tasks, and the tools that are available to help perform those tasks (von Glasersfeld, 1995). Situated learning theories describe human thought and action in response to the ways that complex environments provide opportunities for integrating information from multiple sources and promote the social construction of knowledge (Clancey, 1997; Greeno, 1998). Understanding the mechanisms of human–human tutorial dialogue can assist in the design of natural-language tutorial dialogue systems in TREs. Lester and colleagues use computer and cognitive science methodologies to explore these issues, such as (a) how learner characteristics influence the structure of tutorial dialogue, (b) how human tutors balance cognitive and motivational scaffolding, and (c) how these variables affect learning and self-efficacy gains (Lester, Rowe, & Mott, 2013).

LEARNING THROUGH MEANINGFUL TASKS

Learners are typically more engaged in their learning efforts when they value the activity and when they feel in control of their learning (Pekrun, 2006). According to Pekrun’s control-value theory, the emotions that individuals experience during learning activities are determinants of learning success. Technology provides a tool for self-directed inquiry; in contrast, Weimer (2010) asserted that traditional educational contexts promote dependency because so much is outside students’ control—from the selection of content to in-class activities to participation and assessment. This type of restriction leaves little time and opportunity for students to develop independent study habits and higher-order cognitive skills. When learners demonstrate or articulate their ability to do a task on their own, scaffolding can be faded and eventually removed. A major goal of instruction is that students learn to learn and become self-regulated learners (Zimmerman, 1986). Advanced learning technologies can foster students’ SRL processes, leading them to attain their learning goals. Azevedo and colleagues (Azevedo et al., 2008; Azevedo et al., 2013; Azevedo, Johnson, et al., 2010) have examined the relationship between scaffolding and SRL in science and demonstrated how to scaffold learners to become independent and capable of monitoring and controlling their own performance.

CONCLUSION

The metaphor of using technology as a tool to augment our thinking has undergone significant change during the last decade, so much so that technology-rich learning environments now target not only cognitive, but also metacognitive, affective, and social processes. We have therefore argued that the metaphor of cognitive tools should be broadened in terms of the breadth and depth of constructs that are targeted through the use of technology. These tools are embedded in technology-rich learning environments with the aim of enhancing learning and performance across a broad variety of tasks and disciplines, to the benefit of a range of learners with varying characteristics, backgrounds, and needs. Given these rapid developments in learning theories, the challenge is to design affective and metacognitive tools that are capable of accurate and reliable assessment of these latent and complex processes.

Pellegrino et al. (2001) claimed that effective teaching relies on a triad of factors—curriculum topic, instructional approach, and assessment method—that complement each other in terms of achieving a well-defined objective. This paper has reviewed several examples of technology-rich learning environments studied in the LEADS partnership that align curriculum, instruction, and assessment, enabling the underlying system to adapt itself to the varying and important events that characterize learning processes and outcomes before, during, and after task performance (Lajoie, 2014). These affordances of technology-rich learning environments can take many forms, as learners engage in optimal learning through one-on-one tutoring by pedagogical agents, models of proficiency that are externally represented through the system interface, repeated opportunities for practice and feedback, and increased autonomy and self-directedness in task performance.

A direction that needs to be explored in the area of adaptive technologies is one that considers the social-emotional perspectives of learning. As theories of learning have adjusted to consider social influences on cognition, many self-regulation researchers have broadened their scope to examine co-regulation and socially shared regulation of knowledge (Hadwin & Oshige, 2011; Järvelä & Hadwin, 2013; Volet et al., 2013). These researchers described how group members influence the growth of self-regulation (Hadwin & Oshige, 2011; Järvelä & Hadwin, 2013). As these theories grow, so do the designs of advanced technologies to encourage co-regulation and to assess the influence of social-emotional processes on learning using technologies (Lajoie et al., 2015). Methodological innovations must be created to represent and analyze the multiple voices that contribute to co-regulation and learning.

Advances in learning analytics and data-mining techniques are assisting researchers in mining large data sets to find meaningful patterns in group discourse that lead to learning. We see this as an emerging field. TREs are now capable of accumulating large volumes of data on learners, and computational challenges have arisen that call for further research in the refinement and development of learning analytics and reporting tools (Buckingham Shum et al., 2013; Cooper & Sahami, 2013; Kay, Reimann, Diebold, & Kummerfeld, 2013; Siemens & Long, 2011; Williams, Renkl, Koedinger, & Stamper, 2013). There has been rapid growth in the development of data mining and analytics in the field of education during the last decade (Baker & Yacef, 2009; Romero & Ventura, 2010). Notable initiatives within this field include the Pittsburgh Science of Learning Center DataShop, the largest open repository of data on learner interactions with intelligent systems (Koedinger et al., 2010), and the creation of standards for logging educational data, e.g., the Aggregator for Game Environments and the Generalized Intelligent Framework for Tutoring (Owen & Halverson, 2013; Sottilare, Brawner, Goldberg, & Holden, 2012; Sottilare, Hu, & Graesser, 2013). Using log-file databases of user interactions, researchers have applied data-mining techniques, including clustering, classification, and sequential pattern-mining algorithms, to accomplish several tasks, such as predicting learner performance, generating recommendations, detecting undesirable behaviors, and grouping learners into usable profiles (see Baker & Yacef, 2009; Romero & Ventura, 2010). The large-scale databases that are generated in the context of such platforms provide an important opportunity for researchers to exploit educational data in order to develop automated methods of assessment.
Furthermore, as the learning theories that inform design metaphors continue to expand, there is a need for increased research into multimodal data mining and learning analytics in order to target the cognitive, metacognitive, affective, and social processes that mediate learning.
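As one illustration of the clustering techniques mentioned above, the following sketch groups learners into profiles from two log-file features using a minimal k-means routine. The feature names, data values, and initial centers are invented for illustration; real analyses draw on many more features and more robust algorithms.

```python
def kmeans(points, centers, iters=20):
    """A minimal k-means: alternate nearest-center assignment and mean update."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        # (squared Euclidean distance).
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[i].append(p)
        # Update step: move each center to its group's mean.
        centers = [tuple(sum(d) / len(g) for d in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

# Hypothetical features per learner: (hint requests, embedded-quiz accuracy).
log_features = [(12, 0.45), (14, 0.50), (11, 0.40),
                (2, 0.90), (3, 0.85), (1, 0.95)]
centers, profiles = kmeans(log_features, centers=[(12, 0.45), (2, 0.90)])
```

On this toy data the routine separates a help-seeking, lower-accuracy profile from an independent, higher-accuracy one; profiles of this kind are what can then inform profile-based feedback design.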

Acknowledgement

The authors acknowledge the Social Sciences and Humanities Research Council of Canada for funding their research (grant #895-2011-1006) and making this paper possible.

References

Anderson, J., Greeno, J. G., Reder, L., & Simon, H. A. (2000). Perspectives on learning, thinking and activity. Educational Researcher, 29(4), 11–13.

Azevedo, R. (2009). Theoretical, methodological, and analytical challenges in the research on metacognition and self-regulation: A commentary. Metacognition & Learning, 4(1), 87–95.

Azevedo, R., & Aleven, V. (Eds.) (2013). International handbook of metacognition and learning technologies. Amsterdam, The Netherlands: Springer.

Azevedo, R., Cromley, J. G., Moos, D. C., Greene, J. A., & Winters, F. I. (2011). Adaptive content and process scaffolding: A key to facilitating students’ self- regulated learning with hypermedia. Psychological Testing and Assessment Modeling, 53, 106–140.

Azevedo, R., & Feyzi-Behnagh, R. (2010). Dysregulated learning with advanced learning technologies. In R. Pirrone, R. Azevedo, & G. Biswas (Eds.), Proceedings of the 2010 AAAI Fall Symposium on Cognitive and Metacognitive Educational Systems (pp. 5–10). Menlo Park, CA: AAAI Press.

Azevedo, R., & Hadwin, A. F. (2005). Scaffolding self-regulated learning and metacognition: Implications for the design of computer-based scaffolds. Instructional Science, 33, 367–379.

Azevedo, R., Harley, J., Trevors, G., Duffy, M., Feyzi-Behnagh, R., Bouchet, F., & Landis, R. S. (2013). Using trace data to examine the complex roles of cognitive, metacognitive, and emotional self-regulatory processes during learning with multi-agents systems. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (pp. 427–449). Amsterdam, The Netherlands: Springer.

Azevedo, R., Johnson, A., Chauncey, A., & Burkett, C. (2010). Self-regulated learning with MetaTutor: Advancing the science of learning with MetaCognitive tools. In M. S. Khine & I. M. Saleh (Eds.), New science of learning: Computers, cognition, and collaboration in education (pp. 225–247). Heidelberg, Germany: Springer.

Azevedo, R., Moos, D. C., Greene, J. A., Winters, F. I., & Cromley, J. G. (2008). Why is externally-facilitated regulated learning more effective than self-regulated learning with hypermedia? Educational Technology Research and Development, 56(1), 45–72.

Azevedo, R., Moos, D., Johnson, A., & Chauncey, A. (2010). Measuring cognitive and metacognitive regulatory processes used during hypermedia learning: Issues and challenges. Educational Psychologist, 45(4), 210–223.

Azevedo, R., & Witherspoon, A. M. (2009). Self-regulated learning with hypermedia. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 319–339). Mahwah, NJ: Routledge.

Baker, L., & Brown, A. L. (1984). Metacognitive skills and reading. In P. D. Pearson (Ed.), Handbook of research in reading (pp. 353–395). New York: Longman.

Baker, R. S., & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1(1), 3–17.

Bandura, A. (1977). Social learning theory. New York: General Learning Press.

Bloom, B. S. (1984). The 2-sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.

Bouchet, F., Azevedo, R., Kinnebrew, J. S., & Biswas, G. (2012). Identifying students’ characteristic learning behaviors in an intelligent tutoring system fostering self-regulated learning. In K. Yacef, O. Zaïane, H. Hershkovitz, M. Yudelson, & J. Stamper (Eds.), Proceedings of the 5th International Conference on Educational Data Mining (pp. 65–72).

Brown, A. L. (1994). The advancement of learning. Educational Researcher, 23(8), 4–12.

Buckingham Shum, S., Hawksey, M., Baker, R. S., Jeffery, N., Behrens, J. T., & Pea, R. (2013, April). Educational data scientists: A scarce breed. In D. Suthers, K. Verbert, E. Duval, & X. Ochoa (Eds.), Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 278–281). New York, NY: ACM.

Clancey, W. (1997). Situated cognition: On human knowledge and computer representations. Cambridge, MA: Cambridge University Press.

Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the craft of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale, NJ: Erlbaum.

Collins, A., & Halverson, R. (2009). Rethinking education in the age of technology: The digital revolution and schooling in America. New York, NY: Teachers College Press.

Conati, C., Gertner, A., & VanLehn, K. (2002). Using Bayesian networks to manage uncertainty in student modeling. User Modeling and User-Adapted Interaction, 12(4), 371–417.

Cooper, S., & Sahami, M. (2013). Reflections on Stanford's MOOCs. Communications of the ACM, 56(2), 28–30.

Cordova, D. I., & Lepper, M. R. (1996). Intrinsic motivation and the process of learning: Beneficial effects of contextualization, personalization, and choice. Journal of Educational Psychology, 88, 715–730.

Corno, L. (2001). Volitional aspects of self-regulated learning. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (2nd ed., pp. 179–210). Mahwah, NJ: Taylor & Francis.

Corno, L., & Mandinach, E. B. (1983). The role of cognitive engagement in classroom learning and motivation. Educational Psychologist, 18, 88–108.

Derry, S. J., & Lajoie, S. P. (1993). A middle camp for (un)intelligent computing. In S. P. Lajoie & S. J. Derry (Eds.), Computers as cognitive tools (pp. 1–11). Hillsdale, NJ: Erlbaum.

Dinsmore, D. H., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation and self-regulated learning. Educational Psychology Review, 20, 391–409.

D’Mello, S. K., & Graesser, A. C. (2012). AutoTutor and Affective AutoTutor: Learning by talking with cognitively and emotionally intelligent computers that talk back. ACM Transactions on Interactive Intelligent Systems, 2(4), 23:2–23:39.

Doleck, T., Basnet, R. B., Poitras, E., & Lajoie, S. (2014a). Exploring the link between initial and final diagnosis in a medical intelligent tutoring system. In Proceedings of the IEEE International Conference on MOOCs, Innovation & Technology in Education (MITE) (pp. 13–16). India: IEEE.

Doleck, T., Basnet, R. B., Poitras, E., & Lajoie, S. (2014b). Augmenting the novice-expert overlay model in an intelligent tutoring system: Using confidence-weighted linear classifiers. In Proceedings of the IEEE International Conference on Computational Intelligence & Computing Research (IEEE ICCIC) (pp. 87–90). Tamil Nadu, India: IEEE.

Doleck, T., Basnet, R. B., Poitras, E., & Lajoie, S. (2017). Investigating learning behaviors in a medical intelligent tutoring system: Using hidden Markov models. Manuscript in preparation.

Du Boulay, B., Avramides, K., Luckin, R., Martinez-Miron, E., Rebolledo Mendez, G., & Carr, A. (2010). Towards systems that care: A conceptual framework based on motivation, metacognition and affect. International Journal of Artificial Intelligence in Education, 20(3), 197–229.

Duffy, M., Lajoie, S. P., Jarrell, A., Pekrun, R., Azevedo, R., & Lachapelle, K. (2015, April). Emotions in medical education: Developing and testing a self-report emotions scale across medical learning environments. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.

Gauthier, G., & Lajoie, S. P. (2014). Do expert clinical teachers have a shared understanding of what constitutes a competent reasoning performance in case-based teaching? Instructional Science, 42(4), 579–594.

Gee, J. P. (2005). Learning by design: Good video games as learning machines. E-Learning and Digital Media, 2(1), 5–16.

Graesser, A. C., VanLehn, K., Rose, C., Jordan, P., & Harter, D. (2001). Intelligent tutoring systems with conversational dialogue. AI Magazine, 22, 39–51.

Greeno, J. (1998). The situativity of knowing, learning, and research. American Psychologist, 53(1), 5–26.

Ha, E. Y., Rowe, J., Mott, B., & Lester, J. (2011). Goal recognition with Markov logic networks for player-adaptive games. Proceedings of the Seventh International Conference on Artificial Intelligence and Interactive Digital Entertainment, 32–39.

Hadwin, A., & Oshige, M. (2011). Self-regulation, coregulation, and socially shared regulation: Exploring perspectives of social in self-regulated learning theory. Teachers College Record, 113(6), 240–264.

Harley, J. M., Bouchet, F., Hussain, S., Azevedo, R., & Calvo, R. (2015). A multi-componential analysis of emotions during complex learning with an intelligent multi-agent system. Computers in Human Behavior, 48, 615–625.

Harley, J. M., Poitras, E. G., Jarrell, A., Lajoie, S. P., & Duffy, M. (2016). Comparing virtual and location-based augmented reality mobile learning: Emotions and learning outcomes. Educational Technology Research and Development, 64(3), 359–388. doi:10.1007/s11423-015-9420-7

Herrera, F., Carmona, C. J., Gonzalez, P., & Jose del Jesus, M. (2011). An overview on subgroup discovery: Foundations and applications. Knowledge and Information Systems, 29(3), 495–525.

Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107.

James, W. (1899). Talks to teachers on psychology: And to students on some of life's ideals. Mineola, NY: Dover.

Jang, E. E., Wagner, M., & Xu, Z. (2014, April). Ecological assessment framework in computer-based learning environments. Paper presented at the American Educational Research Association Annual Meeting, Philadelphia, PA.

Järvelä, S., & Hadwin, A. F. (2013). New frontiers: Regulating learning in CSCL. Educational Psychologist, 48(1), 25–39.

Jonassen, D. H., & Reeves, T. C. (1996). Learning with technology: Using computers as cognitive tools. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 693–719). New York, NY: Simon & Schuster.

Kay, J., Reimann, P., Diebold, E., & Kummerfeld, B. (2013). MOOCs: So many learners, so much potential. IEEE Intelligent Systems, 3, 70–77.

Kee, K., & Darbyson, N. (2011). Creating and using virtual environments to promote historical thinking. In P. Clark (Ed.), New possibilities for the past: Shaping history education in Canada. Vancouver, Canada: UBC Press.

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

Koedinger, K. R., Baker, R. S. J. D., Cunningham, K., Skogsholm, A., Leber, B., & Stamper, J. (2010) A data repository for the EDM community: The PSLC dataShop. In C. Romero, S. Ventura, M. Pechenizkiy, & R. S. J. D. Baker (Eds.), Handbook of educational data mining. Boca Raton, FL: CRC Press.

Koedinger, K. R., & Corbett, A. T. (2006). Cognitive tutors: Technology bringing learning science to the classroom. In K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 61–78). New York, NY: Cambridge University Press.

Lachman, R., Lachman, J. L., & Butterfield, E. C. (1979). Cognitive psychology and information processing. Hillsdale, NJ: Erlbaum.

Lajoie, S. P. (Ed.). (2000). Computers as cognitive tools (Vol. 2): No more walls. Mahwah, NJ: Erlbaum.

Lajoie, S. P. (2003). Transitions and trajectories for studies of expertise. Educational Researcher, 32(8), 21–25.

Lajoie, S. P. (2005). Cognitive tools for the mind: The promises of technology: Cognitive amplifiers or bionic prosthetics? In R. J. Sternberg & D. Preiss (Eds.), Intelligence and technology: Impact of tools on the nature and development of human skills (pp. 87–102). Mahwah, NJ: Erlbaum.

Lajoie, S. P. (2007). Aligning theories with technology innovations in education. British Journal of Educational Psychology—Monograph Series II (5), Learning through digital technologies, 27–38.

Lajoie, S. P. (2008). Metacognition, self regulation, and self-regulated learning: A rose by any other name? Educational Psychology Review, 20(4), 469–475.

Lajoie, S. P. (2009). Developing professional expertise with a cognitive apprenticeship model: Examples from avionics and medicine. In K. A. Ericsson (Ed.), Development of professional expertise: Toward measurement of expert performance and design of optimal learning environments (pp. 61–83). New York, NY: Cambridge University Press.

Lajoie, S. P. (2014). Multimedia learning of cognitive skills. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 623–646). New York, NY: Cambridge University Press.

Lajoie, S. P., & Azevedo, R. (2006). Teaching and learning in technology-rich environments. In P. Alexander and P. Winne (Eds.), Handbook of educational psychology (2nd ed., pp. 803–821). Mahwah, NJ: Erlbaum.

Lajoie, S. P., & Derry, S. J. (Eds.). (1993). Computers as cognitive tools. Hillsdale, NJ: Erlbaum.

Lajoie, S. P., & Lesgold, A. (1992). Dynamic assessment of proficiency for solving procedural knowledge tasks. Educational Psychologist, 27, 365–384.

Lajoie, S. P., Naismith, L., Hong, Y. J., Poitras, E., Cruz-Panesso, I., Ranellucci, J., Mamane, S., & Wiseman, J. (2013). Technology rich tools to support self-regulated learning and performance in medicine. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (pp. 229–242). New York, NY: Springer.

Lajoie, S. P., Poitras, E., Naismith, L., Gauthier, G., Summerside, C., Kazemitabar, M., Tressel, T., Lee, L., & Wiseman, J. (2013). Modelling domain-specific self-regulatory activities in clinical reasoning. In C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), International artificial intelligence and education proceedings (pp. 632–635). Amsterdam, the Netherlands: IOS Press.

Lajoie, S. P., Lee, L., Bassiri, M., Cruz-Panesso, I., Kazemitabar, M., Poitras, E., Hmelo-Silver, C., Wiseman, J., Chan, L., & Lu, J. (2015). The role of regulation in medical student learning in small groups: Regulating oneself and others’ learning and emotions. Computers in Human Behavior, 52, 601–616. doi:10.1016/j.chb.2014.11.073

Lee, S., Rowe, J., Mott, B. W., & Lester, J. (2014). A supervised learning framework for modeling director agent strategies in educational interactive narrative. IEEE Transactions on Computational Intelligence and AI in Games, 6(2), 203–215.

Lepper, M. R. (1988). Motivational considerations in the study of instruction. Cognition and Instruction, 5, 289–310.

Lesgold, A. M. (2000). What are tools for? Revolutionary change does not follow the usual norms. In S. P. Lajoie (Ed.), Computers as cognitive tools (Vol. 2): No more walls (pp. 399–408). Mahwah, NJ: Erlbaum.

Lester, J., Rowe, J., & Mott, B. (2013). Narrative-centered learning environments: A story-centric approach to educational games. In C. Mouza and N. Lavigne (Eds.), Emerging technologies for the classroom: A learning sciences perspective (pp. 223–238). New York, NY: Springer.

Loyens, S. M. M., Magda, J., & Rikers, R. M. J. P. (2008). Self-directed learning in problem-based learning and its relationships with self-regulated learning. Educational Psychology Review, 20(4), 411–427.

Mayer, R. E. (1996). Learners as information processors: Legacies and limitations of educational psychology’s second metaphor. Educational Psychologist, 31(3/4), 151–161.

McQuiggan, S., Rowe, J., & Lester, J. (2008). The effects of empathetic virtual characters on presence in narrative-centered learning environments. Proceedings of the 2008 SIGCHI Conference on Human Factors in Computing Systems, 1511–1520.

Mislevy, R. J., Steinberg, L. S., Breyer, F. J., Almond, R.G., & Johnson, L. (2002). Making sense of data from complex assessments. Applied Measurement in Education, 15, 363–378.

Mitrovic, A., Mayo, M., Suraweera, P., & Martin, B. (2001). Constraint-based tutors: A success story. In L. Monostori, J. Vancza, & M. Ali (Eds.), Proceedings of the 14th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (pp. 931–940). Berlin, Germany: Springer-Verlag.

Naismith, L. M. (2013). Examining motivational and emotional influences on medical students’ attention to feedback in a technology-rich environment for learning clinical reasoning (Unpublished doctoral dissertation). McGill University, Montreal, Canada.

Naismith, L. M., & Lajoie, S. P. (2014, April). Motivation, emotion, and attention to feedback in a computer-based learning environment for clinical reasoning. Paper presented at the meeting of the American Educational Research Association, Philadelphia, PA.

Neisser, U. (1967). Cognitive psychology. New York, NY: Appleton-Century-Crofts.

Nickerson, R. S. (2005). Technology and cognition amplification. In R. J. Sternberg & D. Preiss (Eds.), Intelligence and technology: Impact of tools on the nature and development of human skills (pp. 3–28). Mahwah, NJ: Erlbaum.

Owen, V. E., & Halverson, R. (2013). ADAGE: Assessment Data Aggregator for Game Environments. In C. Williams, A. Ochsner, J. Dietmeier, & C. Steinkuehler (Eds.), Proceedings of the Games, Learning, and Society Conference (pp. 248–254). Pittsburgh, PA: ETC Press.

Pea, R. D. (2004). The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. Journal of the Learning Sciences, 13(3), 423–451.

Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational Psychology Review, 18, 315–341.

Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing what students know. Washington, DC: National Academies Press.

Perkins, D. N. (1985). The fingertip effect: How information processing technology shapes thinking. Educational Researcher, 14, 11–17.

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). San Diego, CA: Academic Press.

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.

Poitras, E., Doleck, T., & Lajoie, S. P. (2014). Mining case summaries in BioWorld. Proceedings of the 9th International Conference on Computer Science & Education (ICCSE), 6–9. doi:10.1109/iccse.2014.6926422

Poitras, E., Jarrell, A., Doleck, T., & Lajoie, S. (2014). Supporting diagnostic reasoning by modeling help-seeking. Proceedings of the 9th International Conference on Computer Science & Education, 10–14. doi:10.1109/iccse.2014.6926422

Poitras, E., Kee, K., Lajoie, S. P., & Cataldo, D. (2013). Towards evaluating and modelling the impacts of mobile-based augmented reality applications on learning and engagement. In C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), International artificial intelligence and education proceedings (pp. 868–871). Amsterdam, the Netherlands: IOS Press.

Romero, C., & Ventura, S. (2010). Educational data mining: A review of the state of the art. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications And Reviews), 40(6), 601–618.

Rowe, J. P., & Lester, J. (2015). Improving student problem solving in narrative-centered learning environments: A modular reinforcement learning framework. In Proceedings of the Seventeenth International Conference on Artificial Intelligence in Education (pp. 419–428), Madrid, Spain.

Rowe, J. P., Mott, B. W., McQuiggan, S. W., Robison, J. L., Lee, S., & Lester, J. C. (2009). Crystal Island: A narrative-centered learning environment for eighth-grade microbiology. Workshop on Intelligent Educational Games, AIED (pp. 11–20).

Rowe, J. P., Shores, L., Mott, B., & Lester, J. (2011). Integrating learning, problem solving, and engagement in narrative-centered learning environments. International Journal of Artificial Intelligence in Education, 21(1–2), 115–133.

Sabourin, J., Mott, B., & Lester, J. (2013). Discovering behavior patterns of self-regulated learners in an inquiry-based learning environment. Proceedings of the Sixteenth International Conference on Artificial Intelligence in Education, 209–218.

Salomon, G., Perkins, D. N., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3), 2–9.

Shute, V. J. (2008). Focus on formative assessment. Review of Educational Research, 78, 153–189.

Shute, V. J., & Ke, F. (2012). Games, learning, and assessment. In D. Ifenthaler, D. Eseryel, & X. Ge (Eds.), Assessment in game-based learning: Foundations, innovations, and perspectives (pp. 43–58). New York, NY: Springer.

Shute, V. J., Leighton, J. P., Jang, E. E., & Chu, M.-W. (2016). Advances in the science of assessment. Educational Assessment, 21(1), 34–59. doi:10.1080/10627197.2015.1127752

Shute, V. J., Ventura, M., & Kim, Y. J. (2013). Assessment and learning of informal physics in Newton’s Playground. The Journal of Educational Research, 106, 423–430.

Shute, V. J., & Zapata-Rivera, D. (2012). Adaptive educational systems. In P. Durlach & A. Lesgold (Eds.), Adaptive technologies for training and education. New York, NY: Cambridge University Press.

Siemens, G., & Long, P. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 46(5), 30.

Sottilare, R. A., Brawner, K. W., Goldberg, B. S., & Holden, H. K. (2012). The Generalized Intelligent Framework for Tutoring (GIFT). Orlando, FL: U.S. Army Research Laboratory, Human Research and Engineering Directorate.

Sottilare, R. A., Graesser, A., Hu, X., & Holden, H. (Eds.). (2013). Design recommendations for intelligent tutoring systems: Vol. 1—Learner modeling. Orlando, FL: U.S. Army Research Laboratory.

Sottilare, R., Hu, X., & Graesser, A. (2014). Design recommendations for adaptive intelligent tutoring systems: Instructional strategies, Vol. II. Orlando, FL: U.S. Army Research Laboratory.

Thorndike, E. L. (1911/1965). Animal intelligence. New York, NY: Macmillan.

VanLehn, K. (2006). The behavior of tutoring systems. International Journal of Artificial Intelligence in Education, 16(3), 227–265.

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.

Volet, S., Vauras, M., Khosa, D., & Iiskala, T. (2013). Metacognitive regulation in collaborative learning: Conceptual developments and methodological contextualizations. In S. Volet & M. Vauras (Eds.), Interpersonal regulation of learning and motivation (pp. 67–101). New York, NY: Routledge.

Von Glasersfeld, E. (1995). Radical constructivism: A way of knowing and learning. London, U.K.: Falmer Press.

Vosniadou, S. (2007). The cognitive-situative divide and the problem of conceptual change. Educational Psychologist, 42(1), 55–66.

Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.

Weimer, M. (2010). Taking stock of what faculty know about student learning. In J. Mighty & J. Christensen (Eds.), Taking stock: Research on teaching and learning in higher education (pp. 81–93). Montreal, Canada: McGill-Queen’s University Press.

Williams, J. J., Renkl, A., Koedinger, K., & Stamper, J. (2013). Online education: A unique opportunity for cognitive scientists to integrate research and practice. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 113–114). Austin, TX: Cognitive Science Society.

Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 531–566). San Diego, CA: Academic Press.

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology & Psychiatry & Allied Disciplines, 17(2), 89–100.

Woolf, B., Burleson, W., Arroyo, I., Dragon, T., Cooper, D., & Picard, R. (2009). Affect-aware tutors: Recognising and responding to student affect. International Journal of Learning Technology 4(3–4), 129–164.

Zimmerman, B. J. (1986). Becoming a self-regulated learner: Which are the key subprocesses? Contemporary Educational Psychology, 11(4), 307–313.

Zimmerman, B.J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183.

Zimmerman, B. J., & Schunk, D. H. (2001). Self-regulated learning and academic achievement (2nd ed.). Mahwah, NJ: Erlbaum.




Cite This Article as: Teachers College Record Volume 119 Number 3, 2017, pp. 1–30
https://www.tcrecord.org ID Number: 21770


About the Author
  • Susanne Lajoie
    McGill University
    E-mail Author
SUSANNE P. LAJOIE is a Professor and Tier 1 Canada Research Chair in Advanced Technologies for Learning in Authentic Settings in the Department of Educational and Counselling Psychology at McGill University and a member of the Centre for Medical Education. She is a Fellow of the American Psychological Association and the American Educational Research Association. Dr. Lajoie explores how theories of learning and affect can be used to guide the design of advanced technology-rich learning environments in different domains, such as medicine, mathematics, and history.
  • Eric Poitras
    University of Utah
    E-mail Author
    ERIC POITRAS is an Assistant Professor for Instructional Design and Educational Technology in the Department of Educational Psychology at the University of Utah. He graduated from McGill University, where he earned a graduate degree in the Learning Sciences and worked as a postdoctoral researcher at the Learning Environments Across Disciplines research partnership. His research aims to improve the adaptive capabilities of instructional systems and technologies designed as cognitive and metacognitive tools as a means to foster self-regulated learning. In particular, his work focuses on the capabilities of intelligent tutoring systems and augmented reality applications to capture and analyze learner behaviors in order to deliver the most suitable instructional content in domain areas such as medical diagnostic reasoning, historical thinking, and teacher professional development.
 