
Assessing Metacognitive Deficiencies and Effectively Instructing Metacognitive Skills

by Marcel V. J. Veenman - 2017

Metacognitive skills refer to individual abilities for regulating and controlling learning behavior. Orientation, goal setting, planning, monitoring, and evaluation are manifestations of those skills. Given that metacognitive skills directly affect learning behavior, they are a strong predictor of learning performance. Students display a huge variation in metacognitive skillfulness, depending on age and experience. In this article, metacognitive skills are considered to be an acquired program of self-instructions, that is, an orderly series of condition-action rules that contain conditional knowledge about when to apply which skill, and operational instructions for how to implement a particular skill. This notion has implications for effective metacognitive instruction for deficient students. Prior to instruction, on-line assessments of metacognitive skillfulness during actual task performance are indispensable for identifying deficient students and for tailoring metacognitive instruction to the individual needs of students. Instruction should subsequently address what skill to perform when, why, and how (WWW&H), embedded within the context of a given task. Moreover, instruction should explicitly inform students about the benefits of applying metacognitive skills to make them exert the required effort. Finally, teachers may act as role models for students by including explicit metacognitive instruction in their lessons.

The concept of metacognition was introduced in the 1970s by Flavell (1976, 1979). He defined metacognition as thinking about one’s own mental activities. Being a neo-Piagetian, he especially took interest in children’s changing knowledge of the cognitive system in the course of development. This type of knowledge is nowadays referred to as metacognitive knowledge. Brown and DeLoache (1978) complemented the conceptualization of metacognition with regulatory control over one’s cognitive system. The latter component is referred to as metacognitive skills or increasingly as self-regulation, although both terms originated from different theoretical perspectives (Dinsmore, Alexander, & Loughlin, 2008).


The first component, metacognitive knowledge, pertains to declarative knowledge of the interplay between person, task, and strategy variables (Flavell, 1979). A student can, for instance, recognize that he or she (person variable) finds mathematics a difficult subject (task variable), and that he or she will therefore have to practice a lot by doing math exercises (strategy variable). On the other hand, another student may think that math is an easy subject and consequently skip the homework. This second student is likely to fail the test.

The example of the second student shows that metacognitive knowledge does not by definition have to be correct, although this knowledge may affect the student’s learning behavior. A student may have all sorts of beliefs about himself, the learning task, and the effort needed for all learning activities that do not correspond with reality. Even when the metacognitive knowledge of a student is correct, this does not guarantee that the student will also adequately regulate the learning behavior (Veenman, Van Hout-Wolters, & Afflerbach, 2006). The student from the first example may have the intention to practice a lot but may fail to do so for all sorts of reasons. The student may be insufficiently motivated to invest the necessary time and effort, or he or she may be distracted by other appealing pursuits. The student may find it difficult to judge whether practice is relevant: Doing exercises that have already been mastered is easy but less relevant, whereas doing exercises that are not yet mastered is difficult but relevant. Also, the student may lack the skills needed to execute the task, so that each attempt at practice is nipped in the bud. Thus, correct metacognitive knowledge is an essential, but not sufficient, condition for adequate regulation of learning behavior (Veenman, 2017).

A specific component of metacognitive knowledge is conditional knowledge about when a particular metacognitive strategy or skill should be applied and to what purpose (Schraw & Moshman, 1995). Poor students often do not know what skill to choose, when, and why. Even adequate conditional knowledge, however, does not guarantee the actual execution of a skill because a student may still miss the procedural knowledge for how the skill should be enacted. In fact, conditional knowledge provides an entry to the first stage of skill acquisition, where a metacognitive strategy has to be consciously applied step-by-step and gradually transformed into a skill through proceduralization (Anderson & Schunn, 2000).


The second component of metacognition, metacognitive skills, pertains to the procedural knowledge or skills for the actual regulation of and control over learning behavior. Task orientation, goal setting, planning, monitoring, evaluation, recapitulation, and self-reflection are examples of metacognitive skills. The overt behavior that results from applying metacognitive skills is referred to as metacognitive activities. Metacognitive skills affect learning behavior through these activities and, thus, directly determine learning outcomes.

To be effective, metacognitive skills have to be employed in an orderly fashion during task performance. A metacognitively proficient student will first prepare for the task by orienting on the task. First, the task assignment will be analyzed to identify the nature of the task and to determine what the task demands are. Meanwhile, prior knowledge can be retrieved from memory, enabling a better understanding of the task on the one hand, and preparing memory for storage of new knowledge on the other.

On the basis of this task analysis, an appropriate goal for the assignment needs to be set. Based on this goal, the student can devise a plan of action for attaining that goal, preferably before taking action. Experience, however, shows that students usually first devise a rough plan, which is further specified while performing the task. When lacking knowledge of the subject matter, starting with a rough plan is not such a bad idea. Gradually, knowledge is gathered that can be used to adapt the plan to the task requirements.

During the execution phase, the student usually adheres to the preconceived plan unless there are legitimate reasons for revision of the plan. Moreover, the student keeps a close watch on his or her behavior to avoid mistakes or correct them in a timely manner, to check one’s understanding of the material, and to assess progress made toward the goal. This observation and assessment of one’s own functioning from a helicopter view is called monitoring or process control. Making detailed, functional notes while performing a task supports the process of monitoring because the student can then retrace earlier steps in the execution of the task.

After this execution phase, the student will evaluate whether the goal has been reached. Moreover, by rereading the assignment, the student may verify whether task requirements have been met. Reflection on how the task was executed may entail learning for future performance.

For metacognitively proficient students, this entire process is cyclic in nature (Veenman, 2013a; Zimmerman, 2000). If the student gets stuck during task performance, monitoring processes instigate reorientation on the assignment or adapting the plan of action. Task evaluation after task execution can also lead to reorientation or revision of plans until the goal has been reached.

Metacognitively poor students, on the other hand, almost completely skip the preparatory phase. The assignment is often only read partially or superficially, after which they immediately start acting. With text comprehension, this usually leads to linear, mechanical reading from start to finish (Veenman & Beishuizen, 2004). With problem solving, trial-and-error behavior is often seen, whereby different solution methods are selected and tried out at random (Veenman, 2005). In the end, the student cannot see the wood for the trees. Monitoring is done sparsely and only when an evident mistake compels the student to do so. Typically, the execution of the task ends abruptly when the last word in the text has been read, or when a solution has been found, even if this solution does not provide an answer to the question. Thus, the evaluation phase is practically skipped (Veenman, 2013a).


A theory of metacognition ought to distinguish metacognitive “executive” processes from cognitive “execution” processes but also clarify how both types of processes interact (Kluwe, 1987; Veenman, 2017). Nelson (Nelson & Narens, 1990) postulated that the cognitive system is composed of two layers. The object level represents lower order cognitive activities that are needed for the execution of the task. The meta level comprises higher order processes of evaluation and planning that govern the object level. Monitoring processes convey information about the state of the object level to the meta level, whereas control processes transfer directions from the meta level to the object level. Thus, when errors occur on the object level, metacognitive monitoring processes will inform the meta level, and control processes will be activated to resolve the problem. Because metacognitive control is elicited by anomalies in cognitive activity, this is essentially a bottom-up model.


Veenman (2017) extended Nelson’s model with a top-down approach. Metacognitive skills are perceived as an acquired program of self-instructions at the meta level for controlling and regulating cognitive activity at the object level. This program of self-instructions is activated whenever a task is presented to the student. Self-instructions are condition-action rules (IF-THEN rules; Anderson & Schunn, 2000) that contain conditional knowledge about when to apply a particular skill and operational instructions for how to implement the skill on the object level. Monitoring processes identify which conditions are satisfied for activating self-instructions. Thus, self-instructions from the meta level are self-induced, that is, they are not necessarily activated by anomalies in task performance. Metacognitively skilled students have an orderly set of self-instructions available that will help them proceed with the task. The output of an implemented self-instruction will subsequently satisfy the conditions for the next self-instruction. Such a program of self-instructions is acquired through experience and training, much in the same way as cognitive skills are learned (Veenman, 2013a, 2017).
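The program of condition-action rules described above can be made concrete with a small sketch. The following Python fragment is purely illustrative and not from the article: it models an ordered set of self-instructions as IF-THEN rules, where monitoring checks which rule's condition is satisfied by the current task state, and the output of each implemented action satisfies the condition of the next rule. All names and states are hypothetical.

```python
# Illustrative sketch (not from the article): metacognitive self-instructions
# modeled as condition-action (IF-THEN) rules at the meta level.

def make_rule(condition, action):
    """Pair a condition test with an action label."""
    return {"condition": condition, "action": action}

# An orderly program of self-instructions: each rule fires when the current
# task state satisfies its condition; performing the action changes the
# state so that the condition of the next rule becomes satisfied.
RULES = [
    make_rule(lambda s: not s["task_analyzed"], "analyze_assignment"),
    make_rule(lambda s: s["task_analyzed"] and s["goal"] is None, "set_goal"),
    make_rule(lambda s: s["goal"] is not None and not s["plan"], "devise_plan"),
    make_rule(lambda s: s["plan"] and not s["done"], "execute_and_monitor"),
    make_rule(lambda s: s["done"] and not s["evaluated"], "evaluate_outcome"),
]

def next_action(state):
    """Monitoring: return the first self-instruction whose condition holds."""
    for rule in RULES:
        if rule["condition"](state):
            return rule["action"]
    return None  # no rule fires: task finished and evaluated

state = {"task_analyzed": False, "goal": None, "plan": False,
         "done": False, "evaluated": False}
print(next_action(state))  # analyze_assignment
state["task_analyzed"] = True
print(next_action(state))  # set_goal
```

Because the rules are self-induced rather than error-triggered, the program steps through orientation, goal setting, planning, execution, and evaluation even when nothing goes wrong, which is the top-down character of the model.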



Between the ages of 3 and 5, a child develops a theory of mind (TOM). The child realizes that the knowledge and thinking of others do not necessarily correspond with his or her own. TOM marks the start of metacognitive development (Lockl & Schneider, 2006). In the following years, children develop knowledge of their own cognitive system, such as how memory operates, but they remain limited in their ability to regulate and control their own behavior. For example, when 5-year-olds at play are asked to distribute teddy bears over a limited number of chairs (Whitebread et al., 2009), it appears that young children can apply only elementary forms of planning and self-correction. They may start by putting one teddy on each chair to see how many teddies are left over, or they take back a teddy if there are too many on one chair. The development of metacognitive skills in school contexts, however, does not commence until the age of 8 or 9. From late childhood through adolescence, students show a steep growth of metacognitive skills (Alexander, Carr, & Schwanenflugel, 1995; Veenman, Wilhelm, & Beishuizen, 2004).


Apart from an increase of metacognitive skills with age, a qualitative change in the nature of metacognitive skills occurs. Until the age of 12, metacognitive skills have a distinctly domain-specific component (Van der Stel & Veenman, 2014; Veenman & Spaans, 2005). Students apply the same metacognitive skills to tasks that are highly similar, but to a lesser degree to tasks that differ. Thus, it appears that metacognitive skills initially develop on “separate islands of tasks that are very much alike” (Veenman & Spaans, 2005). After the age of 12, metacognitive skills become increasingly task and domain surpassing. A crucial age is 14, when the increase in metacognitive skills is temporarily suspended while metacognitive skills are transformed into a general, domain-surpassing repertoire at the meta level (Van der Stel & Veenman, 2014; Veenman, Elshout, & Meijer, 1997). After the age of 14, growth resumes, and the repertoire of general metacognitive skills is further expanded. The importance of this general repertoire cannot be overemphasized. It facilitates the processing of new tasks in the absence of domain-specific knowledge, for instance, when entering a new field of study or job.


Metacognitive development, however, does not imply that skills develop at the same rate in all students. Within each age group, there are large individual differences in metacognitive skillfulness between students (Veenman et al., 2004). Some students lack adequate skills, whereas others are metacognitively proficient relative to their same-age peers. These individual differences persist throughout university. When studying text, some undergraduates skip the titles and abstracts of articles and book chapters, mark whole pages with a yellow marker pen without distinguishing main points from side issues, and do not check their comprehension by recapitulating what they have read (Veenman & Beishuizen, 2004). The same metacognitive ineptitude can be seen in students who have difficulty with organizing study projects or writing a thesis (Veenman & Verheij, 2003).

Sternberg (1990) postulated that metacognitive skills are a core component of intelligence. This assumption, however, has been contested by research. In an overview of studies with students from various age groups, performing different tasks in different domains, Veenman (2008) established that intelligence uniquely accounted for 10% of variance in learning performance, metacognitive skillfulness uniquely accounted for 18% of variance, and both predictors shared another 22% of variance in learning performance. Metacognitive skills thus appear to be a profound predictor of learning outcomes independent of intelligence. Moreover, recent research has shown that intellectually gifted students are as susceptible to metacognitive deficiencies as their less gifted classmates (Veenman, Bavelaar, De Wolf, & Van Haaren, 2014). Forty-five percent of students with an IQ ≥ 130 in 11th grade of preuniversity education attained a score on metacognition below the average of their less gifted classmates with an IQ below 130. Possibly, these gifted students rely entirely on their intelligence when performing regular tasks in secondary education and, as a result, fail to see the necessity of developing their metacognitive skills. Only when the complexity of learning materials inevitably calls on metacognitive skills, such as in senior high school or at university, may problems of study delay or dropout arise (Veenman & Verheij, 2003). In conclusion, metacognitive skills cannot be equated with intelligence. Variation in metacognitive development is not directly caused by intellectual differences (Alexander et al., 1995). Developmental pace, rather, is contingent on the opportunity one gets for acquiring metacognitive skills from exemplary teachers and parents (Veenman, 2015).
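The unique and shared variance percentages reported by Veenman (2008) follow the logic of commonality analysis over squared multiple correlations. The sketch below reconstructs that arithmetic; the R-squared inputs are back-calculated from the article's percentages (10% + 22% = .32 for intelligence alone, 18% + 22% = .40 for metacognition alone, .50 for both together) and are not taken directly from the study.

```python
# Sketch of the variance partitioning (commonality analysis) behind the
# figures cited from Veenman (2008). Inputs are reconstructed, not reported.

def partition_variance(r2_full, r2_a, r2_b):
    """Split the R^2 of a two-predictor model into unique and shared parts.

    r2_full: R^2 with both predictors; r2_a / r2_b: R^2 of each alone.
    """
    unique_a = r2_full - r2_b       # variance only predictor A explains
    unique_b = r2_full - r2_a       # variance only predictor B explains
    shared = r2_a + r2_b - r2_full  # variance both predictors explain
    return round(unique_a, 2), round(unique_b, 2), round(shared, 2)

# Consistent with 10% unique (intelligence), 18% unique (metacognitive
# skillfulness), and 22% shared variance in learning performance:
unique_iq, unique_meta, shared = partition_variance(
    r2_full=0.50, r2_a=0.32, r2_b=0.40)
print(unique_iq, unique_meta, shared)  # 0.1 0.18 0.22
```

The same decomposition also makes the contrast with Sternberg's position explicit: if metacognitive skills were merely a component of intelligence, their unique variance component would approach zero.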


In methods for the assessment of metacognitive skills, a distinction should be made between off-line and on-line methods (Veenman, 2005). Off-line methods pertain to measurements administered disjoint from task performance. For instance, typical off-line methods are questionnaires presented to students either prior to or after task performance. On-line methods, on the other hand, are applied during the student’s task performance. Behavioral observations and thinking aloud are often used as on-line methods. An essential difference between the two types is that off-line methods rely on the self-reports from individual students, whereas in on-line methods, the actual student behavior is judged by an external agency on preset criteria.



There is a wide range of questionnaires used for the assessment of metacognitive skills. Some questionnaires (e.g., MSLQ, Pintrich & de Groot, 1990) have separate scales for metacognitive strategy use and self-regulation, alongside scales for other learner characteristics. Others, such as the MAI (Schraw & Dennison, 1994), focus entirely on metacognition. They address students with questions about (the frequency of) their metacognitive strategy use and skill application. For instance, an item from the MSLQ is: “I ask myself questions to make sure I know the material I have been studying,” which is to be answered on a Likert scale ranging from 1 (not at all true of me) to 7 (very true of me). The advantage of questionnaires is that they are easy to administer in large groups.


The SRLIS instrument of Zimmerman and Martinez-Pons (1986) is often used as an interview instrument. Students are asked to describe their use of self-regulatory strategies in six learning situations (classroom situation, studying at home, writing assignment, math assignment, preparing for and taking a test, and completing homework when poorly motivated), and these self-reports are scored on 14 SRL-strategy categories. Although SRLIS yields more fine-grained information than questionnaires, the procedure is far more labor intensive.

Off-line self-report instruments, however, suffer from serious validity problems (Veenman, 2011, 2017). A first fundamental problem is that students need to reconstruct their earlier strategic behavior from memory. Consequently, their self-reports are subject to memory failure, distortion, and interpretive reconstruction (Ericsson & Simon, 1993; Nisbett & Wilson, 1977). For retrospective self-reports, memory-reconstruction problems can be mitigated by questioning students immediately after task performance or by stimulated recall of students watching video recordings of their task performance (Artzt & Armour-Thomas, 1992). Yet, memory bias is not fully resolved by such procedures. When off-line assessments are administered prior to or disjoint from actual performance, severe memory problems may occur because answers must be based on the student’s previous experiences. Thus, memory-based self-reports may not be representative of the students’ actual metacognitive behavior.

A second problem of off-line methods is that questioning may interfere with the student’s spontaneous self-report of strategy use. Retrospective questions may prompt the recall of strategy use that in fact never occurred. Students may be inclined to give socially desirable answers. They may also be triggered by questions to label their behavior accordingly. For instance, when asked afterward whether they summarized, they may mistake their loose notes for a summary. Incorrect labeling occurs, in particular, when students have poor declarative metacognitive knowledge (Veenman, 2011).

A third problem pertains to the closed-answer format of questionnaires. When students have to decide whether they seldom, regularly, or often use a particular strategy, they have to compare themselves with others. The reference points chosen, however, may vary from one student to another. Students may choose the teacher or the best student as a reference point, but also the poorest classmate. Even when a student consistently adheres to the same reference point while answering all questions, disparity in answers occurs between students with different reference points. Consequently, indices for reliability or internal consistency of the questionnaire may be high, while the validity of the instrument is low (Veenman, Prins, & Verheij, 2003).

How critical are these validity problems? A review study by Veenman (2005) revealed that off-line self-reports of metacognitive skills or strategy use hardly correspond to actual metacognitive behavior in a task situation. Moreover, correlations among different off-line measurements of metacognition were low to moderate, even though the measurements were intended to assess the same construct. These findings have been corroborated by later multimethod studies in the domain of reading (Bannert & Mengelkamp, 2008; Cromley & Azevedo, 2006; Hadwin, Nesbit, Jamieson-Noel, Code, & Winne, 2007), mathematics (Desoete, 2008; Veenman & Van Cleef, 2007), and solving puzzles (Li, Zhang, Du, Zhu, & Li, 2015). Veenman (2013b) estimated that off-line and on-line measures only have about 2% of variance in common (r = .15 on average). Apparently, students do not actually do what they earlier said they would do, and they cannot adequately report afterward what they have done. In fact, it is not clear what off-line methods essentially measure. Perhaps they capture some elements of metacognitive knowledge, but that remains to be further investigated (Veenman, 2017). Until these validity problems are resolved, researchers and professionals should refrain from using off-line methods for assessing metacognition in general, and metacognitive strategy use and skillfulness in particular. Nevertheless, off-line measures are omnipresent in metacognition research. Dinsmore et al. (2008) estimated in a review of about 200 studies that 37% of the metacognition assessments (24% questionnaires and 13% interviews) and 68% of the SRL assessments (59% questionnaires and 9% interviews) relied on off-line methods.



Thinking aloud is often used as an on-line assessment method. Students are asked to verbalize their ongoing thoughts while performing a particular task. Unlike introspection, mere verbalization means that students refrain from interpreting their thoughts or behavior (Ericsson & Simon, 1993). When students are properly instructed, thinking aloud does not interfere with cognitive processes (Ericsson & Simon, 1993) or, more specifically, with metacognitive regulatory processes (Bannert & Mengelkamp, 2008; Veenman, Elshout, & Groen, 1993). It may, however, slightly slow down these processes because of concurrent verbalization efforts. Thinking-aloud protocols are individually recorded on tape and subsequently transcribed. Two or more “blind” judges independently analyze the protocols according to a detailed scoring schema (see Veenman, 2013a). Thus, thinking-aloud protocols allow for a fine-grained analysis of metacognitive skills that are either mastered by the student or deficient in the student’s repertoire. A disadvantage, on the other hand, is the labor-intensive and time-consuming procedure. Therefore, the thinking-aloud method is mainly used in research and for further investigation of individual students with learning deficiencies (Veenman, 2013a), but it is not a practical instrument for screening a large group of students.
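The article states that two or more "blind" judges score each protocol independently against a detailed schema, but it does not name an agreement statistic. One common choice for checking such inter-judge reliability is Cohen's kappa; the sketch below is a minimal two-judge version, with invented activity codes (O = orientation, M = monitoring, E = evaluation) purely for illustration.

```python
# Minimal sketch of Cohen's kappa for two judges scoring protocol fragments.
# The kappa statistic is an assumption here; the article only says judges
# score independently, without naming an agreement measure.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two judges' categorical scores."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Proportion of fragments on which the judges agree outright:
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each judge's marginal frequencies:
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two judges scoring ten protocol fragments (hypothetical data):
judge1 = ["O", "O", "M", "M", "M", "E", "O", "M", "E", "E"]
judge2 = ["O", "O", "M", "M", "E", "E", "O", "M", "E", "M"]
print(round(cohens_kappa(judge1, judge2), 2))  # 0.7
```

Chance correction matters here because a scoring schema with few categories inflates raw percentage agreement; kappa discounts the agreement that two judges would reach by guessing from their own base rates.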


Sometimes the task is not suitable for thinking aloud. For instance, the task may call on highly automated processes, or the task may be extremely difficult and demanding. Thinking-aloud protocols are likely to be incomplete under these circumstances (Ericsson & Simon, 1993). Also the verbal proficiency of students may be insufficient to obtain adequate thinking-aloud protocols during task performance. This may apply to students who have to think aloud in a foreign language that is not mastered fluently or, more profoundly, to young children who are still in the process of developing their verbal skills (Whitebread et al., 2009). Protocols may be incomplete or distorted as verbalization occupies too much working-memory space, relative to the processing demands of the task (Veenman, 2017). In those cases, thinking aloud can be replaced or complemented by observations through video recordings of the student’s task performance. These video protocols need to be analyzed by multiple judges with a detailed scoring schema of overt behavioral indicators.

Unless combined with thinking aloud, behavioral observations do not give direct access to mental processes underlying behavior. For instance, recalculation may result from an evaluation to check the outcome (good metacognition), but equally from lost information due to sloppy note taking (poor metacognition). Consequently, judges need to infer the metacognitive nature of behavior by scrutinizing patterns of activities. Direct observation, that is, scoring behavior without video recordings, is not recommended for that reason. Obviously, such analysis of video protocols is time consuming and labor intensive as well.


Additional tools for metacognitive assessment have recently been developed with the on-line registration of metacognitive activities in computer logfiles (Greene & Azevedo, 2010; Li et al., 2015; Veenman et al., 2004, 2014). Obviously, the task should be suitable as a computerized version, otherwise the ecological validity of assessments is compromised. During task performance on a computer, all student activities are logged into a background file, and frequencies are calculated for a selection of metacognitive activities. Moreover, trace data in logfile registrations allow for the detection of meaningful patterns in the sequence of activities (Hadwin et al., 2007; Veenman, 2013b; Winne, 2014).

Logfile registration only captures the concrete, overt behavior on the object level, and it does not give access to the student’s metacognitive considerations. Therefore, a set of metacognitive activities relevant to the task should be selected beforehand on rational grounds, and this potential set of activities should be validated against other on-line measures (Veenman, 2013b; Veenman et al., 2014). Validation is required prior to logfile registration, because the coding of student activities during task performance is (partly) automated. Validation may rule out potential activities when they eventually appear to equivocally represent metacognitive skills.

In the study of Veenman et al. (2014), students had to perform computerized experiments to discover relations between five independent variables and a dependent variable. Repeating an earlier experiment was expected to result from sloppy metacognition. For some students, repeating experiments was superfluous indeed. Other students, however, circumvented the cumbersome scrolling through a list of earlier experiments and outcomes by quickly redoing the experiment. Thus, repeating experiments appeared to be a diffuse indicator of metacognition in the validation study, and it was removed from the final coding system (Veenman et al., 2014). In the end, the adequacy of the automated coding system determines the quality of assessment outcomes. Assessment through logfile registration is minimally intrusive to students. Moreover, it can be administered to large groups at the same time (Dinsmore et al., 2008; Veenman, 2013b).
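The logfile procedure described above, that is, coding logged events with a previously validated table and counting frequencies per metacognitive activity, can be sketched in a few lines. Everything in this fragment is hypothetical: the event names, the coding table, and the example log are invented for illustration, and a diffuse indicator such as repeating an experiment is simply left out of the table, as in the validation study.

```python
# Hypothetical sketch of scoring a computer logfile for metacognitive
# activities. Event names and the coding table are invented; in practice
# the table must be validated against other on-line measures beforehand.

from collections import Counter

# Validated mapping from logged events to metacognitive activity codes
# (an equivocal indicator like "repeat_experiment" is deliberately absent):
CODING = {
    "open_assignment": "orientation",
    "reread_assignment": "orientation",
    "write_note": "monitoring",
    "check_outcome": "evaluation",
}

def score_logfile(events):
    """Count the frequency of each coded metacognitive activity;
    uncoded object-level events (e.g., running an experiment) are skipped."""
    return Counter(CODING[e] for e in events if e in CODING)

log = ["open_assignment", "run_experiment", "write_note",
       "run_experiment", "check_outcome", "reread_assignment"]
print(dict(score_logfile(log)))
# {'orientation': 2, 'monitoring': 1, 'evaluation': 1}
```

Because the coding runs automatically over the background file, the same script scales to large groups; the quality of the assessment, however, is bounded by the adequacy of the coding table, exactly as the validation example with repeated experiments illustrates.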

Contrary to off-line measures, correlations among on-line measures usually are moderate to high (Cromley & Azevedo, 2006; Veenman, 2005; Veenman et al., 2014). Apparently, different on-line measures tend to converge in their assessments of metacognitive skills. Convergence in different assessments of the same construct contributes to construct validity (Veenman, 2011). Moreover, on-line measures are good predictors of learning performance, relative to off-line measures (Bannert & Mengelkamp, 2008; Cromley & Azevedo, 2006). Veenman (2005) estimated that correlations with learning performance range from .45 to .90 for on-line measures, whereas they range from slightly negative to .36 for off-line measures. According to metacognition theory, metacognitive skills are expected to contribute to learning performance (Veenman, 2007). Therefore, the substantial correlations of on-line measures with learning outcomes are indicative of external validity.

The relevance of adequately assessing metacognitive skills is not labeling students as metacognitively poor or proficient, merely for reasons of selection. The focus here is on the consequences of assessments for the instruction and training of metacognitive skills. For metacognitively proficient students, instruction and training is not merely superfluous and a waste of time; it may even interfere with their spontaneous execution of adequate metacognitive skills (Veenman, 2013a)—much like when an experienced driver is told how to drive a car. Hence, only students with metacognitive deficiencies should be instructed and trained. Moreover, adequate assessments could provide useful information about which deficiencies should be addressed. Some students may specifically suffer from poor orientation and planning, whereas others lack monitoring and evaluation skills. Identifying such deficiencies could make instruction and training more effectively tailored to the specific needs of a particular student (Veenman, 2013a).

Metacognitive assessments are also needed for establishing the effects of remedial instruction and training on metacognitive behavior. Most training studies only report effects on learning performance, not on metacognitive behavior (Veenman, 2007). Instructional effects on learning outcomes, however, could equally be attributed to confounding variables such as extended time-on-task. Instructional effectiveness is evaluated by ascertaining the causal chain of instruction leading to improved metacognitive behavior and, thus, to enhanced learning performance (Veenman, 2013a).



Three principles are essential to effective instruction of metacognitive skills: (a) the synthesis position, (b) informed training, and (c) prolonged instruction (Veenman et al., 2006; Veenman, 2013a). The synthesis position (Volet, 1991) asserts that metacognitive instruction should be embedded in the context of a task in order to attune the execution of metacognitive skills to specific task demands. Embedded instruction enables the student to link task-specific conditional knowledge of which skill to apply when to the procedural knowledge of how the skill is applied in the context of the task. Studies that violated this principle, for instance by merely presenting instructions on a sheet of paper prior to actual task performance (e.g., Stoutjesdijk & Beishuizen, 1992), invariably lacked training effects.

According to the second principle, informed instruction (Campione, Brown, & Ferrara, 1982), students should be explicitly informed about the benefits of applying metacognitive skills to make them exert the initial extra effort. When students do not spontaneously use metacognitive skills, the execution of the instructed skills initially requires extra effort and occupies working-memory space. This may result in cognitive overload, particularly when the task is demanding. Students may be inclined to abandon the instructed skills unless they appreciate why metacognitive skills facilitate task execution and learning performance. However, rather abstract explanations, such as “planning helps you to organize your work,” may not suffice to convince all students. When students are asked why they do not fully read an assignment before engaging in reading or problem solving, metacognitively poor students especially indicate that they are anxious about running out of time (Veenman, 2015). They do not realize that an initial investment of time in orientation and planning activities will save time during further execution of the task. They will likely make fewer mistakes, and they can easily track and remedy the mistakes that occur. It appears that most students are susceptible to three practical arguments in favor of using metacognitive skills: It saves time, it reduces errors and mistakes, and it will result in better learning outcomes (i.e., higher marks). Abstract explanations become more persuasive when they are subsequently translated into these practical arguments (Veenman, 2015).

Finally, the third principle, prolonged instruction, implies that instruction and training should be extended over time, thus allowing for the smooth and maintained execution of metacognitive skills. Duration of instruction, however, may vary depending on the quantity and complexity of skills. The instruction period may be relatively short for mastering a limited set of metacognitive skills (Kramarski & Mevarech, 2003; Veenman, Kok, & Blöte, 2005). For establishing enduring effects on spontaneous metacognitive functioning, however, the instruction period may cover a year or more (Mettes, Pilot, & Roossink, 1981), especially for students with learning disabilities (Pressley & Gaskins, 2006).

Any successful instructional program for the acquisition of metacognitive skills adheres to these three principles. Veenman (2013a) refers to these principles as the WWW&H rule for complete instruction of metacognitive skills, indicating that students should be instructed, modeled, and trained in what skill to apply when, why, and how in the context of a task. The relevance of these four components in metacognitive instruction has been acknowledged in the past (Borkowski, Carr, & Pressley, 1987; Brown, 1978). According to the WWW&H rule, however, all four components need to be attended to in the initial phase of metacognitive-skill acquisition.


Not all students are alike in their need for metacognitive instruction. Students with poor metacognitive behavior may suffer from either an availability deficiency or a production deficiency (Veenman, Kerseboom, & Imthorn, 2000; Veenman, 2013a). Availability-deficient students do not have metacognitive skills at their disposal. For instance, they do not know what planning is or how to do it. They are not capable of planning, even when their parents frequently urge them to “do some planning” in their schoolwork. Cues or prompts that merely remind these students to use metacognitive skills during task performance neither affect their metacognitive behavior nor result in enhanced learning performance (Connor, 2007; Veenman et al., 2000). Students with an availability deficiency need to receive complete instruction and training from scratch, in line with the WWW&H rule.

Students with a production deficiency, on the other hand, have metacognitive skills at their disposal, but they do not spontaneously apply those skills, for various reasons (Brown & DeLoache, 1978; Flavell, 1976). For instance, they do not know when metacognitive skills are appropriately deployed, they do not recognize the relevance of those skills for a particular task, they may overestimate themselves by thinking that they can manage without them, or the execution of skills may be obstructed by test anxiety (Veenman et al., 2000). Obviously, these students do not need training in how to execute metacognitive skills, because they are capable of exhibiting skillful behavior in other situations. The production deficiency results, rather, from a lack of knowledge about when or why skills should be applied in this particular learning context. Therefore, metacognitive instruction should focus on these two components of the WWW&H rule. Indeed, production-deficient students may benefit from cues or prompts that remind them of skill application in the course of task performance (Veenman et al., 2000).


Step-by-step action plans are often used in metacognitive instruction to clarify what metacognitive skills should be applied when by the student (Veenman, 2013a). Such a step-by-step plan includes an orderly series of questions or keywords, addressing metacognitive skills that should be successively executed in the course of task performance. At the onset of the task, skills for task analysis, activating prior knowledge, goal setting, and planning are called on. Systematic execution of plans, monitoring, and note taking are advocated during task performance, whereas evaluation, recapitulation, and reflection are endorsed at the end of the task. In step-by-step action plans, the rather abstract descriptions of skills are translated into concrete activities that apply to the task at hand. For instance, monitoring during problem solving mainly pertains to checking the outcomes of calculations, whereas comprehension monitoring during text studying refers to checking whether words and phrases are understood (Veenman, 2013a).
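The article characterizes metacognitive skills as an acquired program of self-instructions: an orderly series of condition-action rules. A step-by-step action plan can be sketched in exactly that form. The sketch below is illustrative only; the phase labels and prompt texts are invented for the example, not drawn from any published training program.

```python
# A step-by-step action plan as an ordered series of condition-action rules.
# Phase names and prompt texts are hypothetical examples for illustration.

ACTION_PLAN = [
    # (condition: current task phase, action: metacognitive prompt)
    ("onset",  "Read the whole assignment first: what exactly is asked?"),
    ("onset",  "What do you already know about this topic?"),
    ("onset",  "Set a goal and list the steps you will take."),
    ("during", "Are you still following your plan? Check your last step."),
    ("during", "Note intermediate results before moving on."),
    ("end",    "Check your answer against the assignment."),
    ("end",    "Recap: what worked, and what would you do differently?"),
]

def prompts_for(phase: str) -> list[str]:
    """Fire every rule whose condition matches the current task phase."""
    return [action for condition, action in ACTION_PLAN if condition == phase]

for prompt in prompts_for("onset"):
    print("-", prompt)
```

Translating each abstract skill into a task-specific prompt, as in the rule tuples above, is what embeds the instruction in the context of the task.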

Step-by-step action plans have been successfully included in training programs for metacognition, such as the systematic approach to problem solving in physics (Mettes et al., 1981), IMPROVE for mathematics (Kramarski & Mevarech, 2003; Mevarech & Fridkin, 2006), and metacognitive cueing in mathematics (Veenman et al., 2005). With the introduction of computers in educational settings, computerized metacognitive scaffolds are delivered according to an underlying model of step-by-step actions, either as a fixed array of scaffolds (Manlove, Lazonder, & de Jong, 2007; Veenman, Elshout, & Busato, 1994) or adaptively by an intelligent tutoring system (Molenaar, Sleegers, & Van Boxtel, 2014; Roll, Aleven, McLaren, & Koedinger, 2011). What these programs have in common is that they advance the execution of appropriate metacognitive skills at the right time within the context of a given task.
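The contrast between a fixed array of scaffolds and adaptive scaffolding can be sketched in miniature. The skill labels and the activity-trace representation below are assumptions made for illustration; they do not reproduce the design of the cited systems.

```python
# Fixed vs. adaptive metacognitive scaffolding, in miniature.
# Skill labels and the trace format are invented for illustration.

FIXED_SCAFFOLDS = ["orient", "plan", "monitor", "evaluate"]

def fixed_schedule() -> list[str]:
    """Fixed array: every student receives the same scaffolds in order."""
    return list(FIXED_SCAFFOLDS)

def adaptive_schedule(trace: list[str]) -> list[str]:
    """Adaptive tutoring: scaffold only the skills absent from the student's
    logged activity trace (a crude stand-in for an intelligent tutoring
    system's learner model)."""
    return [skill for skill in FIXED_SCAFFOLDS if skill not in trace]

# A student who already oriented and planned gets only the remaining prompts.
print(adaptive_schedule(["orient", "plan"]))  # ['monitor', 'evaluate']
```

Both regimes deliver the same underlying step-by-step model; they differ only in whether the prompts are conditioned on the student's observed behavior.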

Step-by-step action plans are informative of what to do when, but they do not cover the why and how of metacognitive instruction. The latter issues need to be addressed, through modeling and scaffolding, by teachers in natural classroom settings and by remedial teachers. They should demonstrate a particular strategy or skill while explaining its usefulness. Subsequently, they should let students reproduce the skill while providing them with feedback, and, finally, they should let students repeatedly practice the skill (Veenman, 2015). Unfortunately, teachers tend to give implicit rather than explicit instruction. That is, they spontaneously use examples of their own metacognitive activity in their lessons, but they are not inclined to explain to their students the metacognitive nature and benefits of these activities. Veenman, Haan, and Dignath (2009) observed the lessons of teachers from various disciplines. Teachers varied considerably in the number of metacognitive examples given, but 96% of all examples were dealt with in an implicit way, and only 4% concerned explicit instruction. In doing so, teachers unintentionally violated the principle of informed instruction.

By making their instruction of metacognitive skills explicit, teachers can become role models for their students. Pressley and Gaskins (2006) developed a teaching program in special education for students with very low reading ability. Throughout the day, teachers of all school disciplines addressed the students with a broad array of metacognitive instructions for reading. They incessantly explained, modeled, and prompted the use of reading-comprehension strategies. After spending 4 to 8 years at this special school, students returned to regular education with above-average scores on a national reading test. Teachers can make a difference, but it takes time and effort from both teachers and students.


Metacognitive skillfulness is the most important predictor of learning performance, outweighing intelligence, motivation, and socioeconomic background, among other predictors (Veenman, 2008; Wang, Haertel, & Walberg, 1990). Moreover, metacognitive skills can be effectively instructed and trained, resulting in enhanced learning performance. In fact, the effects of metacognitive instruction may transfer to other tasks and domains, especially when metacognitive skills have become a general repertoire after the age of 14 (Veenman et al., 1997; Van der Stel & Veenman, 2014). Thus, metacognitive instruction is a powerful educational instrument for helping students become independent learners in a rapidly changing world.


Alexander, J. M., Carr, M., & Schwanenflugel, P. J. (1995). Development of metacognition in gifted children: Directions for future research. Developmental Review, 15, 1–37.

Anderson, J. R., & Schunn, C. D. (2000). Implications of the ACT-R learning theory: No magic bullets. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 5, pp. 1–33). Mahwah, NJ: Erlbaum.

Artzt, A. F., & Armour-Thomas, E. (1992). Development of a cognitive-metacognitive framework for protocol analysis of mathematical problem solving in small groups. Cognition and Instruction, 9, 137–175.

Bannert, M., & Mengelkamp, C. (2008). Assessment of metacognitive skills by means of instruction to think aloud and reflect when prompted. Does the verbalization method affect learning? Metacognition and Learning, 3, 39–58.

Borkowski, J. G., Carr, M., & Pressley, M. (1987). “Spontaneous” strategy use: Perspectives from metacognitive theory. Intelligence, 11, 61–75.

Brown, A. L. (1978). Knowing when, where, and how to remember: A problem of metacognition. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 1, pp. 77–165). Hillsdale, NJ: Erlbaum.

Brown, A. L., & DeLoache, J. S. (1978). Skills, plans, and self-regulation. In R. S. Siegler (Ed.), Children's thinking: What develops? (pp. 3–35). Hillsdale, NJ: Erlbaum.

Campione, J. C., Brown, A. L., & Ferrara, R. A. (1982). Mental retardation and intelligence. In R. J. Sternberg (Ed.), Handbook of human intelligence (pp. 392–490). Cambridge, England: Cambridge University Press.

Connor, L. N. (2007). Cueing metacognition to improve researching and essay writing in a final year high school biology class. Research in Science Education, 37, 1–16.

Cromley, J. G., & Azevedo, R. (2006). Self-report of reading comprehension strategies: What are we measuring? Metacognition and Learning, 1, 229–247.

Desoete, A. (2008). Multi-method assessments of metacognitive skills in elementary school children: How you test is what you get. Metacognition and Learning, 3, 189–206.

Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educational Psychology Review, 20, 391–409.

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis. Cambridge, MA: MIT Press.

Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. B. Resnick (Ed.), The nature of intelligence (pp. 231–235). Hillsdale, NJ: Erlbaum.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.

Greene, J. A., & Azevedo, R. (2010). The measurement of learners’ self-regulated cognitive and metacognitive processes while using computer-based learning environments. Educational Psychologist, 45, 203–209.

Hadwin, A. F., Nesbit, J. C., Jamieson-Noel, D., Code, J., & Winne, P. H. (2007). Examining trace data to explore self-regulated learning. Metacognition and Learning, 2, 107–124.

Kluwe, R. H. (1987). Executive decisions and regulation of problem solving behavior. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 31–64). Hillsdale, NJ: Erlbaum.

Kramarski, B., & Mevarech, Z. R. (2003). Enhancing mathematical reasoning in the classroom: The effects of cooperative learning and metacognitive training. American Educational Research Journal, 40, 281–310.

Li, J., Zhang, B., Du, H., Zhu, Z., & Li, Y. M. (2015). Metacognitive planning: Development and validation of an online measure. Psychological Assessment, 27, 260–271.

Lockl, K., & Schneider, W. (2006). Precursors of metamemory in young children: The role of theory of mind and metacognitive vocabulary. Metacognition and Learning, 1, 15–31.

Manlove, S., Lazonder, A. W., & De Jong, T. (2007). Software scaffolds to promote regulation during scientific inquiry learning. Metacognition and Learning, 2, 141–155.

Mettes, C. T. C. W., Pilot, A., & Roossink, H. J. (1981). Linking factual and procedural knowledge in solving science problems: A case study in a thermodynamics course. Instructional Science, 10, 333–361.

Mevarech, Z., & Fridkin, S. (2006). The effects of IMPROVE on mathematical knowledge, mathematical reasoning and metacognition. Metacognition and Learning, 1, 85–97.

Molenaar, I., Sleegers, P., & Van Boxtel, C. (2014). Metacognitive scaffolding during collaborative learning: A promising combination. Metacognition and Learning, 9, 309–332.

Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. Psychology of Learning and Motivation, 26, 125–173.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we know: Verbal reports on mental processes. Psychological Review, 84, 231–259.

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33–40.

Pressley, M., & Gaskins, I. (2006). Metacognitive competent reading is constructively responsive reading: How can such reading be developed in students? Metacognition and Learning, 1, 99–113.

Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2011). Improving students’ help-seeking skills using metacognitive feedback in an intelligent tutoring system. Learning and Instruction, 21, 267–280.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460–475.

Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7, 351–371.

Sternberg, R. J. (1990). Metaphors of the mind: Conceptions of the nature of intelligence. Cambridge, England: Cambridge University Press.

Stoutjesdijk, E., & Beishuizen, J. J. (1992). Cognitie en metacognitie bij het bestuderen van informatieve tekst [Cognition and metacognition during the study of informative texts]. Tijdschrift voor Onderwijsresearch, 17, 313–326.

Van der Stel, M., & Veenman, M. V. J. (2014). Metacognitive skills and intellectual ability of young adolescents: A longitudinal study from a developmental perspective. European Journal of Psychology of Education, 29, 117–137.

Veenman, M. V. J. (2005). The assessment of metacognitive skills: What can be learned from multi-method designs? In C. Artelt & B. Moschner (Eds.), Lernstrategien und Metakognition: Implikationen für Forschung und Praxis (pp. 75–97). Berlin, Germany: Waxmann.

Veenman, M. V. J. (2007). The assessment and instruction of self-regulation in computer-based environments: A discussion. Metacognition and Learning, 2, 177–183.

Veenman, M. V. J. (2008). Giftedness: Predicting the speed of expertise acquisition by intellectual ability and metacognitive skillfulness of novices. In M. F. Shaughnessy, M. V. J. Veenman, & C. Kleyn-Kennedy (Eds.), Meta-cognition: A recent review of research, theory, and perspectives (pp. 207–220). Hauppauge, NY: Nova Science Publishers.

Veenman, M. V. J. (2011). Alternative assessment of strategy use with self-report instruments: A discussion. Metacognition and Learning, 6, 205–211.

Veenman, M. V. J. (2013a). Training metacognitive skills in students with availability and production deficiencies. In H. Bembenutty, T. Cleary, & A. Kitsantas (Eds.), Applications of self-regulated learning across diverse disciplines: A tribute to Barry J. Zimmerman (pp. 299–324). Charlotte, NC: Information Age.

Veenman, M. V. J. (2013b). Assessing metacognitive skills in computerized learning environments. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (pp. 157–168). New York, NY: Springer.

Veenman, M. V. J. (2015). Het onderkennen en herkennen van metacognitieve vaardigheden en metacognitieve deficiënties. Werkboek workshop voor docenten VO [The recognition and identification of metacognitive skills and metacognitive deficiencies. Workbook workshop for secondary-school teachers] (3rd ed.). Hillegom, The Netherlands: Institute for Metacognition Research.

Veenman, M. V. J. (2017). Learning to self-monitor and self-regulate. In R. Mayer & P. Alexander (Eds.), Handbook of research on learning and instruction (2nd rev. ed., pp. 233–257). New York, NY: Routledge.

Veenman, M. V. J., Bavelaar, L., De Wolf, L., & Van Haaren, M. G. P. (2014). The on-line assessment of metacognitive skills in a computerized environment. Learning and Individual Differences, 29, 123–130.

Veenman, M. V. J., & Beishuizen, J. J. (2004). Intellectual and metacognitive skills of novices while studying texts under conditions of text difficulty and time constraint. Learning and Instruction, 14, 619–638.

Veenman, M. V. J., Elshout, J. J., & Busato, V. V. (1994). Metacognitive mediation in learning with computer-based simulations. Computers in Human Behavior, 10, 93–106.

Veenman, M. V. J., Elshout, J. J., & Groen, M. G. M. (1993). Thinking aloud: Does it affect regulatory processes in learning? Tijdschrift voor Onderwijsresearch, 18, 322–330.

Veenman, M. V. J., Elshout, J. J., & Meijer, J. (1997). The generality vs. domain-specificity of metacognitive skills in novice learning across domains. Learning and Instruction, 7, 187–209.

Veenman, M. V. J., Haan, N., & Dignath, C. (2009). An observation scale for assessing teachers’ implicit and explicit use of metacognition in classroom settings. Paper presented at the 13th Biennial Conference for Research on Learning and Instruction (EARLI), Amsterdam, The Netherlands.

Veenman, M. V. J., Kerseboom, L., & Imthorn, C. (2000). Test anxiety and metacognitive skillfulness: Availability versus production deficiencies. Anxiety, Stress, and Coping, 13, 391–412.

Veenman, M. V. J., Kok, R., & Blöte, A. W. (2005). The relation between intellectual and metacognitive skills at the onset of metacognitive skill development. Instructional Science, 33, 193–211.

Veenman, M. V. J., Prins, F. J., & Verheij, J. (2003). Learning styles: Self-reports versus thinking-aloud measures. British Journal of Educational Psychology, 73, 357–372.

Veenman, M. V. J., & Spaans, M. A. (2005). Relation between intellectual and metacognitive skills: Age and task differences. Learning and Individual Differences, 15, 159–176.

Veenman, M. V. J., & Van Cleef, D. (2007). Validity of assessing metacognitive skills for mathematic problem solving. In A. Efklides & M. H. Kosmidis (Eds.), 9th European Conference on Psychological Assessment. Program and abstracts (pp. 87–88). Thessaloniki, Greece: Aristotle University of Thessaloniki.

Veenman, M. V. J., Van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1, 3–14.

Veenman, M. V. J., & Verheij, J. (2003). Identifying technical students at risk: Relating general versus specific metacognitive skills to study success. Learning and Individual Differences, 13, 259–272.

Veenman, M. V. J., Wilhelm, P., & Beishuizen, J. J. (2004). The relation between intellectual and metacognitive skills from a developmental perspective. Learning and Instruction, 14, 89–109.

Volet, S. E. (1991). Modelling and coaching of relevant metacognitive strategies for enhancing university students' learning. Learning and Instruction, 1, 319–336.

Wang, M. C., Haertel, G. D., & Walberg, H. J. (1990). What influences learning? A content analysis of review literature. Journal of Educational Research, 84, 30–43.

Whitebread, D., Coltman, P., Pasternak, D. P., Sangster, C., Grau, V., Bingham, S., . . . Demetriou, D. (2009). The development of two observational tools for assessing metacognition and self-regulated learning in young children. Metacognition and Learning, 4, 63–85.

Winne, P. H. (2014). Issues in researching self-regulated learning as patterns of events. Metacognition and Learning, 9, 229–237.

Zimmerman, B. J. (2000). Attainment of self-regulation: A social cognitive perspective. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation, research, and applications (pp. 13–39). Orlando, FL: Academic Press.

Zimmerman, B. J., & Martinez-Pons, M. (1986). Development of a structured interview for assessing student use of self-regulated learning strategies. American Educational Research Journal, 23, 614–628.

Cite This Article as: Teachers College Record Volume 119 Number 13, 2017, p. 1–20
https://www.tcrecord.org ID Number: 21923

About the Author
  • Marcel Veenman
    Institute for Metacognition Research
MARCEL VEENMAN was formerly affiliated with the Universities of Amsterdam and Leiden for 28 years as a cognitive psychologist. Currently, he is director of the Institute for Metacognition Research. Besides coordinating research, he gives lectures and workshops for teachers in primary, secondary, and higher education. He has published over 100 scientific articles and book chapters on metacognition and self-regulation. From 2006 to 2011, he was founding editor of Metacognition and Learning, an international journal published by Springer.