
Cross-Country Generalizability of the Role of Metacognitive Knowledge in Students’ Strategy Use and Reading Competence


by Cordula Artelt & Wolfgang Schneider - 2015

Background/Context: Because metacognitive knowledge includes knowledge about adequate learning strategies, and because an effective use of learning strategies is associated with higher levels of performance, substantial relationships can be assumed between metacognitive knowledge, strategic behavior, and performance. However, such a pattern of results is rarely found in the research literature. In part, this may be due to inadequate indicators of strategy use.

Purpose of study: Prior research showed that high scores on self-reported strategy use were mirrored in high levels of performance only when students had sufficient metacognitive knowledge. To test the cross-country generalizability of the relationship between metacognitive knowledge, strategy use, and reading competence, we analyzed data from the PISA 2009 study, which included similar measures of metacognitive knowledge and of students’ strategy use.

Research Design: The study uses a cross-sectional correlational design. It draws on representative samples of fifteen-year-old students from 34 OECD countries taking part in the PISA 2009 study. The relations between students’ reading competence scores, their metacognitive knowledge, and their self-reported use of learning strategies were analyzed. We used correlation, mediation, and moderator regression analyses to predict students’ reading competence.

Findings/Results: Results showed consistently moderate to high correlations between metacognitive knowledge and reading competence. There were also lower, but still significant, relationships between strategy use and both reading competence and metacognitive knowledge. Testing a “mediator model” with strategy use as mediator yielded small but significant mediation effects. Assuming that metacognitive knowledge might be a necessary precondition for effective strategy use, the study also tested whether it served as a moderator. Results confirmed this moderator effect for many but not all countries. However, across all countries, there was a consistently high effect of metacognitive knowledge on reading competence, independent of the level of self-reported strategy use.

Conclusions/Recommendations: The results are very similar across countries. Taken together, the findings suggest that metacognitive knowledge as measured by a test tapping declarative, conditional and relational strategy knowledge is an important predictor of students’ reading competence, and contributes significantly to our understanding of what helps students to become better readers. Metacognitive knowledge captures the prerequisite of adaptive strategic processing of texts. Increasing students’ knowledge in this domain is a promising approach when it comes to fostering self-regulated reading.



It is well established that readers often do not construct coherent representations of text information (e.g., Collins Block & Pressley, 2002). In the process of reading a long text, keeping relevant information active in working memory requires readers to monitor the coherence of the evolving representations. If learning from texts is to be effective, it is important for learners to adopt an active role in their reading and learning by making inferences, filling gaps, and generating macrostructures and elaborations. Engaging in such strategic learning activities implies an awareness of text structure and how it facilitates comprehension, and it also involves an understanding of the differential processing demands of different kinds of tasks. Furthermore, it is important to use discourse and topic knowledge in a strategic way in order to identify relevant information, selectively reinstate previous text information, retrieve or reinstate information from long-term memory, or perform a combination of all three (e.g., Kintsch, 1998). Thus, students need to be aware of adequate learning strategies in order to process texts in a goal-oriented manner.

This knowledge about cognitive processes is a central element in the concept of metacognition. Although the literature on cognitive development has applied various conceptualizations of the term “metacognition,” it has usually been defined broadly as “any knowledge or cognitive activity that takes as its object, or regulates, any aspect of any cognitive enterprise” (Flavell, Miller, & Miller, 2002, p. 150). According to this conceptualization, metacognition refers to people’s knowledge about their own information-processing skills, the nature of cognitive tasks, and the strategies for coping with such tasks. Moreover, it also includes executive skills related to the monitoring and self-regulation of one’s own cognitive activities. Thus, during the process of reading a text, metacognition is related to three things: (a) reflection on the ongoing reading process (e.g., comprehension monitoring), (b) the strategic activities triggered by this reflection, and (c) the metacognitive knowledge base from which these activities are derived. Engaging in metacognitive processes is a salient feature of effective self-regulation. Within these processes, the activities of strategy selection and application are operationalized in an ongoing attempt to plan, check, monitor, select, revise, evaluate, and so forth.

Although there are substantial relations between the procedural (actual regulation) and declarative aspects (knowledge base) of metacognition, both from an analytical point of view and based on research findings on the development of these components, it seems worthwhile to distinguish between the two (see also Goswami, 2008; Hacker, Dunlosky, & Graesser, 2009; Schneider & Artelt, 2010; Schraw & Moshman, 1995). Metacognitive knowledge (i.e., the declarative component of metacognition) is stable in the sense that there is a more or less rich knowledge base of adequate strategies during reading that stems from former reading and learning experience. From early adolescence on, it is also stable in the sense that individual differences do not change much over time. On the other hand, however, the procedural component of metacognition is situated because the actual regulation of learning depends on the learners’ familiarity with the task as well as on their motivation and emotions. Individuals need to regulate their thoughts about which strategy they are using and adjust it to the situation in which it is being applied. Given that the selection and application of strategies during text comprehension depends not only on metacognitive knowledge but also on individual goals, standards, situational affordances, text difficulty, task demands, and so forth (Campione & Armbruster, 1985; see also Winne & Hadwin, 1998), it cannot be assumed that strategies will be applied whenever possible. However, an individual who uses a particular strategy intelligently ought to have some metacognitive knowledge of that strategy. In other words, there is a correlation between appropriate pieces of metacognitive knowledge and the effective use of strategies. Thus, metacognitive knowledge is necessary but insufficient for reflective and strategic learning. And reflective and strategic learning should, in turn, result in high levels of text comprehension.

Paris and Byrnes (1989; see also Brown, 1978) distinguished between declarative strategy knowledge (“knowing that”), procedural strategy knowledge (“knowing how”), and conditional strategy knowledge (“knowing when”). All three knowledge components are necessary in order to apply strategies effectively. Taking into account Borkowski’s metamemory model (Borkowski, Milstead, & Hale, 1988), it also seems worthwhile to look at students’ knowledge about the usefulness of a certain strategy in relation to other strategies, that is, their relational strategy knowledge. Relational strategy knowledge is particularly important when individuals have a repertoire of strategies at their disposal and have to decide which is most adequate. Aspects of conditional and relational strategy knowledge, in particular, are central components of the metacognitive knowledge measure used in the PISA 2009 assessment. As described below, the PISA measure of metacognitive knowledge captures these knowledge components by asking students explicitly to judge the appropriateness and (relative to other strategies) the quality of specific strategies for a given learning situation (Artelt, Beinecke, Schlagmüller, & Schneider, 2009).

STANDARDS FOR THE ASSESSMENT OF METACOGNITION

Although a strong relationship between metacognitive knowledge, high-quality strategy use, and text comprehension is theoretically plausible, it is rather demanding and difficult to demonstrate empirically. The relationship between metacognitive knowledge and the use of strategies (regulation of learning) in a concrete learning situation depends in part on situational variables; in addition, the indicators used to assess metacognitive knowledge and procedural metacognition need to fulfill certain standards in order to yield high correlations between metacognition and performance. Beyond the requirement of reliability (which represents a serious problem for many indicators of both declarative and procedural aspects of metacognition; see Schneider & Pressley, 1997), both declarative and procedural aspects of metacognition require a clear benchmark.

For the knowledge component of metacognition, which is usually assessed independently from any concrete learning situation (Cavanaugh & Perlmutter, 1982), the applicability of a benchmark criterion means that knowledge needs to be understood as correct, and that high scores on the knowledge measure do, in fact, indicate that an individual possesses adequate beliefs about the way in which task, strategy, and person variables as well as their interaction (if all these aspects of declarative metacognition are integrated) influence learning and memory. The benchmark criterion is applicable to most measures of metacognitive knowledge, as long as it is conceptualized in Flavell’s sense as correct knowledge. This applies, for example, to interview techniques such as those used in early studies of metacognitive knowledge by Kreutzer, Leonard, and Flavell (1975), because it is not only possible but also common practice within an interview context to verify the appropriateness of the cognitive and metacognitive strategies that students report. The benchmark criterion is also applicable when metacognitive knowledge is assessed via a test format (e.g., Best & Ornstein, 1986), because students’ item responses are either correct (adequate strategies) or false (inadequate strategies). This also applies to the test on metacognitive knowledge administered in the German national extension to the PISA 2000 assessment (see Artelt & Neuenhaus, 2010; Artelt, Schiefele, & Schneider, 2001; Schlagmüller & Schneider, 2007). Because the construction rationale for the metacognitive knowledge test in PISA 2000 (Germany) was also used for the international PISA 2009 assessment of metacognition, both the test and the expert standards used for coding students’ strategy choices will be described in more detail below.

Regarding the assessment of procedural metacognition, the applicability of a clear benchmark means that the indicator reflects the level of control or regulation (correct comprehension monitoring, selection of adequate strategies for the task at hand, and/or appropriate application of strategies). This is the case for most indicators of procedural metacognition described in the framework of monitoring and control activities during the main stages of remembering proposed by Nelson and Narens (1990). As concurrent measures, they are assessed just before, during, or just after a learning situation (Cavanaugh & Perlmutter, 1982), indicating either perceptions and appraisals in this situation (i.e., confidence judgments, judgments of learning) or the activities of metacognitive control (i.e., allocation of study time; see Dunlosky & Metcalfe, 2009; Pieschl, 2007; Schneider & Lockl, 2008). In each case, the higher the score on the indicator, the more accurate these perceptions and appraisals.

However, the criterion of a clear benchmark is no longer applicable when people are asked about their strategy use directly, for example, when questionnaires ask them to indicate how often they apply certain strategies—either in general or related to more or less broadly defined learning occasions in the past (e.g., Lernstrategien im Studium [LIST], Wild, Schiefele, & Winteler, 1992)—or how typical or true a certain strategic behavior is for them (e.g., the Learning and Study Strategies Inventory [LASSI], Weinstein, 1987; or the Motivated Strategies for Learning Questionnaire [MSLQ], Pintrich, Smith, Garcia, & McKeachie, 1991).

Self-report data on the frequency of strategy use cannot be evaluated as indicators of more or less rich metacognitive knowledge, nor can they be evaluated as indicators of adequate strategic behavior. Hence, it is not possible to assess either the appropriateness of the selection of strategies against the background of the specific demands of situations or the quality of application of the strategies reported (Artelt & Neuenhaus, 2010). In addition, it remains an open question whether people who are asked to indicate how often they use strategies really take the broad range of potential situations into account in order to deliver a valid report on their behavior during learning. Against the background of information-processing theory as well as the literature on judgment heuristics and social cognition, it is not very likely that this indicator mirrors behavior. It is more probable that simple heuristics or schemata are used when answering these questions (see also Artelt, 2000; Kahneman, 2011; Winne & Perry, 2000).

Hence, the benchmark criterion is not generally applicable to self-reported strategy use in more or less broadly defined learning situations, because the appropriateness of the strategy selection as well as the quality of the strategy used are unknown. It is unclear whether self-reports can be seen as valid indicators of strategic behavior.

METACOGNITION AND READING COMPETENCE IN PISA

The OECD-PISA 2009 assessment provides a unique database for studying the relationship between metacognitive knowledge, self-reported strategy use, and reading competence. Instruments tapping metacognitive knowledge and strategy use were included in the assessment, and the resulting indicators can be related to students’ reading competence (PISA’s combined reading literacy score, OECD, 2010a). Moreover, the internationally comparative design allows an estimation of the cross-country generalizability of the results. Given that results on metacognition, strategy use, and achievement vary as a function of the specific assessment type (e.g., self-report or test data, see above), it is necessary to specify the assessment approach taken by PISA in order to explain which patterns of results are to be expected. Therefore, a brief outline of the measures will be given, followed by theoretical arguments on the assumed relationships addressed in this research.

The authors of this paper constructed a short test of metacognitive knowledge (related to text comprehension) for the 2009 PISA assessment (Artelt et al., 2009) that allows the knowledge indicator to be interpreted as conditional and relational strategy knowledge (i.e., knowing which strategies to choose on what occasions and for which tasks). The indicator reflects students’ knowledge about the relative strengths and limitations of applying certain strategies relative to a given learning goal. The benchmark criterion was applicable, because the judgments were evaluated relative to those of experts (see below): the higher the scores on this metacognitive knowledge test, the more elaborated the metacognitive knowledge base of the students. In addition, three scales assessing habitual strategy use (control, elaboration, and memorization strategies) were administered. These scales used items tapping the general use of these strategies during learning. The benchmark criterion was not applicable to these self-reported data on (the frequency of) strategies used during learning (habitual strategy use), because neither the appropriateness of the strategy selection nor the quality of strategy use was known. Given that it is also unclear whether self-reports can be seen as valid indicators of strategic behavior, the theoretical as well as practical implications of the resulting scores were not clear-cut.

Metacognitive Knowledge and Reading Competence

When looking at the relationship between metacognitive knowledge and reading competence (reading literacy as measured in PISA), we expected a moderate to high correlation. Students who demonstrate a rich knowledge base about strategies, that is, knowledge about their effectiveness and appropriateness given specific situation, task, and text demands (as measured by the metacognitive knowledge test in PISA), will be likely to apply them when necessary. However, this does not mean that they will apply them whenever a situation calls for it. Other circumstances may lead to a decision not to invest effort in the given task (e.g., the task is too easy or far too difficult, or students are not motivated to engage in it). However, if students have metacognitive knowledge available, they know in principle how to use appropriate strategies and thus are likely to have accumulated a high level of proficiency via a regular, task-appropriate, and efficient way of applying them.

Self-reported Strategy Use and Reading Competence

As outlined above, a positive effect of self-reported habitual strategy use (as it is also measured within the PISA assessment) can be assumed when (a) self-reports are valid indicators of the habitual strategic behavior; (b) the strategies are appropriate for the specific demands (e.g., whereas elaboration strategies are not always the best approach for memorization tasks, memorization strategies are not appropriate when it is assumed that readers build a situational [mental] model of the text); and (c) the strategies are carried out correctly (e.g., summarizing or underlining the most important ideas of a text should lead to a significant reduction of the text volume). Given that the indicators for self-reported habitual use of strategies are plagued with the problem of ambiguous meaning, that is, the lack of a benchmark and the often-cited validity problems, the relationship between self-reported strategy use and reading competence will not necessarily be as pronounced as that between metacognitive knowledge and reading competence. Furthermore, even if students’ strategy reports are valid, it is not clear whether the relationship between strategy use and performance is linear. There is at least some doubt as to whether students who report using memorization, elaboration, or control strategies whenever possible, actually are high-performing students. Using strategies has to do with choices, and there are many situations in which in-depth reading (or extensive rehearsal) strategies are not adequate or at least not efficient.

Metacognitive Knowledge and Self-reported Strategy Use

In a concrete learning situation, the application of strategies depends on several factors, including the metacognitive knowledge base that enables individuals to make well-justified decisions about which strategies to apply. As mentioned above, a person who uses a particular strategy intelligently ought to have some metacognitive knowledge of that strategy. When looking at the relationship aggregated over different learning episodes, and taking into account that the indicator of strategy use is based on self-reports (as in the PISA 2009 assessment), this should result in a moderately positive correlation between metacognitive knowledge and self-reported strategy use. To the (unknown) degree that the indicator of strategy use reflects intelligent use of strategies (appropriate for the specific demands and carried out correctly), and to the degree that it can be regarded as a valid indicator of habitual strategic behavior, correlations should be higher. However, because we do not have information on these properties, it seems reasonable to assume a lower overall relationship between self-reported strategy use and metacognitive knowledge.

Knowledge, Strategies, and Performance

Metacognitive knowledge is assumed to exert an effect on reading competence via the use of strategies. Thus, strategy use should be a mediator. However, given that the relationship between habitual use of strategies and performance is complex (Can a high-quality application of strategies be assumed from self-reported data? Are linear relationships between strategy use and performance likely? Are indicators of habitual strategy use a valid indicator of this kind of behavior?), we assume that there will be only a partial mediator effect. Over and above the mediator effect, there should still be a direct effect of metacognitive knowledge on performance. Alternatively, it can also be assumed that metacognitive knowledge is a prerequisite (probably even a necessary condition) for choosing appropriate strategies for the task at hand and their high-quality application. This would result in a moderator effect: The relationship between strategy use and performance should be strong when students have metacognitive knowledge at their disposal, whereas when their metacognitive knowledge is low, the relationship between strategy use and performance should be substantially lower.

Cross-country Generalizability

Without specific insight into possible culture-specific patterns and mechanisms, there does not seem to be reason to assume that the structural relations described above differ across countries: Within each of the 34 OECD countries, higher degrees of metacognitive knowledge (and to a certain degree also strategy use) should be associated with higher degrees of reading competence. However, cultural patterns and tendencies (Bempechat, Jimenez, & Boulay, 2002; Heine, Lehman, Markus, & Kitayama, 1999; Van de Vijver & Leung, 1997) are assumed to have an impact on the level of self-reported strategy use, leading to problems when comparing absolute values on the different scales across countries (Artelt, Baumert, Julius-McElvany, & Peschar, 2003). For the level of metacognitive knowledge—as for the indicator of students’ reading competence—however, no effect of cultural bias needs to be assumed. Given that level comparisons of the strategy-use indicator are not the subject of this article, and also for the sake of correct interpretations of the moderator effect, the values of strategy use were standardized within each country, so that the values of the moderator are interpretable relative to the country-specific mean. Taken together, because we know of no reason to assume that the structural relationship between metacognitive knowledge, strategy use, and reading competence varies between countries, we do not assume specific cross-country differences.

METHOD


PARTICIPANTS

The analysis was conducted on data from the international PISA 2009 assessment. For the sake of clarity, results were analyzed only for the 34 OECD countries participating in PISA. Across these countries, 298,454 fifteen-year-old students participated in 34 national samples. Due to differences in the sampling frame (e.g., because of different regions, school systems, or school types) and because of national sample extensions, sample sizes varied between 3,664 students in Iceland and 38,250 in Mexico (OECD, 2012). Based on the sampling frame, however, student sample weights were generated by the international consortium and applied in the analyses presented in this article in order to represent the reference population of 15-year-olds within each country correctly (OECD, 2012). The database thus offers representative samples of the population of 15-year-olds enrolled in schools for each participating (here: OECD) country.

MEASURES

Metacognitive Knowledge

PISA’s approach to measuring metacognition about reading and text comprehension treats students’ knowledge base about learning strategies for reading as the core assessment component. It is based on a procedure already used successfully in the German national extension to the PISA 2000 assessment (Artelt et al., 2001; Schlagmüller & Schneider, 2007). The final version of the test for the PISA 2009 assessment presented two different reading scenarios (short vignettes) to students. For each scenario, students were asked to evaluate the quality and usefulness of a set of reading and text comprehension strategies (six in the first scenario, five in the second) for reaching the intended learning or memory goal. A decision about whether students’ answers can be regarded as indicators of their knowledge requires a prior decision about which answers are correct and which are incorrect. This means that the superiority or inferiority of items in each (theoretically meaningful) pair of strategies had to be taken as a benchmark against which the ratings of the students could be evaluated. For all relevant pairwise comparisons from the two scenarios, an expert survey with 68 experts from 42 countries was conducted within the PISA 2009 field trial. Only the 17 comparisons (i.e., judgments about the superiority of one strategy over the other in a theoretically relevant pair) on which at least 80% of the experts agreed were included in the final assessment (Artelt et al., 2009). The correspondence between the rankings of experts and students was thus reflected in a metacognition score ranging from zero (no correspondence with the expert ratings) to 17 (full correspondence with the expert ratings). This score indicates the degree to which students were aware of the best ways of storing text information and of reaching memory and comprehension goals. In order to achieve high scores on the metacognition test, students had to activate knowledge about cognitive resources, the nature of the task, and strategies that facilitate comprehension, remembering, and recall. The criterion of a benchmark (i.e., high vs. low knowledge) was applicable for the resulting indicator.1 The items used for the two scenarios are presented in Figure 1.

Figure 1. Assessment of Metacognitive Knowledge Using Two Scenarios

Scenario 1

Reading task: You have to understand and remember the information in a text.

How do you rate the usefulness of the following strategies for understanding and memorizing the text? (Each strategy is rated on a six-point scale from 1 = not useful at all to 6 = very useful.)

A. I concentrate on the parts of the text that are easy to understand
B. I quickly read through the text twice
C. After reading the text, I discuss its content with other people
D. I underline important parts of the text
E. I summarize the text in my own words
F. I read the text aloud to another person

Scenario 2

Reading task: You have just read a long and rather difficult two-page text about fluctuations in the water level of a lake in Africa. You have to write a summary.

How do you rate the usefulness of the following strategies for writing a summary of this two-page text? (Rated on the same six-point scale.)

A. I write a summary. Then I check that each paragraph is covered in the summary, because the content of each paragraph should be included
B. I try to copy out accurately as many sentences as possible
C. Before writing the summary, I read the text as many times as possible
D. I carefully check whether the most important facts in the text are represented in the summary
E. I read through the text, underlining the most important sentences. Then I write them in my own words as a summary



The total possible score for the indicator of metacognitive knowledge was 17 points. In the first scenario, nine points could be scored if each of the alternatives C, D, and E was rated as more useful than each of the alternatives A, B, and F. In the second scenario, eight additional points could be scored. The maximum score was attained if each of the alternatives A, C, D, and E was judged as more useful than B; if D and E were each judged as more useful than C; and if D and E were each judged as more useful than A.
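To make this scoring rule concrete, the following minimal Python sketch (our own illustration, not the operational PISA scoring code; names are hypothetical) computes the 0–17 score from a student's ratings, assuming that a pairwise comparison only earns a point when the better strategy is rated strictly higher:

```python
# Hypothetical illustration of the pairwise scoring rule described above.
# Ratings are a student's usefulness judgments (1-6), keyed by strategy letter.

# Scenario 1: each of C, D, E must beat each of A, B, F (9 comparisons).
SCENARIO_1_PAIRS = [(good, bad) for good in "CDE" for bad in "ABF"]
# Scenario 2: A, C, D, E beat B; D, E beat C; D, E beat A (8 comparisons).
SCENARIO_2_PAIRS = ([(good, "B") for good in "ACDE"]
                    + [(good, "C") for good in "DE"]
                    + [(good, "A") for good in "DE"])

def metacognition_score(ratings_1: dict, ratings_2: dict) -> int:
    """Number of expert-validated pairwise comparisons (0-17) that a
    student's ratings reproduce; ties earn no point (our assumption)."""
    score = sum(ratings_1[g] > ratings_1[b] for g, b in SCENARIO_1_PAIRS)
    score += sum(ratings_2[g] > ratings_2[b] for g, b in SCENARIO_2_PAIRS)
    return score

# Example: ratings consistent with all 17 expert judgments yield the maximum.
s1 = {"A": 2, "B": 3, "C": 5, "D": 6, "E": 6, "F": 1}
s2 = {"A": 4, "B": 1, "C": 3, "D": 6, "E": 5}
print(metacognition_score(s1, s2))  # -> 17
```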

Strategy Use

In PISA 2009—as in the former PISA assessments (see Artelt et al., 2003)—learning strategies were assessed on the basis of students’ self-reports: Two indices related to cognitive strategies (memorization/rehearsal and elaboration strategies) and one related to metacognitive activity (control strategies) were created, each consisting of four or five items (OECD, 2010b). For each of the three indices, students had to indicate on a four-point scale (almost never, sometimes, often, almost always) how often they did what was described in the items. A sample item for elaboration strategies is: “When I study, I figure out how the information might be useful outside school.” The items forming the three strategy indices are presented in the appendix. Data from the PISA 2000 assessment indicated that these indicators were highly reliable within countries and that structural equivalence (factorial invariance) among the three constructs could be assumed (see Artelt et al., 2003), thereby indicating that the constructs could be interpreted meaningfully within each country. The index for each type of strategy was constructed and transformed so that the mean value across all OECD countries was zero and the standard deviation was one. Hence, values both within and between countries can be interpreted relative to the OECD mean. Note, however, that due to possible cultural differences (in, e.g., response styles, standards of comparison, conscientiousness, or modesty rules), scalar equivalence (Van de Vijver & Leung, 1997) for these kinds of items (at least for those measuring control strategies and elaboration strategies) cannot be taken for granted, and it was not confirmed using data from the international PISA 2000 assessment (Artelt et al., 2003, pp. 87–88). This implies that the raw scores (regardless of the transformation applied) are unlikely to have the same meaning in each country. For the purpose of this article, we thus do not report mean-level country differences for the indicators of self-reported strategy use (nor for metacognitive knowledge, although this would have been possible). Furthermore, we did not use the metric provided by the international PISA consortium, but applied a country-specific z transformation, resulting in a mean of zero and a standard deviation of one for each participating OECD country. This was done primarily because standardization was necessary to compute the moderator regressions (reported below).
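The country-specific z transformation itself is straightforward; a minimal sketch (assuming a pandas DataFrame with hypothetical columns `country` and `cont_strat`, and ignoring the student sample weights for brevity) might look like this:

```python
import pandas as pd

def z_within_country(df: pd.DataFrame, col: str) -> pd.Series:
    """Standardize a strategy index to mean 0 and SD 1 within each country,
    so that values are interpretable relative to the country-specific mean."""
    grouped = df.groupby("country")[col]
    return (df[col] - grouped.transform("mean")) / grouped.transform("std")

# Example usage (hypothetical column names):
# df["cont_strat_z"] = z_within_country(df, "cont_strat")
```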

Reading Competence

The dependent variable for the analyses presented below is OECD PISA’s combined reading literacy score, reflecting students’ competence in accessing and inferring information, forming a coherent interpretation, and reflecting upon the form and content of authentic reading material (OECD, 2010a). The metric of the indicator was established in PISA 2000 (OECD, 2001), with a mean of 500 and a standard deviation of 100 for students in the OECD countries in the year 2000. Through item-anchoring techniques and the application of item response models (OECD, 2012), the metric for the PISA 2009 reading test, whose item pool partially overlaps with that of PISA 2000, can be mapped onto the original scale, resulting in scores that indicate the relative change compared to performance in PISA 2000.

RESULTS


The bivariate relations between metacognitive knowledge, strategy use, and reading competence are depicted in Table 1. Despite some variation in the coefficients across countries, the general picture that emerges is clear. As expected, we found moderate to high correlations between metacognitive knowledge and reading competence, ranging from r = .37 in Greece to r = .60 in Switzerland and Belgium, with an OECD mean of r = .48 (see Table 1, Column 2). Relationships between strategy use and reading competence (Columns 6, 7, and 8) were much lower. Correlations with reading competence for the whole OECD sample of r = .02 (for elaboration strategies) and r = −.02 (for memorization strategies) indicated that not much variance in the reading competence index could be explained by students’ use of these strategies. This pattern was also found in the within-country correlations. For the elaboration scale, coefficients varied from r = −.09 in Israel to r = .31 in Korea; for the memorization scale, from r = −.25 in the Netherlands to r = .31 in Korea. Memorization strategies in particular produced ambivalent results, with 14 countries revealing correlations below r = −.05 and 11 countries above r = .05. However, a closer look at Table 1 shows that Korea revealed especially high positive correlations between students’ use of memorization strategies (as well as elaboration strategies) and reading competence. Without Korea, the highest correlations for memorization and reading did not exceed r = .10 (in France, Luxembourg, Australia, and Sweden), and the maximum correlation between elaboration strategies and reading was r = .23 (Portugal). Regarding the index of control strategies, overall higher relationships with reading competence were found, ranging from r = .17 (Austria and Hungary) to r = .45 in Korea. In this case, however, Korea did not seem to be an outlier, given that correlations above r = .40 were also found for students in France and Portugal. It seems that control strategies had an overall positive association with reading competence. Nevertheless, the magnitude of the relationship (r = .25, OECD mean) indicated that metacognitive knowledge was a far better predictor than the use of control strategies.
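As a computational aside, within-country coefficients of the kind shown in Table 1 can be approximated along the following lines (a sketch under simplifying assumptions: hypothetical column names, PISA's student weights applied, and the plausible-value methodology for the competence scores ignored for brevity):

```python
import numpy as np
import pandas as pd

def weighted_corr(x, y, w):
    """Pearson correlation of x and y under student sample weights w."""
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

def country_correlations(df: pd.DataFrame) -> pd.Series:
    """r between metacognitive knowledge and reading competence, per country."""
    return df.groupby("country").apply(
        lambda g: weighted_corr(g["metacog"], g["reading"], g["weight"]))
```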

Table 1. Correlations between Metacognitive Knowledge, Strategy Use, and Reading Competence

| Country | MK × read | MK × elab | MK × mem | MK × cont | read × elab | read × mem | read × cont |
| Australia | 0.53 | 0.13 | 0.13 | 0.36 | 0.11 | 0.10 | 0.39 |
| Austria | 0.55 | 0.03 | −0.07 | 0.21 | 0.02 | −0.09 | 0.17 |
| Belgium | 0.60 | 0.04 | −0.07 | 0.31 | 0.00 | −0.13 | 0.30 |
| Canada | 0.43 | 0.05 | 0.06 | 0.30 | 0.03 | 0.04 | 0.33 |
| Chile | 0.52 | 0.03 | −0.01 | 0.22 | 0.07 | 0.06 | 0.30 |
| Czech Republic | 0.57 | 0.13 | −0.07 | 0.29 | 0.13 | −0.07 | 0.30 |
| Denmark | 0.54 | 0.17 | 0.01 | 0.26 | 0.10 | −0.10 | 0.21 |
| Estonia | 0.51 | 0.05 | −0.02 | 0.17 | 0.10 | −0.07 | 0.18 |
| Finland | 0.54 | 0.13 | 0.10 | 0.31 | 0.15 | 0.03 | 0.30 |
| France | 0.53 | 0.08 | 0.05 | 0.31 | 0.07 | 0.10 | 0.41 |
| Germany | 0.57 | 0.01 | −0.03 | 0.24 | 0.01 | −0.05 | 0.25 |
| Greece | 0.37 | 0.10 | −0.01 | 0.20 | 0.13 | 0.06 | 0.30 |
| Hungary | 0.55 | 0.01 | 0.01 | 0.16 | 0.00 | 0.03 | 0.17 |
| Iceland | 0.48 | 0.11 | 0.03 | 0.25 | 0.11 | −0.01 | 0.28 |
| Ireland | 0.49 | 0.08 | 0.02 | 0.27 | 0.07 | 0.08 | 0.31 |
| Israel | 0.49 | −0.08 | −0.08 | 0.14 | −0.09 | −0.06 | 0.21 |
| Italy | 0.51 | 0.04 | −0.08 | 0.25 | 0.06 | −0.10 | 0.29 |
| Japan | 0.54 | 0.11 | 0.04 | 0.26 | 0.18 | 0.07 | 0.35 |
| Korea | 0.56 | 0.19 | 0.22 | 0.31 | 0.31 | 0.31 | 0.45 |
| Luxembourg | 0.53 | 0.05 | 0.11 | 0.31 | −0.02 | 0.10 | 0.26 |
| Mexico | 0.47 | 0.05 | −0.04 | 0.19 | 0.07 | 0.00 | 0.25 |
| Netherlands | 0.57 | 0.07 | −0.13 | 0.29 | 0.02 | −0.25 | 0.27 |
| New Zealand | 0.52 | 0.08 | 0.06 | 0.33 | 0.02 | 0.03 | 0.34 |
| Norway | 0.50 | 0.18 | 0.09 | 0.30 | 0.19 | 0.04 | 0.28 |
| Poland | 0.49 | 0.04 | 0.02 | 0.22 | 0.08 | 0.08 | 0.30 |
| Portugal | 0.57 | 0.19 | −0.06 | 0.34 | 0.23 | −0.03 | 0.42 |
| Slovak Republic | 0.51 | 0.12 | −0.12 | 0.28 | 0.10 | −0.22 | 0.27 |
| Slovenia | 0.53 | 0.05 | −0.15 | 0.23 | 0.05 | −0.17 | 0.26 |
| Spain | 0.46 | 0.05 | 0.02 | 0.28 | 0.11 | 0.06 | 0.33 |
| Sweden | 0.54 | 0.15 | 0.07 | 0.28 | 0.13 | 0.10 | 0.27 |
| Switzerland | 0.60 | 0.02 | −0.03 | 0.29 | 0.01 | −0.06 | 0.28 |
| Turkey | 0.44 | 0.06 | −0.14 | 0.14 | 0.12 | −0.17 | 0.24 |
| United Kingdom | 0.48 | 0.02 | 0.03 | 0.25 | 0.05 | 0.01 | 0.28 |
| United States | 0.44 | 0.10 | 0.04 | 0.28 | 0.00 | −0.05 | 0.27 |
| OECD mean | 0.48 | 0.06 | 0.01 | 0.25 | 0.02 | −0.02 | 0.25 |

Note. MK = metacognitive knowledge; read = reading competence; elab = elaboration strategies; mem = memorization strategies; cont = control strategies; × denotes the correlation between the two variables.

Table 1 also reports the bivariate relations between metacognitive knowledge and the (self-reported) use of strategies. As can be seen in the last row (Columns 3, 4, and 5), the association between knowledge and strategy use was rather weak—particularly for elaboration (r = .06) and memorization strategies (r = .01). The highest relationships between knowledge and the use of elaboration strategies were found for Korea and Portugal (both r = .19), and the lowest (even slightly negative) for Israel (r = −.08). Within countries, too, the interrelation between memorization strategies and metacognitive knowledge was rather low, and even negative in four countries with r < −.10 (Slovenia, Turkey, the Netherlands, and the Slovak Republic). In Korea, on the other hand, there was a moderately positive relation (r = .22). Again, however, Korea seemed to be an outlier, because the next highest correlation was r = .13 (Australia). For the last dimension of strategies—control strategies—the relationship to students’ metacognitive knowledge was somewhat higher: Overall, students’ use of control strategies shared 6% of the variance with their metacognitive knowledge (r = .25, OECD mean). Once again, correlations varied between countries, ranging from r = .14 in Israel and Turkey to r = .36 in Australia.

MEDIATOR MODEL

The results presented in Table 1 indicate that it was unlikely that the hypothesized mediator effect would be found for all three types of strategy use. Therefore, we limited our test of the mediator assumption to examining the effect of metacognitive knowledge on reading competence via students’ use of control strategies. Figure 2 displays the results of this model as computed for all students from OECD countries in the sample. Figure 3 shows the relevant results of the mediator models broken down by country.



Figure 2. Effects of the Mediator Model of Metacognitive Knowledge, Strategy Use, and Reading Competence.







Note. Single line arrows indicate the bivariate effects; double line arrows, the effect of the mediator when controlling for the predictor.



Figure 3. Results of the Mediator Model: Standardized Regression Coefficients by Country.




In line with the criteria introduced by Baron and Kenny (1986), we can speak of a small, partially mediated effect of metacognitive knowledge on reading competence. All bivariate effects were significant, and the effect of the predictor (metacognitive knowledge) on the dependent variable decreased after controlling for the effect of the mediator (students’ use of control strategies). The Sobel test was significant, indicating that the mediator effect was—at least—different from zero. The pattern of results for the whole sample was mirrored to a great extent within each of the 34 OECD countries (see Figure 3). As can be inferred by comparing the two bars for the effect of metacognitive knowledge on reading competence (the first indicating the bivariate effect and the second controlling for students’ use of [control] strategies), a small amount of the effect of metacognitive knowledge on reading competence was mediated by students’ use of control strategies. This could be inferred from the finding that the effect decreased (in every country) when the effect of the mediator was estimated simultaneously. In each case, the Sobel test confirmed that a certain amount of the effect of metacognitive knowledge on reading competence was in fact mediated by students’ use of control strategies.

However, for the effects within countries (Figure 3) as well as across countries (Figure 2), the regression coefficients seemed to support another interpretation, namely, that metacognitive knowledge was a mediator of the effect of strategy use on reading competence. In all countries, the effect of students’ use of control strategies decreased considerably when the effect of metacognitive knowledge was estimated simultaneously (from β = .25 to β = .14 for the whole sample, see Figure 2; difference between the first two bars by country in Figure 3). This general pattern of results (including a significant Sobel test for metacognitive knowledge as mediator) showed that there seemed to be more reason to assume that metacognitive knowledge mediated the effect of students’ use of control strategies on reading competence rather than vice versa. From a theoretical point of view, however, the idea that the use of strategies exerts an effect on reading competence through students’ metacognitive knowledge is not very plausible. It might, however, be the case that this pattern indicates yet another effect.
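For readers who wish to retrace this analysis, the Baron and Kenny (1986) steps and the Sobel test can be sketched as follows (a simplified illustration with hypothetical array names; the published analyses additionally use PISA's sample weights and plausible values):

```python
import numpy as np
import statsmodels.api as sm

def baron_kenny_sobel(x, m, y):
    """x = predictor (metacognitive knowledge), m = mediator (control
    strategies), y = outcome (reading competence); all z-standardized."""
    # Path a: predictor -> mediator
    fit_a = sm.OLS(m, sm.add_constant(x)).fit()
    a, se_a = fit_a.params[1], fit_a.bse[1]
    # Direct path c' and path b: predictor and mediator -> outcome
    fit_by = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
    c_prime, b, se_b = fit_by.params[1], fit_by.params[2], fit_by.bse[2]
    # Sobel z for the indirect effect a*b
    sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return {"indirect": a * b, "direct": c_prime, "sobel_z": sobel_z}
```

Swapping x and m in this function tests the reversed model discussed above.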

MODERATOR MODEL

Considering the validity problems related to the indicator of students’ strategy use, and also the fact that the indicator provided information on neither the quality of execution of strategies nor the quality of strategy selection for the task at hand, it is likely that the effect of strategy use on reading competence was moderated by students’ metacognitive knowledge. That is, positive effects are more likely to be found when students have a rich knowledge base at their disposal. If students’ metacognitive knowledge is low, however, it is more likely that the selected strategies are either not well chosen or poorly executed. Thus, it can be assumed that metacognitive knowledge is a necessary prerequisite for the effective use of strategies.

To test whether metacognitive knowledge served as such a moderator, we conducted a moderator regression within each country using (country-specific) standardized scores for both predictors (i.e., metacognitive knowledge and use of control strategies) as well as for the product term for testing the interaction effect (moderator effect) of these two variables on reading competence (see Table 2).
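A minimal sketch of this specification (hypothetical column names; weights and plausible values again omitted for brevity) is:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def moderator_regression(g: pd.DataFrame):
    """Reading competence regressed on z-standardized metacognitive
    knowledge, z-standardized control-strategy use, and their product."""
    mk = (g["metacog"] - g["metacog"].mean()) / g["metacog"].std()
    cs = (g["cont_strat"] - g["cont_strat"].mean()) / g["cont_strat"].std()
    X = sm.add_constant(np.column_stack([mk, cs, mk * cs]))
    return sm.OLS(g["reading"], X).fit()

# fit.params holds the constant, B(knowledge), B(control strategies), and
# B(interaction); a positive interaction coefficient means the strategy-use
# slope increases with the level of metacognitive knowledge.
```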

Table 2. Effects of the Moderator Regression by Country

| Country | Constant | B (Predictor I) | SE | B (Predictor II) | SE | B (I × II) | SE | ΔR² |
| Australia | 517.14 | 41.69 | 0.17 | 21.81 | 0.17 | 3.38 | 0.15 | 0.00 |
| Austria | 475.19 | 51.18 | 0.28 | 6.93 | 0.29 | 4.78 | 0.25 | 0.00 |
| Belgium | 512.98 | 52.70 | 0.23 | 12.91 | 0.24 | 6.86 | 0.20 | 0.01 |
| Canada | 525.20 | 32.01 | 0.13 | 19.50 | 0.14 | 2.85 | 0.12 | 0.00 |
| Chile | 451.03 | 37.35 | 0.14 | 15.52 | 0.14 | 2.79 | 0.13 | 0.00 |
| Czech Republic | 481.63 | 45.57 | 0.22 | 13.86 | 0.22 | 5.24 | 0.20 | 0.00 |
| Denmark | 495.81 | 41.74 | 0.29 | 6.92 | 0.30 | 3.74 | 0.24 | 0.00 |
| Estonia | 501.92 | 39.04 | 0.61 | 8.14 | 0.62 | 1.36 | 0.55 | 0.00 |
| Finland | 536.57 | 41.16 | 0.29 | 11.96 | 0.29 | 1.10 | 0.25 | 0.00 |
| France | 501.54 | 43.61 | 0.11 | 26.53 | 0.11 | 0.40 | 0.09 | 0.00 |
| Germany | 504.82 | 48.23 | 0.09 | 11.38 | 0.09 | 5.20 | 0.08 | 0.00 |
| Greece | 483.22 | 29.39 | 0.28 | 22.22 | 0.28 | 3.34 | 0.24 | 0.00 |
| Hungary | 494.56 | 47.01 | 0.23 | 8.64 | 0.23 | 3.73 | 0.20 | 0.00 |
| Iceland | 502.19 | 40.58 | 1.23 | 15.36 | 1.26 | 3.61 | 1.08 | 0.00 |
| Ireland | 500.22 | 38.47 | 0.35 | 16.54 | 0.36 | 0.01 | 0.30 | 0.00 |
| Israel | 481.35 | 48.79 | 0.29 | 14.06 | 0.29 | −0.10 | 0.27 | 0.00 |
| Italy | 487.13 | 43.32 | 0.11 | 16.59 | 0.12 | 3.04 | 0.09 | 0.00 |
| Japan | 521.46 | 45.73 | 0.08 | 20.64 | 0.08 | −1.65 | 0.07 | 0.00 |
| Korea | 540.24 | 34.90 | 0.08 | 22.23 | 0.08 | −1.67 | 0.07 | 0.00 |
| Luxembourg | 475.78 | 48.22 | 1.24 | 11.28 | 1.28 | 3.33 | 1.07 | 0.00 |
| Mexico | 427.08 | 35.08 | 0.06 | 14.07 | 0.06 | 4.24 | 0.06 | 0.00 |
| Netherlands | 510.10 | 45.68 | 0.17 | 12.60 | 0.17 | 7.73 | 0.15 | 0.01 |
| New Zealand | 523.18 | 44.90 | 0.37 | 18.89 | 0.38 | 3.25 | 0.33 | 0.00 |
| Norway | 504.66 | 39.69 | 0.33 | 13.28 | 0.34 | 1.53 | 0.28 | 0.00 |
| Poland | 502.16 | 37.60 | 0.11 | 17.31 | 0.11 | 1.54 | 0.10 | 0.00 |
| Portugal | 489.42 | 40.31 | 0.22 | 21.48 | 0.22 | 2.56 | 0.20 | 0.00 |
| Slovak Republic | 478.80 | 40.67 | 0.29 | 12.98 | 0.30 | 2.89 | 0.26 | 0.00 |
| Slovenia | 485.20 | 42.50 | 0.54 | 14.32 | 0.56 | 6.31 | 0.49 | 0.01 |
| Spain | 482.33 | 33.29 | 0.12 | 19.46 | 0.13 | 2.17 | 0.10 | 0.00 |
| Sweden | 500.74 | 46.46 | 0.24 | 11.30 | 0.25 | 0.39 | 0.23 | 0.00 |
| Switzerland | 499.95 | 51.07 | 0.26 | 11.34 | 0.27 | 6.52 | 0.23 | 0.01 |
| Turkey | 465.80 | 32.29 | 0.08 | 13.77 | 0.08 | 0.54 | 0.08 | 0.00 |
| United Kingdom | 496.19 | 39.46 | 0.10 | 16.26 | 0.10 | 3.59 | 0.09 | 0.00 |
| United States | 499.70 | 36.94 | 0.05 | 15.80 | 0.05 | 7.25 | 0.04 | 0.01 |

Note. Predictor I = metacognitive knowledge; Predictor II = use of control strategies; B = unstandardized regression coefficient; SE = standard error; ΔR² = change in explained variance (relative to the model without the interaction term).

When interpreting the effect, it has to be considered that testing a moderator with moderated regression is plagued by the problem of the cumulated effects of the unreliability of the predictors (Aiken & West, 1991). However, given the large sample sizes within PISA, even very small effects become statistically significant. As Table 2 shows, the interaction term was significant in almost all countries (except for Sweden, Israel, and Ireland). When also taking into account the size of the effect of the interaction term as well as the change in R² (indicating the difference between the models with and without the interaction term), it became clear that the moderator effect was not very pronounced. In fact, the additional amount of variance explained after the interaction term entered the regression equation was often less than 1%. However, there was also some variation among countries.

To illustrate the direction of the interaction effect, we dichotomized the metacognitive knowledge indicator using an absolute criterion (scores above the 50th percentile of all OECD students were classified as high metacognitive knowledge; scores below the 50th percentile as low) and then compared the correlations between strategy use and reading competence for these two groups, as in the sketch below. Given the reduction of variance in the metacognition score, we did not expect the effects of this comparison to correspond perfectly with those of the moderator regression analysis; the pattern of results was nevertheless similar. We found a large group of countries in which the relationship between students’ use of control strategies and their reading competence was significantly higher when they had a rich knowledge base about strategies for text understanding and memory at their disposal, as opposed to the group of students with a low level of metacognitive knowledge. The difference between the correlation coefficients was most pronounced in Belgium, Switzerland, Slovenia, Germany, the Netherlands, the Czech Republic, Hungary, and Mexico (see Table 3). These countries also showed comparatively high interaction effects (cf. Table 2, Column 7). However, there were also countries in which the opposite effect could be observed: In Japan and Korea, for instance, control strategies correlated more highly with reading competence in the group of students with low metacognitive knowledge than in the group with higher levels.
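A sketch of this illustration (hypothetical column names; unweighted correlations for brevity; students exactly at the median are assigned to the high group here, an arbitrary choice):

```python
import numpy as np
import pandas as pd

def split_correlations(df: pd.DataFrame) -> pd.Series:
    """r between control-strategy use and reading competence, separately for
    students above vs. below the OECD-wide median of metacognitive knowledge."""
    median = df["metacog"].median()  # 50th percentile of all OECD students
    group = np.where(df["metacog"] >= median, "meta high", "meta low")
    return df.groupby(["country", group]).apply(
        lambda g: g["cont_strat"].corr(g["reading"]))
```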

Table 3. Correlation Between Strategy Use and Reading Competence for Students With High or Low Metacognitive Knowledge, and Mean Levels of Reading Competence for Different Student Groups

| Country | r (meta low) | r (meta high) | M: meta low, strat low | M: meta low, strat high | M: meta high, strat low | M: meta high, strat high |
| Australia | 0.29 | 0.31 | 457 | 500 | 531 | 572 |
| Austria | 0.06 | 0.14 | 422 | 435 | 507 | 526 |
| Belgium | 0.09 | 0.27 | 451 | 468 | 539 | 570 |
| Canada | 0.25 | 0.28 | 475 | 511 | 529 | 569 |
| Chile | 0.23 | 0.26 | 408 | 434 | 472 | 502 |
| Czech Republic | 0.16 | 0.27 | 433 | 456 | 508 | 538 |
| Denmark | 0.11 | 0.16 | 452 | 461 | 519 | 537 |
| Estonia | 0.15 | 0.13 | 453 | 470 | 522 | 537 |
| Finland | 0.23 | 0.19 | 485 | 514 | 560 | 581 |
| France | 0.37 | 0.31 | 429 | 487 | 513 | 558 |
| Germany | 0.09 | 0.23 | 444 | 461 | 526 | 554 |
| Greece | 0.27 | 0.25 | 443 | 474 | 492 | 527 |
| Hungary | 0.08 | 0.18 | 444 | 455 | 515 | 541 |
| Iceland | 0.20 | 0.24 | 451 | 474 | 522 | 547 |
| Ireland | 0.25 | 0.23 | 447 | 488 | 516 | 543 |
| Israel | 0.19 | 0.13 | 421 | 449 | 508 | 530 |
| Italy | 0.23 | 0.22 | 429 | 465 | 508 | 535 |
| Japan | 0.33 | 0.23 | 448 | 505 | 536 | 571 |
| Korea | 0.43 | 0.33 | 488 | 534 | 555 | 586 |
| Luxembourg | 0.14 | 0.19 | 427 | 446 | 504 | 530 |
| Mexico | 0.16 | 0.26 | 386 | 406 | 440 | 473 |
| Netherlands | 0.12 | 0.26 | 459 | 476 | 530 | 561 |
| New Zealand | 0.23 | 0.26 | 463 | 497 | 542 | 581 |
| Norway | 0.22 | 0.20 | 457 | 485 | 522 | 548 |
| Poland | 0.25 | 0.24 | 454 | 487 | 519 | 553 |
| Portugal | 0.34 | 0.33 | 429 | 470 | 500 | 542 |
| Slovak Republic | 0.21 | 0.20 | 434 | 458 | 498 | 525 |
| Slovenia | 0.14 | 0.28 | 439 | 458 | 504 | 539 |
| Spain | 0.29 | 0.26 | 433 | 469 | 491 | 525 |
| Sweden | 0.20 | 0.16 | 448 | 474 | 529 | 553 |
| Switzerland | 0.09 | 0.25 | 443 | 458 | 524 | 555 |
| Turkey | 0.21 | 0.19 | 427 | 454 | 483 | 509 |
| United Kingdom | 0.20 | 0.25 | 446 | 470 | 507 | 543 |
| United States | 0.16 | 0.25 | 456 | 477 | 506 | 548 |

Note. r = correlation between use of control strategies and reading competence; M = mean reading competence; meta low/high = low/high metacognitive knowledge; strat low/high = infrequent/frequent use of control strategies.

Table 3 also reports the level of reading competence broken down for the four groups of students that result from combining either high or low metacognitive knowledge with either frequent or infrequent use of control strategies. A comparison of the mean levels of reading competence for these four groups (see Columns 4 to 7 in Table 3) revealed that the level of reading competence was highest in the group of students with high metacognitive knowledge, particularly when high knowledge was combined with frequent use of control strategies. Students with low metacognitive knowledge performed better when they used control strategies frequently. The poorest performing students were those with low scores on both metacognition and strategy use. These patterns of results, shown in Table 3, were similar across all countries in our sample. Each country revealed an increase in the level of reading competence across the four groups of students in line with the pattern illustrated in Figure 4 for the overall sample.

Figure 4. Level of reading competence in the four groups of students within the whole OECD sample



High levels of reading competence were more likely when students had metacognitive knowledge at their disposal. However, given that the effects of the moderator model were not as pronounced as expected and even took the opposite direction in some countries, it was inappropriate to assume that metacognitive knowledge is a necessary condition for the efficiency of students’ use of control strategies. Nevertheless, the highest performing students in the PISA reading competence assessment in all countries were those with above-average levels of metacognitive knowledge as well as self-reported strategy use. The lowest performing students were those with below-average scores on these two variables. For the remaining groups of students, those with above-average metacognitive knowledge and below-average strategy use outperformed those with above-average strategy use and below-average metacognitive knowledge, indicating the specific relevance of metacognitive knowledge as a predictor of reading competence within and across all countries.

DISCUSSION


Metacognition plays a central role in the process of reading (see also Pintrich, 2002; Pressley, 2002). It is assumed to be relevant for reflecting on the ongoing reading process as well as for selecting and applying strategies that are triggered by this reflection and derived from a metacognitive knowledge base. There is clear evidence for an age-dependent increase in the importance of declarative as well as procedural metacognitive knowledge for cognitive performance in general and reading in particular (Schneider, 2010). PISA 2009 administered indicators of self-reported strategy use as well as of declarative metacognitive knowledge. Based on the data of students from the 34 OECD countries participating in the study, we conclude that the knowledge component of metacognition is associated with students’ reading competence to a much higher degree than students’ habitual use of control, elaboration, or memorization strategies: For students’ habitual use of memorization as well as elaboration strategies, the overall correlations with reading competence were close to zero (r = −.02 and r = .02, respectively); for their self-reported use of control strategies, an overall correlation of r = .25 was found. The bivariate effect of students’ metacognitive knowledge on reading competence was r = .48 (ranging from r = .37 in Greece to r = .60 in Switzerland and Belgium). This picture of rather strong effects of metacognitive knowledge on reading compared to minor ones for self-reported strategy use was also mirrored in the results of the mediator model, which provided evidence for only a very small mediator effect (via the use of control strategies) of metacognitive knowledge on reading comprehension, but a strong direct effect.

When interpreting these results, it is important to bear in mind that the self-report indicator of strategy use has several limitations due to the conceptual weakness of the measure. We argued that the indicators of metacognitive knowledge and strategy use require a benchmark criterion before any inferences can be drawn about their effects on achievement and before correlations between metacognition and performance can be assessed with any validity (Artelt & Neuenhaus, 2010). Whereas this is the case for the newly developed indicator of metacognitive knowledge (primarily measuring relational and conditional strategy knowledge for understanding and remembering information from texts) used within this study, it remains unclear whether the strategies the students had in mind when reporting on the frequency of their use during learning in general (self-reported habitual strategy use) were chosen adequately and applied correctly. The benchmark criterion thus does not seem to be applicable in the case of self-reported strategy use in more or less broadly defined learning situations, mainly because the appropriateness of the strategy selection as well as the quality of strategy use is unknown. Furthermore, it is unclear whether self-reports can be seen as valid indicators of strategic behavior (see also Winne & Perry, 2000). Many of these limitations were known before, and thus we did not expect to find strong correlations between strategy use and reading competence.

One strength of the PISA assessment in this regard, however, is that it allows metacognitive knowledge to be analyzed in relation to strategy use (either as a predictor or as a moderator variable) and sheds light on the question of the generalizability of structural effects across countries. Taken together, the balance of similarities and differences between countries reported throughout this article favors the similarities. Most results indicated that it is possible to generalize across countries. The structural patterns were rather similar, indicating that comparable mechanisms might be at work. This conclusion applies particularly to the effects of metacognitive knowledge: Students who know that they have a repertoire of strategies at their disposal and also know which to select for a given task clearly outperform their classmates with a less pronounced metacognitive knowledge base. This pattern of results could be observed in all 34 countries. Furthermore, the effect of metacognitive knowledge on reading competence did not seem to be mediated—at least not to a large extent—by students’ use of strategies: The effect of metacognitive knowledge on reading competence changed only slightly when the impact of strategy use on reading was estimated simultaneously.

We further tested whether students’ metacognitive knowledge can be regarded as a precondition for students’ strategy use to be effective. This moderator effect could be found in a large number of countries: For students with a rich knowledge base about when and where to apply strategies (conditional and relational strategy knowledge), correlations between their self-reported use of control strategies and reading competence were higher than for students with a low knowledge base. However, given that the moderator effect was generally not very pronounced, this simple way of circumventing the described ambiguity of the strategy indicator does not seem to work. Although it is likely that students report using strategies that they know well and potentially also choose adequate strategies for the task at hand, we still do not know whether their self-reported use of strategies is a valid indicator of real (habitual) behavior. Part of this ambiguity could be resolved by administering self-report measures that do not refer to learning in more or less broadly defined learning situations, but to more specific situations that allow respondents to be more precise about their conditional use of strategies (see Butler, Cartier, Schnellert, Gagnon, & Giammarino, 2011, for an example).

Although many similarities in the reported results were found across countries, some countries produced a somewhat different picture. These differences seem to be due to students’ self-reports on habitual strategy use. This was especially the case for Korea and, to some extent, Japan. In these countries, the self-report measures of strategy use correlated more closely with the achievement measure than in other countries, and the correlation between use of control strategies and achievement was even higher for students with below-average metacognitive knowledge than for those with above-average metacognitive knowledge. It is beyond the scope of this article to explain the reasons for these differences. However, it seems worth analyzing further whether the decision-making process underlying the strategy-use items differed in these countries (and perhaps yielded more valid responses) due to cultural factors (e.g., differences in conscientiousness or different standards of comparison), or whether the results reflect differences in the role of strategy instruction in the school curriculum.

Turning to practical implications, both for interventions in the fields of reading and self-regulated learning and for the suitability of the indicators in a large-scale assessment such as PISA, we can conclude that the indicator of metacognitive knowledge, although more distal to what students actually do, is more significant and relevant than that of self-reported habitual strategy use. A benchmark and goal for interventions is students’ ability to select and use adequate strategies for the task at hand (not only in the two concrete scenarios used within the knowledge test, but more generally). The rather high correlations between metacognitive knowledge and students’ reading competence were very similar across countries. To some extent, this relationship can be regarded as a causal effect of metacognitive knowledge on reading competence. Nevertheless, a cross-sectional study such as PISA cannot rule out the possibility that the effect runs in the opposite direction or is reciprocal in nature. A study using the same (although longer) instruments to measure metacognitive knowledge within a longitudinal design provided clear evidence for a reciprocal effect (Artelt, Neuenhaus, Lingel, & Schneider, 2012). With this in mind, the similarity across countries in the effects of metacognitive knowledge suggests practical consequences that apply to all countries. Furthermore, there is more reason to assume universal rather than country- or culture-specific developmental pathways of metacognition.

What are the practical pedagogical implications of our findings? Obviously, it seems worthwhile to foster metacognitive knowledge. The mean number of correctly solved pairwise comparisons per country (not shown in a table) ranged from 8.37 to 11.18 (OECD mean = 9.55, SD = 4.48) out of a possible score of 17, so there is clearly room for improvement in every country. If students are to build up a rich knowledge base about different strategies, their execution, efficiency, and their relative strengths and weaknesses (declarative, procedural, conditional, and relational strategy knowledge), they need multiple experiences with their use. There is consensus in the literature that explicit instruction, feedback (which can also be provided by peers), and many occasions to use and reflect upon strategies are needed in order to build up metacognitive knowledge (McNamara, 2007; National Reading Panel, 2000; Pressley, 2002).
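As a quick check on the "room for improvement" claim, the following sketch expresses the means just cited as proportions of the 17-point maximum (the figures are taken from the paragraph above):

```python
# Quick arithmetic behind the "room for improvement" claim: even the
# best-performing country mean reaches only about two thirds of the
# maximum score of 17.
max_score = 17
for label, mean in [("lowest country mean", 8.37),
                    ("OECD mean", 9.55),
                    ("highest country mean", 11.18)]:
    print(f"{label}: {mean / max_score:.0%} of the maximum")
# lowest country mean: 49% of the maximum
# OECD mean: 56% of the maximum
# highest country mean: 66% of the maximum
```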

Metacognitive knowledge is regarded as a necessary, although not sufficient, condition for the efficient use of strategies (note that the latter was not assessed within PISA). Students with high levels of metacognitive knowledge may not be motivated to invest effort, or they may simply have different learning goals. Assessing metacognitive knowledge alone therefore tells us nothing about the actual (motivated) use of strategies in a given situation. However, this kind of information is rarely collected in large-scale assessments such as PISA. Other large-scale studies and assessments are needed (and exist) to inform us about the quality of strategy applications related to reading, as well as about specific ways of fostering students’ effective use of strategies (e.g., National Reading Panel, 2000; see also McNamara, 2007; Paris & Stahl, 2005; Perry & Winne, 2006; Winne, 2010).

When assessing the significance of various indicators of students’ approaches to learning (see OECD, 2010b), it seems worthwhile to use indicators that are (a) comparable across countries, (b) valid, and (c) tied to a clear benchmark. As shown above, none of these criteria is fully met by the indicators of self-reported habitual strategy use (see Artelt et al., 2003; Winne & Perry, 2000). Generally speaking, researchers have to take into account the constraints and inherent assumptions (as well as the pros and cons) of the respective conceptualizations of metacognition and strategy use. Against this background, there are many advantages to using the indicator of metacognitive knowledge in future international large-scale studies of educational achievement. It can be interpreted in terms of a more or less pronounced knowledge base. Moreover, it has proven to be a highly reliable predictor of students’ academic achievement in each of the 34 OECD countries participating in PISA 2009, and it offers concrete practical consequences for fostering reading comprehension.

APPENDIX


MEASURING STUDENTS’ HABITUAL USE OF STRATEGIES

Students’ habitual use of memorization, elaboration, and control strategies was measured with 13 self-report items. Students had to indicate on a four-point scale (almost never, sometimes, often, almost always) how often they did what was described in the items. The items were presented in random order after the following general question and instruction: “When you are studying, how often do you do the following? (Please tick only one box in each row).” For this article, the items were sorted by the underlying type of strategy (memorization strategies, elaboration strategies, control strategies) for ease of presentation; information about the underlying strategy type was not accessible to the students. The scores for the three strategy scales were derived by computing the mean value of the items for each scale (a minimal scoring sketch follows the item lists below).

MEMORIZATION STRATEGIES (FOUR ITEMS)

When I study, I try to memorize everything that is covered in the text.
When I study, I try to memorize as many details as possible.
When I study, I read the text so many times that I can recite it.
When I study, I read the text over and over again.

ELABORATION STRATEGIES (FOUR ITEMS)

When I study, I try to relate new information to prior knowledge acquired in other subjects.
When I study, I figure out how the information might be useful outside school.
When I study, I try to understand the material better by relating it to my own experiences.
When I study, I figure out how the text information fits in with what happens in real life.

CONTROL STRATEGIES (FIVE ITEMS)

When I study, I start by figuring out what exactly I need to learn.
When I study, I check if I understand what I have read.
When I study, I try to figure out which concepts I still haven’t really understood.
When I study, I make sure that I remember the most important points in the text.
When I study and I don’t understand something, I look for additional information to clarify this.
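To illustrate the scale scoring described at the start of this appendix, here is a minimal sketch assuming the responses sit in a pandas DataFrame; the item and column names are hypothetical placeholders, not the PISA variable names.

```python
# A minimal scoring sketch, assuming responses are stored in a pandas
# DataFrame with one column per item, coded 1 ("almost never") to
# 4 ("almost always"); the item and column names are illustrative.
import pandas as pd

# Hypothetical mapping of the 13 items to the three scales.
scales = {
    "memorization": ["mem1", "mem2", "mem3", "mem4"],
    "elaboration": ["ela1", "ela2", "ela3", "ela4"],
    "control": ["con1", "con2", "con3", "con4", "con5"],
}

# Toy responses for two students.
df = pd.DataFrame(
    [[4, 3, 4, 4, 2, 1, 2, 2, 3, 3, 4, 3, 3],
     [1, 2, 2, 1, 4, 4, 3, 4, 4, 4, 4, 3, 4]],
    columns=[item for items in scales.values() for item in items],
)

# Each scale score is simply the mean of its items, as described above.
for scale, items in scales.items():
    df[scale] = df[items].mean(axis=1)

print(df[list(scales)])
```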

Notes


1. Note that the indicator for metacognitive knowledge used within this article is not available in the international dataset, because separate indicators were constructed for the two scenarios.


References


Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interaction. Newbury Park, CA: Sage.


Artelt, C. (2000). Strategisches Lernen [Strategic learning]. Münster, Germany: Waxmann.


Artelt, C., Baumert, J., Julius-McElvany, N., & Peschar, J. (2003). Learners for life. Student approaches to learning. Results from PISA 2000. Paris, France: OECD.


Artelt, C., Beinecke, A., Schlagmüller, M., & Schneider, W. (2009). Diagnose von Strategiewissen beim Textverstehen [Diagnosing knowledge of strategies in text comprehension]. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 41, 96–103.


Artelt, C., & Neuenhaus, N. (2010). Metakognition und Leistung [Metacognition and performance]. In W. Bos, E. Klieme, & O. Köller (Eds.), Schulische Lerngelegenheiten und Kompetenzentwicklung (pp. 127–146). Münster, Germany: Waxmann.


Artelt, C., Neuenhaus, N., Lingel, K., & Schneider, W. (2012). Entwicklung und wechselseitige Effekte von metakognitiven und bereichsspezifischen Wissenskomponenten in der Sekundarstufe [Development and reciprocal effects of metacognitive and domain-specific knowledge components in secondary school]. Psychologische Rundschau, 63(1), 18–25.


Artelt, C., Schiefele, U., & Schneider, W. (2001). Predictors of reading literacy. European Journal of Psychology of Education, 16, 363–384.


Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.


Bempechat, J., Jimenez, N. V., & Boulay, B. A. (2002). Cultural-cognitive issues in academic achievement: New directions for cross-national research. In A. C. Porter & A. Gamoran (Eds.), Methodological advances in cross-national surveys of educational achievement (pp. 117–149). Washington, DC: National Academic Press.


Best, D. L., & Ornstein, P. A. (1986). Children’s generation and communication of mnemonic organizational strategies. Developmental Psychology, 22, 845–853.


Borkowski, J. G., Milstead, M., & Hale, C. (1988). Components of children's metamemory: Implications for strategy generalization. In F. E. Weinert & M. Perlmutter (Eds.), Memory development: Universal changes and individual differences (pp. 73–100). Hillsdale, NJ: Erlbaum.


Brown, A. L. (1978). Knowing when, where and how to remember: A problem of metacognition. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 1, pp. 77–165). Hillsdale, NJ: Erlbaum.


Butler, D., Cartier, S. C., Schnellert, L., Gagnon, F., & Giammarino, M. (2011). Secondary students’ self-regulated engagement in reading: Researching self-regulation as situated in context. Psychological Test and Assessment Modeling, 53, 73–105.


Campione, J. C., & Armbruster, B. B. (1985). Acquiring information from texts: An analysis of four approaches. In J. W. Segal, S. F. Chipman, & R. Glaser (Eds.), Thinking and learning skills (pp. 317–359). Hillsdale, NJ: Erlbaum.


Cavanaugh, J. C., & Perlmutter, M. (1982). Metamemory: A critical examination. Child Development, 53, 11–28.


Collins Block, C., & Pressley, M. (Eds.). (2002). Comprehension instruction: Research-based best practices. New York, NY: The Guilford Press.


Dunlosky, J., & Metcalfe, J. (2009). Metacognition (1st ed.). Thousand Oaks, CA: Sage.


Flavell, J. H., Miller, P. H., & Miller, S. A. (2002). Cognitive development (4th ed.). Upper Saddle River, NJ: Pearson Education.


Goswami, U. (2008). Cognitive development: The learning brain (2nd ed.). Hove, England: Psychology Press.


Hacker, D. J., Dunlosky, J., & Graesser, A. (Eds.). (2009). Handbook of metacognition in education. New York, NY: Taylor & Francis.


Heine, S. J., Lehmann, D. R., Markus, H. R., & Kitayama, S. (1999). Is there a universal need for positive self-regard? Psychological Review, 106(4), 766–794.


Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.


Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge, England: Cambridge University Press.


Kreutzer, M. A., Leonard, C., & Flavell, J. H. (1975). An interview study of children’s knowledge about memory. Monographs of the Society for Research in Child Development, 40(1), 1–60.


McNamara, D. S. (Ed.). (2007). Reading comprehension strategies: Theories, interventions and technologies. Mahwah, NJ: Lawrence Erlbaum Associates.


National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read (National Institute of Child Health and Human Development, NIH, DHHS). Washington, DC: US Government Printing Office.


Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (pp. 125–141). New York, NY: Academic Press.


OECD. (2001). Knowledge and skills for life. First results from PISA 2000. Paris, France: OECD.


OECD. (2010a). PISA 2009 Results: What students know and can do: Student performance in reading, mathematics and science. Paris, France: OECD.


OECD. (2010b). PISA 2009 results: Learning to learn. Student engagement, strategies and practices. Paris, France: OECD.


OECD. (2012). PISA 2009 [Technical report]. Paris, France: OECD.


Paris, S. G., & Byrnes, J. (1989). The constructivist approach to self-regulation and learning in the classroom. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theory, research and practice (pp. 169–200). New York, NY: Springer.


Paris, S. G., & Stahl, S. (2005). Children’s reading comprehension and assessment. Mahwah, NJ: Lawrence Erlbaum Associates.


Perry, N. E., & Winne, P. H. (2006). Learning from learning kits: gStudy traces of students’ self-regulated engagements with computerized content. Educational Psychology Review, 18, 211–228.


Pieschl, S. (2007). To calibrate or not to calibrate? Conditions and processes of metacognitive calibration during hypermedia learning (Doctoral dissertation, Universität Münster, Germany). Retrieved from urn:nbn:de:hbz:6-14569461694


Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching and assessing. Theory into Practice, 41(4), 219–225.


Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). The motivated strategies for learning questionnaire (MSLQ). Ann Arbor, MI: NCRIPTAL, The University of Michigan.


Pressley, M. (2002). Comprehension strategies instruction. In C. C. Block & M. Pressley (Eds.), Comprehension instruction: Research-based best practices (pp. 11–27). New York: Guilford Press.


Schlagmüller, M., & Schneider, W. (2007). WLST 7-12. Würzburger Lesestrategie Wissenstest für die Klassen 7 bis 12 [Wurzburg knowledge of reading strategies test for Grades 7 to 12]. Göttingen, Germany: Hogrefe.


Schneider, W. (2010). Metacognition and memory development in childhood and adolescence. In H. Salatas Waters & W. Schneider (Eds.), Metacognition, strategy use, and instruction (pp. 54–81). New York: Guilford Press.


Schneider, W., & Artelt, C. (2010). Metacognition and mathematics education. ZDM – The International Journal on Mathematics Education, 42, 149–161.


Schneider, W., & Lockl, K. (2008). Procedural metacognition in children: Evidence for developmental trends. In J. Dunlosky & R. A. Bjork (Eds.), A handbook of metamemory and memory (pp. 391–410). New York: Psychology Press.


Schneider, W., & Pressley, M. (1997). Memory development between two and twenty (2nd ed.). Mahwah, NJ: Erlbaum.


Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7(4), 351–371.


Van de Vijver, F., & Leung, K. (1997). Methods and data analysis of comparative research. In J. W. Berry, Y. H. Poortinga, & J. Pandey (Eds.), Handbook of cross-cultural psychology: Vol. 1. Theory and method (pp. 257–300). Needham Heights, MA: Allyn and Bacon.


Weinstein, C. E. (1987). Learning and study strategies inventory (LASSI). Clearwater, FL: H & H Publishing Company.


Wild, K.-P., Schiefele, U., & Winteler, A. (1992). LIST: Ein Verfahren zur Erfassung von Lernstrategien im Studium [A procedure for assessing study learning strategies]. In A. Krapp (Ed.), Arbeiten zur empirischen Pädagogik und pädagogischen Psychologie (Vol. 20). Neubiberg, Germany: Gelbe Reihe.


Winne, P. H. (2010). Improving measurements of self-regulated learning. Educational Psychologist, 45, 267–276.


Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Mahwah, NJ: Erlbaum.


Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 531–566). Orlando, FL: Academic Press.






About the Author
  • Cordula Artelt
    University of Bamberg, Germany
    E-mail Author
    CORDULA ARTELT is currently Professor of Educational Research at the University of Bamberg, Germany, where she teaches psychology and educational research. She also holds the position of scientific manager for competence development across the life course within the German National Educational Panel Study (NEPS). Her primary interests are competence assessments (including teacher professionalism), self-regulated learning, metacognition, as well as text comprehension and reading.
  • Wolfgang Schneider
    University of Würzburg, Germany
    E-mail Author
    WOLFGANG SCHNEIDER is currently Professor of Psychology at the Department of Psychology, University of Würzburg, Germany, where he teaches educational and developmental psychology courses. His research interests include the development of memory and metacognition, giftedness and expertise, the development of reading and spelling, as well as the prevention of reading and math difficulties.
 