Lessons Learned: What International Assessments Tell Us About Math Achievement


reviewed by Elizabeth Oldham - April 07, 2009

Title: Lessons Learned: What International Assessments Tell Us About Math Achievement
Author(s): Tom Loveless (Ed.)
Publisher: Brookings Institution, Washington D.C.
ISBN: 0815753349, Pages: 275, Year: 2007


International studies of mathematics education involving assessment of student achievement have provoked discussion, and dissension, for more than forty years. On the one hand, the major cross-national studies have – in varying degrees – presented systematic accounts of curriculum and extensive data on students, teachers and schools, in addition to the student achievement scores that are their best-known feature; they provide rich data sets organized in a way that facilitates thorough analysis. On the other hand, when the results are reported chiefly in terms of the participating countries’ (or education systems’) mean assessment scores, rank ordered, shorn of context and in many ways divested of meaning, they may mislead more than they inform. Reporting of this kind provokes criticisms such as those of Keitel and Kilpatrick (1999): “International comparative studies … are cited as though the results they provide go without question…. [They] not only compare the incomparable, they rationalise the irrational” (p. 254).


It has long been a contention of this reviewer that international studies of achievement fulfill their potential only when hard questions are asked: when the results, and also the methodologies, are subjected to informed critique. Questioning can take place within participating countries, with regard to aspects such as the match between the content of the tests and that of the country’s curriculum, and the extent to which either of these reflects the aims of the country’s education system. However, questions can also be discussed in the light of profound secondary analyses at the international level, nowadays taking advantage of very powerful statistical techniques.


Lessons Learned exemplifies both types of questioning, but especially the latter. Stemming from papers delivered at a conference designed to probe the vast database assembled by the Trends in International Mathematics and Science Study (TIMSS), it contains an introduction and eight further chapters, all mining the data in depth and with subtlety, and all requiring – and deserving – close study. The results are fascinating. On account of the complexity of the issues, this review provides little in the way of summaries of the findings; summaries short enough to figure here would lose too much context and hence be in danger of “comparing the incomparable.”


The Introduction, by editor Tom Loveless, sets out the commitment to move beyond the “horse race” view of international studies in order to illuminate issues of policy and practice. The two following chapters deal in different ways with the development and scope of the studies over time. Ina Mullis and Michael Martin provide an historical account of large-scale international studies. They note, for example, that the Second International Mathematics Study (SIMS) placed great emphasis on curriculum as an explanatory variable, and also introduced a longitudinal element to the investigation of achievement (with eighth-grade students in some countries being tested twice); and they record the increasing sophistication of successive rounds of TIMSS. The “longitudinal” aspect is picked up in Jan-Eric Gustafsson’s chapter on causal influences on achievement through within-country analysis of differences for eighth-grade students between TIMSS 1995 and TIMSS 2003 results; he considers student age and class size.


Three chapters focus in varying ways on curricular matters. In their concern primarily, though not exclusively, for the U.S., they provide examples of the first type of questioning described above: reflecting on a national issue in international context. William Schmidt and Richard Houang examine curricular “coherence” (logical progression in accordance with the structure of mathematics) and “focus” (concentration on a comparatively limited range of topics). As well as investigating how these two important concepts are related to achievement, the chapter raises a methodological issue: how precisely the curriculum can be described for international studies while still producing manageable data. As a member of the team completing the “General Topic Trace Mapping” for the Irish response for TIMSS 1995, this reviewer recalls the difficulties in identifying the first appearance of some mathematical concepts in the curriculum, especially in the case of topics that start very informally for young children and reappear more formally in higher grades. The “ground rules” chosen by raters in different countries, both on this issue and with regard to “milestones” or “topics of great importance” in the curriculum, might differ. Studies in the TIMSS tradition constantly face such qualitative issues, as well as the challenges associated with cutting-edge work in the quantitative area.


In the other two mainly curricular chapters, Jeremy Kilpatrick, Vilma Mesa and Finbarr Sloane use item-level data to examine strengths and weaknesses of the U.S. performance in algebra – a matter of special interest because of the unusual way (in international terms) that the topic appears in U.S. curricula. The effect of the reform agenda in mathematics curricula, with regard to teachers’ adoption of practices in the reform tradition, is considered by Laura Hamilton and José Felipe Martínez.


The three remaining chapters address a variety of issues. First, there are policy implications in Gabriela Schütz’s study of school size and student achievement in TIMSS 2003. Secondly, Elena Papanastasiou and Efi Paparistodemou discuss educational technology and achievement. This chapter lends itself more than some of the others to a straightforward summary: the promise inherent in technology has not yet been fulfilled, in that greater technology use is not associated with higher achievement – rather (though not invariably) the other way about; it appears that teachers do not yet know how to avail themselves of the offerings of technology for higher-level cognitive processes. Finally, Dougal Hutchison and Ian Schagen compare TIMSS with the OECD’s Programme for International Student Assessment, PISA. They note that while the two studies are similar with regard to methodology, the rationale is different: in TIMSS the aim is to test mathematics common to the curricula of as many countries as possible, whereas PISA has an agenda focused on the use of mathematics in the world beyond school. The authors call for research on the differences in rank orderings of countries participating in both studies. It is worth noting here that research findings from comparisons of earlier studies point to the sensitivity of rank orderings to, for instance, item type (see, for example, O’Leary, 2001). PISA has proportionately more constructed-response items than does TIMSS, so this could be a source of variation. Moreover, PISA not only has a different purpose from TIMSS, but also is driven by a very clear philosophy of mathematics education in the “Realistic Mathematics Education” tradition. With the tests aiming to tap different aspects of mathematics, aspects that are given different curricular weight according to countries’ differing cultures of mathematics education, variation in rank ordering is perhaps predictable rather than unexpected.

 

That contention leads to another point about future research. While curriculum analyses have identified many similarities in mathematics curricula, they have not always collected data in a form that facilitates the highlighting of minor but interesting differences. Moreover, when such differences have appeared, they have not always been emphasized in the reports. For TIMSS, this is understandable because of the need to focus on areas of reasonable commonality for valid testing, but it is a limitation nonetheless. (PISA does not collect data about school curricula; however, in view of the importance of curriculum as a variable in interpreting the results, it is not surprising that some countries have carried out a test-curriculum matching exercise to identify the areas in the PISA tests not in their school curricula and vice versa. An example from the reviewer’s country is reported by Cosgrove, Shiel, Sofroniou, Zastrutzki and Shortt (2005).) Documentation about “fringe” areas of curriculum – perhaps flagging topics that are being newly introduced, or are being phased out – could generate very interesting data with regard to the character and the evolution of school mathematics, and could further provide a systematic account of the curricula offered to non-specialists as well as to those taking mathematics as a main subject. An extended study of curriculum encompassing these aspects could be reported in a collection of papers complementary to, and as valuable, interesting and challenging as, those in the present volume.



References


Cosgrove, J., Shiel, G., Sofroniou, N., Zastrutzki, S., & Shortt, F. (2005). Education for life: The achievement of 15-year-olds in Ireland in the second cycle of PISA. Dublin: Educational Research Centre.


Keitel, C., & Kilpatrick, J. (1999). The rationality and irrationality of international comparative studies. In G. Kaiser, E. Luna, & I. Huntley (Eds.), International comparisons in mathematics education, Studies in Mathematics Education Series No. 11 (pp. 241-256). London: Falmer Press.


O’Leary, M. (2001). Item format as a factor affecting the relative standing of countries in the Third International Mathematics and Science Study (TIMSS). Irish Educational Studies, 20, 153-176.




Cite This Article as: Teachers College Record, Date Published: April 07, 2009
https://www.tcrecord.org ID Number: 15613

About the Author
  • Elizabeth Oldham
    Trinity College Dublin, Ireland
    E-mail Author
    ELIZABETH OLDHAM is a lecturer in Trinity College Dublin, Ireland, specializing in mathematics education. For several years she also worked with the Irish National Council for Curriculum and Assessment. She has a long-standing interest in international studies of curriculum and achievement; she was a member of the Curriculum Analysis Group in the Second International Mathematics Study, and is now on the National Advisory Committee for the Irish component of the Programme for International Student Assessment (PISA). Recent publications deal with Irish performance in PISA and with projects on technology in mathematics and in teacher education.