A Call for Consensus in the Use of Student Socioeconomic Status Measures in Cross-National Research using the Trends in International Mathematics and Science Study (TIMSS)
by Amita Chudgar, Thomas F. Luschei & Loris Fagioli - June 16, 2014
The objectives of this research note are to: (1) illustrate variability in approaches to capture student Socioeconomic Status (SES) in current cross-national educational literature using Trends in International Mathematics and Science Study (TIMSS); (2) demonstrate that the choices researchers make about SES measures have important consequences for their conclusions about relationships between student performance and school resources; and (3) invite a conversation among researchers using cross-national data (especially TIMSS) that will lead to greater consensus as to how to measure student SES in educational research.
TIMSS is the world's most comprehensive source of multi-year, cross-national data on student performance in science and mathematics. The TIMSS data, which provide nationally representative information on fourth and eighth grade students in dozens of school systems, offer tremendous potential to explore mathematics and science achievement across diverse contexts. From a policy perspective, researchers can use TIMSS data (which also contain information on teachers and other school resources) to investigate school and teacher variables associated with national and cross-national variation in mathematics and science achievement. To do so, researchers must account for the socioeconomic status (SES) of students as accurately as possible, to reduce bias resulting from unmeasured relationships between school resources and student SES (Buchmann, 2002; Coleman, 1966, 1975). Yet educational research often lacks a systematic conceptualization of SES (Harwell & LeBeau, 2010; Sirin, 2005). Although researchers have proposed a tripartite measure of family SES comprising parental education, parental occupation, and family income (e.g., Buchmann, 2002; Sirin, 2005),1 these variables are often missing in classroom-based surveys like TIMSS.
Lack of extensive SES measures in TIMSS and other surveys creates a tension between the desire for a conceptually grounded measure of SES and the realities of what is available in the data. In many cases, this tension is resolved through ad hoc conceptualizations of SES. As Harwell and LeBeau argue, SES "often seems to be defined solely by the variable regarded as capturing this information" (Harwell & LeBeau, 2010, p. 121). This task is further complicated in cross-national educational research, due to the lack of social class measures that are "culturally relevant to the particular society or community being studied" (Fuller & Clarke, 1994, p. 167). Against this backdrop, what guidelines are available to researchers wishing to account for SES in the TIMSS data? Unfortunately, current research offers few clear directions.
The objectives of this research note are to: (a) illustrate variability in approaches to capture student SES in current cross-national educational literature using TIMSS; (b) demonstrate that the choices researchers make about SES measures have important consequences for their conclusions about relationships between student performance and school resources; and (c) invite a conversation among researchers using cross-national data (especially TIMSS) that will lead to greater consensus as to how to measure student SES in educational research. To provide a model for the type of effort needed to reach such a consensus, we discuss the recommendations of a panel convened to address the measurement of student SES in the United States National Assessment of Educational Progress (NAEP).
WHY IS CONSENSUS IMPORTANT AND WHAT IS NEEDED TO REACH A CONSENSUS?
Student SES is a critical consideration in virtually all educational research. As Harwell and LeBeau (2010) note, "more than nine decades of research" (p. 120) document a relationship between student SES and student achievement. A task force of the American Psychological Association observed that "socioeconomic factors and social class are fundamental determinants of human functioning across the life span, including development, well-being, and physical and mental health" (APA, 2007, p. 1). The practical importance of accounting for SES is echoed in a widely cited report published by the Economic Policy Institute (Carnoy & Rothstein, 2013). The authors of this report argue that reporting on international differences in student achievement does not adequately address student SES. In their analysis of data from the Programme for International Student Assessment (PISA), the authors find that, after adjusting for social composition and sampling error, the United States increases its rank out of 34 members of the Organization for Economic Cooperation and Development (OECD) from 14th to 4th in reading and from 25th to 10th in math (Carnoy & Rothstein, 2013). In other words, to understand the United States' relative educational underperformance, we must pay much greater attention to cross-country differences in student SES.2
Yet even in the United States, a country with a long history of systematic collection of education data, measuring SES is difficult. For instance, although a great deal of educational research in the US relies on a single measure of student SES (eligibility for the National School Lunch Program, or NSLP), evidence suggests that this measure may be inadequate and possibly even biased (Harwell & LeBeau, 2010).
The panel convened to address the measurement of student SES in the United States NAEP made several important observations that apply to both US and cross-national educational research. First, the panel identified the "Big Three" factors of family income, parental education, and parental occupation, as well as the possibility of including neighborhood and school SES, psychological variables, and more subjective measures, such as students' own beliefs about their SES (Cowan et al., 2012). Second, addressing the debate over whether to use individual measures of SES or to combine them into a composite measure, the NAEP panel argued in favor of a composite measure because of the simplicity in reporting and the avoidance of conflicting stories about relationships to achievement. Despite the advantages of using individual constituent components of SES to help policy makers target resources toward specific interventions (Deaton, 2002; Willms, 2003), the NAEP panel concluded that "the advantages of a composite variable generally outweigh the disadvantages" (Cowan et al., 2012, p. 22).
Finally, the NAEP panel discussed the treatment of missing data in constructing SES measures. Although any researcher must address the potential for bias stemming from the use of data that are not missing at random, dealing with the issue of missing data "may be more critical in the case of composite variables compared to single variables ... simply because there are more opportunities for data to be missing" (p. 26). Yet according to the panel, the researcher can address this problem through data imputation, as "there are probably no special problems associated with imputing missing data in the case of computing the SES composite" (p. 26). At the same time, the NAEP panel asserts, further study is necessary to address missing data issues in SES measurement (p. 26).
The NAEP panel's observations regarding selecting variables, combining these variables, and treating missing data in these variables all apply to cross-national research. Indeed, scholars working on cross-national research have considered similar issues and their implications for some time. For example, Buchmann (2002) noted that in addition to the above considerations, comparative researchers must "straddle the fine line between sensitivity to local context and the concern for comparability across multiple contexts" (p. 168). Comparative scholars must also consider differences across developed and developing countries (Fuller & Clarke, 1994). Together, these challenges have long presented cross-national educational researchers who work with existing datasets like TIMSS with many complex decisions. And despite their recognition of these challenges, these researchers have few guidelines to follow. In the following section, we discuss how this complexity results in significant diversity in researchers' approaches to which variables to use to measure SES, how to use them, and how to deal with missing data. We also discuss the implications of these choices.
DIVERSITY IN THE USE OF SES MEASURES IN TIMSS LITERATURE
To demonstrate the variation in approaches used to measure SES, we conducted a review of recent, representative literature using the ERIC database. Our criteria included relevance, recency, empirical content, and quality. To ensure relevance and recency, we searched for articles mentioning TIMSS during the period 2003 to 2013. Quality criteria included publication in a peer-reviewed journal indexed with an impact factor in the Thomson Reuters Web of Knowledge Journal Citation Reports, Social Sciences Edition 2011. Limiting the search to peer-reviewed articles resulted in 239 articles, 81 of which were in indexed journals. Each of the 81 articles was read to determine if TIMSS data were used and if the author(s) used SES or home background/context variables in the analyses. Elimination of articles that did not use TIMSS data or SES variables resulted in a final sample of 21 articles from a diverse set of journals. The three authors coded each of these articles in relation to the three key decisions highlighted by the NAEP report: (a) variables used to measure SES, (b) method used to combine these variables, and (c) handling of missing data (see Table 1). In reporting these results we identify the journal and year of publication, but not author names. Our intention is not to identify strengths and weaknesses of specific studies; instead, we aim to illustrate the diversity of approaches to measure the same construct across a group of recently published articles.
Variables selected: As demonstrated in Table 1, there is little consistency in how the authors chose and treated SES variables, though a few commonalities emerge. The two most common variables were parental education (N=17, 81%) and books in the home (N=16, 76%). For parental education, authors used the highest value of either mother's or father's education, a combination of both, or only mother's education. Ten articles used student-reported measures of the availability of possessions in the home. The most frequently used possessions were a computer (N=9, 43%) and a study desk (N=9, 43%), followed by a dictionary (N=6, 29%), a calculator (N=5, 24%), and an Internet connection, which was used by one study.
The majority of articles also included additional measures of home background as identified by the authors. These included country-specific possessions in the home,3 exposure to test-language at home, immigration status, household size, living with both parents, and less frequently, parent and student expectations, and student aspirations.
Combining SES variables: We also found diversity in how researchers used the variables they selected for their models, from simple to more sophisticated approaches. Most studies included each variable individually (N=14, 67%), but some included some of the variables individually while combining home resource variables into an index (N=4, 19%). Three articles (14%) created composite measures using either simple summation or more sophisticated methods like factor analysis or confirmatory factor analysis.
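To make the contrast between these approaches concrete, the simplest composite strategy (standardizing each indicator and averaging) can be sketched in a few lines. This is a generic illustration, not the procedure of any reviewed study; the student values below are hypothetical, and factor-analytic weights would replace the equal weights in the more sophisticated variants.

```python
import numpy as np

def composite_ses(parent_ed, books, possessions):
    """Combine SES indicators into one composite by z-scoring each
    indicator and averaging (equal weights; a factor analysis would
    instead estimate the weights from the data)."""
    X = np.column_stack([parent_ed, books, possessions]).astype(float)
    z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each column
    return z.mean(axis=1)                     # equal-weight composite

# Hypothetical data for five students
parent_ed = [12, 16, 10, 18, 14]   # years of parental schooling
books = [2, 4, 1, 5, 3]            # books-in-home category
possessions = [3, 5, 1, 5, 4]      # count of home possessions

ses = composite_ses(parent_ed, books, possessions)
print(ses.round(2))
```

Because each indicator is standardized before averaging, the composite is centered at zero and students who rank high on all three indicators rank high on the composite.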
Approaches to missing data: The 21 studies varied considerably in how they treated and discussed missing data. Almost half of the studies (N=10, 48%) did not explicitly mention missing data. Of the articles that did address missing data, five employed listwise deletion and three used mean imputation. The others used regression imputation, Full Information Maximum Likelihood, and/or Expectation Maximization.
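The practical difference between the two most common treatments (listwise deletion and mean imputation) is easy to see on simulated data. The sketch below is illustrative only; the 30% nonresponse rate for parental education is an assumption, not a TIMSS figure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
parent_ed = rng.integers(8, 19, n).astype(float)  # years of schooling
# Suppose 30% of students cannot report parental education (hypothetical rate)
parent_ed[rng.random(n) < 0.3] = np.nan

# Listwise deletion: drop every case with a missing value
complete = ~np.isnan(parent_ed)
print("listwise n:", int(complete.sum()))   # only complete cases remain

# Mean imputation: keep all cases, fill missings with the observed mean
imputed = np.where(np.isnan(parent_ed), np.nanmean(parent_ed), parent_ed)
print("imputed n:", len(imputed))           # all cases retained
```

Listwise deletion shrinks the analytic sample in proportion to the nonresponse rate, while imputation preserves the sample size (at the cost of assumptions about the missingness mechanism).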
In our assessment, the variability of approaches we found in recent peer-reviewed literature using TIMSS data stems from a lack of guidelines or best practices on how to approach the challenge of accounting for student SES using these data.4 To explore the consequences of this lack of consensus on the results and interpretation of cross-national research, we conducted a simple empirical exercise using the TIMSS data. We discuss our results below.
THE IMPLICATIONS OF SES MEASURE INCLUSION ON MODEL FIT AND SAMPLE SIZE
For this exercise we use the 2007 TIMSS 8th grade data from diverse world regions.5 With Fuller and Clarke's (1994) critique in mind, we selected two countries each from seven world regions: Norway and Sweden from Scandinavia; Italy and England from Western Europe; the Russian Federation and Romania from Eastern Europe; Oman and Qatar from the Gulf Region; Botswana and Ghana from Sub-Saharan Africa; Japan and South Korea from East Asia; and Thailand and Malaysia from Southeast Asia. We also included the United States. According to CIA World Factbook data (2011), this sample is diverse in terms of both income and inequality. GDP per capita at purchasing power parity ranges from $179,000 in Qatar to $2,500 in Ghana, while the Gini index ranges from 63 in Botswana to 23 in Sweden (a higher number indicates greater income inequality).
We conducted a simple step-wise regression analysis to explore the relative importance of variables commonly used to measure SES. The baseline regression (Regression 1) controls for the child's age, sex, language spoken at home, access to a computer, time spent on subject homework, an index of student confidence in the subject, an index of the perceived value of the subject, an index of the student's positive affect towards the subject, and an index of the student's perception of being safe in school.6 All of the indices are provided in the TIMSS data. Each of the next four models adds home background variables available in TIMSS to measure SES. First, we added the number of books (Books) in the home as a continuous variable (Model 1). In Model 2, we added five separate variables indicating the availability of a study desk, a dictionary, a computer, a calculator, and an Internet connection at home (Common). In Model 3 we added four country-specific items (Specific).7 In the final model (Model 4) we added father's and mother's education (ParentEd)8 as two separate continuous variables. We added these variables last due to a large amount of missing data in many countries. In preparing the parental education variables, we coded student responses of "I don't know" as missing. This is arguably a stringent approach. A child who reports that she does not know her parents' education level is in fact providing some information, but it is not immediately evident how to interpret this information. Our decision to code "I don't know" responses as missing illustrates the many choices that researchers face in working with TIMSS data.
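The logic of this sequential exercise (fitting nested models and comparing adjusted R-squared as each block of variables enters) can be sketched as follows. The data are simulated and the variables and effect sizes are hypothetical; the point is only the mechanics of comparing nested models, not a reproduction of our TIMSS estimates.

```python
import numpy as np

def adj_r2(y, X):
    """Fit OLS with an intercept and return adjusted R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    n, k = X1.shape                     # k includes the intercept
    return 1 - (1 - r2) * (n - 1) / (n - k)

rng = np.random.default_rng(1)
n = 500
age = rng.normal(14, 0.5, n)                    # baseline control
books = rng.integers(1, 6, n).astype(float)     # Model 1 block
p_ed = rng.integers(8, 19, n).astype(float)     # final block
score = 400 + 15 * books + 5 * p_ed + rng.normal(0, 60, n)

baseline = adj_r2(score, np.column_stack([age]))
model1 = adj_r2(score, np.column_stack([age, books]))
model4 = adj_r2(score, np.column_stack([age, books, p_ed]))
print(baseline, model1, model4)  # rises as each informative block enters
```

The gain in adjusted R-squared from one model to the next is the quantity compared across countries in Tables 2 and 3.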
We employed two approaches to account for missing data, one simple and the other more sophisticated. The simple approach amounted to dropping an observation (listwise deletion, with the first plausible value as the dependent variable) if any data were missing. The more sophisticated approach involved the use of multiple imputation (MI) regressions to create five imputed datasets using the ice command in Stata (Royston, 2009). This approach allowed us to use all five plausible values as the dependent variable. Coefficients were aggregated according to Rubin (1987) and Harel (2009). All regressions used the appropriate sample weight TOTWGT. While the first analysis allows us to examine the impact of missing data on sample size, we use the second analysis to focus on the variance explained by the inclusion or exclusion of different independent variables.
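The aggregation step referenced above follows Rubin's (1987) rules: the pooled point estimate is the mean of the per-dataset estimates, and the total variance adds the between-imputation variance (inflated by 1 + 1/m) to the average within-imputation variance. A minimal sketch, with hypothetical coefficient estimates rather than our actual TIMSS results:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool one coefficient across m imputed datasets with Rubin's rules."""
    q = np.asarray(estimates, float)
    u = np.asarray(variances, float)
    m = len(q)
    qbar = q.mean()              # pooled point estimate
    ubar = u.mean()              # within-imputation variance
    b = q.var(ddof=1)            # between-imputation variance
    t = ubar + (1 + 1 / m) * b   # total variance
    return qbar, t

# Hypothetical coefficient on parental education from 5 imputed datasets
est, total_var = pool_rubin([4.1, 3.9, 4.3, 4.0, 4.2],
                            [0.25, 0.24, 0.26, 0.25, 0.25])
print(est, total_var)
```

Note that the total variance exceeds the average within-imputation variance whenever the imputations disagree, which is how MI propagates missing-data uncertainty into the standard errors.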
While we conducted the analysis for mathematics and science, in the interest of space we present only the results for mathematics, in Tables 2 and 3 and Figures 1 and 2. (The results for science are similar and available upon request.) The figures display the same information as the tables but are easier to read at a glance. In Figure 1, while there are differences in magnitude between estimates generated from listwise deletion and MI, the overall patterns in variance explained hold across both approaches.
As expected, in most countries the addition of SES variables across models increases the explained variance in student math performance. Yet the models seem to explain far more variation in student performance in developed countries like Korea or Norway. In developing countries like Ghana or Botswana, as well as in the non-Western contexts of Qatar and Oman, the same sets of variables explain less overall variation.9 The second bar for Model 1 (books in the home) shows the single largest improvement to the baseline adjusted R-squared. Once again the gains vary across countries, with a vast jump in adjusted R-squared in England compared to much lower gains in Botswana, Ghana, or Qatar. Improvements in the models' explanatory capacity after the inclusion of common and country-specific possessions are generally lower. With some exceptions, we find the smallest changes in explained variance when we included country-specific items (Model 3, fourth bar). The inclusion of parental education (Model 4, fifth bar) appears to add somewhat greater explanatory power, especially in certain countries. This illustrates the importance of the variable choices available to researchers. Although a few variables like the number of books in the home may offer high-yield opportunities to understand variation in student performance and to account for home background, these variables are not consistently important across diverse contexts.
The right-hand panels of Table 2 and Figure 2 demonstrate changes in sample sizes that would occur if researchers did not carefully address the missing data issue. In such cases, statistical programs would by default drop observations with any missing information (i.e., listwise deletion). In this manner, including parent education (Figure 2, Model 4, fifth bar) leads to a drastic reduction in sample size. As seen in the right-hand panel of Table 2, in Model 4 the reduction of sample size from MI to listwise deletion ranges from 23.5% in South Korea to 70.5% in Sweden. The reduction in sample size between MI regressions and listwise deletion is also substantial in Models 1 to 3, ranging from 3.8% to 33.8%. Of note, 17 of the 21 articles we reviewed included parental education, but the majority of these 17 either used listwise deletion or provided no discussion of their treatment of missing data. Our results suggest that including parental education in combination with listwise deletion can have considerable missing-data consequences.
The brief exploration above highlights three important points. First, recent peer-reviewed literature using TIMSS lacks consensus on how SES should be operationalized along at least three important dimensions: variable selection, variable combination, and treatment of missing data. Second, these decisions may have an important impact on the results; in particular, the variables selected to capture SES or account for home background may have varying importance across different contexts. Third, depending on the choice of variables, addressing missing data is critically important. While we have not explored different approaches to combining variables, it is quite likely that researchers' choices in this area will also influence their results.
In the United States, the importance of measuring and accounting for SES in educational research has inspired high-profile efforts to come to consensus for future data collection and analysis (e.g., Cowan et al., 2012). As others before us have also shown, key issues to consider in developing this consensus include which measures of SES to include, whether and how to combine these measures, and how to address missing data. In collecting data for large US and cross-national surveys, relevant agencies may need to carefully consider the first of these three issues when deciding which variables to collect information on. Our review of recent published research using TIMSS data finds that the research community also remains divided in its approach to these three issues. In our assessment, the variability of approaches in working with these data is the result of a lack of consensus or best practices on how to account for student SES in cross-national educational research. Our empirical exercise demonstrates that variability in approaches may lead to variability in results and interpretation. There are some excellent recent examples of scholarship that attends to these issues systematically (e.g., Nonoyama-Tarumi, 2008).10 However, these advances are limited, leading us to call for a renewed conversation among researchers to come to consensus regarding how to approach the measurement of student SES in cross-national research using the TIMSS data.
Table 1. A Review of Recent Peer-reviewed Articles Using TIMSS Data to Illustrate Diversity in Use of SES Measures
Table 2. Change in Model Adjusted R-Squared and Sample Size by Sequential Inclusion of Student Home Background Information, by Country, Listwise Deletion; Dependent Variable: Math Test Score (First Plausible Value)
Table 3. Change in Model Adjusted R-Squared and Sample Size by Sequential Inclusion of Student Home Background Information, by Country, Multiple Imputation; Dependent Variable: Math Test Score (Five Plausible Values)
Figure 1. Change in Adjusted R-Squared across Five Regression Models for Math test-score, Listwise Deletion and Multiple Imputation, by Country.
1. Y-axis represents adjusted R-squared values
Figure 2. Changes in Sample Size across Five Regression Models for Math test-score, Listwise Deletion and Multiple Imputation, by Country.
1. Y-axis represents sample size
1. Sirin (2005), who covers many of the issues we discuss here, offers an excellent and exhaustive review of related US-focused literature.
2. Carnoy and Rothstein's primary measure of social class (home) influences is the number of books in the home. They also use other measures, such as mother's education and an overall index provided by PISA, but their results remain unchanged (Carnoy & Rothstein, 2013, p. 11).
3. Studies have previously acknowledged the importance of country-specific measures (Fuller & Clarke, 1994; May, 2006; Traynor & Raykov, 2013). These studies show that including country-specific items can be valuable but computationally demanding. Country-specific items can improve the validity and reliability of student-level SES scores if used in conjunction with international anchor items that have similar psychometric characteristics (May, 2006). Further, to account for between-country variation, Item Response Theory or weights might be necessary (May, 2006; Traynor & Raykov, 2013).
4. Other widely used cross-national data such as PISA and the Progress in International Reading Literacy Study (PIRLS) have more extensive home background measures, as the first surveys older children and the second conducts home surveys. PIRLS and PISA data provide pre-prepared indices that could reasonably be used as SES controls. However, in a brief review of literature (not reported here) we found that many studies using these data do not use the available indices. We also found broad patterns similar to the TIMSS-based studies discussed here, with limited attention to variable selection and missing data issues.
5. We use the 8th grade data because they include information on parental education. The 4th grade questionnaire does not ask students about their parents' education, which, although understandable, presents another challenge for measuring student SES in TIMSS.
6. The US data do not include the school safety index. The science data from Romania, Russia, and Sweden do not have information on time spent on subject homework, the index of student confidence in the subject, the index of the perceived value of the subject, or the index of the student's positive affect towards the subject.
7. In TIMSS 2007, each country chose up to four specific possession items to include in its survey. England, Malaysia, and Qatar gathered information on only three country-specific variables.
8. Data from England did not provide any information on father's or mother's education. We did not have a third Western European country in the sample in 8th grade.
9. These findings have an interesting parallel with Sirin (2005), who after extensive meta-analysis found that in the US, the SES-academic achievement relationship tends to be smaller for minority students compared to White students.
10. The author uses cultural capital and wealth theories to generate a conceptualization of SES which is then applied to the PISA data. The author finds that this multidimensional measure of SES explains greater variation in student performance across study countries.
Chudgar and Luschei are equal authors. The author order was determined by a coin toss.
Akiba, M., LeTendre, G. K., & Scribner, J. P. (2007). Teacher quality, opportunity gap, and national achievement in 46 countries. Educational Researcher, 36(7), 369–387.
American Psychological Association [APA]. (2007). Report of the APA task force on socioeconomic status. Washington, DC: American Psychological Association.
Buchmann, C. (2002). Measuring family background in international studies of education: Conceptual issues and methodological challenges. In A. C. Porter & A. Gamoran (Eds.), Methodological advances in cross-national surveys of educational achievement (pp. 150–197). Washington, DC: National Academy Press.
Carnoy, M., & Rothstein, R. (2013, January). What do international tests really show about U.S. student performance? Washington, DC: Economic Policy Institute.
Central Intelligence Agency [CIA]. (2011). The World Factbook. Retrieved from https://www.cia.gov/library/publications/the-world-factbook.
Coleman, J. S. (1966). Equality of educational opportunity. Washington, DC: National Center for Educational Statistics.
Coleman, J. S. (1975). Methods and results in the IEA studies of effects of school on learning. Review of Educational Research, 45, 335–386.
Cowan, C. D., Hauser, R. M., Kominski, R. A., Levin, H. M., Lucas, S. R., Morgan, S. L., Spencer, M. B., & Chapman, C. (2012, November). Improving the measurement of socioeconomic status for the National Assessment of Educational Progress: A theoretical foundation (Recommendations for the National Center for Education Statistics). Washington, DC: National Center for Education Statistics.
Deaton, A. (2002). Policy implications of the gradient of health and wealth. Health Affairs, 21(2), 13–30.
Fuller, B., & Clarke, P. (1994). Raising school effects while ignoring culture? Local conditions and the influence of classroom tools, rules, and pedagogy. Review of Educational Research, 64, 119–157.
Harel, O. (2009). The estimation of R2 and adjusted R2 in incomplete data sets using multiple imputation. Journal of Applied Statistics, 36(10), 1109–1118.
Harwell, M., & LeBeau, B. (2010). Student eligibility for a free lunch as an SES measure in education research. Educational Researcher, 39(2), 120–131.
May, H. (2006). A multilevel Bayesian item response theory method for scaling socioeconomic status in international studies of education. Journal of Educational and Behavioral Statistics, 31(1), 63–79.
Nonoyama-Tarumi, Y. (2008). Cross-national estimates of the effects of family background on student achievement: A sensitivity analysis. International Review of Education, 54(1), 57–82.
Royston, P. (2009). Multiple imputation of missing values: Further update of ice, with an emphasis on categorical variables. Stata Journal, 9(3), 466–477.
Rubin, D. B. (1987). Multiple imputation for nonresponse in surveys. New York: Wiley.
Sirin, S. R. (2005). Socioeconomic status and academic achievement: A meta-analytic review of research. Review of Educational Research, 75(3), 417–453.
Traynor, A., & Raykov, T. (2013). Household possessions indices as wealth measures: A validity evaluation. Comparative Education Review, 57(4), 662–688.
Willms, J. D. (2003, February). Ten hypotheses about socioeconomic gradients and community differences in children's developmental outcomes (0-662-89586-X, RH63-1/560-01-03F). Gatineau, Quebec: Human Resources Development Canada.
Articles reviewed for Table 1
Akiba, M. (2008). Predictors of student fear of school violence: A comparative study of eighth graders in 33 countries. School Effectiveness and School Improvement, 19(1), 51–72.
Ammermuller, A., Heijke, H., & Wößmann, L. (2005). Schooling quality in Eastern Europe: Educational production during transition. Economics of Education Review, 24(5), 579–599.
Aypay, A., Erdogan, M., & Sozer, M. A. (2007). Variation among schools on classroom practices in science based on TIMSS-1999 in Turkey. Journal of Research in Science Teaching, 44(10), 1417–1435.
Cho, I. (2012). The effect of teacher-student gender matching: Evidence from OECD countries. Economics of Education Review, 31(3), 54–67.
Dumay, X., & Dupriez, V. (2007). Accounting for class effect using the TIMSS 2003 eighth-grade database: Net effect of group composition, net effect of class process, and joint effect. School Effectiveness and School Improvement, 18(4), 383–408.
Hansson, A. (2012). The meaning of mathematics instruction in multilingual classrooms: Analyzing the importance of responsibility for learning. Educational Studies in Mathematics, 81(1), 103–125.
Heuveline, P., Yang, H., & Timberlake, J. M. (2010). It takes a village (perhaps a nation): Families, states, and educational achievement. Journal of Marriage and Family, 72(5), 1362–1376.
Ismail, N. A., & Awang, H. (2008). Differentials in mathematics achievement among eighth-grade students in Malaysia. International Journal of Science and Mathematics Education, 6(3), 559–571.
Kaya, S., & Rice, D. C. (2010). Multilevel effects of student and classroom factors on elementary science achievement in five countries. International Journal of Science Education, 32(10), 1337–1363.
Lee, J. (2007). Two worlds of private tutoring: The prevalence and causes of after-school mathematics tutoring in Korea and the United States. Teachers College Record, 109(5), 1207–1234.
Leow, C., Marcus, S., Zanutto, E., & Boruch, R. (2004). Effects of advanced course-taking on math and science achievement: Addressing selection bias using propensity scores. American Journal of Evaluation, 25(4), 461–478.
Luyten, H. (2006). An empirical assessment of the absolute effect of schooling: Regression-discontinuity applied to TIMSS-95. Oxford Review of Education, 32(3), 397–429.
Lynn, R., & Mikk, J. (2007). National differences in intelligence and educational attainment. Intelligence, 35(2), 115–121.
Mohammadpour, E. (2012). A multilevel study on trends in Malaysian secondary school students' science achievement and associated school and student predictors. Science Education, 96(6), 1013–1046.
Park, H., Lawson, D., & Williams, H. E. (2012). Relations between technology, parent education, self-confidence, and academic aspiration of Hispanic immigrant students. Journal of Educational Computing Research, 46(3), 255–265.
Pugh, G., & Telhaj, S. (2008). Faith schools, social capital and academic attainment: Evidence from TIMSS-R mathematics scores in Flemish secondary schools. British Educational Research Journal, 34(2), 235–267.
Rindermann, H. (2008). Relevance of education and intelligence at the national level for the economic welfare of people. Intelligence, 36(2), 127–142.
Rodriguez, M. C. (2004). The role of classroom assessment in student performance on TIMSS. Applied Measurement in Education, 17(1), 1–24.
Schmidt, W. H., Cogan, L. S., Houang, R. T., & McKnight, C. C. (2011). Content coverage differences across districts/states: A persisting challenge for U.S. education policy. American Journal of Education, 117(3), 399–427.
Wang, Z., Osterlind, S. J., & Bergin, D. A. (2012). Building mathematics achievement models in four countries using TIMSS 2003. International Journal of Science and Mathematics Education, 10(5), 1215–1242.
Wiseman, A. W., & Anderson, E. (2012). ICT-integrated education and national innovation systems in the Gulf Cooperation Council (GCC) countries. Computers & Education, 59(2), 607–618.