In "What Can Student
Drawings Tell Us About High-Stakes Testing in
Massachusetts?" (Wheelock, Bebell, & Haney, 2000), we
reported on an exploratory study of students' reactions to MCAS,
the Commonwealth's high-stakes test, expressed in drawings done at
the invitation of teachers to "Draw a picture of yourself taking
the MCAS." Summarizing the results of the coding of the drawings,
we reported a rich array of responses. Students' self-portraits
expressed reactions to test content, format, length, and
difficulty. They also conveyed a wide range of affective responses.
In a positive vein, some students portrayed themselves as diligent,
confident thinkers and problem-solvers. A greater number of
drawings, however, depicted students as anxious, angry, bored,
pessimistic, or withdrawn from testing.
Overall, the individuality and diversity of the responses lead
us to consider that students' beliefs, attitudes, and feelings
about testing may well have an impact on their testing behavior and
test scores. Students' responses, however, are rarely factored into
decision making regarding high stakes testing policies. In this
paper we consider students' reactions to testing in light of
differences in students' grade level and schooling experiences and
reflect on how the MCAS drawings challenge policy makers'
assumptions about testing, especially the impact of high stakes on
student motivation.
HIGH-STAKES TESTING AND STUDENT MOTIVATION
Political rhetoric regarding high stakes testing hypothesizes a
simple relationship between curriculum standards, instruction,
testing, and student performance. As the argument goes, once
students are provided with a standardized curriculum, and once
"passing" scores are defined at a "challenging" level, students
will work to meet the standards set for them. Rewards and sanctions
attached to these standards reinforce the value of hard work and
effort (Keller, 2000).
Students' responses to testing, however, render this formula of
doubtful general validity. In practice, student motivation to
achieve depends on a complex mix of individual beliefs, attitudes,
and feelings that interact with personal relationships, classroom
practices, and school routines at every level of schooling (Ames
& Ames, 1984; Ames & Ames, 1985; Anderman & Maehr,
1994; Brophy, 1998; Covington, 1992; Henderson & Dweck, 1990;
Kohn, 1999). Rewards or threats linked to standardized tests may
carry relatively little weight in this more complex picture.
Indeed, a review of the literature on motivation and testing
sponsored by the American Educational Research Association (AERA)
concluded that, contrary to claims that external examinations
inspire greater student effort, such testing not only fails to
energize most students but may precipitate harmful outcomes,
including higher dropout rates (Kellaghan, Madaus, & Raczek,
1996).
Despite this research, assumptions regarding the relationship
between test-based consequences and effort color the lens through which
many educators and policy makers view MCAS scores. Many educators
interpreted low performance in the first years of MCAS testing as a
reflection of the fact that scores did not yet "count" for
graduation. As one Massachusetts educator explained, "In some ways,
the tenth-graders didn't make the kind of progress we had hoped
for.... But they knew this wasn't a hoop they had to jump through"
(Barber, 1999). Likewise, officials at the Massachusetts Department
of Education rationalized low scores on the assumption that because
scores would not determine graduation until 2003, students had yet
to put their effort into the test (Parker, 1999). These
observations reflect a narrow understanding of motivation. In fact,
recent research on testing suggests that such high stakes are not
necessary to persuade students to "take tests seriously" (Russell
& Haney, 1997).
The MCAS drawings suggest that students now in line for the high
stakes consequences of MCAS are aware that MCAS is a critical
"hoop" they must jump through to obtain a high school diploma.
Nevertheless, rather than bolstering a willingness to persist, the
high stakes attached to MCAS may backfire, discouraging the best
effort of many students. Students may choose to withdraw from
testing to preserve a sense of integrity rather than endure what
feels like an insult to their competence. In the long run, many may
also decide that withdrawing from school entirely is preferable to
the possibility of repeating their MCAS failure and the public
humiliation associated with that prospect.
Supporters of high stakes testing make their case, in part, on
the assumption that students will choose the immediate costs of
persisting on a long test they may find "tedious," "tricky," or
"stupid," retaking it over and over if necessary, in order to
ensure the long-term benefits of a high school diploma. Some
students, however, may calculate the costs and benefits in a
different way. For reasons perhaps related to their individual
natures, perhaps to prior testing or schooling experiences that
limit opportunities to learn, the immediate costs of testing for
these students involve anxiety, diminished confidence, anger, and
helplessness. If these costs feel intolerable, students may
withhold effort from testing to obtain relief, even as they are
aware of the consequences of doing so. Eventually, believing they
are unlikely to pass MCAS, those who have little patience with the
prospect of repeating the tests, especially those who are already
overage for their grade, may rationally conclude that "school is
not for me" and abandon school altogether (Clarke, Haney, &
Madaus, 2000; Fine, 1986; Kellaghan, Madaus, & Raczek, 1996;
Wehlage & Rutter, 1986; Wheelock & Dorman, 1988).
INDIVIDUAL STUDENTS' RESPONSES TO MCAS
The invitation to fourth, eighth, and tenth graders to draw pictures of themselves
taking the MCAS elicited highly personal responses. In many cases,
these responses conveyed an intensity not captured by our coding
scheme. For example, the word "angry" used in our coding does not
adequately convey students' feelings more accurately described as
"seething," "pissed off" or "defiant." Likewise, students' drawings
often went beyond a one-dimensional response to communicate several
layers of opinion and emotion. Students coupled portraits of
themselves as disheartened or bored test-takers with a critique of
MCAS as "stupid," "too long," or "annoying." These feelings and
beliefs as expressed in the drawings call into question a number of
assumptions about high stakes testing in Massachusetts, for
example, that scores always accurately reflect students'
accomplishments and that consequences attached to their scores will
motivate students in a consistent manner.
The wide variation in students' responses to MCAS captured in
our sample brings to mind earlier research that has highlighted the
complex relationship between test-making and test-taking. For
example, when Haney and Scott (1987) asked individual students to
explain their answers to particular multiple-choice questions on
standardized tests, they found that depending on students' life
experiences, developmental maturity, understanding of test rules,
and other factors, students' idiosyncratic reactions to test items
could affect their selection of "right" answers. They concluded:
What it is that a test item measures (that is, its content
validity) depends not on what adult experts or critics think
it measures nor on what item statistics suggest about the item but
rather on how individual test-takers perceive and react to the test
or item.... To delve into what it is that a test or test item
measures for particular test-takers requires some kind of
observation or communication with them on an individual basis
(Haney & Scott, 1987, pp. 301-302).
Likewise, students' classroom experiences shape their
perceptions of, and responses to, the fairness of testing. Reporting on
her interviews with students, Thorkildsen (1999) found that their
judgments of whether testing is fair depended, in part, on the
curriculum and instruction they experienced on a day-to-day basis.
Consequently, those taking a test that differs radically in mode or
content from their classroom learning may respond negatively to the
test itself, especially if the test does not allow them to
demonstrate what they know how to do. In Massachusetts, researchers
have documented such a mismatch in Wellesley and Worcester, where
test scores of students using pencil and paper to respond to MCAS
writing prompts significantly under-represent the quality of the
written work the same students do when they use a computer, their
usual mode of classroom work (Russell & Haney, 1997; Russell
& Plati, 2000).
Those who support high stakes testing assume that serious
consequences will cause students to take testing seriously.
However, students do not come to testing as blank slates. They
bring with them beliefs about themselves as learners, opinions
about the value of the test, expectations for success, aspirations
for their future, and feelings about being assessed and judged.
These dispositions may play out in different ways and to different
degrees during testing itself. Thus, Kellaghan, Madaus, and Raczek
(1996: 21) ask, "How likely is it that a high-stakes examination
will motivate and change the behavior of students who are low in
'confidence,' behave 'helplessly,' or who believe their 'ability'
is low and non-malleable?" They conclude, "The extent [to which]
situational and personal factors... affect the motivational process
would lead us to expect considerable variation among individuals in
their reactions to the incentives, penalties, and hurdles
incorporated in a system(s) of external examinations" (Kellaghan,
Madaus, & Raczek, 1996, p. 21).
Students' MCAS drawings remind us of the enormous variability of
students' responses to high stakes testing and highlight the fact
that no single test will motivate all students in the same way. On
one hand, some students may indeed approach MCAS as serious
scholars. Our sample of MCAS drawings suggests that some students,
about 20%, may appear willing to comply with test requirements
without comment. Another 20% work diligently, think about problems,
and feel confident as test-takers. However, for a sizable third
group of students, MCAS may have neither an uplifting nor even a
neutral impact. For these students, MCAS evokes more cynical, even
despairing, responses of boredom, helplessness, anxiety, and anger.
Students who are apprehensive or self-doubting are unlikely to give
their best effort through many days of testing. If lengthy and
tedious testing generates hostility and withdrawal, more students
may choose to disengage from schooling itself (Clarke, Haney, &
Madaus, 2000; Kellaghan, Madaus, & Raczek, 1996).
Students' unique reactions to testing as expressed in the MCAS
drawings stand out in high relief. Such individual responses demand
recognition from policy makers that high stakes testing does not
generate a uniform reaction from the diverse population of students
attending Massachusetts schools. In reality, students' emotional
reactions to testing vary greatly, influencing the ways in which
some engage with the high stakes MCAS so that results may well
underestimate what individual students know and can do.
GRADE-LEVEL DIFFERENCES IN RESPONSES TO MCAS
Research on student attitudes toward testing in Ohio, Michigan, Arizona,
California, and Florida suggests that student responses to
standardized tests also vary considerably depending on students'
ages (Debard & Kubow, 2000; Paris, Lawton, Turner, & Roth,
1991; Paris, Herbst, & Turner, in press; Urdan & Davis,
1997). Typically, elementary students hold positive views of
testing, believe in standardized tests as a reasonable way to
assess their achievement, and are optimistic about their prospects
for success on the tests. However, by early adolescence, many
report a growing skepticism about the effectiveness of standardized
tests to assess their knowledge and skills. These older students
more frequently report that they answer test questions
halfheartedly, fill in "bubbles" at random, withhold effort, and
give up (Paris, Lawton, Turner, & Roth, 1991; Paris, Herbst,
& Turner, in press).
Grade-level differences in students' MCAS drawings suggest
similar patterns. Although fourth graders were more likely than
secondary students to refer to test difficulty and describe MCAS as
"hard," they were also more likely to draw themselves as confident
test-takers, focused on their task, thinking about test questions,
and using test-taking skills. They were also more likely to draw
themselves with a smile on their face, and less likely to convey
generally negative attitudes toward MCAS.
In contrast to the younger students, eighth and tenth graders
more often portrayed themselves as disaffected test-takers. Older
students conveyed nonspecific negative reactions to MCAS at almost
three times the rate of younger students. Although they were less
likely to refer to the difficulty of the test or describe MCAS as
"hard," eighth and tenth grade students were more likely to draw
themselves as angry, bored, or asleep at their desks. They were
also less likely to portray themselves as diligent test-takers,
engaged in thinking or solving problems.
Grade level differences in MCAS drawings suggest that students
for whom MCAS will "count" in a life-changing way are, indeed,
aware that "failing" MCAS could result in repeated attempts to
achieve passing scores at best, and in not graduating from high
school at worst. Drawings by fourth graders conveyed anxiety or
concerns about a bad grade slightly more often than those from
older students. But while the younger students combined anxiety
with a large dose of diligence, students in the secondary grades
combined anxiety with anger and a high rate of "nonspecific
negative" responses. Thus, the knowledge that MCAS "counts" may
register with Massachusetts students; but this awareness, and the
anxiety that may accompany it do not necessarily translate into
greater investment in testing.
Prior research suggests a number of variables that could
contribute to grade level differences in students' responses to
standardized testing (Paris, Lawton, Turner, & Roth, 1991;
Paris, Herbst, & Turner, in press; Urdan & Davis, 1997). By
eighth grade, many students may have taken so many standardized
tests that testing loses its meaning as students become more aware
of the ways in which schools use tests to judge, label, rank, and
compare students (Anderman & Maehr, 1994). Older students'
anger in regard to MCAS may reflect their understanding that test
scores are used to determine grade retention or tracking, practices
that set students apart from their peers and accelerate
disinvestment from schooling (Smith & Shepard, 1989; Oakes,
1985).
DIFFERENCES IN SCHOOLING EXPERIENCES AND RESPONSES TO MCAS
Just as different children respond to schooling in different ways,
different schools offer children different experiences and
resources for learning. With the gap between high- and low-spending
districts standing at nearly $4,000 per student annually, students
in Massachusetts have different kinds and levels of resources
available for learning from one end of the state to another
(Quality Counts, 2000). These differences translate into unequal
opportunities to learn, depending on where students live and the
school they attend (see, for example, Associated Press, 1999;
Coleman, 1999, 2000; Cook, 2000; Daley, 2000; Pressley, 1999).
In turn, differences in learning conditions affect learning itself.
As one Massachusetts student says, "It's easier to learn in a
classroom than in a hallway" (McElhenny, 2000).
Differences in schooling experiences may also affect how
different groups of students perceive MCAS. In our sample, urban
students were more likely than suburban students to describe MCAS
as difficult and overlong. Urban and suburban students were equally
likely to portray themselves as diligent. However, urban students
were less likely to show themselves thinking or solving problems
and more likely to depict themselves anticipating a score for their
work, particularly a low score. Finally, urban students were more
likely to draw themselves as angry, bored, withdrawn from testing,
or relieved that testing was over.
The differing self-portraits of urban and non-urban students,
like the differing profiles of younger and older students, may
reflect, in part, differences in access to appropriate curricula,
resources, or opportunities for test preparation. But just as these
varying classroom experiences may set some students up for
different responses to MCAS, the larger social context of testing
may have an influence on others. For example, recent research
suggests that some students may retreat from situations, including
testing, that threaten to confirm widely held negative stereotypes
about their group. In particular, research psychologist Claude
Steele has found that when African American students fear that
their performance might validate derogatory assumptions about their
capabilities, many, especially the most accomplished, may withdraw
from testing, withhold effort, and produce test results that are
inferior to work they do in less threatening situations (Steele,
1997; Steele & Aronson, 1998). These findings raise concerns
that the high stakes nature of MCAS within an overall climate of
racism could contribute to some minority students' failing to put
their best effort into test-taking, resulting in test scores that
distort students' real achievements. The Massachusetts Department
of Education itself may have reinforced students' fears of
stereotyping by releasing 1999 MCAS results by race, a story
reported widely in the headlines of the state's major newspapers
while the Spring 2000 MCAS testing was in process (Hayward, 2000c).
STUDENTS' RESPONSES TO MCAS, MOTIVATION, AND THE CONTEXT OF SCHOOLING
Encountering resistance to testing from parents, teachers, and
students, policy makers ask, "What's going to spur achievement if
there's no pressure brought on by the high-stakes test?" Without
doubt, encouraging students to put their best effort into their
schoolwork is a worthy goal. However, a strategy tied to a
high-stakes test alone is likely to backfire, especially when
implemented alongside existing practices, policies, and conditions
that actually work against strengthening students' motivation to
learn.
Student effort and engagement depend on many factors. In
relation to testing, for example, student effort benefits when
students know they will receive results and comments immediately
after the test, rather than after a delay of even a day or more
(Haney, 1996). Yet, while students take MCAS in May, results are
not available until the following November. By then, many students
are not only assigned different teachers but may, especially in the
case of eighth graders, be enrolled in an entirely different school
where teachers have little idea of their students' prior learning.
Yet motivation and engagement also depend on other factors not
so immediately connected with testing conditions. Human
relationships that allow for both caring and respect, students'
beliefs that intelligence is not inborn and fixed, the extent to
which students value a particular assignment, and the way students
assess the probability of mastering that assignment all foster
student motivation (Brophy, 1999). If we acknowledge that the
attitudes, beliefs, and habits students develop in their day-to-day
schooling experiences may spill over into the testing situation to
influence students' willingness to invest in MCAS, we must then
consider how certain "regularities" of schooling interact with
testing practices. Further, we must assess prevailing practices and
policies in light of their effect on students' willingness to
engage their best effort in testing. In particular, we must
consider how MCAS testing is positioned in relation to the positive
teacher-student relationships necessary for learning, and in
relation to the sorting practices of many schools, especially in
urban districts, that may contribute to students' developing
beliefs about intelligence that sabotage effort and motivation,
both in day-to-day learning and under testing conditions.
Testing and teacher-student relationships
For many students, the motivation to learn develops largely
within teacher-student relationships that are grounded in what
Theodore Sizer (1996) calls "rigorous caring." Such relationships,
which evolve from teachers' respect for students' integrity and
identity and allow teachers to personalize learning, are essential
to high expectations for teacher commitment and to student learning
(Delpit, 1995). These relationships offer students the social
support necessary for achievement. In fact, schools may push
students to achieve, but without a concomitant degree of social
support, students will not engage and learn at high levels (Lee,
Smith, Perry, & Smylie, 1999). Finally, these relationships are
an essential element of a school culture that encourages and values
student work that meets high standards of quality (Wheelock,
1998).
High stakes testing in Massachusetts may, however, impoverish
the very personal relationships and social support that learning
depends on. In some districts, the push for higher test scores may
limit the time available for teachers to get to know students well.
Explaining how MCAS eats into teacher-student relationships, one
superintendent reports, "It takes us almost a month to do all of
the testing. There certainly is less time for talking about things
and, quite frankly, it is important to talk about things"
(Oliveira, 1999).
A supportive climate of "rigorous caring" may also become more
difficult to realize as MCAS introduces a depersonalized and
competitive culture into schools, one in which teachers, parents,
and students view students' work through the lens of the state's
standardized work samples rather than in the context of carefully
constructed assignments and the genuine demands for quality
teachers make of students in their classrooms (Freedman, 1999).
Moreover, as one well-regarded and trusted middle school principal
has confessed to parents, the emphasis on test scores inclines
educators to form an impression of students in light of their
potential MCAS scores, rather than in terms of their presence as
creative, promising children (Heller, 2000). In the absence of
strong, personal relationships between teachers and students,
students may decide to "not-learn," choosing resistance to learning
as a means of preserving a coherent sense of self (Kohl, 1992). To
the extent that MCAS eats away at positive teacher-student
relationships, students lose a resource that is essential to
motivation and learning.
Overexposure to standardized testing in urban schools
For many Massachusetts students, MCAS is not the only battery of
standardized tests they take. Although suburban or rural educators
report that they may test students in reading and math during the
late elementary years, they add that they typically abandon
standardized testing by the secondary grades. In contrast, urban
districts layer MCAS on top of standardized testing programs
already required in almost every grade. As a result, the time urban
students spend preparing for and taking all standardized tests may
expand to a month or more of the school year, threatening student
motivation to take any test seriously.
In addition to MCAS, many urban students in Massachusetts
already take the Stanford Achievement Tests (SAT9), Metropolitan
Achievement Tests (MAT), or Iowa Tests of Basic Skills (ITBS) every
year through the twelfth grade. Often these tests are given in the
two weeks before MCAS testing in May. During the rest of the year,
many urban students sit for additional standardized testing
specifically for placement in selective programs, special
education, or English as a second language classes. In some
districts, schools participating in special projects may follow
individual student progress using formative writing prompts and
reading tests from the Diagnostic Reading Assessment (DRA) or the
Scholastic Reading Inventory (SRI) up to three times a year, and
formative math and science prompts twice a year. Depending on the
district, grade promotion or acceptance into special programs may
require meeting specific scores on these tests.
Recognizing the harms of continuous mass assessment of students,
the Educational Testing Service has recently cautioned policy
makers about the dangers of testing policies that generate an
avalanche of numbers but provide little information that improves
learning (Barton, 1999). Such warnings echo research showing that
extensive and repetitive testing seems to have a cumulative
negative effect on students' attitudes toward assessment as they
get older (Debard & Kubow, 2000; Paris, Lawton, Turner, &
Roth, 1991; Paris, Herbst, & Turner, in press; Urdan &
Davis, 1997). The multiple testing programs of many urban districts
may compound the age-related effects of overexposure to
standardized testing for urban students, undermining some students'
willingness to engage in MCAS testing.
Testing and student sorting practices: Grade retention and ability grouping
In many schools, test scores and practices that label, sort, and
group students go hand in hand. Using test scores as indicators of
perceived ability, many schools adopt grade retention and ability
grouping practices in the belief that homogeneous classes will
benefit student engagement in learning. In fact, however, such
practices typically affect both achievement and motivation
negatively (Oakes, 1985; Oakes, 1990; Smith & Shepard, 1989).
In districts that depend on these practices, students' willingness
to invest effort in high stakes testing is likely to be
diminished.
An overwhelming body of research underscores that repeating a
grade undermines school engagement, predicts truancy and placement
in low track classes, and contributes powerfully to dropping out
(Heubert & Hauser, 1999; Smith & Shepard, 1989; Weitzman,
et al., 1985). Despite these negative outcomes, thousands of
Massachusetts students, disproportionately from urban schools, are
not promoted with their peers to the next grade every year. By the
time they have taken MCAS in eighth grade, the state's urban
students are nearly four times more likely than suburban students
to have experienced grade retention (Massachusetts Department of
Education, 1990; Wheelock, 1998). Then, as ability grouping and
tracking further define differences in opportunities to learn for
"top" and "bottom" students in the secondary grades, and as
students in different groups are ranked and compared with others,
many students come to believe that the labels attached to their
group reflect their inherent learning capacities (Dentzer &
Wheelock, 1990; Oakes, 1985; Oakes, 1990).
The practice of sorting students based on test scores takes a
toll on the way students think about intelligence and reinforces
beliefs that undermine student motivation to invest in academic
work. Students whose school experiences foster the belief that
intelligence is inborn and fixed are especially vulnerable to
anxiety, helplessness, and loss of motivation when confronted with
new challenges, effects that often become more pronounced after the
elementary grades (Henderson & Dweck, 1990; Midgley, 1993).
Focused on the need to "look smart" so as not to lose their place
at the "top," students in "fast" groups may avoid situations that
require sustained effort when tasks become more difficult than
usual and the "one right answer" is not obvious. Those in the
"slow" groups may also come to view complex tasks as beyond their
capabilities (Brophy, 1998; Dweck, Kamins, and Mueller, 1997; Kohn,
1999; Oakes, 1985).
The overuse of standardized testing, grade retention, and
ability grouping are all inextricably intertwined in an approach to
schooling that separates the "successes" from the "failures."
Separately and together, these practices reinforce beliefs that
some are inherently "smart" and some "not so smart." As a result,
students caught up in this web of practices are vulnerable to
developing beliefs that reinforce self-doubt and debilitate
motivation to learn (Brophy, 1998; Covington, 1992). In turn, these
beliefs put students at risk of diminished motivation to engage in
high stakes testing. No matter how closely schools follow state
curriculum frameworks or prepare students for MCAS, multiple
testing programs, grade retention, and ability grouping limit
students' beliefs about themselves as learners and constitute a
powerful undertow that works against other efforts to help students
approach high stakes test-taking in a positive manner. To the
extent that these practices shape schooling in any given district,
they will color students' responses to MCAS.
Ames, R. & Ames, C. (Eds.). (1984). Research on
Motivation in Education. Vol. 1, Student Motivation. Orlando:
Academic Press.
Ames, C. & Ames, R. (Eds.). (1985). Research on
Motivation in Education. Vol. 2, The Classroom Milieu. Orlando:
Academic Press.
Anderman, E.M. & Maehr, M.L. (1994). Motivation and schooling
in the middle grades. Review of Educational Research 64(2):
Barber, C. (1999). Hampshire scores above state. Daily
Hampshire Gazette, 16 December: http://www.gazettenet.com/12161999/schools/19764.htm.
Barry, S. (2000). Pressure on schools to improve. Springfield
Union-News. 13 April: http://www.masslive.com/news/pstories/ae413mca.html.
Barton, P. (1999). Too Much Testing of the Wrong Kind; Too
Little of the Right Kind in k-12 Education. Princeton, NJ: Policy
Information Center, Educational Testing Service.
Brophy, J. (1998). Motivating Students to Learn. Boston:
McGraw-Hill.
Clarke, M., Haney, W., & Madaus, G. (2000). High stakes
testing and high school completion. NBETPP Statements, Vol.
1, No. 3. National Board on Educational Testing and Public Policy,
Boston College. January.
Coleman, S. (1999). Educators say libraries must turn a page.
Boston Globe, 19 January: B01.
Coleman, S. (2000). School libraries stacked with outdated
volumes. Boston Globe, 18 January: A01.
Cook, N. W. (2000). Cramped school library a lesson in termites,
rainwater, and elbows. Sharon Advocate, 13 January:
Covington, M.V. (1992). Making the Grade. New York:
Cambridge University Press.
Daley, B. (2000). New books with limits; teachers insist texts
stay in class. Boston Globe, 24 January: A01.
Debard, R. & Kubow, P. K. (2000). Impact of Proficiency
Testing: A Collaborative Evaluation. Report prepared for Perrysburg
(OH) Schools. Bowling Green State University and Partnerships for
Delpit, L. (1995). Other People's Children: Cultural Conflict
in the Classroom. New York: New Press.
Dentzer, E. & Wheelock, A. (1990). Locked In/Locked Out:
Tracking and Placement Practices in Boston Public Schools. Boston:
Massachusetts Advocacy Center.
Dweck, C., Kamins, M. & Mueller, C. (1997). Praise,
criticism, and motivational vulnerability. Symposium paper
delivered at the biennial meeting of the Society for Research in
Child Development, Washington, DC.
Fine, M. (1986). Why urban adolescents drop into and out of
public high school. Teachers College Record 87(3). Spring: 393-409.
Freedman, S. (1999). Opinion. Cambridge Chronicle, 22
Gentile, D. (2000a). Monument students working to remove MCAS
requirement. Berkshire Eagle, 14 April: 01.
Gentile, D. (2000b). Some students to shun MCAS. Berkshire
Eagle, 11 April: 01.
Haney, W. (1996). Focusing assessments on teaching and learning.
New Schools, New Communities 12(2), Winter: 11-20.
Haney, W. & Scott, L. (1987). Talking with children about
tests: An exploratory study of test item ambiguity. In Freedle, R.
O. and Duran, R. P. (Eds.). Cognitive and Linguistic Analyses of
Test Performance. Norwood, NJ: Ablex.
Hayward, E. (2000c). Minority pupils lagging on MCAS exams.
Boston Herald, 18 May:
Henderson, V. & Dweck, C. (1990). Motivation and
achievement. In S.S. Feldman & G.R. Elliott (Eds.). At the
Threshold: The Developing Adolescent. Cambridge, MA: Harvard
Kellaghan, T., Madaus, G.F., & Raczek, A. (1996). The use of
external examinations to improve student motivation. Washington,
DC: American Educational Research Association.
Keller, B. (2000). Incentives for test-takers run the gamut.
Education Week, 3 May: 01.
Kohl, H. (1992). I won't learn from you! Thoughts on the role of
assent in learning. Rethinking Schools 7(1): 16-17, 19.
Lee, V.E., Smith, J.B., Perry, T.E., & Smylie, M.A. (1999).
Social support, academic press, and student achievement: A view
from the middle grades in Chicago. Chicago: Consortium on Chicago School Research.
Levin, B. (1993). Students and educational productivity.
Educational Policy Analysis Archives, 1(5), 4 May: http://olam.ed.asu.edu/epaa/v1n5.html.
Massachusetts Department of Education. (April 1990). Structuring
Schools for Student Success: A Focus on Grade Retention.
McElhenny, J. (2000). Senate proposes $350 million increase for
education this year. Associated Press, 15 May.
Midgley, C. (1993). Motivation and middle level schools. In
Pintrich, P.R. and Maehr, M. L. (Eds.). Advances in Motivation
and Achievement, Vol. 8: Motivation and Adolescent Development,
Greenwich, CT: JAI Press.
Oakes, J. (1985). Keeping Track: How Schools Structure
Inequality. New Haven: Yale University Press.
Oakes, J. (1990). Multiplying Inequalities: The Effects of Race,
Social Class, and Tracking on Opportunities to Learn Mathematics
and Science. Santa Monica, CA: RAND.
Oliveira, R. (1999). No time to share: Students say MCAS leaves
no room to talk with teachers. New Bedford Standard-Times, 6
June: 01: http://www.s-t.com/daily/06-99/06-06-99/a01lo004.htm.
Paris, S.G., Lawton, T.A., Turner, J.C., & Roth, J.L.
(1991). A developmental perspective on standardized achievement
testing. Educational Researcher 20(5), June-July: 12-20.
Paris, S. G., Herbst, J.R., & Turner, J.C. (In press.)
Developing disillusionment: Students' perceptions of academic
achievement tests. Issues in Education.
Parker, P. E. (1999). MCAS puts graduation in question,
Providence Journal, 14 December: 1C.
Pressley, D. (1999). Boston Latin breaks ground on library,
media center. Boston Herald, 8 May.
Quality Counts. (2000). Education Week, 13 January.
Russell, M. & Haney, W. (1997). Testing Writing on
Computers: An Experiment Comparing Student Performance on Tests
Conducted via Computer and via Paper-and-Pencil. Educational
Policy Analysis Archives 5(3): http://epaa.asu.edu/epaa/v5n3.html.
Russell, M. & Plati, T. (2000). Mode of Administration
Effects on MCAS Composition Performance for Grades Four, Eight, and
Ten. National Board on Educational Testing and Public Policy,
Boston College: http://www.nbetpp.bc.edu/reports.html
Sizer, T. R. (1996). Horace's Hope: What Works for the
American High School. Boston: Houghton Mifflin.
Smith, M. L. & Shepard, L. (Eds.) (1989). Flunking Grades:
Research and Policies on Retention. New York: Falmer Press.
Steele, C. M. (1997). A threat in the air: How stereotypes shape
intellectual identity and performance. American Psychologist.
Steele, C. M. & Aronson, J. (1998). Stereotype threat and the
test performance of academically successful African Americans. In
Jencks, C. & Phillips, M. (Eds.) The Black-White Test Score
Gap. Washington, DC: Brookings.
Thorkildsen, T. A. (1999). The way tests teach: Children's
theories of how much testing is fair in school. In M. Leicester, C.
Modgil & S. Modgil (Eds.) Education, Culture, and Values,
Vol. III. Classroom Issues: Practice, Pedagogy, and Curriculum.
Urdan, T. & Davis, H. (1997, June). Teachers' and students'
perceptions of standardized tests. Paper presented at the
Interdisciplinary Workshop on Skills, Test Scores, and Inequality.
The Roy Wilkins Center for Human Relations and Social Justice,
University of Minnesota. Minneapolis, Minnesota.
Vigue, D. I. (2000). Race gap endures on MCAS results: Minority
leaders, state plan talks. Boston Globe, 19 May: B01.
Wehlage, G. & Rutter, R. (1988). Dropping out: How much do schools
contribute to the problem? Teachers College Record
87(3). Spring: 374-392.
Weitzman, M. et al. (1985). Demographic and educational
characteristics of inner city middle school problem adolescent
students. American Journal of Orthopsychiatry 55(3).
Wheelock, A. (1998). Safe To Be Smart: Building a Culture for
Standards-Based Reform in the Middle Grades. Columbus, OH: National
Middle School Association.
Wheelock, A., Babell, D. J., & Haney, W. (2000). What can students' drawings tell us
about high-stakes testing in Massachusetts? Teachers College
Record ID Number: 10634.
Wheelock, A. & Dorman, G. (1988). Before It's Too Late:
Dropout Prevention in the Middle Grades. Chapel Hill, NC: Center for Early Adolescence.
Wilgoren, J. (2000). Study finds most in US see college as
essential. New York Times, 4 May.