Instructional Uses of Computers for Writing: The Effect of State Testing Programs


by Michael Russell & Lisa Abrams - 2004

Over the past two decades, the presence and use of technology in the workplace and in schools have increased dramatically. At the same time, the importance of test-based educational accountability has also increased. Currently, formal testing programs are used in 49 states. Some observers have raised concerns that testing programs used to make high-stakes decisions about students and/or schools have an adverse impact on instructional practices. This study examines the extent to which teachers believe they are modifying instructional uses of computers for writing in response to state testing programs. Through a national survey of teachers, the study finds that a substantial percentage of teachers believe they are decreasing instructional uses of computers for writing as a result of paper-based state tests. Teachers also report that the computing skills of students in urban and poor-performing schools are less developed than those of students in suburban schools. The combined effect is that the students who most need opportunities in school to develop skills in writing with computers may not be receiving those opportunities, as a result of teachers’ responses to paper-based state tests.


In 1983, the authors of A Nation at Risk identified a major shortcoming of the U.S. educational system: a failure to prepare students for an increasingly technological, global, and competitive workplace (National Commission on Excellence in Education, 1983). Nearly 20 years later, the most popular remedy to this perceived crisis is the establishment of educational standards and accountability systems linked to those standards. To date, content standards or curriculum frameworks have been established in 49 states and educational accountability systems have been established in all 50 states. Although the federal government has shied away from establishing national standards, accountability has become a critical component of national educational policy. While the specifics of accountability systems vary by state, student testing stands at the center of all accountability programs. This dominance is made clear in Education Week’s now annual attempt to rate the quality of state standards and accountability systems. As Orlofsky and Olson (2001) point out, the factors that influence ratings for standards and accountability include:


Whether the state tests students


Whether the tests are norm-referenced or criterion-referenced


The subject areas tested (English, mathematics, science and social studies tests are required to receive ‘‘full credit’’)


The types of test items used (multiple-choice, open-ended, essay, portfolio, etc.)


The extent to which the tests are aligned with the state standards in elementary, middle, and high school


Whether the state requires school report cards (ratings of schools)


Whether the state rates, rewards, sanctions, and/or provides assistance to schools based on student test scores


The emphasis placed on testing by Education Week is clearly evident in President Bush’s No Child Left Behind Act of 2001 (Public Law No. 107–110). Approved by both houses of Congress and signed into law in January 2002, the legislation requires, as a condition of federal education funding, that states implement tests for all students in Grades 3–8 in reading and mathematics. As stated in the White House’s summary of the legislation, ‘‘These systems must be based on challenging State standards in reading and mathematics, annual testing for all students in Grades 3–8, and annual statewide progress objectives ensuring that all groups of students reach proficiency within 12 years’’ (White House, 2002). Although the president’s education policy does not stipulate how states should use test scores, the legislation itself and the rhetoric surrounding it exemplify the extent to which many education and political leaders equate educational accountability with frequent testing.


The assumption underpinning the establishment of standards and test-based accountability systems is that they a) motivate teachers and schools to improve student learning; and b) encourage teachers to focus on specific types of learning. Some observers have raised concerns that the focus on specific types of learning too often translates into ‘‘teaching to the test.’’ As Shepard (1990) notes, however, teaching to the test means different things to different people. In many cases, state and local educational leaders, as well as classroom teachers, interpret this phrase to mean ‘‘teaching to the domain of knowledge represented by the test’’ (p. 17) rather than teaching only the specific content and/or items that might appear on the test. Under this definition, many educational leaders would argue that one goal of testing is to influence what is taught by teachers. Based on interviews with state testing directors in 40 high-stakes states, Shepard writes:


When asked, ‘‘Do you think that teachers spend more time teaching the specific objectives on the test(s) than they would if the tests were not required?’’ the answer from the 40 high-stakes states was nearly unanimously, ‘‘Yes.’’ The majority of respondents went on to describe the positive aspects of this more focused instruction. ‘‘Surely there is some influence on the content of the test on instruction. That’s the intentional and good part of testing, probably.’’ . . . Other respondents (representing about one third of the high-stakes tests) also said that teachers were spending more time teaching the specific objectives on the test but cast their answer in a negative way: ‘‘Yes. There is some definite evidence to that effect. I don’t know that I should even say very much about that. There are some real potential problems there. . . . Basically the tests do drive the curriculum.’’ (p. 18)


But beyond influencing what and, potentially, how teachers teach, state accountability programs that rely heavily on test results can have negative consequences. Recognizing this potential for negative consequences, Linn, Baker, and Dunbar (1991) suggest that the extent to which unintended consequences, whether positive or negative, result from the standards-based assessment system should be examined. Among the negative consequences, Herman, Brown, and Baker (2000) list increases in retention, increases in dropout rates, narrowing of the curriculum, and decreased attention to topics and subjects not tested. In addition, some observers have noted that the high stakes associated with some state-level testing programs lead to questionable educational practices such as focusing instruction on test-taking skills, falsely classifying poor-performing students as special education students so that their scores are excluded from averages, altering test administration conditions, providing inappropriate instruction during testing, and, in some extreme cases, altering student response sheets.


In the remainder of this article, we focus on potential changes in teachers’ instructional practices in response to state test-based accountability systems in just one area: instructional use of computers for writing. We have opted to focus on a single topic for three reasons:


1. The importance of technology in the workplace has increased rapidly over the past two decades;


2. Schools have invested heavily in acquiring computer-based technologies and are thus in a position to use computers as a tool to improve student learning, especially in the area of writing; and


3. There is growing evidence that paper-based tests severely underestimate the performance of students accustomed to writing with computers.


Given the importance of technology in the workplace and the increasing use of computers in schools to improve student learning, the fundamental question addressed here is the extent to which teachers believe they are altering instructional uses of computers for writing in response to the current delivery mode of state tests on paper. Before addressing this question, however, we present evidence to support the three observations that drive this investigation.




IMPORTANCE OF TECHNOLOGY IN THE WORKPLACE


As Bennett (2002) observes, ‘‘In the business world, almost everywhere one looks new technology abounds: computers, printers, scanners, personal digital assistants, mobile phones, and networks for these devices to plug into’’ (p. 1). Over the past two decades, use of computers in the workplace has increased dramatically. The percentage of people using computers in the workplace nearly doubled from 24.6% in 1984 to 45.8% in 1993 and then rose to 56.7% in 2001 (Newburger, 1997; Cooper & Victory, 2001). In 2001, the four most common uses of computers in the workplace were Internet/e-mail, word processing/desktop publishing, spreadsheets/databases, and calendar/scheduling. Projecting 5 years into the future, Henry and colleagues (1999) predicted that about half of the workforce would work for companies that are either producers or users of computer-related products or services.


The increased reliance on technology in business has led several blue ribbon panels and business leaders to call for better preparation of today’s students to participate in an increasingly technological workplace by placing greater attention on the development of skills requisite for a digital economy. For example, the President’s Information Technology Advisory Committee (2001) calls for the federal government to make ‘‘the effective integration of information technology with education and training’’ a national priority. Among several recommendations, the Secretary’s Commission on Achieving Necessary Skills (SCANS) Report (Secretary’s Commission on Achieving Necessary Skills, 1991) encourages schools to prepare students to work with a range of technologies and recognizes the role computer-based technologies play in supporting communication, acquisition of new knowledge, and reasoning. Most recently, the No Child Left Behind legislation provides $700 million in aid to schools for educational technology and stipulates the need to identify ways in which it can be used to increase student learning.




SCHOOL INVESTMENT IN COMPUTER-BASED TECHNOLOGY


Although schools have been slower in acquiring computer-based technologies than has the workplace, the presence and use of computers in schools has increased rapidly. Schools had 1 computer for every 125 students in 1983, 1 for every 9 students in 1995, 1 for every 6 students in 1998, and 1 for every 4.2 students in 2001 (Glennan & Melmed, 1996; Market Data Retrieval, 1999, 2001). Today, some states such as South Dakota report a computer-to-student ratio of 1:2 (Bennett, 2002).


As the availability of computers in schools has increased, so too has their use. A national survey of teachers indicates that in 1998, 50% of K–12 teachers had students use word processors, 36% had them use CD-ROMs, and 29% had them use the World Wide Web (Becker, 1999). More recent national data indicate that 75% of elementary school students and 85% of middle and high school students use a computer in school (U.S. Department of Commerce, 2002).


Despite considerable debate as to the impact that computer use has on student learning, there is a growing body of research that suggests the impact is generally positive, particularly in the area of writing. The research on computers and writing suggests many ways in which writing on computers may help students produce better work. Although much of this research was performed before large numbers of computers were present in schools, formal studies report that writing with a computer can increase the amount that students write and the extent to which they edit their writing (Daiute, 1986; Etchison, 1989; Vacc, 1987), which, in turn, leads to higher quality writing (Hannafin & Dalton, 1987; Kerchner & Kistinger, 1984; Williamson & Pence, 1989). More recently, a meta-analysis of research on computers and writing conducted between 1992 and 2002 reports significant positive effects on the quality (effect size = .41) and quantity (effect size = .5) of student writing when computers are used to help students develop their writing skills (Goldberg, Russell, & Cook, 2003). This positive effect is strongest for students with learning disabilities, early elementary students, and college students.
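These effect sizes are standardized mean differences. As a point of reference (this is the general formula, not necessarily the exact estimator used in the meta-analysis), the standardized difference between a computer-writing group and a paper-writing comparison group is commonly computed with a pooled standard deviation:

\[
d = \frac{\bar{X}_{\text{computer}} - \bar{X}_{\text{paper}}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

Read this way, an effect size of .41 means that students who wrote with computers scored, on average, about four-tenths of a standard deviation higher on writing quality than the comparison group.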




MODE OF ADMINISTRATION EFFECT ON STUDENT PERFORMANCE ON STATE TESTS


Despite the increasing presence and use of computers in schools, particularly for writing, state testing programs require that students produce responses to open-ended and essay questions using paper and pencil. There is increasing evidence, however, that tests that require students to produce written responses on paper underestimate the performance of students who are accustomed to writing with computers (Russell & Haney, 1997; Russell, 1999; Russell & Plati, 2001, 2002). In a series of randomized experiments, this mode of administration effect has ranged from an effect size of about 0.4 to just over 1.0. In practical terms, the mode of administration effect found in the first study indicated that when students accustomed to writing on computer were forced to use paper and pencil, only 30% performed at a ‘‘passing’’ level; when they wrote on computer, 67% ‘‘passed’’ (Russell & Haney, 1997). In a second study, the difference in performance on paper versus on computer for students who could keyboard approximately 20 words a minute was larger than the amount students’ scores typically change between Grade 7 and Grade 8 on standardized tests. However, for students who were not accustomed to writing on computer and could only keyboard at relatively low levels, taking the tests on computer diminished performance (Russell, 1999). Finally, a third study, which focused on the Massachusetts Comprehensive Assessment System (MCAS) Language Arts tests, demonstrated that removing the mode of administration effect for writing items would have a dramatic impact on the study district’s results. As Figure 1 indicates, based on 1999 MCAS results, 19% of the fourth graders classified as ‘‘Needs Improvement’’ would move up to the ‘‘Proficient’’ performance level. An additional 5% of students who were classified as ‘‘Proficient’’ would be deemed ‘‘Advanced’’ (Russell & Plati, 2001, 2002).



Figure 1. Mode of Administration Effect on Grade 4 MCAS Results (Source: Russell and Plati, 2001)




IMPACT OF STATE TESTING PROGRAMS ON COMPUTER USE IN SCHOOLS


In January 2000, Russell and Haney (2000) discussed several problems that result from the mode of administration. Among these problems are:


Mismeasuring the performance of students and, in turn, misclassifying the achievement level of students accustomed to writing with computers


Underestimating the impact that the use of computers for writing has on students’ writing skills


Decreasing the use of computers for writing so as to minimize the mode of administration effect


With respect to the third issue, Russell and Haney (2000) provided two lines of evidence that teachers in two schools had already begun to reduce instructional uses of computers so that students become less accustomed to writing on computers. In one case, following the introduction of the new paper-and-pencil test in Massachusetts, the Accelerated Learning Laboratory (a K–8 school in Worcester that was infused with computer-based technology) required students to write more on paper and less on computer. Fearing that students who write regularly with a computer might lose penmanship skills, a principal in a second school increased the amount of time teachers spent teaching penmanship and decreased the amount of time students wrote using computers.


To examine the extent to which teachers believe they are altering instructional uses of computers for writing in response to state-level testing programs, data collected as part of a larger national survey of teachers were examined. The analyses presented below begin by focusing on responses by teachers in Massachusetts, which was oversampled in the national study. We begin with a focus on Massachusetts for three reasons. First, research on the mode of administration effect has been conducted in Massachusetts and has been the focus of discussion in several Massachusetts newspapers and during several conferences conducted by the Massachusetts Superintendents Association. It has also been discussed publicly by political leaders in Massachusetts. Second, Massachusetts State Department of Education officials have considered, but have thus far declined, requests from several school and business leaders to allow students to use computers during the written portions of the state test. Third, the State Legislature approved, and the Governor vetoed, funding to study further the mode of administration effect in Massachusetts schools. Together, it is likely that these events have made substantial numbers of teachers and school leaders in Massachusetts aware of the mode of administration effect and, in turn, may influence their instructional practices.


To examine the extent to which the findings in Massachusetts generalize to other states with other types of testing programs, a second set of analyses focuses on the national sample and examines the relationship between the stakes associated with the accountability program and teachers’ instructional uses of technology for writing. A third set of analyses focuses specifically on teachers who work in states with testing programs that place high stakes on schools and on students, just as Massachusetts does with the MCAS.




METHODOLOGY


As part of a larger study conducted by the National Board on Educational Testing and Public Policy (NBETPP), an 80-item survey was used to collect information on teachers’ attitudes and opinions about state testing programs (Pedulla, Abrams, Madaus, Russell, Ramos, & Miao, 2003). Many of the items in the survey were designed to collect information about the beliefs of classroom teachers concerning the influence of their state’s test on classroom instruction and student learning. The survey primarily consisted of statements or questions for which a Likert response scale was employed to assess the relative strength or intensity of opinion. For the majority of the items, teachers were asked to indicate whether they strongly agreed, agreed, disagreed, or strongly disagreed with statements relating to test-based educational reform.
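As an illustration of how responses to such Likert items are typically summarized for reporting (the variable names and data below are hypothetical, not drawn from the NBETPP data files), the ‘‘agree or strongly agree’’ percentages reported later in this article can be computed by collapsing the four response options into two categories:

# Hypothetical sketch: collapse a four-point Likert item into agree/disagree
# and compute the percentage agreeing. Names and values are illustrative only.
import pandas as pd

responses = pd.Series(
    ["strongly agree", "agree", "disagree", "strongly disagree", "agree"],
    name="do_not_use_computers_item",
)

collapse = {
    "strongly agree": "agree",
    "agree": "agree",
    "disagree": "disagree",
    "strongly disagree": "disagree",
}
collapsed = responses.map(collapse)

# Percentage of respondents who agree or strongly agree (the figure reported in the tables).
pct_agree = 100 * (collapsed == "agree").mean()
print(f"{pct_agree:.1f}% agree or strongly agree")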


Among the several Likert-response questions, two focused specifically on instructional uses of technology.


Teachers in my school do not use computers when teaching writing because the state-mandated writing test is handwritten.


My school’s (district’s) policy forbids using computers when teaching writing because it does not match the format of the state-mandated writing test.


Three other items focused on the use of computers.


What percent of your students have a computer at home?


What percent of your students prefer to write first drafts using a computer?


What percent of your students can keyboard moderately well (20 words per minute or more)?


The survey was administered by mail during January and March, 2001. A systematic approach that included one follow-up mailing and an incentive was used to encourage participation in the study (Dillman, 2000).



SAMPLING


A multistage sampling design that included 12,000 teachers nationwide was employed. The sampling frame was stratified according to the stakes attached to the state-wide testing program, type of school (e.g., elementary, middle, and high school), school location (e.g., urban and non-urban) and, finally, by subject/content area for the high school teachers only (see Pedulla et al., 2003, for a detailed description of the sampling frame).
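A minimal sketch of this kind of stratified selection appears below; the frame is simulated and the fixed per-stratum sample size is an illustrative assumption, not the study’s actual allocation (see Pedulla et al., 2003, for the real design).

# Illustrative stratified sampling: draw a fixed number of teachers from each
# stratum defined by stakes category, school level, and school location.
# The frame is simulated; it is not the actual NBETPP sampling frame.
import random

random.seed(1)

frame = [
    (teacher_id,
     random.choice(["High/High", "High/Moderate", "Moderate/Low"]),
     random.choice(["elementary", "middle", "high"]),
     random.choice(["urban", "non-urban"]))
    for teacher_id in range(10_000)
]

# Group teachers by stratum (all fields except the identifier).
strata = {}
for record in frame:
    strata.setdefault(record[1:], []).append(record)

# Draw up to n teachers from every stratum.
n_per_stratum = 50
sample = []
for members in strata.values():
    sample.extend(random.sample(members, min(n_per_stratum, len(members))))

print(f"{len(sample)} teachers sampled across {len(strata)} strata")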


When classifying states according to the stakes associated with their testing program, two types of stakes were considered: stakes for students and stakes for schools. In each case, stakes were classified into three categories: high, moderate, and low. The term high stakes refers to state regulated or legislated sanctions for schools, teachers, and/or students. Examples of high-stakes decisions are using test results to determine whether (1) a student receives a high school diploma, (2) a student is promoted to the next grade, or (3) a school remains accredited (Heubert & Hauser, 1999).


The low stakes category included states with testing programs that do not have any known consequences attached to test scores. The midpoint along the test-stakes continuum is the moderate category. Moderate stakes, for example, include public dissemination of test results, awarding college tuition credit, reporting scores on transcripts, and determining eligibility for driver’s licenses (Shore, Pedulla, & Clarke, 2001). Generally, if the stakes attached to a state test did not meet the characteristics of the high or low categories, the state was classified as moderate. To determine the type of decisions made on the basis of student test performance, a variety of sources were used: (1) state departments of education websites, documents, and personal communications; (2) state testing legislation; and (3) guides to state testing programs produced by the Council of Chief State School Officers and Education Week. Based on the categorization process, a nine-cell testing program matrix emerged, which is displayed in Table 1.
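A sketch of this classification rule is given below; the consequence labels are simplified stand-ins for the definitions above, not the exact criteria used in the study.

# Illustrative classification of a testing program's stakes from the set of
# consequences attached to test scores. Labels are simplified examples.
HIGH_CONSEQUENCES = {"graduation", "grade promotion", "school accreditation"}

def classify_stakes(consequences):
    """Return 'high', 'moderate', or 'low' for a set of reported consequences."""
    if consequences & HIGH_CONSEQUENCES:
        return "high"
    if not consequences:              # no known consequences attached to scores
        return "low"
    return "moderate"                 # e.g., transcripts, tuition credit, public reporting

print(classify_stakes({"graduation", "scores on transcripts"}))  # -> high
print(classify_stakes({"scores on transcripts"}))                # -> moderate
print(classify_stakes(set()))                                    # -> low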


Also incorporated into the sampling stratification was an oversample of Massachusetts teachers: 1,000 teachers were selected from this state. These teachers were selected based on the same criteria of grade level, subject area, and school location. Table 2 presents the sampling framework, numbers, and percentages of teachers in the population, and numbers and percentages sampled.



DESCRIPTION OF TEACHER PARTICIPANTS


Of the 12,000 teachers who received the national survey, 4,195 returned usable surveys, a response rate of 35%.1 As Table 3 indicates, the teachers varied widely with respect to personal characteristics and their professional experience.



Table 1. Categorization of State Testing Programs According to Student and School Stakes

Stakes for Schools: High
  Stakes for Students, High: Alabama (s), California*, Delaware*, Florida, Georgia*, Indiana*, Louisiana, Maryland*, Massachusetts*, Mississippi, Nevada, New Jersey, New Mexico, New York, North Carolina, South Carolina, Tennessee, Texas, Virginia*
  Stakes for Students, Moderate: Arkansas, Connecticut, Illinois, Michigan, West Virginia, Pennsylvania
  Stakes for Students, Low: Colorado*, Kansas, Kentucky, Missouri, Oklahoma*, Rhode Island, Vermont*

Stakes for Schools: Moderate
  Arizona*, Alaska*, Ohio, Minnesota, Washington*, Wisconsin*, Oregon, Hawaii, Maine, Montana, Nebraska, New Hampshire, North Dakota, South Dakota, Utah*, Wyoming

Stakes for Schools: Low
  Idaho*, Iowa

*Indicates that the program is not yet fully in place.



The overwhelming majority of the teachers were middle-aged females with considerable teaching experience. Approximately 67% of those who responded were over 40 years old. In addition, 40% of the teachers had more than twenty years of teaching experience. In comparison with the national population, the teachers who completed the NBETPP survey were generally comparable in terms of age, race/ethnicity, the type of school in which they worked (elementary, middle, or high school), and teaching experience. However, more English and mathematics teachers at the high school level responded to the survey. This result is not surprising, given that most state testing programs focus on English and mathematics. In addition, roughly the same numbers of teachers from science, social studies, and special education completed surveys, providing numbers sufficient to make comparisons across the sampled high school content areas.



Table 2. Basic Sampling Frame, with Oversampling of 1,000 Teachers in Massachusetts

                 Total Number    % of          % of      Number of
                 of Teachers     Population    Sample    Sampled Teachers
High/High          1,488,226        56.83       18.33       2,200
High/Moderate        392,672        14.99       18.33       2,200
High/Low             238,417         9.10       18.33       2,200
Moderate/High        320,514        12.24       18.33       2,200
Moderate/Low         122,060         4.66       18.33       2,200
Massachusetts         57,097         2.18        8.33       1,000
Total              2,618,986       100.00       99.98      12,000





Table 3. Characteristics of Survey Respondents [1]

RESPONDENT CHARACTERISTICS               N       % of Respondents    % of Population

GENDER [2]
  Male                                   764     18                  26
  Female                                 3,396   81                  74

AGE [3]
  20-30                                  520     12                  11
  31-40                                  816     19
  41-50                                  1,325   32                  67 (are 40 or older)
  51-60                                  1,356   32
  60+                                    130     3

RACE/ETHNICITY [3]
  African American                       298     7                   7
  American Indian/Alaskan Native         57      1                   1
  White                                  3,621   86                  91
  Asian/Pacific Islander                 39      1                   1
  Hispanic                               199     5                   4

GRADE LEVEL [2]
  Elementary School                      2,448   58                  60
  Middle School                          836     20                  40 (secondary)
  High School                            911     22

CONTENT AREA OF HIGH SCHOOL TEACHERS [3, 4]
  English                                368     40                  24
  Math                                   214     23                  17
  Science                                139     15                  13
  Social Studies                         165     18                  13
  Special Education                      149     16                  2

TEACHING EXPERIENCE (years) [3]
  1                                      71      2                   17 (5 years or less)
  2-3                                    284     7
  4-8                                    723     17                  Average is 16 years
  9-12                                   508     12
  13-20                                  898     22
  20+                                    1,679   40                  46

SCHOOL LOCATION [5]
  Urban                                  1,109   26                  32
  Suburban                               1,793   43                  38
  Rural                                  1,304   31                  30

TESTING STAKES FOR TEACHERS, SCHOOLS, DISTRICTS / STAKES FOR STUDENTS [5]
  High/High                              2,549   61                  60
  High/Moderate                          642     15                  15
  High/Low                               355     9                   9
  Moderate/High                          471     11                  12
  Moderate/Low                           180     4                   4

1. Numbers are weighted using estimates of the national population.
2. Population estimates based on NEA Rankings & Estimates: Rankings of the States 2000 and Estimates of School Statistics 2001.
3. Population estimates based on NEA Status of the American Public School Teacher, 1995-96: Highlights and Digest of Education Statistics, 2000.
4. Total percent of respondents exceeds 100 because some high school teachers reported teaching more than one content area.
5. Market Data Retrieval population estimates, fall 2000.






Two sets of sampling weights were applied, using the probability of selection from (1) the national teaching population, and (2) the populations of the state testing programs to provide for a more accurate representation of the teaching force. Weights were calculated by taking the inverse of the probability that the teacher would be selected from these populations. The national population weights were applied when estimating teachers’ responses nationwide, while state testing program weights were used when making comparisons among the different types of testing programs. (See Pedulla et al., 2003, for a full description of the sample weights).
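A minimal sketch of that calculation, using the population and sample counts from Table 2, is shown below (the code is illustrative, not the weighting program used in the study).

# Design weights computed as the inverse of the selection probability,
# using the population and sample counts from Table 2.
cells = {
    # stakes cell: (teachers in population, teachers sampled)
    "High/High":     (1_488_226, 2_200),
    "High/Moderate": (392_672, 2_200),
    "High/Low":      (238_417, 2_200),
    "Moderate/High": (320_514, 2_200),
    "Moderate/Low":  (122_060, 2_200),
    "Massachusetts": (57_097, 1_000),
}

for cell, (population, sampled) in cells.items():
    p_selection = sampled / population   # probability that a given teacher in the cell is sampled
    weight = 1 / p_selection             # each respondent stands for this many teachers
    print(f"{cell:14s} weight = {weight:7.1f}")

Applied to the survey responses, weights of this kind make each stakes cell count in proportion to its share of the teaching population rather than its share of the (deliberately oversampled) questionnaire returns.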


It is important to note the evolutionary nature of state testing programs. Since this national survey was administered, state testing programs have reached different points in their implementation. Thus, specific state classifications made at the time of the study may not reflect the current situation. The findings about various stakes levels, however, should generalize to states that have those stakes levels now.




RESULTS


The survey items examined in this study focused on two general topics: teachers’ beliefs about instructional uses of computers for writing in response to state testing programs and teachers’ perceptions about student access to computers, use of computers, and keyboarding skills.


For instructional uses of computers for writing, the data were analyzed in three ways. First, the sample of teachers in Massachusetts was analyzed. Second, the national sample was analyzed with a specific focus on examining differences by type of testing program (as defined by the stakes for students and for schools). Third, the sample of teachers from states that place high stakes on schools and high stakes on students (High/High) was analyzed to examine the extent to which results in Massachusetts are representative of High/High testing programs in general. Note that the Massachusetts teachers were not included in the analysis of either national data or data from High/High states.


When examining results for Massachusetts and for High/High testing programs, data were analyzed in four ways. First, descriptive statistics for the total group were generated. Then, results were disaggregated by (a) the location of the school (urban, suburban, or rural); (b) the grade level of the school (elementary, middle, or high); and (c) the performance level of the school. To classify the school’s performance level, teachers’ responses to the following question were used: ‘‘How do your school’s results on the state-mandated test compare to those of other schools in your state?’’ The response options included: Above Average, Average, and Below Average.2


Before examining the results, we would like to observe that a minority of teachers indicated that they believe teachers have decreased instructional use of computers for writing in response to the state testing program. However, since the instructional practices of even a relatively small number of teachers can have an impact on a substantial number of students in their classrooms, we have opted to focus the presentation of the findings on the negative rather than the neutral response of teachers. In doing so, we recognize that within each of the analyses presented below, the majority of teachers report that they believe that teachers have not altered their instructional uses of computers for writing in response to the state testing program.



STATE TESTING PROGRAMS AND INSTRUCTIONAL USES OF COMPUTERS FOR WRITING


Results for Massachusetts


For the 1,000 teachers in Massachusetts who were mailed surveys, 381 completed surveys were received.



Table 4. Computer Use and Testing in Massachusetts

                             Do Not Use Computers      Policy Forbids Computer Use
                      N      Agree       Disagree      Agree       Disagree
All MA Teachers       381    22.4%       77.6%         8.3%        91.7%
School Location
  Urban Teachers      115    20.4        79.6          14.1        85.2
  Suburban Teachers   202    25.2        74.8          5.2         94.8
  Rural Teachers      64     18.5        81.5          7.8         92.2
School Performance
  Above Average       98     19.5        80.5          3.2         96.8
  Average             156    23.3        76.7          6.6         93.4
  Below Average       112    24.5        73.5          14.6        85.4
School Type
  Elementary School   211    27.0        73.0          11.4        88.6
  Middle School       76     16.5        83.5          2.8         97.2
  High School         94     15.7        84.3          4.4         95.6

Note. A chi-square was performed to test for statistical differences within each set of comparisons. Shaded cells indicate statistical differences at the .05 level.



As Table 4 indicates, 22.4% of the Massachusetts teachers report that they agree or strongly agree that they do not use computers when teaching writing because the state-mandated writing test is handwritten. In addition, 8.3% of the Massachusetts teachers report that they agree or strongly agree that their school or district forbids using computers when teaching writing because it does not match the format of the state writing test. When disaggregated by the location of the school, the percentage of teachers agreeing that they do not use computers because the state test is handwritten ranges from 18.5% of teachers in rural schools, to 20.4% in urban schools, and 25.2% in suburban schools. The percentage of teachers reporting that their school or district forbids use of computers for writing shows a different pattern, with the percentage agreeing or strongly agreeing ranging from 7.8% in rural schools and 5.2% in suburban schools to 14.1% in urban schools.


Based on these data, it appears that (a) close to one-quarter of Massachusetts teachers report that they are not using computers to teach writing because the state test is handwritten; (b) decisions by teachers or administrators not to use computers for writing because of the state test occur less frequently in rural schools; (c) urban schools are nearly three times more likely than suburban schools to have policies in place that forbid the use of computers to teach writing; and (d) suburban teachers are more likely to report that they decide on their own not to use computers to teach writing because the test requires students to handwrite their responses.
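The note to Table 4 reports chi-square tests of such differences. A sketch of that kind of test appears below, using scipy and approximate agree/disagree counts reconstructed from the Table 4 percentages (the counts are rounded, so this is illustrative rather than a reproduction of the study’s analysis).

# Illustrative chi-square test of independence between school location and
# agreement with the "do not use computers" item. Counts are reconstructed
# from Table 4 (N x percent agreeing) and rounded.
from scipy.stats import chi2_contingency

observed = [
    # agree, disagree
    [23, 92],    # urban teachers    (N = 115, 20.4% agree)
    [51, 151],   # suburban teachers (N = 202, 25.2% agree)
    [12, 52],    # rural teachers    (N = 64, 18.5% agree)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")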


National Results


For the 12,000 teachers across the nation who were mailed surveys, 4,195 completed surveys were received. Of these, 381 teachers were from Massachusetts. For the nationwide analyses reported here, the Massachusetts teachers were removed from the sample.


As Table 5 indicates, 30.2% of the teachers nationwide report that they agree or strongly agree that they do not use computers when teaching writing because the state-mandated writing test is handwritten. In addition, 4.4% of teachers report that they agree or strongly agree that their school or district forbids using computers when teaching writing because it does not match the format of the state writing test. The percentage of teachers nationwide who agree that they do not use computers because of the format of the state test is about 7.8 percentage points higher than in Massachusetts. However, the percentage agreeing that their school and/or district has a policy that forbids computer use because of the state test is almost two times higher in Massachusetts than it is across the nation.


When disaggregated by the location of the school, the percent of teachers agreeing that they do not use computers because the state test is handwritten is similar across urban and rural schools and is lower in suburban schools. However, as seen in Massachusetts, teachers in urban schools are more likely to report that their school or district has a policy that forbids use of computers for writing because of the state test. When disaggregated by performance level, teachers in high performing schools are the least likely to report that they do not use computers for writing because of the test while teachers in average performing schools are the most likely. However, a lower percentage of teachers in average schools report policies that forbid use of computers for writing. Conversely, a higher percentage of teachers in low performing schools report policies that forbid computer use for writing. The differences among high, average, and low performing schools are smaller within the national sample as compared to Massachusetts.



Table 5. Computer Use and Testing Nationwide

                             Do Not Use Computers      Policy Forbids Computer Use
                      N*     Agree       Disagree      Agree       Disagree
All Teachers Nationwide 4100 30.2%       69.8%         4.4%        95.6%
School Location
  Urban Teachers      1080   32.0        68.0          7.1         92.9
  Suburban Teachers   1732   27.4        72.6          2.8         97.2
  Rural Teachers      1288   32.3        67.7          4.3         95.7
School Performance
  Above Average       1417   27.0        73.0          4.2         95.8
  Average             1437   33.8        66.2          3.8         96.2
  Below Average       912    31.1        68.9          6.0         94.0
School Type
  Elementary School   2395   33.1        68.9          4.4         95.6
  Middle School       723    30.5        69.5          4.8         95.2
  High School         827    21.7        78.3          4.2         95.8

*Note that the N’s for the national sample are weighted. Although the 381 MA teachers are excluded from the national analyses, their weighted contribution to the national sample represented 95 teachers.
A chi-square was performed to test for statistical differences within each set of comparisons. Shaded cells indicate statistical differences at the .05 level.


Differences Among Testing Programs


It is important to note that the types of testing programs vary widely across states. To develop a sense of how practices and policies vary across types of testing programs, teachers were grouped by the stakes associated with their state’s program. As described in greater detail above, the stakes associated with a state testing program vary at the student level and the school level. For this reason, differences among test-stakes levels are examined separately according to stakes for students and stakes for the school. The combined effect of student- and school-level stakes is also examined.


State testing programs were placed into one of three categories based on the ways test scores were used to make decisions at the student-level. The three groups included (a) high stakes, including decisions about diplomas or retention in grade level; (b) moderate stakes, including college tuition credit, reporting scores on transcripts, and eligibility for driver’s license; and (c) low stakes, indicating that no known decisions about individual students are made based on state test scores.


State testing programs were placed into one of three categories based on the ways in which test scores are used to make decisions at the school and/or teacher level. The three groups include (a) high stakes, including sanctions, receivership, and financial awards; (b) moderate stakes, including public display of school averages and/or ranking of schools absent decisions or actions directed at individual schools or teachers; and (c) low stakes, indicating that results are not publicized or used to make decisions about schools or teachers. Note that only one state (Idaho) was categorized as having low stakes for schools and teachers. For this reason, the low stakes category was not included in the analyses.


Considering both stakes for students and stakes for schools, five groups were formed: (1) high stakes for schools and high stakes for students (high/high); (2) high stakes for schools and moderate stakes for students (high/mod); (3) high stakes for schools and low stakes for students (high/low); (4) moderate stakes for schools and high stakes for students (mod/high); and (5) moderate stakes for schools and low stakes for students (mod/low). Table 6 shows that as the stakes for students associated with a state testing program increase, the percentage of teachers who report that they opt not to allow their students to use computers for writing also increases. Similarly, teachers who work in states that place high stakes on schools are much more likely to report that they do not have students use computers for writing than are teachers in moderate-stakes states.


In particular, Table 6 shows that teachers in high/high states are over 1.5 times more likely to report that they do not use computers for writing than are teachers in mod/low states. With respect to policies that forbid computer use, Table 6 indicates that teachers who teach in states that place high stakes at the school level are more likely to report that such policies exist. It is important to note, however, that only a small percentage of teachers in all settings report that such policies exist.


High/High States


Similar to the analyses conducted for Massachusetts, differences in instructional use of computers were disaggregated by school location and performance level for all teachers in high/high states. As Table 7 shows, rural teachers in high/high states are more likely to report that they do not allow students to use computers for writing due to the test format.



Table 6. Computer Use and Testing Nationwide by Stakes

                             Do Not Use Computers      Policy Forbids Computer Use
                      N      Agree       Disagree      Agree       Disagree
Student Stakes
  High                3018   31.3%       68.7%         4.4%        95.6%
  Mod                 642    28.9        71.1          5.3         94.7
  Low                 535    24.1        75.9          4.2         95.8
School/Teacher Stakes
  High                3449   31.6        69.4          4.8         95.2
  Mod                 651    22.7        77.3          2.6         97.4
Student/School Stakes
  High/High           655    33.1        68.9          4.7         95.3
  High/Mod            753    28.9        71.1          5.3         94.7
  High/Low            766    26.1        73.9          4.5         95.5
  Mod/High            800    23.5        76.5          2.2         97.8
  Mod/Low             837    20.1        79.9          3.7         96.3

Note. A chi-square was performed to test for statistical differences within each set of comparisons. Shaded cells indicate statistical differences at the .05 level.



Urban teachers are also more likely than suburban teachers to report that they do not use computers. In contrast, urban teachers are three times more likely, and rural teachers more than twice as likely, as suburban teachers to report that their schools have policies that prohibit instructional uses of computers for writing. When examined by performance level, teachers in average-performing schools are more likely to report that they do not use computers for writing than are teachers in high- and low-performing schools. Similarly, teachers in high-performing schools are more likely to report that they do not use computers for writing than are teachers in schools that report performing below average. Finally, teachers in elementary and middle schools are more likely to report that they do not use computers for writing than are teachers in high schools.



Table 7. Computer Use and Writing in High/High States

                             Do Not Use Computers      Policy Forbids Computer Use
                      N      Agree       Disagree      Agree       Disagree
All High/High Teachers 655   33.1%       68.9%         4.7%        95.3%
School Location
  Urban Teachers      177    32.2        67.8          7.5         92.5
  Suburban Teachers   285    29.5        70.5          2.5         97.5
  Rural Teachers      171    40.3        59.7          5.9         94.1
School Performance
  Above Average       234    32.0        68.0          5.1         94.9
  Average             224    38.4        61.6          3.8         96.2
  Below Average       145    29.7        70.3          5.5         94.5
School Type
  Elementary School   375    35.2        64.8          4.3         95.7
  Middle School       132    34.9        65.1          5.4         94.6
  High School         127    24.4        75.6          5.5         94.5

Note. A chi-square was performed to test for statistical differences within each set of comparisons. Shaded cells indicate statistical differences at the .05 level.




DISCUSSION


Test-based accountability programs are intended to improve teaching and learning in schools and to prepare students to perform successfully in the workplace. As the workplace becomes increasingly oriented to technology, it is important that schools provide students sufficient opportunity to develop knowledge and skills in this area. As we have seen, a significant number of teachers believe that the format of state testing programs may work against providing such opportunity in the classroom.


The aim of state test-based accountability programs is to improve learning in all schools, but attention is often focused almost exclusively on urban and low-performing schools. While many states have reported increases in students’ scores on state tests (e.g., Texas, Massachusetts, and California, among others, have all celebrated gains over the past few years), the analyses presented above suggest that these gains may come at the expense of opportunities in school to develop skills in using computers, particularly for writing. Although the majority of teachers report that they do not believe that the use of computers for writing has been affected by the state testing program, 22.4% of teachers in Massachusetts and 30.2% of teachers across the nation do believe that they are not using computers for writing because the test is handwritten. Across the nation, a higher percentage of teachers in urban locations and in lower performing schools, as compared to suburban and high performing schools, believe they have decreased instructional use of computers for writing because of the format of the state test. Across the nation, teachers working in states that place high stakes on test results for students and/or schools are more likely to believe that they are not using computers for writing because students must use paper and pencil for the state test. These patterns are cause for concern for a number of reasons.


In the workplace, computers are the primary tool used to produce text and to communicate written forms of information. In many schools, the use of computers for writing is also rapidly increasing. Beyond providing students with the technological skills commonly used in the workplace, an extensive body of research documents that the use of computers can have positive effects on writing. Students write more, are more willing to write more often, revise what they write more extensively, show their writing to fellow students more frequently, and respond critically to each other’s work more often. In effect, when students use computers regularly over an extended period of time, they become better writers.


Although computers are expensive and most school districts do not have enough computers for all students in a given grade to write on computers at the same time, schools are beginning to acquire relatively inexpensive writing devices like AlphaSmarts (portable word processors that can upload and download text to a computer or printer) so that large numbers of students have appropriate access to computer-based technologies. In Massachusetts, these investments are consistent with the Department of Education’s benchmark for technology, which sets a student-to-computer ratio of 5:1 for 2003. Whether it is full-scale computers or devices like AlphaSmarts, it is clear that schools are investing in technology with the goal of improving student learning.


Unfortunately, the fact that state tests administered on paper do not appropriately measure the achievement of students who regularly use computers for writing makes it difficult to assess the impact the use of technology has on student achievement. Even more troubling, the analyses presented above indicate that the mismatch between how some students regularly produce writing in the classroom and how they are required to produce writing on a test is leading many teachers and some schools to dissuade students from writing with a computer in school.


As part of a separate study, researchers in the Technology and Assessment Study Collaborative have interviewed over 200 district and school leaders (including superintendents, curriculum directors, principals, department heads and head librarians) in 20 Massachusetts districts about issues related to district technology programs. Although none of the questions focused on the state testing program, at least one leader in eight districts mentioned that teachers were either not using or had decreased use of computers because of the program. In one district, the superintendent spoke at length about how teachers stop using the writing lab several weeks before the test and instead require students to write using paper in order to prepare them for the test. In another district, the English Department Head recounted the story of a teacher whose classroom was equipped with 21 computers several years ago so that all students could write with a computer at the same time. Over the past few years, the teacher ‘‘severely backtracked on having kids write constantly with technology, due to the MCAS exam and even AP English essays. Kids came back initially from exams and expressed frustration that they did not know how to write with paper and pencil. So now they are being provided with many of their writing experiences without technology’’ (personal interview, March 21, 2002). This practice was also reported in a district that has implemented a 1:1 laptop program for students in fourth and fifth grade. For most of the year, students produce writing on their own laptop. The month prior to testing, however, students are discouraged from writing on their laptops in order to prepare them for the state test (personal interview, December 13, 2002).


To reduce the mode of administration effect and to promote instructional uses of computers for writing, Russell and colleagues have suggested that students be given the option of composing written responses for state-level tests either with paper or with a word processor. This policy was adopted over a decade ago by the Province of Alberta in Canada. In Massachusetts, however, state educational officials have raised concerns that this policy might place students in urban and/or underfunded districts at a disadvantage because they might not have computers available to them in school if they were given the option of using them during testing. The analyses presented above, however, suggest that these students are already being placed at a disadvantage because their teachers discourage them from using computers for writing in school and their schools are nearly three times as likely to have policies that prohibit use of computers for writing. Moreover, as shown in Table 8, teachers in urban and low-performing schools report that fewer of their students have access to computers at home or write regularly using computers as compared to students in suburban and high-performing schools.



Table 8. Students and Computers in High/High States

                             Home Computer                Write with Computer
                      N      0-30     31-60    61-100     0-30     31-60    61-100
All High/High         1034   39.8%    28.5%    31.7%      72.2%    13.9%    13.9%
School Location
  Urban Teachers      269    59.9     23.7     16.3       76.2     12.7     11.2
  Suburban Teachers   454    23.5     28.5     48.0       64.4     15.2     20.3
  Rural Teachers      256    47.7     33.      18.7       81.9     12.7     5.3
School Performance
  Above Average       364    21.7     31       47.3       63.8     16.4     19.8
  Average             355    44.0     30.2     25.9       75.4     12.7     11.9
  Below Average       216    65.7     22.2     12.1       80.8     11.1     8.2

Note. Percentages represent the percentage of teachers. As an example, 39.8% of teachers indicate that fewer than 30% of their students have computers in their homes.



Thus, the same students whose teachers are more likely not to have them use computers for writing because of the format of the state test are significantly less likely to have computers in their homes and therefore are less able to develop proficiency in writing with computers (or to work with computers more generally) outside of school. Despite rising test scores, school policies that limit the use of computers, coupled with limited access to computers at home, underprepare many students in urban and low-performing schools for the workplace.


In addition, since the mode of administration effect reported by Russell and colleagues only occurs for students who are accustomed to writing with computers, and because students in suburban and high-performing schools are much more likely to be accustomed to writing with computers, state tests likely underrepresent the difference in academic achievement between urban and suburban schools. Despite concerns about the need to close the gap between urban and suburban schools, the current testing policy that prohibits use of a computer calls into question the use of state tests to document this gap.


Clearly, there is not an easy solution to the multiple problems caused by administering a test only on paper. While we have advocated that state-level testing programs provide students the option of using paper and pencil or a computer (without access to spelling- and grammar-checkers), this approach presents additional challenges. To begin with, students need to have appropriate access to computers. In many cases, students who use computers regularly for writing attend schools that have a sufficient number of computers to be used for testing. But this is not always the case.


Moreover, as noted by Powers et al. (1994), allowing examinees to produce responses in handwritten form or as computer-text also complicates the scoring process. Although conventional wisdom is that readers give preference to neatly formatted computer-printed responses, several studies have documented that readers tend to award higher scores to handwritten responses (Powers, et al., 1994; Russell, 2002a, 2002b). Research has, however, shown that this presentation effect can be reduced or eliminated with careful training prior to scoring (Powers, et al., 1994; Russell, 2002b).


A third problem arises due to the way in which most testing companies score open-ended responses. While the practices vary to some extent, careful procedures are employed to maintain the link between a student’s identification, his/her responses, and the scores awarded to the responses. In Massachusetts, student identifications are placed onto a test booklet. In order to preserve the link between students and their responses, they are required to write all their responses in the test booklet. In fact, if students run out of room in their test booklet, they are instructed to erase and rewrite their work so that it will fit into the test booklet. Additional sheets attached to the test booklet are not accepted. Allowing students to compose responses using a computer would require testing programs to alter their current administration procedures in order to preserve the link between students and their responses.


While none of these problems is insurmountable, state testing programs need to begin to address them immediately. To do anything less will likely lead more teachers and schools to decrease the use of computers for writing and consequently further exacerbate the technology-use gap between urban and suburban students. Continued decreases in the instructional use of computers for writing, particularly in urban and underperforming schools, will perpetuate an educational process that underprepares many students for a workplace that is increasingly dependent on technology-oriented skills. On the other hand, given the role state tests play in shaping teaching in the classroom, a move to incorporate computers as part of state writing tests would likely increase instructional uses of computers for writing, particularly in those classrooms that currently do not use computers regularly throughout the writing process. Although the transition to computer-based testing of writing skills must be conducted thoughtfully, it is incumbent upon state departments of education, testing companies, and policymakers to implement state testing programs that support classroom instruction and student learning and, in turn, provide for more accurate assessments of student achievement.


We thank the National Board on Educational Testing and Public Policy for providing access to its national survey of teachers. We also thank the Atlantic Philanthropies for their generous support of the National Board on Educational Testing and Public Policy, and the 4,195 teachers who took the time to complete the survey. Finally, we thank Walt Haney, George Madaus, and the anonymous reviewers for their comments and suggestions.




Notes


1 Hoffman, Assaf, and Paris (2001) conducted a similar study in Texas to explore the costs of standards-based reform for teachers and instruction. They used mail surveys with a reminder and a follow-up mailing; they did not, however, provide an incentive. Their sample was significantly smaller (n = 750), and they received 200 usable surveys (a 27% response rate).


2 Given the large number of teachers, schools, and states, it was not feasible to collect actual school results on the state tests. As a check on the validity of teachers’ classification of their school test results, the correlation between teachers’ categorization and actual school performance was calculated for teachers sampled in Massachusetts. For this analysis, each school’s actual performance on the 2000 MCAS Language Arts test was categorized into one of three groups: within 0.5 standard deviations of the state mean, more than 0.5 standard deviations above the state mean, and more than 0.5 standard deviations below the state mean. The correlation between the teachers’ categorizations and the actual performance levels was .67. The exact agreement between teachers’ categorizations and the actual performance levels was 67.3%. Interestingly, teachers under-rated their school performance three times more often than they over-rated it.
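A sketch of the kind of check described in this note is given below; the school performance data are simulated, and only the ±0.5 standard deviation cutoffs are taken from the note.

# Hypothetical sketch of the Note 2 validity check: compare schools' actual
# performance category (based on +/- 0.5 SD around the state mean) with
# teachers' self-reported category, then compute a correlation and exact agreement.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 200
z = rng.normal(size=n_schools)   # simulated school performance in state SD units

def categorize(score):
    if score > 0.5:
        return "Above Average"
    if score < -0.5:
        return "Below Average"
    return "Average"

actual = [categorize(s) for s in z]
# Simulated teacher reports: match the actual category two-thirds of the time.
teacher = [a if rng.random() < 2 / 3 else "Average" for a in actual]

codes = {"Below Average": 0, "Average": 1, "Above Average": 2}
r = np.corrcoef([codes[a] for a in actual], [codes[t] for t in teacher])[0, 1]
exact_agreement = np.mean([a == t for a, t in zip(actual, teacher)])

print(f"correlation = {r:.2f}, exact agreement = {exact_agreement:.1%}")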




References


Becker, H. J. (1999). Internet use by teachers: Conditions of professional use and teacher-directed student use. Irvine, CA: Center for Research on Information Technology and Organizations.


Bennett, R. E. (2002). Inexorable and inevitable: The continuing story of technology and assessment. Journal of Technology, Learning, and Assessment, 1(1). Retrieved March 20, 2004, from http://www.bc.edu/research/intasc/jtla/journal/v1n1.shtml


Daiute, C. (1986). Physical and cognitive factors in revising: insights from studies with computers. Research in the Teaching of English, 20, 41–59.


Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method. New York: John Wiley & Sons.


Etchison, C. (1989). Word processing: A helpful tool for basic writers. Computers and Composition, 6(2), 33–43.


Glennan, T. K., & Melmed, A. (1996). Fostering the use of educational technology: Elements of a national strategy. Santa Monica, CA: RAND.


Goldberg, A., Russell, M., & Cook, A. (2003). The effect of computers on student writing: a meta-analysis of studies from 1992–2002. Journal of Technology, Learning, and Assessment, 2(1). Retrieved March 1, 2003, from http://www.bc.edu/research/intasc/jtla/journal/v2n1.shtml.


Hannafin, M. J., & Dalton, D. W. (1987). The effects of word processing on written composition. The Journal of Educational Research, 80, 338–342.


Henry, D., Cooke, S., Buckley, P., Gill, G., Dumagan, J., LaPorte, S., & Pastore, D. (1999). The emerging digital economy II. Washington, DC: U.S. Department of Commerce.


Herman, J. L., Brown, R. S., & Baker, E. L. (2000). Student assessment and student achievement in the California public school system. CSE Technical Report 519. Los Angeles, CA: Center for the Study of Evaluation, Center for Research on Evaluation, Standards, and Student Testing.


Heubert, J. P., & Hauser, R. M. (Eds.), (1999). High-stakes: Testing for tracking, promotion and graduation. Washington, DC: National Academy Press.


Hoffman, J. V., Assaf, L. C., & Paris, S. G. (2001). High-stakes testing in reading: Today in Texas, tomorrow? The Reading Teacher, 54(5), 482–494.


Kerchner, L. B., & Kistinger, B. J. (1984). Language processing/word processing: Written expression, computers, and learning disabled students. Learning Disability Quarterly, 7(4), 329–335.


Linn, R. L., Baker, E. L., & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20(8), 15–21.


Market Data Retrieval. (1999). Technology in education 1999 (Report issued by Market Data Retrieval). Shelton, CT: Author.


Market Data Retrieval. (2001). Technology in education 2001 (Report issued by Market Data Retrieval). Shelton, CT: Author.


National Commission on Excellence in Education. (1983). A nation at risk: The imperative for educational reform. Washington, DC: U.S. Government Printing Office.


Newburger, E. C. (1997). Computer use in the United States. Washington, DC: U.S. Census Bureau.


Orlofsky, G. F., & Olson, L. (2001). The state of the states. Education Week, 20(17), 86–88.


Pedulla, J., Abrams, L., Madaus, G., Russell, M., Ramos, M., & Miao, J. (2003). Perceived effects of state-mandated testing programs on teaching and learning: Findings from a national survey of teachers. Chestnut Hill, MA: National Board on Educational Testing and Public Policy.


Powers, D., Fowles, M., Farnum, M., & Ramsey, P. (1994). Will they think less of my handwritten essay if others word process theirs? Effects on essay scores of intermingling handwritten and word-processed essays. Journal of Educational Measurement, 31(3), 220–233.


President’s Information Technology Advisory Committee Panel on Transforming Learning. (2001). Report to the President: Using information technology to transform the way we learn. Retrieved April 19, 2002, from http://www.itrd.gov/pubs/pitac/pitac-tl-9feb01.pdf


Russell, M., & Haney, W. (1997). Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil. Educational Policy Analysis Archives, 5(1). Retrieved March 20, 2004, from http://epaa.asu.edu/epaa/v5n3.html


Russell, M., & Haney, W. (2000). Bridging the gap between testing and technology in schools. Education Policy Analysis Archives, 8(19). Retrieved March 20, 2002, from http://epaa.asu.edu/epaa/v8n19.html


Russell, M., & Plati, T. (2001). Effects of computer versus paper administration of a state- mandated writing assessment. Teachers College Record. Retrieved March 20, 2004, from http://www.tcrecord.org/Content.asp?ContentID=10709


Russell, M., & Plati, T. (2002). Does it matter with which I write: Comparing performance on paper, computer and portable writing devices. Current Issues in Education, 5 (4). Retrieved March 20, 2004, from http://cie.ed.asu.edu/volume5/number4/


Russell, M. (1999). Testing writing on computers: A follow-up study comparing performance on computer and on paper. Educational Policy Analysis Archives, 7(20). Retrieved March 20, 2004, from http://epaa.asu.edu/epaa/v7n20/


Russell, M. (2002a). Effects of handwriting and computer-print on composition scores: A follow-up to Powers et al. Chestnut Hill, MA: Technology and Assessment Study Collaborative. Retrieved March 20, 2004, from http://www.bc.edu/research/intasc/publications.shtml


Russell, M. (2002b). The influence of computer-print on rater scores. Chestnut Hill, MA: Technology and Assessment Study Collaborative. Retrieved March 20, 2004, from http://www.bc.edu/research/intasc/publications.shtml


Secretary’s Commission on Achieving Necessary Skills. (1991). What work requires of schools: A SCANS report for America 2000. Washington, DC: U.S. Department of Labor.


Shepard, L. (1990). Inflating test score gains: Is the problem old norms or teaching the test? Educational Measurement: Issues and Practice, 15–22.


Shore, A., Pedulla, J., & Clarke, M. (2001). The building blocks of state testing programs. National Board on Educational Testing and Public Policy Statements, 2(4).


U.S. Department of Commerce. (2002). A nation online: How Americans are expanding their use of the Internet. Washington, DC: Author. Retrieved April 19, 2002, from http://www.ntia.doc.gov/ntiahome/dn/nationonline_020502.htm


Vacc, N. N. (1987). Word processor versus handwriting: A comparative study of writing samples produced by mildly mentally handicapped students. Exceptional Children, 54(2), 156–165.


White House. (2002). Fact sheet: No Child Left Behind Act. Retrieved, May 26, 2002, from http://www.whitehouse.gov/news/releases/2002/01/print/20020108.html


Williamson, M. L., & Pence, P. (1989). Word processing and student writers. In B. K. Britton & S. M. Glynn (Eds.), Computer writing environments: Theory, research, and design (pp. 96–127). Hillsdale, NJ: Lawrence Erlbaum.





Cite This Article as: Teachers College Record Volume 106 Number 6, 2004, p. 1332-1357
https://www.tcrecord.org ID Number: 11575, Date Accessed: 5/22/2022 10:36:04 PM
