A Grand Experiment in Reading Instruction: Interim Report 1


by Dick Schutz - June 01, 2012

The UK and the US are implementing very different models for teaching reading, constituting a natural experiment. Preliminary results of the experiment are reported here.

The experiment is a natural planned-variation design involving two treatment models: the U.K. model and the U.S. model.


U.K. MODEL


The U.K. government is committed to teaching all children to read by Year/Grade 2 using alphabetic code-based instruction termed “synthetic phonics.” A 40-item screening check administered to all children at the end of Year/Grade 1 will mark progress toward this goal, and the check will be administered again at the end of Year/Grade 2 to children who did not pass it the first time. The check will be rolled out in June 2012. It is low stakes, takes about 10 minutes or less per child, and is administered by classroom teachers as a routine instructional activity.


U.S. MODEL


The U.S. government’s “Race to the Top” initiative extends reading instruction from kindergarten to Grade 12. The commitment is that all students will graduate from high school “college- and career-ready” by 2020. Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects have been formulated, and “new and better” tests to be administered annually in Grades 3–8 and 11 are being constructed. The tests will be rolled out at the end of the 2014–2015 school year. The tests are high stakes, with security precautions, and the results are to be used, at least in part, to evaluate teachers for tenure and compensation purposes.


EVENTS IN THE UNITED KINGDOM


The government is making available 3,000 GBP in matching funds to each school (with a further 10% for orders placed by March 30, 2012) for instructional resources and training. The catalog of products and training can be accessed at http://content.yudu.com/Library/A1ukc2/SystematicSyntheticP/resources/index.htm?referrerUrl=. The matched funding will run until March 30, 2013. The Department for Education has issued two reports documenting a pilot tryout of the screening check in 300 schools in June 2011. A descriptive report of the pilot can be accessed at https://www.education.gov.uk/publications/eOrderingDownload/DFE-RR159.pdf. The pilot found no major obstacles for either teachers or children in the administration of the check. A technical report can be accessed at http://media.education.gov.uk/assets/files/phonics%20screening%20check%202011%20pilot%20technical%20report.pdf.


The technical report deals primarily with methodological psychometric matters, but it also includes some substantive findings. Eighteen forms of the check were constructed. The contents of a sample form are shown in Table 1.


Table 1. Sample Screening Check


Section 1: tox, bim, vap, ulf, geck, chorm, tord, thazz, blan, steck, hild, quemp, shin, gang, week, chill, grit, start, best, dentist

Section 2: hooks, voo, jound, terg, fape, snemp, blurst, spron, stroft, day, slide, newt, phone, blank, trains, strap, scribe, rusty, finger, starling


A cut score for Pass was set between 30 and 34, depending on the form. Setting a cut score is in the psychometric tradition. However, the screening check constitutes a Guttman-like scale rather than a comparative scale. That is, a capable reader could read all 40 items; any score less than 40 represents a capability flaw. An analog is the Snellen eye test of visual acuity used in screening for driver’s licenses. “Perfect vision” is 20/20, but a cut score of 20/40 is set for a driver’s license.
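The distinction can be sketched in a few lines of Python. The sketch is purely illustrative: the cut score shown, the function names, and the example responses are assumptions for the illustration, not part of the check's scoring materials.

```python
# Illustrative sketch: two readings of a 40-item check score.
# The cut score and the example responses are hypothetical.

CUT_SCORE = 32  # pilot cut scores fell between 30 and 34, depending on the form

def comparative_result(score: int) -> str:
    """Psychometric-tradition reading: a single pass/fail decision."""
    return "pass" if score >= CUT_SCORE else "fail"

def guttman_like_result(item_responses):
    """Guttman-like reading: every missed item marks a specific
    grapheme/phoneme correspondence still to be taught."""
    return [i for i, correct in enumerate(item_responses, start=1) if not correct]

# Hypothetical child: 34 items correct, the last 6 missed.
responses = [True] * 34 + [False] * 6
print(comparative_result(sum(responses)))  # "pass" -- yet instruction is not finished
print(guttman_like_result(responses))      # items 35-40 still need teaching
```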

 

Although the check is being administered at the end of Year/Grade 1, it is appropriate at any age to determine if an individual can handle the grapheme/phoneme correspondences that constitute the English alphabetic code, the link between the spoken and written English language. A cut score of 30 on the test is low, but it is not unreasonable for Year/Grade 1 children; they still have another year of instruction in what in the United Kingdom is termed “Key Stage 1.”


The mean score on the 18 forms of the check was 25.5, with a standard deviation of 10.9. That’s a low mean and a high standard deviation. A total of 31% of the Year/Grade 1 children were above the cut score. Unfortunately, no score-frequency information was reported, so the magnitude of the instructional task remaining for Year/Grade 2 is not known. The important consideration is that the check identified children who needed further instruction. When the check is rolled out in June of this year (2012), that information will be available to the children’s Year/Grade 2 teachers. The results will also be reported to parents by each school.
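To make concrete the kind of score-frequency tabulation whose absence is noted above, here is a minimal sketch in Python. The scores are fabricated for illustration; real figures would have to come from the child-level pilot records.

```python
# Sketch of a score-frequency tabulation and the share at or above the cut score.
from collections import Counter

cut_score = 32  # hypothetical, within the reported 30-34 range
scores = [8, 14, 17, 21, 22, 25, 26, 28, 30, 33, 35, 36, 38, 40, 40]  # hypothetical

frequency = Counter(scores)  # number of children at each score point
at_or_above = sum(1 for s in scores if s >= cut_score)

print(sorted(frequency.items()))  # the distribution Year/Grade 2 teachers would need
print(f"{100 * at_or_above / len(scores):.0f}% at or above the cut score")
```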


A total of 43% of the schools involved in the pilot reported that the check had helped them identify pupils with decoding issues of which they had not previously been aware. That bodes well. What does not bode well: one quarter of the schools reported that they did not plan to take any action to change their reading instruction, citing concerns about the check's suitability and a feeling that it would add nothing to their current knowledge. Another quarter reported that they regularly reviewed reading instruction regardless of the check. Another quarter reported that they planned to make changes to their reading instruction in light of the check, and the remaining quarter reported that they might make changes but felt it necessary to wait for the results of the June 2012 check before making any firm decisions.


The check constitutes a test that teachers can and should teach to. If children can read the words on the check, they can read any English text with the same understanding as if the text had been read to them. The check is testing the instruction, not the children. That’s a switch from conventional achievement testing, in which students, rather than the instruction they received, are tested, and achievement or the lack of it is attributed to students and/or parents rather than to the instruction the students received. To pass the check, children have to be able to handle the grapheme/phoneme correspondences that comprise the alphabetic code; that capability is what teachers have to teach to if children are to pass the check. Table 2 sheds some light on the instructional determinants of the results.


Table 2. Teaching Practice


 

We always encourage children to use phonics as the strategy to decode unfamiliar phonically regular words: N = 1,989 children; 37% met the standard.

We encourage children to use a range of cueing systems, such as context or picture cues, as well as phonics: N = 5,964 children; 30% met the standard.

We always teach phonics in discrete sessions: N = 2,258 children; 36% met the standard.

We mostly teach phonics in discrete sessions, sometimes integrating phonics into literacy sessions/other curriculum work: N = 4,870 children; 30% met the standard.


Since 2006, Parliament has mandated that alphabetic code-based instruction, termed synthetic phonics, be used in teaching reading, with the instruction starting in reception/kindergarten. The data in Table 2 indicate that, by a wide margin, teachers are interpreting this as continuing the “mixed methods” they were employing before the legislation, with detrimental consequences. Alarmingly, though, the results for teachers who reported teaching phonics explicitly and in discrete sessions were not much better than those for teachers who were not.


Schools and teachers participating in the pilot were required to report which instructional program(s) they were using, and the technical report states that one of the purposes of the pilot was “to identify which phonics programmes are currently taught in schools participating in the pilot and how these are delivered.” The same specification is also in the framework for the check. However, the technical report includes no information on program usage.


The absence of program usage information is puzzling and troubling, because this information is key in increasing instructional effectiveness and in monitoring instructional costs. Such information is necessary to get inside the black box of reading instruction. The technical report provides a few hints of what is going on within the black box, but only a quick peek. More analysis is needed.


Table 3 provides information on the “usual” demographic disaggregation of interest.


Table 3. Disaggregation of Subgroups


  

Gender: Male, N = 4,258, 30% met the standard; Female, N = 4,241, 33% met the standard.

Free meal eligible: No, N = 6,903, 34% met the standard; Yes, N = 1,562, 22% met the standard.

Special needs: No SN, N = 7,091, 36% met the standard; SN, N = 1,374, 10% met the standard.

English as an additional language: English, N = 6,948, 31% met the standard; Other, N = 1,502, 33% met the standard.


Girls perform slightly better than boys. This replicates general findings in both the United Kingdom and the United States.


The percentage of free-meal-eligible children in the pilot (18%), a proxy for socioeconomic status, is much lower than for U.S. students (in 2010, 31.7 million eligible out of a total K–12 enrollment of 49.6 million, or 64%). As in the United States, performance is considerably lower for eligible children than for noneligible children. This finding differs from that obtained in an earlier U.K. administration of the check in 16 “best case” schools, where little difference was found among different SES settings. Again, more analysis is needed.
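Both percentages can be checked directly from the figures already cited, the Table 3 counts for the pilot and the 2010 U.S. enrollment numbers; a quick arithmetic sketch:

```python
# Reproducing the free-meal-eligibility proportions quoted in the text.
uk_eligible, uk_not_eligible = 1_562, 6_903  # pilot counts from Table 3
uk_share = uk_eligible / (uk_eligible + uk_not_eligible)
print(f"U.K. pilot: {uk_share:.0%} free-meal eligible")  # ~18%

us_eligible_m, us_enrollment_m = 31.7, 49.6  # 2010 U.S. figures (millions) cited above
print(f"U.S.: {us_eligible_m / us_enrollment_m:.0%} eligible")  # ~64%
```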


The surprising information in the special needs data is that as many as 10% of these children passed the check. Whatever those children’s special needs were, they were not connected to reading achievement.


The biggest surprise in the disaggregation is the performance of English as an additional language (EAL) children. The EAL children outperformed their English-language peers by about the same margin as girls, in the aggregate, outperformed boys. At this point, one can only speculate why. My guess is that EAL children perceive less of a difference between words and pseudo-words, and that their tendency to guess and to use other faulty shortcuts is lower than that of their English-language peers.


In sum, in the tradition of educational research, the U.K. findings to date are “provocative.” Somewhat less traditionally, the findings clearly point the way to further analyses that will further clarify what goes on within the black box of reading instruction.


EVENTS IN THE UNITED STATES


The United States is implementing the four assurances of Race to the Top:


Increase teacher effectiveness and address inequities in the distribution of highly qualified teachers.

Establish and use pre-K through college and career data systems to track progress and foster continuous improvement.

Make progress toward rigorous college- and career-ready standards and quality assessments.

Support targeted, intensive support and effective interventions to turn around schools identified for corrective action and restructuring.


Insofar as reading instruction is concerned, the operative considerations are the Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects and the “new and better tests” that are being developed under the auspices of two Consortia of States.


The standards and new tests will not be rolled out until the 2014–2015 school year. Meanwhile, events are creating additional opportunities for variations in the natural experiment. Secretary of Education Arne Duncan is granting waivers to states seeking relief from the No Child Left Behind requirement of adequate yearly progress. To receive the waiver, a state must agree to adopt college- and career-ready standards, focus on 15% of their most troubled schools, and create guidelines for teacher evaluations based in part on student performance.


Some states have applied for and been granted a waiver; other states are in the process of applying; and others have indicated that they will not apply. This will yield at least three treatment variations.


In addition, whereas most states have adopted the national (Common Core State) standards, some have not; some are reneging on their adoption, and others are likely to do so as the cost of the implementation looms closer. That will yield at least two treatment variations.


Some states and local education agencies received School Improvement Grants (Investing in Innovation, i3), which varied in amount, some as large as $50 million. These “innovations” will constitute a third source of treatment variations.


Finally, what has not happened may well shape the entire U.S. treatment. The U.S. Congress has, for all practical purposes, deferred reauthorization of the Elementary and Secondary Education Act until 2013. The outcome of the congressional and presidential elections in November 2012 will be far more consequential in determining the shape of the reauthorization than any professional or technical consideration.


CODA


The United Kingdom and the United States are on very different reading instruction trajectories. The natural experiment is in progress, but the results are still very preliminary. Further reports will be forthcoming.


Acknowledgments


The background and scope of the experiment were described in a previous Teachers College Record article (Schutz, 2012); only a brief recap of the design is needed for purposes of this interim report.


References


Schutz, D. (2012). A grand educational experiment in reading instruction: Toward methodology for building the capacity of pre-collegiate schooling. Teachers College Record. Retrieved from http://www.tcrecord.org (ID Number: 16667).




About the Author
Dick Schutz
3RsPlus, Inc.

DICK SCHUTZ is President of 3RsPlus, Inc., a firm conducting R&D and constructing educational products. He was formerly Professor of Educational Psychology at Arizona State University and Executive Director of the Southwest Regional Laboratory for Educational Research and Development. He has served as founding editor of the Journal of Educational Measurement, founding editor of the Educational Researcher, and editor of the American Educational Research Journal. His recent technical papers can be accessed from the Social Science Research Network at http://ssrn.com/author=1199505.
 