WCER - Wisconsin Center for Education Research
School of Education at the University of Wisconsin-Madison
Evaluating a Core Reading Program

Geoffrey Borman

October 2009

If you’re a fourth-grader, you could be the victim of an achievement gap equivalent to nearly 3 years of learning, depending on whether you are African-American, Hispanic, White, poor, or non-poor (U.S. Department of Education, 2005).

Despite many efforts to close this gap, early elementary literacy instruction and learning still fail many of America’s poor and minority students.

Educators are trying. But they can’t find all they need in the professional literature on core reading programs. UW–Madison education professor Geoffrey Borman finds that current published research offers few studies that examine the impact of these programs on children’s reading skills.

To help remedy this shortcoming, Borman and colleagues Maritza Dowling and Carrie Schneck evaluated and reported on one such program, Open Court Reading (OCR). OCR is a phonics-based K–6 curriculum grounded in research-based practices. It has been widely used since the 1960s.

But despite its widespread use, OCR had not been evaluated rigorously. Borman and Dowling initiated a study to answer two questions:

  • Is it possible or desirable to use randomized field trials to evaluate widely used core reading programs?
  • How effective is OCR in particular?

During the 2005–06 school year Borman and Dowling studied elementary school classrooms from Grades 1 through 5. This was a randomized controlled trial: Some classrooms were assigned to use OCR, the others were not. The question was, compared with traditional classrooms, do the OCR curricular and professional development materials improve literacy outcomes for elementary students?

Borman’s final sample included 5 schools, from which 49 Grade 1–5 classrooms and between 917 and 923 students participated. (Sample and data attrition claimed some of the control students, the treatment students, and participating classrooms.)

The study findings should prove significant for curriculum leaders, literacy leaders, researchers, and policymakers.

  • Curriculum and literacy leaders: Borman’s study determined that the average student from an OCR classroom outperformed nearly 58% of the students in classrooms that were not assigned to OCR. Overall, students from OCR classrooms scored 0.12 to 0.19 standard deviations higher on reading assessments. (The effect sizes for OCR are essentially equivalent to the impact of the class-size reductions found in the Tennessee Student-Teacher Achievement Ratio [STAR] study.)
  • Researchers and policymakers: Cluster randomized field trials (like this one) involving widely replicated school-based interventions (like OCR) are both possible and desirable for producing unbiased estimates of the effects of educational treatments.
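The two numbers in the first bullet are linked: if reading scores are roughly normally distributed, an average treated student at effect size d sits at the Φ(d) percentile of the control distribution. A minimal sketch of that conversion (my own illustration, not code from the study):

```python
# Sketch: converting a standardized effect size (in SD units) to the
# "outperformed X% of control students" figure, assuming normally
# distributed scores. Not from the study itself.
from statistics import NormalDist

def percentile_of_control(d: float) -> float:
    """Percent of control students outperformed by the average treated student."""
    return NormalDist().cdf(d) * 100

# An effect size of about 0.19 SD lands near the 58th percentile:
print(round(percentile_of_control(0.19), 1))  # → 57.5
print(round(percentile_of_control(0.12), 1))  # → 54.8
```

This is why an effect near the top of the reported 0.12–0.19 range corresponds to the "nearly 58%" figure quoted above.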

These outcomes provide evidence of the promising 1-year effects of OCR on students’ reading outcomes, Borman says. They also suggest that these effects may be replicated across varying contexts with rather consistent and positive results.