Experiments Important Part of Research Mix
May 2004
Politicians are calling loudly for educational improvement, and many insist that more testing and more accountability will usher in an era of improved student achievement.
UW-Madison education professor Geoffrey Borman agrees, to a point.
Developing and improving programs and practices in U.S. schools and classrooms, he says, will also require research methods that
- separate fact from advocacy;
- provide the most believable results; and
- answer with confidence the question, "What works?"
To help answer that question, the U.S. Department of Education's Institute of Education Sciences (IES) is pressing educators to adopt randomized experiments as the preferred research method. The IES aims to advance the field of education research, making it more rigorous in support of evidence-based education.
Borman notes that randomized experiments have become the standard for testing and developing innovations in many fields, most notably medicine. Yet experiments have been used infrequently in education research. Why is this?
For one thing, the work of classroom teachers is not usually driven by scientific knowledge of the efficacy of their practices, Borman says. Instead, it tends to be reinforced by psychic rewards that teachers feel when they reach their students. As a result, key instructional decisions in the classroom are driven and perpetuated by highly subjective criteria that often have no foundation in evidence on what works.
And in a larger context, schooling occurs within a complex system including federal policies, state mandates, district policies, and school-level leadership. How can one improve, or even study, such a complicated system by focusing on the relatively simple causal connections suggested by experimental designs?
A blessing and a curse
The field of education research itself is highly diversified. Borman says this is both a blessing and a curse: It is a blessing in that the various research methods and perspectives create a rich, vibrant, and democratic discourse about what really matters in education and what should be done to improve it.
It is a curse because all too often the diversity in education research is perceived as confusion and contradiction. As a result, little consensus emerges about what needs improvement and how we should go about doing it.
What distinguishes a true experiment from other research methods is the random assignment of different treatments to the individuals or groups of students involved in the study. Another component of the randomized experiment is that it involves conscious manipulation of the environment.
For example, the researcher chooses at random to provide one group of students the opportunity to participate in a pilot test of a new mathematics curriculum while the other group does not receive the innovation. At the end of the trial, students are assessed, and the scores for those from the pilot program are compared to those of students who did not participate. The true impact of the new math curriculum can then be evaluated. A study such as this rules out the bane of social science research, selectivity, which arises when one compares schools, teachers, or students who actively choose the mathematics curriculum to a similar group that did not choose it. In such a non-randomized comparison, it is not clear whether the motivation to choose the new curriculum or the effectiveness of the curriculum itself was the true "cause" of any improvements one might find.
Resistance to experiments
But some writers don't consider randomization practical in education research. Randomized experiments don't fit with their view of schools as complex social organizations. Complex organizations seem to respond better to the tools of management consulting than to those of science. As a result, researchers doing true experiments in education usually work in the disciplines of public health, psychology, economics, and policy sciences.
Random assignments often require teachers and administrators to give evaluators some authority over curriculum, student placements, or pedagogical technique. Yet most educators resist surrendering authority to people they don't consider professionally competent to judge their work.
Educators, parents, and students have political clout. When they are suspicious of the ethics or general merits of random assignment, they have formal and informal channels through which to register their opposition.
If randomization is to be more widely accepted and implemented in education, Borman offers these suggestions:
- The ethical and political dilemma of withholding services must be addressed.
- Randomized field trials must be adapted to fit the messy and complex world of schools and classrooms.
- A stronger centralized federal role is needed to foster and sustain experimentation and improvement of educational practices.
Some options
In many circumstances, Borman says, mixed-method designs can combine the strengths of randomized experiments with qualitative research to uncover the actual school and classroom processes and practices that underlie causal effects. This combination of findings is likely to provide practitioners and policymakers better information about how to improve practice.
Randomized experiments composed of relatively large and heterogeneous samples of schools and districts allow evaluators to test causal effects over a range of contexts. These studies can address empirically the extent to which treatment effects generalize across diverse settings and can generate causal conclusions that are sensitive to context.
Policymakers and practitioners should demand the best evidence on the effects of educational interventions, Borman says. As researchers in nearly every other discipline, and especially in medicine, already acknowledge, the experiment is the best method for establishing the causal effects of innovations. However, randomized experiments are not replacements for the variety of other methods in education research.
Funding for Borman's research was provided by the U.S. Office of Educational Research and Improvement/OERI (now IES) and the Smith-Richardson Foundation's Children and Families at Risk Program.
Some material in this article originally appeared in different form in the Peabody Journal of Education, vol. 77, no. 4, pp. 7-27.