The Promises of Value-Added Evaluation in Milwaukee
June 2006

You’re a principal or a school district administrator, and you want more detailed achievement information for individual students, in particular subjects, and in particular classrooms. You want evaluation methods that mesh with the No Child Left Behind Act (NCLB) indicators and performance targets.

Detailed information is becoming more available through the efforts of a unique partnership. For the past 7 years, WCER staff have worked with Milwaukee Public Schools (MPS) district staff to develop the district’s capacity to analyze and use data on students and schools. That partnership has developed a sophisticated system for measuring and tracking the productivity of MPS schools, producing data that form the core of the district’s school report card and accountability system. (NCLB requires school districts that receive federal funds to provide a report card on how their schools and the district as a whole are doing. The report includes the combined test scores of the students at all the district’s schools.)

What is value added?

During the first 3 years of the project (1998–2001), MPS lacked the data needed to launch a comprehensive value-added system. WCER researcher Robert Meyer and colleagues helped build capacity within the district to understand options for using data to measure school performance and for using value-added performance data as part of a district accountability system. Too often, administrators must rely on weak and limited student achievement data, and limited data produce inaccurate estimates of school and program productivity. Value-added measures paint a more detailed and more accurate picture.

Beyond NCLB

Some educators express concern that NCLB accountability provisions are unfair to schools. The law judges schools primarily on the percentage of children who perform at the “proficient” level on state tests. Schools get no credit for students who make substantial gains in a given year but still fall short of the proficiency bar, or for advanced students who continue to progress.

Value-added measures enable states and districts to set and monitor explicit performance objectives for schools serving low-scoring students, students of color, transient students, and other policy-significant groups. That matters because school characteristics such as racial mix and poverty level can influence the rate at which children learn. Unless the analyses account for such differences, they will not accurately isolate the contributions of teachers or schools to student learning.

For example, Milwaukee has a highly mobile student population, in part because of the district’s school choice program. When students move into a district or state late in the year, after the regular standardized assessments have been administered, a well-designed value-added system assesses them at the point of entry and incorporates them into an appropriately generalized value-added model. Historically, these students have not been included in standardized assessments at all.

Better evidence of students’ academic growth is needed not only out of fairness, but also because, combined with measures of absolute achievement, it paints a more accurate picture of school effectiveness.

Equity Issues

Value-added analysis makes it possible to separate out individual student characteristics, including gender, previous test results, number of absences, eligibility for free school meals, length of time in the school, and special education status. It also accounts for school-level characteristics such as the percentage of students receiving subsidized meals and the number of children enrolled in the tested grade.

Conventional assessments don’t measure the degree to which teachers and schools reduce achievement gaps among different student groups (high and low achievers, poor and non-poor, and so on). Equity-oriented value-added indicators, however, make it possible to determine whether the productivity of schools and teachers differs for students with different characteristics. For example, a given school might be very effective with talented and gifted students but less so with students with low prior achievement.
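In broad terms, such adjustments can be carried out with a regression of current test scores on prior scores plus student and school characteristics, with each school’s contribution estimated as a fixed effect. The sketch below uses simulated data and a deliberately simple specification; it illustrates the general idea only and is not the MPS model, whose actual form the article does not detail.

# Minimal sketch of a covariate-adjusted value-added model (simulated data).
# Each school's "value added" is estimated as its fixed effect in a regression
# of current test scores on prior scores and a student characteristic.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_schools = 1200, 6

school = rng.integers(0, n_schools, n_students)        # school assignment
prior = rng.normal(500, 50, n_students)                # prior-year test score
low_income = rng.integers(0, 2, n_students)            # e.g., subsidized-meal eligibility
true_effect = np.linspace(-5, 5, n_schools)            # hypothetical school effects

# Simulated current-year scores: growth depends on prior score, poverty, school.
score = (0.9 * prior + 60 - 8 * low_income + true_effect[school]
         + rng.normal(0, 20, n_students))

# Design matrix: intercept, prior score, covariate, school dummies
# (school 0 is dropped as the reference category to avoid collinearity).
dummies = (school[:, None] == np.arange(1, n_schools)).astype(float)
X = np.column_stack([np.ones(n_students), prior, low_income, dummies])

beta, *_ = np.linalg.lstsq(X, score, rcond=None)
value_added = np.concatenate([[0.0], beta[3:]])        # relative to school 0
print("estimated value added (relative to school 0):", value_added.round(1))
print("true effects (relative to school 0):         ",
      (true_effect - true_effect[0]).round(1))

Equity-oriented indicators extend the same idea, for example by interacting the school terms with indicators for student groups to test whether a school’s estimated effect differs for, say, students with low versus high prior achievement.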
Data-Driven Improvement

Meyer’s research represents a comprehensive approach to harnessing information on educational resources and student achievement, or, to put it another way, on system inputs and outputs. “We want to help create a learning environment characterized by continuous, data-driven improvement,” Meyer says. “Value-added indicators aim to find out ‘what works.’ In particular, we want to pinpoint the determinants of high teacher productivity, effective professional development, and alternative instructional strategies.”

The MPS value-added system has until now measured the performance of schools at the grade level, but not at the teacher or classroom level. Improved value-added methods promise to supply a much-needed quantitative component for teacher evaluation, one to be combined with other sources of information, such as classroom observations.

Meyer credits Milwaukee Superintendent William Andrekopoulos and Director of Assessment and Accountability Deborah Lindsey for their leadership in this work. MPS has committed its own resources to support this research rather than relying on external funding agencies. Specifically, researchers and district staff are enhancing the current MPS value-added system, among other things extending it toward the classroom level and sharpening its statistical adjustments for selection bias.

The problem of selection bias is likely to be especially acute in districts like MPS that provide substantial opportunities for school choice. For example, students with persistently high or low achievement may systematically choose, or be assigned to, particular schools and teachers. Test-taking rates may also differ across students because of student choices or school policies. Statistical tools that adjust for this bias yield more accurate and useful information.
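To see why unadjusted comparisons mislead under school choice, consider a small simulation (hypothetical data, not MPS data): two schools add exactly the same amount of learning, but higher-achieving students disproportionately enroll in one of them. The raw gap in mean scores is large and spurious; conditioning on prior achievement removes it. Real corrections are harder than this sketch suggests, since students may also sort on characteristics the data do not capture.

# Sketch of how selection on prior achievement biases raw school comparisons,
# and how conditioning on prior scores corrects it (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 2000
prior = rng.normal(500, 50, n)

# Selection: higher-achieving students are more likely to choose school 1,
# even though both schools add exactly the same amount of learning.
p_school1 = 1 / (1 + np.exp(-(prior - 500) / 25))
school = (rng.random(n) < p_school1).astype(int)
score = 0.9 * prior + 60 + rng.normal(0, 20, n)        # identical school effects

# Raw comparison: school 1 looks better purely because of who enrolls there.
raw_gap = score[school == 1].mean() - score[school == 0].mean()

# Value-added comparison: regress score on prior score plus a school dummy.
X = np.column_stack([np.ones(n), prior, school])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"raw mean gap:           {raw_gap:6.1f}")       # large, spurious
print(f"covariate-adjusted gap: {beta[2]:6.1f}")       # near zero, as it should be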
A New Center

The Value-Added Research Center (VARC) launched this winter at WCER as an umbrella organization for Meyer’s many theoretical and applied research projects. VARC’s research is comprehensive in that it encompasses policy-relevant expansion of the current MPS value-added system, dissemination of value-added information in a clear and accessible fashion, and policy implementation and professional development.

A hallmark of Meyer’s research is what he calls full-spectrum inquiry. His team conducts theoretical research, statistical research, applied qualitative and quantitative research, program evaluation, program design, and program implementation, with the goal of eventually helping other schools, districts, and states enhance their evaluation programs and professional development.

VARC extends Meyer’s prior work by helping MPS use student transcript data to document the various paths Milwaukee students take to proficiency across the district, and by evaluating the performance and effectiveness of schools, teachers, programs, and policies over the long term. The project has grown into a network that now includes the states of Michigan, Minnesota, and Wisconsin, along with MPS and the Minneapolis Public Schools. More states are expected to join over time.

Meyer’s generalized value-added model is useful in several ways. First, it can produce valid estimates of school performance free of student selection bias. Second, it can serve as a standard against which to evaluate simpler models. (Most, if not all, existing value-added models are special cases of this generalized model.) The generalized model can also be implemented in situations in which at least some students change schools from one grade to the next; a sketch of one way to handle such mobility appears at the end of this section.

In this work, qualitative and quantitative research inform each other. “Our qualitative research generates hypotheses about what works. Then quantitative research (surveys and other resources) takes the qualitative work to scale,” Meyer says. “Next, we use a validity study to ‘marry’ the results of the qualitative work with survey results. Finally, we analyze large-scale student outcomes data.”

“We want to know why super-successful schools work, and why underperforming schools don’t,” he continues. “What is the ‘unknown’ in those schools?” Along the way, the researchers and MPS staff develop hypotheses about student achievement in the context of their schools. “Through research we find out which hypotheses matter, which innovative ideas are successful. The final task will be building diagnostic tools to help schools, teachers, and district administrators respond to accountability results with concrete steps to improve performance.”
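The article does not spell out the generalized model’s form. One common device in the value-added literature, assumed here purely for illustration, is to replace the usual 0/1 school indicators with exposure weights, each student’s share of the year spent in each school, so that mobile students contribute to the estimates for every school they attended.

# Sketch of one way a generalized value-added model can handle mobile students:
# replace 0/1 school indicators with each student's fraction of the year spent
# in each school ("exposure" weights). Illustrative only; the actual MPS/VARC
# specification is not described in the article.
import numpy as np

rng = np.random.default_rng(2)
n, n_schools = 1500, 4

prior = rng.normal(500, 50, n)
first = rng.integers(0, n_schools, n)                  # school at start of year
second = rng.integers(0, n_schools, n)                 # school after any move
frac = np.where(rng.random(n) < 0.8, 1.0,              # 80% never move
                rng.uniform(0.2, 0.8, n))              # movers split the year

# Exposure matrix: row i holds the share of the year student i spent in each school.
exposure = np.zeros((n, n_schools))
exposure[np.arange(n), first] += frac
exposure[np.arange(n), second] += 1.0 - frac

effects = np.array([-6.0, -2.0, 2.0, 6.0])             # hypothetical school effects
score = 0.9 * prior + 60 + exposure @ effects + rng.normal(0, 20, n)

# Regress on prior score and exposure shares (no intercept: shares sum to 1,
# so the common growth constant is absorbed into the school coefficients).
X = np.column_stack([prior, exposure])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
est = beta[1:]
print("estimated effects relative to school 0:", (est - est[0]).round(1))
print("true effects relative to school 0:     ", (effects - effects[0]).round(1))

Under this assumption, a student who splits the year between two schools credits each with the corresponding fraction of his or her growth, and non-movers reduce to the standard dummy-variable case.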
Funding

VARC is funded through the Longitudinal Data Systems grants recently awarded by the U.S. Department of Education’s Institute of Education Sciences. VARC principal investigators led a Tri-State Partnership (Minnesota, Michigan, and Wisconsin) that won 3 of the 14 grants awarded. VARC will provide overall project coordination, data warehouse design assistance, and support for researchers who are embedded in the districts.