Which testing accommodations do teachers suggest most often? This is one of the research questions that Elliott and his colleagues (2001) recently asked teachers who were designing accommodations packages for use on performance assessment tests. Verbal encouragement and reading the directions were two of the most popular accommodations and were almost always part of a package of accommodations. In the study, students performed half of the tasks with accommodations and half without. The packages of accommodations had double the effect size for students with disabilities compared to students without disabilities. Read more.
On which types of test items, multiple-choice or short-answer constructed response, do accommodations affect students' responses the most? Participants in a study conducted by Schulte and her colleagues (2001) performed one standardized mathematics test with accommodations and one without accommodations. Both students with disabilities and students without disabilities experienced beneficial effects from testing accommodations. Students with disabilities profited from accommodations on the multiple-choice items, while students without disabilities did not. Both groups profited equally from accommodations on the constructed response items. Read more.
Extra time: Does it make a difference in test scores, or does it just reduce students' test anxiety? Marquart, in a dissertation study in 2000, investigated the effect of extended time as a testing accommodation for 8th graders. All students were given two standardized math tests. One test was completed within the standard time (20 minutes) and the second test within extended time (40 minutes). The results showed no significant increase in scores when extended time was given, regardless of disability status. Students did, however, report preferring the extended-time condition. Read more.
Reading a reading test aloud to a student: Is this a fair and valid accommodation? McKevitt and Elliott (2002) examined the effects of teacher-recommended accommodations both with and without a read-aloud accommodation added to the accommodation package for a standardized reading test. The teacher-recommended accommodations without the read-aloud accommodation did not significantly help students with or students without disabilities. Test scores on the reading test improved significantly for both groups when the read-aloud accommodation was added. Read more.
Elliott, S. N., Kratochwill, T. R., & McKevitt, B. C. (2001). Experimental analysis of the effects of testing accommodations on the scores of students with and without disabilities. Journal of School Psychology, 39(1), 3-24.
Elliott, Kratochwill, and McKevitt (2001) conducted a study designed to (a) describe the nature of information on testing accommodations listed on students' IEPs, (b) document the testing accommodations educators actually use when assessing students via performance assessment tasks, and (c) examine the effect accommodations have on the test results of students with and without disabilities. Participants in the study included 218 fourth grade students from urban, suburban, and rural school districts. Of the 218 participants, 145 students did not have disabilities and 73 students had disabilities in a variety of categories (including learning disabilities, speech and language impairments, etc.). The researchers asked teachers to list accommodations that would be helpful for each student who had a disability. Teachers used the Assessment Accommodations Checklist (AAC; Elliott, Kratochwill, & Schulte, 1999), a list of accommodations often used in classroom and testing situations. Project staff and teachers then administered a set of math and science performance tasks to the students, utilizing an alternating treatments design, over the course of four 1-hour sessions.
These performance tasks were designed to draw on a full range of knowledge from each content area, were shown to have known psychometric values, and were found to be nearly equivalent and nonbiased among a group of over 200 students with disabilities. The tasks were scored on a five-point continuum from "inadequate" to "exemplary" by trained project assistants using established criteria. All students with disabilities performed half of the tasks with accommodations and half of the tasks without accommodations. Students without disabilities were separated into three groups by accommodation status: no accommodations, standard accommodations, and teacher-recommended accommodations. Students in the no accommodations group did not receive accommodations on any of the performance tasks. Students in the standard accommodations group received a standard set of accommodations. The alternating treatments design allowed for both intraindividual and intergroup comparisons without the need for baseline conditions. An individual's performance during the accommodated condition could be compared with his or her performance during the non-accommodated condition. Also, the effect of accommodations on students with disabilities could be compared with the effect of accommodations on students without disabilities. The researchers used effect sizes to make comparisons both within individuals and between groups.
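The effect-size logic used throughout these studies can be sketched in code. The articles summarized here do not spell out their exact computation, so the sketch below assumes the common choice, Cohen's d with a pooled standard deviation; the scores are hypothetical values on the five-point task rubric, not data from the study.

```python
import math

def cohens_d(accommodated, non_accommodated):
    """Standardized mean difference (Cohen's d) between two lists of scores,
    using the pooled standard deviation. A positive d means scores were
    higher in the accommodated condition."""
    n1, n2 = len(accommodated), len(non_accommodated)
    m1 = sum(accommodated) / n1
    m2 = sum(non_accommodated) / n2
    v1 = sum((x - m1) ** 2 for x in accommodated) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in non_accommodated) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical task scores on the 1-5 rubric (illustration only):
with_acc = [4, 3, 4, 5, 3, 4]
without_acc = [3, 3, 2, 4, 3, 3]
d = cohens_d(with_acc, without_acc)
# Conventional benchmarks: about .2 is "small", .5 "medium", .8 "large".
```

Under this convention, the .88 average reported below for students with disabilities would count as a "large" effect, while an effect size near 0 indicates essentially identical performance across conditions.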
The Elliott et al. (2001) study indicated that the most common accommodations recommended by teachers were "verbal encouragement" and "read the directions," followed by "simplify language," "reread subtask directions," and "read test questions and content." Teachers typically recommended packages of between 10 and 12 accommodations for each student. The average effect size between accommodated and non-accommodated conditions for students with disabilities was .88, approximately double the comparable effect size for students without disabilities. On an individual level, accommodations had "medium" to "large" positive effects for 78.1% of students with disabilities and 54.5% of students without disabilities. They had "small" effects or no effect on 9.6% of students with disabilities and on 32.3% of students without disabilities, and they had negative effects on 12.3% of students with disabilities and on 13.1% of students without disabilities.
The results of this study indicate that accommodations are recommended in packages for students, rather than independently. Accommodation packages have moderate to large effects on performance assessment scores for most students with disabilities and for some students without disabilities. This increase in scores for students without disabilities raises questions about the validity of the accommodations. If changes in testing procedure affect students without disabilities in the same direction and degree that they affect students with disabilities, these changes are not truly acting as accommodations.
Schulte, A. A. G., Elliott, S. N., & Kratochwill, T. R. (2001). Experimental analysis of the effects of testing accommodations on students' standardized achievement test scores. School Psychology Review, 30(4), 527-547.
Schulte, Elliott, and Kratochwill (2001) conducted a study to determine whether accommodations on standardized tests would affect students with disabilities differently than they affect students without disabilities. The authors predicted that accommodations would significantly improve the test scores of students with disabilities, but would not significantly improve the test scores of students without disabilities. Participants in the study were 86 fourth grade students, including 43 students with disabilities (students entitled to special education services for mild disabilities) and 43 students without disabilities. The students' performances were measured on two equivalent versions of the TerraNova math test, a math subtest aligned with the National Council of Teachers of Mathematics (NCTM, 1989) standards.
Teachers of participants who had disabilities reviewed their IEPs to determine which accommodations the research team would use. Each student who did not have a disability was paired with a student who did have a disability, and the research team administered the TerraNova to the students in pairs. Both students in each pair received the accommodations outlined on the IEP of the student who had the disability. All students participated in a practice session to become familiar with the testing procedures and accommodations, and all students took one version of the test with accommodations and one version of the test without accommodations. The researchers randomly assigned the order of accommodated and non-accommodated conditions, as well as the pairs of students. The key independent variables in the study were testing condition (accommodated versus non-accommodated) and disability status (with disability versus without disability). The dependent variables in the study were the scores from the TerraNova Multiple Assessments. Both groups improved significantly when the accommodated condition was compared to the non-accommodated condition. Students with disabilities benefited more from accommodations on multiple-choice questions, and both groups benefited equally on constructed-response questions. For multiple-choice questions alone, students with disabilities yielded an effect size of .41 between accommodated and non-accommodated conditions, while students without disabilities yielded an effect size of 0. On constructed-response questions alone, those effect sizes were .31 and .35, respectively. On an individual level, there was essentially no difference between the effects of accommodations on students with disabilities and the effects of accommodations on students without disabilities. Twenty-seven out of 43 students with disabilities, and 29 out of 43 students without disabilities, achieved higher scores on the test when accommodations were available.
Seventeen out of 43 students with disabilities, and 16 out of 43 students without disabilities, achieved higher proficiency levels on the test when accommodations were available. Twenty out of 43 students with disabilities, and 21 out of 43 students without disabilities, experienced no change in proficiency levels on the test when accommodations were available.
The finding that both groups of students experienced benefits from testing accommodations indicates that the changes in test procedure may be affecting both construct-relevant and construct-irrelevant variance. The differential interaction between accommodation group and question type could indicate that constructed response questions are more difficult for all students, and that accommodations remove barriers to these questions that are not present in multiple choice questions. These findings reinforce the notion that research on testing accommodations must take an individual perspective, and that all students must take the tests in both accommodated and non-accommodated conditions, for researchers to determine whether accommodations truly help performance.
Marquart, A. M. (2000). The use of extended time as an accommodation on a standardized mathematics test: An investigation of effects on scores and perceived consequences for students with various skill levels. Madison, WI: University of Wisconsin.
In a dissertation study conducted by Marquart (2000), the use of an "extended time" accommodation on a mathematics test was examined. Marquart predicted (a) students with disabilities, but not students without disabilities, would score significantly higher in the extended time condition than in the standard time condition, (b) students with low math skills, but not students with higher math skills, would
score significantly higher in the extended time condition, and (c) all student groups would perceive the extended time condition as helpful in reducing anxiety, in allowing them to exhibit what they know, and in increasing their motivation to finish tests. Participants in the study included 69 eighth-grade students, 14 of their parents, and 7 of their teachers. Among the students, 23 were classified as having disabilities, 23 were classified as educationally at-risk in the area of mathematics, and 23 were classified as students performing at grade level. Teachers used the Academic Competence Evaluation Scales (ACES), a rating scale, to classify students without disabilities as at-risk or performing at grade level. Student participants completed the TerraNova Multiple Assessments-Mathematics, as well as a survey about the effects of the "extended time" accommodation. Each testing session included students from each of the three groups. Marquart randomly assigned the order of conditions (accommodated and non-accommodated) in which each student performed the test. When performing in the accommodated condition, students had up to 40 minutes to complete the test. When performing in the non-accommodated condition, students had 20 minutes to complete the test. Parents and teachers of students in the study also completed the survey about the effects of the "extended time" accommodation.
Marquart found that the effect of the "extended time" accommodation was not significant for students without disabilities, who yielded an effect size of .34. The accommodation was not significant for students with disabilities, either, as their effect size was .26. The three groups (students with disabilities, at-risk, and grade level) were not significantly different in their amount of change between accommodated and non-accommodated conditions, either. When students without disabilities were considered as at-risk and grade level groups, the students in the at-risk group experienced an effect size of .48 between accommodation conditions, and students in the grade level group experienced an effect size of .20. However, according to the survey, most students felt more comfortable, were more motivated, felt less frustrated, thought they performed better, reported the test seemed easier, and preferred taking the test under the extended time condition. Most teachers (88%) but few parents (21%) indicated that a score from an accommodated test is as valid as a score from the same test taken without accommodations. Many parents (43%) but no teachers believed that the score from an accommodated test is less valid, and some members from both groups (parents = 36%, teachers = 12%) were uncertain. Most members of each group (teachers = 63%, parents = 56%) believed that if accommodations are used on a test, those accommodations should be reported with the test results.
McKevitt, B. C., & Elliott, S. N. (2002). Effects and perceived consequences of using read-aloud and teacher-recommended testing accommodations on a standardized reading test. Manuscript submitted for publication.
McKevitt and Elliott (2002) studied the effects of testing accommodations on standardized reading test scores and the consequences of using accommodations on score validity and teacher and student attitudes about testing. The following predictions were tested: (a) teachers would select accommodations they consider valid and fair for use on standardized reading tests; (b) individualized packages of testing accommodations, including a read-aloud accommodation, would have a positive impact on the reading test scores of students with disabilities, but not on the scores of students without disabilities; (c) students with disabilities would score higher when the test was read aloud to them versus when other accommodations were used; and (d) students would perceive the accommodations to be helpful and teachers would have a positive attitude about testing and accommodations. While read-aloud accommodations are considered invalid by the testing policies in many states, to date there have been no published studies that actually analyzed their effects on reading test performance. To test these hypotheses, the reading performance of 79 eighth-grade students was tested on the TerraNova Multiple Assessments Reading Battery-Research Version (Form A; CTB/McGraw Hill, 1999). Forty of those students were diagnosed with an educationally defined disability and received special education services in the area of reading and/or language arts. The other 39 students were general education students used for comparison purposes. Four special education teachers and one general education teacher participated by recommending testing accommodations for these students using the Assessment Accommodations Checklist (Elliott, Kratochwill, & Schulte, 1999). They also rated students' reading achievement levels using the Academic Competence Evaluation Scales (DiPerna & Elliott, 2000).
An additional 43 teachers and all tested students completed surveys about their perceptions of and attitudes about testing accommodations and standardized testing.
Once students were identified, they were divided into two groups (students with disabilities and students without disabilities). Within those groups, students were then divided into two test conditions (students receiving teacher-recommended accommodations and students receiving teacher-recommended accommodations plus a read-aloud accommodation). Students in each group and each condition completed two alternate parts of the reading test--one with accommodations (either teacher-recommended accommodations or teacher-recommended accommodations plus read aloud) and the other without accommodations. The part of the test that was accommodated was determined by random assignment. The data were analyzed with a repeated measures ANOVA, supplemented by effect size calculations, to test the predictions. Overall, the results of the McKevitt and Elliott (2002) study indicated mixed support for the predictions. First, as predicted, teachers selected accommodations they considered valid and fair for use on a standardized test. They did not recommend using a read-aloud accommodation, as this accommodation would interfere with the purpose of the test (i.e., to measure reading ability) and thus would invalidate resulting test scores. Next, the accommodations that teachers recommended did not significantly affect test scores for either group of students. However, the read-aloud accommodation, when used in addition to those recommended by the teacher, did positively and significantly affect test scores for both groups of students. There was no differential benefit from the read-aloud accommodation, indicating overall score boosts for both groups of students, rather than the boost only for students with disabilities which was predicted.
Interestingly, there was much individual variability in the accommodation effects. As indicated by effect size statistics, the accommodations positively affected the scores for half of all students with disabilities and 38% of all students without disabilities. Furthermore, neither group of students scored significantly higher when the test was read aloud to them as compared to the groups that received other accommodations. While the read aloud helped both groups compared to their own performance without accommodations, there was not a significant effect from the read aloud when groups receiving the read aloud were compared to those receiving only the teacher-recommended accommodations.
Finally, McKevitt and Elliott found that students and teachers had mixed feelings about the accommodations. Students were generally positive about their use, but expressed some concern that the read-aloud accommodation was too difficult to follow. Likewise, teachers felt positive about the use of accommodations for students with disabilities, but also were concerned about how accommodations would affect test score validity. Teachers reported they rely primarily on professional judgment when making accommodations decisions, rather than on their own empirical testing of accommodations effects. Therefore, it is important to ensure teachers are knowledgeable about the use and effects of testing accommodations.
In summary, the McKevitt and Elliott study contributed to the increasing evidence that accommodations may have positive or negative effects for individual students with and without disabilities. It also lends support to the popular belief that reading a reading test aloud to students as an accommodation invalidates test scores. The lack of differential boost (i.e., the finding that both groups of students profited from a read-aloud accommodation) observed in the study is one piece of evidence of the invalidating effect of a read-aloud accommodation. But the lack of differential benefit alone may not be sufficient to conclude invalidity of scores resulting from the use of accommodations. In the case of the students receiving the teacher-recommended accommodations alone, a differential boost also was not observed and scores did not improve significantly for either group. One may not conclude from this evidence alone, however, that the accommodations were invalid. The accommodations still may have served to remove a disability-related barrier for the students tested, yet not have had a significant effect on scores. Thus, evidence to support the validity of accommodations needs to come from multiple sources, examining student factors, test factors, and the accommodations themselves.
© 2001 Assessing One and All.