Student Assessment of Learning Gains
Instrument Description



The Student Assessment of Learning Gains (SALG) instrument is designed for instructors from all disciplines who wish to learn how much their students feel they have gained from the various elements of a course. Feedback from the instrument can guide instructors in modifying their courses to enhance student learning. It may be used at any point during a course (for formative feedback) as well as at the end.

Introduction

An ever-increasing number of faculty from all disciplines are seeking ways to learn how the various elements of their courses affect student learning. This search includes assessment methods that are designed primarily for faculty use rather than for institutional data gathering or program evaluation. Feedback from instruments designed primarily for classroom use can guide faculty in modifying their courses to enhance student learning.

The Student Assessment of Learning Gains instrument is intended to supplement or replace the type of assessment instrument with which faculty are most familiar, namely departmental or institution-wide "student classroom evaluation" instruments. Surveys of that type are meant to provide faculty with feedback from their students; they are also the main method by which departments and institutions evaluate faculty teaching. A common faculty critique, however, is that they serve neither function well. The new instrument, grounded in recent research data, was designed by Elaine Seymour (Ethnography & Evaluation Research, the University of Colorado, and co-evaluator for the ChemLinks and ModularChemistry coalitions). It is designed to offer faculty more information about what students feel they have gained from particular aspects of their classes, and from the class overall.

Basis for the Design of the New Instrument

From a review of samples of evaluation instruments in current use, Elaine Seymour noted that student classroom evaluation questions typically focus on student ratings of faculty teaching performance and solicit overall class ratings without reference to the criteria by which students are expected to make such judgments. In her interviews with faculty in the undergraduate chemistry coalitions, it was clear that faculty found little useful feedback in the reports generated by these types of questions and were apt, therefore, to dismiss the numeric scores they generated as "popularity" ratings. Notably, however, faculty paid careful attention to students' written comments. Faculty also expressed concern that departmental evaluation of their efficacy as classroom teachers (for tenure and promotion purposes) is largely based on the scores of instruments that ask the wrong types of questions. Faculty engaged in pedagogical innovation are at special risk of lowered evaluation scores in the early stages of course redesign, partly because traditional classroom evaluations give students insufficient opportunity to estimate the value added to their learning by new class features.

Analysis of a sample of students' responses to open-ended questions on the student classroom evaluations used at four participating campuses revealed that students' comments focused not on the teacher's professional performance, but on how much they did, or did not, gain from the class. This observation was confirmed by findings from a focus-group and interview study that formed part of the formative evaluation of the coalitions' work: 353 students were interviewed in a matched sample of "modular" and more traditional introductory chemistry classes at ten participating institutions (3 research universities, 3 liberal arts colleges, 2 community colleges, 1 comprehensive state university, and 1 historically black college).

Although students gave positive and negative ratings to specific features of both the modular and comparative classes, the grand totals for all students' comments evaluating faculty teaching strategies were, for both types of class, broadly 50% positive and 50% negative. This finding reflects the common experience of faculty that asking students what they "liked" or "valued" about their classes, or how they rate their teacher's professional performance, offers little information about what students actually gained from the class. By contrast, in both the modular and comparative classes, students gave clear indications of what they had gained (in understanding, skills, and approach to learning and to the subject) from the various aspects of their classes. Some specific gains cited by students in the modular classes were:
  • greater understanding of the connections between concepts—both within chemistry, and between chemistry and other areas of science and mathematics
  • an enhanced ability to apply what they had learned
  • an increased interest in and enthusiasm for chemistry and a clearer appreciation of its nature and methods
  • greater comfort with complex material
  • increased confidence in their ability to "do chemistry"
  • a belief that they would remember more of what they learned in modular than in more traditional classes

Students clearly delineated both areas of gain and areas of concern about the impact of the modular forms of teaching on their learning. The interviews were conducted at an early stage in the development and testing of modules, and module developers have used the feedback from the interview data to enhance the structure and teaching of their modules. Although the specific gains reported by students in the matched modular and comparative classes differed, the teachers of each type of class received a clear picture of the overall gains reported by their students. Student observations were of three types: answers to interviewers' questions, spontaneous observations, and agreement with observations made by other focus-group members. When all gain-related observations were totaled and divided into three categories (positive: things gained; negative: things not gained; and mixed reviews about how much was gained), 55% of the observations were positive for both types of class, 11% (modular) and 13% (comparative) were negative, and 11% (modular) and 13% (comparative) were mixed. The strong similarity between the totals (though not for particular items) likely reflects the very early stage of the modular classes sampled at the time of the interviews. The issue here, however, is not the relative merits of modular versus more traditional chemistry classes, but the possibility of getting useful feedback to all faculty, wherever they stand on the spectrum from innovative to traditional teaching, by asking students gain-related questions.

Development of an Exclusively "Gains-Related" Instrument

In light of these findings, Elaine Seymour developed a pilot instrument (summer 1997) based exclusively on questions about how much students thought they had gained from specific aspects of their classes. The instrument asks students to rate the elements of any course with respect to their learning of concepts and skills, their appreciation of the subject and its applications, and their estimates of what they will retain from the current class or have retained from a prior class. The aspects of learning and the course activities selected reflect common learning objectives as well as objectives of special interest to modular chemistry developers and adapters. The pilot instrument was initially tested in three chemistry departments, where it was used and well received by both modular and non-modular faculty. It is currently being field-tested by modular chemistry faculty at 20 institutions of different types (with support from the Exxon Education Foundation), and by faculty in an array of disciplines (chemistry, engineering, computer science, physics, and sociology) who have accessed the instrument from the FLAG or Consortium web sites. It has subsequently been adopted as the formal end-of-semester class evaluation instrument by two of these departments and is being considered as a departmental or institutional evaluation tool by others.

The pilot instrument has been continuously amended and augmented in light of feedback from users, with the assistance of the ChemLinks/ModularChem co-evaluator Joshua Gutwill (University of California at Berkeley); Susan Millar, Director of the Learning through Evaluation, Assessment and Dissemination (LEAD) Center at the University of Wisconsin-Madison; Steve Kosciuk (of the New Traditions Chemistry Coalition evaluation team, also a member of the LEAD Center); and Sue Daffinrud (also of the LEAD Center).

Although the instrument was originally written for faculty interested in discovering the efficacy of a particular approach to the teaching of chemistry, its structure allows users with different learning objectives, whether in chemistry or in any other discipline, to change the content of the questions while preserving the format, which is designed exclusively to obtain students' assessment of their learning gains.
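
As a concrete illustration of this separation between question content and response format, the following minimal sketch (hypothetical Python, not the actual SALG implementation; the prompts and the five-point gains scale are illustrative assumptions) shows how the gains-rating format can stay fixed while instructors supply prompts for their own courses:

    from dataclasses import dataclass

    # Illustrative five-point gains scale (an assumption, not the SALG's exact wording).
    GAINS_SCALE = ["no gain", "a little gain", "moderate gain", "good gain", "great gain"]

    @dataclass
    class GainsItem:
        """One survey item: the prompt varies, the gains-rating format does not."""
        prompt: str

        def render(self) -> str:
            options = " / ".join(GAINS_SCALE)
            return f"How much did {self.prompt} help your learning? [{options}]"

    # Instructors in any discipline customize only the prompts.
    chemistry_items = [
        GainsItem("the in-class group work on reaction mechanisms"),
        GainsItem("the weekly laboratory sessions"),
    ]

    for item in chemistry_items:
        print(item.render())

The only point of the sketch is the separation of concerns: every item elicits a learning-gains rating on the same scale, while the prompts change with the instructor's learning objectives.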

Dissemination, Field-Testing, and Instrument Validation

The instrument was placed on the ChemLinks/ModularChem web sites in December 1997, and on the Field-Tested Learning Assessment Guide (FLAG) web site in February 1998. The FLAG web site was originally developed by Elaine Seymour and Steve Kosciuk for the National Institute for Science Education with the support of the New Traditions Chemistry Coalition (both at the University of Wisconsin-Madison). The site is intended as a resource for science and mathematics faculty seeking assessment methods appropriate to their learning objectives.

A paper presented at the American Chemical Society provides some evidence that the SALG elicits ratings from students that reflect their assessment of their own understanding and of the effectiveness of the course strategies in helping them to learn. The paper, which also presents data on faculty users' perceptions of the SALG instrument and web site, is available for download on this site.

Further refinement of the instrument in light of user feedback is anticipated. Field-test data and user feedback will be requested at this site.

Outcomes for Faculty

Faculty who use this web site will have quick and easy ways to obtain meaningful information about how their students view the effects of various course elements on their own learning. This will allow faculty to make informed changes in their classroom and laboratory practices and further increase their students’ learning gains.

This site was created with funding courtesy of the ExxonMobil Foundation and the following National Science Foundation-funded projects:
New Traditions (NT)
ChemLinks
ModularChemistry (MC2)
The National Institute for Science Education
The AAC&U SENCER Institutes
Original Content Copyright ©1997 Elaine Seymour. All rights reserved. Your comments are welcome.