Why are students a useful data source for teaching evaluations?
Students are uniquely positioned to provide useful insights on our instruction because they spend the greatest amount of time engaging with our teaching—and, in contrast to our peers, can offer feedback from the perspective of the naive learner (Supiano, 2017; Weimer, 2010). Fink (1999, p. 5) specifies that, compared to other data sources, “[s]tudents are the best source for understanding the immediate effects of our teaching; i.e. the process of teaching and learning.” Findings from the literature suggest that using student data as formative feedback to evaluate and improve teaching can be highly effective in refining good teaching practices (Clayson, 2009; Finelli et al., 2008).
Students are also a useful data source when we move past our traditional reliance on student perspectives and include evidence relating to their learning, growth, and development. Braskamp and Ory (1994) suggest that “[i]f faculty who teach […] focus on student learning, they are more likely to improve their teaching”; these faculty evaluation scholars conclude that “[t]he student perspective, particularly when it reflects their learning, has high inherent credibility and considerable utility” in faculty assessment (p. 101).
What counts as student data for evaluating teaching?
The most commonly used source of student data for teaching evaluation is our recently renamed SPOTs: Student Perceptions of Teaching surveys. Many units have historically relied on SPOTs results for teaching evaluations; however, we recognize the limitations of SPOTs as measures of teaching quality (Theall & Franklin, 2001). For a more detailed discussion of the utility and limitations of using SPOTs for evaluating teaching, please see Interpreting and Working with Your SPOTs Results: A Guide for Faculty (FIU SPOTs Guide).
As a complement to the SPOTs data, we can consider alternative means of capturing student perceptions and, in doing so, gain nuanced or course-specific insights into our students’ learning experience. At the same time, data on students’ perceptions of teaching do not provide evidence of students’ learning and development—which are the primary outcomes of interest in the evaluation of teaching (Nilson, 2006). Therefore, evidence from students is more comprehensive when it includes measures of their learning outcomes and behaviors, which come in two primary forms: direct and indirect measures. Indirect measures of learning assess students’ perceived learning, whereas direct measures assess the extent to which students actually master the concepts covered in a course—as do many of your course exams and written assignments. Nilson (2006) suggests that final exams or capstone papers can serve as especially valuable direct measures of learning. Indirect measures of learning can also be assigned for course credit.
We must note, however, that using student learning data in teaching evaluation is not without its challenges. As Berk (2005) synthesizes the “crux of the problem,” the difficulty lies in “isolating teaching as the sole explanation for student learning [… given that] performance throughout a course on tests, projects, reports, and other indicators may be influenced by the characteristics of the students, the institution, and the outcome measures themselves, over which faculty have no control (Berk, 1988, 1990)” (p. 55). Some of the activities suggested below attempt to reconcile this challenge, through the use of pre- and post-testing, for instance. We might also heed Fenwick’s (2001) recommendation that the results of standard outcome measures, such as tests, problem-solving exercises, projects, and simulations, be aggregated across groups of students for program evaluation decisions about teaching methods and program improvement.
Please click here for concrete examples of activities for collecting data on students’ perceptions of teaching and their learning outcomes; the linked resource includes examples of the type of data each activity might yield and guidance on how to interpret that data.