The data from the Student Perceptions of Teaching surveys (SPOTs) serve as a useful tool for understanding certain aspects of students' learning and classroom experiences (click here for the CAT's guide on interpreting SPOTs data). However, SPOTs data can be limited, in that they can be influenced by implicit bias and may not be accurate representations of student learning and development (Theall & Franklin, 2001). Rather than over-relying on SPOTs results to assess teaching quality, faculty should consider collecting additional data from students to complement SPOTs.
But why do we encourage faculty to do this? What kind of student data can they collect, and how can they go about collecting it?
Keep reading to find out!
Why should faculty collect additional student data for teaching evaluations?
Simply put, our students are uniquely positioned to provide useful insights on our instruction because they spend the greatest amount of time engaging with our teaching and, in contrast to our peers, can offer feedback from the perspective of the naive learner (Supiano, 2017; Weimer, 2010). In fact, research suggests that using student data as formative feedback to evaluate and improve teaching can be highly effective in refining good teaching practices (Clayson, 2009; Finelli et al., 2008).
Moreover, student data are particularly useful when we move beyond relying solely on student perceptions and include evidence of students' growth as independent learners. Braskamp and Ory (1994) concluded that when faculty focus on evaluating their teaching based on what students learned, they are more likely to improve their teaching.
What kind of student data can faculty collect, and how can they collect it?
To complement SPOTs data, faculty might consider: a) collecting student perception data on nuanced or course-specific elements and b) including direct or indirect measures of learning outcomes and behaviors to provide evidence of students’ learning and development (Nilson, 2006).
Data on Student Perceptions
There are many options for faculty who want to collect more targeted student perception data. For instance, faculty can gather mid-semester feedback on their own or with the help of the Center for the Advancement of Teaching, conduct a focus group with a subset of students, use an existing disposition/identity survey, or create a quick survey using Qualtrics.
Direct vs Indirect Measures of Learning
Measures of student learning can be classified as either direct or indirect. Indirect measures assess students' perceived learning, whereas direct measures, such as many of your course exams and written assignments, assess the extent to which students actually master the concepts covered in a course. Nilson (2006) suggests that final exams or capstone papers can serve as especially valuable direct measures of learning.
It is important to keep in mind that student performance throughout a course on tests, projects, reports, and other indicators may be influenced by the characteristics of the student, the institution, and the outcome measures themselves, over which faculty have no control (Berk, 1988, 1990). However, faculty do have considerable control over classroom experiences that can influence student perseverance. Using a pre-test can help faculty identify a knowledge baseline for a section, help individual students identify topics they need to revisit, and give students a preview of the course learning objectives. Coupled with a post-test, a pre-test allows faculty to measure student learning gains.
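One common way to summarize pre-test/post-test results is the normalized gain: the fraction of the possible improvement a student actually achieved. This short Python sketch is only an illustration of that calculation, not a prescribed method, and it assumes both tests are scored on the same scale (here, out of 100):

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Return the fraction of available improvement achieved between
    a pre-test and a post-test scored on the same scale."""
    if pre >= max_score:
        raise ValueError("pre-test score is already at the maximum")
    return (post - pre) / (max_score - pre)

# Example: a student who scores 40 on the pre-test and 70 on the
# post-test has achieved half of their possible improvement.
print(normalized_gain(40, 70))  # 0.5
```

Because the gain is expressed relative to each student's starting point, it lets faculty compare growth across students who began the course with very different levels of prior knowledge.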
For additional information on collecting student data, including concrete examples of activities to collect data on students' perceptions as well as direct and indirect measures of learning, click here!