Last week, many of my friends and colleagues were discussing Nancy Bunge’s essay in the Chronicle of Higher Education, in which she addressed student evaluations of teaching. Bunge argues that these evaluations are biased (they are), damaging to the relationship between faculty and student (perhaps), and, most damning to me, that “Administrators, who are well paid for supposedly assuring that students get good educations, apparently have never heard of grade inflation or bothered to read the studies questioning the value of evaluations, since they routinely turn the job of ranking the faculty over to the people the instructors grade.” I can’t help but respond.
I was a faculty member for many years, and during that time student evaluations (student opinion surveys, as they are called at WCSU) were a regular part of the mix of how I was evaluated. At my former university, my average scores were not just reported to me and the dean; they were also placed in the context of a) other sections of my course, b) other courses in my department, and c) similar courses across the university. These reports also included the average grades earned in those courses, so that I would be keenly aware of potential grade inflation. I confess that this practice was a bit overwhelming at first, but ultimately I learned a lot from it.
Despite the apparent weight given to them, it is important to note that student evaluations were in no way the only measure of my effectiveness. Over the years leading to tenure, I was observed annually by the dean and my colleagues in the communication department, and by faculty outside my department during my tenure year. The feedback I received from those observations was invaluable. My observers were gentle when offering suggestions for improvement, but even the smallest observations were helpful. Add to this my decision to have coffee with colleagues at least once a semester to discuss teaching strategies in similar courses, and I felt truly engaged in good practice. I had developed a habit of self-evaluation.
Now I am a provost. I am the last person to review all faculty applications for reappointment, tenure, and promotion. Prior to my review, peers, deans, and the tenure committee have read and commented on the materials. My decisions and observations are influenced by all of that information. The student evaluations/opinion surveys are the least of it. They contribute a small piece of the story, which is then contextualized by everyone else so that I have a good understanding of how to read them.
So, what does this administrator actually look for? In terms of student feedback, I look mostly for patterns of responses. I know well that responses to the two o’clock section of your course may differ from responses to the one at ten. I am aware that challenging gateway courses into a major receive more criticism from students than some other introductory overviews. I am certain that no one likes remedial math. All of these things are taken into consideration as I review student feedback.
However, if most students find your courses (not just one of them) and your expectations unclear, I explore that question. I move to the observations of your peers; they will tell me whether I should pay attention to those student concerns. I also look at your syllabi, which tell their own part of that story. And most importantly, I look at your self-evaluation.
What I look for, above all, are faculty who are constantly questioning their approaches to teaching. Do they look at the results of their efforts and adjust their techniques to better engage students? Do they try new pedagogies and reflect on their successes or failures? Do they revise courses, infusing them with new materials when warranted? Do they take an honest look at what students are saying, on opinion surveys or in their engagement with the material, and revise or clarify? And do they reflect on their efforts honestly, celebrating successes and re-imagining courses that didn’t go well?
It is my job to cultivate that attitude toward teaching. When I meet with new faculty, I do my best to reassure them that not all classes will receive high praise from students. Indeed, I’d be worried if they did, because we should be trying new things, not all of which will work. I celebrate faculty experimentation through awards, announcements, and an annual faculty panel on teaching. I also distribute funds for faculty to attend conferences and workshops that explore new approaches to teaching. This is my job, and I love doing it.
So what of those student opinion surveys? I think they are helpful up to a point, if used in the context I have described. They must be read with nuance, sorting through biases and making room for growth and experimentation. I also think there are ways to construct them better, but they will always reflect the attitudes the culture holds toward various groups. Anyone reading them must take that into account.
While I don’t fully agree with Bunge’s argument, there is a way in which this request for student feedback can cultivate a consumer mentality, particularly if it appears to be a top-down request. That is demeaning to the profession. But not asking for feedback can be demeaning to the student. If students can’t provide feedback, we are devaluing their experience, and I suspect we would be reinforcing passive attitudes toward learning.
Perhaps we can improve the use of feedback from students by changing how we gather it. One way to start is to offer multiple opportunities for feedback during the semester, instead of waiting until the last week of classes. Thirds seems like a good approach to me: let’s ask for feedback three times instead of once.
The first two opportunities for feedback should be collected by professors, helping them to clarify where necessary and change course if appropriate. This might start with three simple questions:
- What is going well in this course (please consider the texts, assignments, and the in-class experience)?
- What needs clarification?
- Is there anything else I should know?
Faculty might read and respond right then, or perhaps in the next class, but they should respond. Giving students opportunities to discuss the course during the semester can help cultivate trust and allow them to feel that they are part of creating the course.
Then a final response, collected by a student in the class at the end of the semester, should reflect the habit of feedback that has been cultivated. It would grow out of a practice of open dialogue about the course, rather than a single opportunity to voice an opinion. These final questionnaires should probably be short, too, with some reference to the other opportunities for feedback.
This approach is less consumer-oriented than the once-a-semester evaluation, and I think it would feel less punitive or risky for some faculty. In the best case, it could help students and faculty feel co-ownership of the course outcomes, which is a real win for everyone involved.
So, I don’t know what all administrators think about the role of student feedback in their evaluation of faculty, but I can say that this is how I see it. I can also say that I don’t know any administrator who uses a single measure to evaluate faculty, nor should it ever be so.
I love your essay and agree with you totally. If The Chronicle had given me more space, I would have suggested multiple evaluations going to the prof, but not administrators, and also that the profs discuss the results with their students. And, in fact, thanks to the students, Michigan State is starting to give evaluations while the course is in process.
Nancy Bunge
That’s great to hear. I think many faculty members conduct their own mid-course feedback sessions. This is a great practice and often informs the rest of the semester. As an administrator, I do not need to see those forms. I do need evidence that faculty are engaged with students this way. Perhaps some reflective essays would do the trick. Thanks for the feedback.