Evaluation, Higher Education, Inclusion

Unfair Measures

Last week I attended the joint meeting of the New England Association of Schools and Colleges (NEASC) and the New England Commission of Higher Education (NECHE).  The gathering brought together K-12 educators and higher education administrators, and most of our sessions were separate.  One that was not was a plenary session featuring Ta-Nehisi Coates.  In front of a standing-room-only crowd, he reflected on his life as a writer, the taking down of statues linked to our ugliest histories, the complicity of the Ivy League universities in those histories, and most of all, the unbearable inequities in access to education.

Much of the conversation focused on K-12, but we in higher education are not unfamiliar with the main points of his argument.  To sum up, it is unreasonable to measure a teacher’s or school district’s success by a simple test score when many teachers are serving as educators, social workers, therapists, and security officers.  Treating the test scores of a district where students are hungry, living in unsafe neighborhoods, and lacking access to basic educational supports (books at home, paper at home, parents who are able to help with homework) as if they were comparable to the test scores in Greenwich, CT is beyond destructive.  The conditions created by the notion that tests are objective measures of anything almost inevitably lead to environments with high turnover, low morale, and predictably desperate measures.

In higher education, the parallel experience comes in the rankings of schools.  Blunt measures of retention and graduation rates tell us very little when not placed in the context of the students we serve.  Increasingly, universities like mine serve students who have graduated from the most challenging K-12 districts.  Our students are doing their best to make the leap to higher education without having had enough support in their prior education to develop some of the skills necessary for success in college.  They are also burdened with the need to work too many hours, are often food insecure, and are, on occasion, homeless.

At WCSU, we serve these students in the same classrooms as those who did have adequate preparation and support.  This is our mission, and we are committed to it.  But you can see where there may be challenges.  As we work to meet the needs of all of our students, adopting new pedagogies, developing robust support systems, and always searching for more funding for our neediest students, we are consistently aware that we are being judged by measurements that do not tell our story.  We strive for equity, and equity doesn’t live on a four-year, primarily residential campus.  We should strive to do better, but our attention is squarely on the students in the room, not on those blunt measures.  If we attended to those other measures too closely, we would have to change who we are designed to serve.

I took the opportunity to ask Mr. Coates for advice and he issued a very specific challenge.  He turned to the room full of educators and said we had to become active in advocating not just for education, but for the supporting systems whose absences are at the root of the social inequities we are then tasked with curing.  It was an aha moment for me.

Or perhaps I should say, it was a duh moment for me.  Mr. Coates is so right.  Education has long been seen as this country’s equalizer.  It is meant to provide access to the social mobility at the heart of what we think being an American means.  This is a heavy burden.  It is no accident that we have had to fight continually to make education a true equalizer, to allow everyone to pursue it.  We have a horrible history of denying access, to be sure, but access has grown nonetheless.  We continue to segregate, by laws and by funds, the quality of education available to the many, and we battle to cure those inequities in fits and starts, but battle we do.  Through it all, we continue to look to education as a cure for all society’s ills.  It remains what Henry Perkinson called an “imperfect panacea.”

But here, in higher education, perhaps we do need to broaden our advocacy.  We need to change the national formulas created by Title IV funding guidelines, to be sure, and fight for measures that better reflect the diversity of colleges and universities, not just the elite schools.  But what about the rest of it?  We know that college would be better if students didn’t arrive under-prepared.  But the conditions in K-12 are not always conducive to that preparation.  So, I’m starting my advocacy to-do list:

  1. We need universal pre-K.
  2. All schools should have free breakfast and lunch.
  3. All education funding formulas need to be re-imagined to balance the inequities that arise from de facto segregation.
  4. We need sane housing policies that undermine that segregation and put an end to homelessness.

This list is just a start, but taking these steps has the potential to change the higher education environment significantly. By addressing root causes of the uneven preparation of our students, we might be able to really focus on measures that reflect learning instead of just socio-economic contexts.  This would be real access to education, instead of the band-aid system we now have in place.

Dialogue, Evaluation, Higher Education

Evaluating Teaching

Last week many of my friends and colleagues were discussing Nancy Bunge’s essay in the Chronicle of Higher Education, in which she addressed student evaluations of teaching.  Bunge argues that these evaluations are biased (they are), damaging to the relationship between faculty and student (perhaps), and, most damning to me, that “Administrators, who are well paid for supposedly assuring that students get good educations, apparently have never heard of grade inflation or bothered to read the studies questioning the value of evaluations, since they routinely turn the job of ranking the faculty over to the people the instructors grade.”  I can’t help but respond.

I was a faculty member for many years, and during that time student evaluations (student opinion surveys, as they are called at WCSU) were a regular part of the mix of how I was evaluated.  At my former university, my average scores were not just reported to me and the dean; they were also placed in the context of a) other sections of my course, b) other courses in my department, and c) similar courses across the university.  These reports also included the average grades earned in those courses, so that I would be keenly aware of potential grade inflation.  I confess that this practice was a bit overwhelming at first, but ultimately I learned a lot from it.

Despite the apparent weight given to them, it is important to note that student evaluations were in no way the only measure of my effectiveness.  Over the years toward tenure, I was observed annually by the dean and my colleagues in the communication department, and by faculty outside of my department during my tenure year.  The feedback I received from those observations was invaluable.  They were gentle with me when offering suggestions for improvement, but even the smallest observations were helpful.  Add to this my decision to have coffee with colleagues at least once a semester to discuss teaching strategies in similar courses, and I really felt engaged in good practices.  I had developed a habit of self-evaluation.

Now I am a provost.  I am the last person to review all faculty applications for reappointment, tenure, and promotion.  Prior to my review, peers, deans, and the tenure committee have read and commented on the materials.  My decisions and observations are influenced by all of that information.  The student evaluations/opinion surveys are the least of it.  They contribute a small piece of the story, which is then contextualized by everyone else so that I have a good understanding of how to read them.

So, what does this administrator actually look for?  In terms of student feedback, I look mostly for patterns of responses.  I know well that responses to the two o’clock section of your course may differ from responses to the one at ten.  I am aware that challenging gateway courses to a major receive more criticism from students than some other introductory overviews.  I am certain that no one likes remedial math.  These things are all taken into consideration as I review student feedback.

However, if most students find your courses (not just one) and your expectations unclear, I explore that question.  I move to the observations of your peers.  They will tell me if I should pay attention to those student concerns.  I also look at your syllabi.  They, too, will tell me whether those concerns are warranted.  And most importantly, I look at your self-evaluation.

What I look for, above all, is faculty who are constantly questioning their approaches to teaching.  Do they look at the results of their efforts and adjust their techniques to try to better engage students?  Do they try new pedagogies and reflect on their successes or failures?  Do they revise courses, infusing them with new materials when warranted?  Do they take an honest look at what students are saying, on opinion surveys or in their engagement with the material, and revise or clarify?  And do they reflect on their efforts honestly, celebrating successes and re-imagining courses that didn’t go well?

It is my job to cultivate that attitude toward teaching.  When I meet with new faculty, I do my best to reassure them that not all classes will receive high praise from students.  Indeed, I’d be worried if they did, because we should be trying new things, not all of which will work.  I celebrate faculty experimentation through awards, announcements, and an annual faculty panel on teaching that I organize.  I also distribute funds for faculty to attend conferences and workshops that explore new approaches to teaching.  This is my job and I love doing it.

So what of those student opinion surveys?  I think they are helpful up to a point, if used in the context I have described.  They must be read with nuance, sorting through biases and making room for growth and experimentation.  I also think there are ways to improve how they are constructed, but they will always reflect the attitudes the culture holds toward various groups.  Anyone reading them must take that into account.

While I don’t fully agree with Bunge’s argument, there is a way in which this request for student feedback can cultivate a consumer mentality, particularly if it appears to be a top-down request.  That is demeaning to the profession.  But not asking for feedback can be demeaning to the student.  If students can’t provide feedback, we devalue their experience, and I suspect we reinforce passive attitudes toward learning.

Perhaps we can improve the use of student feedback by changing how we gather it.  One way to start is to offer multiple opportunities for feedback during the semester, instead of waiting until the last week of classes.  Thirds seems like a good approach to me: let’s ask for feedback three times instead of once.

The first two rounds of feedback should be collected by professors, helping them to clarify where necessary and change course if appropriate.  This might start with three simple questions:

  1. What is going well in this course (please consider the texts, assignments, and the in-class experience)?
  2. What needs clarification?
  3. Is there anything else I should know?

Faculty might read and respond right then, or perhaps in the next class, but they should respond.  Giving students opportunities to discuss the course during the semester can help cultivate trust and allow them to feel that they are part of creating the course.

Then a final response, collected by a student in the class at the end of the semester, should reflect the habit of feedback that has been cultivated.  It would grow out of a practice of open dialogue about the course, rather than a single opportunity to voice an opinion. These final questionnaires should probably be short, too, with some reference to the other opportunities for feedback.

This approach is less consumer oriented than the once-a-semester evaluation, and I think it would feel less punitive or risky for some faculty. In the best case, it could help students and faculty feel co-ownership of the course outcomes, which is a real win for everyone involved.

So, I don’t know what all administrators think about the role of student feedback in their evaluation of faculty, but I can say that this is how I see it.  I can also say that I don’t know any administrator who uses a single measure to evaluate faculty, and it should never be so.