Change, Evaluation, Higher Education

The Pace of Change

It is the end of another academic year, and as we move through award ceremonies, research presentations, and finally commencement, I take the time to look at my to-do list from last fall. It is a bit deflating to see all of the things I didn’t complete. I expect some of this to happen; after all, not all of my plans were good ones. A few things actually got done, some were re-imagined, a few were abandoned, and some just didn’t get the attention they needed to come to fruition. It isn’t all bad, but I confess to being a bit disappointed in myself.

Then I remember: higher education is designed to slow the pace of change. While we are great places for advancing knowledge (yes, new discoveries and inventions do come from higher education), we are best at slow deliberation. We analyze cultural patterns large and small and try to see them in context, rather than jumping to conclusions. We look at small changes in forecasting models for weather or economics, tweaking them slightly each year to inch closer to a better predictor, and then analyze the results of those changes. We reflect upon the past to try to divine how we got to this moment. Change is not something we’re avoiding; it is something we’re vetting.

So here I am, an academic with an administrative role. I understand the care with which my colleagues approach change, and I share their suspicion of the innovation of the week. The brakes they apply, in the form of more questions, more input, and more research, are justified. However, I also spend my time looking at the whole organization and the whole student experience, and I see patterns of successes and failures that call for us to move a little faster. I feel the push and pull of the deliberative mindset and the urgency of responding to areas for improvement.

Take, for example, the way this generation of learners is coming to us. It is well documented that their experience of reading is very different from that of the generations before them. (See “The Fall and Rise of Reading” by Steven Johnson in the Chronicle of Higher Education.) It isn’t that students can’t read; it’s that they really haven’t had to grapple with critical reading. The books read and tests taken prior to coming to college are all about short forms, summaries, and highlights. And of course, endless interaction on the Internet further reduces the time spent with texts. Reflective reading of long-form texts is just not what they are used to doing. We know this to be true, yet we haven’t reviewed the literature on how to teach critical reading and then incorporated it into our classes.

Maybe we think this isn’t our job. High school was supposed to do it, so just pile on the readings and the students will get it eventually. But they don’t. We have to adjust our teaching strategies, and quickly, because we’re losing too many students to this gap in skills. Even worse, we are diminishing the conversations we’re having in our classes because we’re not really expecting students to do the reading anymore. This is a terrible spiral, but the good news is we can stop it. We have to act, though, and sooner rather than later.

And then there is the issue that really made me sigh this morning. After repeated reports on who struggles to succeed at my university, I concluded that the at-risk group comprises students who entered with less than an 85 average in high school. I learned this two years ago and started a conversation about advising strategies to address that group. At the time, I used the words “intrusive advising,” a term found in much of the advising literature. Several of my colleagues objected to the term, so we moved to the idea of enhanced advising. I brought together a group to develop a protocol, and nothing happened.

Then I appointed some faculty members to investigate ways that we might develop an advising protocol for those students. Like all good faculty members, they went out and talked to their peers. While they learned a few useful things about how to support faculty as advisors (and I will act on those findings), in reality, enhanced advising was set aside in favor of better advising for all. That is a good idea, but it will take too long to identify and scale those improvements. Meanwhile, the at-risk students are left with no direct support.

I just received an updated report on at-risk students, and the group is still students who earned less than an 85 average in high school. Retention rates for this group are at least 10% lower than for students at 85 or above, and the differences in graduation rates are even more stark. There is plenty of literature about how to support these students, so I’m feeling an urgency.

So I’m left pondering ways to balance the deliberation with the urgency. I do respect the reflective and thoughtful nature of my colleagues, but when I keep the larger patterns of student success (or lack thereof) in view, the pace of change is just too slow. I’m going to have to find a better balance, a better way to move the deliberation along just a little faster. Because what I don’t want is to see this on the unfinished list again next year.

 

DeVos, Evaluation, Higher Education

Under Construction: Peer Review

On Monday, Education Secretary Betsy DeVos released a list of policy proposals as part of negotiated rulemaking related to the Higher Education Act. While I have many questions about the goals of these revisions and how they might impact those most susceptible to education scams, I am intrigued by the part of the list that addresses accreditation.

Several of the suggested policies appear to be a direct assault on regional accreditors. The reasons they are a target are complicated, but one seems to be a sense that they are creating barriers to innovation in education. This argument is fraught with contradictions and for-profit motives, to be sure, but the role of accrediting bodies wedded to traditional non-profit educational institutions is not unworthy of review.

Over the past 20 years, regional accreditors have truly transformed their evaluation criteria from things universities have (faculty, libraries, students, and buildings) to things universities do (retention, graduation, learning and degree outcomes, and post-graduation placements). This transformation was provoked, in part, by the emergence of for-profit, online education and our need to grapple with how to evaluate learning in these differing environments. We resisted, argued, and then re-imagined our responsibilities. We maintained our right to define the parameters of “quality” education, but we no longer argue that we can’t assess our efforts.

As we all became (relatively) comfortable with program assessment and the focus on outcomes, our accrediting bodies asked us to improve our approaches and make assessment a regular part of what we do. In the process, we opened the door to developing quality online education because we articulated what our graduates should know. Whatever the environment, similar degrees should have similar outcomes. This is the best possible outcome of a robust peer review process. Relying on faculty and administrators from other universities to look at what we do and provide educated, responsible feedback works in this system. We understand what we’re looking at because there is a lot of common ground. Let’s be honest: this change was hard. We didn’t like doing it, but here we are.

Now we face something new, and the peer review system is going to struggle to define its role again. In recent years there have been closures of large for-profit and small non-profit educational institutions, and along with those closures have come questions about the oversight provided by accreditors. The withdrawal of accreditation is powerful; it will close an institution. It is therefore imperative that these bodies develop good ways of monitoring finances. As participants in peer review accreditation processes, we are going to have to figure out reasonable questions to ask, questions that can strengthen the evaluation of less traditional and traditional educational institutions alike.

During an accreditation visit, when a university or college is struggling financially, peers are asked to offer feedback. This can be problematic. Most of us from the non-profit world can read a budget and see shortfalls in funding, but we are less prepared to provide insight into how to recover from a shortfall. Evaluating the quality of education and the supporting infrastructures of faculty governance and transparency is something we are comfortable with. Developing plans for capturing market share and differentiating our educational “products” does not come easily to us.

At the heart of this challenge is not that we don’t value or support innovation, as DeVos seems to think, but that we see innovation in terms of learning, not in terms of money. We are not blind to changing work landscapes; we respond to those all the time. But we do so on the assumption that we are meeting a societal need, not a bottom-line need. Now we are faced with considering some of the questions that a market-oriented evaluation might raise. We will argue and resist, but then we must figure it out, for three very important reasons.

Reason 1: Identifying a problem (financial or otherwise) during peer review will not be helpful if we don’t offer some guidance about how to recover. Ours is a collaborative process in which we learn from each other. Ignoring the learning that needs to take place in the financial category makes peer evaluation a threat, not a helpful process. We have to figure out how to treat this part of the review as a true mentoring opportunity.

Reason 2: We need to continue to shape the definitions of quality and viability, just as we did in the early conversations about online degrees. By being part of that conversation, we made sure that online learning was not narrowly evaluated on content delivery, but on learning experiences. We need to shape the questions surrounding financial viability the same way, so that we don’t end up with profit as a primary goal. We want reasonable comparisons to be made between for-profit and non-profit institutions, comparisons that keep students at the center of all we do.

Reason 3: Education is not about widgets. Students are complex and require nuanced educational experiences that reflect an understanding of their unique backgrounds and needs. And the focus of our degrees must anticipate not the job of the moment, but the long-term skills and habits of mind that our students will need as the landscape of work continuously changes. Our traditional, non-profit structures facilitate that nuanced, long-term thinking in ways that are not supported in organizations that produce quarterly reports. This is the thinking valued by the regional accreditors associated with these types of educational organizations.

Higher education in the United States has long been built on peer review. It is our strength, and it is what allows us to be innovative. One look at the inventions, discoveries, and even changing degree titles will tell the story of responsible, not reactive, innovation. That story is supported by the integrity of our peer review processes, and we need to hold onto them, because we know our context best.

Yet we are in for some hard work as we reshape the questions we are asking of each other. Demographics have changed the context for non-profit and for-profit organizations alike. Finances will have to be considered. But the questions we ask must reflect our commitment to creating excellent learning environments for all. If they do, the value of our accrediting processes will remain strong.

 

Evaluation, Higher Education, Inclusion

Unfair Measures

Last week I attended the joint meeting of the New England Association of Schools and Colleges (NEASC) and the New England Commission of Higher Education (NECHE). At this gathering there were K-12 educators and higher education administrators, and most of our sessions were separate. One that was not was a plenary session featuring Ta-Nehisi Coates. In front of a standing-room-only crowd, he reflected on his life as a writer, the taking down of statues linked to our ugliest histories, the complicity of the Ivy League universities in those histories, and most of all, the unbearable inequities in access to education.

Much of the conversation focused on K-12, but we in higher education are not unfamiliar with the main points of his argument. To sum up: it is unreasonable to measure a teacher’s or school district’s success by a simple test score when many teachers are serving as educators, social workers, therapists, and security officers. Measuring the test scores in a district where students are hungry, living in unsafe neighborhoods, and lacking access to basic educational supports (books at home, paper at home, parents who are able to help with homework), as if those scores could be in any way comparable to the test scores in Greenwich, CT, is beyond destructive. The conditions created by the notion that tests are objective measures of anything almost inevitably lead to environments with high turnover, low morale, and predictably desperate measures.

In higher education, the parallel experience comes in the rankings of schools. Blunt measures of retention and graduation rates tell us very little when not placed in the context of the students we serve. Increasingly, universities like mine serve students who have graduated from the most challenging K-12 districts. Our students are doing their best to make the leap to higher education without having had enough support in their prior education to develop some of the skills necessary for success in college. They are also burdened with the need to work too many hours, are often food insecure, and are, on occasion, homeless.

At WCSU, we serve these students in the same classrooms as those who did have adequate preparation and support. This is our mission, and we are committed to it. But you can see where there may be challenges. As we work to meet the needs of all of our students, adopting new pedagogies, developing robust support systems, and always searching for more funding for our neediest students, we are consistently aware that we are being judged by measurements that do not tell our story. We strive for equity, and equity doesn’t live on a four-year, primarily residential campus. We should strive to do better, but our attention is squarely on the students in the room, not on those blunt measures. If we attended to those other measures too closely, we would have to change who we are designed to serve.

I took the opportunity to ask Mr. Coates for advice and he issued a very specific challenge.  He turned to the room full of educators and said we had to become active in advocating not just for education, but for the supporting systems whose absences are at the root of the social inequities we are then tasked with curing.  It was an aha moment for me.

Or perhaps I should say, it was a duh moment for me. Mr. Coates is so right. Education has long been seen as this country’s equalizer. It is meant to provide access to the social mobility at the heart of what we think being an American means. This is a heavy burden. It is no accident that we have had to fight continuously to make education a true equalizer, to allow everyone to pursue it. We have a horrible history of denying access, to be sure, but access has grown nonetheless. We continue to segregate, by laws and by funds, the quality of education available to the many, and we battle to cure those inequities in fits and starts, but battle we do. Through it all, we continue to look to education as a cure for all of society’s ills. It remains what Henry Perkinson called an “imperfect panacea.”

But here, in higher education, perhaps we do need to broaden our advocacy. We need to change the national formulas created by Title IV funding guidelines, to be sure, and fight for better measures of the diversity of colleges and universities, not just the elite schools. But what about the rest of it? We know that college would be better if students didn’t arrive under-prepared. But the conditions in K-12 are not always conducive to that preparation. So, I’m starting my advocacy to-do list:

  1. We need universal pre-K.
  2. All schools should have free breakfast and lunch.
  3. All education funding formulas need to be re-imagined to balance the inequities that arise from de facto segregation.
  4. We need sane housing policies that undermine that segregation and put an end to homelessness.

This list is just a start, but taking these steps has the potential to change the higher education environment significantly. By addressing the root causes of the uneven preparation of our students, we might be able to really focus on measures that reflect learning instead of just socioeconomic contexts. This would be real access to education, instead of the band-aid system we now have in place.

Dialogue, Evaluation, Higher Education

Evaluating Teaching

Last week many of my friends and colleagues were discussing Nancy Bunge’s essay in the Chronicle of Higher Education, in which she addressed student evaluations of teaching. Bunge argues that these evaluations are biased (they are), damaging to the relationship between faculty and student (perhaps), and, most damning to me, that “Administrators, who are well paid for supposedly assuring that students get good educations, apparently have never heard of grade inflation or bothered to read the studies questioning the value of evaluations, since they routinely turn the job of ranking the faculty over to the people the instructors grade.” I can’t help but respond.

I was a faculty member for many years, and during that time student evaluations (student opinion surveys, as they are called at WCSU) were a regular part of the mix of how I was evaluated. At my former university, my average scores were not just reported to me and the dean; they were also placed in the context of a) other sections of my course, b) other courses in my department, and c) similar courses across the university. These reports also included the average grades earned in those courses, so that I would be keenly aware of potential grade inflation. I confess that this practice was a bit overwhelming at first, but ultimately, I learned a lot from it.

Despite the apparent weight given to them, it is important to note that student evaluations were in no way the only measure of my effectiveness. Over the years leading to tenure, I was observed annually by the dean and my colleagues in the communication department, and by faculty outside my department during my tenure year. The feedback I received from those observations was invaluable. They were gentle with me when suggesting improvements, but even the smallest observations were helpful. Add to this my decision to have coffee with colleagues at least once a semester to discuss teaching strategies in similar courses, and I really felt engaged in good practices. I had developed a habit of self-evaluation.

Now I am a provost.  I am the last person to review all faculty applications for reappointment, tenure, and promotion.  Prior to my review, peers, deans, and the tenure committee have read and commented on the materials.  My decisions and observations are influenced by all of that information.  The student evaluations/opinion surveys are the least of it.  They contribute a small piece of the story, which is then contextualized by everyone else so that I have a good understanding of how to read them.

So, what does this administrator actually look for? In terms of student feedback, I look mostly for patterns of responses. I know well that responses to the two o’clock section of your course may differ from responses to the section at ten. I am aware that challenging gateway courses to a major receive more criticism from students than some other introductory overviews. I am certain that no one likes remedial math. These things are all taken into consideration as I review student feedback.

However, if most students find your courses (not just one of them) and your expectations unclear, I explore that question. I move to the observations of your peers; they will tell me whether I should pay attention to those student concerns. I also look at your syllabi; they, too, will tell me whether I should pay attention. And most importantly, I look at your self-evaluation.

What I look for, above all, are faculty who are constantly questioning their approaches to teaching. Do they look at the results of their efforts and adjust their techniques to better engage students? Do they try new pedagogies and reflect on their successes or failures? Do they revise courses, infusing them with new materials when warranted? Do they take an honest look at what students are saying, on opinion surveys or in their engagement with the material, and revise or clarify? And do they reflect on their efforts honestly, celebrating successes and re-imagining courses that didn’t go well?

It is my job to cultivate that attitude toward teaching. When I meet with new faculty, I do my best to reassure them that not all classes will receive high praise from students. Indeed, I’d be worried if they did, because we should be trying new things, not all of which will work. I celebrate faculty experimentation through awards, announcements, and an annual faculty panel on teaching. I also distribute funds for faculty to attend conferences and workshops that explore new approaches to teaching. This is my job, and I love doing it.

So what of those student opinion surveys? I think they are helpful to a point, if used in the context I have described. They must be read with nuance, sorting through biases and making room for growth and experimentation. I also think there are ways to construct them better, but they will always carry the attitudes that the culture holds toward various groups. Anyone reading them must take that into account.

While I don’t fully agree with Bunge’s argument, there is a way in which this request for student feedback can cultivate a consumer mentality, particularly if it appears to be a top-down request. That is demeaning to the profession. But not asking for feedback can be demeaning to the student. If students can’t provide feedback, we are devaluing their experience, and I suspect we would be reinforcing passive attitudes toward learning.

Perhaps we can improve the use of feedback from students by changing how we gather it. One way to start is to create multiple opportunities for feedback during the semester, instead of waiting until the last week of classes. Thirds seems like a good approach to me: let’s ask for feedback three times instead of once.

The first two opportunities for feedback should be collected by professors, helping them to clarify where necessary and change course if appropriate. This might start with three simple questions:

  1. What is going well in this course (please consider the texts, assignments, and the in-class experience)?
  2. What needs clarification?
  3. Is there anything else I should know?

Faculty might read and respond right then, or perhaps in the next class, but they should respond.  Giving students opportunities to discuss the course during the semester can help cultivate trust and allow them to feel that they are part of creating the course.

Then a final response, collected by a student in the class at the end of the semester, should reflect the habit of feedback that has been cultivated.  It would grow out of a practice of open dialogue about the course, rather than a single opportunity to voice an opinion. These final questionnaires should probably be short, too, with some reference to the other opportunities for feedback.

This approach is less consumer oriented than the once-a-semester evaluation, and I think it would feel less punitive or risky for some faculty. In the best case, it could help students and faculty feel co-ownership of the course outcomes, which is a real win for everyone involved.

So, I don’t know what all administrators think about the role of student feedback in their evaluation of faculty, but I can say that this is how I see it.  I can also say that I don’t know any administrator who uses a single measure to evaluate faculty, and it should never be so.