Accountability, Change, Higher Education

The Limits of Ad Hoc Committees

In 2012, when I first came to Western Connecticut State University, one of my new colleagues asked me if I was more of a spreadsheet person or an idea person. I happily responded “ideas,” and then found my life fully immersed in spreadsheets. The truth is that I am both. I have ideas every single day, but I need to ground them in data, consider the context and cost of an initiative, and plan how to measure the impact. As far as I can tell, one cannot lead without having some ideas and then vetting them; it is the job.

As I reflect on the work we have done over the last ten years, I see that a lot of good has come out of the back and forth between data and ideas. Our revised general education curriculum seems to have had a positive impact on our graduation rates. I can’t measure causality on this one, but I think the implementation of the First Year Navigation course as part of the gen-ed, coupled with greater transparency in our requirements, must be part of the reason why those numbers have improved. I can say for sure that after it was implemented, the number of requests for last-minute waivers or course substitutions, made because students had missed a hidden rule in the requirements, decreased. There is still room for improvement, but I think the change has overall been a win for student success. This change was accomplished through our Committee on General Education, working through governance and responding to input. Not everyone was happy, but decisions were made.

Driven by enrollment data and the not-so-great news about declining numbers of high school graduates in New England, I have worked with colleagues to support the development of new graduate degree programs. These programs have been much more informed by regional workforce needs (nursing, addiction counseling, education, healthcare leadership) than our older paths to curriculum development were. Indeed, we have had to face facts about enrollment data in some of those older degree programs, and we made the decision to close a few. Whether these new programs will be enough to fill the gaps in recruitment at the undergraduate level remains to be seen, but the signs are promising, and I am monitoring outcomes. The curricular changes followed our normal governance processes, with a little extra support in the form of projected demand for new programs and a commitment to looking at our own enrollment trends for faltering programs. Not everyone was happy, but decisions were made.

Close scrutiny of our retention rates has led to the development of our new peer mentor program. After years of asking ourselves who we were losing, and, in some cases, just feeling overwhelmed by the many potential factors, a look at the patterns in our retention data gave us clear direction. As the chief evaluator of our outcomes data, I instigated this conversation. Starting with a standing university committee, I thought the path would be relatively smooth. Unfortunately, it was not, and after being rejected at that level, I moved to an ad hoc committee. The good news is that some talented faculty and staff worked together to move this one forward. The bad news is that it took three years to implement. There was progress, and we are measuring outcomes, but I was unsuccessful in communicating the urgency of the situation. Still, decisions were made.

As a strong believer in shared governance, I do my very best to move initiatives forward through our normal university processes. At WCSU, we have very positive governance structures in place, structures that I am immensely proud of, because they recognize the collaboration that must take place among faculty, staff, students, and administration. Most of our committees have representation from all of those constituencies. This encourages free discussion and engagement with ideas from all areas. Most of the time this works very well, if somewhat ploddingly. Nevertheless, there are moments when an idea does not really fit in our normal structures, and in those cases I generally ask for an ad hoc committee to be appointed (sometimes by me, sometimes through Senate leadership) to explore the idea. Unfortunately, these committees do not seem to reach the point where decisions can be made.

Over the last few years, I have relied on ad hoc committees to try to help me sort through several initiatives or questions. Of those, only one has managed to truly move an idea forward. It has been a lesson in leadership for me. To be clear, I have utterly failed to impress upon those involved the importance of the initiatives for the university’s future. It is likely that I was less than clear about the goals as well. I take the blame for the lack of clarity, but I am perplexed as to what to do next.

It isn’t that I expected those committees to return reports that looked exactly like what I thought they would when we started the conversation. If that were the case, it would not be an ad hoc committee exploring a question, but an implementation team. The point of the committees was to look at the starting material or question and then consider a variety of ways to address it. In this spirit, I met with each group to outline what I thought the pertinent questions about the topic were and then opened things up for questions and ideas. In each case, the committees asked for clarifications, which I tried to provide, and then they went to work. I stepped back and let the group’s wisdom take hold.

Unfortunately, in nearly every case, the committee either was unable to resolve its internal debates or veered off in an unanticipated direction that completely transformed the original charge. Oh well. People did their best. Obviously I was not clear enough. That’s life.

Except some of these committees were formed to address urgent questions, questions that could have an impact on enrollment, or on campus climate, or on the general direction of the university as we adjust to enrollment challenges and recover from a pandemic. “Oh well” just doesn’t cut it. I have failed to lead.

This puts me in a quandary. You see, I don’t just say I embrace shared governance; I mean it. I know the limits of my imagination and I value the dialogue that our processes support. But I have clearly reached the limits of ad hoc committees, because they are not leading to action. We need to take action. It is urgent.

I need a new path. I have to figure out how to move urgent things forward, things that have the potential to transform or bolster our campus, so that we might thrive in the face of that demographic cliff we are all staring at. Not all of our next steps will fit neatly into our defined structures, so I can’t just default to the usual paths. I still value the input of the many, in all that we do, but I am worried about pace, and I am worried about distractions. Decisions have to be made. It is time to regroup.

Accountability, Evaluation, Thinking

Remember the Qualitative Data

The phrase “data-driven decision-making” has become the gold standard for proposing policies, research, or other plans to improve outcomes in higher education. You cannot apply for funding for anything without some evidence to justify the proposed project and a detailed evaluation plan. This, of course, should not be startling for higher education. Building cases for our research and related decisions is at the heart of all that we do. What has changed is what we define as sufficient evidence. Spoiler alert – it is quantitative data.

Much of this transformation is to the good. We have new tools that allow us to gather quantitative data much more easily than we once did. With SurveyMonkey or Qualtrics, a well-designed questionnaire can be easily launched and its data analyzed. Getting a good sample is still a challenge, but digital tools make reaching potential respondents easier than ever before. The tools for statistical analysis have similarly evolved, making it possible for the analysis section of a study to include functions that were once reserved for the most advanced mathematical thinkers. And with access to large databases, the sky’s the limit for exploring the behaviors of various populations.

Then there is the power of “big data,” so evident in medical research right now. With access to studies from all over the world, scientists can reach a level of analysis that is much more nuanced than we once experienced. It is so exciting to see that, with all of the information available, it is possible for physicians to move from a generalized chemo cocktail to one tailored more specifically to the genetic traits of an individual. It is truly breathtaking to see the advances in science that these tools provide.

In higher education, the data-driven movement is really impacting our evaluation of university outcomes at every level. We move from the big picture – overall graduation and retention rates – to fully scrutinizing factors that might show us that we are systematically leaving specific groups of students behind. These disparities are often referred to as the achievement gap, and colleges and universities are no longer (and should no longer be) satisfied with gaps in retention and graduation that break down along gender, income, first-generation status, and other socio-cultural lines.

Attending to these gaps is, indeed, driving policies and programs at many universities. At WCSU, it has led to a revision of our Educational Access Program (Bridge) and to the addition of a peer mentor program. We’re tracking the impact on our overall retention rates, but also taking a deeper dive into different clusters in our community to see where we need to do more. What has really changed for us is that we are designing these efforts with follow-up analyses built in from the start, so that we don’t just offer things and then move on. We have a plan to refine as we go. This is a good change.
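For the curious, the first pass of that deeper dive is conceptually simple. Here is a minimal sketch in Python with pandas, assuming a hypothetical student-level extract; the file name and column labels are illustrative stand-ins, not our actual institutional research data.

```python
# Minimal sketch: disaggregating a one-year retention flag by subgroup.
# The file and column names here are hypothetical placeholders.
import pandas as pd

students = pd.read_csv("fall_cohort.csv")

# The big picture: one overall retention rate.
print(f"Overall retention: {students['retained'].mean():.1%}")

# The deeper dive: the same rate, broken out by each cluster we track.
for group in ["first_gen", "pell_eligible", "gender", "race_ethnicity"]:
    print(f"\nRetention by {group}:")
    print(students.groupby(group)["retained"].mean().round(3).to_string())
```

The arithmetic is the easy part; the real work is choosing which clusters to examine and committing to act on what the breakdowns reveal.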

Still, this focus on statistical data can lead to gaps in understanding that are significant. As always, our results are only as good as the questions we ask. Our questions are only as good as our ability to see beyond our worldview and make room for things we never anticipated. This is a challenge, of course, because we often don’t realize we are making assumptions or that our worldviews are limited. It is the nature of our disciplinary frames; it is the nature of being human.

Although my education has included anthropology and media ecology (both with lots of attention to our biases and to qualitative data), I realize that I have been struggling to find ways to incorporate more qualitative analysis into all that we are doing at WCSU. It is tricky because it is more labor- and time-intensive than analyzing statistical outcomes or neatly structured survey data. It is also tricky because we need to be informed by the qualitative without falling into the problem of generalizing from the single case. And, of course, it is tricky because, well, it takes sustained practice with ethnography to do qualitative work well.

I was reminded of this as I began to read Gillian Tett’s Anthro-Vision: A New Way to See Business and Life. This text explores the ways in which the habits of anthropology can be transformative to business processes of all kinds. It isn’t so much a “new” way to see things – after all, anthropology has existed as a discipline for over a century – nor is it new to see it as a tool of business and governments (see Edward T. Hall for a glimpse of the past) – but it is an excellent reminder that anthropology offers a powerful lens. Tett’s book is full of examples of missteps in the tech industry and in marketing that happened because those in charge never even questioned their assumptions about how people interact with technology; the hiring of a full-time anthropologist helped to address some of that. She also reminds us of the difference between asking questions and observing behavior – not because people lie, but because our questions are calculated to get the responses we expect and therefore miss the bigger context. The problem of narrow lenses goes beyond market research; it helps explain socio-political challenges and misunderstandings on a global scale. These are important reminders, all.

So, I am reminded to take the time to dive into the questions that most statistical research will miss. There is more to understand than the percentage of students who answer the question “How often do you use the tutoring resource center?” with often, sometimes, or never. There’s the whole long list of feelings that complicate seeking help. There’s that long list of competing priorities (work, co-curricular commitments, family). There are the things I haven’t thought of yet that are barriers to using the resources we are providing. There is research on this, I know, but I think there is more to know.

Yes, I am a fan of quantitative data, but I must admit that I have learned much more from qualitative data over the course of my life. The insights of unexpected interactions, and the opportunities to observe for long(ish) periods of time, have improved my questions and understandings, and generated much more interesting follow-up work than summary data have ever done. This is important for the work on academic success that we are engaged in at our universities. It is even more important (and not at all unrelated) when we try to see the barriers to creating a diverse, equitable, and inclusive environment. I’m thinking it may be time to put a few anthropologists on the institutional research payroll.

Accountability, Quality, Return on Investment

Outcomes-Based Funding Metrics

This morning I read with interest a report from The Education Trust entitled Re-Imagining Outcomes-Based Funding. I was following up on Emma Whitford’s piece in Inside Higher Ed that focused on outcomes-based funding (OBF) as a tool for supporting equity. I must admit, I shuddered as I considered the hundred ways outcomes-based funding can go wrong, but Whitford and the report helped me to think about things in new ways. Chief among them was that this approach actually supports a focus on who campuses admit, not just retention and graduation rates, and suggests that funding should take that into account. It seems we are getting somewhere on raising awareness about the bluntness of those measures. Hooray.

As I read through the metrics suggested, I saw some thoughtful connections between the students enrolled and the ways that our legislators might think about funding. Instead of just looking at retention and graduation rates, this approach prioritizes investing in campuses that serve more diverse student bodies. It also brings in an important new variable for OBF – campus climate.

Campus climate is often an invisible component in the retention and graduation rates of a university. We spend a lot of time looking at ways to support under-prepared students, and we seek out scholarship opportunities for our under-funded students; these are really important things to do. But for first-generation students and students of color, these efforts are not sufficient. They must feel welcome.

So how do we do that? Well, campus climate surveys are one way. Interestingly enough, they are not inexpensive to administer, and they are even more expensive to use. It isn’t enough to gather the data; we need qualified personnel to analyze it and help the campus community find opportunities to improve. The funding for this work has to be new dollars. If it isn’t, it will get cut from the budget as soon as we have to prioritize our efforts; we will choose direct student support over the broader climate every time. So, I’m glad this idea was raised in the report, but there are important financial implications to consider.

Then there was another piece in the report that gave me pause. In the section called “Ten Steps for Design” (of outcomes-based funding models), the following was step five:

“Discourage institutions from reducing access to high-quality degrees or credentials for students from low income backgrounds and students of color.”

This statement is a response to the negative consequences of OBF as it has been implemented in the past. In short, the easiest way to improve retention and graduation rates is to change your admissions standards. Better-prepared students do better than those on the margins. Better-funded students do better than those who struggle to pay for their education. First-generation college students manage more uncertainty than their second- or third-generation peers and may be retained at somewhat lower levels. All of these students are likely to take longer than four years to graduate. Yes, the older model incentivized a less inclusive campus. The new suggested strategies are a marked improvement.

At WCSU, about 35% of our student body are the first in their families to go to college. We are a relatively affordable school and find that this is attractive to lots of Pell-eligible students. We are also an increasingly diverse community, something we view as entirely positive, but our history is less so, and we are still learning about our invisible barriers and biases as we seek to be an inclusive campus. Most of what we do fits well into this re-imagined OBF framework with its focus on equity. In theory, we should benefit from greater support for our campus under this model.

But I must admit I do worry about additional unintended consequences if timelines for measuring effectiveness are not robust enough and if there is not continuous dialogue with our state representatives about how they read our metrics. For example:

  1. Even when recruiting and admission standards are comparable, a majority residential campus will do better on retention and graduation measures than a majority commuter campus. It is simply easier to help a student who is struggling when they have a regular presence in the campus community.
  2. Sufficient funding to create comparable experiences for our needier students is also an important consideration. Opportunities for internships, research experiences, or study abroad may require a cash infusion; otherwise, higher-need students will skip them in favor of more work hours, because they simply need the funds. Unfortunately, these are the same high-impact educational experiences that inspire degree completion, encourage applications to graduate school, and broaden career opportunities. Without that funding stream, schools that serve the less wealthy are likely to have outcomes measures that look weaker than those of their better-funded peers.
  3. Finally, timelines for evaluation are critical. In higher education, improvement in anything cannot really be seen in under six years. While degrees are imagined in four-year increments, the students who need more support tend to take five to six years. The effectiveness of an intervention on retention could show up quickly, but its sustained impact will take time, and most other interventions are better seen over the course of a degree. Even six years is a minimum, because you will only be measuring a first cohort at that mark (a program launched for the fall 2024 entering class, say, yields its first six-year graduation number in 2030). Sustained improvement is better captured in 8- to 10-year cycles.

These nuances are hard to convey when elections run on 2- and 4-year cycles. No matter how invested elected officials are in education, there is room for too narrow a view. So, I remain skeptical about the ability to create an outcomes-based funding model that can truly support great, equitable education. But I am very excited to see equity put at the center of the question. That is a great leap forward.