Accountability, Evaluation, Thinking

Remember the Qualitative Data

The phrase “data-driven decision-making” has become the gold standard for proposing policies, research, or other plans to improve outcomes in higher education. You cannot apply for funding for anything without some evidence to justify the proposed project and a detailed evaluation plan. This, of course, should not be startling for higher education. Building cases for our research and related decisions is at the heart of all that we do. What has changed is what we define as sufficient evidence. Spoiler alert – it is quantitative data.

Much of this transformation is to the good. We have new tools that allow us to gather quantitative data much more easily than we once did. With Survey Monkey or Qualtrics, a well-designed questionnaire can be launched and analyzed with ease. Getting a good sample is still a challenge, but digital tools make reaching potential respondents easier than ever before. The tools for statistical analysis have similarly evolved, allowing the analysis section of a study to perform functions that were once reserved for the most advanced mathematical thinkers. And with access to large databases, the sky’s the limit for exploring the behaviors of various populations.

Then there is “big data,” which is proving so powerful in medical research right now. With access to studies from all over the world, scientists can reach a level of analysis that is much more nuanced than we once experienced. It is exciting to see that, with all of the information available, it is possible for physicians to move from a generalized chemo cocktail to one tailored to the genetic traits of an individual. It is truly breathtaking to see the advances in science that these tools provide.

In higher education, the data-driven movement is reshaping our evaluation of university outcomes at every level. We move from the big picture – overall graduation and retention rates – to scrutinizing factors that might show us we are systematically leaving specific groups of students behind. Colleges and universities are no longer (and should no longer be) satisfied with gaps in retention and graduation, often referred to as the achievement gap, that break down along gender, income, first-generation status, and other socio-cultural groupings.

Attending to these gaps is, indeed, driving policies and programs at many universities. At WCSU, it has led to a revision of our Educational Access Program (Bridge) and to the addition of a peer mentor program. We’re tracking the impact on our overall retention rates, but also taking a deeper dive into different clusters in our community to see where we need to do more. What has really changed for us is that we are designing these efforts with follow-up analyses built in from the start, so that we don’t just offer things and then move on. We have a plan to refine as we go. This is a good change.

Still, this focus on statistical data can lead to gaps in understanding that are significant. As always, our results are only as good as the questions we ask. Our questions are only as good as our ability to see beyond our worldview and make room for things we never anticipated. This is a challenge, of course, because we often don’t realize we are making assumptions or that our worldviews are limited. It is the nature of our disciplinary frames; it is the nature of being human.

Although my education has included anthropology and media ecology (both with lots of attention to our biases and qualitative data), I realize that I have been struggling to find ways to incorporate more qualitative analysis into all that we are doing at WCSU. It is tricky because it is more labor- and time-intensive than analyzing statistical outcomes or neatly structured survey data. It is also tricky because we need to be informed by the qualitative without falling into the problem of generalizing from the single case. And, of course, it is tricky because, well, it takes sustained practice with ethnography to do qualitative work well.

I was reminded of this as I began to read Gillian Tett’s Anthro-Vision: A New Way to See Business and Life. This text explores the ways in which the habits of anthropology can be transformative to business processes of all kinds. It isn’t so much a “new” way to see things (after all, anthropology has existed as a discipline for over a century), nor is it new to see it as a tool of business and governments (see Edward T. Hall for a glimpse of the past), but it is an excellent reminder that anthropology offers a powerful lens. Tett’s book is full of examples of missteps in the tech industry and in marketing because those in charge never even questioned their assumptions about how people interact with technology. The hiring of a full-time anthropologist helped to address some of that. She also reminds us of the difference between asking questions and observing behavior – not because people lie, but because our questions were calculated to get the responses we got and, therefore, missed the bigger context. The consequences of our narrow lenses go beyond market research; they help explain socio-political challenges and misunderstandings on a global scale. These are important reminders, all.

So, I am reminded to take the time to dive into the questions that most statistical research will miss. There is more to understand than calculating the percentage of students who answer the question: “How often do you use the tutoring resource center?” with often, sometimes, or never. There’s the whole long list of feelings that complicate seeking help. There’s that long list of other priorities (work, co-curricular, family). There are the things I haven’t thought of yet, that are barriers to using the resources we are providing. There is research on this, I know, but I think there is more to know.

Yes, I am a fan of quantitative data, but I must admit that I have learned much more from qualitative data over the course of my life. The insights of the unexpected interaction, or the opportunity to observe for long(ish) periods of time, have improved my questions and understandings, and generated much more interesting follow up work than the summary data have ever done. This is important for the work on academic success that we are engaged in at our universities. It is even more important (and not at all unrelated) when we try to see the barriers to creating a diverse, equitable and inclusive environment. I’m thinking it may be time to put a few anthropologists on the institutional research payroll.

Evaluation, Higher Education, Hope

Continuous Improvement

With Passover and Easter upon us and the daffodils beginning to push through the soil, it is that time of year when I feel the joyous rebirth and renewal that comes with spring. It is always a welcome sensation that helps lift me up from the endless to-do lists as I take the opportunity to reflect on all we have accomplished this year. As is natural to our structure, we are heading towards an intense period of productivity – exams, papers, grading, annual reports, assessments, and even a few accreditation visits. It could be too much, except we all know there is a break at the end, so we push ahead in this fury of activity, breathless, exhausted, and, I hope, proud.

I have been thinking about our reflective practices a lot lately. In higher education, we have a way of broadening our students’ perspectives while unintentionally narrowing our own. We introduce ideas and worldviews with the passion we feel for our disciplines. We strive to develop the habits of inquiry that have served us so well as scholars, and perhaps even as citizens. But we are also specialists, focused on one field and even one aspect of our field. We train ourselves to attend to the details of that specialty and sometimes we miss the connections to other things that are so important.

If I am totally honest, we also get a little insular, not just in our field, but also within our universities and our departments. This insularity can lead us to think we are better than our peers elsewhere or, much more commonly, that we do not measure up. Neither is a productive position for educators. So, as the rituals and rush of spring are upon me, I am thinking about the value of external perspectives on our work.

When I began teaching in an undergraduate program in communication, our department had a habit of cultivating student research so that students might attend the professional conferences in our field. Several of my colleagues routinely took students to the regional and national communication conferences. There was an expectation that I would do so, too. I succeeded, starting at the regional level, but I must say that I was terrified. I was worried that the work was not good enough and that I had inadvertently set my students up for embarrassment. This did not happen. Participation in this experience showed me that my students’ work was well within the normal range, with some of them exceeding expectations. This boosted my confidence as a professor and did wonders for my students. It was an amazing peer review experience.

Soon I was involved in program review. I contributed to the department report and listened carefully to the feedback from colleagues from two external programs that our department admired. At that university, the norm was to select visitors from programs that we aspired to emulate. This, too, can inspire insecurity. Our admiration for the visitors’ programs made us think we were somehow second rate. Yet the experience was incredibly helpful. There was lots of positive feedback, and some good suggestions for how to improve. We took those suggestions to heart, and the impact was clearly visible in our evaluation of our learning outcomes the next year. It was another eye-opening experience.

These days, I spend a lot of time reading reports written for accreditors. While I am fully on board with regional accreditation, I confess that I have some misgivings about the many discipline-specific accreditations that we subscribe to in higher education. Defining the norms and expectations of a field at a national level is incredibly helpful, and I have zero doubt that this is productive and supports continuous improvement. What gives me pause is that some of these require overly complex evaluations and, well, the costs are not insignificant. I am not all that convinced that the results are more powerful than the simple peer review provided by colleagues from programs we admire. Nevertheless, there is value in the reflective process and the external perspective that these accreditation processes require.

Really, there is value in all of our self-assessments, external reviews, and even our annual reports. These tasks and processes force us to look up from our to-do lists and think about all we have accomplished. They force us to look around and ask ourselves how we fit into the higher education landscape. They ask us to consider whether we measure up to the expectations of our fields. Best of all, they provide an opportunity to think about what we might do better. For me, that last bit is where the fun begins.

Yes, I said fun. Amid the drudgery of doing assessments, writing annual reports, and preparing for site visits, the excitement is in the possibility for growth. We might revise a course or a program. We might find an opportunity to expand or re-focus our offerings. We might see room for building interdisciplinary partnerships within the university or with external programs and organizations. We might get a new idea. Nothing is more exciting than a new idea.

So, as we welcome spring and face the big race to the finish line, I am inviting everyone to see their to-do lists through this lens. We are not just finishing things; we are looking for opportunities to grow and improve. This is the why of it all and the true opportunity for rebirth.

Evaluation, Higher Education

Outstanding Education?

Nearly twenty years ago, when my children were just getting started in elementary school, I attended a community meeting about the proposed school budget. I live in a very small town (smaller now, with the regional demographic shifts), so such meetings were an important part of the democratic process. We came together to discuss the details of the budgets before heading to any votes. At that time, I recall two dominant themes – what do we need to invest in to create a great educational experience, and how do we keep the costs down so we do not price people out of our community. These themes, of course, turned into questions about must-haves vs. nice-to-haves. Like any town, we had differing opinions about what that meant, but we generally came to some consensus, or at least voted to approve a budget.

Today, I am a member of that school board, and we are still having the same conversations. Our shrinking enrollments and aging tax base have made the conversations a little more strident, but it is still mostly good conversation. We are spending our time trying to define quality educational experiences that will help our students thrive, without creating an overwhelming cost burden for the town. As I listen to the concerns of my neighbors, and try to keep us from speaking in the hyperbole that so easily divides, I find my mind turning to my job as provost. We, too, are facing shrinking enrollments (the drop in K-12 enrollments necessarily carries over into higher education) and a tax base concerned about its ability to support everything the state needs. As the challenges have descended upon us, we have done a lot of speaking in hyperbole. Perhaps it is time to have some honest, if difficult, conversations.

Let’s start those conversations by asking ourselves to identify the necessary components of an outstanding undergraduate education.  No one sets providing mediocre education as a goal, of course, so we want to set the bar high.  However, we do not usually think carefully about what we mean by “outstanding” or how we might achieve it.  Instead, we wait for it to emerge from our offerings.

Because higher education is built on the idea that faculty expertise is our greatest resource, we have a habit of deferring to that expertise at all times. This habit is mostly a good one. It would be foolish to hire people who have deep expertise in their disciplines and then tell them what to teach. No innovation will happen under those circumstances. So we try to create an environment that encourages faculty innovation and hope that this will help us discover excellence.

However, the result of too much deference to this expertise is generally curricular sprawl. New options or concentrations or majors and courses pop up on a regular basis. They reflect emerging interests or fields, or sometimes a momentary trend. These additions to the curriculum are rarely accompanied by a reduction in other offerings, because, well, there is a good argument to be made for any course or any major. Frankly, good arguments are a specialty of higher education. The sprawl is fine until we see sustained dips in enrollment. Then we are faced with low-enrolled courses, degrees, or majors, and the removal process awaits. We are not good at this part, so we try to avoid the question, or speak in the language of outrage, and try not to eliminate anything.

But, eliminate we must.  Enrollments have made the decision for us.  So has the proportion of our costs that states are willing to fund. The time for avoidance is over. However, we still want to rely on the insights of our faculty.  So, as we take these necessary steps, instead of starting the conversation with dollar figures, we should start by coming together to define the components of an outstanding undergraduate education.

There is a lot to consider, and it isn’t just the number of courses or majors. For example, we might want to look at how we have defined our degrees (BA vs. BS vs. BFA) and the proportion of each in our catalog. Doing so might uncover a tendency toward more professionally oriented degrees (or the opposite) and reveal that, for the students we serve and the faculty expertise we have, we see this as a priority. At the same time, we might want to take a close look at the differences between options within a major and answer the question of whether that level of specialization really benefits our graduates. Perhaps some things could be a little less career-focused.

Then there is the general education curriculum or liberal arts core.  What role is it playing in the overall vision of an outstanding undergraduate education?  Are students encountering varied ideas or are they mastering key skills or some combination of the two?  Is it organized developmentally? Does it support the major? We know we must provide general education, but have we set it up in a way that promises to support critical habits of mind in our graduates?

What pedagogies should we feature? Are there approaches to teaching that every student should experience? If so, how do we organize schedules to support those pedagogies while keeping the balance of offerings in view? Is it possible to design schedules around encounters with critical pedagogies, without privileging one approach?

Then there is that very tricky question: How will our graduates be different from the graduates of any other college or university? This is difficult to answer, of course, because much of what is promised in higher education really is the same everywhere. We are all trying to support graduates who have a reasonable grasp of the world around them and the potential to thrive in an environment where change is a constant. Nevertheless, we have different students, different faculty, and different expertise. Surely we have a unique point of view that can help shape the decisions we must make about what we offer.

If you read any news about higher education, you will encounter a long list of mergers, financial challenges, closures, and other worrisome trends.  No one in the northeast is immune (well, no one but Harvard and Yale).  It is a scary time, but I think, if we try to come to a consensus about the qualities of an outstanding undergraduate education, we might just start to see what the path forward could be.


Agency, Evaluation, Innovative Pedagogies

Reflection vs. Evaluation

Well, it is December, and we are racing toward the end of the semester. As students complete term papers and prepare for final exams, presentations, or performances, faculty are making room in the schedule for teaching evaluations. These evaluations are generally short questionnaires that ask students to assess the effectiveness of the teaching they just experienced. It is an opportunity to give feedback, which is to the good, but most are constructed in a way that suggests expertise where it does not exist (students are not instructional designers, nor will they have deep knowledge of the discipline), and there is well-documented evidence that they reflect cultural biases throughout. So, why do them at all? Good question.

As currently constructed at my university (and at every university where I have taught), these evaluations offer little value. We have made the whole process about evaluation instead of about learning. We have also cast our students as consumers, who then provide ratings (stars?) of our work, without really helping them reflect on their learning. What if we reimagined teaching evaluations as course reflections? Instead of using them to tally the effectiveness of a faculty member, they could become a mechanism for collaborative course construction. Instead of seeking an ill-informed critique, we could invite our students to share what they’ve learned from us and give us suggestions for future iterations of the course.

Here’s what it might look like.

Dear Students,

At the end of each semester, I gather information about your experiences in my classes so that I can get a better understanding of what is working well and what new ideas I should explore. Please take a few minutes to reflect on what you have learned in this class and then answer the questions below thoughtfully and honestly.

  1. What was the most interesting or most important thing you learned in this class?  

Why?

    • It provided a foundation for this or another class that I will take.
    • It connected to important topics beyond this course.
    • It helped me see things from a perspective other than my own.
    • Other (please explain).
  2. What was the least interesting or least important thing you learned in this class?

Why?

    • It was too foundational/I’ve encountered it in several other classes.
    • It seemed like a tangent that was not relevant to the class.
    • Other (please explain).
  3. Considering the course overall, were there ideas or assignments that you think will help you succeed in other classes at the university? Please explain your answer.
  4. Considering all opportunities for feedback on your understanding of the material (tests, quizzes, presentations, papers, group work, etc.), which did you find most helpful? Please explain your answer.
  5. Is there an opportunity for feedback on your work that you would like to see added to this course?
  6. Considering things like grading criteria, timing of assignments, or overall organization, do you have any suggestions that you think might improve this course?
  7. Do you have any additional comments that I should consider?

Thank you for your feedback and good luck in your studies.

What I like about this structure is that it invites students to participate in the evolution of the course, instead of asking for some kind of performance score. By using the first person in the opening paragraph, the faculty member is given agency, suggesting that they are fully committed to this dialogue with their students. It also suggests that students are speaking directly to that faculty member, not to some unknown administrator who will then evaluate the professor.

Moving in this direction, faculty can use the information to learn how students are experiencing their teaching and respond as they deem appropriate. For example, maybe the thing that students identified as unimportant was in fact very important. Perhaps some reframing needs to take place. Or maybe several students felt the need for a presentation to be included in the course. Digging into why would be a good next step. No doubt some students will ask for extra credit. If the answer is no, then being clear about why not might be a good thing to discuss in the next class.

I also like that this is a disaster for quantitative summaries. The current 1-to-5 scales may be helpful for creating graphs and charts, and they do flag instruction at the extremes (outside of university norms), but in reality they do next to nothing for teaching. Mostly, they inspire defensiveness. I’m not worried about losing those statistical summaries, because the extremes are easily captured in syllabi, sample assignments, and peer observations. I’d rather cultivate the reflective practice that this qualitative approach implies.

As one of the people who reads faculty portfolios in their applications for tenure, I am most interested in seeing how faculty respond to student feedback. The most compelling thing that can be included in any tenure packet is a narrative about how one’s teaching has evolved and why.  Evidence of change over time should include sample complaints and sample praise found in these course reflections. If the examples are followed by explanations of how things changed as a result, then I feel confident that I will know enough to fairly review the candidate. I will also know that I have a professor devoted to good teaching.

Let’s drop the ratings model and focus on learning about our teaching. Let’s try to foster an environment where we take student voices to heart, without ceding our expertise.  Let’s listen carefully to concerns and ideas, and work to grow in our profession. Let’s be reflective educators.


Evaluation, Higher Education

Root Causes

Since many institutions of higher education set preparation for and engagement with life-long learning as a goal, it is fitting that I should take my own continued education to heart. Recently, I enrolled in a course called Policy Design and Delivery: A Systematic Approach to strengthen my skills at both developing and analyzing policy proposals.  Motivated by a sense that the many “solutions” to higher education’s problems do not represent a clear and thoughtful analysis of the contexts in which those problems arise, I am searching for a better understanding of how to craft a good policy.  It has been illuminating so far.

Without even using the tools in my policy design course, though, I can see that many of our education policies mistake correlations for causes. For example:

  • Students who participate in co-curricular activities have higher retention rates than those who do not, so we push to require co-curricular activities.  Sounds good, but maybe the students who are committed to staying at a university are the ones who decide to get involved.
  • Students who attend classes regularly are more likely to be successful than those who do not, so let’s adopt attendance tracking technologies and pair them with nudge technology and get students into class. Attendance matters, of course, but not attending is often a decision that no amount of nudging will change. Perhaps attendance is the mark of a student who wants to succeed.
  • And for today’s discussion: Students who commit to a true full-time schedule are most likely to complete their studies…otherwise known as 15 to finish.

Complete College America has invested a lot in the 15 to finish initiative, and for good reason. First of all, we have been confusing our students about the meaning of full-time. Federal financial aid rules set full-time status at 12 credit hours per semester. This status opens up access to housing and many grants. A student might logically conclude that 12 credits per semester is sufficient for timely completion of a four-year degree. Unfortunately, it doesn’t add up. State, national, and accreditation standards define an undergraduate degree as having a minimum of 120 credit hours (there are some regional differences, but this is the generally accepted standard). The true number is 15 credits per semester. This is important for us to communicate, so the slogan is a good start.
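For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch. It is purely illustrative: the variable names are my own, and it simply assumes the standard 120-credit degree spread over eight semesters.

```python
# Back-of-the-envelope credit math for a standard 120-credit, eight-semester degree.
TOTAL_CREDITS = 120
SEMESTERS = 8

aid_full_time_minimum = 12                    # federal financial aid full-time threshold
on_time_pace = TOTAL_CREDITS / SEMESTERS      # 15 credits per semester

credits_at_aid_minimum = aid_full_time_minimum * SEMESTERS   # 96 credits after four years
shortfall = TOTAL_CREDITS - credits_at_aid_minimum           # 24 credits short of the degree

print(f"On-time pace: {on_time_pace:.0f} credits per semester")
print(f"Eight semesters at 12 credits: {credits_at_aid_minimum} credits "
      f"({shortfall} credits short of a 120-credit degree)")
```

In other words, a student who takes the 12-credit “full-time” minimum every semester still ends four years roughly two semesters short of the degree.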

Second, there is a growing body of research surrounding the notion of “momentum.” Simply put, students who are engaged in their majors early and earn those 15 credits per semester tend to remain motivated to get to the finish line. This is usually linked to guided pathways, which help to limit the chances that students enroll in courses that do not support progress in their degree plan. Success and engagement propel students forward.  Those on a slower path have a greater tendency to stop out, or grow discouraged and lose sight of the finish line.  So, in an ideal world, a truly full-time schedule with a mix of general education and major coursework from the first semester of enrollment is the best approach to degree completion.

Great, but here is where it starts to get problematic. Once we clear up the mystery of degree requirements so that all students understand the 15 to finish concept, there are still myriad reasons why 15 credits per semester is not possible for a significant number of students.

  1. A student may need to work on some foundational skills in their first year of college, and the better path to success is 12 or 13 credits, rather than 15. For example, if a student is behind in their math skills, they might enroll in a 4-credit math course (rather than the more typical 3-credit general education math course). To get their schedule just right, with adequate time to pay attention to those math skills, it might be best to stop at 13 or 14 credits. If all goes well, the end of the first year might leave that student with 27-29 credits. This is technically behind (below 30 credits) and penalties follow. In this case, the most obvious penalty is their continued status as a first-year student. Registration priorities are tied to the number of credits earned; higher numbers go first (see the sketch after this list). A student who took this slightly slower path is at a higher risk of not getting into the next class in a set of requirements because they are still registering with first-year students.
  2. A student may be enrolled in a highly competitive program that requires very challenging foundation courses, and opts for a slightly lower number of credits to help manage their time and attention. For example, science, nursing, and pre-med students are likely to take this option. They might be enrolled in two lab sciences (8 credits), math (3 or 4 credits), and humanities (3 credits). If the math class is 4 credits, they will be on track for the 15. If not, they will fall behind and suffer the same penalty as the student with foundational needs.
  3. A student may have to work while in college (or raise children, or report for military service). The rational decision is to take a lighter load, but the penalties abound. A part-time student will not be eligible for many grants, will not receive the benefits of the bundling discount (charging the same price for 12-18 credits), and will suffer the registration penalty because they have not yet made it to the status of sophomore or junior or senior. This means that a hard-working part-time student could be in school for eight years, steadily working toward a degree, and never be recognized as deserving the benefits of higher class standing, and never receive any financial support. Now that is a real disincentive to completion.
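To illustrate the registration-priority penalty mentioned in the first item above, here is a minimal sketch. It assumes the common 30/60/90 credit cutoffs for class standing, which vary by institution, and the function name is hypothetical.

```python
# Hypothetical illustration: registration priority typically follows class standing,
# and class standing is tied to credits earned. The 30/60/90 cutoffs below are common
# but vary by institution; the function name is made up for this sketch.
def class_standing(credits_earned: int) -> str:
    if credits_earned >= 90:
        return "senior"
    elif credits_earned >= 60:
        return "junior"
    elif credits_earned >= 30:
        return "sophomore"
    return "first-year"

# One student finishes year one at 15 credits per semester, another at roughly 13-14.
for earned in (30, 28):
    print(f"{earned} credits earned -> registers as a {class_standing(earned)} student")
```

The two-credit difference at the end of the first year is what pushes one student into the sophomore registration window and leaves the other registering with incoming first-year students.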

So, what’s the point? Well, if 15-to-finish were just a catchy slogan, it would be of no real concern. Indeed, some of the things I have described are things that we can fix locally by reimagining our registration priorities and focusing on part-time tuition support. But there are now trends toward additional financial aid strategies being tied to the 15 credits per semester (see New York’s Excelsior Program as a start). As the nation discusses free tuition, the nuances I have described are frequently missed. Many of these proposals are built around that ideal full-time student. Yet many of the students who would benefit from this tuition break will not qualify, because they will not be able to complete 15 credits per semester. Again, the penalties can be tremendous.

As for cause and correlation, well it is obviously true that completing 15 credits per semester is correlated with higher graduation rates. But the root causes of student success, which lead to the ability to actually complete those 15 credits, are far more complex than the credit story.  We need to step back and look at the conditions that are driving those behaviors so that the policies we design do not continue to disadvantage those who need our help the most.