Tuesday, October 21, 2008

What you say, what they hear

Over the last six years, we have had the good fortune to work with faculty, staff, students, and administrators at many different institutions who are trying to use assessment to improve the quality of their colleges and universities. Sometimes these efforts succeed; unfortunately, sometimes they do not.

Discussions about the success or failure of assessment programs seem to pivot on the elusive variable called “faculty buy-in.” That is, do faculty (and staff and students) engage wholeheartedly in the campus’s assessment activities, or do they attempt to get through the process with as little effort as possible so that they can return to their “real” work?

Faculty, staff, and student engagement in assessment is, of course, linked to the implicit and explicit reward structures that exist at each campus. But before we begin rethinking tenure, merit, and other embedded, difficult-to-change structures, it may be useful to consider whether your campus’s assessment program is built so that dedicated members of the campus community are more likely to see it as a meaningful activity. That is, rather than changing the reward structure to fit assessment, perhaps an easy first step is to tune assessment to fit the implicit and explicit reward structures on your campus.

As an example, the vast majority of faculty and staff with whom we have worked yearn to be good teachers. They are all touched by students who return years after graduation and talk about the change that a course, a class, or even an offhand comment made in their lives. At the same time, most faculty and staff are mortified at the prospect of being “found out” as poor teachers.

How can this help us structure assessment? Our sense is that assessment has a better chance of being seen as a meaningful activity if it connects with the educational and scholarly commitments of faculty, staff, and even students. Specifically, an assessment program has a better chance of creating meaningful work if

  1. it creates practical information that faculty and staff can use directly to improve the quality of their teaching;
  2. it addresses some of the questions and concerns that faculty and staff have about students;
  3. the time and effort that faculty and staff invest in assessment activities pays off in some meaningful way in terms of institutional, department, or program changes;
  4. it helps faculty and staff see things that are normally invisible.

The last of these four items is probably the most obscure and, therefore, merits an example. One of the most interesting things we have learned from the Wabash National Study is that students’ perceptions matter. Regardless of whether they accord with faculty and staff views, students’ perceptions about whether faculty care about their development, are organized and clear in their teaching, and are committed to teaching correlate with the extent to which students grow on independent measures of critical thinking, moral reasoning, well-being, and other outcomes.

But to what extent are faculty and staff aware of students’ perceptions? Assessment can help make students’ perceptions, and the possible disconnect between faculty and student perceptions, visible.

We are just now beginning to analyze faculty surveys from the first round of the Wabash National Study, and an interesting discrepancy between faculty and student perceptions of “prompt” feedback is emerging. The direction of this discrepancy is probably just what you’d expect: faculty give themselves much higher marks for giving prompt feedback than students do. For example, about 30% of the faculty at liberal arts colleges indicated that they used multiple drafts “a great deal,” while 17% of students indicated that they were asked to prepare two or more drafts “very often.” Likewise, 60% of faculty at liberal arts colleges reported that they gave frequent feedback to their students “a great deal,” and 52% indicated that they gave detailed feedback “a great deal.” On the other hand, 20% or less of students reported that their faculty “very often” gave them prompt feedback or timely information about their performance.

This finding is not dispositive. For example, we do not know what students and faculty mean by “prompt” or “detailed” feedback. But regardless of whether students are somehow “misperceiving” the behavior of faculty, their perceptions about feedback correlate with their growth on critical thinking, moral reasoning, and a host of other outcomes, and therefore merit further inquiry.

One thing we have learned from our own classroom experiences as teachers is that our perceptions of what we have done or said in class may not align with how students heard or understood our actions. Assessment, done well, gives us the opportunity to “make visible” how students are experiencing our efforts.

--CB and KW