Useful – Guidelines for Judging the Effectiveness of Assessing Student Learning, by Larry Braskamp and Mark Enberg
In their 2014 report, Knowing What Students Know and Can Do: The Current State of Student Learning Outcomes Assessment in US Colleges and Universities, Kuh, Jankowski, Ikenberry, and Kinzie conclude by stating:
"Higher education may be on the verge of an inflection point where what follows is a more purposeful use of evidence of student learning outcomes in decision making which, in turn, has the potential to enhance academic quality and institutional effectiveness. To realize this promise sooner than later, colleges and universities must complete the transition from a culture of compliance to a culture of evidence-based decision making in which key decisions and policies are informed and evaluated by the ultimate yardstick: a measurable, positive impact on student learning and success."
We have heard the phrase "culture of evidence" for many years, and while we agree with the idea, we wonder whether the phrase makes evidence an end rather than a means to an end. Our goal is to improve teaching and learning for students, and evidence is a key, but not the sole, means of achieving that goal.
In a recent review of our work with liberal arts colleges we noted that the colleges that made gains in improving student learning have "fostered a culture that supports the ongoing development and honest evaluation of educational experiments by students, staff, and faculty to improve teaching and learning" and that this culture is built on "a willingness to question the impact of courses, majors, and programs; . . . a sense of trust, respect, and collegiality among faculty, staff, administrators, and students; and an emphasis on small-scale, evidence-based improvement efforts that do not require additional resources." [See our report: Improving the Educational Quality of Liberal Arts Colleges]
So the cultures that "close the loop" not only collect evidence but also have other qualities that give evidence traction. Many schools have evidence; only a few put that evidence to use. Our long-term interest in learning more about the factors that give evidence actionable weight in some contexts but not in others has led us to begin a new national research project: the Wabash Study of Institutional Change (WSIC). We are launching that study today and hope that you will consider participating. For more information about the WSIC and your opportunity to contribute to the study, see the Call for Participation.
"Assessing our assessment" will sound like the tenth circle of hell to many faculty. Yet if we want our assessment efforts to improve, we must reflect on whether our work to gauge, summarize, and report on how well our students are learning is paying off for us and for our students. Using the term "meta-assessment" with your colleagues may reduce the catcalls and derision when you ask them to engage in this important work.
Keston Fulcher and Megan Rodgers Good have written a useful short review of assessing assessment called The Surprisingly Useful Practice of Meta-Assessment. In addition to providing a crisp rationale for evaluating our assessment programs, they include a helpful set of documents on departmental assessment.
As reductive as "assessing our assessment" might seem at first glance, we believe that systematic reflection with our colleagues about the purposes and impact of our work is essential to being a liberally educated teacher.
"Parents rarely intervened or solved problems for students by contacting the college, but when they did, it was most often for issues regarding financial aid and bill pay. About 24 percent of parents intervened for each reason."
Since the financial interests of colleges and universities don't necessarily align with those of students, a little hovering on the part of parents is probably a good thing.