About

This is a blog of the Center of Inquiry in the Liberal Arts at Wabash College and the Higher Education Data Sharing Consortium, by Charles Blaich, director, and Kathleen Wise, associate director.


Directors' Blog

Wednesday, November 3, 2010

How are your plans for collecting student work progressing?

Given a choice, most faculty and staff would say that the papers, presentations, and performances their students create are better sources of assessment information than national surveys and standardized tests. And yet, most institutions rely more heavily on the latter than the former in their assessment programs.

In an upcoming report that will be distributed by the National Institute for Learning Outcomes Assessment (NILOA), Kathy Wise and I argue that one of the reasons we are more likely to gather than to use assessment data is that, whatever their limitations, standardized measures make it easier to collect data.

One reason for the difference in how challenging it is to collect versus use evidence is that there are many nationally known standardized tests, surveys, predesigned rubrics, and e-portfolio systems that institutions can adopt to collect assessment data and, in some cases, deliver detailed reports. We have heard these options sometimes referred to as “assessment in a box” or “plug-and-play assessment.” This does not mean that gathering assessment evidence is easy, but it cuts down on the things that institutions have to design from scratch.

Most of the schools in the 2010 Wabash Study use some combination of national surveys and standardized tests.

Percentage of schools using the following input measures:

Cooperative Institutional Research Program Freshman Survey – 63%
Beginning College Survey of Student Engagement – 33%
Wabash National Study Incoming Student Survey – 33%

Percentage of schools using the following measures of student experiences:

National Survey of Student Engagement – 93%
Higher Education Research Institute College Senior Survey – 30%
Noel-Levitz Student Satisfaction Inventory – 23%

Percentage of schools using the following outcome measures:

Collegiate Learning Assessment – 50%
Collegiate Assessment of Academic Proficiency Critical Thinking Test – 33%
Wabash Study Outcome Measures – 33%

One of the most challenging parts of the new Wabash Study is that all schools will be examining student work. This means that much of the data collection and analysis that normally gets “outsourced” has to be done by people on campus. Given the goal of reviewing student work by this summer, we have a very tight timeline.

The information we currently have about institutions' plans for collecting and reviewing student work is sparse. Therefore, we ask two things of you. First, please download and review the information we have about your institution's plans for reviewing student work by clicking here (MS Excel document). If you have revisions, just send them in an email to staff@centerofinquiry.org. We will update the document. Second, we have a very short, 13-question survey on the specifics of your school’s plans that we would like you to complete. You can go to the survey by clicking here.

We would like you to complete the survey by November 12, and we will blog about the results by November 18.

Tuesday, October 26, 2010

How goes your work with rubrics?

All institutions that joined the Wabash Study this fall committed to using rubrics to evaluate some form of student work. According to the Wabash Study timeline, summer 2011 is the time for reviewing student work. This means that you need to be well on your way to (1) identifying the student work you wish to evaluate, (2) contacting the people from whom you will get the work and the prompts for that work, and (3) developing rubrics you will use to evaluate the work.

As we discussed at the kickoff meetings, you can either create your own rubrics from scratch or adapt rubrics that have already been developed by others. Generally, we recommend that you start with a rubric that someone else has developed and then adapt it for your purposes.

Adapting a rubric is a nice way of saying that before you begin using the rubric in earnest, you’ll need to pilot and revise your rubric by applying it to examples of your students’ work and by tuning the levels and descriptors in the rubric to match the unique qualities of your students, their courses, or the outcomes that you’re trying to assess.

This will take time and collaborative work. We suggest at least two or three sessions with different groups of faculty and staff who are representative of the people who will be applying the rubric in the summer. You’ll want to keep tuning your rubric until most graders can use it comfortably and consistently. We also suggest that the samples of student work you use for tuning range in quality from poor to good, so that the rubric’s levels are tested against the full range of work you will encounter in the evaluation phase.

It is also important to develop a “norming” exercise for your work this summer. Here’s a great description from the Academy of Art University of what norming looks like:

In a norming session, teachers all use the same rubric and score the same pieces without looking at each other’s scores. After three pieces have been scored, the teachers look at all of the scores together and discuss discrepancies, clarifying as they go. This process is repeated until the scores are the same most of the time—depending on the purpose of your norming session. (retrieved from http://faculty.academyart.edu/resource/rubrics.html, October 26, 2010)
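If it helps to make “the same most of the time” concrete, here is a minimal sketch, in Python, of how a norming team might tally agreement after a round of scoring. The rater names, the scores, and the 80 percent threshold are all hypothetical placeholders, not a Wabash Study requirement; substitute whatever standard your group settles on.

```python
# A hypothetical percent-agreement tally for a norming session.
# Rater names, scores, and the 0.80 threshold are placeholders.

from itertools import combinations

# Each rater's scores on the same three pieces of student work (1-4 scale).
scores = {
    "Rater A": [3, 2, 4],
    "Rater B": [3, 3, 4],
    "Rater C": [2, 2, 4],
}

def percent_agreement(scores, tolerance=0):
    """Share of rater pairs whose scores on a piece differ by at most `tolerance`."""
    agreements = comparisons = 0
    num_pieces = len(next(iter(scores.values())))
    for piece in range(num_pieces):
        for r1, r2 in combinations(scores, 2):
            comparisons += 1
            if abs(scores[r1][piece] - scores[r2][piece]) <= tolerance:
                agreements += 1
    return agreements / comparisons

exact = percent_agreement(scores)                   # identical scores only
adjacent = percent_agreement(scores, tolerance=1)   # within one rubric level

print(f"Exact agreement:    {exact:.0%}")
print(f"Adjacent agreement: {adjacent:.0%}")

# One possible stopping rule for the norming session:
if exact >= 0.80:
    print("Scores match most of the time; start scoring in earnest.")
else:
    print("Discuss the discrepancies, clarify the rubric, and score another round.")
```

The point is not the particular statistic; a simple tally like this just gives the group a shared signal for when to stop norming and start scoring.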

Resources:

1) Linda Suskie’s Assessing Student Learning: A Common Sense Guide (2nd ed.) includes a clear and concise chapter on using rubrics for assessment. Suskie describes different kinds of rubrics, provides examples of rubrics, and gives sound advice on how to develop and use rubrics. For example, Suskie suggests that rubric rating scales should contain at least three but no more than five levels and that it is important to create short descriptors for each level of the trait being evaluated. She also includes a helpful list of references, such as articles in “Practical Assessment, Research, and Evaluation” (see http://pareonline.net/).

Assessing Student Learning also contains a great deal of useful information about all aspects of assessment from defining assessment jargon to communicating assessment results. It is written for people who are not assessment experts.

2) You can find a shorter, but useful, PowerPoint presentation on Creating, Implementing, and Using Rubrics at http://www.ncsu.edu/assessment/old/presentations/assess_process/creating_implementing.pdf.

3) Many institutions are using AAC&U’s VALUE Rubrics (see http://www.aacu.org/value/rubrics/index_p.cfm).

4) You can also take a look at the Advice for Evaluating Student Work Using Rubrics document from the September Wabash Study kickoff meetings.

If you are using other rubrics and are willing to share them, please send them to us, and we will post them for the other Wabash Study institutions. Please also let us know if you are still looking for rubrics to assess specific student traits. We’ll be glad to see what we can locate.

Thursday, October 7, 2010

The Wabash Study has begun

First, our thanks to everyone who traveled to Crawfordsville to kick off the new version of the Wabash Study. We are grateful for the chance to work with you. A couple of quick updates—

We are compiling the meeting comments and will post them next week along with our summary comments. Based on the comments, we will be creating an electronic repository for the institutional assessment portfolios, and we will be sending out more information to clarify the content and structure of the portfolio.

Here are two quick examples of helpful follow-up ideas and questions. Jo Beld (St. Olaf) made an important point about the portfolio at the end of the second kickoff meeting—the narrative should include details about which assessment measures will change in response to the program, course, or institutional changes you decide to make. Susan Campbell (Middlebury College) added in a later phone call that the communication plan should also include a "team communication plan" of sorts, describing how the main players implementing the project on campus will keep one another up to date amid all their other work. If you have other suggestions, please add them in the comments below.

Parting point – As we said during the meeting, the purpose of assessment is to use evidence to guide improvement. Too often, assessment is understood as the process of measuring student learning, not of measuring student learning so that you can implement improvements. Of course, approaching assessment this way means that things may change, and change can be tough for any organization.

Rosabeth Moss Kanter wrote a short article on seven "Change Agent Bumper Stickers" for the Harvard Business Review. Kanter's audience consists mostly of businesspeople, but some of her points are relevant for our work—

Change is a threat when done to me, but an opportunity when done by me . . . Resistance is always greatest when change is inflicted on people without their involvement, making the change effort feel oppressive or constraining. If it is possible to tie change to things people already want, and give them a chance to act on their own goals and aspirations, then it is met with more enthusiasm and commitment. In fact, they then seek innovation on their own.

Change is a campaign, not a decision. How many people make vows to improve their diet and exercise, then feel so good about the decision that they reward themselves with ice cream and sit down to read a book? CEOs and senior executives make pronouncements about change all the time, and then launch programs that get ignored. To change behavior requires a campaign, with constant communication, tools and materials, milestones, reminders, and rewards.

Everything can look like a failure in the middle . . . There are numerous roadblocks, obstacles, and surprises on the journey to change, and each one tempts us to give up. Give up prematurely, and the change effort is automatically a failure. Find a way around the obstacles, perhaps by making some tweaks in the plan, and keep going. Persistence and perseverance are essential to successful innovation and change.

You can read the entire article at http://blogs.hbr.org/kanter/2010/08/seven-truths-about-change-to-l.html

Thursday, September 23, 2010

It’s not the program that matters, but whether the program creates good practices

In a recent article in Assessment Update, Gary Pike reviews four lessons he has learned in his work with the National Survey of Student Engagement. One of these lessons is that most effects of college are indirect.

“Based on twenty years of research on college students, Pascarella and Terenzini (1991) concluded that many effects of college on students are indirect. My own research with NSSE has found that participation in learning communities does not lead directly to student learning. Instead, participating in a learning community leads to higher levels of engagement in worthwhile educational activities, and higher levels of engagement in turn lead to gains in learning and development.” (pp. 10–11)

For example, using Wabash National Study data, Salisbury & Goodman (2009) found that undergraduate research opportunities, first-year seminars, learning communities, and volunteer activities promoted the development of intercultural competence only if they increased students’ diverse experiences, integrative learning experiences, and clarity and organization of instruction. Thus, a learning community or first-year seminar improved intercultural competence if it was implemented in a way that improved these basic good practices.
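To make the direct-versus-indirect distinction concrete, here is a toy Python sketch of the usual product-of-coefficients logic, run on simulated data. The variable names, effect sizes, and the simulation itself are invented for illustration; this is not Wabash Study data and not the analysis Pike or Salisbury & Goodman actually ran.

```python
# A toy illustration of direct vs. indirect (mediated) effects on simulated data.
# Variable names and effect sizes are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical scenario: joining a learning community raises engagement in
# good practices, and engagement (not membership itself) raises the outcome.
participation = rng.integers(0, 2, n).astype(float)
engagement = 0.6 * participation + rng.normal(size=n)
outcome = 0.5 * engagement + rng.normal(size=n)

def coefs(y, predictors):
    """OLS coefficients of y on the predictors (intercept dropped)."""
    X = np.column_stack([np.ones(n)] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = coefs(outcome, [participation])[0]               # total effect of participation
a = coefs(engagement, [participation])[0]                # participation -> engagement
b, direct = coefs(outcome, [engagement, participation])  # engagement -> outcome, plus any direct path

print(f"Total effect of participation:              {total:.2f}")
print(f"Indirect effect through engagement (a*b):   {a * b:.2f}")
print(f"Direct effect, holding engagement constant: {direct:.2f}")
# Here the direct effect is near zero: the program "works" only insofar as it
# increases engagement in good practices, which is Pike's point.
```

More elaborate approaches, such as structural equation models, build on the same decomposition, but even this simple version shows why measuring only the outcome can miss how a program actually does its work.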

Pike added, “The lesson to be learned is that assessment efforts should focus on both student experiences and learning outcomes. Assessment research that focuses exclusively on learning outcomes is poorly positioned to document whether improvement initiatives changed student behaviors in ways that led to improved student learning. Likewise, assessments that focus exclusively on changes in behavior beg the question of whether changes in behavior result in improved learning outcomes.” (p. 11)

Pike concludes his article by saying, “Perhaps the real lesson to be learned from a decade of experience is that good assessment is not easy. Accurate and appropriate assessment of students’ experiences and learning outcomes requires careful and thoughtful attention to institutional goals and strategies for student learning, understanding of the processes and contexts for learning, attention to detail, and an understanding of the strengths and weaknesses of different analytical techniques. Measurement issues abound in assessment, but the greatest challenges lie in accurate and appropriate interpretation and use of assessment data.” (p. 12)

References

Pascarella, E. T., & Terenzini, P. T. (1991). How college affects students: Findings and insights from twenty years of research. San Francisco: Jossey-Bass.

Pike, G. (2010). Assessment measures: Lessons learned from a decade of assessment research using the National Survey of Student Engagement. Assessment Update, 22(3), 10–12.

Salisbury, M. H., & Goodman, K. M. (2009). Educational practices that foster intercultural competence. Diversity and Democracy, 12(2), 12–13.

