About

This is a blog of the Center of Inquiry in the Liberal Arts at Wabash College and the Higher Education Data Sharing Consortium, by Charles Blaich, director, and Kathleen Wise, associate director.

 

Directors' Blog

Tuesday
May 10, 2011

Using financial aid to promote student success

In an evaluation of the impact of different student success programs, MDRC found that linking financial aid to students' future academic performance can, in fact, improve that performance.

According to a report summarizing the evaluation of a program at two Louisiana community colleges:

The Louisiana program offered students up to $1,000 for each of two semesters for a total of $2,000. The scholarship was paid in three increments throughout the semester if students enrolled at least half time and maintained a “C” (2.0) or better grade point average (GPA). Program counselors monitored academic performance and disbursed the scholarship checks directly to students. Notably, the scholarships were paid in addition to federal Pell Grants and other financial aid. Because the program was funded with state welfare funds, eligibility was limited to low-income parents (though they did not need to be on welfare). The research sample was mostly African-American single mothers. Students in the study's control (or comparison) group in Louisiana could not receive the Opening Doors scholarship, but they had access to standard financial aid and the colleges' standard counseling.

The evaluation found that tying financial aid to academic performance can generate large positive effects — some of the largest MDRC has found in its higher education studies. The program substantially improved students' academic outcomes, and the positive effects continued through the third and fourth semesters of the study, when most students were no longer eligible for the scholarship. Students in the study's program group were more likely to attend college full time. They also earned better grades and more credits. (pages 2–3)
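
To make the award rule concrete, here is a minimal sketch in Python of the disbursement logic as the excerpt describes it. The variable names, the per-check eligibility test, and the example student are our own illustrative assumptions, not details drawn from the MDRC report.

# A sketch of the scholarship rule as quoted above: up to $1,000 per semester
# for two semesters, paid in three increments, contingent on at least
# half-time enrollment and a "C" (2.0) or better GPA.
SEMESTER_AWARD = 1000          # dollars per eligible semester
INCREMENTS_PER_SEMESTER = 3    # checks disbursed during each semester
ELIGIBLE_SEMESTERS = 2         # maximum possible award: $2,000

def increment_amount(enrolled_half_time: bool, gpa: float) -> float:
    """Return one disbursement check if the student meets the conditions, else 0."""
    if enrolled_half_time and gpa >= 2.0:
        return SEMESTER_AWARD / INCREMENTS_PER_SEMESTER
    return 0.0

# Example (hypothetical student): eligible at every check in both semesters.
total = sum(
    increment_amount(enrolled_half_time=True, gpa=2.4)
    for _ in range(ELIGIBLE_SEMESTERS * INCREMENTS_PER_SEMESTER)
)
print(f"Total scholarship paid: ${total:,.0f}")  # $2,000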

The report also summarizes other evaluations of programs designed to improve student success, including learning communities and targeted support programs for students on academic probation. See http://www.mdrc.org/sites/default/files/policybrief_27.pdf to read the full report.

Wednesday
Nov 3, 2010

How are your plans for collecting student work progressing?

Given a choice, most faculty and staff would say that the papers, presentations, and performances their students create are better sources of assessment information than national surveys and standardized tests. And yet, most institutions rely more heavily on the latter than the former in their assessment programs.

In an upcoming report that will be distributed by the National Institute for Learning Outcomes Assessment (NILOA), Kathy Wise and I argue that one of the reasons we are more likely to gather assessment data than to use it is that, whatever their limitations, standardized measures make the gathering easier.

One reason for the difference in how challenging it is to collect versus use evidence is that there are many nationally known standardized tests, surveys, predesigned rubrics, or e-portfolio systems that institutions can adopt to collect assessment data and, in some cases, deliver detailed reports. We have heard these options sometimes referred to as “assessment in a box” or “plug in and play assessment.” This does not mean that gathering assessment evidence is easy, but it cuts down on the things that institutions have to design from scratch.

Most of the schools in the 2010 Wabash Study use some combination of national surveys and standardized tests.

Percentage of schools using the following input measures:

Cooperative Institutional Research Program Freshman Survey – 63%
Beginning College Survey of Student Engagement – 33%
Wabash National Study Incoming Student Survey – 33%

Percentage of schools using the following measures of student experiences:

National Survey of Student Engagement – 93%
Higher Education Research Institute College Senior Survey – 30%
Noel-Levitz Student Satisfaction Inventory – 23%

Percentage of schools using the following outcome measures:

Collegiate Learning Assessment – 50%
Collegiate Assessment of Academic Proficiency Critical Thinking Test – 33%
Wabash Study Outcome Measures – 33%

One of the most challenging parts of the new Wabash Study is that all schools will be examining student work. This means that much of the data collection and analysis that normally gets “outsourced” has to be done by people on campus. Given the goal of reviewing student work by this summer, we have a very tight timeline.

The information we currently have about institutions' plans for collecting and reviewing student work is sparse. Therefore, we ask two things of you. First, please download and review the information we have about your institution's plans for reviewing student work by clicking here (MS Excel document). If you have revisions, just send them in an email to staff@centerofinquiry.org. We will update the document. Second, we have a very short, 13-question survey on the specifics of your school’s plans that we would like you to complete. You can go to the survey by clicking here.

We would like you to complete the survey by November 12, and we will blog about the results by November 18.

Tuesday
Oct 26, 2010

How goes your work with rubrics?

All institutions that joined the Wabash Study this fall committed to using rubrics to evaluate some form of student work. According to the Wabash Study timeline, summer 2011 is the time for reviewing student work. This means that you need to be well on your way to (1) identifying the student work you wish to evaluate, (2) contacting the people from whom you will get the work and the prompts for that work, and (3) developing rubrics you will use to evaluate the work.

As we discussed at the kickoff meetings, you can either create your own rubrics from scratch or adapt rubrics that have already been developed by others. Generally, we recommend that you start with a rubric that someone else has developed and then adapt it for your purposes.

Adapting a rubric is a nice way of saying that before you begin using the rubric in earnest, you’ll need to pilot and revise your rubric by applying it to examples of your students’ work and by tuning the levels and descriptors in the rubric to match the unique qualities of your students, their courses, or the outcomes that you’re trying to assess.

This will take time and collaborative work. We suggest at least two or three sessions with different groups of faculty and staff who are representative of the people who will be applying the rubric in the summer. You’ll want to keep tuning your rubric until most graders can use it comfortably and consistently. We also suggest that the samples of student work you use to tune the rubric should range in quality from poor to good to make sure that the levels of the rubric are tested and tuned to the range of student work you will encounter in the evaluation phase of the rubric project.

It is also important to develop a “norming” exercise for your work this summer. Here’s a great description from the Academy of Art University of what norming looks like:

In a norming session, teachers all use the same rubric and score the same pieces without looking at each other's scores. After three pieces have been scored, the teachers look at all of the scores together and discuss discrepancies, clarifying as they go. This process is repeated until the scores are the same most of the time—depending on the purpose of your norming session. (retrieved from http://faculty.academyart.edu/resource/rubrics.html, October 26, 2010)
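
If it helps to see what "the same most of the time" might look like in practice, here is a minimal sketch in Python of one way to tally exact agreement across graders during a norming session. The function, the grader names, and the 1–4 scale are illustrative assumptions on our part, not features of any particular rubric package.

# Compute the share of pieces on which every grader assigned the same score.
def exact_agreement_rate(scores_by_grader: dict[str, list[int]]) -> float:
    """scores_by_grader maps each grader to their scores, in the same piece order."""
    score_lists = list(scores_by_grader.values())
    n_pieces = len(score_lists[0])
    agreed = sum(
        1 for i in range(n_pieces)
        if len({scores[i] for scores in score_lists}) == 1
    )
    return agreed / n_pieces

# Example: three graders score the same five papers on a 1-4 rubric.
session = {
    "grader_a": [3, 2, 4, 1, 3],
    "grader_b": [3, 2, 3, 1, 3],
    "grader_c": [3, 2, 4, 1, 3],
}
print(f"Exact agreement: {exact_agreement_rate(session):.0%}")  # 80%

Some groups prefer a looser criterion, such as counting scores within one rubric level of each other as agreement; a tally like this is easy to adjust either way.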

Resources:

1) Linda Suskie’s Assessing Student Learning: A Common Sense Guide (2nd ed.) includes a clear and concise chapter on using rubrics for assessment. Suskie describes different kinds of rubrics, provides examples of rubrics, and gives sound advice on how to develop and use rubrics. For example, Suskie suggests that rubric rating scales should contain at least three but no more than five levels and that it is important to create short descriptors for each level of the trait being evaluated. She also includes a helpful list of references, such as articles in “Practical Assessment, Research, and Evaluation” (see http://pareonline.net/).

Assessing Student Learning also contains a great deal of useful information about all aspects of assessment, from defining assessment jargon to communicating assessment results. It is written for people who are not assessment experts.

2) You can find a shorter, but useful, PowerPoint presentation on Creating, Implementing, and Using Rubrics at http://www.ncsu.edu/assessment/old/presentations/assess_process/creating_implementing.pdf.

3) Many institutions are using AAC&U’s VALUE Rubrics (see http://www.aacu.org/value/rubrics/index_p.cfm?CFID=29677387&CFTOKEN=89446404).

4) You can also take a look at the Advice for Evaluating Student Work Using Rubrics document from the September Wabash Study kickoff meetings.

If you are using other rubrics and are willing to share them, please send them to us, and we will post them for the other Wabash Study institutions. Please also let us know if you are still looking for rubrics to assess specific student traits. We'll be glad to see what we can locate.

Thursday
Oct 7, 2010

The Wabash Study has begun

First, our thanks to everyone who traveled to Crawfordsville to kick off the new version of the Wabash Study. We are grateful for the chance to work with you. A couple of quick updates—

We are compiling the meeting comments and will post them next week along with our summary comments. Based on the comments, we will be creating an electronic repository for the institutional assessment portfolios, and we will be sending out more information to clarify the content and structure of the portfolio.

Here are two quick examples of helpful follow-up ideas/questions. Jo Beld (St. Olaf) made an important point about the portfolio at the end of the second kickoff meeting: the narrative should include details about how your assessment measures will change in response to the program, course, or institutional changes you decide to make. Susan Campbell (Middlebury College) added in a later phone call that the communication plan should also address how the main players implementing the project on campus will develop a "team communication plan" of sorts for keeping one another up to date on what they are doing amid all their other work on campus. If you have other suggestions, please add them in the comments below.

Parting point – As we said during the meeting, the purpose of assessment is to use evidence to guide improvement. Too often, assessment is understood as the process of measuring student learning, not of measuring student learning so that you can implement improvements. Of course, approaching assessment this way means that things may change, and change can be tough for any organization.

Rosabeth Moss Kanter wrote a short article on seven "Change Agent Bumper Stickers" for the Harvard Business Review. Kanter writes mostly for a business audience, but some of her points are relevant for our work:

Change is a threat when done to me, but an opportunity when done by me . . . Resistance is always greatest when change is inflicted on people without their involvement, making the change effort feel oppressive or constraining. If it is possible to tie change to things people already want, and give them a chance to act on their own goals and aspirations, then it is met with more enthusiasm and commitment. In fact, they then seek innovation on their own.

Change is a campaign, not a decision. How many people make vows to improve their diet and exercise, then feel so good about the decision that they reward themselves with ice cream and sit down to read a book? CEOs and senior executives make pronouncements about change all the time, and then launch programs that get ignored. To change behavior requires a campaign, with constant communication, tools and materials, milestones, reminders, and rewards.

Everything can look like a failure in the middle . . . There are numerous roadblocks, obstacles, and surprises on the journey to change, and each one tempts us to give up. Give up prematurely, and the change effort is automatically a failure. Find a way around the obstacles, perhaps by making some tweaks in the plan, and keep going. Persistence and perseverance are essential to successful innovation and change.

You can read the entire article at http://blogs.hbr.org/kanter/2010/08/seven-truths-about-change-to-l.html
