Opinion: Time for real college assessment
By James R. Pomerantz and Daniel Oppenheimer
How do we know how much we learn in college?
If you search for an answer to this question, prepare to be disappointed. Popular college rankings such as U.S. News & World Report’s are based on subjective judgments of schools’ reputations and on the difficulty of gaining admission. Rarely, if ever, are rankings based on direct, value-added assessments comparing how students perform when they graduate with how they performed when they first enrolled.
It may seem odd that our colleges and universities—which study complex topics ranging from subatomic particles to the Big Bang—would have so little data with which to assess their own effectiveness. What might cause these institutions to be so reluctant to pursue information that would help them understand their own impact on students?
Some colleges may fear that the results will prove to be embarrassing. Some may argue that college skills such as writing proficiency cannot be measured accurately even though schools assign their students grade point averages with three digits of numerical precision.
But the biggest reason college effectiveness goes unmeasured is that schools, policy makers, parents, and students take it for granted that undergraduates’ skills improve during college. This assumption may seem intuitive, but it is not backed by much evidence. In studies described in the book “Academically Adrift,” more than 45 percent of college students showed no improvement in critical thinking during their time in college.
It’s time for a wake-up call. If schools aren’t measuring student learning, we cannot know whether students are actually learning.
Along with colleagues, we recently published the results of a nine-year study designed to determine whether students finishing college write any better than they did when they first enrolled. There is more to college than writing, but we studied writing because it is one skill that students, schools, and employers see as critically important. We selected a small private university in the Southwest as our test case and randomly sampled students for testing. We modeled our study as closely as possible on randomized clinical trials, the same standard used to determine whether new medicines have their intended health benefits. We tested students both cross-sectionally (comparing first-year through fourth-year students on a single day) and longitudinally (tracking individual students over the course of their undergraduate years).
There was good news. Students improved their writing scores, as judged by expert assessors of writing who were blind to the identities of the students and to the purpose of the study. The improvement was approximately 7 percent from the first to the fourth year of college, a statistically significant increase. The same degree of improvement appeared in both persuasive and expository writing, in the cross-sectional and the longitudinal data, for male and female students, and for humanities/social science majors and engineering/natural science majors.
Our findings also point to an opportunity: Now that we have a benchmark, we can test new instructional interventions to see how much they improve upon (or fall short of) the status quo. While a 7 percent improvement is not trivial, we would hope for better. Schools need to engage in value-added assessment of their students. Without such testing, we will be navigating blind.
College administrators who believe that studies such as ours are too expensive and time-consuming should think again. Universities spend countless hours and resources developing curricular requirements, establishing tutoring centers, and otherwise attempting to improve undergraduate instruction. Yet they typically fail to establish a formal assessment system to determine whether those interventions are effective.
Studies like ours are simple and inexpensive compared with other common initiatives on campus. And such studies are the only way we can know whether schools are accomplishing their goals.
We hope that universities will begin testing their entering students, not just on writing but on other critical skills as well, so that four years down the road they can see whether their teaching has made a difference. Once you bother to collect the data, before-and-after-college comparisons are not that hard to make, and they can make a real difference.
James R. Pomerantz is a professor of psychology at Rice University and Daniel Oppenheimer is a professor of psychology and management at UCLA.