"These aren't failed traditional students; they are the new norm."

So says Paul Attewell, author of Passing the Torch: Does Higher Education for the Disadvantaged Pay Off Across the Generations?, as he suggests that we need to take a longer view of measuring college success when looking at "non-traditional" students (check out http://insidehighered.com/news/2007/06/15/cuny for a brief discussion of the book).

I've written here before about the changing demographics of higher education seekers and the implications of those shifts for the way scholarships are created. Attewell's study recognizes that “three-fourths of students today aren’t traditional, so it doesn’t make sense to try to understand them through the lens of an 18-year-old living in a dorm.” And yet most of us probably think about college students, and therefore the challenges that face them, within that paradigm.

One of the most interesting things to me about Attewell's comments is his suggestion that the way we measure "success" needs to match who and what we're measuring. It sounds logical, doesn't it? The fact is that any result, such as the percentage of students completing a degree in six years, is a composite of each individual in the group. And likely, there are groups of individuals who behave (to use marketing lingo) more similarly to each other than they do to another group. Attewell is suggesting that as we tease out what college success means, we also need to tease out sub-groups of individuals who have certain things in common, and be rigorous enough (or is it flexible enough?) to adjust our measurement view accordingly. This is just good analytical practice to me...but then I think about these things often.

The rub here is that, more and more often, the push to "benchmark" organizational performance, or individual performance for that matter, requires that we migrate toward the average, and in doing so we lose the richness of what's really going on! If we only look at the percentage of students who graduate in six or fewer years, without digging into who is graduating in four and who is graduating in ten...we lose important information about what is working for whom and, in this case, about fundamental shifts in who the "whom" is.
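For readers who like to see the arithmetic, here is a tiny sketch of the point above. The cohort names and every number in it are invented for illustration; they are not from Attewell's study. The idea is simply that one blended graduation rate can sit on top of very different subgroup stories:

```python
# Purely illustrative, made-up numbers: how an aggregate 6-year
# graduation rate can mask very different subgroup outcomes.

# Hypothetical cohorts: name -> (students, graduated within 6 years)
cohorts = {
    "traditional (18-22, residential)": (250, 175),  # 70%
    "part-time working adults":         (500, 175),  # 35%
    "returning students with families": (250, 100),  # 40%
}

total_students = sum(n for n, _ in cohorts.values())
total_grads = sum(g for _, g in cohorts.values())

# The single "benchmark" number a report might lead with:
print(f"Overall 6-year rate: {total_grads / total_students:.0%}")  # 45%

# Digging beneath the average tells a richer story:
for name, (n, g) in cohorts.items():
    print(f"  {name}: {g / n:.0%}")
```

A 45% overall rate looks like one fact, but here it blends a 70% rate for one group with a 35% rate for the largest group, which is exactly the kind of information that disappears when everyone migrates toward the average.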

There is, as always, a balance to be struck between measuring shorter-term progress and longer-term change. As donors, we don't want to, and shouldn't have to, wait 10 years to get some indication of how things are going. On the other hand, we also need to be acutely aware that context is king in interpreting any kind of measurement. Don't be afraid to "dig in" and ask questions that get beyond the average. That's where you really learn what works where and for whom.

Nancy DeFauw

Posted at 6:00 AM, Jul 13, 2007 in Accountability | Education | Permalink | Comments (2)


Occurs to me that not only do you need reasonable benchmarks, but you also need to be very clear about desired outcomes and be sure that what you are trying to accomplish can be achieved. Similarly you want some assurance that those achievements are not only measurable but can be attributed to the programs you are supporting with your philanthropic dollars.

Posted by: Bruce Trachtenberg

Agreed. Having thoughtfully developed outcomes and meaningful indicators of progress against those outcomes is a critical first step in ensuring that the important (and meaningful) things are the ones being measured. But I'm suggesting that once you get to results, there needs to be a discipline of digging in to sort out where they come from (getting underneath, if you will, the average result). And in fact, by studying results, the outcomes and indicators themselves ought to be refined. It is, as is all analysis, a continuous recalibration process. I think donors can play an invaluable role in this process by asking the questions AND supporting their organizations in digging into the results.

Posted by: Nancy DeFauw