Assessment

Study tracks European tracking of university students

Universities and governments on the continent exhibit many of the same data limitations as U.S. colleges in gauging student outcomes, study shows.

International educators debate mass vs. elite higher education

International educators at a meeting of the Organization for Economic Cooperation and Development put a new twist on an old debate, prodded by New York University's provocative president.

Pulse podcast examines Blackboard Analytics for Learn

This month's edition of The Pulse podcast features a conversation with Mark Max, vice president of Blackboard Analytics for Learn.

Data show key role for community colleges in 4-year degree production

Study shows that 45 percent of bachelor's degree recipients studied at two-year institutions first -- as many as three-quarters in some states.

Why assessment isn't about certainty (essay)

When I first floated the idea of writing a weekly column from my perch as director of institutional research and assessment at my college, everyone in the dean’s office seemed to be on board.  But when I proposed calling it “Delicious Ambiguity,” I got more than a few funny looks.  
 
Although these looks could have been a mere byproduct of the low-grade bewilderment that I normally inspire, let’s just say for the sake of argument that they were largely triggered by the apparent paradox of a column written by the measurement guy that seems to advocate winging it. But strange as it may seem, I think the phrase “Delicious Ambiguity” embodies the real purpose of Institutional Research and Assessment. Let me explain why.
 
This particular phrase is part of a longer quote from Gilda Radner – a brilliant improvisational comedian and one of the early stars of “Saturday Night Live.” The line goes like this:
 
“Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next. Delicious Ambiguity.”
 
For those of you who chose a career in academia specifically to reduce ambiguity – to use scholarly research methods to discover truths and uncover new knowledge – this statement probably inspires a measure of discomfort. And there is a part of me that admittedly finds some solace in the task of isolating statistically significant “truths.” I suppose I could have decided to name my column “Bland Certainty,” but – in addition to single-handedly squelching reader interest – such a title would suggest that my only role is to provide final answers – nuggets of fact that function like the period at the end of a sentence.
 
Radner’s view of life is even more intriguing because she wrote this sentence as her body succumbed to cancer. For me, her words exemplify intentional – if not stubborn – optimism in the face of darkly discouraging odds. I have seen this trait demonstrated repeatedly by many of the faculty and staff members I have known over the last several years, as they have committed themselves to helping a particular student even when that student seems entirely uninterested in learning.
 
Some have asserted that a college education is a black box; some good can happen, some good does happen – we just don’t know how it happens. On the contrary, we actually know a lot about how student learning and development happens – it’s just that student learning doesn’t work like an assembly line.  
 
Instead, student learning is like a budding organism that depends on the conduciveness of its environment, a condition that emerges through the interaction between the learner and the learning context. And because both of these factors perpetually influence each other, we are most successful in our work to the degree that we know which educational ingredients to introduce, how to introduce them, and when to stir them into the mix. The exact sequence of the student learning process is, by its very nature, ambiguous because it is unique to each individual learner.
 
In my mind, the act of educating is deeply satisfying precisely because of its unpredictability.  Knowing that we can make a profound difference in a young person’s life – a difference that will ripple forward and touch the lives of many more long after a student graduates – has driven many of us to extraordinary effort and sacrifice even as the ultimate outcome remains admittedly unknown.  What’s more, we look forward to that moment when our perseverance suddenly sparks a flicker of unexpected light that we know increases the likelihood – no matter how small – that this person will blossom into the lifelong student we believe they can be.
 
The purpose of collecting educational data should be to propel us – the teacher and the student – through this unpredictability, to help us navigate the uncertainty that comes with a process that is so utterly dependent upon the perpetually reconstituted synergy between teacher and student. The primary role of institutional research and assessment is to help us figure out the very best ways to cultivate – and in just the right ways – manipulate this process.  
 
The evidence of our success isn’t a result at the end of this process. The evidence of our success is the process. And if, pooling our collective expertise, we focus on cultivating the quality, depth, and inclusiveness of that process, it isn’t outlandish at all to believe that our efforts can put our students on a path that someday just might change the world.
 
To me, this is delicious ambiguity.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. This essay is adapted from the first post on his new blog.

Program Assessment Workshop

Date: Sat, 10/27/2012
Location: Embassy Suites Hotel San Diego Bay - Downtown, 601 Pacific Highway, San Diego, California 92101-5914, United States

Program Assessment Workshop

Date: Sat, 09/22/2012
Location: Tod Wehr Conference Center, Milwaukee School of Engineering (MSOE), 1025 North Broadway, Milwaukee, Wisconsin 53202, United States

WGU pushes transfer students to graduate community college first

Western Governors U. pushes graduation even before students enroll by offering financial perks for associate degree holders and, at WGU Texas, through partnerships with community colleges.

General Education and Assessment: A Sea Change in Student Learning

Date: Thu, 02/28/2013 to Sat, 03/02/2013
Location: Boston, Massachusetts, United States

Essay on flaws of Education Department's list of most expensive colleges

The Higher Education Opportunity Act of 2008 required the Department of Education to publish "College Affordability and Transparency Lists" of colleges and universities with the highest and lowest published prices and the highest and lowest net prices of attendance. To get on the list a college needs to be in the top 5 percent or bottom 10 percent of the cost scale. The new lists are now out. On the Education Department website, Secretary of Education Arne Duncan says, "These lists are a helpful tool for students and families as they determine what college or university is the best fit for them." We wish they really were.

A casual glance at the shame list of the most expensive private colleges pulls up names like the Culinary Institute of America, the Art Institute of Chicago, Cornish College of the Arts, and Berklee College of Music. At the other end of the scale, the list of the lowest-cost programs includes quite a number of tiny Talmudic institutions and Bible colleges. These are all fine programs, but to paraphrase former President Clinton, this isn't a list that looks like where America goes to college.

The Department of Education's definition of the net price of attendance paints a false picture of what most students have to pay, and the list of colleges and universities held up for public rebuke clearly reflects the weakness of this measure. In addition, the way net cost is calculated by the Department of Education may induce some institutions to change their behavior in truly unhelpful ways as they attempt to get off the list or to make sure they don’t get on it.

Here is the federal definition of the net price of attendance: "Average net price is for full-time beginning undergraduate students who received grant or scholarship aid from federal, state or local governments, or the institution." For starters, this price of attendance includes room and board charges, which arguably should not count as a cost of college, since the student must eat and sleep whether or not they are enrolled.

But the key problem with the definition is that the net cost measure for the whole institution is based only on those students who actually get aid. Here is an example of the type of problem that this measure can create.

College X has 5 students and a list price of $40,000. Suppose that one of the five students at this college gets a full $40,000 scholarship and the remaining four are full-pay students whose families fork out $40,000. For this institution the Education Department would report an average net price of $0, since the only student getting aid received a completely free ride. Now consider College Y, which also has 5 students and the same list price of $40,000. College Y has a scholarship budget of $40,000, just like College X. But instead of concentrating its grants on one student, College Y gives each student an $8,000 scholarship. The Department of Education would tell us that the average net price for College Y is $32,000.

College X would show up as the world’s greatest bargain, while College Y might find itself on the shame list. This disparity of rankings happens even though both colleges have the same number of students, the same list price, and spend the same amount on scholarships. College X has found a way to game the system.  By concentrating its aid, it decreases the net price for students receiving aid.  But it has very few of those students.
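
To make the arithmetic concrete, here is a minimal Python sketch of the federal aided-students-only calculation as we have described it; the function name and the five-student figures come from our hypothetical example, not from any official formula.

```python
# Hypothetical sketch of the federal "net price" calculation described above.
# The average is taken only over students who receive grant aid, as in the
# Education Department definition quoted earlier.

def federal_net_price(list_price, grants):
    """Average net price over aided students only (the federal definition)."""
    aided = [g for g in grants if g > 0]
    if not aided:
        return list_price
    return sum(list_price - g for g in aided) / len(aided)

# College X: one student gets a full $40,000 scholarship, four pay full price.
college_x = [40000, 0, 0, 0, 0]
# College Y: the same $40,000 budget spread as $8,000 to each of five students.
college_y = [8000] * 5

print(federal_net_price(40000, college_x))  # 0      -> looks like a bargain
print(federal_net_price(40000, college_y))  # 32000  -> lands near the shame list
```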

Individuals and groups tend to respond to incentives. Colleges and universities are no different. When policy makers set up the rules of the game, colleges will respond to the incentives if they can. Since the shame list is a fixed percentage of the total number of colleges in the group, any college near the margin has a strong reason to try to game its way off the list. As our example makes clear, with the current rules one way to avoid being on the shame list is to concentrate grants and scholarships among as few students as possible.

We are sure that this is not what the Department of Education intends. But let's look at the socially perverse ways that colleges could game the system to get off the shame list. One way to concentrate aid is to avoid recruiting students who are likely to qualify for federal student aid. This is diametrically opposed to the thrust of federal financial aid policy ever since the passage of the Higher Education Act of 1965. Federal financial aid has been aimed at students with demonstrated financial need in order to improve access to higher education.

Additionally, if an institution does have a student who qualifies for federal need-based financial aid, the incentive is to pile on institutional grants. As a result, this poor student would get a better deal, but slightly less-poor students would be left out, and middle-class students, unless they are targets for merit-based grants, would get a worse deal. In fact, middle-class students become somewhat poisonous in this system because they are a numerous group and because they require smaller dollops of aid than poorer students. Having a lot of middle-class students who receive small grants is a guaranteed way to see your net price go up in the federal calculation.

We’re sure the federal government does not want colleges to concentrate their grants and scholarships on a smaller number of students, but that is the new incentive for institutions that might be able to reallocate their grant monies.

The reasonable alternative way to measure the net cost of attendance is to compute what the average student actually pays, not what the average student who gets aid winds up paying. This method adds the full-pay students to the calculation. Many colleges, however, will favor the current federal definition of net price because the federal formula produces a lower number for net price than a formula that includes all students. But the colleges' preference for a lower number should not stand in the way of producing a measure that is more game-proof.

This way of calculating net price has the advantage that the number it gives you for net price is independent of how grant aid is distributed among the students who receive aid. If our hypothetical five-student college has $40,000 in aid to distribute, this measure will be unchanged whether that college gives all the money to one student or an equal amount to each. Using this measure of net price, if the institution wants to reduce its average net price then it will either have to find a way to reduce its list price or it will have to increase its scholarship budget. We think this is what policymakers want.
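
The same sketch, adapted to the all-students definition, shows why this measure cannot be gamed by reshuffling aid: both hypothetical colleges come out at exactly the same net price.

```python
# Hypothetical sketch of the alternative measure: average what EVERY student
# actually pays, full-pay students included.

def net_price_all_students(list_price, grants):
    """Average net price over all enrolled students, aided or not."""
    return sum(list_price - g for g in grants) / len(grants)

college_x = [40000, 0, 0, 0, 0]   # aid concentrated on one student
college_y = [8000] * 5            # the same $40,000 spread evenly

print(net_price_all_students(40000, college_x))  # 32000
print(net_price_all_students(40000, college_y))  # 32000 -- identical, as argued above
```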

Our example is not a crazy hypothetical. For the colleges on the current shame list the percentage of their students getting aid is a very high 82.5 percent. But spreading this aid so widely as a tool to craft the freshman class now comes with consequences. It gets you smacked with a higher calculated net cost of attendance than if you had recruited differently.

We have recalculated net price to include all students and produced a very different "expensive 5 percent." On the alternative list, institutions such as Northwestern, Brown, Georgetown, and Villanova Universities and Boston College now appear. This list looks more like America’s major private colleges and universities. And on this list, aid is more concentrated. In the alternative top 5 percent, the percentage of students receiving aid is only 55 percent. The rest are full-pay students.

For individual colleges the effect is substantial. To take one example, Oberlin College is on the Education Department's shame list, holding down the 31st spot. Oberlin gives aid to 85 percent of its students. If we calculate net price for all students, Oberlin falls out of the shameful 5 percent.

Yet no matter the methodology, we do not think ranking colleges by net price is particularly useful. A single number for each institution tells a family almost nothing. The whole shame list ranking is a political exercise of dubious social value. The Education Department website does send students to each institution's net price calculator, but these calculators tend not to be standardized and they are often quite complex.

There is mileage, however, in reporting the size of the average grant aid package students receive at each college, broken down by rough categories of family income. A family should be able to find their income level and see that, on average, a student at university X who comes from a family like theirs received $5,682 in federal, state, and institutional grant aid (not loans). Then the family could decide for itself whether or not a particular college offered value for money. There is no shame in having an expensive program and no particular virtue in being cheap.
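
As a purely illustrative sketch of the kind of breakdown we have in mind, the snippet below averages grant aid within rough family-income bands; the students, bands, and dollar amounts are invented for the example.

```python
# Illustrative only: averaging grant aid within rough family-income bands,
# the kind of table a family could actually compare against its own situation.
from collections import defaultdict

students = [
    {"income": 35000, "grant": 28000},
    {"income": 62000, "grant": 15500},
    {"income": 95000, "grant": 5682},
    {"income": 180000, "grant": 0},
]

bands = [(0, 48000, "under $48k"), (48000, 75000, "$48k-$75k"),
         (75000, 110000, "$75k-$110k"), (110000, float("inf"), "over $110k")]

totals = defaultdict(lambda: [0, 0])  # band label -> [grant total, student count]
for s in students:
    for lo, hi, label in bands:
        if lo <= s["income"] < hi:
            totals[label][0] += s["grant"]
            totals[label][1] += 1

for label, (total, count) in totals.items():
    print(f"{label}: average grant ${total / count:,.0f}")
```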

We wish we could share Secretary Duncan’s optimism about the current lists, but we cannot. These lists will not help students and families find the college that best fits them.  The process of matching students to colleges is complex, and the information contained in the net price rankings actually is misleading. Lastly, as institutions respond to the incentives inherent in the current methodology for calculating net price, institutional aid may be distributed in increasingly unfair and socially inefficient ways.

Robert B. Archibald and David H. Feldman are professors of economics at the College of William and Mary and are the co-authors of Why Does College Cost So Much? (Oxford University Press).
