Assessment / Accountability

Deeper Completion Data, State by State

The National Student Clearinghouse Research Center today released state-by-state data on the various pathways students take on their way to earning degrees and certificates. The data builds on a national report from 2012 that showed a more optimistic picture of college completion than other studies had found previously.

According to the report, 13 percent of students nationwide who first enrolled at a four-year public institution completed their credential at a different college. And 3.6 percent of students who began at a four-year public institution earned their first degree or certificate at a community college. Among other findings, the report also gives state-specific breakdowns of the proportion of students who began at community colleges and eventually completed at four-year institutions.

Audit Criticizes Calif. Agency That Oversees Private Colleges

California's Bureau for Private Postsecondary Education has "consistently failed to meet its responsibility to protect the public's interests," a state audit released Wednesday said. The report from the California State Auditor cited a list of the agency's shortcomings, including long backlogs of license applications and delays in processing them, failure to "identify proactively and sanction effectively unlicensed institutions," and far too few inspections of institutions. The bureau, which the legislature created in 2009 after the state's previous regulatory body was eliminated, challenged the audit's negative conclusion but agreed with its recommendations for improving the agency's performance going forward.


We need a new student data system -- but the right kind of one (essay)

The New America Foundation’s recent report on the Student Unit Record System (SURS) is fascinating reading.  It is hard to argue with the writers’ contention that our current systems of data collection are broken, do not serve the public or policy makers very well, and are no better at protecting student privacy than their proposed SURS might be. 

It also lifts the veil on One Dupont Circle and on Washington's behind-the-scenes lobbying and politics in a way that is delicious and also troubling, if not exactly "House of Cards" dramatic. Indeed, it is good wonkish history and analysis, and it sets the stage for a better-informed debate about any national unit record system.

As president of a nonprofit private institution and a paid-up member of NAICU, I belong to the industry sector and to the representative organization in D.C. that, in the report's telling, stand as the chief SURS roadblocks. Yet I find myself both in support of a student unit record system and worried about the things it wants to record. Privacy, the principal argument mounted against such a system, is not my worry, and I tend to agree with the report's argument that it is the canard masking the real reason for opposition: institutional fear of accountability.

Our industry is a troubled one, after all, that loses too many students (Would we accept a 50 percent success rate among surgeons and bridge builders?) and often saddles them with too much debt, and whose outputs are increasingly questioned by employers.

The lack of a student record system hinders our ability to understand our industry, as New America's Clare McCann and Amy Laitinen point out, and understanding the higher education landscape grows ever more challenging for consumers. A well-designed SURS would certainly help with the former and might eventually help with the latter problem, though college choices have so much irrationality built into them that consumer education is only one part of the issue. But what does "well-designed" mean here? This is where I, like everyone, get worried.

For me, three design principles must be in place for an effective SURS:

Hold us accountable for what we can control. This is a cornerstone principle of accountability and data collection. As an institution, we should be held accountable for what students learn, their readiness for their chosen careers, and giving them all the tools they need to go out there and begin their job search. Fair enough. But don’t hold me accountable for what I can’t control:

  • The labor market. I can't create jobs where they don't exist, and the struggles of undeniably well-prepared students to find good-paying, meaningful jobs say more about the economy, the ways in which technology is replacing human labor, and the choices that corporations make than about my institution's effectiveness. If the government wants to hold us accountable for earnings post-graduation, can we hold it accountable for making sure that good-paying jobs are out there?
  • Graduate motivation and grit. My institution can do everything in its power to encourage students to start their job search early, to do internships and network, and to be polished and ready for that first interview. But if a student chooses to take that first year to travel, to be a ski bum, or simply to stay in their home area when jobs in their discipline might be in Los Angeles or Washington or Omaha, there is little I can do. Yet those choices have a large impact on any measure of earnings just after graduation.
  • Irrational passion. We should arm prospective students with good information about their majors: job prospects, average salaries, geographic demand, how recent graduates have fared.  However, if a student is convinced that being a poet or an art historian is his or her calling, to recall President Obama’s recent comment, how accountable is my individual institution if that student graduates and then struggles to find work? 

We wrestle with these questions internally. We talk about capping majors that seem to have diminished demand, putting in place differential tuition rates, and more. How should we think about our debt-to-earnings ratio? None of this is an argument against a unit record system, but a plea that it measure things that are more fully in our institutional control. For example, does it make more sense to measure earnings three or five years out, which at least gets us past the transition into the labor market and allows for some evening out of the flux that often attends those first years after graduation?

Contextualize the findings. As has been pointed out many times, a 98 percent graduation rate at a place like Harvard is less a testimony to its institutional quality than evidence of its remarkably talented incoming classes of students.  Not only would a 40 percent graduation rate at some institutions be a smashing success, but Harvard would almost certainly fail those very same students. As McCann and Laitinen point out, so much of what we measure and report on is not about students, so let’s make sure that an eventual SURS provides consumer information that makes sense for the individual consumer and institutional sector. 

If the consumer dimension of a student unit record system is to help people make wise choices, it can’t treat all institutions the same and it should be consumer-focused.  For example, can it be “smart” enough to solicit the kind of consumer information that then allows us to answer not only the question the authors pose, “What kinds of students are graduating from specific institutions?” but “What kinds of students like you are graduating from what set of similar institutions and how does my institution perform in that context?”

This idea extends to other items we might and should measure. For example, is a $30,000 salary for an elementary school teacher in a given region below, at, or above the average for a newly minted teacher three years after graduation? How, then, are my teaching graduates doing compared to graduates in my sector? Merely reporting the number without context is not very useful. It's all about context.

What we measure will matter. This is obvious, and it speaks to the power of measurement while raising the specter of unintended consequences. A cardiologist friend commented to me that his unit's performance is measured in various ways and that the simplest way for him to improve its mortality metric would be to take fewer very sick heart patients. He worries, of course, that such a decision would contradict his unit's mission and the reason he practices medicine. It continues to bother me that proposed student records systems don't measure learning, the thing that matters most to my institution -- more precisely, that they don't measure how much we have moved the dial for any given student, how impactful we have been.

Internally, we have honed our predictive analytics based on student profile data and can measure impact pretty precisely.  Similarly, if we used student profile data as part of the SURS consumer function, we might be able to address more effectively both my first and second design principles. 

Imagine a system that was smart enough to say “Based on your student profile, here is the segment of colleges similar students most commonly attend, what the average performance band is for that segment, and how a particular institution performs within that band across these factors.…”  We would address the thing for which we should be held most accountable, student impact, and we’d provide context. And what matters most -- our ability to move students along to a better education -- would start to matter most to everyone and we’d see dramatic shifts in behaviors in many institutions.

This is the hard one, of course, and I’m not saying that we ought to hold up a SURS until we work it out. We can do a lot of what I’m calling for and find ways to at least let institutions supplement their reports with the claims they make for learning and how they know.  In many disciplines, schools already report passage rates on boards, C.P.A. exams, and more.  Competency-based models are also moving us forward in this regard. 

These suggestions are not insurmountable hurdles to a national student unit record system. New America makes a persuasive case for putting in place such a system and I and many of my colleagues in the private, nonprofit sector would support one. 

But we need something better than a blunt instrument that replaces one kind of informational fog with another. That is their goal too, of course, and we should now step back from looking only at what kinds of data we can collect to also consider our broader design principles, what kinds of things we should collect, and how we can best make sense of that data for students and their families.

Their report gives us a lot of the answer and smart guidance on how a system might work.  It should also be our call to action to further refine the design model to take into account the kinds of challenges outlined above.

Paul LeBlanc is president of Southern New Hampshire University.

Accreditor Places Martin University on Probation

Indiana's Martin University has been placed on probation by the Higher Learning Commission of the North Central Association of Colleges and Schools, which cited concerns about the institution's finances and governance and the adequacy of its faculty and staff. The commission also placed several other institutions -- Arkansas Baptist College, Oglala Lakota College, Southwestern Christian University, and Salem International University -- on notice, which is less severe than probation. Kansas City Art Institute and Morton College were removed from notice.

Students need the right kind of college ratings system (essay)

The more expensive a purchase, the more important it is to be a smart consumer. Many Americans rely on labels and ratings for everything from food (nutrition labels) to appliances (energy ratings) to vehicles (gas mileage and crash safety) to health plans (Obamacare's bronze, silver, gold, and platinum tiers). Yet for one of the most expensive purchases a person will ever make – a college education – there is a dearth of reliable, meaningful, and comparable information.

In August, President Obama directed the U.S. Department of Education to develop a federal college ratings system with two goals: (1) to serve as a college search tool for students and (2) to function as an accountability measure for institutions of higher education.

Under the president’s proposal, ratings will be available for consumer use in 2015, and by 2018, they would be tied to the colleges’ receipt of federal student aid. Many colleges and universities have been protesting ever since, especially about the accountability goal.

But correcting the information imbalance about higher education outcomes is a key step toward improving graduation rates and slowing the rise in student loan debt. Although accountability mechanisms are a complex issue that may well take somewhat longer than 2018 to develop, student advocates agree on the following: we must move forward now with the multifactor rating information that higher education consumers desperately need. Furthermore, the administration's rating system should provide comparable data on several factors relevant to college choice so that students can decide which are most important to them, rather than imposing the government's judgment about which handful of factors should be combined into a single institutional rating.

As we evaluate the case for federal consumer ratings, let’s first set aside the 15 percent of college students who attend the most selective institutions and enjoy generally very high graduation rates. They may feel well-served by rankings like Barron’s and U.S. News, which emphasize reputation, financial resources, and admissions selectivity.

But for the 85 percent of students who attend nonselective or less-selective institutions, the choice of institution has far greater consequences. For these "post-traditional" students, college choice could mean the difference between dropping out with an unmanageable debt load and graduating with a degree and moving on to a satisfying career.

To share a real example, consider three Philadelphia universities: a suburban private, a Catholic private, and an urban state school. These institutions are all within 30 miles of one another, enroll students with similar academic characteristics, and serve similar percentages of Pell-eligible students. If you are a local, low-income student of color who wants to attend college close to home, how should you decide where to go?

What if you knew that the suburban private school's graduation rate for underrepresented minority students (31 percent) is much lower than that of the Catholic private (54 percent) and the urban state school (61 percent)? Or that the urban state and private Catholic schools have lower net prices for low-income students? Would that affect your choice? (Thanks to Education Trust's College Results Online for these great data.)

A rating system with multiple measures (rather than a single one) could greatly help this student. Armed with facts about comparable graduation rates, admissions criteria, and net prices, she can investigate her options further, ask informed questions, and ultimately make a stronger decision about which institution is the best fit for her.

A ratings system designed for the 85 percent of students going to less-selective institutions will help students get the information most important to them. Many consumer rating schemes include multiple measures. Car buyers can compare fuel efficiency, price, and safety ratings as well as more subjective ratings of comfort or "driver experience" from a variety of sources. Some buy Honda Civics for gas mileage and safety; others choose more expensive options for luxury features or handling.

Similarly, prospective college students need to know not just about accessibility/selectivity (average GPA, SAT/ACT scores), but also about affordability (net price by income tier, average student loan debt, ability to repay loans) and accountability (graduation rates by race and by income). The information should be sortable by location (to aid place-bound students) and by institution type (two-year, four-year, public, private) for students to compare side by side. 

The data to fuel the rating system are for the most part already available, although some are in need of improvement. As is now widely acknowledged, we must change the federal calculation of graduation rates as soon as possible to account for part-time and transfer students, and we must collect and report institutional Pell Grant recipient graduation rates as part of the federal data system (IPEDS). Over the long term, we should also find a valid way to assess work force outcomes for students.

But let’s not delay a ratings system that will serve students any further. Once the system is up and running, we can turn to the more complex and politically difficult question of how to use federal financial aid dollars to incentivize better institutional outcomes.

Carrie Warick is director of partnerships and policy at the National College Access Network, which advocates on behalf of low-income and underrepresented students.

Accreditation at Risk for Sojourner-Douglass College

The Middle States Commission on Higher Education has told Sojourner-Douglass College that it has until September 1 to show why it should not lose its accreditation, The Baltimore Sun reported. The accreditor cited high debt and questions about the college's financial viability. College officials did not respond to requests for comment.

Meetings feature more substance about education technology

Rhetoric about ed tech at SXSWedu and ACE meetings is more sober than soaring, as academics and experts talk about how to use emerging models.

Gainful employment debate aired out in The New York Times

With the release of the final gainful employment proposal looming, for-profits and their critics duke it out in the commentary section of The New York Times.

Essay questions benefits of rush to competency-based education

At best, so-called competency- and proficiency-based higher education is a world of good intentions and uncritical enthusiasms. At worst, it seems to be the fulfillment of conservative cost-cutting visions that will put our most enriching higher education experiences still further out of reach for many Americans.

In the U.S. these programs are aimed at sidelining the familiar credit hour in favor of personalized and flexible learning experiences for enrollees. They push the idea that some students will achieve mastery with fewer instructional hours than others and should thus be spared that expenditure of time and money. Online, real-world or self-guided experiences may also stand in for some conventional classroom approaches. Students who demonstrate mastery need not continue instruction in a particular area.

Through the use of all of these innovations, an affordable alternative to the conventional bachelor's degree is envisioned, meeting the demands of many audiences -- funders, taxpayers and students -- for lowered higher education costs. The promise is that some students will clock fewer hours using the most costly college personnel and resources, and thus face lower debts upon graduation. Students' "hours in seats," once the sine qua non of higher education in contrast to vocational instruction, are now seen as an obsolete metric.

The University of Maine at Presque Isle is a good example of such priorities, with a new proficiency-based undergraduate program the university rolled out recently with much fanfare. According to Inside Higher Ed, Presque Isle enrolls many first-generation, underprepared students, and the campus seeks to help each student work at his or her own pace along an affordable path of workforce preparation. Let me be clear: I believe, without a shadow of a doubt, that students learn at different rates and in different ways; that current student debt levels in the United States are crushing; and that the status quo is deeply disadvantageous to Americans of lower socioeconomic status.

But this plan to save college students and their families money through the use of individualized curriculums; standardized instructional measurements; and reductions in classroom, lab, shop or studio hours will only increase those disadvantages. The university envisions a heightened accountability for its new instructional approach, through a newly careful matching of pedagogical experience and student achievement. But consider the criteria against which such matches will be assessed: success in teaching and learning will be defined by lowered spending. If we focus our attention on that contraction of institutional outlay, the promises of this new educational model start to seem less than solid.

In his recent work on personalized education, the Oxford education theorist David Hartley has warned of the ways in which such market-focused pedagogy constrains democratic opportunity, and I follow his lead here in considering the university's new programs. First, if competency-based programs are accepted as cost-saving equivalents to conventional elements of bachelor's degree curriculums, they render those conventions (because they are more costly) moot, and even undesirable. The idea that MORE MONEY (as in, public funding) might optimally be spent on higher education for Americans then becomes unreasonable.

And with that move, the notion that every student (not just those of affluence) might learn best by taking more rather than fewer courses, staged as small classes taught by well-compensated, securely employed (tenured!) instructors, in well-resourced facilities, is taken off the table. The notion that our nation, if it wishes to promote workforce preparation and global economic competitiveness, would do best to EXPAND funding provisions for education is dismissed. Naturally, crucially, the new cost-saving techniques are never, ever explicitly said to constitute a contraction. That elision makes it seem even more illogical, more irresponsible, to suggest that public monies be raised or reallocated in support of our colleges. To be inclusive is now to be profligate.

Second, we must recognize that despite the consumer-flattering appeal, a personalized curriculum is not automatically an optimized curriculum. For example, the university’s new program emphasizes “Voice and Choice”:

We offer students a freedom of choice that creates ownership of their degree and allows them to discover their unique identity. Students can choose to demonstrate their deep understanding of a subject by writing papers, taking multiple-choice exams, designing a project, completing a research study and more.

I'd ask: what is such choice, clearly part of the school's new branding, really providing? I'm all for teaching students to question their instructors' methods and priorities (see below), but if my students tell me they loathe the conceptual challenges involved in writing a paper, I know that's one thing they'll need to attempt before the term ends. If they balk at the logic challenges of multiple-choice tests, that's something I'll be sure to expose them to. In other words, I suspect that "freedom of choice" is provided here not to facilitate "deep understanding," but rather to provide a satisfactory customer experience.

While I am excited that the university’s instructors want to introduce students to “problem-posing” and emphasize real-world and hands-on experiences, all potentially engaging and genuinely flexible elements of college teaching, the valorization of market freedom and consumer choice here makes me wary. Like all performance standards, these efficiencies and controls are double-edged, providing a floor for student attainment but also a ceiling, as I’ve written before.

But what is most concerning is that in my experience, it is the errors, dead ends and confusions that launch students into the most profound and transformative moments of learning and self-discovery. These “off rubric” experiences are uncomfortable, and in no obvious way get anyone closer to passing a class or completing a degree. Just the opposite. And yet these are exactly the points at which the learner (not to mention the instructor) is most open to the unfamiliar and unexpected.

I fear that the deployment of “competencies” and “proficiencies” as instruments of economy and brevity is simply antithetical to the open-ended inquiry that is foundational to rigorous critical thinking, for learner and teacher. However concerned and inventive the professors involved in proficiency-based teaching might be, they are now facing clear disincentives to conceptual messiness. Learning, I believe, must be shot through with dissatisfaction, with frustration and moments of utter uncertainty about where one is heading intellectually -- all experiences that are now to be treated as inefficiencies. If these most-perfect conditions for inquiry and invention are the very ones that are now seen to be fiscally unwise, what hope for the creativity and growth of American college-goers?

Amy Slaton is a professor of history in the department of history and politics at Drexel University.

A book explores how to make accreditation more effective

The accreditation process needs to change, an expert writes in a new book, but accreditors are making more progress than their critics charge.
