Quality and 'Non-Institutional' Higher Education

The Council for Higher Education Accreditation (CHEA) and the Presidents' Forum this week released a policy report that explores the potential for an external quality review process for "non-institutional" providers in higher education. This emerging field includes companies and nonprofits that offer courses, modules or badges. Most of this sector is online, non-credit and low-cost.

The two groups last year formed a commission to look at options for quality assurance in the space. The commission's report describes three possibilities: a voluntary, cooperative effort by providers; a voluntary service offered by an existing third-party association; or a new external group created for this purpose.

"The commission calls upon the postsecondary education community to seize this moment as a critical time to consider development, adoption and extension of new approaches that address the need for institutional and organizational quality review," the report said.

U. of Michigan Gets Accreditor Approval for Competency-Based Degree

The University of Michigan's regional accreditor has signed off on a new competency-based degree that does not rely on the credit-hour standard, the university said last week. The Higher Learning Commission of the North Central Association of Colleges and Schools gave a green light to the proposed master's degree in health professions education, which the university's medical school will offer. In its application to the regional accreditor, the university said the program "targets full-time practicing health professionals in the health professions of medicine, nursing, dentistry, pharmacy and social work."

A college rating system that might help students and not do harm (essay)

Many of my fellow college presidents remain worried about the Obama Administration’s proposed (and still being developed) rating system for higher education. While Education Department officials have been responsive and thoughtful about our concerns, many among us fundamentally do not trust government to get this right.

Or anyone, for that matter. After all, we already have lots of rating systems and they mostly seem flawed -- some, like U.S. News & World Report, extremely so. Institutions game the system in various ways. Rarely do rating systems capture the complexity of the industry with its rich mix of institutions, missions, and student markets served. Almost always, they are deeply reductionist.

On the other hand, higher education mostly resists transparency, good data sharing, and accountability. I may be in the minority of my peers in actually supporting some kind of rating system, but I am with the majority in worrying about what will get measured and how. Take the proposed gainful employment regulations, for example. My approach to accountability dictates that you hold me accountable for what I can control. I can't control the labor market (can I hold government accountable for that piece?), the willingness of a graduate to move for a job, or the ridiculously low wages our society pays teachers and social workers. I can control the level of preparedness my students have as they enter their chosen field. So hold me accountable for the latter, but not the former.

I've always thought that a rating system that does not adjust for the student being served is inherently flawed. It often fails to capture the real value-add of an education. For example, if we could measure how far a student has moved intellectually, developmentally, and professionally, I might argue that Harvard and Yale would rank near the bottom of such a rating system, while a Rio Salado College might rank near the top, at least in terms of how far they move students educationally. After all, if you take the top 1 percent of high school graduates, how much actual educational value have you added (social value, the value of the network, status and so on are other matters, of course)? Or perhaps more kindly, the educational success of these students has a lot more to do with them and the other high performers around them than with Harvard -- they would thrive anywhere they found themselves.

Yet it seems certain that we will have a rating system.  So I’ve played with the rough outlines of a rating system that is student-centered and program-based, and that places institutions (by program) on a matrix that tells us more than a simple score. Any rating system will answer some questions and ignore others. The questions I want my system to answer are these:

  • How does an institution, at the program level (which matters more to students than the institution overall), serve various student profiles (because students are not at all alike)?
  • Do students who fit my profile graduate in large numbers, find jobs, avoid heavy debt, and get paid well enough?
  • How does the institution perform overall (as opposed to at the program level) on those questions -- the government's concern?

I’d design the system on these two axes:

Student matching. This depends on creating a student profile, a combination of academic and financial factors and perhaps other items we think might be important (Gender? Race? Age? Veteran?). 

High/low risk and resource = level of aid needed + HS GPA + HS rating
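As a concrete illustration, here is a minimal sketch (in Python) of how this first axis might be scored. The essay names only the three inputs; the 0-1 normalization, the equal weights, and the example values below are assumptions for illustration, not part of the proposal.

    def risk_resource_score(aid_need: float, hs_gpa: float, hs_rating: float) -> float:
        """Combine the three inputs into a 0-1 score; higher means higher risk.

        aid_need  -- share of cost requiring aid, 0.0 (none) to 1.0 (full)
        hs_gpa    -- high school GPA on a 4.0 scale
        hs_rating -- high school quality, 0.0 (weak) to 1.0 (strong)
        """
        gpa_risk = 1.0 - hs_gpa / 4.0    # lower GPA -> more risk
        school_risk = 1.0 - hs_rating    # weaker school -> more risk
        # Equal weighting is an assumption, not something the essay specifies.
        return (aid_need + gpa_risk + school_risk) / 3.0

    # The essay's first student: 2.8 GPA, lower-ranked high school, high need.
    print(risk_resource_score(aid_need=0.9, hs_gpa=2.8, hs_rating=0.3))  # ~0.63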

Program success. The success of the program in terms of placing students in a related field one year after graduation (students do all sorts of things immediately after graduation -- we need to filter out that noise), their earnings one year after graduation, the percentage of students who graduate, and the cost of the program.

Success rate by program = grad rate + % of grads working in the related field or field of choice + avg earnings + net cost + average debt
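The same caveat applies to this second axis: the essay lists the ingredients but not how to combine them. In the sketch below, the dollar ceilings used for normalization and the sign flips for cost and debt are assumptions.

    def program_success_score(grad_rate: float, in_field_rate: float,
                              avg_earnings: float, net_cost: float,
                              avg_debt: float) -> float:
        """Rough 0-1 composite; higher is better. Ceilings are invented."""
        earnings = min(avg_earnings / 50_000, 1.0)  # hypothetical ceiling
        cost = min(net_cost / 30_000, 1.0)          # cost counts against a program
        debt = min(avg_debt / 60_000, 1.0)          # so does debt
        parts = [grad_rate, in_field_rate, earnings, 1.0 - cost, 1.0 - debt]
        return sum(parts) / len(parts)

    # School B from the first comparison table below.
    print(program_success_score(0.65, 0.85, 36_000, 16_500, 29_000))  # ~0.64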

Because the system uses the student as the lens of interpretation, a student with a high-risk profile (let's say a 2.8 HS GPA from a lower-ranked high school, with a family income of less than $40,000) looking at three schools offering Secondary Education programs might see this kind of comparison:

                        School A    School B    School C
% of Like Students           95%         40%         35%
Graduation Rate              15%         65%         45%
% Working in Field           28%         85%         75%
Net Cost                 $22,000     $16,500     $10,500
Avg Debt                 $56,000     $29,000     $14,000
Earnings                 $29,000     $36,000     $34,000

In the above example, School A looks like certain poorly performing for-profits, while School C might be the profile for a public institution. School B, a private institution, leaves its graduates with more debt than does public School C, but it graduates more of its students and places them more effectively. So the student has some tradeoffs to consider.

In contrast, a low-risk student with ample resources (say a 3.6 GPA from a good high school and a family income of $80,000) looking at the same program might see a different report for Schools A and B; in this case, an elite institution has been substituted for School C.

                        School A    School B    School C
% of Like Students            3%         30%         12%
Graduation Rate              75%         85%         95%
% Working in Field           40%         85%         95%
Net Cost                 $22,000     $16,500     $24,000
Avg Debt                 $28,000     $23,000     $45,000
Earnings                 $29,000     $36,000     $44,000

For this second student, elite School C is a tough choice in terms of admissions, and if the student matriculates, it will provide less aid than it will for a very high-need student.  On the other hand, less selective School B provides more merit aid for a student with this profile and drives down long-term indebtedness in comparison to the first student (a practice that is common and often criticized). 

School A in both cases is a for-profit, and not a very good one. Many of the bad actors in the for-profit world take very high-risk students, charge them a lot, and don't graduate enough of them. Those who earn a credential too often fail to land jobs in their field or, in the case of more generalized liberal arts fields, land in jobs they would not otherwise choose. They would look a lot like School A in the examples above. But a better for-profit player like Capella University or DeVry University would land closer to Southern New Hampshire University (the institution I lead) -- something like School B.

In contrast, an elite School C (think Princeton or Harvard) mostly takes very low-risk/high-resource students and charges them quite a bit (or funds them fully if they are among the small number of poor students they accept). Their graduates do quite well. My proposed system would reveal that School C has a high success rate overall, including for high-risk, low-resource students -- it just doesn't serve very many of them. The second student would be better served by looking at School C in the first example, a public college.

The key is to give an interested student the tools to accurately assess where they fall on the student profile analysis so they get the best match of schools when considering programmatic performance. Then they could, by program (and degree level), find the institutions that serve them best. Most importantly, the starting point for using the system is the student. A typical SNHU student would struggle at Harvard -- we are in fact the better institution for that student profile. Very high-risk or low-resource students might be better served at a community college than at SNHU, where our higher cost would be a much bigger burden and a heavier price to pay should they not graduate. Those same students might be better served at Harvard if they are academically prepared but very poor (Harvard would not likely burden them with much debt), though they'd see that Harvard has very few spots for them.

Such a system could make the student profile piece easy to use through a simple heuristic that identifies key data points (Name of your high school and city/town where it is located; your current GPA or average grade level; did one or both of your parents complete a college degree?). Ideally the system would pull family financial information from the last tax return. Rating high school quality might be a challenge, but I bet there are rankings or state ratings that could be employed.
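As a sketch only, the intake record for such a heuristic might look like this; the field names are hypothetical, and the tax-return pull is represented as just an optional field:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StudentProfile:
        """Hypothetical intake record for the profile heuristic."""
        high_school: str                        # name of the high school
        high_school_city: str                   # city/town where it is located
        gpa: float                              # current GPA or average grade level
        parent_completed_college: bool          # did one or both parents finish college?
        family_income: Optional[float] = None   # ideally from the last tax return

A working system would then attach a high school rating by matching high_school and high_school_city against state ratings or published rankings.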

What I've outlined above can be the base analysis, but I think it would be fairly easy to add "filters" to the data for other factors. For example, an institution might do a pretty good job of graduating most students, but a filter for minority students might reveal a much lower graduation rate at a given institution. We could have an interesting discussion about what those other filters might be (veterans? gender? first generation? age bracket?). This is an area that needs to be carefully thought out -- we don't want the unintended outcome to be students of a given "type" self-selecting out of institutions because they confuse group identity metrics with their own talents and drive. This is complicated territory.
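Here is a minimal sketch of how one filter might work, assuming simple per-student records (the field names and the 30-student reporting threshold are hypothetical; the threshold reflects the small-numbers caution raised below):

    from typing import Optional

    def filtered_grad_rate(students: list[dict], min_cohort: int = 30,
                           **filters) -> Optional[float]:
        """Graduation rate for students matching every filter, or None
        when the matching cohort is too small to report responsibly."""
        cohort = [s for s in students
                  if all(s.get(key) == value for key, value in filters.items())]
        if len(cohort) < min_cohort:
            return None  # too few students to generalize from
        return sum(1 for s in cohort if s["graduated"]) / len(cohort)

    # filtered_grad_rate(records) gives the overall rate for a program;
    # filtered_grad_rate(records, veteran=True) gives the veterans-only view.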

There is one other important variation to consider. As outlined, the system fails to capture or address students who go on to further degree study instead of entering the job market, so a community college or a liberal arts college that sends many students directly on to grad school would be unfairly hurt on employment and earnings if this population weren't separated out. Graduation rates and placement would be calculated separately for those seeking work after graduation and those declaring their intent to go on to the next degree level (four-year degrees for community college graduates, and master's or doctoral programs for four-year degree graduates). The latter would be kept out of the denominator for the job-seeking analysis, and thus community colleges and schools sending students on to graduate programs would be more accurately represented.
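That denominator adjustment is straightforward to express. A minimal sketch, again with hypothetical field names:

    def placement_rate(graduates: list[dict]) -> float:
        """Share of job-seeking graduates working in a related field.

        Graduates who declared intent to continue to the next degree
        level are excluded from the denominator, as described above."""
        job_seekers = [g for g in graduates if not g["continuing_study"]]
        if not job_seekers:
            return 0.0  # no job seekers in this cohort
        placed = sum(1 for g in job_seekers if g["employed_in_field"])
        return placed / len(job_seekers)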

I know I don't have this quite right yet, but I think it might work with some more refining. Some might argue that "percentage of graduates working in a related field" is just too hard to capture (especially for liberal arts programs), so maybe simply being employed, and at what level of earnings, would suffice in Version 1. Instead of the "average earnings one year after graduation," one could use a simple metric like "percentage above or below the national median wage," which is about $35,000. Some would ask the system to measure only one student profile: how programs (institutions) serve high-risk students. After all, low-risk students do pretty well, while high-risk students are often served very poorly and fail at high rates. That's where we waste enormous amounts of federal dollars.

Even in its broadly sketched form, this kind of rating system does a number of things:

  • It reframes the question of institutional performance to program performance (which matters a whole lot more to students) while still allowing regulators to roll up program performance into an aggregate institutional profile if they wish;
  • It squares program performance with student profile, recognizing that different institutions work better for different students -- a much more nuanced presentation of the challenge;
  • The idea of filters allows students to go deeper, perhaps discovering that while Program X has good outcomes overall, it does less well for minority students or veterans -- an important insight if you are a minority student or a veteran (though in many cases we'd have to be wary of generalizing from small numbers, itself another issue);
  • It avoids the oversimplification of a single score, which then becomes an oversimplified rating system that fails to take into account the variety of institutional types, missions, and student markets at work in higher education;
  • It allows the government to call out poor performers;
  • It allows institutions to have a more robust discussion about their programs, something they do poorly, by and large.

Some of what I propose would be difficult to execute and would require some hard thinking about how to get the data. For example, tracking job placement is devilishly hard, but databases like LinkedIn are making it much easier, and the University of Texas has unveiled a new system that achieves much of what I outline (yes, I am conceding the employment metric, despite my objections -- the government is likely to demand it, after all). Tax returns can also be accessed in useful ways. The College Scorecard captures some of what we would need. In a state like Massachusetts, a combination of MCAS scores, per-pupil spending, and percent going on to college could help rate high schools. My point is that most of the necessary data is available. Every rating system has its problems, and we need to choose which execution challenges we wish to sort out. I prefer problems of execution to problems of oversimplification.

At the same time, I know this idea doesn't address important aspects of higher education: our role in civic engagement, measuring critical thinking skills, finding a sense of calling, and much more. But those are not the big questions the administration is asking us to address.

For those questions, I wonder if a system like the one I've sketched above might give us a richer understanding -- a student-centered understanding -- of institutional effectiveness that works far better than some of what is being described today. It is a near-certainty that we will have a ratings system, so let's at least have one that focuses the question through the lens of students and that captures the complexity that is higher education today.

Paul J. LeBlanc is president of Southern New Hampshire University.

Brandman U. Gets Green Light for Direct Assessment

Brandman University this week announced that the U.S. Department of Education had approved its application to offer federal financial aid for an emerging form of competency-based education. The university is the fourth institution to get the nod from the department for "direct assessment" degrees, which are decoupled from the credit-hour standard. The feds have sent some mixed signals about this approach, most recently with a critical audit from the department's Office of Inspector General. But Brandman's successful application is more evidence that the Education Department largely backs direct assessment.

Federal government needs to revamp its oversight of higher education, says conservative think tank

New paper by conservative think tank argues that federal government should hold colleges more accountable not by expanding its regulatory reach but rather by using new metrics.

Ratings and scorecards: the wrong kind of higher ed accountability (essay)

The scorecards and rating systems for higher education institutions that have been floating around Washington would, if used for purposes beyond providing comparable consumer information, make the federal government an arbiter of quality and judge of institutional performance.

This change would undermine the comprehensive, careful scrutiny currently provided by regional accrediting agencies and replace it with cursory reviews.

Regional accreditors provide a peer-review process that pushes institutions to investigate the key challenges they face and to look beyond symptoms for root causes. That process forces all providers of postsecondary education to investigate closely every aspect of performance that is crucial to strengthening institutional excellence, improvement, and innovation. If you want to know how well a university is really performing, a graduation rate will only tell you so much.

But the peer-review process conducted by accrediting bodies provides a view into the vital systems of the institution: the quality of instruction, the availability and effectiveness of student support, how the institution is led and governed, its financial management, and how it uses data.

Moreover, as part of the peer-review process, accrediting bodies mobilize teams of expert volunteers to study governance and performance measures that encourage institutions to make significant changes. No government agency can replace this work, can provide the same level of careful review, or has the resources to mobilize such an expert group of volunteers. In fact, the federal government has long recognized its own limitations and, since 1952, has used accreditation by a federally recognized accrediting agency as a baseline for institutional eligibility for Title IV financial-aid programs.

Attacked at times by policy makers as an irrelevant anachronism and by institutions as a series of bureaucratic hoops through which they must jump, the regional accreditors' approach to quality control has instead become increasingly cost-effective, transparent, and data- and outcomes-oriented.

Higher education accreditors work collaboratively with institutions to develop mutually agreed-upon common standards for quality in programs, degrees, and majors. In fact, in the Southern region, accreditation has addressed public and policy maker interests in gauging what students gain from their academic experience by requiring, since the 1980s, the assessment of student learning outcomes in colleges. Accreditation agencies also have established effective approaches to ensure that students who attend institutions achieve desired outcomes for all academic programs, not just a particular major.

While the federal government has the authority to take actions against institutions that have proven deficient, it has not used this authority regularly or consistently. A letter to Congress from the American Council on Education and 39 other organizations underscored the inability of the U.S. Department of Education to act with dispatch, noting that last year the Department announced “it would levy fines on institutions for alleged violations that occurred in 1995 -- nearly two decades prior.”

By contrast, consider that in the past decade, the Southern Association of Colleges and Schools Commission on Colleges stripped nine institutions of their accreditation status and applied hundreds of sanctions to all types of institutions (from online providers to flagship campuses) in its region alone. But when accreditors have acted boldly in recent times, they have been criticized by politicians for going too far, giving accreditors the sense that we're "damned if we do, damned if we don't."

The Problem With Simple Scores

Our concern about using rating systems and scorecards for accountability is based on several factors. Beyond tilting the system toward the lowest common denominator of quality, rating approaches can create new opportunities for institutions to game the system (as with U.S. News & World Report ratings and rankings) and introduce unintended consequences as we have seen occur in K-12 education.

Over the past decade, the focus on a few narrow measures for the nation’s public schools has not led to significant achievement gains or closing achievement gaps. Instead, it has narrowed the curriculum and spurred the current public backlash against overtesting. Sadly, the data generated from this effort have provided little actionable information to help schools and states improve, but have actually masked -- not illuminated -- the root causes of problems within K-12 institutions.

Accreditors recognize that the complex nature of higher education requires that neither accreditors nor the government dictate how individual institutions meet desired outcomes. No single bright-line measure of accountability is appropriate for the vast diversity of institutions in the field, each with its own unique mission. The fact that students often enter and leave the system, and increasingly earn credits from multiple institutions, further complicates measures of accountability.

Moreover, setting minimal standards will not push institutions that think they are high performing to get better. All institutions – even those considered “elite” – need to work continually to achieve better outcomes and should have a role in identifying key outcomes and strategies for improvement that meet their specific challenges.

Accreditors also have demonstrated they are capable of addressing new challenges without strong government action. With the explosion of online providers, accreditors found a solution to address the challenges of quality control for these programs. Accrediting groups partnered with state agencies, institutions, national higher education organizations, and other stakeholders to form the State Authorization Reciprocity Agreements, which use existing regional higher education compacts to allow for participating states and institutions to operate under common, nationwide standards and procedures for regulating postsecondary distance education. This approach provides a more uniform and less costly regulatory environment for institutions, more focused oversight responsibilities for states, and better resolution of complaints without heavy-handed federal involvement.

Along with taking strong stands to sanction higher education institutions that do not meet high standards, regional accreditors are better equipped than any centralized governmental body at the state or national level to respond to the changing ecology of higher education and the explosion of online providers.

We argue for serious -- not checklist -- approaches to accountability that support improving institutional performance over time and hold institutions of all stripes to a broad array of criteria that make them better, not simply more compliant.

Belle S. Wheelan is president of the Southern Association of Colleges and Schools Commission on Colleges, the regional accrediting body for 11 states and Latin America. Mark A. Elgart is founding president and chief executive officer for AdvancED, the world’s largest accrediting body and parent organization for three regional K-12 accreditors.

Researchers discuss the relationship between higher education and employment

New data on the labor-market returns of short-term certificates show both the value and pitfalls of using earnings data as a yardstick in higher education.

University Innovation Alliance kicks off with big completion goals

A new group of 11 public research universities says it can set aside competition and prestige-chasing to work together to graduate more low-income students.

Lead generation with more information and fewer leads

A new shade on lead generation includes assessments, online courses and mentors to help ensure that students can succeed once they enroll.

NOVA's Robert Templin Will Retire and Work with Aspen

Robert G. Templin Jr., the longtime president of Northern Virginia Community College and one of the nation's most prominent two-year chiefs, has announced that he will retire from the college in February 2015. After stepping down, Templin will work part-time as a senior fellow at the Aspen Institute's College Excellence Program. In recent years Aspen has studied the performance of community colleges and awarded a $1 million prize for excellence every two years.
