Assessment and Accountability

Performance-based funding provokes concern among college administrators

Public colleges may be using grade inflation or tightening admissions standards to comply with performance-based funding, survey finds.

Essay argues that colleges can measure the career success of graduates

With rising tuition, families are increasingly concerned about what students can expect after graduation in terms of debt, employment, and earnings. They want to know: What is the value of a college degree? Is it worth the cost? Are graduates getting good-paying jobs?

At the same time, state and federal policymakers are sounding the call to institutions for increased accountability and transparency. Are students graduating? Are they accruing unmanageable debt? Are graduates prepared to enter the workforce?

Colleges and universities struggle to answer some of these questions. Responses rely primarily on anecdotal evidence or on under-researched, even unresearched, assumptions because so little data are available. Student data are the sole domain of colleges and universities, while workforce data are confined to various state and federal agencies. With no systematic or easy way to pull the various data sources together, colleges and universities have limited ability to provide the kind of return-on-investment analysis that would satisfy the debate.

But access to unit-record data — connecting the student records to the workforce records — would allow institutions to discover those answers. What’s more, it would give colleges and universities the opportunity to conduct powerful research and analysis on post-graduation outcomes that could shape policies and program development.

For example, education provides a foundation of skills and abilities that students bring into the workforce upon graduation. But how long does that foundation continue to have a significant impact on workforce outcomes? Research based on unit-record data could also reveal the strongest predictors of graduates’ earnings: educational experience, the local and national economy, supply and demand within the field, or some combination of these.

President Obama and others have proposed that colleges share such information, and many colleges have objected. They have suggested that the information can’t be obtained; that the data would be flawed because graduates of some programs at a college might see different career results than others at the same institution; that such a system would jeopardize student privacy; and that it would penalize colleges with programs whose graduates might not earn the most one year out but do earn well five or more years out.

At the University of Texas System, we have found a solution – at least within our own state – and, for the first time, are able to provide valuable information to our students and their families. We are doing so without assuming that data one year out are better or worse than data over a longer time frame – only that students and families should have a range of statistics to examine. We formed a partnership with the Texas Workforce Commission that gives us access to the quarterly earnings records of our students who have graduated since 2001-02 and are found working in Texas. While most of our alumni do work in Texas, a similar partnership with the Social Security Administration might make this approach possible for institutions whose alumni scatter more than ours do.

With that data, we created seekUT, an online, interactive tool — accessible via desktop, tablet, and mobile device — that provides data on the salaries and debt of UT System alumni who earned undergraduate, graduate, and professional degrees at one, five, and 10 years after graduation. The data are broken down by specific degrees and majors, since we know that an education major and an engineering major from the same institution – both valuable to society – are unlikely to earn the same amount. seekUT also introduces the reality of student loan debt to prospective and graduate students. In addition to average total student loan debt, it shows the estimated monthly loan payment alongside monthly income, as well as the debt-to-income ratio. And because this is shown over time, students get a longer view of how that debt load might play out over the course of their career as their earnings increase.

When we present data in this way, we provide students information to make important decisions about how much debt they can realistically afford to acquire based on what their potential earnings might be, not just a year after graduation, but 5 and 10 years down the road. Students and families can use seekUT to help inform decisions about their education and to plan for their financial future.
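To make the arithmetic behind those figures concrete, here is a minimal sketch of how an estimated monthly loan payment and a debt-to-income ratio could be computed. The 10-year term, 5 percent interest rate, standard amortization formula, and sample numbers are assumptions chosen for illustration; they are not necessarily seekUT’s actual methodology.

```python
# Illustrative sketch only; the loan term, interest rate, and sample figures
# are assumptions, not seekUT's methodology.

def monthly_payment(principal, annual_rate=0.05, years=10):
    """Standard amortized monthly payment on a fixed-rate loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

def debt_to_income(average_debt, annual_earnings):
    """Estimated monthly payment and its share of gross monthly income."""
    payment = monthly_payment(average_debt)
    return payment, payment / (annual_earnings / 12)

# Hypothetical graduate: $29,000 average debt, $45,000 earnings one year out.
payment, ratio = debt_to_income(29_000, 45_000)
print(f"Estimated monthly payment: ${payment:,.0f}")
print(f"Debt-to-income ratio: {ratio:.1%}")
```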

Admittedly, it is an incomplete picture. Many of our graduates, especially those with advanced degrees, leave the state. If they enroll elsewhere to continue their education, we can discover that through the National Student Clearinghouse StudentTracker. But for those who are not enrolled, there is no information. In the absence of a federal database, we are exploring other options and partnerships to help fill in these holes, but, for now, there are gaps.

With unit-record data we can inform current and prospective students about the past performance of graduates in their own major; this is one of the most valuable products of data at this level. Access to this information in a user-friendly format can directly benefit students by offering real insights into outcomes, not just alumni stories or survey-based information. The intent is not to change anyone’s major or sway them from their passion, but instead to help students make the decisions now that will allow them to pursue that passion after graduation.

There are many areas we still need to explore, both to answer questions about how our universities are performing and to provide much-needed information to current and prospective students. The only way to provide this important information definitively is through unit-record data.

We recognize that there are legitimate concerns about protecting student privacy and data, especially given the nearly constant headlines regarding data breaches. And the more expansive the data pool, the larger and more appealing the target: a federal student database would be attractive to hackers. But these risks can be mitigated — and are, in fact, on a daily basis by university institutional research offices, as well as by state and federal agencies. We safeguard the IDs by locking down access to the original file and by not using any identified data for analysis. And when we display information, we do not include any data for cell sizes of less than five. This has always been true for the student data we hold. Given these safeguards, I believe that the need for the data and the benefits of having access to them far outweigh the risks.
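As a concrete illustration of that small-cell rule, the minimal sketch below suppresses any statistic computed from fewer than five graduates before it is displayed. The grouping keys, sample records, and Python approach are assumptions made purely for illustration; this is not the UT System’s actual code.

```python
from collections import defaultdict
from statistics import median

# Illustrative records only: major, degree level, and first-year earnings.
records = [
    {"major": "Nursing", "degree": "BS", "earnings": 62_000},
    {"major": "Nursing", "degree": "BS", "earnings": 58_000},
    {"major": "Philosophy", "degree": "BA", "earnings": 41_000},
]

# Group earnings by (major, degree).
groups = defaultdict(list)
for r in records:
    groups[(r["major"], r["degree"])].append(r["earnings"])

# Display rule: never show a statistic based on fewer than five graduates.
for (major, degree), earnings in sorted(groups.items()):
    if len(earnings) < 5:
        shown = "suppressed (n < 5)"
    else:
        shown = f"${median(earnings):,.0f}"
    print(f"{major} ({degree}): n = {len(earnings)}, median earnings {shown}")
```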

seekUT is an example of just some of what higher education institutions can do with access to workforce data. For all its importance, though, seekUT is a tool that gives users access to information in order to inform individual decisions. It is from the deeper research and analysis of these data that we may see major changes and shifts in the policies that affect all students. That is the true power of these data.

For example, while we are gleaning a great deal of helpful information by studying our alumni, these same data give us insights into our current students who are working while enrolled. The UT System is currently examining the impact of income, type of work, and place of work (on or off campus) on student persistence and graduation. The results of this study could shape work-study policies across our institutions.

Higher education institutions can leverage data from outside sources to better understand student outcomes. But without a federal unit-record database, individual institutions will continue to be forced to forge their own partnerships, yielding piecemeal efforts and incomplete stories. We cannot wait; we must forge ahead. Institutions of higher education have a responsibility to students, parents, and the public.


 

Stephanie Bond Huie is vice chancellor of the Office of Strategic Initiatives at the University of Texas System.


Administrators should work with the faculty to assess learning the right way (essay)

“Why do we have such trouble telling faculty what they are going to do?” said the self-identified administrator, hastening to add that he “still thinks of himself as part of the faculty.”

“They are our employees, after all. They should be doing what we tell them to do.”

Across a vast number of models for assessment, strategic planning, and student services on display at last month’s IUPUI Assessment Institute, it was disturbingly clear that assessment professionals have identified “The Faculty” (beyond the lip service to #notallfaculty, always as a collective body) as the chief obstacle to successful implementation of campuswide assessment of student learning. Faculty are recalcitrant. They are resistant to change for the sake of being resistant to change. They don’t care about student learning, only about protecting their jobs. They don’t understand the importance of assessment. They need to be guided toward the Gospel with incentives and, if those fail, consequences.

Certainly, one can find faculty members of whom these are true; every organization has those people who do just enough to keep from getting fired. But let me, at risk of offending the choir to whom keynote speaker Ralph Wolff preached, suggest that the faculty-as-enemy trope may well be a problem of the assessment field’s own making. There is a blindness to the organizational and substantive implications of assessment, hidden behind the belief that assessment is nothing more than collecting, analyzing, and acting rationally on information about student learning and faculty effectiveness.

Assessment is not neutral. In thinking of assessment as an effort to determine whether students are learning and faculty are being effective, it is imperative that we unpack the implicit subject doing the determining. That should make clear that assessment is first and foremost a management rather than a pedagogical practice. Assessment not reported to the administration meets the requirements of neither campus assessment processes nor accreditation standards, and is thus indistinguishable from non-assessment. As a fundamental principle of governance in higher education, assessment is designed to promote what social scientist James Scott has called “legibility”: the ability of outsiders to understand and compare conditions across very different areas in order to facilitate those outsiders’ capacity to manage.

The Northwest Commission on Colleges and Universities, for example, requires schools to practice “ongoing systematic collection and analysis of meaningful, assessable, and verifiable data” to demonstrate mission fulfillment. That is not simply demanding that schools make informed judgments. Data must be assessable and verifiable so that evaluators can examine the extent to which programs revise their practices using the assessment data. They can’t do that unless the data make sense to them. Administrators make the same demand on their departments through campus assessment processes. In the process a hierarchical, instrumentally rational, and externally oriented management model replaces one that has traditionally been decentralized, value rational, and peer-driven.

That’s a big shift in power. There are good (and bad) arguments to be made in favor of (and opposed to) it, and ways of managing assessment that shift that power more or less than others. Assessment professionals are naïve, however, to think that those shifts don’t happen, and fools to think that the people on the losing end of them will not notice or simply give in without objection.

At the same time, assessment also imposes substantive demands on programs through its demand that they “close the loop” and adapt their curriculums to those legible results regardless of how meaningful those results are to the programs themselves. An externally valid standard might demand significant changes to the curriculum that move the program away from its vision.

In my former department we used the ETS Major Field Test as such a standard. But while the MFT tests knowledge of political science as a whole, in political science competence is specific to subfields. Even at the undergraduate level, students specialize sufficiently to be, for example, fully conversant in international relations and ignorant of political thought. The overall MFT score does not distinguish between competent specialization and broad mediocrity. One solution was to expect that students demonstrate excellence in at least one subfield of the discipline. The curriculum would then have to require that students take nearly every course we offered in a subfield, and staffing realities in our program would inevitably make that field American politics.

Because the MFT was legible to a retired Air Force officer (the institutional effectiveness director), an English professor (the dean), a chemist (the provost), and a political appointee with no previous experience in higher education (the president), it stayed in place as a benchmark of progress, but it offered little to guide program management. The main tool we settled on was an assessment of the research paper produced in a required junior-level research methods course (one that nearly all students put off until their final semester). That assessment gave a common basis for evaluation (knowledge of quantitative research methods) and allowed faculty to evaluate substantive knowledge in a very narrow range of content through the literature review. But it also shifted emphasis toward quantitative work in the discipline and further marginalized political thought, since that subfield isn’t based on empirical methods. We considered adding a political thought assignment, but that would have required students to prioritize it over the empirical fields (no other substantive field having a required assignment) rather than putting it on an equal footing.

Evaluating a program with “meaningful, assessable, and verifiable data” can’t be done without changing the program. To “close the loop” based on MFT results required a substantive change in how we saw our mission: from producing well-rounded students to producing specialists in American politics. To do so with the methods paper required changes in course syllabuses and advising that brought more emphasis on empirical fields, more quantitative rather than qualitative work within those fields, more emphasis on methods supporting conclusions rather than on the substance of the conclusions, and less coursework in political thought. We had a choice among these options, but we could not choose an option that required no change at all; change came in response to the standard itself, not just the results.

This is the reality facing those, like the administrator I quoted at the beginning of this essay, who believe that they can tell faculty what to do with assessment without telling them what to do with the curriculum. If assessment requires that a program make changes based on the results of its assessment processes, then the selection of processes defines a domain of curricular changes that can result. Some of these will be unavoidable: a multiple-choice test will require faculty to favor knowledge transmission over synthetic thinking. Others will be completely proscribed: if employment in the subfield of specialization is an assessment measure, the curriculum in political thought will never be reinforced, because people don’t work in political thought. But no process can be neutral among all possible curriculums.

Again, that may or may not be a bad thing. Sometimes a curriculum just doesn’t work, and assessment can be a way to identify it and replace it with something that does. But the substantive influence of assessment is most certainly a thing one way or the other, and that thing means that assessment professionals can’t say that assessment doesn’t change what faculty teach and how they teach it. When they tell faculty members that, they appear at best clueless and at worst disingenuous. With most faculty members having oversensitive BS detectors to begin with, especially when dealing with administrators, piling higher and deeper doesn’t exactly win friends and influence people.

The blindness that comes from belief in organizationally and curricularly neutral assessment is, I think, at the heart of the condescending attitudes toward faculty at the Assessment Institute. In the day two plenary session, one audience member asked, essentially, “What do we do about them?” as if there were no faculty members in the room. The faculty member next to me was quick to tune out as the panel took up the discussion with the usual platitudes about buy-in and caring about learning.

Throughout the conference there was plenty of discussion of why faculty members don’t “get it.” Of how to get them to buy into assessment on the institutional effectiveness office’s terms. Of providing effective incentives — carrots, yes, but plenty of sticks — to get them to cooperate. Of how to explain the importance of accreditation to them, as if they are unaware of even the basics. And of faculty paranoia that assessment was a means for the administration to come for their jobs.

What there wasn’t: discussion of what the faculty’s concerns with assessment actually are. Of how assessment processes do in fact influence what happens in classrooms. Of how assessment feeds program review, thus influencing administrative decisions about program closure and the allocation of tenure lines (especially of the conversion of tenure lines to adjunct positions when vacancies occur). Of the possibility that assessment might have unintended consequences that hinder student learning. These are very real concerns for faculty members, and should be for assessment professionals as well.

Nor was there discussion of what assessment professionals can do to work with faculty in a relationship that doesn’t subordinate faculty. Of how assessment professionals can build genuinely collaborative rather than merely cooptive relationships with faculty members. Of, more than anything, the virtues of listening before telling. When it comes to these things, it is the assessment field that doesn’t “get it.”

Let me assure you, as a former faculty member who talks about these issues with current ones: faculty members do care about whether students learn. In fact, many lose sleep over it. Faculty members informally assess their teaching techniques every time they leave a classroom and adjust what they do accordingly. In fact, that usually happens before they walk back into that classroom, not at the end of a two-year assessment cycle. Faculty members most certainly feel disrespected by suggestions they only care for themselves. In fact, it is downright offensive to suggest that they are selfish when in order to make learning happen they frequently make less than their graduates do and live in the places their graduates talk of escaping.

Assessment professionals need to approach faculty members as equal partners rather than as counterrevolutionaries in need of reeducation. That’s common courtesy, to be sure. But it is also essential if assessment is to actually improve student learning.

You do care about student learning, don’t you?

Jeffrey Alan Johnson is assistant director of institutional effectiveness and planning at Utah Valley University.


U. of Texas System to Try Competency-Based Education

The University of Texas System on Monday announced a plan to create a broad, competency-based education program in the medical sciences. The system-wide curriculum will be aimed at learners from high school through post-graduate studies, according to a news release. The forthcoming competency-based credentials will be personalized, adaptive and industry-aligned, the system said. The University of Texas Rio Grande Valley, which opens next year, will offer the curriculum's undergraduate degree. 

Akron Tops in College Completion Contest

Akron beat out 56 other cities in a contest to increase college-degree production during a four-year period that concluded in 2013. The city on Wednesday received the $1 million Talent Dividend Prize, which was sponsored by the nonprofit groups CEOs for Cities and Living Cities. The contest was based on proportional increases in degrees issued, with extra weight given to four-year and graduate degrees. Overall, the participating cities had a 7.6 percent increase; Akron topped the list with a 20.2 percent increase. An additional 69,000 associate degrees and 55,000 bachelor's or graduate degrees were awarded by colleges in the 57 urban areas.

Dramatic Testimony in Trial on College's Accreditor

Press accounts are describing dramatic testimony in the second day of the trial of the Accrediting Commission for Community and Junior Colleges, which stands accused in a lawsuit in a California court of being unfair in its evaluations of City College of San Francisco.

Barbara Beno, president of the commission, made two admissions in testimony Tuesday that were seen by supporters of the college -- whose accreditation the commission voted to revoke -- as key evidence. First, she admitted that when the commission identified new problems at the college, which was at risk of losing accreditation, it did not give the college required time to respond, The San Francisco Chronicle reported. Beno made this admission only after a judge told her she hadn't been answering the question and needed to do so.

Second, she admitted that she asked the accrediting team to remove some positive language from the report, and that the team did so. The removed language said that the college “demonstrated a high level of dedication, passion and enthusiasm to address the issues, and provided evidence of compelling action to address previous findings.” Beno told the court that she asked that the passage be removed because she was concerned about a lack of "clarity" in the phrase "compelling action."

The commission has maintained that its findings on City College of San Francisco were appropriate.

 

 

Quality and 'Non-Institutional' Higher Education

The Council for Higher Education Accreditation (CHEA) and the Presidents' Forum this week released a policy report that explores the potential for an external quality review process for "non-institutional" providers in higher education. This emerging field includes companies and nonprofits that offer courses, modules or badges. Most of this sector is online, non-credit and low-cost.

The two groups last year formed a commission to look at options for quality assurance in the space. The commission's report describes three possibilities: a voluntary, cooperative effort by providers; a voluntary service offered by an existing third-party association; or a new external group created for this purpose.

"The commission calls upon the postsecondary education community to seize this moment as a critical time to consider development, adoption and extension of new approaches that address the need for institutional and organizational quality review," the report said.

U. of Michigan Gets Accreditor Approval for Competency-Based Degree

The University of Michigan's regional accreditor has signed off on a new competency-based degree that does not rely on the credit-hour standard, the university said last week. The Higher Learning Commission of the North Central Association of Colleges and Schools gave a green light to the proposed master of health professions education degree, which the university's medical school will offer. In its application to the regional accreditor, the university said the program "targets full-time practicing health professionals in the health professions of medicine, nursing, dentistry, pharmacy and social work."

A college rating system that might help students and not do harm (essay)

Many of my fellow college presidents remain worried about the Obama Administration’s proposed (and still being developed) rating system for higher education. While Education Department officials have been responsive and thoughtful about our concerns, many among us fundamentally do not trust government to get this right.

Or anyone, for that matter.  After all, we already have lots of rating systems and they mostly seem flawed -- some, like U.S. News and World Report, extremely so. Institutions game the system in various ways. Rarely do rating systems capture the complexity of the industry with its rich mix of institutions, missions, and student markets served.  Almost always, they are deeply reductionist. 

On the other hand, higher education mostly resists transparency, good data sharing, and accountability. I may be with the minority of my peers that actually support some kind of rating system, but I am with the majority in my worry about what will get measured and how. Take the proposed gainful employment regulations, for example.  My approach to accountability dictates that you hold me accountable for what I can control. I can’t control the labor market (can I hold government accountable for that piece?), the willingness of a graduate to move for a job, or the ridiculously low wages our society pays teachers and social workers.  I can control the level of preparedness my students have as they enter their chosen field. So hold me accountable for the latter, but not the former.

I’ve always thought that a rating system that does not adjust for the student being served is an inherently flawed system. It often fails to capture the real value-add of an education. For example, if we could measure how far a student has moved intellectually, developmentally, and professionally, I might argue that Harvard and Yale would rank near the bottom of such a rating system, while a Rio Salado College might rank near the top, at least in terms of how far they move students educationally.  After all, if you take the top 1 percent of high school graduates, how much actual educational value have you added (social value, value-added network, status and so on are other matters, of course)?  Or perhaps more kindly, the educational success of these students has a lot more to do with them and the other high performers around them than with Harvard – they would thrive anywhere they found themselves.

Yet it seems certain that we will have a rating system.  So I’ve played with the rough outlines of a rating system that is student-centered and program-based, and that places institutions (by program) on a matrix that tells us more than a simple score. Any rating system will answer some questions and ignore others. The questions I want my system to answer are these:

  • How does an institution, at the program level (which is more important than institution), serve various student profiles (because students are not at all alike)?
  • Do students who also fit my profile graduate in large numbers, find jobs, carry a lot of debt, get paid well enough?
  • How does the institution perform overall (as opposed to at the program level) on those questions, which are the government's concern?

I’d design the system on these two axes:

Student matching. This depends on creating a student profile, a combination of academic and financial factors and perhaps other items we think might be important (Gender? Race? Age? Veteran?). 

High/low risk and resource = Level of aid needed + HS GPA + HS rating

Program success. The success of the program in terms of placing students in a related field one year after graduation (students do all sorts of things immediately after graduation -- we need to filter out that noise), their earnings one year after graduation, the percentage of students who graduate, and the cost of the program.

Success rate by program = grad rate + % of grads working in the related field or field of choice + avg earnings + net cost + average debt

Because the system uses the student as the lens of interpretation, a student with a high risk profile (let’s say a 2.8 HS GPA from a lower-ranked high school, with a family income of less than $40,000) looking at three schools offering Secondary Education programs might see this kind of comparison:

                     School A    School B    School C
% of Like Students   95%         40%         35%
Graduation Rate      15%         65%         45%
% Working in Field   28%         85%         75%
Net Cost             $22,000     $16,500     $10,500
Avg Debt             $56,000     $29,000     $14,000
Earnings             $29,000     $36,000     $34,000

In the above example, School A looks like certain poorly performing for-profits, while School C might be the profile of a public institution. School B, a private institution, leaves its graduates with more debt than public School C does, but it graduates more of its students and places them more effectively. So the student has some tradeoffs to consider.
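To show how the components in the “success rate by program” formula above could be rolled into a single comparable number, here is a minimal sketch using the figures from the first table. The min-max normalization, the decision to treat lower net cost and lower debt as better, and the equal weighting are all assumptions added for illustration; the essay only names the ingredients.

```python
# Illustrative sketch: normalization, direction of cost/debt, and equal
# weights are assumptions; the essay lists only the components.

def normalize(value, lo, hi, higher_is_better=True):
    """Scale a component to 0-1 across the programs being compared."""
    if hi == lo:
        return 0.5
    score = (value - lo) / (hi - lo)
    return score if higher_is_better else 1 - score

def program_success(programs):
    """programs: name -> dict with grad_rate, in_field, earnings, net_cost, debt."""
    keys = ["grad_rate", "in_field", "earnings", "net_cost", "debt"]
    higher_better = {"grad_rate": True, "in_field": True, "earnings": True,
                     "net_cost": False, "debt": False}
    bounds = {k: (min(p[k] for p in programs.values()),
                  max(p[k] for p in programs.values())) for k in keys}
    return {name: round(sum(normalize(p[k], *bounds[k], higher_better[k])
                            for k in keys) / len(keys), 2)
            for name, p in programs.items()}

# The three hypothetical schools from the first comparison table.
schools = {
    "School A": dict(grad_rate=0.15, in_field=0.28, earnings=29_000, net_cost=22_000, debt=56_000),
    "School B": dict(grad_rate=0.65, in_field=0.85, earnings=36_000, net_cost=16_500, debt=29_000),
    "School C": dict(grad_rate=0.45, in_field=0.75, earnings=34_000, net_cost=10_500, debt=14_000),
}
print(program_success(schools))   # Schools B and C score well; School A lags far behind
```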

In contrast, a low-risk student with ample resources (say, a 3.6 GPA from a good high school and a family income of $80,000) looking at the same program might see a different report for Schools A and B; in this case, an elite institution has been substituted for School C.

                     School A    School B    School C
% of Like Students   3%          30%         12%
Graduation Rate      75%         85%         95%
% Working in Field   40%         85%         95%
Net Cost             $22,000     $16,500     $24,000
Avg Debt             $28,000     $23,000     $45,000
Earnings             $29,000     $36,000     $44,000

For this second student, elite School C is a tough choice in terms of admissions, and if the student matriculates, it will provide less aid than it will for a very high-need student.  On the other hand, less selective School B provides more merit aid for a student with this profile and drives down long-term indebtedness in comparison to the first student (a practice that is common and often criticized). 

School A in both cases is a for-profit, and not a very good one. Many of the bad actors in the for-profit world take very high-risk students, charge them a lot, and don’t graduate enough of them. Those who do earn a credential too often fail to land jobs in their field or, in the case of more generalized liberal arts fields, end up in jobs they would not otherwise choose. Such institutions would look a lot like School A in the above examples. But a better for-profit player like Capella University or DeVry University would land closer to Southern New Hampshire University (the institution I lead), something like School B.

In contrast, an elite School C (think Princeton or Harvard) mostly takes very low-risk, high-resource students and charges them quite a bit (or funds them fully if they are among the small number of poor students it accepts). Its graduates do quite well. My proposed system would reveal that School C has a high overall success rate, including for high-risk, low-resource students; it just doesn’t serve very many of them. The second student would be better served looking at School C in the first example, a public college.

The key is to give an interested student the tools to accurately assess where they fall on the student profile analysis so they get the best match of schools when considering programmatic performance. Then they could, by program (and degree level), find the institutions that serve them best. Most importantly, the starting point for using the system is the student. A typical SNHU student would struggle at Harvard -- we are in fact the better institution for that student profile. Very high-risk or low-resource students might be better served at a community college than at SNHU, where our higher cost might be a much bigger burden and a heavier price to pay should they not graduate. Those same students might be better served at Harvard if they are academically prepared but very poor (in which case Harvard would not likely burden them with a lot of debt), though they’d see that Harvard has very few spots for them.

Such a system could make the student profile piece easy to use through a simple heuristic that identifies key data points (Name of your high school and city/town where it is located; your current GPA or average grade level; did one or both of your parents complete a college degree?). Ideally the system would pull family financial information from the last tax return. Rating high school quality might be a challenge, but I bet there are rankings or state ratings that could be employed.
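A minimal sketch of such a heuristic appears below. The income and GPA thresholds, the 1-to-5 high school rating scale, and the two-bucket output are invented for illustration; the essay specifies only the ingredients (aid need, high school GPA, and high school rating).

```python
# Illustrative only: thresholds and the 1-5 high school rating scale are
# assumptions; the essay names only the ingredients of the profile.

def risk_resource_profile(family_income, hs_gpa, hs_rating):
    """Return a coarse risk-and-resource label for a prospective student."""
    risk_points = 0
    if family_income < 40_000:   # proxy for a high level of aid needed
        risk_points += 1
    if hs_gpa < 3.0:
        risk_points += 1
    if hs_rating <= 2:           # assumed 1 (weak) to 5 (strong) scale
        risk_points += 1
    return "high risk / low resource" if risk_points >= 2 else "low risk / high resource"

# The two hypothetical students from the examples above.
print(risk_resource_profile(38_000, 2.8, 2))   # first student
print(risk_resource_profile(80_000, 3.6, 4))   # second student
```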

What I’ve outlined above can be the base analysis, but I think it would be fairly easy to add “filters” to the data for other factors. For example, an institution might do a pretty good job of graduating most students, but a filter for minority students might reveal a much lower graduation rate at a given institution. We could have an interesting discussion about what those other filters might be (veterans? gender? first generation? age bracket?). This is an area that needs to be carefully thought out -- we don’t want the unintended outcome to be students of a given “type” self-selecting out of institutions because they confuse group identity metrics with their own talents and drive. This is complicated territory.

There is one other important variation to consider. The system fails to capture or address those students who go on to further degree study instead of entering the job market, so a community college or a liberal arts college that sends many graduates directly on to graduate school would be unfairly hurt on employment and earnings if this population weren’t separated out. Metrics like graduation rates would be calculated separately for those seeking work after graduation and those declaring their intent to go on to the next degree level (four-year degrees for community college graduates, and master’s or doctoral programs for four-year degree graduates). The latter would be kept out of the denominator for the job-seeking analysis, and thus community colleges and schools sending students on to graduate programs would be more accurately represented.
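A minimal sketch of that denominator split follows: employment and earnings metrics are computed only over job-seeking graduates, while those continuing to a further degree are counted separately. The field names and sample data are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Graduate:
    continuing_education: bool   # declared intent or found via StudentTracker
    employed_in_field: bool
    earnings: float

def outcome_metrics(grads):
    """Keep continuing students out of the job-seeking denominator."""
    job_seekers = [g for g in grads if not g.continuing_education]
    continuing = [g for g in grads if g.continuing_education]
    employed = [g for g in job_seekers if g.employed_in_field]
    return {
        "pct_continuing_education": len(continuing) / len(grads),
        "pct_working_in_field": len(employed) / len(job_seekers) if job_seekers else None,
        "avg_earnings": sum(g.earnings for g in employed) / len(employed) if employed else None,
    }

grads = [
    Graduate(continuing_education=False, employed_in_field=True, earnings=41_000),
    Graduate(continuing_education=False, employed_in_field=False, earnings=0),
    Graduate(continuing_education=True, employed_in_field=False, earnings=0),  # off to a master's program
]
print(outcome_metrics(grads))
```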

I know I don’t have this quite right yet, but I think it might work with some more refining. Some might argue that “percentage of graduates working in a related field” is just too hard to capture (especially for liberal arts programs), so maybe simply whether graduates are employed, and at what level of earnings, would suffice in Version 1. Instead of “average earnings one year after graduation,” one could use a simple metric like “percentage above or below the national median wage,” which is about $35,000. Some would ask the system to measure only one student profile: how programs (and institutions) serve high-risk students. After all, low-risk students do pretty well, and high-risk students are often served very poorly and fail at high rates. That is where we waste enormous amounts of federal dollars.

Even in its broadly sketched form, this kind of rating system does a number of things:

  • It reframes the question of institutional performance to program performance (which matters a whole lot more to students) while still allowing regulators to roll up program performance into an aggregate institutional profile if they wish;
  • It squares program performance with student profile, recognizing that different institutions work better for different students -- a much more nuanced presentation of the challenge;
  • The idea of filters allows students to go deeper, perhaps discovering that while Program X has good outcomes overall, it does less well for minority students or veterans, which is an important insight if you are a minority student or a veteran (though we’d have to be wary of generalizing from small numbers in many cases, itself another issue);
  • It avoids the oversimplification of a single score, which then becomes an oversimplified rating system that fails to take into account the variety of institutional types, missions, and student markets at work in higher education;
  • It allows the government to call out poor performers;
  • It allows institutions to have a more robust discussion about their programs, something they do poorly, by and large.

Some of what I propose would be difficult to execute and would require some hard thinking about how to get the data. For example, tracking job placement is devilishly hard, but databases like LinkedIn are making it much easier, and the University of Texas has unveiled a new system that achieves much of what I outline (yes, I am conceding the employment metric, despite my objections – the government is likely to demand it, after all). Tax returns can also be accessed in useful ways. The College Scorecard captures some of what we would need. In a state like Massachusetts, a combination of MCAS scores, per-pupil spending, and the percentage going on to college could help rate high schools. My point is that most of the necessary data is available. Every rating system has its problems, and we need to choose which execution challenges we wish to sort out. I prefer problems of execution to problems of oversimplification.

That said, I know this idea doesn’t address important aspects of higher education: our role in civic engagement, the measurement of critical thinking skills, helping students find a sense of calling, and much more. But those are not the big questions the administration is asking us to address.

For those questions, I wonder if a system like the one I’ve sketched above might give us a richer understanding -- a student-centered understanding – of institutional effectiveness that works far better than some of what is being described today. It is a near-certainty that we will have a ratings system, so let’s at least have one that focuses the question through the lens of students and that captures the complexity that is higher education today.

Paul J. LeBlanc is president of Southern New Hampshire University.

Brandman U. Gets Green Light for Direct Assessment

Brandman University this week announced that the U.S. Department of Education had approved its application to offer federal financial aid for an emerging form of competency-based education. The university is the fourth institution to get the nod from the department for "direct assessment" degrees, which are decoupled from the credit-hour standard. The feds have sent some mixed signals about this approach, most recently with a critical audit from the department's Office of Inspector General. But Brandman's successful application is more evidence that the Education Department largely backs direct assessment.
