Assessment

Obama proposal on tuition-free community colleges elevates sound bites over sound policy (essay)

President Obama has jumped on the bandwagon, which started in Tennessee, of making community college tuition-free. This proposal is his most recent effort to increase the prominence of the federal government in higher education. While giving higher education more federal visibility may be a good thing, making community colleges tuition-free is also the latest in a series of proposals in which the administration seems to have decided that sound bites trump sound policy.

The cycle began in the administration’s early days when it declared its primary goal in higher education was to “re-establish” the U.S. as having the world’s highest attainment rate -- the proportion of working adults with a postsecondary degree of some sort. 

Never mind that the U.S. has not had the highest rate in the world for at least several decades and that achieving such a distinction now is well nigh impossible given where some other countries are. And also ignore the fact that some countries that have overtaken us, such as South Korea and Japan, have done so in large part because they are educating an increasing share of a declining number of their young people -- a demographic condition we should want to avoid at all costs.

In this effort to be Number One in higher education, the Obama administration is continuing a trend in K-12 education, begun in the Clinton and George W. Bush administrations, in which we as a nation set totally unrealistic goals to be achieved after the incumbent administration has left office. It is not clear why we would want to expand this practice into higher education, but that’s what we are doing.

The administration also in its first year pushed for a remarkable expansion of Pell Grants as part of the economic stimulus package of 2009. It was certainly good to augment Pell Grants in the midst of a severe recession when so many students were having a tough time paying their college bills. But rather than doing it on a temporary basis by increasing awards for current recipients, the administration pushed for, and the Congress agreed to, a permanent legislative change that increased the number of recipients by 50 percent and doubled long-term funding.

This is the equivalent of changing tax rates in the middle of a recession rather than providing a rebate. It certainly provided more aid for many more students -- nearly one in two undergraduates now receives a Pell Grant. But the expansion in eligibility means less aid is available for the low-income students who most need it. And few seem worried that Pell Grant increases may have led many institutions that package aid to reduce the grants they provide to Pell recipients from their own funds, as reflected in the fact that institutional aid increasingly goes to middle-income students.

The Obama administration’s recent effort to develop a rating system for postsecondary institutions is another example of politics triumphing over sound policy. The rhetoric appeals to the noble notion of making institutions more productive and more affordable, but the metrics the administration has proposed using are unlikely to produce the desired results and may well produce unintended bad ones.

Much more troublesome, the administration’s ratings proposal would penalize students based on where they decide to enroll, as those going to colleges that don’t perform well would get less aid. This is illogical as well as counterproductive. Thankfully, there seems little chance that this proposal will be adopted, but one is left to wonder why it was suggested and pushed when it would do little to address the many real challenges facing American higher education, such as chronic inequity and unaffordability.

Which brings me to the most recent proposal by President Obama -- to make community colleges tuition-free. At this stage, we know relatively little about what is being proposed other than that it is modeled on what was done in Tennessee, where state lottery funds (not a very good federal model) were used to ensure that students with good grades would not have to pay tuition to go to community college. But since there are so few details as to how this tuition-free package would be structured, there are more questions regarding the President’s proposal than there are answers. These include:

Who will benefit and who will pay? If the administration were to follow the Tennessee plan, current Pell Grant recipients would largely not benefit, as their Pell Grant awards fully cover the cost of tuition at most community colleges throughout the country. So beneficiaries would disproportionately be middle-class students who can mostly afford the average $3,300 in annual community college tuition, just as has been the case with the Tennessee plan.

The administration to its credit seems to recognize this potential lack of progressivity, and its spokesmen have declared (to Inside Higher Ed) that the new benefits will be on top of what Pell Grant recipients currently receive. This could be an avenue for a big step forward in federal policy were we to recognize that Pell Grants are largely for living expenses for students whose families cannot afford to pay those expenses, but it means that the federal costs of implementing such a plan will be substantial, probably far more than the $60 billion in additional costs over 10 years now being suggested.

Also lost in the enthusiasm about making community colleges tuition-free is the reality that the biggest bills for most students are the costs of living while enrolled and the opportunity costs of leaving the job market to enroll in school on more than an occasional basis. Also lost in the hubbub is the question of how these benefits are going to be paid for. This key financing question remains largely unanswered in the administration’s explanations thus far.

What would happen to enrollments in other higher education institutions? Advocates for the Tennessee Promise talk about how it has already boosted enrollments in community colleges. There seems to be little consideration, though, of whether this might come at the expense of enrollments in other colleges and universities. The Obama administration clearly prefers that students go to community colleges rather than for-profit trade schools, but it seems to have little concern that offering more aid for students enrolling in community colleges could have an adverse effect on enrollments in more traditional four-year institutions -- including historically black colleges that could ill afford the drop-off in enrollments.

But federal and state officials have an obligation to recognize that enrollments in higher education are not unlimited and that providing incentives for students to enroll in one sector means that enrollments in other sectors are likely to decline. Is the next step for the federal government to propose a program of support for those institutions that cannot afford to wait for all those new community college students to transfer in two or three years to fill their now empty seats?

Why would community colleges participate? Like many other federal and state policy initiatives, the president’s proposal reflects a tendency to think only in terms of demand and to believe that price reductions will inevitably result in enrollment increases. But the economic reality is that good policy must take into account institutional behavior as well, and it is not at all clear why community colleges would change their behavior in light of the Obama proposal. Under the Obama plan, the federal and state governments would replace funds that families currently spend or loans that students currently borrow for tuition. The likely result of such a policy would be more students enrolling in already overcrowded community colleges with little or no additional funds provided to community colleges to educate them.

If one truly wants to improve community college financing, a better approach would be one in which governments recognize the additional costs entailed in enrolling additional students and try to help pay for those costs. But in the absence of such a proposal, the current Obama plan seems like more of the same -- more requirements but no more money. As a result, it is hard to understand the enthusiasm of the community college and other national associations for the president’s plan.

Why would states participate? It’s also not immediately clear why states would participate in the Obama plan, as it is aimed primarily or entirely at changing how tuition is financed. As a result, it really would not get at the bulk of the community college financing iceberg -- what states and localities spend in support of every student who enrolls. So the question remains: why would states choose to participate in a plan that obligates them to meet a series of new requirements AND pay for one-quarter of tuition costs, in addition to still paying what they do now for operating subsidies?

In sum, an analysis of what we know of the president’s plan is part of a troubling pattern that seems to characterize our higher education policy debates these days. Political considerations trump good policy. The interests of low-income students get second billing to middle class affordability, or no billing at all. Not enough attention is paid to how things actually would work or why institutions or states would decide to participate. 

It all goes to show that, as economists like Milton Friedman have famously observed, there is no such thing as a free lunch. One of the problems with the Obama administration’s continuing enthusiasm for higher education policy initiatives is that it doesn’t seem to recognize this basic economic reality.

Arthur M. Hauptman is a public policy consultant specializing in higher education policy and finance. This is the first in a series of articles about how federal and state higher education policies might be changed to produce greater equity, efficiency and effectiveness.

Fundamentals of Program Assessment Workshop

Date: 
Sat, 04/25/2015

Location

Atlanta, Georgia
United States

Assessment (of the right kind) is key to institutional revival

Today, leaders of colleges and universities of every size and focus are struggling to meaningfully demonstrate the true value of their institutions to students, educators and the greater community, because they can’t really prove that students are learning.

Most are using some type of evaluation or assessment mechanism to keep “the powers that be” happy through earnest narratives about goals and findings, interspersed with high-level data tables and colorful bar charts. However, this is not scientific, campuswide assessment of student learning outcomes aimed at validly measuring competency.

The "Grim March" & the Meaning of Assessment

Campuswide assessment efforts rarely involve the rigorous, scientific inquiry about actual student learning that is aligned from program to program and across general education. Instead, year after year, the accreditation march has trudged grimly on, its participants working hard to produce a plausible picture of high “satisfaction” for the whole, very expensive endeavor.

For the past 20-plus years, the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys. Institutional research personnel have diligently combined, crunched and correlated this data with other mostly indirect measures such as retention, enrollment and grade point averages.

Attempts are made to produce triangulation with samplings of alumni and employer opinions about the success of first-time hires. All of this is called “institutional assessment,” but it does not produce statistical evidence from direct measurement that empirically demonstrates that the university is responsible for students’ skill sets based on instruction at the institution. Research measurement methods like chi-square tests or inter-rater reliability statistics, combined with a willingness to assess across the institution, can demonstrate that a change in student learning over time is statistically significant and is the result of soundly delivered curriculum. This is the kind of “assessment” the world at large wants to know about.
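
To make the two methods just named concrete, here is a minimal sketch in Python; the cohorts, counts and rubric ratings are invented for illustration, not drawn from any real institution.

    # Hypothetical direct-measure data: did the share of students rated
    # "proficient" on a common rubric change between two assessment cycles,
    # and do two trained raters score the same artifacts consistently?
    from scipy.stats import chi2_contingency
    from sklearn.metrics import cohen_kappa_score

    # Chi-square test on (proficient, not proficient) counts per cohort.
    observed = [[142, 358],   # 2013 cohort
                [188, 312]]   # 2014 cohort
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p-value marks a significant change

    # Cohen's kappa for inter-rater reliability on ten rubric scores
    # (0 = chance-level agreement, 1 = perfect agreement).
    rater_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]
    rater_b = [3, 2, 4, 3, 1, 3, 2, 4, 4, 2]
    print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")

Results like these, reported alongside the underlying counts, are the sort of direct, verifiable evidence described above.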

The public is not satisfied with inferentially derived evidence. Given the cost, they yearn to know if their sons and daughters are getting better at things that matter to their long-term success. Employers routinely stoke this fire by expressing doubt about the out-of-the-box skills of graduates.

Who Owns Change Management?

Whose responsibility is it to redirect the march to provide irrefutable reports that higher education is meeting the needs of all its stakeholders? Accreditors now wring their hands and pronounce that reliance on indirect measures will no longer suffice. They punish schools with orders to fix the shortfalls in the assessment of outcomes and dole out paltry five-year passes until the next audit. They will not, however, provide sound, directive steps for the marchers about how to systematically address learning outcomes.

How about the government? The specter of more third-party testing is this group’s usual response. They did it to K-12, and it has not worked there. Few would be happy with that center of responsibility.

Back to the campus. To be fair, IR or offices of institutional effectiveness have been reluctant to get involved with direct measures of student performance for good reasons. Culture dictates that such measures belong to program leaders and faculty. The traditions and rules of “academic freedom” somehow demand this. The problem is that faculty and program leaders are indeed content experts, but they are no more versed in effective assessment of student outcomes than anyone else on campus.

This leaves us with campus leaders who have long suspected something is very wrong or at least misdirected. To paraphrase one highly placed academic officer, “We survey our students and a lot of other people and I’m told that our students are ‘happy.’ I just can’t find anyone who can tell me for sure if they’re ‘happy-smarter’ or not!” Their immersion in the compliance march does not give them much clue about what to do about the dissonance they are feeling.

The Assessment Renaissance

Still, the intelligent money is on higher ed presidents first and foremost, supported by their provosts and other chief academic officers. If there is to be deep change in the current culture they are the only ones with the proximal power to make it happen. The majority of their number has declared that “disruption” in higher education is now essential.

Leaders looking to eradicate the walking dead assessment march in a systematic way need to:

  1. Disrupt. This requires a college or university leader to see beyond the horizon and ultimately have an understanding of the long-term objective. It doesn’t mean they need to have all the ideas or proper procedures, but they must have the vision to be a leader and a disrupter. They must demand change on a realistic, but short timetable.
  2. Get Expertise. Outcomes/competency-based assessment has been a busy field of study over the past half-decade. Staff development and helping hands from outside the campus are needed.
  3. Rally the Movers and Shakers. In almost every industry, there are leaders without ascribed power whose drive is undeniable. They are the innovators and the early adopters. Enlist them as co-disruptors. On every campus there are faculty and staff who will be willing to take risks and challenge the very fabric of institutional assessment for the greater good. Gather them together and give them the resources, the authority and the latitude to get the job done. Defend them. Cheerlead at every opportunity.
  4. Change the Equation. Change the conversation from GPAs and satisfaction surveys to one essential unified goal: are students really learning and how can a permanent change in behavior be measurably demonstrated?
  5. Rethink your accreditation assessment software. Most accreditation software systems rely on processes that are narrative, not a systematic inquiry via data. Universities are full of people who research for a living. Give them tools (yes, like Chalk & Wire, which my company provides) to investigate learning and thereby rebuild a systematic approach to improve competency.
  6. Find the Carrots. Assume a faculty member in engineering is going to publish. Would a research-based study about teaching and learning in their field count toward rank and tenure? If disruption is the goal, then the correct answer is yes.

Assessment is complex, but it’s not complicated. Stop the grim march. Stand still for a time. Think about learning and what assessment really means and then pick a new proactive direction to travel with colleagues.

Geoff Irvine is CEO and founder of Chalk & Wire.

Administrators should work with the faculty to assess learning the right way (essay)

“Why do we have such trouble telling faculty what they are going to do?” said the self-identified administrator, hastening to add that he “still thinks of himself as part of the faculty.”

“They are our employees, after all. They should be doing what we tell them to do.”

Across a vast number of models for assessment, strategic planning, and student services on display at last month’s IUPUI Assessment Institute, it was disturbingly clear that assessment professionals have identified “The Faculty” (beyond the lip service to #notallfaculty, always as a collective body) as the chief obstacle to successful implementation of campuswide assessment of student learning. Faculty are recalcitrant. They are resistant to change for the sake of being resistant to change. They don’t care about student learning, only about protecting their jobs. They don’t understand the importance of assessment. They need to be guided toward the Gospel with incentives and, if those fail, consequences.

Certainly, one can find faculty members of whom these are true; every organization has those people who do just enough to keep from getting fired. But let me, at risk of offending the choir to whom keynote speaker Ralph Wolff preached, suggest that the faculty-as-enemy trope may well be a problem of the assessment field’s own making. There is a blindness to the organizational and substantive implications of assessment, hidden behind the belief that assessment is nothing more than collecting, analyzing, and acting rationally on information about student learning and faculty effectiveness.

Assessment is not neutral. In thinking of assessment as an effort to determine whether students are learning and faculty are being effective, it is imperative that we unpack the implicit subject doing the determining. That should make clear that assessment is first and foremost a management rather than a pedagogical practice. Assessment not reported to the administration meets the requirements of neither campus assessment processes nor accreditation standards, and is thus indistinguishable from non-assessment. As a fundamental principle of governance in higher education, assessment is designed to promote what social scientist James Scott has called “legibility”: the ability of outsiders to understand and compare conditions across very different areas in order to facilitate those outsiders’ capacity to manage.

The Northwest Commission on Colleges and Universities, for example, requires schools to practice “ongoing systematic collection and analysis of meaningful, assessable, and verifiable data” to demonstrate mission fulfillment. That is not simply demanding that schools make informed judgments. Data must be assessable and verifiable so that evaluators can examine the extent to which programs revise their practices using the assessment data. They can’t do that unless the data make sense to them. Administrators make the same demand on their departments through campus assessment processes. In the process a hierarchical, instrumentally rational, and externally oriented management model replaces one that has traditionally been decentralized, value rational, and peer-driven.

That’s a big shift in power. There are good (and bad) arguments to be made in favor of (and opposed to) it, and ways of managing assessment that shift that power more or less than others. Assessment professionals are naïve, however, to think that those shifts don’t happen, and fools to think that the people on the losing end of them will not notice or simply give in without objection.

At the same time, assessment also imposes substantive demands on programs by requiring that they “close the loop” and adapt their curriculums to those legible results, regardless of how meaningful those results are to the programs themselves. An externally valid standard might demand significant changes to the curriculum that move the program away from its vision.

In my former department we used the ETS Major Field Test as such a standard. But while the MFT tests knowledge of political science as a whole, in political science competence is specific to subfields. Even at the undergraduate level, students specialize sufficiently to be, for example, fully conversant in international relations and ignorant of political thought. The overall MFT score does not distinguish between competent specialization and broad mediocrity. One solution was to expect that students would demonstrate excellence in at least one subfield of the discipline. The curriculum would then have to require that students take nearly every course we offered in a subfield, and staffing realities in our program would inevitably make that field American politics.
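
The problem is easy to see with invented numbers. In this sketch the subfield scores are hypothetical (the actual MFT reports scaled scores), but the arithmetic carries the point: an overall average cannot tell a competent specialist from a broadly mediocre generalist.

    # Hypothetical 100-point subfield scores for two students.
    specialist = {"American": 55, "Comparative": 50, "IR": 90, "Theory": 45, "Methods": 60}
    generalist = {"American": 60, "Comparative": 60, "IR": 60, "Theory": 60, "Methods": 60}

    for name, scores in [("specialist", specialist), ("generalist", generalist)]:
        overall = sum(scores.values()) / len(scores)
        print(f"{name}: overall = {overall:.0f}, best subfield = {max(scores.values())}")
    # Both print overall = 60; only the best-subfield figure separates the
    # IR specialist (90) from the across-the-board generalist (60).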

Because the MFT was legible to a retired Air Force officer (the institutional effectiveness director), an English professor (the dean), a chemist (the provost), and a political appointee with no previous experience in higher education (the president), it stayed in place as a benchmark of progress, but offered little to guide program management. The main tool we settled on was an assessment of the research paper produced in a required junior-level research methods course (which nearly all students put off to their final semester). That assessment gave a common basis for evaluation (knowledge of quantitative research methods) and allowed faculty to evaluate substantive knowledge in a very narrow range of content through the literature review. But it also shifted emphasis toward quantitative work in the discipline and marginalized political thought altogether, since that subfield isn’t based on empirical methods. We considered adding a political thought assignment, but that would have required students to prioritize it over the empirical fields (no other substantive field having a required assignment) rather than putting it on an equal footing.

Evaluating a program with “meaningful, assessable, and verifiable data” can’t be done without changing the program. To “close the loop” based on MFT results required a substantive change in how we saw our mission: from producing well-rounded students to specialists in American politics. To do so with the methods paper required changes in course syllabuses and advising to bring more emphasis on empirical fields, more quantitative rather than qualitative work within those fields, more emphasis on methods supporting conclusions rather than the substance of the conclusions, and less coursework in political thought. We had a choice between these options. But we could not choose an option that would not require change in response to the standard, not just the results.

This is the reality facing those, like the administrator I quoted at the beginning of this essay, who believe that they can tell faculty what to do with assessment without telling them what to do with the curriculum. If assessment requires that a program make changes based on the results of its assessment processes, then the selection of processes defines a domain of curricular changes that can result. Some of these will be unavoidable: a multiple-choice test will require faculty to favor knowledge transmission over synthetic thinking. Others will be completely proscribed: if employment in the subfield of specialization is an assessment measure, the curriculum in political thought will never be reinforced, because people don’t work in political thought. But no process can be neutral among all possible curriculums.

Again, that may or may not be a bad thing. Sometimes a curriculum just doesn’t work, and assessment can be a way to identify it and replace it with something that does. But the substantive influence of assessment is most certainly a thing one way or the other, and that thing means that assessment professionals can’t say that assessment doesn’t change what faculty teach and how they teach it. When they tell faculty members that, they appear at best clueless and at worst disingenuous. With most faculty members having oversensitive BS detectors to begin with, especially when dealing with administrators, piling higher and deeper doesn’t exactly win friends and influence people.

The blindness that comes from belief in organizationally and curricularly neutral assessment is, I think, at the heart of the condescending attitudes toward faculty at the Assessment Institute. In the day two plenary session, one audience member asked, essentially, “What do we do about them?” as if there were no faculty members in the room. The faculty member next to me was quick to tune out as the panel took up the discussion with the usual platitudes about buy-in and caring about learning.

Throughout the conference there was plenty of discussion of why faculty members don’t “get it.” Of how to get them to buy into assessment on the institutional effectiveness office’s terms. Of providing effective incentives — carrots, yes, but plenty of sticks — to get them to cooperate. Of how to explain the importance of accreditation to them, as if they are unaware of even the basics. And of faculty paranoia that assessment was a means for the administration to come for their jobs.

What there wasn’t: discussion of what the faculty’s concerns with assessment actually are. Of how assessment processes do in fact influence what happens in classrooms. Of how assessment feeds program review, thus influencing administrative decisions about program closure and the allocation of tenure lines (especially of the conversion of tenure lines to adjunct positions when vacancies occur). Of the possibility that assessment might have unintended consequences that hinder student learning. These are very real concerns for faculty members, and should be for assessment professionals as well.

Nor was there discussion of what assessment professionals can do to work with faculty in a relationship that doesn’t subordinate faculty. Of how assessment professionals can build genuinely collaborative rather than merely cooptive relationships with faculty members. Of, more than anything, the virtues of listening before telling. When it comes to these things, it is the assessment field that doesn’t “get it.”

Let me assure you, as a former faculty member who talks about these issues with current ones: faculty members do care about whether students learn. In fact, many lose sleep over it. Faculty members informally assess their teaching techniques every time they leave a classroom and adjust what they do accordingly. In fact, that usually happens before they walk back into that classroom, not at the end of a two-year assessment cycle. Faculty members most certainly feel disrespected by suggestions they only care for themselves. In fact, it is downright offensive to suggest that they are selfish when in order to make learning happen they frequently make less than their graduates do and live in the places their graduates talk of escaping.

Assessment professionals need to approach faculty members as equal partners rather than as counterrevolutionaries in need of reeducation. That’s common courtesy, to be sure. But it is also essential if assessment is to actually improve student learning.

You do care about student learning, don’t you?

Jeffrey Alan Johnson is assistant director of institutional effectiveness and planning at Utah Valley University.

Ratings and scorecards: the wrong kind of higher ed accountability (essay)

The scorecards and rating systems for higher education institutions that have been floating around Washington would, if used for purposes beyond providing comparable consumer information, make the federal government an arbiter of quality and judge of institutional performance.

This change would undermine the comprehensive, careful scrutiny currently provided by regional accrediting agencies, replacing it with cursory reviews.

Regional accreditors provide a peer-review process that sparks investigation into the key challenges institutions face, looking beyond symptoms for root causes. They force all providers of postsecondary education to investigate closely every aspect of performance that is crucial to strengthening institutional excellence, improvement, and innovation. If you want to know how well a university is really performing, a graduation rate will only tell you so much.

But the peer-review process conducted by accrediting bodies provides a view into the vital systems of the institution: the quality of instruction, the availability and effectiveness of student support, how the institution is led and governed, its financial management, and how it uses data.

Moreover, as part of the peer-review process, accrediting bodies mobilize teams of expert volunteers to study governance and performance measures that encourage institutions to make significant changes. No government agency can replace this work, can provide the same level of careful review, or has the resources to mobilize such an expert group of volunteers. In fact, the federal government has long recognized its own limitations and, since 1952, has used accreditation by a federally recognized accrediting agency as a baseline for institutional eligibility for Title IV financial-aid programs.

Attacked at times by policy makers as an irrelevant anachronism and by institutions as a series of bureaucratic hoops through which they must jump, the regional accreditors’ approach to quality control has instead become increasingly cost-effective, transparent, and data- and outcomes-oriented.

Higher education accreditors work collaboratively with institutions to develop mutually agreed-upon common standards for quality in programs, degrees, and majors. In fact, in the Southern region, accreditation has addressed public and policy maker interests in gauging what students gain from their academic experience by requiring, since the 1980s, the assessment of student learning outcomes in colleges. Accreditation agencies also have established effective approaches to ensure that students who attend institutions achieve desired outcomes for all academic programs, not just a particular major.

While the federal government has the authority to take actions against institutions that have proven deficient, it has not used this authority regularly or consistently. A letter to Congress from the American Council on Education and 39 other organizations underscored the inability of the U.S. Department of Education to act with dispatch, noting that last year the Department announced “it would levy fines on institutions for alleged violations that occurred in 1995 -- nearly two decades prior.”

By contrast, consider that in the past decade, the Southern Association of Colleges and Schools Commission on Colleges stripped nine institutions of their accreditation status and applied hundreds of sanctions to all types of institutions (from online providers to flagship campuses) in its region alone. But when accreditors have acted boldly in recent times, they have been criticized by politicians for going too far, giving accreditors the sense that we’re “damned if we do, damned if we don’t.”

The Problem With Simple Scores

Our concern about using rating systems and scorecards for accountability is based on several factors. Beyond tilting the system toward the lowest common denominator of quality, rating approaches can create new opportunities for institutions to game the system (as with U.S. News & World Report ratings and rankings) and introduce unintended consequences as we have seen occur in K-12 education.

Over the past decade, the focus on a few narrow measures for the nation’s public schools has not led to significant achievement gains or closing achievement gaps. Instead, it has narrowed the curriculum and spurred the current public backlash against overtesting. Sadly, the data generated from this effort have provided little actionable information to help schools and states improve, but have actually masked -- not illuminated -- the root causes of problems within K-12 institutions.

Accreditors recognize that the complex nature of higher education requires that neither accreditors nor the government dictate how individual institutions can meet desired outcomes. No single bright-line measure of accountability is appropriate for the vast diversity of institutions in the field, each with its own unique mission. The fact that students often enter and leave the system and increasingly earn credits from multiple institutions further complicates measures of accountability.

Moreover, setting minimal standards will not push institutions that think they are high performing to get better. All institutions – even those considered “elite” – need to work continually to achieve better outcomes and should have a role in identifying key outcomes and strategies for improvement that meet their specific challenges.

Accreditors also have demonstrated they are capable of addressing new challenges without strong government action. With the explosion of online providers, accreditors found a solution to address the challenges of quality control for these programs. Accrediting groups partnered with state agencies, institutions, national higher education organizations, and other stakeholders to form the State Authorization Reciprocity Agreements, which use existing regional higher education compacts to allow for participating states and institutions to operate under common, nationwide standards and procedures for regulating postsecondary distance education. This approach provides a more uniform and less costly regulatory environment for institutions, more focused oversight responsibilities for states, and better resolution of complaints without heavy-handed federal involvement.

Along with taking strong stands to sanction higher education institutions that do not meet high standards, regional accreditors are better equipped than any centralized governmental body at the state or national level to respond to the changing ecology of higher education and the explosion of online providers.

We argue for serious -- not checklist -- approaches to accountability that support improving institutional performance over time and hold institutions of all stripes to a broad array of criteria that make them better, not simply more compliant.

Belle S. Wheelan is president of the Southern Association of Colleges and Schools Commission on Colleges, the regional accrediting body for 11 states and Latin America. Mark A. Elgart is founding president and chief executive officer for AdvancED, the world’s largest accrediting body and parent organization for three regional K-12 accreditors.

AALHE 5th Annual Assessment Conference

Date: 
Mon, 06/01/2015 to Wed, 06/03/2015

Location

Lexington, Kentucky
United States

General Education and Assessment

Date: 
Thu, 02/19/2015 to Sat, 02/21/2015

Location

200 West 12th Street
Kansas City, Missouri 64105
United States

Let's differentiate between 'competency' and 'mastery' in higher ed (essay)

"Competency-based” education appears to be this year’s answer to America’s higher education challenges, judging from this week's news in Washington. Unlike MOOCs (last year’s solution), there is, refreshingly, greater emphasis on the validation of learning. Yet, all may not be as represented.

On close examination, one might ask whether competency-based education (or CBE) programs are really about “competency” or are concerned with something else. Perhaps what is being measured is more closely akin to subject matter “mastery.” The latter can be determined in a relatively straightforward manner, using examinations, projects and other forms of assessment.

However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone to do so with the skill and proficiency that would be associated with competence.

Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While doing an excellent job, in many instances, of determining mastery of a body of knowledge, most fall short in the assessment of true competence.

In the course of their own education, readers can undoubtedly recall the instructors who had complete command of their subjects but who could not effectively present to their students. The mastery of content did not extend to their being competent as teachers. Other examples might include the much-in-demand marketing professors who did not know how, in practice, to sell their executive education programs. Just as leadership and management differ one from the other, so too do mastery and competence.

My institution has been involved in assessing both mastery and competence for several decades. Created by New York’s Board of Regents in the early 1970s, it is heir to the Regents’ century-old belief in the importance of measuring educational attainment (New York secondary students have been taking Regents Exams, as a requirement for high school graduation, since 1878).

Building on its legacy, the college now offers more than 60 subject matter exams. These have been developed with the help of nationally known subject matter experts and a staff of doctorally prepared psychometricians. New exams are field tested, nationally normed and reviewed for credit by the American Council on Education, which also reviews the assessments of ETS (DSST) and the College Board (CLEP). Such exams are routinely used for assessing subject matter mastery.

In the case of the institution’s competency-based associate degree in nursing, a comprehensive, hands-on assessment of clinical competence is required as a condition of graduation. This evaluation, created with the help of the W.K. Kellogg Foundation in 1975, takes place over three days in an actual hospital, with real patients, from across the life span -- pediatric to geriatric. Performance is closely monitored by multiple, carefully selected and trained nurse educators. Students must demonstrate skill and ability to a level of defined competence within three attempts or face dismissal or transfer from the program.

In developing a competency-based program as opposed to a mastery-based one, there are many challenges that must be addressed if the program is to have credibility. These include:

  • Who specifies the elements to be addressed in a competency determination? In the case of nursing, this is done by the profession. Other fields may not be so fortunate. For instance, who would determine the key areas of competency in the humanities or arts?
  • Who does the assessing, and what criteria must be met to be seen as a qualified assessor of someone’s competency?
  • How will competence be assessed, and is the process scalable? In the nursing example above, we have had to establish a national network of hospitals, as well as recruit, train and field a corps of graduate-prepared nurse educators. At scale, this infrastructure is limited to approximately 2,000 competency assessments per year, which is far less than the number taking the college’s computer-based mastery examinations.
  • Who is to be served by the growing number of CBE programs? Are they returning adults who have been in the workplace long enough to acquire relevant skills and knowledge on the job, or is CBE thought to be relevant even for traditional-aged students?

(It is difficult to imagine many 22-year-olds as competent within a field or profession. Yet there is little question that most could show some level of mastery of a body of knowledge for which they have prepared.)

  • Do prospective students want this type of learning/validation? Has there been market research that supports the belief that there is demand? We have offered two mastery-based bachelor’s degrees (each for less than $10,000) since 2011. Demand has been modest because of uncertainty about how a degree earned in such a manner might be viewed by employers and graduate schools (this despite the fact that British educators have offered such a model for centuries).
  • Will employers and graduate schools embrace those with credentials earned in a CBE program? Institutions that have varied from the norm (dropping the use of grades, assessing skills vs. time in class) have seen their graduates face admissions challenges when attempting to build on their undergraduate credentials by applying to graduate schools. As for employers, a backlash may be expected if academic institutions sell their graduates as “competent” and later performance makes clear that they are not.

The interest in CBE has, in large part, been driven by the fact that employers no longer see new college graduates as job-ready. In fact, a recent Lumina Foundation report found that only 11 percent of employers believe that recent graduates have the skills needed to succeed within their work forces. One CBE educator has noted, "We are stopping one step short of delivering qualified job applicants if we send them off having 'mastered' content, but not demonstrating competencies." 

Or, as another put it, somewhat more succinctly, "I don't give a damn what they KNOW.  I want to know what they can DO.”

The move away from basing academic credit on seat time is to be applauded. Determining levels of mastery through various forms of assessment -- exams, papers, projects, demonstrations, etc. -- is certainly a valid way to measure outcomes. However, seat time has rarely been the sole basis for a grade or credit. The measurement tools listed here have been found in the classroom for decades, if not centuries.

Is this a case of old wine in new bottles? Perhaps not. What we now see are programs being approved for Title IV financial aid on the basis of validated learning, not for a specified number of instructional hours; whether the process results in a determination of competence or mastery is secondary, but not unimportant.

A focus on learning independent of time, while welcome, is not the only consideration here. We also need to be more precise in our terminology. The appropriateness of the word competency is questioned when there is no assessment of the use of the learning achieved through a CBE program. Western Governors University, Southern New Hampshire, and Excelsior offer programs that do assess true competency.

Unfortunately, the vast majority of the newly created CBE programs do not. This conflation of terms needs to be addressed if employers are to see value in what is being sold. A determination of “competency” that does not include an assessment of one’s ability to apply theories and concepts cannot be considered a “competency-based” program.

To continue to use “competency” when we mean “mastery” may seem like a small thing. Yet, if we of the academy cannot be more precise in our use of language, we stand to further the distrust which many already have of us. To say that we mean “A” when in fact we mean “B” is to call into question whether we actually know what we are doing.

John F. Ebersole is the president of Excelsior College, in Albany, N.Y.

Assessment Conference

Date: 
Mon, 03/09/2015 to Wed, 03/11/2015

Location

Austin, Texas
United States
