“Competency-based” education appears to be this year’s answer to America’s higher education challenges, judging from this week's news in Washington. Unlike MOOCs (last year’s solution), there is, refreshingly, greater emphasis on the validation of learning. Yet, all may not be as represented.
On close examination, one might ask if competency-based education (or CBE) programs are really about “competency,” or are they concerned with something else? Perhaps what is being measured is more closely akin to subject matter “mastery.” The latter can be determined in a relatively straightforward manner, using examinations, projects and other forms of assessment.
However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone to apply them with the skill and proficiency that would be associated with competence.
Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While doing an excellent job, in many instances, of determining mastery of a body of knowledge, most fall short in the assessment of true competence.
In the course of their own education, readers can undoubtedly recall the instructors who had complete command of their subjects, but who could not effectively present to their students. The mastery of content did not extend to their being competent as teachers. Other examples might include the much-in-demand marketing professors who did not know how, in practice, to sell their executive education programs. Just as leadership and management differ one from the other, so too do mastery and competence.
My institution has been involved in assessing both mastery and competence for several decades. Created by New York’s Board of Regents in the early 1970s, it is heir to the Regents’ century-old belief in the importance of measuring educational attainment (New York secondary students have been taking Regents Exams, as a requirement for high school graduation, since 1878).
Building on its legacy, the college now offers more than 60 subject matter exams. These have been developed with the help of nationally known subject matter experts and a staff of doctorally prepared psychometricians. New exams are field tested, nationally normed and reviewed for credit by the American Council on Education, which also reviews the assessments of ETS (DSST) and the College Board (CLEP). Such exams are routinely used for assessing subject matter mastery.
In the case of the institution’s competency-based associate degree in nursing, a comprehensive, hands-on assessment of clinical competence is required as a condition of graduation. This evaluation, created with the help of the W.K. Kellogg Foundation in 1975, takes place over three days in an actual hospital, with real patients, from across the life span -- pediatric to geriatric. Performance is closely monitored by multiple, carefully selected and trained nurse educators. Students must demonstrate skill and ability to a level of defined competence within three attempts or face dismissal or transfer from the program.
In developing a competency-based program as opposed to a mastery-based one, there are many challenges that must be addressed if the program is to have credibility. These include:
Who specifies the elements to be addressed in a competency determination? In the case of nursing, this is done by the profession. Other fields may not be so fortunate. For instance, who would determine the key areas of competency in the humanities or arts?
Who does the assessing, and what criteria must be met to be seen as a qualified assessor of someone’s competency?
How will competence be assessed, and is the process scalable? In the nursing example above, we have had to establish a national network of hospitals, as well as recruit, train and field a corps of graduate-prepared nurse educators. At scale, this infrastructure is limited to approximately 2,000 competency assessments per year, far fewer than the number taking the College’s computer-based mastery examinations.
Who is to be served by the growing number of CBE programs? Are they returning adults who have been in the workplace long enough to acquire relevant skills and knowledge on the job, or is CBE thought to be relevant even for traditional-aged students?
(It is difficult to imagine many 22-year-olds as competent within a field or profession. Yet, there is little question that most could show some level of mastery of a body of knowledge for which they have prepared.)
Do prospective students want this type of learning/validation? Has there been market research that supports the belief that there is demand? We have offered two mastery-based bachelor’s degrees (each for less than $10,000) since 2011. Demand has been modest because of uncertainty about how a degree earned in such a manner might be viewed by employers and graduate schools (this despite the fact that British educators have offered such a model for centuries).
Will employers and graduate schools embrace those with credentials earned in a CBE program? Institutions that have varied from the norm (dropping the use of grades, assessing skills vs. time in class) have seen their graduates face admissions challenges when attempting to build on their undergraduate credentials by applying to graduate schools. As for employers, a backlash may be expected if academic institutions sell their graduates as “competent” and later performance makes clear that they are not.
The interest in CBE has, in large part, been driven by the fact that employers no longer see new college graduates as job-ready. In fact, a recent Lumina Foundation report found that only 11 percent of employers believe that recent graduates have the skills needed to succeed within their work forces. One CBE educator has noted, "We are stopping one step short of delivering qualified job applicants if we send them off having 'mastered' content, but not demonstrating competencies."
Or, as another put it, somewhat more succinctly, "I don't give a damn what they KNOW. I want to know what they can DO.”
The move away from basing academic credit on seat time is to be applauded. Determining levels of mastery through various forms of assessment -- exams, papers, projects, demonstrations, etc. -- is certainly a valid way to measure outcomes. However, seat time has rarely been the sole basis for a grade or credit. The measurement tools listed here have been found in the classroom for decades, if not centuries.
Is this a case of old wine in new bottles? Perhaps not. What we now see are programs being approved for Title IV financial aid on the basis of validated learning, not for a specified number of instructional hours; whether the process results in a determination of competence or mastery is secondary, but not unimportant.
A focus on learning independent of time, while welcome, is not the only consideration here. We also need to be more precise in our terminology. Calling a program “competency-based” is questionable when there is no assessment of whether the learning it certifies can actually be applied. Western Governors University, Southern New Hampshire, and Excelsior offer programs that do assess true competency.
Unfortunately, the vast majority of the newly created CBE programs do not. This conflation of terms needs to be addressed if employers are to see value in what is being sold. A determination of “competency” that does not include an assessment of one’s ability to apply theories and concepts cannot be considered a “competency-based” program.
To continue to use “competency” when we mean “mastery” may seem like a small thing. Yet, if we of the academy cannot be more precise in our use of language, we stand to further the distrust which many already have of us. To say that we mean “A” when in fact we mean “B” is to call into question whether we actually know what we are doing.
John F. Ebersole is the president of Excelsior College, in Albany, N.Y.
This week Pearson introduced a new learning model for competency-based education. The company's seven-step "platform" seeks to help colleges prepare, build and sustain successful competency-based programs. It includes advice on market analysis, curriculum design and using data to evaluate student performance.
When the teacher and poet Taylor Mali declares, “I can make a C+ feel like a Congressional Medal of Honor and an A- feel like a slap in the face,” he testifies to the powerful ways teachers can use emotions to help students learn and grow. Students -- and their parents -- put a great deal of trust in college educators to use these powers wisely and cautiously. This is why the unfolding debacle of the Facebook emotional contagion experiment should give educators great pause.
In 2012, for one week, Facebook changed an algorithm in its News Feed function so that certain users saw more messages with words associated with positive sentiment and others saw more words associated with negative sentiment. Researchers from Facebook and Cornell then analyzed the results and found that the experiment had a small but statistically significant effect on the emotional valence of the kinds of messages that News Feed readers subsequently went on to write. People who saw more positive messages wrote more positive ones, and people who saw more negative messages wrote more negative ones. The researchers published a study in the Proceedings of the National Academy of Sciences, and they claimed the study provides evidence of the possibility of large-scale emotional contagion.
The debate immediately following the release of the study in the Proceedings of the National Academy of Sciences has been fierce. There has been widespread public outcry that Facebook has been manipulating people’s emotions without following widely accepted research guidelines that require participant consent. Social scientists who have come to the defense of the study note that Facebook conducts experiments on the News Feed algorithm constantly, as do virtually all other online platforms, so users should expect to be subject to these experiments. Regardless of how merit and harm are ultimately determined in the Facebook case, however, the implications of its precedent for learning research are potentially very large.
All good teachers observe their students and use what they learn from those observations to improve instruction. Good teachers assess and probe their students, experiment with different approaches to instruction and coaching, and make changes to their practice and pedagogy based on the results of those experiments. In physical classrooms, these experiments are usually ad hoc and the data analysis informal.
But as more college instruction moves online, it becomes ever easier for instructors to observe their students systematically and continuously. Digital observation of college instruction promises huge advances in the science of learning. It also raises ethical questions that higher education leaders have only begun to address.
What does it mean to give consent in an age of pages-long terms-of-service documents that can be changed at any time? In a world where online users should expect to be constantly studied, what conditions should require additional consent? What bedrock ethical principles of the research enterprise need to be rethought or reinforced as technology reshapes the frontiers of research? How do we ensure that corporate providers of online learning tools adhere to the same ethical standards for research as universities?
If the ultimate aim of research is beneficence -- to do maximum good with minimum harm -- how do we weigh new risks and new opportunities that cannot be fully understood without research?
Educational researchers must immediately engage these questions. The public has enormous trust in academic researchers to conduct their inquiries responsibly, but this trust may be fragile. Educational researchers have not yet had a Facebook moment, but the conditions for concern are rising, and online learning research is expanding.
Proactively addressing these concerns means revisiting the principles and regulatory structures that have guided academic research for generations. The Belmont Report, a keystone document of modern research ethics, was crafted to guide biomedical science in an analog world. Some of the principles of that report should undoubtedly continue to guide research ethics, but we may also need new thinking to wisely advance the science of learning in a digital age.
In June 2014, a group of 50 educational researchers, computer scientists, and privacy experts from a variety of universities, as well as observers from government and allied philanthropies, gathered at Asilomar Conference Grounds in California to draft first principles for learning research in the digital era. We released a document, the Asilomar Convention for Learning Research in Higher Education, which recognizes the importance of changing technology and public expectations for scientific practice.
The document embraces three principles from the Belmont Report: respect for persons, justice, and beneficence. It also specifies three new ones: the importance of openness of data use practices and research findings, the fundamental humanity of learning regardless of the technical sophistication of learning media, and the need for continuous consideration of research ethics in the context of rapidly changing technology.
We hope the Asilomar Convention begins a broader conversation about the future of learning research in higher education. This conversation should happen at all levels of higher education: in institutional review boards, departments and ministries of education, journal editorial boards, and scholarly societies. It should draw upon new research about student privacy and technology emerging from law schools, computer science departments, and many other disciplines.
And it should specifically consider the ethical implications of the fact that much online instruction takes the form of joint ventures between nonprofit universities and for-profit businesses. We encourage organizers of meetings and conferences to make consideration of the ethics of educational data use an immediate and ongoing priority. Preservation of public trust in higher education requires a proactive research ethics in the era of big data.
Justin Reich is the Richard L. Menschel HarvardX Research Fellow and a Fellow at the Berkman Center for Internet & Society at Harvard University. Mitchell L. Stevens is associate professor and director of digital research and planning in the Graduate School of Education at Stanford University.
The regional accrediting commissions for New England and the Mid-Atlantic states placed several colleges on probation at their most recent meetings.
Burlington College, in Vermont, announced that it had been cited by the New England Association of Schools and Colleges' Commission on Institutions of Higher Education for failing to meet the accreditor's standard for financial resources. College officials attributed the problem to debt the private four-year institution accumulated when it purchased property previously owned by a local diocese.
It’s surprising how many house pets hold advanced degrees. Last year, a dog received his M.B.A. from the American University of London, a non-accredited distance-learning institution. It feels as if I should add “not to be confused with the American University in London,” but getting people to confuse them seems like a pretty basic feature of the whole AUOL marketing strategy.
The dog, identified as “Peter Smith” on his diploma, goes by Pete. He was granted his degree on the basis of “previous experiential learning,” along with payment of £4500. The funds were provided by a BBC news program, which also helped Pete fill out the paperwork. The American University of London required that Pete submit evidence of his qualifications as well as a photograph. The applicant submitted neither, as the BBC website explains, “since the qualifications did not exist and the applicant was a dog.”
The program found hundreds of people listing AUOL degrees in their profiles on social networking sites, including “a senior nuclear industry executive who was in charge of selling a new generation of reactors in the UK.” (For more examples of suspiciously credentialed dogs and cats, see this list.)
Inside Higher Ed reports on diploma mills and fake degrees from time to time but can’t possibly cover every revelation that some professor or state official has a bogus degree, or that a “university” turns out to be run by a convicted felon from his prison cell. Even a blog dedicated to the topic, Diploma Mill News, links to just a fraction of the stories out there. Keeping up with every case is just too much; nobody has that much Schadenfreude in them.
By contrast, scholarly work on the topic of counterfeit credentials has appeared at a glacial pace. Allen Ezell and John Bear’s exposé Degree Mills: The Billion-Dollar Industry -- first published by Prometheus Books in 2005 and updated in 2012 -- points out that academic research on the phenomenon is conspicuously lacking, despite the scale of the problem. (Ezell headed up the Federal Bureau of Investigation's “DipScam” investigation of diploma mills that ran from 1980 through 1991.)
The one notable exception to that blind spot is the history of medical quackery, which enjoyed its golden age in the United States during the late 19th and early 20th centuries. Thousands of dubious practitioners throughout the United States got their degrees from correspondence courses or fly-by-night medical schools. The fight to put both the quacks and the quack academies out of business reached its peak during the 1920s and ‘30s, under the tireless leadership of Morris Fishbein, editor of the Journal of the American Medical Association.
H.L. Mencken was not persuaded that getting rid of medical charlatans was such a good idea. “As the old-time family doctor dies out in the country towns,” he wrote in a newspaper column from 1924, “with no competent successor willing to take over his dismal business, he is followed by some hearty blacksmith or ice-wagon driver, turned into a chiropractor in six months, often by correspondence.... It eases and soothes me to see [the quacks] so prosperous, for they counteract the evil work of the so-called science of public hygiene, which now seeks to make imbeciles immortal.” (On the other hand, he did point out quacks worth pursuing to Fishbein.)
The pioneering scholar of American medical shadiness was James Harvey Young, an emeritus professor of history at Emory University when he died in 2006, who first published on the subject in the early 1950s. Princeton University Press is reissuing American Health Quackery: Collected Essays of James Harvey Young in paperback this month. But while patent medicines and dubious treatments are now routinely discussed in books and papers on medical history, very little research has appeared on the institutions -- or businesses, if you prefer -- that sold credentials to the snake-oil merchants of yesteryear.
There are plenty still around, incidentally. In Degree Mills, Ezell and Bear cite a Congressional committee’s estimate from 1986 that there were more than 5,000 fake doctors practicing in the United States. The figure must be several times that by now.
The demand for fraudulent diplomas comes from a much wider range of aspiring professionals now than in the patent-medicine era -- as the example of Pete, the canine MBA, may suggest. The most general social-scientific study of the problem seems to be “An Introduction to the Economics of Fake Degrees,” published in the Journal of Economic Issues in 2008.
The authors -- Gilles Grolleau, Tarik Lakhal, and Naoufel Mzoughi -- are French economists who do what they can with the available pool of data, which is neither wide nor deep. “While the problem of diploma mills and fake degrees is acknowledged to be serious,” they write, “it is difficult to estimate their full impact because it is an illegal activity and there is an obvious lack of data and rigorous studies. Several official investigations point to the magnitude and implications of this dubious activity. These investigations appear to underestimate the expanding scale and dimensions of this multimillion-dollar industry.”
Grolleau et al. distinguish between counterfeit degrees (fabricated documents not actually issued by the institutions the holder thereby claims to have attended) and “degrees from bogus universities, sold outright and that can require some academic work but significantly less than comparable, legitimate accredited programs.” The latter institutions, also known as diploma mills, are sometimes backed up by equally dubious accreditation “agencies.” A table in the paper indicates that more than 200 such “accreditation mills” (defined as agencies not recognized by either the Council for Higher Education Accreditation or the U.S. Department of Education) were operating as of 2004.
The authors work out the various costs, benefits, and risk factors involved in the fake degree market, but the effort seems very provisional, not to say pointless, in the absence of solid data. They write that “fake degrees allow their holders to ‘free ride’ on the rights and benefits normally tied to legitimate degrees, without the normal investment of human capital,” which may be less of a tautology than “A=A” but not by much.
The fake-degree consumer’s investment “costs” include the price demanded by the vendor but also "other ‘costs,’ such as … the fear of being discovered and stigmatized.” I suppose so, but it’s hardly the sort of expense that can be monetized. By contrast, the cost to legitimate higher-education institutions for “protecting their intellectual property rights by conducting investigations and mounting litigation against fakers” might be more readily quantified, at least in principle.
The authors state, sensibly enough: “The resources allocated to decrease the number of fake degrees should be set equal to the pecuniary value of the marginal social damage caused by the existence of the fakes, at the point of the optimal level of fakes.” But then they point to “the difficulty in measuring the value of the damage and the cost of eliminating it completely.”
So: If we had some data about the problem, we could figure out how much of a problem it is, but we don’t -- and that, too, is a problem.
Still, the paper is a reminder that empirical research on the whole scurvy topic would be of value -- especially when you consider that in the United States, according to one study, “at least 3 percent of all doctorate degrees in occupational safety and health and related areas” are bogus. Also keep in mind Ezell and Bear’s estimate in Degree Mills: The Billion-Dollar Industry that 40,000 to 45,000 legitimate Ph.D.s are awarded annually in the U.S. -- while another 50,000 spurious Ph.D.s are purchased here.
“In other words,” they write, “more than half of all people claiming a new Ph.D. have a fake degree.” And so I have decided not to make matters worse by purchasing one for my calico cat, despite “significant experiential learning” from her studies in ornithology.
Wilberforce University, the oldest private historically black college in the country, is in danger of losing accreditation. The Higher Learning Commission of the North Central Association this week sent the university a "show cause" order asking Wilberforce to give specific reasons and evidence that it should not lose accreditation. The letter says that Wilberforce is out of compliance with key requirements, such as having an effectively functioning board and sufficient financial resources. The university has a deficit in its main operating fund of nearly $10 million, is in default on some bond debt, and problems with the physical plant have left the campus "unsafe and unhealthy," the letter says. University officials did not respond to local reporters seeking comment on the accreditor's action.
Nearly 70 institutions are collaborating to better assess learning outcomes as part of a new initiative called the Multi-State Collaborative to Advance Learning Outcomes Assessment. The colleges and universities are a mix of two- and four-year institutions.
The initiative, funded in its initial planning year by the Bill & Melinda Gates Foundation, was announced Monday by the Association of American Colleges and Universities (AAC&U) and the State Higher Education Executive Officers association.
“The calls are mounting daily for higher education to be able to show what students can successfully do with their learning,” said Carol Geary Schneider, AAC&U president, in an announcement. “The Multi-State Collaborative is a very important step toward focusing assessment on the best evidence of all: the work students produce in the course of their college studies."
The 68 colleges and universities participating in the collaborative are from Connecticut, Indiana, Kentucky, Massachusetts, Minnesota, Missouri, Oregon, Rhode Island and Utah. Faculty at those institutions will sample and assess student work as part of a cross-state effort to document how students are achieving learning outcomes such as quantitative reasoning, written communication, and critical thinking.
All of the assessments will be based on a set of common rubrics. The project will also develop an online data platform for uploading student work samples and assessment data.
U.S. Sen. Kay Hagan, a North Carolina Democrat, last week introduced a bill that would seek to encourage four-year institutions to identify transfer students who have earned enough credits for an associate degree but never received one. Through this process, which is dubbed "reverse transfer," students at four-year institutions can earn associate degrees they failed to receive before transferring. The bill would encourage reverse transfer by creating competitive grants for states.
In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data – a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.
That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success.
Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.
Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?
Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:
The most “at risk” students are the most likely to be affected by a particular form of support.
Every form of support has a positive impact on every “at risk” student.
Students outside this group do not require or deserve support.
What we have found over 14 years working with students and institutions across the country is that:
There are students whose success you can positively affect at every point along the risk distribution.
Different forms of support impact different students in different ways.
The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).
Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources directed to them on that basis, asking for or accepting help becomes seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better-off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.
To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on their door -- the “persuadable” voters. The approach involved assessing what proportion of people in a particular group (e.g., high-income suburban moms with certain behavioral characteristics) was most likely to:
vote for Obama if they received the intervention (positive impact subgroup)
vote for Obama or Romney irrespective of the intervention (no impact subgroup)
vote for Romney if they received the intervention (negative impact subgroup)
The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.
This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively affected and drop out.
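The subgroup logic described above can be sketched as a simple two-arm lift comparison: for each group of students, compare outcomes with and without the intervention and direct resources where the lift is largest. This is a minimal illustration with synthetic records (the subgroup names and numbers are invented), not a production uplift model:

```python
# Minimal sketch of impact ("uplift") analysis with synthetic data.
# For each student subgroup, compare completion rates with vs. without
# a support intervention; positive lift suggests the support helps,
# negative lift suggests it may backfire for that group.
from collections import defaultdict

# (subgroup, received_support, completed) -- hypothetical records
records = [
    ("first_gen", True, True), ("first_gen", True, True),
    ("first_gen", True, False), ("first_gen", False, False),
    ("first_gen", False, False), ("first_gen", False, True),
    ("transfer", True, True), ("transfer", True, False),
    ("transfer", False, True), ("transfer", False, True),
]

def uplift_by_group(records):
    """Return completion-rate lift (treated minus control) per subgroup."""
    # per group: [treated_completed, treated_total, control_completed, control_total]
    counts = defaultdict(lambda: [0, 0, 0, 0])
    for group, treated, completed in records:
        c = counts[group]
        if treated:
            c[0] += completed
            c[1] += 1
        else:
            c[2] += completed
            c[3] += 1
    return {g: c[0] / c[1] - c[2] / c[3] for g, c in counts.items()}

lift = uplift_by_group(records)
# "first_gen" shows positive lift (support associated with more completions);
# "transfer" shows negative lift (support associated with fewer).
```

A real analysis would require randomized assignment and far larger samples, as the article goes on to note, but the ranking step itself is this simple: allocate the intervention to the subgroups at the top of the lift ordering.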
Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.
The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. The need for large controlled studies is one of the key reasons why institutional researchers focus on risk modeling. It is easy to track which students completed their programs and which did not. So, as long as the characteristics of incoming students aren’t changing much, risk modeling is rather simple.
However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.
There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.
Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.