When the teacher and poet Taylor Mali declares, “I can make a C+ feel like a Congressional Medal of Honor and an A- feel like a slap in the face,” he testifies to the powerful ways teachers can use emotions to help students learn and grow. Students -- and their parents -- put a great deal of trust in college educators to use these powers wisely and cautiously. This is why the unfolding debacle of the Facebook emotional contagion experiment should give educators great pause.
In 2012, for one week, Facebook changed an algorithm in its News Feed function so that certain users saw more messages with words associated with positive sentiment and others saw more words associated with negative sentiment. Researchers from Facebook and Cornell then analyzed the results and found that the experiment had a small but statistically significant effect on the emotional valence of the kinds of messages that News Feed readers subsequently went on to write. People who saw more positive messages wrote more positive ones, and people who saw more negative messages wrote more negative ones. The researchers published a study in the Proceedings of the National Academy of Sciences, and they claimed the study provides evidence of the possibility of large-scale emotional contagion.
The debate immediately following the study’s release has been fierce. There has been widespread public outcry that Facebook has been manipulating people’s emotions without following widely accepted research guidelines that require participant consent. Social scientists who have come to the defense of the study note that Facebook conducts experiments on the News Feed algorithm constantly, as do virtually all other online platforms, so users should expect to be subject to these experiments. Regardless of how merit and harm are ultimately determined in the Facebook case, however, the implications of its precedent for learning research are potentially very large.
All good teachers observe their students and use what they learn from those observations to improve instruction. Good teachers assess and probe their students, experiment with different approaches to instruction and coaching, and make changes to their practice and pedagogy based on the results of those experiments. In physical classrooms, these experiments are usually ad hoc and the data analysis informal.
But as more college instruction moves online, it becomes ever easier for instructors to observe their students systematically and continuously. Digital observation of college instruction promises huge advances in the science of learning. It also raises ethical questions that higher education leaders have only begun to address.
What does it mean to give consent in an age of pages-long terms-of-service documents that can be changed at any time? In a world where online users should expect to be constantly studied, what conditions should require additional consent? What bedrock ethical principles of the research enterprise need to be rethought or reinforced as technology reshapes the frontiers of research? How do we ensure that corporate providers of online learning tools adhere to the same ethical standards for research as universities?
If the ultimate aim of research is beneficence -- to do maximum good with minimum harm -- how do we weigh new risks and new opportunities that cannot be fully understood without research?
Educational researchers must immediately engage these questions. The public has enormous trust in academic researchers to conduct their inquiries responsibly, but this trust may be fragile. Educational researchers have not yet had a Facebook moment, but the conditions for concern are rising, and online learning research is expanding.
Proactively addressing these concerns means revisiting the principles and regulatory structures that have guided academic research for generations. The Belmont Report, a keystone document of modern research ethics, was crafted to guide biomedical science in an analog world. Some of the principles of that report should undoubtedly continue to guide research ethics, but we may also need new thinking to wisely advance the science of learning in a digital age.
In June 2014, a group of 50 educational researchers, computer scientists, and privacy experts from a variety of universities, as well as observers from government and allied philanthropies, gathered at Asilomar Conference Grounds in California to draft first principles for learning research in the digital era. We released a document, the Asilomar Convention for Learning Research in Higher Education, which recognizes the importance of changing technology and public expectations for scientific practice.
The document embraces three principles from the Belmont Report: respect for persons, justice, and beneficence. It also specifies three new ones: the importance of openness of data use practices and research findings, the fundamental humanity of learning regardless of the technical sophistication of learning media, and the need for continuous consideration of research ethics in the context of rapidly changing technology.
We hope the Asilomar Convention begins a broader conversation about the future of learning research in higher education. This conversation should happen at all levels of higher education: in institutional review boards, departments and ministries of education, journal editorial boards, and scholarly societies. It should draw upon new research about student privacy and technology emerging from law schools, computer science departments, and many other disciplines.
And it should specifically consider the ethical implications of the fact that much online instruction takes the form of joint ventures between nonprofit universities and for-profit businesses. We encourage organizers of meetings and conferences to make consideration of the ethics of educational data use an immediate and ongoing priority. Preservation of public trust in higher education requires a proactive research ethics in the era of big data.
Justin Reich is the Richard L. Menschel HarvardX Research Fellow and a Fellow at the Berkman Center for Internet & Society at Harvard University. Mitchell L. Stevens is associate professor and director of digital research and planning in the Graduate School of Education at Stanford University.
The regional accrediting commissions for New England and the Mid-Atlantic states placed several colleges on probation at their most recent meetings.
Burlington College, in Vermont, announced that it had been cited by the New England Association of Schools and Colleges' Commission on Institutions of Higher Education for failing to meet the accreditor's standard for financial resources. College officials attributed the problem to debt the private four-year institution accumulated when it purchased property previously owned by a local diocese.
It’s surprising how many house pets hold advanced degrees. Last year, a dog received his M.B.A. from the American University of London, a non-accredited distance-learning institution. It feels as if I should add “not to be confused with the American University in London,” but getting people to confuse them seems like a pretty basic feature of the whole AUOL marketing strategy.
The dog, identified as “Peter Smith” on his diploma, goes by Pete. He was granted his degree on the basis of “previous experiential learning,” along with payment of £4500. The funds were provided by a BBC news program, which also helped Pete fill out the paperwork. The American University of London required that Pete submit evidence of his qualifications as well as a photograph. The applicant submitted neither, as the BBC website explains, “since the qualifications did not exist and the applicant was a dog.”
The program found hundreds of people listing AUOL degrees in their profiles on social networking sites, including “a senior nuclear industry executive who was in charge of selling a new generation of reactors in the UK.” (For more examples of suspiciously credentialed dogs and cats, see this list.)
Inside Higher Ed reports on diploma mills and fake degrees from time to time but can’t possibly cover every revelation that some professor or state official has a bogus degree, or that a “university” turns out to be run by a convicted felon from his prison cell. Even a blog dedicated to the topic, Diploma Mill News, links to just a fraction of the stories out there. Keeping up with every case is just too much; nobody has that much Schadenfreude in them.
By contrast, scholarly work on the topic of counterfeit credentials has appeared at a glacial pace. Allen Ezell and John Bear’s exposé Degree Mills: The Billion-Dollar Industry -- first published by Prometheus Books in 2005 and updated in 2012 -- points out that academic research on the phenomenon is conspicuously lacking, despite the scale of the problem. (Ezell headed up the Federal Bureau of Investigation's “DipScam” investigation of diploma mills that ran from 1980 through 1991.)
The one notable exception to that blind spot is the history of medical quackery, which enjoyed its golden age in the United States during the late 19th and early 20th centuries. Thousands of dubious practitioners throughout the United States got their degrees from correspondence courses or fly-by-night medical schools. The fight to put both the quacks and the quack academies out of business reached its peak during the 1920s and ‘30s, under the tireless leadership of Morris Fishbein, editor of the Journal of the American Medical Association.
H.L. Mencken was not persuaded that getting rid of medical charlatans was such a good idea. “As the old-time family doctor dies out in the country towns,” he wrote in a newspaper column from 1924, “with no competent successor willing to take over his dismal business, he is followed by some hearty blacksmith or ice-wagon driver, turned into a chiropractor in six months, often by correspondence.... It eases and soothes me to see [the quacks] so prosperous, for they counteract the evil work of the so-called science of public hygiene, which now seeks to make imbeciles immortal.” (On the other hand, he did point out quacks worth pursuing to Fishbein.)
The pioneering scholar of American medical shadiness was James Harvey Young, who first published on the subject in the early 1950s and was an emeritus professor of history at Emory University when he died in 2006. Princeton University Press is reissuing American Health Quackery: Collected Essays of James Harvey Young in paperback this month. But while patent medicines and dubious treatments are now routinely discussed in books and papers on medical history, very little research has appeared on the institutions -- or businesses, if you prefer -- that sold credentials to the snake-oil merchants of yesteryear.
There are plenty still around, incidentally. In Degree Mills, Ezell and Bear cite a Congressional committee’s estimate from 1986 that there were more than 5,000 fake doctors practicing in the United States. The figure must be several times that by now.
The demand for fraudulent diplomas comes from a much wider range of aspiring professionals now than in the patent-medicine era -- as the example of Pete, the canine MBA, may suggest. The most general social-scientific study of the problem seems to be “An Introduction to the Economics of Fake Degrees,” published in the Journal of Economic Issues in 2008.
The authors -- Gilles Grolleau, Tarik Lakhal, and Naoufel Mzoughi -- are French economists who do what they can with the available pool of data, which is neither wide nor deep. “While the problem of diploma mills and fake degrees is acknowledged to be serious,” they write, “it is difficult to estimate their full impact because it is an illegal activity and there is an obvious lack of data and rigorous studies. Several official investigations point to the magnitude and implications of this dubious activity. These investigations appear to underestimate the expanding scale and dimensions of this multimillion-dollar industry.”
Grolleau et al. distinguish between counterfeit degrees (fabricated documents not actually issued by the institutions the holder thereby claims to have attended) and “degrees from bogus universities, sold outright and that can require some academic work but significantly less than comparable, legitimate accredited programs.” The latter institutions, also known as diploma mills, are sometimes backed up by equally dubious accreditation “agencies.” A table in the paper indicates that more than 200 such “accreditation mills” (defined as agencies not recognized by either the Council for Higher Education Accreditation or the U.S. Department of Education) were operating as of 2004.
The authors work out the various costs, benefits, and risk factors involved in the fake degree market, but the effort seems very provisional, not to say pointless, in the absence of solid data. They write that “fake degrees allow their holders to ‘free ride’ on the rights and benefits normally tied to legitimate degrees, without the normal investment of human capital,” which may be less of a tautology than “A=A” but not by much.
The fake-degree consumer’s investment “costs” include the price demanded by the vendor but also "other ‘costs,’ such as … the fear of being discovered and stigmatized.” I suppose so, but it’s hardly the sort of expense that can be monetized. By contrast, the cost to legitimate higher-education institutions for “protecting their intellectual property rights by conducting investigations and mounting litigation against fakers” might be more readily quantified, at least in principle.
The authors state, sensibly enough: “The resources allocated to decrease the number of fake degrees should be set equal to the pecuniary value of the marginal social damage caused by the existence of the fakes, at the point of the optimal level of fakes.” But then they point to “the difficulty in measuring the value of the damage and the cost of eliminating it completely.”
So: If we had some data about the problem, we could figure out how much of a problem it is, but we don’t -- and that, too, is a problem.
Still, the paper is a reminder that empirical research on the whole scurvy topic would be of value -- especially when you consider that in the United States, according to one study, “at least 3 percent of all doctorate degrees in occupational safety and health and related areas” are bogus. Also keep in mind Ezell and Bear’s estimate in Degree Mills: The Billion-Dollar Industry that 40,000 to 45,000 legitimate Ph.D.s are awarded annually in the U.S. -- while another 50,000 spurious Ph.D.s are purchased here.
“In other words,” they write, “more than half of all people claiming a new Ph.D. have a fake degree.” And so I have decided not to make matters worse by purchasing one for my calico cat, despite “significant experiential learning” from her studies in ornithology.
Wilberforce University, the oldest private historically black college in the country, is in danger of losing accreditation. The Higher Learning Commission of the North Central Association this week sent the university a "show cause" order asking Wilberforce to give specific reasons and evidence that it should not lose accreditation. The letter says that Wilberforce is out of compliance with key requirements, such as having an effectively functioning board and sufficient financial resources. The university has a deficit in its main operating fund of nearly $10 million, is in default on some bond debt, and problems with the physical plant have left the campus "unsafe and unhealthy," the letter says. University officials did not respond to local reporters seeking comment on the accreditor's action.
Nearly 70 institutions are collaborating to better assess learning outcomes as part of a new initiative called the Multi-State Collaborative to Advance Learning Outcomes Assessment. The colleges and universities are a mix of two- and four-year institutions.
The initiative, funded in its initial planning year by the Bill & Melinda Gates Foundation, was announced Monday by the Association of American Colleges and Universities and the State Higher Education Executive Officers association.
“The calls are mounting daily for higher education to be able to show what students can successfully do with their learning,” said Carol Geary Schneider, AAC&U president, in an announcement. “The Multi-State Collaborative is a very important step toward focusing assessment on the best evidence of all: the work students produce in the course of their college studies."
The 68 colleges and universities participating in the collaborative are from Connecticut, Indiana, Kentucky, Massachusetts, Minnesota, Missouri, Oregon, Rhode Island and Utah. Faculty at those institutions will sample and assess student work as part of a cross-state effort to document how students are achieving learning outcomes such as quantitative reasoning, written communication, and critical thinking.
All of the assessments will be based on a set of common rubrics. The project will also develop an online data platform for uploading student work samples and assessment data.
U.S. Sen. Kay Hagan, a North Carolina Democrat, last week introduced a bill that would seek to encourage four-year institutions to identify transfer students who have earned enough credits for an associate degree but never received one. Through this process, which is dubbed "reverse transfer," students at four-year institutions can earn associate degrees they failed to receive before transferring. The bill would encourage reverse transfer by creating competitive grants for states.
In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data -- a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.
That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success.
Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.
Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?
Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:
The most “at risk” students are the most likely to be affected by a particular form of support.
Every form of support has a positive impact on every “at risk” student.
Students outside this group do not require or deserve support.
What we have found over 14 years working with students and institutions across the country is that:
There are students whose success you can positively affect at every point along the risk distribution.
Different forms of support impact different students in different ways.
The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).
Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources are directed to them on that basis, asking for or accepting help becomes seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.
To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on their door -- the “persuadable” voters. The approach involved assessing what proportion of people in a particular group (e.g., high-income suburban moms with certain behavioral characteristics) would:
vote for Obama if they received the intervention (positive impact subgroup)
vote for Obama or Romney irrespective of the intervention (no impact subgroup)
vote for Romney if they received the intervention (negative impact subgroup)
The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.
This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively impacted and drop out.
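The subgroup logic described above can be sketched in code. This is a minimal, hypothetical illustration of impact (uplift) estimation, not any institution's actual method: the subgroup names and records are invented, and a real analysis would require the large controlled studies discussed below. The core idea is simply to compare outcomes with and without an intervention, subgroup by subgroup.

```python
# Minimal sketch of impact ("uplift") estimation on hypothetical data.
# For each student subgroup, compare graduation rates between students
# who received a support intervention (treated) and those who did not
# (control). The difference estimates the intervention's impact on that
# subgroup: positive, negligible, or negative.

from collections import defaultdict

# Hypothetical records: (subgroup, received_intervention, graduated)
records = [
    ("first-gen", True, 1), ("first-gen", True, 1), ("first-gen", True, 0),
    ("first-gen", False, 0), ("first-gen", False, 1), ("first-gen", False, 0),
    ("transfer", True, 1), ("transfer", True, 0),
    ("transfer", False, 1), ("transfer", False, 0),
]

def uplift_by_subgroup(records):
    """Estimated graduation-rate lift from the intervention, per subgroup."""
    counts = defaultdict(lambda: {"t_n": 0, "t_grad": 0, "c_n": 0, "c_grad": 0})
    for group, treated, graduated in records:
        key = "t" if treated else "c"
        counts[group][key + "_n"] += 1
        counts[group][key + "_grad"] += graduated
    uplift = {}
    for group, c in counts.items():
        rate_treated = c["t_grad"] / c["t_n"]
        rate_control = c["c_grad"] / c["c_n"]
        uplift[group] = rate_treated - rate_control
    return uplift

# Target the intervention at subgroups with the largest positive uplift,
# rather than simply at the students with the highest predicted risk.
print(uplift_by_subgroup(records))
```

In this toy data, the first-generation subgroup shows a positive lift while the transfer subgroup shows none, so a fixed support budget would go further if directed at the former -- the "persuadable voters" logic applied to student success.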
Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.
The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. The need for large controlled studies is one of the key reasons why institutional researchers focus on risk modeling. It is easy to track which students completed their programs and which did not. So, as long as the characteristics of incoming students aren’t changing much, risk modeling is rather simple.
However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.
There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.
Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.
Universities, as seats of learning and powerhouses of research, are stepping up to assume a new role. In the wake of a global financial meltdown and consequent challenges to the fabric of many societies, universities are emerging as powerful catalysts and indeed drivers of socioeconomic growth -- not only through research or technology transfer, but by assuming responsibility for preparing students for jobs, delivering today’s highly skilled workers and tomorrow’s innovators and leaders of industry.
That’s why the employability of our graduates needs to take center stage and why I applaud the Obama administration’s recent call to action in this regard. The emergence of new institutional rankings to compare the "value" delivered, such as graduate employment and earnings across institutions, means that employability has become "our job." And we need to take this responsibility seriously if we want to successfully compete in the global marketplace for higher education. Universities need to understand that we have a social duty, and perhaps a moral one too, to help successfully launch our talented graduates into society.
Here in Britain, employability outcomes are already part of our world and feature heavily in the key performance indicators of British universities. Our Higher Education Statistics Agency collects and reports national data on our publicly funded institutions, including employment rate overall from each university and type of employment outcome. And while our American cousins are decades ahead in areas such as philanthropy and have helped our journey, Britain's experience of the employability agenda is one where we can perhaps return the favor. It's this spirit of sharing and exploring wider global education trends that moved me to share some insights into how the employability agenda is influencing behavior among our students and faculty, and in the administration team, too.
It’s clear in Britain that the move to show a return on investment through enhanced employment opportunities -- the so-called "graduate premium" -- is strongly correlated with the recent significant increase in student fees, or what would be considered tuition in the American context. This was a key part of a public policy shift, across successive UK governments, to recognize more overtly that graduates are beneficiaries of their education and as such should contribute to it directly, in turn reducing the public subsidy for higher education. The fees, covered by a public student loan, are repaid only once the graduate is earning a salary deemed appropriate for a graduate (approximately $35,000), and no payments are needed up-front.
A few things have happened as a consequence. The first, perhaps rather unexpected but of high value, was a positive impact on the social inclusion agenda as more students from poorer backgrounds progress to university; analysis from the Universities and Colleges Admissions Service (UCAS) indicates that, compared with entry rates in 2011, the year before the introduction of higher tuition fees in England, 18-year-olds in disadvantaged areas in England were 12 percent more likely to enter in 2013. The second, however, was anticipated and is the subject of this commentary: students are now much more savvy as education "consumers" and are fiercely attuned to understanding the job opportunities at the end of their degree.
As such, the student voice is being heard right at the heart of university administration and across the faculty. Newly introduced UK websites such as Unistats (similar to the College Scorecard) allow prospective students to directly compare courses and institutions. Of course, when first introduced around two years ago, such public comparison sites were disruptive – and this perhaps echoes the current disquiet in the United States as similar plans are rolled out across the pond. Britain’s "Key Information Set" (KIS) data, which populates the site, comprises the items of information which students have said they would find most useful when making informed choices about where to study. The "empowered" student wants to know what the likelihood is of getting a job after graduation in various fields, what type of job they may get (professional or non-professional), and what salary they could expect. Nationally, total employability and a new measure of professional versus non-professional employment are both used in national university league tables, which are used by students to pick institutions and by the government to award funds.
With this public interest in outcome measures, university presidents and the wider administration are acutely aware of the potential impact on reputation, and by extension, recruitment. There are risks to both if we do not continue to produce graduates who are highly employable, who can obtain graduate-level jobs and who can deliver on the investment they have made in their education through the "graduate premium" on earnings. Placing such key institutional risks to one side, the wider public policy agenda surely means that governments, industry and indeed society at large need to pay attention to employability given the economic and indeed social impact of skilled labor in the global marketplace. Research consistently shows that graduates are more likely to be employed than those who left education with lower qualifications. In 2013, there were 12 million graduates in Britain and the graduate employment rate stood at 87 percent, compared with an 83 percent employment rate for those with A levels -- approximately equivalent to the high school diploma.
But it’s not quite as simple as that. A degree, once considered the passport to a graduate-level career, now needs to come in a total package -- "graduate plus" -- as employers seek well-rounded employees who are "work-ready," with clear evidence of both job-specific skills and prized graduate attributes. Given that more people are achieving graduate status, we need to help our students develop employability attributes and skills throughout their time at university while they study. This needs careful curriculum and indeed pedagogic innovation and stewardship, including partnerships with business, industry and the professions.
This is why at my own institution, Plymouth University in Britain, we embed employability throughout the curriculum from day one and we then continue to focus on developing the entrepreneurial skills of our students through academic courses as well as support, mentoring and networking opportunities. For example, curricular experiential learning projects across the university range from business (such as management students conducting consultancy work for local businesses in a program called Inspiring Futures) to health (dental, medical and optometry students are all trained in primary care settings, ensuring they have to develop communication skills with real patients in order to better understand their needs), to the whole institution, such as the Wonder Room consultancy, which brings together students from business, arts and science to pitch for, and undertake, live projects in the region.
We are also focusing on developing internships and placements for our students to enable them to enhance their resumes and gain real work-place experience. Our Plymouth Graduate Internship Program develops graduate-level internship positions with employers where recent graduates are given the opportunity to apply a range of skills, assume real responsibilities, make an impact and progress quickly from new graduate to successful professional. Last year alone, 40 percent of our students embarked on paid industry placements. I shared this fact on social media whilst at a conference in the U.S. earlier this year and was overwhelmed by the response stateside to something that we see here as very much just "business as usual."
For us at Plymouth, a key factor in our success has been our unique "students as partners" charter: rather than a transactional relationship that casts the student as a customer, we take joint responsibility with our students for their educational outcomes. This means that as well as supporting employment opportunities, whether through internships or placements, we recognize that we are preparing graduates for jobs that don’t even exist yet and for careers that will be multidimensional, more akin to a career portfolio. And so, in line with our focus on enterprise, we foster an entrepreneurial mindset in our students so that they are set up to thrive as socially responsible, highly employable global citizens. Testament to this has been national success, as our students and student societies win major entrepreneurial and business competitions. We are also seeing more of our graduates set up their own business ventures and engage in community volunteering work with a social purpose. So, for students, employability metrics are now a real factor in better-informed decision-making.
Our faculty have embraced the employability agenda through curriculum and pedagogic innovations and by creating partnerships with employers; this in turn, has served to connect us as a university to the society we serve, leading to research opportunities and live commissions for students and staff consultancy. And for the senior administrative team around the president’s table? Well, that’s an interesting one. Of course, we always had awareness of the demand for our programs, and an interest in student satisfaction – but there’s been a real shift in emphasis and we talk a lot more about the student experience which sits comfortably alongside other top table issues such as financial sustainability and risk. We are now more acutely aware that our brand is firmly aligned to the quality of our graduates and their market value, and that employability metrics are a clear proxy measure of our university standing. So jobs for our students now sit very much as one of our jobs, too.
So, dear American colleagues, if I may be so bold – I would say please embrace employability metrics as a powerful direction of travel. Be aware that public and private supporters of higher education are keenly interested to know more about the returns on their investment and on the role universities are playing now and can go on to play in driving economic and social inclusion. Universities can respond on their own terms in powerful and compelling ways to drive the narrative around employability. We should be clear that employability is very much part of the learning continuum, and learning – well, that is our job, isn’t it?
Wendy Purcell is president of Plymouth University, in Britain.