
Syracuse Tops Princeton Review’s ‘Party School’ List

Syracuse University is the nation’s top party school, according to the Princeton Review’s annual college rankings, which were released Monday.

The ranking dismayed Syracuse officials. “Syracuse University has a long-established reputation for academic excellence with programs that are recognized nationally and internationally as the best in their fields,” university officials said in a statement. “We do not aspire to be a party school.”

The Princeton Review surveyed 130,000 students across the country – an average of 343 students per campus – to develop its rankings. The “party school” rankings come from survey questions on alcohol and drug use, the number of hours students spend studying each day and the Greek system’s popularity.

Syracuse has fretted about college rankings in the past. Nancy Cantor, Syracuse’s former chancellor, disdained rankings. She quipped that the U.S. News & World Report rankings “may sell magazines,” but not much else. Syracuse slid down rankings lists during Cantor’s tenure as the university admitted more low-income and at-risk students.

Its current chancellor, Kent Syverud, who took office in January, pledged to pay more attention to rankings. The dubious honor of “top party school” is likely not what he had in mind.

“With new leadership, we are very focused on enhancing the student experience, both academically and socially,” Syracuse officials said in response to the party-school designation. “Students, parents, faculty and the full Syracuse University community should expect to see important and positive changes in the year ahead that will improve and enhance the student environment in every aspect.”

Officials said the rankings came from a “two-year-old survey of a very small portion of our student body” – a claim that is slightly misleading.

The Princeton Review conducts formal surveys of colleges once every three years. But the company also offers an online survey, which students can complete any time.

“Surveys we receive from students outside of their schools’ normal survey cycles are always factored into the subsequent year’s ranking calculations, so our pool of student survey data is continuously refreshed,” Princeton Review editors wrote in their 2015 “Best 379 Colleges” guidebook.

Brigham Young University ranked number one among “Stone Cold Sober” universities – a title it has captured for 17 years in a row. To celebrate, the university posted an image on its Facebook page of what may be its preferred celebratory beverage: reduced-fat chocolate milk.

White House Talks College Success With Education Leaders from 10 Cities

The White House summoned officials from higher education, K-12 and business in 10 cities to a meeting Thursday at the U.S. Department of Education. The group was brought together to discuss collaborative strategies on college completion, according to a brief written statement from the department. It was a follow-up to the college "summit" the White House held earlier this year. One area of focus was improving college preparedness and remedial success rates, sources said.

The represented cities and counties were Albany, New York; Baltimore County, Maryland; Camden, New Jersey; Denver, Colorado; Kansas City, Missouri; Minneapolis, Minnesota; Providence, Rhode Island; Rio Grande Valley and McAllen, Texas; Riverside County, California; and Spartanburg County, South Carolina.

Bar Exam Technology Disaster

New law graduates in many states experienced a technology snafu at the worst possible time Tuesday night: as they were attempting to upload bar examination answers just before deadlines in their states. Many reported spending hours trying and failing to upload their answers. ExamSoft, a company that manages the bar test submission process in many states, acknowledged the "slowness or difficulty" many test-takers were experiencing and apologized for it. Working with various state bar associations, the company announced deadline extensions in 17 states, so that people who couldn't submit their exams would not be penalized.

The legal blog Above the Law published some of the emails and social media messages from angry law graduates. The blog said that the situation "appears to be the biggest bar exam debacle in history."

Many bar exams continue today, so the frustrated test-takers who were up late, some fearing that they may have failed by not submitting the day's results, face another stressful day, many of them with less sleep than they otherwise would have had. One comment on the ExamSoft page on Facebook said: "This is unbelievably disrespectful. I don't think you quite understand the pressure we are all under. We understand technical issues happen (although you are supposed to be a tech company), but your 'support staff' is a joke and you should at the VERY least had updates for each of the states BEFORE their respective deadlines. Now we are wondering, HOURS before a second day of grueling testing if any of it will matter. Please answer the states with past or remaining deadlines. Or get someone to answer the phone, chat or email--> have been trying all three methods for 4 hours. Thanks."

One law blogger, Josh Blackman, wondered what would happen if failure rates are higher this year. He explained: "And for crying out loud, this is serious business. Failing the bar in this economy is a 6-month sentence of unemployment. Somewhere, a plaintiff’s lawyer is putting together a class-action suit for those who used ExamSoft and failed."

 

Let's differentiate between 'competency' and 'mastery' in higher ed (essay)

"Competency-based” education appears to be this year’s answer to America’s higher education challenges, judging from this week's news in Washington. Unlike MOOCs (last year’s solution), there is, refreshingly, greater emphasis on the validation of learning. Yet, all may not be as represented.

On close examination, one might ask whether competency-based education (CBE) programs are really about “competency,” or whether they are concerned with something else. Perhaps what is being measured is more closely akin to subject matter “mastery.” The latter can be determined in a relatively straightforward manner, using examinations, projects and other forms of assessment.

However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone to do so with the skill and proficiency that would be associated with competence.

Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While many do an excellent job of determining mastery of a body of knowledge, most fall short in the assessment of true competence.

In the course of their own education, readers can undoubtedly recall the instructors who had complete command of their subjects, but who could not present effectively to their students. The mastery of content did not extend to their being competent as teachers. Other examples might include the much-in-demand marketing professors who did not know how, in practice, to sell their executive education programs. Just as leadership and management differ one from the other, so too do mastery and competence.

My institution has been involved in assessing both mastery and competence for several decades. Created by New York’s Board of Regents in the early 1970s, it is heir to the Regents’ century-old belief in the importance of measuring educational attainment (New York secondary students have been taking Regents Exams, as a requirement for high school graduation, since 1878).

Building on its legacy, the college now offers more than 60 subject matter exams. These have been developed with the help of nationally known subject matter experts and a staff of doctorally prepared psychometricians. New exams are field tested, nationally normed and reviewed for credit by the American Council on Education, which also reviews the assessments of ETS (DSST) and the College Board (CLEP). Such exams are routinely used for assessing subject matter mastery.

In the case of the institution’s competency-based associate degree in nursing, a comprehensive, hands-on assessment of clinical competence is required as a condition of graduation. This evaluation, created with the help of the W.K. Kellogg Foundation in 1975, takes place over three days in an actual hospital, with real patients, from across the life span -- pediatric to geriatric. Performance is closely monitored by multiple, carefully selected and trained nurse educators. Students must demonstrate skill and ability to a level of defined competence within three attempts or face dismissal or transfer from the program.

In developing a competency-based program as opposed to a mastery-based one, there are many challenges that must be addressed if the program is to have credibility. These include:

  • Who specifies the elements to be addressed in a competency determination? In the case of nursing, this is done by the profession. Other fields may not be so fortunate. For instance, who would determine the key areas of competency in the humanities or arts?
  • Who does the assessing, and what criteria must be met to be seen as a qualified assessor of someone’s competency?
  • How will competence be assessed, and is the process scalable? In the nursing example above, we have had to establish a national network of hospitals, as well as recruit, train and field a corps of graduate-prepared nurse educators. At scale, this infrastructure is limited to approximately 2,000 competency assessments per year, far fewer than the number taking the College’s computer-based mastery examinations.
  • Who is to be served by the growing number of CBE programs? Are they returning adults who have been in the workplace long enough to acquire relevant skills and knowledge on the job, or is CBE thought to be relevant even for traditional-aged students?

(It is difficult to imagine many 22-year-olds as competent within a field or profession. Yet there is little question that most could show some level of mastery of a body of knowledge for which they have prepared.)

  • Do prospective students want this type of learning/validation? Has there been market research that supports the belief that there is demand? We have offered two mastery-based bachelor’s degrees (each for less than $10,000) since 2011. Demand has been modest because of uncertainty about how a degree earned in such a manner might be viewed by employers and graduate schools (this despite the fact that British educators have offered such a model for centuries).
  • Will employers and graduate schools embrace those with credentials earned in a CBE program? Institutions that have varied from the norm (dropping the use of grades, assessing skills vs. time in class) have seen their graduates face admissions challenges when attempting to build on their undergraduate credentials by applying to graduate schools. As for employers, a backlash may be expected if academic institutions sell their graduates as “competent” and later performance makes clear that they are not.

The interest in CBE has, in large part, been driven by the fact that employers no longer see new college graduates as job-ready. In fact, a recent Lumina Foundation report found that only 11 percent of employers believe that recent graduates have the skills needed to succeed within their work forces. One CBE educator has noted, "We are stopping one step short of delivering qualified job applicants if we send them off having 'mastered' content, but not demonstrating competencies." 

Or, as another put it, somewhat more succinctly, "I don't give a damn what they KNOW.  I want to know what they can DO.”

The move away from basing academic credit on seat time is to be applauded. Determining levels of mastery through various forms of assessment -- exams, papers, projects, demonstrations, etc. – is certainly a valid way to measure outcomes. However, seat time has rarely been the sole basis for a grade or credit. The measurement tools listed here have been found in the classroom for decades, if not centuries.

Is this a case of old wine in new bottles? Perhaps not. What we now see are programs being approved for Title IV financial aid on the basis of validated learning, not for a specified number of instructional hours; whether the process results in a determination of competence or mastery is secondary, but not unimportant.

A focus on learning independent of time, while welcome, is not the only consideration here. We also need to be more precise in our terminology. The appropriateness of the word competency is questionable when there is no assessment of how the learning achieved through a CBE program is used. Western Governors University, Southern New Hampshire, and Excelsior offer programs that do assess true competency.

Unfortunately, the vast majority of the newly created CBE programs do not. This conflation of terms needs to be addressed if employers are to see value in what is being sold. A determination of “competency” that does not include an assessment of one’s ability to apply theories and concepts cannot be considered a “competency-based” program.

To continue to use “competency” when we mean “mastery” may seem like a small thing. Yet, if we of the academy cannot be more precise in our use of language, we stand to further the distrust which many already have of us. To say that we mean “A” when in fact we mean “B” is to call into question whether we actually know what we are doing.

John F. Ebersole is the president of Excelsior College, in Albany, N.Y.


Pearson's New Competency-Based Education Framework

This week Pearson introduced a new learning model for competency-based education. The company's seven-step "platform" seeks to help colleges prepare, build and sustain successful competency-based programs. It includes advice on market analysis, curriculum design and using data to evaluate student performance.

Facebook study raises hard questions about use of big data in higher ed (essay)

When the teacher and poet Taylor Mali declares, “I can make a C+ feel like a Congressional Medal of Honor and an A- feel like a slap in the face,” he testifies to the powerful ways teachers can use emotions to help students learn and grow.  Students -- and their parents -- put a great deal of trust in college educators to use these powers wisely and cautiously. This is why the unfolding debacle of the Facebook emotional contagion experiment should give educators great pause.

In 2012, for one week, Facebook  changed an algorithm in its News Feed function so that certain users saw more messages with words associated with positive sentiment and others saw more words associated with negative sentiment. Researchers from Facebook and Cornell then analyzed the results and found that the experiment had a small but statistically significant effect on the emotional valence of the kinds of messages that News Feed readers subsequently went on to write. People who saw more positive messages wrote more positive ones, and people who saw more negative messages wrote more negative ones. The researchers published a study in the Proceedings of the National Academy of Sciences, and they claimed the study provides evidence of the possibility of large-scale emotional contagion.

The debate immediately following the release of the study in the Proceedings of the National Academy of Sciences has been fierce. There has been widespread public outcry that Facebook has been manipulating people’s emotions without following widely accepted research guidelines that require participant consent. Social scientists who have come to the defense of the study note that Facebook conducts experiments on the News Feed algorithm constantly, as do virtually all other online platforms, so users should expect to be subject to these experiments. Regardless of how merit and harm are ultimately determined in the Facebook case, however, the implications of its precedent for learning research are potentially very large.

All good teachers observe their students and use what they learn from those observations to improve instruction. Good teachers assess and probe their students, experiment with different approaches to instruction and coaching, and make changes to their practice and pedagogy based on the results of those experiments. In physical classrooms, these experiments are usually ad hoc and the data analysis informal.

But as more college instruction moves online, it becomes ever easier for instructors to observe their students systematically and continuously.  Digital observation of college instruction promises huge advances in the science of learning.  It also raises ethical questions that higher education leaders have only begun to address.

What does it mean to give consent in an age of pages-long terms-of-service documents that can be changed at any time? In a world where online users should expect to be constantly studied, what conditions should require additional consent? What bedrock ethical principles of the research enterprise need to be rethought or reinforced as technology reshapes the frontiers of research? How do we ensure that corporate providers of online learning tools adhere to the same ethical standards for research as universities?

If the ultimate aim of research is beneficence -- to do maximum good with minimum harm -- how do we weigh new risks and new opportunities that cannot be fully understood without research?

Educational researchers must immediately engage these questions. The public has enormous trust in academic researchers to conduct their inquiries responsibly, but this trust may be fragile. Educational researchers have not yet had a Facebook moment, but the conditions for concern are rising, and online learning research is expanding.

Proactively addressing these concerns means revisiting the principles and regulatory structures that have guided academic research for generations. The Belmont Report, a keystone document of modern research ethics, was crafted to guide biomedical science in an analog world. Some of the principles of that report should undoubtedly continue to guide research ethics, but we may also need new thinking to wisely advance the science of learning in a digital age.

In June 2014, a group of 50 educational researchers, computer scientists, and privacy experts from a variety of universities, as well as observers from government and allied philanthropies, gathered at Asilomar Conference Grounds in California to draft first principles for learning research in the digital era. We released a document, the Asilomar Convention for Learning Research in Higher Education, which recognizes the importance of changing technology and public expectations for scientific practice.

The document embraces three principles from the Belmont Report: respect for persons, justice, and beneficence. It also specifies three new ones: the importance of openness of data use practices and research findings, the fundamental humanity of learning regardless of the technical sophistication of learning media, and the need for continuous consideration of research ethics in the context of rapidly changing technology.

We hope the Asilomar Convention begins a broader conversation about the future of learning research in higher education. This conversation should happen at all levels of higher education: in institutional review boards, departments and ministries of education, journal editorial boards, and scholarly societies. It should draw upon new research about student privacy and technology emerging from law schools, computer science departments, and many other disciplines.

And it should specifically consider the ethical implications of the fact that much online instruction takes the form of joint ventures between nonprofit universities and for-profit businesses.  We encourage organizers of meetings and conferences to make consideration of the ethics of educational data use an immediate and ongoing priority.  Preservation of public trust in higher education requires a proactive research ethics in the era of big data.

Justin Reich is the Richard L. Menschel HarvardX Research Fellow and a Fellow at the Berkman Center for Internet & Society at Harvard University. Mitchell L. Stevens is associate professor and director of digital research and planning in the Graduate School of Education at Stanford University.

New England, Mid-Atlantic Accreditors Place Colleges on Probation

The regional accrediting commissions for New England and the Mid-Atlantic states placed several colleges on probation at their most recent meetings.

Burlington College, in Vermont, announced that it had been cited by the New England Association of Schools and Colleges' Commission on Institutions of Higher Education for failing to meet the accreditor's standard for financial resources. College officials attributed the problem to debt the private four-year institution accumulated when it purchased property previously owned by a local diocese.

The Middle States Commission on Higher Education, meanwhile, placed three institutions on probation late last month: Harrisburg University of Science and Technology, in Pennsylvania; New York's Unification Theological Seminary; and the University of the Potomac, in Washington. Harrisburg was cited for failing to meet the commission's standards on institutional resources and assessment of student learning; the Unification seminary for shortcomings on those two standards as well as others related to "mission and goals" and student admissions and retention; and the University of the Potomac for planning and resource allocation, institutional resources, institutional assessment, and assessment of student learning.

Essay on diploma mills

It’s surprising how many house pets hold advanced degrees. Last year, a dog received his M.B.A. from the American University of London, a non-accredited distance-learning institution. It feels as if I should add “not to be confused with the American University in London,” but getting people to confuse them seems like a pretty basic feature of the whole AUOL marketing strategy.

The dog, identified as “Peter Smith” on his diploma, goes by Pete. He was granted his degree on the basis of “previous experiential learning,” along with payment of £4,500. The funds were provided by a BBC news program, which also helped Pete fill out the paperwork. The American University of London required that Pete submit evidence of his qualifications as well as a photograph. The applicant submitted neither, as the BBC website explains, “since the qualifications did not exist and the applicant was a dog.”

The program found hundreds of people listing AUOL degrees in their profiles on social networking sites, including “a senior nuclear industry executive who was in charge of selling a new generation of reactors in the UK.” (For more examples of suspiciously credentialed dogs and cats, see this list.)

Inside Higher Ed reports on diploma mills and fake degrees from time to time but can’t possibly cover every revelation that some professor or state official has a bogus degree, or that a “university” turns out to be run by a convicted felon from his prison cell. Even a blog dedicated to the topic, Diploma Mill News, links to just a fraction of the stories out there. Keeping up with every case is just too much; nobody has that much Schadenfreude in them.

By contrast, scholarly work on the topic of counterfeit credentials has appeared at a glacial pace. Allen Ezell and John Bear’s exposé Degree Mills: The Billion-Dollar Industry -- first published by Prometheus Books in 2005 and updated in 2012 -- points out that academic research on the phenomenon is conspicuously lacking, despite the scale of the problem. (Ezell headed up the Federal Bureau of Investigation's “DipScam” investigation of diploma mills that ran from 1980 through 1991.)

The one notable exception to that blind spot is the history of medical quackery, which enjoyed its golden age in the United States during the late 19th and early 20th centuries. Thousands of dubious practitioners throughout the United States got their degrees from correspondence courses or fly-by-night medical schools. The fight to put both the quacks and the quack academies out of business reached its peak during the 1920s and ‘30s, under the tireless leadership of Morris Fishbein, editor of the Journal of the American Medical Association.

H.L. Mencken was not persuaded that getting rid of medical charlatans was such a good idea. “As the old-time family doctor dies out in the country towns,” he wrote in a newspaper column from 1924, “with no competent successor willing to take over his dismal business, he is followed by some hearty blacksmith or ice-wagon driver, turned into a chiropractor in six months, often by correspondence.... It eases and soothes me to see [the quacks] so prosperous, for they counteract the evil work of the so-called science of public hygiene, which now seeks to make imbeciles immortal.” (On the other hand, he did point out quacks worth pursuing to Fishbein.)

The pioneering scholar of American medical shadiness was James Harvey Young, who first published on the subject in the early 1950s and was an emeritus professor of history at Emory University when he died in 2006. Princeton University Press is reissuing American Health Quackery: Collected Essays of James Harvey Young in paperback this month. But while patent medicines and dubious treatments are now routinely discussed in books and papers on medical history, very little research has appeared on the institutions -- or businesses, if you prefer -- that sold credentials to the snake-oil merchants of yesteryear.

There are plenty still around, incidentally. In Degree Mills, Ezell and Bear cite a Congressional committee’s estimate from 1986 that there were more than 5,000 fake doctors practicing in the United States. The figure must be several times that by now.

The demand for fraudulent diplomas comes from a much wider range of aspiring professionals now than in the patent-medicine era – as the example of Pete, the canine MBA, may suggest. The most general social-scientific study of the problem seems to be “An Introduction to the Economics of Fake Degrees,” published in the Journal of Economic Issues in 2008.

The authors -- Gilles Grolleau, Tarik Lakhal, and Naoufel Mzoughi – are French economists who do what they can with the available pool of data, which is neither wide nor deep. “While the problem of diploma mills and fake degrees is acknowledged to be serious,” they write, “it is difficult to estimate their full impact because it is an illegal activity and there is an obvious lack of data and rigorous studies. Several official investigations point to the magnitude and implications of this dubious activity. These investigations appear to underestimate the expanding scale and dimensions of this multimillion-dollar industry.”

Grolleau et al. distinguish between counterfeit degrees (fabricated documents not actually issued by the institutions the holder thereby claims to have attended) and “degrees from bogus universities, sold outright and that can require some academic work but significantly less than comparable, legitimate accredited programs.” The latter institutions, also known as diploma mills, are sometimes backed up by equally dubious accreditation “agencies.” A table in the paper indicates that more than 200 such “accreditation mills” (defined as agencies not recognized by either the Council for Higher Education Accreditation or the U.S. Department of Education) were operating as of 2004.

The authors work out the various costs, benefits, and risk factors involved in the fake degree market, but the effort seems very provisional, not to say pointless, in the absence of solid data. They write that “fake degrees allow their holders to ‘free ride’ on the rights and benefits normally tied to legitimate degrees, without the normal investment of human capital,” which may be less of a tautology than “A=A” but not by much.

The fake-degree consumer’s investment “costs” include the price demanded by the vendor but also "other ‘costs,’ such as … the fear of being discovered and stigmatized.” I suppose so, but it’s hardly the sort of expense that can be monetized. By contrast, the cost to legitimate higher-education institutions for “protecting their intellectual property rights by conducting investigations and mounting litigation against fakers” might be more readily quantified, at least in principle.  

The authors state, sensibly enough: “The resources allocated to decrease the number of fake degrees should be set equal to the pecuniary value of the marginal social damage caused by the existence of the fakes, at the point of the optimal level of fakes.” But then they point to “the difficulty in measuring the value of the damage and the cost of eliminating it completely.”
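To make their logic concrete (in my notation, a sketch rather than the authors' own formula): let $e$ be the money spent combating fake degrees and $D(e)$ the social damage done by the fakes that remain at that level of spending, with damage falling as spending rises, $D'(e) < 0$. Choosing $e$ to minimize the total burden $T(e) = e + D(e)$ gives the first-order condition

$$T'(e^*) = 1 + D'(e^*) = 0 \quad \Longrightarrow \quad -D'(e^*) = 1,$$

that is, keep spending until the last dollar of enforcement prevents exactly one more dollar of damage. The trouble, as the authors concede, is that without data no one can estimate $D$, so the condition cannot actually be applied.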

So: If we had some data about the problem, we could figure out how much of a problem it is, but we don’t -- and that, too, is a problem.

Still, the paper is a reminder that empirical research on the whole scurvy topic would be of value – especially when you consider that in the United States, according to one study, “at least 3 percent of all doctorate degrees in occupational safety and health and related areas” are bogus. Also keep in mind Ezell and Bear’s estimate in Degree Mills: The Billion-Dollar Industry that 40,000 to 45,000 legitimate Ph.D.s are awarded annually in the U.S. -- while another 50,000 spurious Ph.D.s are purchased here.

“In other words,” they write, “more than half of all people claiming a new Ph.D. have a fake degree.” And so I have decided not to make matters worse by purchasing one for my calico cat, despite “significant experiential learning” from her studies in ornithology.


Education Commission of the States takes on inconsistency in tracking remedial education


States have a chaotic lack of consistency in how they track college remediation, according to the Education Commission of the States, which seeks national standards.

Wilberforce U. Could Lose Accreditation

Wilberforce University, the oldest private historically black college in the country, is in danger of losing accreditation. The Higher Learning Commission of the North Central Association this week sent the university a "show cause" order asking Wilberforce to give specific reasons and evidence that it should not lose accreditation. The letter says that Wilberforce is out of compliance with key requirements, such as having an effectively functioning board and sufficient financial resources. The university has a deficit in its main operating fund of nearly $10 million, is in default on some bond debt, and problems with the physical plant have left the campus "unsafe and unhealthy," the letter says. University officials did not respond to local reporters seeking comment on the accreditor's action.

 
