From health care to major league baseball, entire industries are being shaped by the evolving use of data to drive results. One sector that remains largely untouched by the effective use of data is higher education. Fortunately, a recent regulation from the Department of Education offers a potential new tool that could begin driving critical income data into conversations about higher education programs and policies.
Last year, the Department of Education put forward a regulation called gainful employment. It was designed to crack down on bad actors in investor-funded higher education (sometimes called for-profit higher education). It set standards for student loan repayment and debt-to-income ratios that institutions must meet in order for their students to remain eligible for federal funds.
In order to implement the debt-to-income metric, the Obama administration created a system by which schools submitted social security data for a cohort of graduates from specific programs. As long as the program had over 30 graduates, the Department of Education could then work with the Social Security Administration to produce an aggregate income figure for the cohort. Department officials used this to determine a program-level debt-to-income metric against which institutions would be assessed. This summer, the income data was released publicly along with the rest of the gainful employment metrics.
Unfortunately, the future of the gainful employment regulation is unclear. A federal court judge has effectively invalidated it. We, at Capella University, welcome being held accountable for whether our graduates can use their degree to earn a living and pay back their loans. While we think that standard should be applied to all of higher education, we also believe there is an opportunity for department officials to take the lemons of the federal court’s ruling and make lemonade.
They have already created a system by which any institution can submit a program-level cohort of graduates (as long as it has a minimum number of graduates in order to ensure privacy) and receive aggregate income data. Rather than letting this system sit on the shelf and gather dust while the gainful employment regulations wind their way through the courts, they should put it to good use. The Department of Education could open this system up and make it available to any institution that wants to receive hard income data on their graduates.
I’m not proposing a new regulation or a requirement that institutions use this system. It could be completely voluntary. Ultimately, it is hard to believe that any institution, whether for-profit or traditional, would seek to ignore this important data if it were available to them. Just as importantly, it is hard to believe that students wouldn’t expect an institution to provide this information if they knew it was available.
Historically, the only tool for an institution to understand the earnings of its graduates has been self-reported alumni surveys. While we at Capella did the best we could with surveys, they are at best educated guesswork. Now, thanks to gainful employment, any potential student who wants to get an M.B.A. in finance from Capella can know exactly what graduates from that program earned on average in the 2010 tax year, which in this case is $95,459. Prospective students can also compare this and other programs, which may not see similar incomes, against competitors.
For those programs where graduates are earning strong incomes, the data can validate the value of the program and drive important conversations about best practices and employer alignment. For those programs whose graduates are not receiving the kinds of incomes expected, it can drive the right conversations about what needs to be done to increase the economic value of a degree. Perhaps most importantly, hard data about graduate incomes can lead to productive public policy conversations about student debt and student financing across all higher education.
That said, the value of higher education is not only measured by the economic return it provides. For example, some career paths that are critical to our society do not necessarily lead to high-paying jobs. All of higher education needs to come up with better ways to measure a wide spectrum of outcomes, but just because we don’t yet have all those measurements doesn’t mean we shouldn’t seize a good opportunity to use at least one important data point. The Department of Education has created a potentially powerful tool to increase the amount of data around a degree’s return on investment. It should put this tool to work for institutions and students so that everyone can drive toward informed decisions and improved outcomes.
It should become standard practice for incoming college students or adults looking to further their education to have an answer to this simple question: What do graduates from this program earn annually? We welcome that conversation.
The liberal arts and sciences have no economic value. Let me repeat that: none, nada. Taught in the right spirit, they are useless from an economic point of view. They are designed in fact to be downright wasteful. The liberal arts’ ancient roots, after all, are from a world in which a few free men had the time -- the leisure -- to engage in study. It was for the elite. The purpose of the liberal arts in ancient times was to offer to the elite the knowledge, morals, and skills (like oratory) that they needed to determine what was good for individuals and the public, and to help achieve that good in society through citizenship.
In a democracy, however, we cannot afford to leave the liberal arts to the elite. In a society in which we expect all people to be effective citizens, all people need to have access to the liberal arts in order to have the knowledge and moral foundation that they need to think about what is a good life and a good society, and the skills necessary to help them work to achieve it here in our democracy. Today’s students need to know a lot about how the human and natural worlds work, and they need not just knowledge but the capacity to evaluate — that is, to determine the moral value of — different goals, ideas, and policies. This evaluation requires moving well beyond the economic calculus to questions of what is worth it and to understanding our cultural traditions. As Martha Nussbaum has put it, such an education is by definition not for profit.
There is also a second tradition that we have inherited from the ancient world, one more closely tied to Greece -- and Socrates and Plato -- than to the ideal of the Roman free citizen. In this framework, a liberal education is designed to help people seek truth, and to use truth to serve society. While distinct, it too is designed to develop human beings and citizens, not workers. Applied to a democratic society, it means that all citizens must be given opportunities to question their assumptions, to engage in inquiry to gain new insights about the nature of the world. Applied more broadly, such an approach to liberal education recognizes that the pursuit of knowledge develops our human capabilities and fosters our ability to engage with the world -- in work and in play -- with more depth. It too is not for profit.
Of course, in reality, the liberal arts are economically beneficial. They teach the high end “transferable skills” -- critical thinking, analytical ability, creativity, imagination, and the ability to learn new things -- that our economy needs, and without which we would not graduate students capable of innovation. That’s why China and other countries are now embracing the liberal arts even as we abandon them. The liberal arts are also the best preparation for advanced professional training in the “liberal professions” of law and medicine, as well as other fields, including business. Finally, since Thorstein Veblen, we have known that the liberal arts embody a certain kind of prestige that matters in a pecuniary culture. The liberal arts, therefore, may be the best bet for students to achieve long-term economic success.
All of these claims about the economic value of the liberal arts are probably true, but who cares? Not employers. In fact, Anthony Carnevale has concluded that the economic value of a college education depends highly on one’s major now that employers want graduates with specific technical skills (although this may in part reflect the different career goals of graduates with different majors rather than the inherent economic potential of the liberal arts). Certainly, many employers value their own liberal education and will continue to hire the graduates of our nation’s top liberal arts colleges and universities. But while employers no doubt want knowledgeable, thoughtful, critical, and creative employees, they do not want nor need these qualities in all their workers. Instead, increasingly, they want technicians.
Yet we continue to argue that the liberal arts should be defended for their economic value. Such defenses of the liberal arts may turn out to be their true downfall, because they leave us with no language to make clear what the liberal arts are worth. In fact, it means that we must evaluate the liberal arts by a criterion — their profitability — that not only is irrelevant to them but corrupts them, orienting them toward goals that are instrumental in nature and preventing them from serving their true humanistic and civic purposes. In fact, one recent essay has suggested that the liberal arts should be designed to foster entrepreneurs rather than human beings and citizens. If that is the goal of education, we don’t need the liberal arts at all. Instead, we can have everyone engage in entrepreneurial studies programs and abandon the study of chemistry, history, political science, anthropology, biology, or geology.
If our only god is money, we live in a sad society. A long time ago John Kenneth Galbraith pointed out in his book The Affluent Society that our narrow focus on marginal economic gains makes no sense in a society that is no longer facing scarcity. While we may not live in the kind of economic wonderland that marked Galbraith’s 1950s, we still live in an affluent society. While a vibrant economy is a public good, and while people need good-paying jobs, that is not all that we are about, and certainly not the heart of what collegiate education is about.
But how, then, to save the liberal arts if emphasizing their economic value debases them and may even prove to be a losing argument empirically? The answer is simple: remember the ancient ideal that the liberal arts serve human and civic purposes and are therefore designed for people with the leisure to study them. But, in a society committed to equality, we cannot permit only the elite to have access to the liberal arts. Instead we must democratize leisure by offering undergraduate college students the time and opportunity to study the liberal arts.
The way forward, then, is simple. Instead of seeing college as a private investment, we must consider it a public good. If we remember the generation that was educated after World War II, generous public support meant that they could afford -- economically -- to spend four years studying the subject that most interested or spoke to them, and then they took their education and did millions of things with it that helped us develop a richer society, not just in terms of wealth but in terms of knowledge, art, and citizenship. That generation could do so because they did not have to take on thousands of dollars in debt and to worry all the time about how to pay for it. They could do so because public support for their education -- meaning low tuition for students thanks to tax support for America’s colleges -- gave them the freedom -- the leisure -- to study.
The liberal arts are declining because today’s students do not have the leisure to study, much less to study hard. They are worried about their student debt and how to pay it off. They are working long hours at a job that should be spent engaged in study or conversation. They are told that they have to make their college degrees pay for themselves, and we have in turn robbed them of the freedom -- in the ancient sense -- that was the precondition for studying the liberal arts. Saving the liberal arts, then, requires restoring to students the freedom to engage in them.
Johann Neem is associate professor of history at Western Washington University.
As participation in higher education worldwide rises and geographic barriers and boundaries fall, collaboration on some postsecondary issues has increased. But most countries and regions still operate independently on many fronts, both purposefully (because countries want to go their own way) and less so, because of inadequate communication and cooperation. That fragmentation can be particularly vexing in areas such as quality assurance, and it is a major reason for a new endeavor announced Thursday by the Council for Higher Education Accreditation.
Through the new CHEA International Quality Group, the council -- which represents American colleges and universities that are accredited by agencies that it recognizes -- aims to bring together colleges, accreditors, quality assurance agencies and associations from around the world to work together on dealing with quality-related issues in higher education. CHEA itself has been active in international matters, setting aside part of its annual meeting for an international forum and working with entities such as the Organization for Economic Cooperation and Development and UNESCO on issues such as diploma mills.
But Judith S. Eaton, CHEA's president, said council officials believed that the "growth in worldwide activity of our institutions, through study abroad and branch campuses, and the expanding international activity of U.S. accreditors" -- as well as the explosion of issues such as cross-border education, for-profit higher education, and massive open online courses -- made this a logical time to expand its involvement. The council does not plan either to accredit institutions or to recognize international quality assurance agencies as it does U.S. accreditors.
"We're trying to create a forum in which we and our partners around the world can work together on quality assurance issues," she said. The new entity, which will be part of CHEA, plans to convene discussions, conduct research, share news and best practices, and provide consulting services on quality assurance issues.
When I first floated the idea of writing a weekly column from my perch as director of institutional research and assessment at my college, everyone in the dean’s office seemed to be on board. But when I proposed calling it “Delicious Ambiguity,” I got more than a few funny looks.
Although these looks could have been a mere byproduct of the low-grade bewilderment that I normally inspire, let’s just say for the sake of argument that they were largely triggered by the apparent paradox of a column written by the measurement guy that seems to advocate winging it. But strange as it may seem, I think the phrase “Delicious Ambiguity” embodies the real purpose of Institutional Research and Assessment. Let me explain why.
This particular phrase is part of a longer quote from Gilda Radner – a brilliant improvisational comedian and one of the early stars of “Saturday Night Live.” The line goes like this:
“Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next. Delicious Ambiguity.”
For those of you who chose a career in academia specifically to reduce ambiguity – to use scholarly research methods to discover truths and uncover new knowledge -- this statement probably inspires a measure of discomfort. And there is a part of me that admittedly finds some solace in the task of isolating statistically significant “truths.” I suppose I could have decided to name my column “Bland Certainty,” but – in addition to single-handedly squelching reader interest – such a title would suggest that my only role is to provide final answers – nuggets of fact that function like the period at the end of a sentence.
Radner’s view of life is even more intriguing because she wrote this sentence as her body succumbed to cancer. For me, her words exemplify intentional – if not stubborn – optimism in the face of darkly discouraging odds. I have seen this trait repeatedly demonstrated over the last several years in many of the faculty and staff members I know, as they have committed themselves to helping a particular student even as that student seems entirely uninterested in learning.
Some have asserted that a college education is a black box; some good can happen, some good does happen – we just don’t know how it happens. On the contrary, we actually know a lot about how student learning and development happens – it’s just that student learning doesn’t work like an assembly line.
Instead, student learning is like a budding organism that depends on the conduciveness of its environment; a condition that emerges through the interaction between the learner and the learning context. And because both of these factors perpetually influence each other, we are most successful in our work to the degree that we know which educational ingredients to introduce, how to introduce them, and when to stir them into the mix. The exact sequence of the student learning process is, by its very nature, ambiguous because it is unique to each individual learner.
In my mind, the act of educating is deeply satisfying precisely because of its unpredictability. Knowing that we can make a profound difference in a young person’s life – a difference that will ripple forward and touch the lives of many more long after a student graduates – has driven many of us to extraordinary effort and sacrifice even as the ultimate outcome remains admittedly unknown. What’s more, we look forward to that moment when our perseverance suddenly sparks a flicker of unexpected light that we know increases the likelihood – no matter how small – that this person will blossom into the lifelong student we believe they can be.
The purpose of collecting educational data should be to propel us – the teacher and the student – through this unpredictability, to help us navigate the uncertainty that comes with a process that is so utterly dependent upon the perpetually reconstituted synergy between teacher and student. The primary role of institutional research and assessment is to help us figure out the very best ways to cultivate – and in just the right ways – manipulate this process.
The evidence of our success isn’t a result at the end of this process. The evidence of our success is the process. And if we pool our collective expertise and focus on cultivating the quality, depth, and inclusiveness of that process, it isn’t outlandish at all to believe that our efforts can put our students on a path that someday just might change the world.
To me, this is delicious ambiguity.
Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. This essay is adapted from the first post on his new blog.
Teacher education has been under siege in the last few years, the first line of attack in the growing criticism and more aggressive regulation of higher education.
Most recently, the U.S. Department of Education proposed — in a highly contentious negotiated rule-making exercise — to use test scores of graduates’ students to evaluate schools of education, despite the warnings of leading researchers that such scores are unstable and invalid for this purpose. Furthermore, in an unprecedented move, the department would limit eligibility for federal TEACH grants to prospective teachers from highly rated programs, denying aid to many deserving candidates while penalizing programs that prepare teachers for the most challenging teaching assignments.
This was only the most recent example of how education reformers have made teachers and teacher education a punching bag, painting those in the entire field as having low standards and being unwilling to accept responsibility for the quality of their work.
However, teacher educators from across the country are stepping up to create new, more valid accountability tools. An important part of this effort is the spread of the edTPA, a new performance assessment process that examines — through candidates’ plans, videotapes of instruction, evidence of student work and learning, and commentary — whether prospective teachers are really ready to teach. As highlighted recently in The New York Times, the assessment focuses on whether teachers can organize instruction to promote learning for all students, including new English learners and students with disabilities, and how they analyze learning outcomes to create greater student success.
This new assessment was developed by a team of researchers and teacher educators at Stanford University, of which I have been privileged to be a part, working with teachers and teacher educators across the country. The American Association of Colleges for Teacher Education (AACTE) helped to coordinate higher education involvement. Ultimately, teacher educators and state agencies in 24 states and the District of Columbia formed a Teacher Performance Assessment Consortium (TPAC) to develop and test the assessment. Today, about 160 colleges of education are field-testing the assessment, with the goal of transforming initial licensure, improving teacher education, and informing accreditation.
This may be the first time that the teacher education community has come together to hold itself accountable for the quality of teachers who are being prepared and to develop tools its members believe are truly valid measures of teaching knowledge and skill. Unlike other professionals, teachers have historically had little control over the tests by which they are evaluated. This rigorous, authentic measure represents a healthy and responsible professionalization of teacher preparation.
The edTPA is built on the portfolio-based model teachers developed two decades ago through the National Board for Professional Teaching Standards, and on additional work by California educators since 2002, coordinated by staff at Stanford. Teacher educators from more than 30 traditional and alternative programs helped develop the Performance Assessment for California Teachers (PACT) as the basis for an initial license. The PACT is scored in a consistent fashion by faculty members, instructors, supervisors, cooperating teachers, and principals in partnership schools. It provides vivid evidence of what beginning teachers can do, as well as useful information for guiding their learning and that of the programs themselves.
The assessment puts aside the tired arguments about which pathways to teaching are better and, instead, evaluates candidates on whether they can meet a common standard of effective practice. Unlike most current teacher tests, scores on PACT have proven to predict the capacity of candidates to foster student achievement as beginning teachers.
California programs have found the assessment so helpful in guiding and improving their practice — and that of their candidates — that they have continued the work on their own dime, even when promised state funds disappeared. One California teacher educator put it this way: "This experience has forced me to revisit the question of what really matters in the assessment of teachers, which in turn means revisiting the question of what really matters in the preparation of teachers."
As a teacher educator in California who uses the PACT, I agree with this evaluation. It has focused our candidates and program on what it means to teach effectively and it has improved our collective work. We now rely on it as a central part of our ongoing program improvement efforts.
A national version of the assessment process was started as interest spread across the country. First, a teacher educator from the University of California at Santa Barbara moved to the University of Washington and took the PACT with him. Faculty at the University of Washington liked the assessment so much they adopted it and talked about it to others in the state, who also got engaged. Ultimately, the state of Washington proposed building a similar model to use for beginning licensure. California educators also got jobs in other states and took the idea with them. Teacher educators from other states asked to be part of the project and urged the National Council for Accreditation of Teacher Education as well as their own state agencies to look at edTPA because they believe it measures their work more accurately than many other approaches currently on the books.
Meanwhile, AACTE coordinated information sessions and conversations. Ultimately, a group of teacher educators from across the country decided to create a national version and recruited Pearson as an operational partner to manage the large number of participants. By the time the assessment was field-tested, interest had grown to 22 states, 160 institutions of higher education, and more than 7,000 teaching candidates participating in the TPA field test.
Demand for edTPA grew so rapidly that support was needed to deliver it to campuses and states that asked for it. Stanford chose Evaluation Systems, a long-time developer of state teacher assessments that is now part of Pearson, to provide support for administering the assessment. As the administrative partner for the National Board’s portfolio assessment as well, Pearson brought the experience, capacity, and infrastructure to deploy the edTPA to scale quickly, so that the field would not have to wait to see the benefits in the classroom.
During the field test, an instructor at a Massachusetts college made national news when she challenged the assessment as corporatization of the teacher education process that replaces the relationship between instructor and students. Nothing could be further from the truth. Instructors and supervisors continue to teach, observe, support, and evaluate candidates, as they always have. The assessment – which allows teachers to be evaluated authentically in their own student teaching or internship classrooms teaching curriculums and lessons they have designed – focuses attention on the kinds of things all beginning teachers need to learn: how to plan around learning goals and student needs, how to engage in purposeful instruction and reflect on the results; how to evaluate student learning and plan for next steps for individual students and the class as a whole.
Like assessments in other professions, such as the bar exam or the medical boards, the edTPA is a peer-developed process that evaluates how well candidates have mastered a body of knowledge and skills, and a tool that teacher educators and institutions of higher learning can use to develop their programs. It does not restrict or replace the judgment of professionals in designing their courses and supervising their candidates, as they always have. It adds information about the candidate's performance to supplement those judgments. The edTPA scorers are themselves experienced teacher educators and accomplished teachers in the same fields as the candidates being evaluated, many of them from the programs participating in the assessment.
In fact, the field test has engendered considerable excitement at most universities, where conversations about how to prepare teachers have deepened. Amee Adkins, a teacher educator at Illinois State University, says, "[edTPA] provides something long overdue in teacher education: a clear, concise, and precise definition of the core of effective beginning teaching. It takes us a step further than other professional licensure exams because it goes beyond knowledge and judgment and examines actual candidate performance."
Vanderbilt University’s Marcy Singer-Gabella notes that faculty at the eight Tennessee universities piloting the assessment say that working with edTPA has led to more productive conversations about teaching practices and how to develop them. She adds: "At Vanderbilt, where we have used [edTPA] data to make changes, our candidates are better prepared and more skilled, according to school principals and teachers."
And the candidates themselves report that the edTPA has helped them develop the habits and routines for planning, assessing, and adjusting instruction that allow them to succeed and keep learning as they teach. By comparison, as one put it, the teacher evaluation systems in their districts are “a piece of cake.”
In the context of the current debates about teacher education quality, it has been inspiring to see educators step up and accept the challenge to create something better, rather than merely complaining about narrow measures that do not reflect our highest aspirations. The best hope for significantly improving education at all levels of the system is for educators to take charge of accountability and make it useful for learning and improvement.
Linda Darling-Hammond is the Charles E. Ducommun Professor of Teaching and Teacher Education at Stanford University.
The American taxpayer has a huge stake in higher education accreditation. In order to access some of the $160 billion in federal student aid dollars, colleges and universities must be approved by a recognized regional or national accrediting body. In the absence of an alternative, the accreditation process has come to serve as the federal government’s primary quality control mechanism in higher education. Yet this process is largely hidden from public view and not well-understood.
That’s why the recent announcement from the Western Association of Schools and Colleges (WASC), one of the country’s six regional higher education accrediting bodies, that it will regularly make all of its accreditation reports available to the public is so important.
To those familiar with financial markets, product safety, environmental protection or a host of other sectors where public reporting is a given, it may seem puzzling that such an announcement is considered innovative. But when it comes to our colleges and universities, WASC’s initiative is downright revolutionary. That WASC is taking this worthwhile step ought to be applauded. That this step is only now taking place tells you everything you need to know about the sorry state of quality control and transparency in higher education.
Despite the high stakes for taxpayers, accreditation is opaque -- groups of faculty and administrators recruited from other colleges and universities visit the campus, assess its financial and academic health, and provide a report on whether the college should maintain its accreditation. Typically, this happens every five years. The colleges themselves must take time to engage in “self-study” and prepare reams of documentation — sometimes down to the number of volumes in the library. All of this is expensive: the provost of Princeton recently told a Department of Education panel that its most recent accreditation cost the university about $1 million.
What do we get for all of that time and money? Not much, at least in terms of quality control: few colleges ever lose their accreditation, and schools with low graduation rates, financial issues, or other problems often remain fully accredited. For example, WASC accredits a range of institutions, from elites like Stanford and UCLA, both of which graduate 90 percent or more of their students, to less prominent colleges like California State University, Dominguez Hills; Alliant International University; and San Diego Christian College, where graduation rates for BA-seekers hover around 30 percent. Other institutions on WASC’s roster, including Cogswell Polytechnic College, Vanguard University of Southern California and the California Institute of Integral Studies, have failed recent Department of Education “financial responsibility” audits.
And while accreditors may uncover such areas where institutions need to improve, these details are not routinely made public. Until WASC stepped up, none of the accrediting bodies systematically published the results of its reviews. Instead, most colleges and universities simply announce that they’ve passed another round of accreditation, while the occasional news item vaguely reports on colleges that are “on probation” or “at risk” of losing their accreditation. Otherwise, all accredited schools bear the same seal of approval, whether they have a sterling record of success or a troubled history.
Only in very rare instances do schools lose accreditation. Just this month, WASC rejected the for-profit Ashford University’s bid for renewed accreditation, based largely on what reviewers described as its high dropout rates. And here in the Washington, D.C., area, Southeastern University lost its accreditation in 2009 after a long stretch of probationary periods, threats, and scandal. As Kevin Carey reported in Washington Monthly in 2010, by the time it shuttered, Southeastern had only six full-time faculty to teach over thirty degree programs.
As anyone who has ever read an accreditation report can tell you, making these documents public will do little to help prospective students in the near term. You need a higher education glossary and a helping of patience to even begin to decipher the jargon. Even then, the results are often difficult to interpret, and almost impossible to use in a comparative way.
But WASC’s move is rhetorically important for what it signals to the insular, risk-averse, and often defensive culture of higher education. The days of hiding behind accreditation and benefiting from its imprimatur will slowly come to an end. Demands for better information about higher education quality and value -- whether defined in terms of student learning, labor market outcomes, or return on investment -- are growing from the statehouse to the White House.
Colleges, universities, and accrediting bodies that continue to resist this movement will find themselves unable to compete with those that embrace it. And while accreditors will rarely put a college out of business, armies of prospective students equipped with a clearer notion of quality and cost can do just that.
Andrew P. Kelly is a research fellow at the American Enterprise Institute. Mark Schneider is a vice president at the American Institutes for Research and a visiting scholar at AEI.