Submitted by Paul Fain on February 3, 2015 - 3:00am
California's community colleges and the California State University System continue to make "notable progress" in creating smoother transfer pathways for students, according to a new report from the state's Legislative Analyst's Office (LAO). The two systems have been working to comply with 2010 legislation that requires the creation of associate degrees for transfer, which are designed to help clear away some of the "maze of academic requirements that vary across campuses," the report said. The LAO recommends setting specific reporting and data requirements to make sure the public institutions stay on track.
Submitted by Paul Fain on January 29, 2015 - 3:00am
Florida is one of several states where legislatures are exploring dramatic approaches to reforming developmental (remedial) education.
A high percentage of students who enroll at the 28 state colleges (formerly the community colleges) in the Florida College System have remedial needs, and only a small fraction of those students actually earn college credentials.
To try to combat this problem, the state’s Legislature in 2013 passed a new law mandating that the 28 state colleges provide developmental education that is more tailored to the needs of students. As reported earlier by Inside Higher Ed, the policy gives students much more flexibility in terms of whether they participate in developmental education and what options they choose if they do decide to participate.
Some concerns have emerged since the Florida reform was implemented in the fall of 2014. For example, The Chronicle of Higher Education described “headaches” such as a drastic decline in students enrolling in developmental education courses, challenges faculty members face and other issues regarding student decisions and choices.
It’s clear that the state’s developmental policy reform could have a long-lasting influence on student success in Florida and beyond. The Florida reform would be particularly relevant if the proposal of two years of free community college by President Obama ever becomes a reality. To learn more about it, the Center for Postsecondary Success (CPS) at Florida State University has been conducting a comprehensive evaluation of the implementation and effects of the policy.
The Florida Experiment
The law drastically changes the placement and instructional practices in developmental education. It prohibits requiring placement testing or developmental education for students who entered ninth grade in a Florida public school in the 2003-2004 school year and after, provided the student earned a standard high school diploma. The law also exempts active-duty members of the military from required placement testing and developmental coursework. It does, however, allow exempted students to choose to be tested and/or to take developmental education once advised of their options.
Students now have several new options in terms of developmental education delivery methods that are designed to move them quickly into college credit, using corequisite instruction, modules and tutoring. The new strategies include: (1) modularized instruction that is customized and targeted to address specific skills gaps; (2) compressed course structures that accelerate student progression from developmental instruction to college-level coursework; (3) contextualized developmental instruction that is related to metamajors (a collection of programs of study or academic discipline groupings that share common foundational skills); and (4) corequisite developmental instruction or tutoring that supplements credit instruction while a student is concurrently enrolled in a credit-bearing course.
The legislation does not mandate the specifics around each option and therefore allows the individual campuses in the system some flexibility in regard to the form and delivery of each option.
Challenges and Opportunities
The reform strategies underway are sweeping.
Because a key intent of the reform is to provide greater flexibility in determining who needs to take developmental education courses, it is not surprising to observe a sizable drop-off in students enrolling in them. The drop-off itself is not necessarily a concern, but we will need to closely monitor students who choose not to opt in to developmental education programs and compare their outcomes to those of students who did.
Research has indicated that developmental education may not be that helpful for borderline students, thus suggesting flexible placement may increase student success by not holding back students just shy of the cut score. However, a large number of students who would have scored far below traditional cutoff scores and instead opt in to college-level courses may present new and difficult challenges to institutions and instructors, and may also jeopardize students’ chances of succeeding in college. Such a scenario could be compounded depending on how students of different backgrounds make decisions.
While some perceive the increased student choice to be positive, others question whether developmental education students have the preparation and wisdom to make informed choices about course options. Students themselves generally appreciate the increased choice provided by the legislation, though some question whether their peers will always make appropriate decisions. Colleges and universities have ramped up advising and student support services, which could be key to student success and to the reform as a whole. Advising students to make good choices, and students’ following that advice, will be critical to student success in this new policy environment, as will providing the necessary support to students along the way.
With greater flexibility in placement, the developmental education reform could alter the composition of classrooms across college campuses, possibly also shaping the structure and culture of teaching and learning on campus due to the wider range of student academic preparation in both developmental and college-level classes. Faculty voices indicate this is already the case. A promising sign is that faculty members are designing customized instruction tailored to students based on their assessment of student preparation. This is consistent with the substantial literature on effective teaching and learning through meeting the needs of learners. Of course, this customization increases the work of faculty members, but if there is a way to support faculty adaptation to the new classroom reality, student success may be well in reach.
In anticipation of both student and faculty concerns, most campuses planned to increase the student support services they provide. A content analysis of the 28 implementation plans indicated that the colleges planned to ramp up advising as well as extensive training and professional development for front-line personnel. In addition, support services such as tutoring and success courses feature widely in colleges’ implementation plans.
An earlier survey of college administrators also indicated a whole-campus approach to implementing the new policy. There is fairly wide agreement that the reform reflects a spirit of innovation and offers an opportunity to solve an old problem in new ways, and colleges mobilized to respond to the new law and increased intra-institutional collaboration in developing strategies. Each campus has an implementation team that includes the key constituents on campus so that perspectives from all can be shared and considered.
Learning From the Experiences
The Florida experiment is a state response to a persistent problem. It marks a drastic departure from the traditional developmental education model, which has not been working well. The “headaches” reported in The Chronicle from the early stage of implementation are not unexpected. However, the issues raised should not be ignored. In fact, we should keep a close eye on those issues and on student outcomes.
The law allows institutions to be responsive to their individual student populations. But because there are variations in institutional reality based on student characteristics, infrastructure and previous experiences with developmental education, some colleges may be ahead of the game while others may be struggling to catch up, resulting in different reactions to the reform. While some colleges embrace it, others may have some reservations. The state and other interested parties should provide assistance to help struggling colleges to get up to speed.
The success of the reform depends on a multitude of players and factors. It depends on students to make the right decisions for themselves; it depends on practitioners and administrators to successfully rally the troops on the ground to implement the critical components called for by the new law; it depends on faculty members to deliver courses that meet student needs; it depends on advisers to effectively advise students and support services staff members to provide timely and needed support to the students along the way; it also depends on policy makers to create favorable policy environments for those on the ground to do the work to the best of their expertise and capacity.
The bold reform strategies in developmental education in Florida could blaze a new trail and offer other states valuable lessons. It is easy to point fingers at K-12 education for the lack of preparation of college students. While it is important to continue to improve the quality of K-12 education for all students, it is also important to consider the ways the higher education system can improve student success. Given the nature of the reform and the multiplicity of issues, strong and sustainable leadership at both the state and campus level is required in order for the reform to stand a chance of delivering results. At least six steps appear to be warranted to determine whether such a broad reform is capable of achieving its intended outcome.
First, as for any policy change, it will take time to see results. Is there willingness to wait for a period of time to see the impacts of the current policy changes on student success, given the likely pressures from various sources? If not, we may never know whether such a reform is able to deliver.
Second, to assess the impact of the reform on students and continuously improve the policy, there is a need for credible evidence. The research community needs to contribute to the conversation by conducting valid research to understand the perspectives from all concerned and affected, and assess the impact of the new policy on outcomes related to student success.
Third, practitioners and administrators need to be open-minded and provide feedback on what works and what may be needed on the ground. On the one hand, they need to challenge conventional practices that have been in place for a long time. Fortunately, the early signs indicate they indeed embrace the idea of innovation. On the other hand, they should demand the support they need to ensure the new initiatives will be successfully put in place.
Fourth, policy makers should use the evidence and results to guide the policy-making and -remaking process. Just as practitioners within community colleges need to be open-minded in implementing reform, policy makers need to be open-minded and honestly consider feedback to adjust the policy accordingly.
Fifth, funding agencies should be keenly attentive to what is really going on in educational reform and put their resources behind research on real-world problems. Instead of waiting for perfect research, they should strike a good balance in pursuing the rigor and relevance of the research to promptly respond to the needs on the ground. Otherwise, they may end up being empty-handed in the pursuit of connecting research, policy and practice.
Finally, credible and timely research has the potential to generate valuable evidence to inform policy and practice, and it can be accomplished by collaboration among researchers, practitioners, state agencies and funding organizations. After all, it is our shared responsibility to optimize the educational environment so that our students can succeed, reach their full potential and realize their dreams.
Shouping Hu is the Louis W. and Elizabeth N. Bender Endowed Professor and the founding director of the Center for Postsecondary Success (CPS) at Florida State University.
Submitted by Paul Fain on January 28, 2015 - 3:00am
ACT, the nonprofit testing giant, this week began its third annual national career and college readiness campaign. The 34 participating states will recognize a community college and an employer for their efforts to help more students make the transition to college and the workforce.
Submitted by Paul Fain on January 15, 2015 - 3:00am
City College of San Francisco's regional accreditor has granted the college a two-year restoration of its accreditation status. The Accrediting Commission for Community and Junior Colleges (ACCJC) in 2013 moved to revoke the community college's accreditation, citing financial mismanagement and a wide range of other problems. That would have been a death blow to the huge institution, which would have lost eligibility for federal and state student aid programs.
The college's supporters and the accreditor have waged a politicized battle during the last 16 months. The commission has come under fire during the process, and received a reprimand from the U.S. Department of Education. Then, last June, the commission offered a reprieve to City College by allowing it to apply for two years to fix the identified problems. The college applied for that option, receiving it last week, according to the commission.
Both sides in the dispute are awaiting a ruling by a San Francisco Superior Court judge on a lawsuit that Dennis Herrera, the city attorney in San Francisco, filed last year. Herrera is seeking to block the accreditor's actions, accusing it of political bias, improper procedures and conflicts of interest. The judge is expected to make a decision this week.
President Obama has jumped on the bandwagon, which started in Tennessee, of making community college tuition-free. The proposal is his most recent effort to increase the prominence of the federal government in higher education. While giving higher education more federal visibility may be a good thing, making community colleges tuition-free is also the latest in a series of proposals in which the administration seems to have decided that sound bites trump sound policy.
The cycle began in the administration’s early days when it declared its primary goal in higher education was to “re-establish” the U.S. as having the world’s highest attainment rate -- the proportion of working adults with a postsecondary degree of some sort.
Never mind that the U.S. has not had the highest rate in the world for at least several decades and that achieving such a distinction now is well nigh impossible given where some other countries are. And also ignore the fact that some countries which have overtaken us, such as South Korea and Japan, have done so in large part because they are educating an increasing share of a declining number of their young people – a demographic condition we should want to avoid at all costs.
In this effort to be Number One in higher education, the Obama administration is continuing a trend in K-12 education that began in the Clinton and George W. Bush administrations in which we as a nation set totally unrealistic goals to be achieved after the incumbent administration has left office. It is not clear why we would want to expand this practice into higher education, but that’s what we are doing.
The administration also in its first year pushed for a remarkable expansion of Pell Grants as part of the economic stimulus package of 2009. It was certainly good to augment Pell Grants in the midst of a severe recession when so many students were having a tough time paying their college bills. But rather than doing it on a temporary basis by increasing awards for current recipients, the administration pushed for and the Congress agreed to a permanent legislative change that increased the number of recipients by 50 percent and doubled long-term funding.
This is the equivalent of changing tax rates in the middle of a recession rather than providing a rebate. It certainly provided more aid for many more students – nearly one in two undergraduates now receives a Pell Grant. But the expansion in eligibility means less aid is available for the low-income students who most need it. And few seem worried that Pell Grant increases may have led many institutions that package aid to reduce the grants they provide from their own funds to Pell recipients, as is reflected in the fact that institutional aid increasingly goes to middle-income students.
The Obama administration’s recent effort to develop a rating system for postsecondary institutions is another example of politics triumphing over sound policy. The rhetoric goes to the noble notion of making institutions more productive and more affordable, but the metrics the administration has proposed using are unlikely to produce the desired result or may well have the unintended effect of producing bad results.
Much more troublesome, the administration’s ratings proposal would penalize students based on where they decide to enroll, as those going to colleges that don’t perform well would get less aid. This is illogical as well as counterproductive. Thankfully, there seems little chance that this proposal would be adopted, but one is left to wonder why it was suggested and pushed when it would do little to address the many real challenges facing American higher education, such as chronic inequity and unaffordability.
Which brings me to the most recent proposal by President Obama – to make community colleges tuition-free. At this stage, we know relatively little about what is being proposed other than that it is modeled on what was done in Tennessee where state lottery funds (not a very good federal model) were used to ensure that students with good grades would not have to pay tuition to go to community college. But since there are so few details as to how this tuition-free package would be structured, there are more questions regarding the President’s proposal than there are answers. These include:
Who will benefit and who will pay? If the administration were to follow the Tennessee plan, current Pell Grant recipients would largely not benefit, as their Pell Grant awards fully cover the cost of tuition at most community colleges throughout the country. So beneficiaries would disproportionately be middle-class students, most of whom can afford the average annual community college tuition of about $3,300, just as has been the case for the Tennessee plan.
The administration to its credit seems to recognize this potential lack of progressivity, and its spokesmen have declared (to Inside Higher Ed) that the new benefits will be on top of what Pell Grant recipients currently receive. This could be an avenue for a big step forward in federal policy were we to recognize that Pell Grants are largely for living expenses for students whose families cannot afford to pay those expenses, but it means that the federal costs of implementing such a plan will be substantial, probably far more than the $60 billion in additional costs over 10 years now being suggested.
Also lost in the enthusiasm about making community colleges tuition-free is the reality that the biggest bills for most students are the cost of living while enrolled and the opportunity cost of leaving the job market to enroll in school on more than an occasional basis. Also lost in the hubbub is the question of how these benefits are going to be paid for. This key financing question seems largely unanswered in the administration’s explanation thus far.
What would happen to enrollments in other higher education institutions? Advocates for the Tennessee Promise talk about how it has already boosted enrollments in community colleges. There seems to be little consideration, though, of whether this might come at the expense of enrollments in other colleges and universities. The Obama administration clearly prefers that students go to community colleges rather than for-profit trade schools, but it seems to have little concern that offering more aid for students enrolling in community colleges will have any adverse effect on enrollments in more traditional four-year institutions -- including historically black colleges that could ill afford the dropoff in enrollments.
But federal and state officials have an obligation to recognize that enrollments in higher education are not unlimited and that providing incentives for students to enroll in one sector means that enrollments in other sectors are likely to decline. Is the next step for the federal government to propose a program of support for those institutions that cannot afford to wait for all those new community college students to transfer in two or three years to fill their now empty seats?
Why would community colleges participate? Like many other federal and state policy initiatives, the president’s proposal reflects a tendency to think only in terms of demand and to believe that price reductions will inevitably result in enrollment increases. But the economic reality is that good policy must take into account institutional behavior as well, and it is not at all clear why community colleges would change their behavior in light of the Obama proposal. Under the Obama plan the federal and state governments would replace funds that families currently spend or loans that students currently borrow for tuition. The likely result of such a policy would be more students enrolling in already overcrowded community colleges with little or no additional funds provided to community colleges to educate them.
If one truly wants to improve community college financing, a better approach would be one in which governments recognize the additional costs entailed in enrolling additional students and try to help pay for those costs. But in the absence of such a proposal, the current Obama plan seems more of the same – more requirements but no more money. As a result, it is hard to understand the enthusiasm of the community college and other national associations for the president’s plan.
Why would states participate? It’s also not immediately clear why states would participate in the Obama plan as it is aimed primarily or entirely at changing how tuition is financed. As a result, it really would not get at the majority of the community college financing iceberg – what states and localities spend in support of every student who enrolls. So the question remains: why would states choose to participate in a plan that obligates them to meet a series of new requirements AND pay for one-quarter of tuition costs in addition to still paying what they do now for operating subsidies?
In sum, an analysis of what we know of the president’s plan is part of a troubling pattern that seems to characterize our higher education policy debates these days. Political considerations trump good policy. The interests of low-income students get second billing to middle class affordability, or no billing at all. Not enough attention is paid to how things actually would work or why institutions or states would decide to participate.
It all goes to show that, as the economist Milton Friedman famously observed, “there is no such thing as a free lunch.” One of the problems with the Obama administration’s continuing enthusiasm for higher education policy initiatives is that it doesn’t seem to recognize this basic economic reality.
Arthur M. Hauptman is a public policy consultant specializing in higher education policy and finance. This is the first in a series of articles about how federal and state higher education policies might be changed to produce greater equity, efficiency and effectiveness.
The Lumina Foundation and Indiana University’s Center for Postsecondary Education will be taking over the important Carnegie Classification of Institutions of Higher Education, from the Carnegie Foundation for the Advancement of Teaching. Lumina announced that its Degree Qualifications Profile (DQP) will inform the 2015 edition of the classification. This development is yet another step away from the original intent of the classification -- to provide an objective and easy-to-understand categorization of American postsecondary institutions.
In recent years, the Carnegie Foundation made its categories more complex: in part to suit the foundation’s specific policy orientations at the time, and in part to reflect the increased complexity of higher education institutions. As a result, the classification became less useful as an easy yet reasonably accurate and objective way to understand the shape of the system, and the roles of more than 4,500 individual postsecondary institutions.
Among the great advantages of the original classification were its simplicity and its objectivity, and the fact that it did not rank institutions but rather put them into recognizable categories. Unlike the U.S. News and World Report and other rankings, the Carnegie Classification did not use reputational measures—asking academics and administrators to rank competing colleges and universities. It relied entirely on objective data.
It is not clear how the classification’s new sponsors will change its basic orientation, and its new director says that the 2015 version will not be fundamentally altered. Yet, given Lumina’s strong emphasis on access, equity, and degree completion, as well as designing a new national credential framework — highly laudable goals of course — it is likely that the classification in the longer term will be shaped to be aligned with Lumina’s policy agenda, as it was more subtly changed in its later Carnegie years.
The original Carnegie Classification contributed immensely to clarifying the role of postsecondary institutions and made it possible for policy-makers as well as individuals in the United States and abroad to basically understand the American higher education landscape as a whole and see where each institution fit in it. The classification was also quite useful internationally — it provided a roadmap to America’s many kinds of academic institutions. An overseas institution interested in working with a research university, a community college, or a drama school could easily locate a suitable partner. We are likely to lose this valuable resource.
A Historical Perspective
The classification dates back to 1973, when the legendary Clark Kerr, having devised the California Master Plan a decade earlier and leading the Carnegie Commission on Higher Education, wanted to get a sense of America’s diverse and at the time rapidly expanding higher education landscape. The original classification broadly resembled Kerr’s vision of a differentiated higher education system, with different kinds of institutions serving varied goals, needs, and constituencies. It included only five categories of institutions — doctoral granting, comprehensive universities and colleges, liberal arts colleges, two-year colleges and institutes, and professional schools and other specialized institutions, along with several subcategories.
Because the classification was the first effort to categorize the system, it quickly became influential — policy-makers valued an objective, data-based categorization of institutions and academic leaders found it useful for understanding where their own institutions fit. The classification had the advantage of simplicity, and its sponsor was trusted as neutral. Although the classification was not a ranking — it listed institutions by category in alphabetical order — many came to see it in competitive terms. Some universities wanted to join the ranks of the subcategory of “research university–I,” those institutions that had the largest research budgets and offered the most doctoral degrees — and were overjoyed when their institution was listed in that category. Similarly, the most selective liberal arts colleges were in “liberal arts colleges–I,” and many wanted to join that group. Over time, the classification became a kind of informal measure, if not of rank, at least of academic status.
Fiddling and Changing
The classification’s categories and methodology remained quite stable over several decades of major transformation in American higher education. In 2005, with new leadership at the Carnegie Foundation, major changes were introduced. Foundation leaders argued that the realities of American higher education required rethinking the methodology. It is also likely that the foundation’s focus changed and it wanted to shape the classification to serve its new orientation and support its policy foci. The foundation revised the basic classification and added new categories such as instructional programs, student enrollment profiles, and others. The classification became significantly more complex, and over time became less influential. People found that the new categories confused the basic purpose of the classification and introduced variables that did not seem entirely relevant. The basic simplicity was compromised. Indeed, people still refer to “Carnegie Research 1” (top research universities) even though that category has not existed in the Carnegie lexicon for more than a decade.
There may well be more fiddling — the U.S. federal government’s desire to rank postsecondary institutions by cost and degree completion rates may add a further dimension to the enterprise. A further dilemma is the role of the for-profit higher education sector — these entities are fundamentally different in their orientations and management from traditional non-profit institutions — as are the new online degree providers. Should these new additions to the higher education landscape be included in the classification? These elements will contribute to “classification creep” — a bad idea.
What Is Really Needed
It is surprising that, in the four decades since Clark Kerr conceptualized the Carnegie Classification, no one has stepped forward to provide a clear and reasonably objective and comprehensive guide to the more than 4,500 postsecondary institutions in the United States. Resurrecting the basic purpose and organization of Kerr’s original Carnegie Classification is not rocket science, nor would it be extraordinarily expensive.
It is of course true that postsecondary education has become more complex. How would one deal with the for-profit sector? Probably by adding a special category for them. Many community colleges now offer four-year bachelor’s degrees, but their basic purpose and organization have not essentially changed. There are a larger number of specialized institutions, and many colleges and universities have expanded and diversified their degree and other offerings. Technology has to some extent become part of the teaching programs of some postsecondary institutions — and the MOOC revolution continues to unfold. Research productivity has grown dramatically, and research is reported in more ways. Intellectual property of all kinds has become more central to the academic enterprise — at least in the research university sector.
Yet, the basic elements of the original classification — those that help to determine the main purposes and functions of postsecondary institutions — remain largely unchanged, if somewhat more complicated to describe. The key metrics are clear enough:
Types of degrees offered
Number of faculty, full-time and part-time
Income from research and intellectual property
Internationalization as measured by student mobility
A few more might be added — but again, simplicity is the watchword.
The types of institutions — six main and eight major subcategories — seem about right. These might be expanded somewhat to accommodate the growth in complexity and diversity of the system. Later iterations confusingly expanded the categories, in part to reflect the policy and philosophical orientations of the foundation. The basic purpose of the classification will be best served by keeping the institutional typology as simple and straightforward as possible.
While it is clear that these metrics may not provide a sophisticated or complete measure of each institution — and they require additional definitions — they will provide basic information that makes reasonable categorization possible. They lack the philosophical and policy orientations that have crept into the Carnegie Classification in recent years, and they return the enterprise to its original purpose — describing the richness, diversity, and complexity of the American higher education landscape.
Philip G. Altbach is research professor and director of the Center for International Higher Education at Boston College.
“As a nation, we have to make college more accessible and affordable and ensure that all students graduate with a quality education of real value.”
--Secretary Arne Duncan, December 19, 2014
With the release of the Obama administration’s much-anticipated framework for rating the nation’s colleges and universities, commentators already are weighing in on the yawning gulf between the stated intention of ensuring “a quality education of real value” and the severe limitations of the metrics being considered. While the proposed college ratings system can and should expose some truly bad institutions that don’t deserve to receive federal support, the ratings framework by design presents a severely limited picture of how individual colleges and universities serve students and the nation. Regardless of whether one judges the proposed ratings data to be clarifying or misleading, the fact remains that the most important outcome of higher education — the impact a college or university has on student learning outcomes — is completely missing from the federal ratings framework.
American higher education urgently needs a college learning assessment system, but not one that equates student learning with disciplinary knowledge alone. Rather, it needs a way to account for the higher-order capacities and skills that are the hallmarks of a liberal education. The ordinary citizen will very reasonably assume that the college ratings system the federal government is now poised to promote does provide the needed evidence on college learning and quality. (Secretary Duncan himself seems to assume this, as the quote above makes clear.)
But the ordinary citizen will be wrong in this assumption. The proposed college ratings system does not, in fact, provide any evidence at all about the quality of student learning. By design, the federal ratings system is focused carefully and exclusively on data related to who enrolls in college, institutional affordability, and employment at a living wage after graduation.
What then should we do about the quality of learning challenge? What America absolutely does not need the federal government to do — and what the administration has so far very prudently and thoughtfully refrained from doing — is to create a national, federally devised and controlled system that would specify what the learning goals of college should be and then assess whether students are achieving them. Nonetheless, the public does need to know how well colleges, universities, and community colleges are doing in providing the kinds of learning that contribute directly to students’ success beyond graduation.
Under established law, private college and university boards of trustees and public college and university state system governing arrangements rightly determine the missions of individual higher education institutions, and through longstanding shared governance arrangements faculty and institutional leaders set the goals for student learning on individual campuses with the needs and goals of students and of the nation very much in mind. Yet there is wide recognition — especially among America’s employers, but also within higher education itself — that far too few students graduate from college well-enough prepared for success in work, civic participation and democratic citizenship, and life in the 21st century.
American higher education must do much better in both assessing and improving learning.
And, on this front, there is genuinely good news to report. This year, far away from the ratings furor, educators themselves are taking the lead in developing the kind of learning assessments the public deserves from higher education. The VALUE (Valid Assessment of Learning in Undergraduate Education) initiative of the Association of American Colleges and Universities (AAC&U) represents an important step forward — one that has at its core not only the assessment of student learning, but also the creation of a platform for providing institutions with direct feedback to support continuous quality improvement in teaching and learning. Developed in 2007 through a national collaboration of faculty, institutional, and state-system leaders along with content knowledge and student learning experts, the VALUE approach to assessment has since gained acceptance with remarkable speed.
This year, building on this foundation, AAC&U, the State Higher Education Executive Officers Association (SHEEO), nine state systems, and 85 public and private institutions are engaged in a major proof of concept study designed to demonstrate the different direction the VALUE approach represents both for assessing learning outcomes and for providing useful feedback to educators about strengths and needed improvements in student performance. The states working in concert with AAC&U and SHEEO are Connecticut, Indiana, Kentucky, Massachusetts, Minnesota, Missouri, Oregon, Rhode Island, and Utah. Private liberal arts institutions in additional states also are contributing to the study.
Under the VALUE approach, rubrics — common across participating institutions — are used rather than standardized tests, and scores are based on faculty judgments about actual student work. Specifically, graded student work products that show what a student knows and can do — an essay, a piece of creative writing, a lab report, an oral presentation — are evaluated and scored by faculty members (not those who originally assigned and graded the work product) against a rubric that describes multiple dimensions of what it means to do critical thinking, quantitative reasoning, integrative reasoning, or any of the other forms of higher-order learning for which the VALUE rubrics describe achievement at different levels. The exciting promise of this work is that higher education itself is advancing an approach to assessment that is meaningful and accessible to faculty, students, and higher education stakeholders alike.
The VALUE rubrics were initially created by faculty members, and they reflect educators’ shared judgments about both the substance and the quality of student learning outcomes. Teams of faculty and academic professionals from more than 100 campuses across the country contributed to the development of these VALUE assessment rubrics for each of 16 liberal learning outcomes: inquiry and analysis, critical thinking, writing, integrative learning, oral communication, information literacy, problem solving, teamwork, intercultural knowledge, civic engagement, creative thinking, quantitative literacy, lifelong learning, ethical reasoning, global learning, and reading. These outcomes are important to the education of all college students, whether in two-year or four-year institutions, liberal arts or pre-professional programs, online or in-person courses, and regardless of institutional mission.
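The scoring mechanics described above — a second faculty reader evaluating a graded work product, dimension by dimension, against a rubric with defined achievement levels — can be sketched in code. This is a minimal, hypothetical illustration, not AAC&U's actual data model; the dimension names paraphrase the published VALUE critical thinking rubric, and the four-level scale (1 = benchmark, 2–3 = milestones, 4 = capstone) follows the rubric's structure.

```python
from statistics import mean

# Paraphrased dimensions of the VALUE critical thinking rubric;
# a real deployment would carry the full level descriptors as well.
CRITICAL_THINKING_DIMENSIONS = [
    "explanation of issues",
    "evidence",
    "influence of context and assumptions",
    "student's position",
    "conclusions and related outcomes",
]

def score_work_product(scores: dict) -> dict:
    """Validate a second reader's per-dimension levels and summarize them.

    Each dimension is scored 1 (benchmark) through 4 (capstone).
    """
    for dim in CRITICAL_THINKING_DIMENSIONS:
        level = scores[dim]
        if not 1 <= level <= 4:
            raise ValueError(f"{dim}: level must be 1-4, got {level}")
    return {
        "dimensions": scores,
        "mean_level": mean(scores[d] for d in CRITICAL_THINKING_DIMENSIONS),
    }

# Hypothetical second-reader scores for one student essay.
essay_scores = {
    "explanation of issues": 3,
    "evidence": 2,
    "influence of context and assumptions": 3,
    "student's position": 3,
    "conclusions and related outcomes": 2,
}
summary = score_work_product(essay_scores)
```

Keeping the per-dimension scores, rather than collapsing to a single number, is what lets the approach feed the kind of diagnostic feedback to faculty that the article describes.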
But the VALUE approach offers more than just a way to assess student learning. It is itself potentially a “high-impact practice” that will lead to greater student persistence and completion and to a reduction in the achievement gap between white students and disadvantaged students of color. The VALUE rubrics show students what excellence with regard to a particular learning goal looks like, and they let students see where they are on the path toward excellent performance. When faculty talk with students about their work and how it was scored, they are providing students with precisely the kind of “frequent, timely and constructive feedback,” “interactions with faculty ... about substantive matters,” and “structured opportunities to reflect on and integrate learning” that is characteristic of high-impact practices as George Kuh has defined them in his influential reports. In addition, AAC&U has learned already from campuses piloting the use of VALUE rubrics that, after initial experiences with the rubrics, faculty come together to develop assignments that directly address higher-order liberal learning skills — especially evidence-based reasoning — rather than lower-order skills such as description, summary, and paraphrase. None of this happens when a student is sent his or her score on a standardized test. This feature of VALUE, above and beyond its great utility as an assessment system, is responsible for its already very wide and growing support in colleges, universities, and state systems nationally.
What the federal government could and should do, even as it develops and tests its new ratings system, is to remind the nation, over and over, that student acquisition of the knowledge and skills college graduates need is the primary and most critical public purpose for which colleges and universities are chartered. Hence, the federal government should say that assessing what college students know and can do must be a very high institutional — and, for public institutions, institutional and state-system — priority.
While the federal government should not seek to take responsibility for this assessment, it can and should remind those properly responsible that the quality and assessment of student learning — not just access, completion, and non-learning outcomes — must become a top priority.
At the very least, the U.S. Department of Education should publicly be calling attention to and rooting for the success of state- and institution-driven efforts like VALUE that have national potential. But it also could, through existing federal grant programs such as the Fund for the Improvement of Postsecondary Education (FIPSE) or through Department of Education contracts, create incentives for institutions and state systems to adopt new assessment approaches by offsetting temporary institutional “ramping-up” costs or providing financial support for the necessary infrastructure to allow initiatives like VALUE to become functional nationwide.
This is how public-private partnerships should work: investing in promising ideas and facilitating their testing as they develop. Both at the federal and state levels, public policy can be an enabler for the radically better approach to assessment that VALUE represents.
So even as we debate what’s right or wrong with the ratings, let’s remember that advancing accountability in higher education ultimately needs to include what students are learning.
Carol Geary Schneider is president of the Association of American Colleges and Universities. Daniel F. Sullivan is president emeritus of St. Lawrence University and chair of the AAC&U LEAP Presidents’ Trust.
Today, leaders of colleges and universities across the board, regardless of size or focus, are struggling to meaningfully demonstrate the true value of their institutions for students, educators and the greater community, because they can't really prove that students are learning.
Most use some type of evaluation or assessment mechanism to keep “the powers that be” happy through earnest narratives about goals and findings, interspersed with high-level data tables and colorful bar charts. This, however, is not scientific, campuswide assessment of student learning outcomes aimed at valid measurement of competency.
The "Grim March" & the Meaning of Assessment
Campuswide assessment efforts rarely involve the rigorous, scientific inquiry about actual student learning that is aligned from program to program and across general education. Instead, year after year, the accreditation march has trudged grimly on, its participants working hard to produce a plausible picture of high “satisfaction” for the whole, very expensive endeavor.
For the past 20-plus years, the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys. Institutional research personnel have diligently combined, crunched and correlated this data with other mostly indirect measures such as retention, enrollment and grade point averages.
Attempts are made to triangulate with samplings of alumni and employer opinions about the success of first-time hires. All of this is called “institutional assessment,” but it does not produce statistical evidence from direct measurement demonstrating that the university is responsible for students’ skill sets based on instruction at the institution. Measurement methods like chi-square tests or inter-rater reliability, combined with a willingness to assess across the institution, can demonstrate that a change in student learning is statistically significant over time and is the result of soundly delivered curriculum. This is the kind of “assessment” the world at large wants to know about.
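Inter-rater reliability, one of the measurement methods named above, can be illustrated with Cohen's kappa, a standard statistic for agreement between two raters beyond what chance would produce. The sketch below is hypothetical: the two raters' scores are invented for illustration, and the stdlib-only implementation is one common formulation of kappa, not any particular institution's tooling.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the two raters agree exactly.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical faculty raters scoring the same 10 student essays
# on a 4-level rubric (1 = lowest, 4 = highest).
a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
b = [3, 2, 3, 3, 1, 2, 3, 4, 2, 2]
kappa = cohens_kappa(a, b)  # ~0.71: substantial agreement
```

A kappa near 1 indicates that two readers apply the rubric consistently, which is what gives cross-institutional scores their credibility; low kappa signals the rubric or rater training needs work before the scores mean anything.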
The public is not satisfied with inferentially derived evidence. Given the cost, they yearn to know if their sons and daughters are getting better at things that matter to their long-term success. Employers routinely stoke this fire by expressing doubt about the out-of-the-box skills of graduates.
Who Owns Change Management?
Whose responsibility is it to redirect the march to provide irrefutable reports that higher education is meeting the needs of all its stakeholders? Accreditors now wring their hands and pronounce that reliance on indirect measures will no longer suffice. They punish schools with orders to fix the shortfalls in the assessment of outcomes and dole out paltry five-year passes until the next audit. They will not, however, provide sound, directive steps for the marchers about how to systematically address learning outcomes.
How about the government? The specter of more third-party testing is this group’s usual response. They did it to K-12 and it has not worked there either. Few would be happy with that center of responsibility.
Back to the campus. To be fair, institutional research (IR) and institutional effectiveness offices have been reluctant to get involved with direct measures of student performance, for good reason. Culture dictates that such measures belong to program leaders and faculty; the traditions and rules of “academic freedom” somehow demand this. The problem is that faculty and program leaders, while content experts, are no more versed in effective assessment of student outcomes than anyone else on campus.
This leaves us with campus leaders who have long suspected something is very wrong or at least misdirected. To paraphrase one highly placed academic officer, “We survey our students and a lot of other people and I’m told that our students are ‘happy.’ I just can’t find anyone who can tell me for sure if they’re ‘happy-smarter’ or not!” Their immersion in the compliance march does not give them much of a clue about what to do about the dissonance they are feeling.
The Assessment Renaissance
Still, the smart money is on higher ed presidents first and foremost, supported by their provosts and other chief academic officers. If there is to be deep change in the current culture, they are the only ones with the proximal power to make it happen. Most of them have declared that “disruption” in higher education is now essential.
Leaders looking to eradicate the walking dead assessment march in a systematic way need to:
Disrupt. This requires a college or university leader to see beyond the horizon and ultimately have an understanding of the long-term objective. It doesn’t mean they need to have all the ideas or proper procedures, but they must have the vision to lead and to disrupt. They must demand change on a realistic but short timetable.
Get Expertise. Outcomes/competency-based assessment has been a busy field of study over the past half-decade. Staff development and helping hands from outside the campus are needed.
Rally the Movers and Shakers. In almost every industry, there are other leaders without ascribed power but whose drive is undeniable. They are the innovators and the early adopters. Enlist them as co-disruptors. On campuses there are faculty/staff that will be willing to take risks for the greater good of assessment and challenge the very fabric of institutional assessment. Gather them together and give them the resources, the authority and the latitude to get the job done. Defend them. Cheerlead at every opportunity.
Change the Equation. Change the conversation from GPAs and satisfaction surveys to one essential unified goal: are students really learning and how can a permanent change in behavior be measurably demonstrated?
Rethink Your Accreditation Assessment Software. Most accreditation software systems rely on processes that are narrative rather than systematic inquiry via data. Universities are full of people who do research for a living. Give them tools (yes, like Chalk & Wire, which my company provides) to investigate learning and thereby rebuild a systematic approach to improving competency.
Find the Carrots. Assume a faculty member in engineering is going to publish. Would a research-based study about teaching and learning in their field count toward rank and tenure? If disruption is the goal, then the correct answer is yes.
Assessment is complex, but it’s not complicated. Stop the grim march. Stand still for a time. Think about learning and what assessment really means and then pick a new proactive direction to travel with colleagues.
The U.S. Department of Education will release a much-anticipated outline of its college ratings system on Friday, according to several sources familiar with the department's plans.
Department officials have indicated to a handful of college leaders and higher education associations that they will publish Friday a draft framework that includes the metrics on which colleges would be rated by the federal government.
This will be the first look at how the department intends to structure the federal college ratings system, which President Obama announced in August 2013. Department officials have twice delayed the release of its draft proposal, which was originally expected last spring.
Undersecretary of Education Ted Mitchell said in an interview earlier this month that the draft outline would not include the names of specific colleges or universities, nor would it show how institutions perform under the draft metrics.
The department will solicit public input on the framework over the next couple of months, with a comment deadline of mid-February, several sources said.