
Obama proposal on tuition-free community colleges elevates sound bites over sound policy (essay)

President Obama has jumped on the bandwagon, which started in Tennessee, of making community college tuition-free. The proposal is his most recent effort to increase the prominence of the federal government in higher education. While giving higher education more federal visibility may be a good thing, making community colleges tuition-free is the latest in a series of proposals in which the administration seems to have decided that sound bites trump sound policy.

The cycle began in the administration’s early days, when it declared that its primary goal in higher education was to “re-establish” the U.S. as having the world’s highest attainment rate -- the proportion of working adults with a postsecondary degree of some sort.

Never mind that the U.S. has not had the highest rate in the world for at least several decades and that achieving such a distinction now is well nigh impossible given where some other countries are. And also ignore the fact that some countries which have overtaken us, such as South Korea and Japan, have done so in large part because they are educating an increasing share of a declining number of their young people – a demographic condition we should want to avoid at all costs. 

In this effort to be Number One in higher education, the Obama administration is continuing a trend in K-12 education, begun in the Clinton and George W. Bush administrations, in which we as a nation set totally unrealistic goals to be achieved after the incumbent administration has left office. It is not clear why we would want to expand this practice into higher education, but that’s what we are doing.

The administration also pushed in its first year for a remarkable expansion of Pell Grants as part of the economic stimulus package of 2009. It was certainly good to augment Pell Grants in the midst of a severe recession, when so many students were having a tough time paying their college bills. But rather than doing so on a temporary basis by increasing awards for current recipients, the administration pushed for, and Congress agreed to, a permanent legislative change that increased the number of recipients by 50 percent and doubled long-term funding.

This is the equivalent of changing tax rates in the middle of a recession rather than providing a rebate. It certainly provided more aid for many more students -- nearly one in two undergraduates now receives a Pell Grant. But the expansion in eligibility means less aid is available for the low-income students who most need it. And few seem worried that Pell Grant increases may have led many institutions that package aid to reduce the grants they provide from their own funds to Pell recipients, as reflected in the fact that institutional aid increasingly goes to middle-income students.

The Obama administration’s recent effort to develop a rating system for postsecondary institutions is another example of politics triumphing over sound policy. The rhetoric appeals to the noble notion of making institutions more productive and more affordable, but the metrics the administration has proposed using are unlikely to produce the desired results and may well have unintended, harmful effects.

Much more troubling, the administration’s ratings proposal would penalize students based on where they decide to enroll, as those going to colleges that don’t perform well would get less aid. This is illogical as well as counterproductive. Thankfully, there seems to be little chance that this proposal will be adopted, but one is left to wonder why it was suggested and pushed when it would do little to address the many real challenges facing American higher education, such as chronic inequity and unaffordability.

Which brings me to the most recent proposal by President Obama – to make community colleges tuition-free.  At this stage, we know relatively little about what is being proposed other than that it is modeled on what was done in Tennessee where state lottery funds (not a very good federal model) were used to ensure that students with good grades would not have to pay tuition to go to community college.  But since there are so few details as to how this tuition-free package would be structured, there are more questions regarding the President’s proposal than there are answers.  These include:

Who will benefit and who will pay? If the administration were to follow the Tennessee plan, current Pell Grant recipients would largely not benefit, as their Pell Grant awards already fully cover the cost of tuition at most community colleges throughout the country. So the beneficiaries would disproportionately be middle-class students who mostly can afford the average annual community college tuition of $3,300, just as has been the case with the Tennessee plan.

The administration to its credit seems to recognize this potential lack of progressivity, and its spokesmen have declared (to Inside Higher Ed) that the new benefits will be on top of what Pell Grant recipients currently receive. This could be an avenue for a big step forward in federal policy were we to recognize that Pell Grants are largely for living expenses for students whose families cannot afford to pay those expenses, but it means that the federal costs of implementing such a plan will be substantial, probably far more than the $60 billion in additional costs over 10 years now being suggested.

Also lost in the enthusiasm about making community colleges tuition-free is the reality that the biggest costs for most students are the costs of living while enrolled and the opportunity costs of leaving the job market to enroll in school on more than an occasional basis. Also lost in the hubbub is the question of how these benefits are going to be paid for, a key financing question that remains largely unanswered in the administration’s explanations thus far.

What would happen to enrollments in other higher education institutions? Advocates for the Tennessee Promise talk about how it has already boosted enrollments in community colleges. There seems to be little consideration, though, of whether this might come at the expense of enrollments in other colleges and universities. The Obama administration clearly prefers that students go to community colleges rather than for-profit trade schools, but it seems to have given little thought to whether offering more aid to students enrolling in community colleges will have an adverse effect on enrollments in more traditional four-year institutions -- including historically black colleges that could ill afford the drop-off in enrollments.

But federal and state officials have an obligation to recognize that enrollments in higher education are not unlimited and that providing incentives for students to enroll in one sector means that enrollments in other sectors are likely to decline. Is the next step for the federal government to propose a program of support for those institutions that cannot afford to wait for all those new community college students to transfer in two or three years to fill their now empty seats?

Why would community colleges participate? Like many other federal and state policy initiatives, the president’s proposal reflects a tendency to think only in terms of demand and to believe that price reductions will inevitably result in enrollment increases. But the economic reality is that good policy must take institutional behavior into account as well, and it is not at all clear why community colleges would change their behavior in light of the Obama proposal. Under the Obama plan, the federal and state governments would replace funds that families currently spend or loans that students currently borrow for tuition. The likely result of such a policy would be more students enrolling in already overcrowded community colleges, with little or no additional funding provided to those colleges to educate them.

If one truly wants to improve community college financing, a better approach would be one in which governments recognize the additional costs entailed in enrolling additional students and try to help pay for those costs. But in the absence of such a proposal, the current Obama plan seems more of the same – more requirements but no more money. As a result, it is hard to understand the enthusiasm of the community college and other national associations for the president’s plan.

Why would states participate? It’s also not immediately clear why states would participate in the Obama plan, as it is aimed primarily or entirely at changing how tuition is financed. As a result, it really would not get at the bulk of the community college financing iceberg -- what states and localities spend in support of every student who enrolls. So the question remains: why would states choose to participate in a plan that obligates them to meet a series of new requirements AND pay for one-quarter of tuition costs, in addition to still paying what they do now for operating subsidies?

In sum, what we know of the president’s plan fits a troubling pattern that seems to characterize our higher education policy debates these days. Political considerations trump good policy. The interests of low-income students get second billing to middle-class affordability, or no billing at all. Not enough attention is paid to how things actually would work or why institutions or states would decide to participate.

It all goes to show that, as economists are fond of saying, “there is no free lunch.” One of the problems with the Obama administration’s continuing enthusiasm for higher education policy initiatives is that it doesn’t seem to recognize this basic economic reality.

Arthur M. Hauptman is a public policy consultant specializing in higher education policy and finance. This is the first in a series of articles about how federal and state higher education policies might be changed to produce greater equity, efficiency and effectiveness.

Essay on how to change - and how not to change - the Carnegie Classifications

The Lumina Foundation and Indiana University’s Center for Postsecondary Research will be taking over the important Carnegie Classification of Institutions of Higher Education from the Carnegie Foundation for the Advancement of Teaching. Lumina announced that its Degree Qualifications Profile (DQP) will inform the 2015 edition of the classification. This development is yet another step away from the original intent of the classification -- to provide an objective and easy-to-understand categorization of American postsecondary institutions.

In recent years, the Carnegie Foundation made its categories more complex: in part to suit the foundation’s specific policy orientations at the time, and in part to reflect the increased complexity of higher education institutions. As a result, the classification became less useful as an easy yet reasonably accurate and objective way to understand the shape of the system, and the roles of more than 4,500 individual postsecondary institutions.

Among the great advantages of the original classification were its simplicity and its objectivity, and the fact that it did not rank institutions but rather put them into recognizable categories. Unlike the U.S. News and World Report and other rankings, the Carnegie Classification did not use reputational measures—asking academics and administrators to rank competing colleges and universities. It relied entirely on objective data.

It is not clear how the classification’s new sponsors will change its basic orientation, and its new director says that the 2015 version will not be fundamentally altered. Yet, given Lumina’s strong emphasis on access, equity, and degree completion, as well as on designing a new national credential framework — highly laudable goals, of course — it is likely that in the longer term the classification will be shaped to align with Lumina’s policy agenda, just as it was, more subtly, in its later Carnegie years.

The original Carnegie Classification contributed immensely to clarifying the role of postsecondary institutions and made it possible for policy-makers as well as individuals in the United States and abroad to basically understand the American higher education landscape as a whole and see where each institution fit in it. The classification was also quite useful internationally — it provided a roadmap to America’s many kinds of academic institutions. An overseas institution interested in working with a research university, a community college, or a drama school could easily locate a suitable partner. We are likely to lose this valuable resource.

A Historical Perspective

The classification dates back to 1973, when the legendary Clark Kerr, having devised the California Master Plan a decade earlier and then leading the Carnegie Commission on Higher Education, wanted to get a sense of America’s diverse and, at the time, rapidly expanding higher education landscape. The original classification broadly resembled Kerr’s vision of a differentiated higher education system, with different kinds of institutions serving varied goals, needs, and constituencies. It included only five categories of institutions — doctoral-granting institutions, comprehensive universities and colleges, liberal arts colleges, two-year colleges and institutes, and professional schools and other specialized institutions, along with several subcategories.

Because the classification was the first effort to categorize the system, it quickly became influential — policy-makers valued an objective, data-based categorization of institutions, and academic leaders found it useful for understanding where their own institutions fit. The classification had the advantage of simplicity, and its sponsor was trusted as neutral. Although the classification was not a ranking (it listed institutions within each category in alphabetical order), many came to see it in competitive terms. Some universities aspired to the subcategory of “research university–I,” the institutions with the largest research budgets and the most doctoral degrees awarded — and were overjoyed when they were listed in that category. Similarly, the most selective liberal arts colleges were in “liberal arts colleges–I,” and many others wanted to join that group. Over time, the classification became a kind of informal measure, if not of rank, at least of academic status.

Fiddling and Changing

The classification’s categories and methodology remained quite stable over several decades of major transformation in American higher education. In 2005, with new leadership at the Carnegie Foundation, major changes were introduced. Foundation leaders argued that the realities of American higher education required rethinking the methodology. It is also likely that the foundation’s focus changed and that it wanted to shape the classification to serve its new orientation and support its policy foci. The foundation revised the basic classification and added new categories, such as instructional programs and student enrollment profiles. The classification became significantly more complex and, over time, less influential. People found that the new categories confused the basic purpose of the classification and introduced variables that did not seem entirely relevant. The basic simplicity was compromised. Indeed, people still refer to “Carnegie Research 1” (top research universities) even though that category has not existed in the Carnegie lexicon for two decades.

There may well be more fiddling — the U.S. federal government’s desire to rank postsecondary institutions by cost and degree completion rates may add another dimension to the enterprise. A further dilemma is the role of the for-profit higher education sector — these entities are fundamentally different in their orientations and management from traditional non-profit institutions — as are the new online degree providers. Should these new additions to the higher education landscape be included in the classification? These elements will contribute to “classification creep” — a bad idea.

What Is Really Needed

It is surprising that, in the four decades since Clark Kerr conceptualized the Carnegie Classification, no one has stepped forward to provide a clear and reasonably objective and comprehensive guide to the more than 4,500 postsecondary institutions in the United States. Resurrecting the basic purpose and organization of Kerr’s original Carnegie Classification is not rocket science, nor would it be extraordinarily expensive.

It is of course true that postsecondary education has become more complex. How would one deal with the for-profit sector? Probably by adding a special category for those institutions. Many community colleges now offer four-year bachelor’s degrees, but their basic purpose and organization have not essentially changed. There are more specialized institutions, and many colleges and universities have expanded and diversified their degree and other offerings. Technology has to some extent become part of the teaching programs of some postsecondary institutions — and the MOOC revolution continues to unfold. Research productivity has grown dramatically, and research is reported in more ways. Intellectual property of all kinds has become more central to the academic enterprise — at least in the research university sector.

Yet, the basic elements of the original classification — those that help to determine the main purposes and functions of postsecondary institutions — remain largely unchanged, if somewhat more complicated to describe. The key metrics are clear enough:

  • Student enrollment
  • Degrees awarded
  • Types of degrees offered
  • Number of faculty, full-time and part-time
  • Income from research and intellectual property
  • Research productivity
  • Internationalization as measured by student mobility.

A few more might be added — but again, simplicity is the watchword.

The types of institutions — six main and eight major subcategories — seem about right. These might be expanded somewhat to accommodate the growth in complexity and diversity of the system. Later iterations confusingly expanded the categories, in part to reflect the policy and philosophical orientations of the foundation. The basic purpose of the classification will be best served by keeping the institutional typology as simple and straightforward as possible.

While it is clear that these metrics may not provide a sophisticated or complete measure of each institution — and they require additional definitions — they will provide basic information that makes reasonable categorization possible. They lack the philosophical and policy orientations that have crept into the Carnegie Classification in recent years, and they would return the enterprise to its original purpose — describing the richness, diversity, and complexity of the American higher education landscape.

 

Philip G. Altbach is research professor and director of the Center for International Higher Education at Boston College.


Essay on what's missing from the Obama administration's proposal to rate colleges

“As a nation, we have to make college more accessible and affordable and ensure that all students graduate with a quality education of real value.”

-- Secretary Arne Duncan, December 19, 2014

With the release of the Obama administration’s much-anticipated framework for rating the nation’s colleges and universities, commentators already are weighing in on the yawning gulf between the stated intention of ensuring “a quality education of real value” and the severe limitations of the metrics being considered. While the proposed college ratings system can and should expose some truly bad institutions that don’t deserve to receive federal support, the ratings framework by design presents a severely limited picture of how individual colleges and universities serve students and the nation. Regardless of whether one judges the proposed ratings data to be clarifying or misleading, the fact remains that the most important outcome of higher education — the impact a college or university has on student learning outcomes — is completely missing from the federal ratings framework.

American higher education urgently needs a college learning assessment system, but not one that equates student learning with disciplinary knowledge alone. Rather, it needs a way to account for the higher-order capacities and skills that are the hallmarks of a liberal education. The ordinary citizen will very reasonably assume that the college ratings system the federal government is now poised to promote does provide the needed evidence on college learning and quality. (Secretary Duncan himself seems to assume this, as the quote above makes clear.)

But the ordinary citizen will be wrong in this assumption. The proposed college ratings system does not, in fact, provide any evidence at all about the quality of student learning. By design, the federal ratings system is focused carefully and exclusively on data related to who enrolls in college, institutional affordability, and employment at a living wage after graduation.

What then should we do about the quality of learning challenge? What America absolutely does not need the federal government to do — and what the administration has so far very prudently and  thoughtfully refrained from doing — is to create a national, federally devised and controlled system that would specify what the learning goals of college should be and then assess whether students are achieving them. Nonetheless, the public does need to know how well colleges, universities, and community colleges are doing in providing the kinds of learning that contribute directly to students’ success beyond graduation.   

Under established law, the boards of trustees of private colleges and universities and the governing bodies of public college and university systems rightly determine the missions of individual higher education institutions, and through longstanding shared governance arrangements, faculty and institutional leaders set the goals for student learning on individual campuses with the needs and goals of students and of the nation very much in mind. Yet there is wide recognition — especially among America’s employers, but also within higher education itself — that far too few students graduate from college well enough prepared for success in work, civic participation and democratic citizenship, and life in the 21st century.

American higher education must do much better in both assessing and improving learning. 

And, on this front, there is genuinely good news to report. This year, far away from the ratings furor, educators themselves are taking the lead in developing the kind of learning assessments the public deserves from higher education. The VALUE (Valid Assessment of Learning in Undergraduate Education) initiative of the Association of American Colleges and Universities (AAC&U) represents an important step forward — one that has at its core not only the assessment of student learning, but also the creation of a platform for providing institutions with direct feedback to support continuous quality improvement in teaching and learning. Developed in 2007 through a national collaboration of faculty, institutional, and state-system leaders along with content knowledge and student learning experts, the VALUE approach to assessment has since gained acceptance with remarkable speed. 

This year, building on this foundation, AAC&U, the State Higher Education Executive Officers Association (SHEEO), nine state systems, and 85 public and private institutions are engaged in a major proof of concept study designed to demonstrate the different direction the VALUE approach represents both for assessing learning outcomes and for providing useful feedback to educators about strengths and needed improvements in student performance. The states working in concert with AAC&U and SHEEO are Connecticut, Indiana, Kentucky, Massachusetts, Minnesota, Missouri, Oregon, Rhode Island, and Utah. Private liberal arts institutions in additional states also are contributing to the study. 

Under the VALUE approach, rubrics — common across participating institutions — are used rather than standardized tests, and scores are based on faculty judgments about actual student work. Specifically, graded student work products that show what a student knows and can do — an essay, a piece of creative writing, a lab report, an oral presentation — are evaluated and scored by faculty members (not those who originally assigned and graded the work product) against a rubric that describes multiple dimensions of what it means to do critical thinking, quantitative reasoning, integrative reasoning, or any of the other forms of higher-order learning for which the VALUE rubrics describe achievement at different levels. The exciting promise of this work is that higher education itself is advancing an approach to assessment that is meaningful and accessible to faculty, students, and higher education stakeholders alike. 

The VALUE rubrics were initially created by faculty members, and they reflect educators’ shared judgments about both the substance and the quality of student learning outcomes. Teams of faculty and academic professionals from more than 100 campuses across the country contributed to the development of these VALUE assessment rubrics for each of 16 liberal learning outcomes: inquiry and analysis, critical thinking, writing, integrative learning, oral communication, information literacy, problem solving, teamwork, intercultural knowledge, civic engagement, creative thinking, quantitative literacy, lifelong learning, ethical reasoning, global learning, and reading. These outcomes are important to the education of all college students, whether in two-year or four-year institutions, liberal arts or pre-professional programs, online or in-person courses, and regardless of institutional mission.

But the VALUE approach offers more than just a way to assess student learning. It is itself potentially a “high-impact practice” that will lead to greater student persistence and completion and to a reduction in the achievement gap between white students and disadvantaged students of color. The VALUE rubrics show students what excellence with regard to a particular learning goal looks like, and they let students see where they are on the path toward excellent performance. When faculty talk with students about their work and how it was scored, they are providing students with precisely the kind of “frequent, timely and constructive feedback,” “interactions with faculty ... about substantive matters,” and “structured opportunities to reflect on and integrate learning” that is characteristic of high-impact practices as George Kuh has defined them in his influential reports. In addition, AAC&U has learned already from campuses piloting the use of VALUE rubrics that, after initial experiences with the rubrics, faculty come together to develop assignments that directly address higher-order liberal learning skills — especially evidence-based reasoning — rather than lower-order skills such as description, summary, and paraphrase. None of this happens when a student is sent his or her score on a standardized test. This feature of VALUE, above and beyond its great utility as an assessment system, is responsible for its already very wide and growing support in colleges, universities, and state systems nationally.

What the federal government could and should do, even as it develops and tests its new ratings system, is to remind the nation, over and over, that student acquisition of the knowledge and skills college graduates need is the primary and most critical public purpose for which colleges and universities are chartered. Hence, the federal government should say that assessing what college students know and can do must be a very high institutional — and, for public institutions, institutional and state-system — priority. 

While the federal government should not seek to take responsibility for this assessment, it can and should remind those properly responsible that the quality and assessment of student learning — not just access, completion, and non-learning outcomes — must become a top priority.

At the very least, the US Department of Education should publicly be calling attention to and rooting for the success of state- and institution-driven efforts like VALUE that have national potential. But it also could, through existing federal grant programs such as the Fund for the Improvement of Postsecondary Education (FIPSE) or through Department of Education contracts, create incentives for institutions and state systems to adopt new assessment approaches by offsetting temporary institutional “ramping-up” costs or providing financial support for the necessary infrastructure to allow initiatives like VALUE to become functional nationwide.

This is how public-private partnerships should work: investing in promising ideas and facilitating their testing as they develop. Both at the federal and state levels, public policy can be an enabler for the radically better approach to assessment that VALUE represents.

So even as we debate what’s right or wrong with the ratings, let’s remember that advancing accountability in higher education ultimately needs to include what students are learning. 

 

Carol Geary Schneider is president of the Association of American Colleges and Universities. Daniel F. Sullivan is president emeritus of St. Lawrence University and chair of the AAC&U LEAP Presidents’ Trust. 


Assessment (of the right kind) is key to institutional revival

Today, leaders of colleges and universities across the board, regardless of size or focus, are struggling to meaningfully demonstrate the true value of their institutions to students, educators and the greater community, because they can’t really prove that students are learning.

Most are using some type of evaluation or assessment mechanism to keep “the powers that be” happy through earnest narratives about goals and findings, interspersed with high-level data tables and colorful bar charts. However, this is not scientific, campuswide assessment of student learning outcomes aimed at valid measurement of competency.

The "Grim March" & the Meaning of Assessment

Campuswide assessment efforts rarely involve the rigorous, scientific inquiry about actual student learning that is aligned from program to program and across general education. Instead, year after year, the accreditation march has trudged grimly on, its participants working hard to produce a plausible picture of high “satisfaction” for the whole, very expensive endeavor.

For the past 20-plus years, the primary source of evidence for a positive impact of instruction has come from tools like course evaluation surveys. Institutional research personnel have diligently combined, crunched and correlated this data with other mostly indirect measures such as retention, enrollment and grade point averages.

Attempts are made to produce triangulation with samplings of alumni and employer opinions about the success of first-time hires. All of this is called “institutional assessment,” but it does not produce statistical evidence from direct measurement that the institution’s instruction is responsible for students’ skill sets. Research measurement methods like chi-square tests or inter-rater reliability, combined with a willingness to assess across the institution, can demonstrate that a change in student learning is statistically significant over time and is the result of soundly delivered curriculum. This is the kind of “assessment” the world at large wants to know about.
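
To make that concrete, here is a minimal, hypothetical sketch in Python of the two methods named above: Cohen’s kappa as one common measure of inter-rater reliability on rubric scores, and a chi-square test of whether the distribution of proficiency levels has shifted between two cohorts. The rater scores and cohort counts are invented for illustration only; a real study would also need adequate sample sizes and a careful sampling design.

    # Hypothetical illustration only: Cohen's kappa (inter-rater reliability) and a
    # chi-square test of change in rubric-level proficiency between two cohorts.
    from collections import Counter
    from scipy.stats import chi2_contingency

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters on categorical rubric levels."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[level] * counts_b[level] for level in counts_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two faculty raters independently scoring the same ten student artifacts on a 1-4 rubric.
    rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
    rater_b = [3, 2, 3, 3, 1, 2, 3, 4, 2, 4]
    print(f"Inter-rater reliability (Cohen's kappa): {cohens_kappa(rater_a, rater_b):.2f}")

    # Counts of students at each rubric level (1-4) in a baseline cohort and a later cohort.
    # The chi-square test asks whether the level distribution differs between the cohorts.
    baseline_cohort = [40, 80, 60, 20]
    later_cohort = [25, 70, 75, 30]
    chi2, p_value, dof, _ = chi2_contingency([baseline_cohort, later_cohort])
    print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")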

The public is not satisfied with inferentially derived evidence. Given the cost, they yearn to know if their sons and daughters are getting better at things that matter to their long-term success. Employers routinely stoke this fire by expressing doubt about the out-of-the-box skills of graduates.

Who Owns Change Management

Whose responsibility is it to redirect the march to provide irrefutable reports that higher education is meeting the needs of all its stakeholders? Accreditors now wring their hands and pronounce that reliance on indirect measures will no longer suffice. They punish schools with orders to fix the shortfalls in the assessment of outcomes and dole out paltry five-year passes until the next audit. They will not, however, provide sound, directive steps for the marchers about how to systematically address learning outcomes.

How about the government? The specter of more third-party testing is this group’s usual response. They did it to K-12 and it has not worked there either. Few would be happy with that center of responsibility.

Back to the campus. To be fair, IR or offices of institutional effectiveness have been reluctant to get involved with direct measures of student performance for good reasons. Culture dictates that such measures belong to program leaders and faculty. The traditions and rules of “academic freedom” somehow demand this. The problem is that faculty and program leaders are indeed content experts, but they are no more versed in effective assessment of student outcomes than anyone else on campus.

This leaves us with campus leaders who have long suspected something is very wrong or at least misdirected. To paraphrase one highly placed academic officer, “We survey our students and a lot of other people and I’m told that our students are ‘happy.’ I just can’t find anyone who can tell me for sure if they’re ‘happy-smarter’ or not!” Their immersion in the compliance march does not give them much clue about what to do about the dissonance they are feeling.

The Assessment Renaissance

Still, the smart money is on higher ed presidents first and foremost, supported by their provosts and other chief academic officers. If there is to be deep change in the current culture, they are the only ones with the proximal power to make it happen. A majority of them have declared that “disruption” in higher education is now essential.

Leaders looking to eradicate the walking dead assessment march in a systematic way need to:

  1. Disrupt. This requires a college or university leader to see beyond the horizon and ultimately understand the long-term objective. It doesn’t mean leaders need to have all the ideas or proper procedures, but they must have the vision to be leaders and disruptors. They must demand change on a realistic but short timetable.
  2. Get Expertise. Outcomes/competency-based assessment has been a busy field of study over the past half-decade. Staff development and helping hands from outside the campus are needed.
  3. Rally the Movers and Shakers. In almost every industry, there are leaders who lack ascribed power but whose drive is undeniable. They are the innovators and the early adopters. Enlist them as co-disruptors. On campuses, there are faculty and staff members who will be willing to take risks for the greater good of assessment and to challenge the very fabric of institutional assessment. Gather them together and give them the resources, the authority and the latitude to get the job done. Defend them. Cheerlead at every opportunity.
  4. Change the Equation. Change the conversation from GPAs and satisfaction surveys to one essential unified goal: are students really learning and how can a permanent change in behavior be measurably demonstrated?
  5. Rethink Your Accreditation Assessment Software. Most accreditation software systems rely on processes that are narrative rather than systematic inquiry grounded in data. Universities are full of people who research for a living. Give them tools (yes, like Chalk & Wire, which my company provides) to investigate learning and thereby rebuild a systematic approach to improving competency.
  6. Find the Carrots. Assume a faculty member in engineering is going to publish. Would a research-based study about teaching and learning in their field count toward rank and tenure? If disruption is the goal, then the correct answer is yes.

Assessment is complex, but it’s not complicated. Stop the grim march. Stand still for a time. Think about learning and what assessment really means and then pick a new proactive direction to travel with colleagues.

Geoff Irvine is CEO and founder of Chalk & Wire.


Obama Administration to Unveil College Ratings Plan

The U.S. Department of Education will release a much-anticipated outline of its college ratings system on Friday, according to several sources familiar with the department's plans.

Department officials have indicated to a handful of college leaders and higher education associations that they will publish Friday a draft framework that includes the metrics on which colleges would be rated by the federal government.   

This will be the first look at how the department intends to structure the federal college ratings system, which President Obama announced in August 2013. Department officials have twice delayed the release of the draft proposal, which was originally expected last spring.

Undersecretary of Education Ted Mitchell said in an interview earlier this month that the draft outline would not include the names of specific colleges or universities, nor would it show how institutions perform under the draft metrics.

The department will solicit public input on the framework over the next couple of months, with a comment deadline in mid-February, several sources said.

State authorization reciprocity effort passes 'tipping point,' supporters say


A national effort to simplify regulations for distance education providers adds 18 members in less than a year. Can it sustain its momentum?

Book offers tips on quality reviews and accreditation process


There are plenty of ways faculty members and administrators can help improve the process, writes Linda Suskie in a new book.

ACE Creates Alternative Credit Consortium

The American Council on Education on Monday announced that 25 colleges have agreed to accept all or most transfer credit from students who have completed courses from a council-created pool of 100 low-cost online courses. The previously announced pool will include lower-division and general-education courses. Some will be offered by online universities. But it may also include courses from non-accredited providers. The Bill and Melinda Gates Foundation is funding the council-led effort.

Accreditation Panel Issues Higher Ed Act Suggestions

The federal panel tasked with advising the U.S. Department of Education on accreditation issues on Thursday released a draft set of recommendations for changing accreditation during reauthorization of the Higher Education Act.

The National Advisory Committee on Institutional Quality and Integrity has been working on an updated set of recommendations since earlier this year. The panel previously made a series of recommendations in 2011 and 2012, but the Education Department has asked members of the committee to update those documents.

“This is not a final document in any sense,” said Susan Phillips, who chairs the panel and is vice president for strategic partnerships of the State University of New York at Albany and senior vice president for academic affairs of the SUNY Health Science Center in Brooklyn. She said the panel would continue working on the recommendations with the goal of producing a more final product during its next meeting in June.

Among the ideas in the draft recommendations:

  • Convert all accrediting agencies into national accreditors and eliminate regionally focused ones.
  • Allow for alternative accrediting organizations.
  • Simplify the recognition process for accreditors by establishing common definitions across accrediting agencies.
  • Allow NACIQI reviews to be focused on “the health and well-being and the quality of institutions of higher education and their affordability, rather than on technical compliance with the criteria for recognition.”
  • Give accrediting agencies greater authority to create different tiers of approval of institutions.
  • Require colleges to produce self-certified data on “key metrics of access, cost and student success” (such as dropout rate, student loan burdens, repayment rates, and job placement rates for vocational programs).
  • Establish a range of accreditation statuses that provide differential access to Title IV funds, which would move away from the current “all or nothing” system.

Federal Approval of Vet School Accreditor Recommended

WASHINGTON -- A federal advisory committee on Thursday recommended that the U.S. Department of Education extend for only six months its recognition of the accreditor for veterinary schools and require the agency to prove that it is following federal standards.

The unanimous recommendation by the National Advisory Committee on Institutional Quality and Integrity was based on concerns that the veterinary accreditor’s standards are not widely accepted by practitioners and that it doesn’t effectively guard against conflicts of interest.

The review of the American Veterinary Medical Association’s Council on Education came amid a host of other concerns, including how the accreditor approves foreign veterinary schools and those without teaching hospitals.

