Assessment

Data from one college shows whether its degree is worth it

A database of all students ever admitted to the National Technical Institute for the Deaf allows the college to more reliably show the economic payoff of its programs. 

ACE panel calls for sustaining but changing regional accreditation

Higher ed group's panel calls for refining, not revamping, system of quality assurance, with emphasis on transparency and independence from the government.

Using Big Data to Predict Online Student Success

An ambitious research project is proving the payoffs of predictive analytics in higher ed, and early findings overturn conventional wisdom about student success.

State budget cuts make completion goals difficult for community colleges

Citing severe state budget woes, community college leaders are pessimistic about the feasibility of the push to graduate more students, survey finds.

Ratings fight over, Education Department prepares to launch new consumer tool

The fight over college ratings is over -- but what do institutions and others want to see in the consumer tool the Obama administration is developing in their place?

How the Department of Education should design its ratings tools (essay)

Last week, the Department of Education walked back its plans to develop a comprehensive college ratings system. In its place, the department plans to release “easy-to-use tools that will provide students with more data than ever before to compare college costs and outcomes.”

There are already plenty of resources and tools to help students and families weigh these questions, provided by the government (College Navigator), nonprofit organizations (College Results Online), and media outlets (U.S. News & World Report College Rankings). And lawmakers continue to call for more consumer information about college outcomes.

But is anyone actually using these tools, and are their interfaces, graphics and user experiences designed to genuinely help students? And how “easy to use” will these new tools be?

To better understand what works and what doesn’t, I recently co-wrote a report with Healey Whitsett, now of The Pew Charitable Trusts, that lays out best practices for designing and delivering information to help prospective college students find the programs that best fit them. The department should turn to these principles as it develops its new tools to maximize their effectiveness.

We synthesized scores of studies on behavioral economics, information search, retention, and the bottlenecks that interfere with how people process information -- and recommended ways to design tools so students get the information they need to make more informed decisions about where to go to school, what to study, and how to pay for it. We also recommend that designers target their efforts at students from low-income families, who, unfortunately, are the least likely to spend much time searching for information.

For example, designers shouldn’t try to cram too much information into one place. Cognitive research tells us this overwhelms readers and makes it difficult for them to comprehend and retain the information.

In one study, researchers compared how individuals interacted with the standard mortgage disclosure form versus a redesigned, better-organized prototype: borrowers using the redesigned form were 38 percentage points more likely to correctly identify the amount of the loan and 11 percentage points more likely to correctly identify their monthly payment. Since many students cannot say how much money they have taken out in student loans, redesigning these kinds of forms makes a lot of sense.

The literature also demonstrates the importance of personally tailored information: Consumers are much more likely to identify, remember, and use information if it is personally relevant to them. That’s why broad national rankings are probably of limited use to students and families. In this regard, the department seems to be on the right track by making the tools “customizable” to the user.

Our research also finds that higher education stakeholders shouldn’t present students with too many options to compare. With some 7,000 colleges and universities to choose from, it’s critical that we figure out manageable ways for students to compare a limited set of schools tailored to their interests, or they risk facing what we call the “tyranny of choice.”

A study by Judith Scott-Clayton at the Community College Research Center at Columbia University's Teachers College showed that too many choices among community college majors or programs may overwhelm students and discourage them from persisting and earning a credential.

Furthermore, information should be as personally tailored as possible, as individually contextualized information is much more likely to be recalled and used in decision making.

Aspiring students must also be able to compare and contrast their school and loan choices. It would be very difficult to choose between two schools if you knew only the graduation rate of one and only the list of majors of the other. Students must be able to compare the same variables side by side.

Fortunately, it looks like Congressional leaders are interested in arming students with more information as well. In its white paper on proposals for reauthorizing the Higher Education Act, the Senate Health, Education, Labor and Pensions Committee called for “extensive consumer testing on what information is needed and how it should be presented” and urged the federal government to “[a]pply this research to any federally produced consumer tools and make the research available publicly to voluntarily inform the market.”

The House Education and Workforce Committee recently said, “Access to better information will empower students with the knowledge they need to make smart decisions in the college marketplace.”

We hope that the department will take note of this research as it designs its new tools, and we applaud department officials for committing to work with outside parties who will design their own mobile apps and interfaces optimized for usability. After all, aspiring students can have all the data and information in the world, but if it’s not packaged and delivered in a way that’s useful to them, then we’re wasting our time.

Tom Allison is research and policy manager at Young Invincibles.

Colleges begin to take notice of Common Core

A growing number of states and colleges are beginning to use Common Core-based assessments to determine student placement and college readiness. 

Re-inventing Higher Education

Date: Tuesday, May 5, 2015

Federal rating system could displace accreditation as judge of higher ed quality (essay)

Despite all the extensive consultation about the Postsecondary Institutions Ratings System during the past 18 months -- all the meetings and the many conversations -- we know almost nothing about its likely impact on accreditation, the all-important effort in which colleges, universities and accrediting organizations work together to define, judge and improve academic quality.

All that the U.S. Department of Education has officially said to date is that the system will “help inform” accreditation -- and we do not know what this means. 

This is worrisome. Ratings create, in essence, a federal system of quality review of higher education, with the potential to upend the longstanding tradition of nongovernmental accreditation that has carried out this role for more than 100 years. And establishing the system may mean the end of more than 60 years of accreditation as a partner with government, the reliable authority on educational quality to which Congress and the Education Department have turned.

Accreditation is about judgment of academic quality in the hands of faculty members and academic administrators. It is about the commitment to peer review -- academics reviewing academics yet accountable to the public -- as the preferred, most effective mode of determining quality. It is about leadership for academic judgment when it comes to such factors as curriculum, programs, standards and strategic direction remaining in the hands of the academic community. 

In contrast, a ratings system is a path to a government model of quality review in place of the current model of academics as the primary judges of quality.

First introduced by President Obama in August 2013 and turned over to the Education Department for development, the ratings system is on track for implementation in 2015-16. Based on the still incomplete information the department has released to the public, the system is intended to rate (read: judge) colleges and universities on three indicators: access, affordability and student outcomes. Institutions will be classified as “high performing,” “low performing” or “in the middle.” Ultimately, the amount of federal student aid funding a college or university receives is intended to be linked to its rating.

A federal ratings system is both an existential and political challenge to accreditation.

First, there is the challenge of a potential shift of ownership of quality. Second, new key actors in judging quality may be emerging. Finally, the relationship between accreditation and the federal government when it comes to quality may be shifting, raising questions about both the gatekeeping role of accreditation in eligibility for federal funds and the agreement about distribution of responsibilities among the parties in the triad -- the federal government, the states and accreditation.

A ratings system means that government owns quality through its indicators and its decisions about what counts as success in meeting the indicators. The indicators replace peer review. 

It means that government officials are key actors in judging quality. Officials replace academics. With all respect to the talent and commitment of these officials, they are not hired for their expertise in teaching and learning, developing higher education curriculum, setting academic standards, or conducting academic research. Yet using a ratings system calls for just these skills.

A ratings system means that the relationship between accreditors and the federal government, with the accreditors as dominant with regard to quality judgments, may give way to a lesser role for accreditation, perhaps using performance on the ratings system as a key determinant of eligibility for federal funds -- in addition to accreditation. Or, it is not difficult to envision a scenario in which ratings replace accreditation entirely with regard to institutional eligibility for access to federal financial aid.

We need to know more about what we do not know about the ratings system. Going forward, we will benefit from keeping the following questions in mind as the system -- and its impact on accreditation -- continues to develop.

First, there are questions about the big picture of the ratings system:

  • Has a decision been made that the United States, with the single most distinctive system of a government-private sector partnership that maximizes the responsible independence of higher education, is now shifting to the model of government dominance of higher education that typifies most of the rest of the world?
  • What reliable information will be available to students and the public through the ratings system that they do not currently have? Will this information be about academic quality, including effective teaching and learning? What is the added value? 

Second, there are questions about the impact of the ratings on accredited institutions:

  • Are the indicators to serve as the future quality profile of a college or university? Will the three indicators that the system uses -- access, affordability and outcomes -- become the baseline for judging academic quality in the future? 
  • Will it be up to government to decide what counts as success with regard to the outcomes indicators for a college or university -- graduation, transfer of credit, entry to graduate school and earnings?
  • To claim quality, will colleges and universities have to not only provide information about their accredited status, but also their ratings, whether “high performing,” “low performing” or “in the middle”?
  • Will institutions be pushed to diminish their investment in accreditation if, ultimately, it is the ratings that matter -- in place of accreditation?

Finally, there are questions about how ratings will affect the day-to-day operation of accrediting organizations and their relationship to the federal government:

  • Will accreditors be required to collect, use or otherwise take into account the information generated by the ratings system? If so, how is this to influence their decisions about institutions and programs, which are currently based on peer review, not ratings?
  • Will performance on the ratings system be joined with formal actions of accrediting organizations, with both required for accredited status and thus eligibility of institutions for federal funds -- in contrast to the current system of reliance on the formal actions of accrediting organizations?
  • How, if at all, will the ratings system affect the periodic federal review of the 52 accrediting organizations that are currently federally recognized? Will the government review now include the ratings of institutions as part of examination and judgment of an accreditor’s effectiveness?

While we cannot answer many of these questions at this time, we can use them to anticipate what may take place in the approaching reauthorization of the Higher Education Act, with bills expected in spring or summer.

We can use them to identify key developments in the ratings that have the potential to interfere with our efforts to retain peer review and nongovernmental quality review in preference to the ratings system.

Judith S. Eaton is president of the Council for Higher Education Accreditation.

Faculty members should drive efforts to measure student learning (essay)

Lumina Foundation recently released an updated version of its Degree Qualifications Profile (D.Q.P.), which helps define what students should know and what skills they should master to obtain higher education degrees.

This revised framework marks a significant step in the conversation about measuring students’ preparedness for the workforce and for life success based on how much they've learned rather than how much time they’ve spent in the classroom. It also provides a rare opportunity for faculty members at colleges and universities to take the lead in driving long-overdue change in how we define student success.

The need for such change has never been greater. As the economy evolves and the cost of college rises, the value of a college degree is under constant scrutiny. No longer can we rely on piled-up credit hours to prove whether students are prepared for careers after graduation. We need a more robust -- and relevant -- way of showing that our work in the classroom yields results.

Stakeholders ranging from university donors to policy makers have pushed for redefining readiness, and colleges and universities have responded to their calls for action. But too often the changes have been driven by the need to placate those demanding reform and produce quick results. That means faculty input has been neglected.

If we’re to set up assessment reform for long-term success, we need to empower faculty members to be the true orchestrators.  

The D.Q.P. provides an opportunity to do that, crystallizing conversations that have been going on among faculty and advisers for years. Lumina Foundation developed the tool in consultation with faculty and other experts from across the globe and released a beta version to be piloted by colleges and universities in 2011. The latest version reflects feedback from institutions’ experience with that beta version -- and captures the iterative, developmental processes of education understood by people who work with students daily.

Many of the professionals teaching in today’s college classrooms understand the need for change. They’re used to adapting to ever-changing technologies, as well as evolving knowledge. And they want to measure students’ preparedness in a way that gives them the professional freedom to own the changes and do what they know, as committed professionals, works best for students.

As a tool, the D.Q.P. encourages this kind of faculty-driven change. Rather than a set of mandates, it is a framework that invites faculty members to be change agents. It allows them to assess students in ways that are truly beneficial to student growth. Faculty members have no interest in teaching to the assessment; they want to use what they glean from assessments to help improve student learning.

We’ve experienced the value of using the D.Q.P. in this fashion at Utah State University. In 2011, when the document was still in its beta version, we adopted it as a guide to help us rethink general education and its connection to our degrees and the majors within them. 

We began the process by convening disciplinary groups of faculty to engage them in a discussion about a fundamental question: “What do you think your students need to know, understand and be able to do?” This led to conversations about how students learn and what intellectual skills they need to develop.

We began reverse engineering the curriculum, which forced us to look at how general education and the majors work together to produce proficient graduates. This process also led us to ask where degrees begin, as well as where they end, and taught us how important advisers, librarians and other colleagues are to strong degrees.

The proficiencies and competencies outlined in the D.Q.P. provided us with a common institutional language to use in navigating these questions. The D.Q.P.’s guideposts also helped us to avoid reducing our definition of learning to course content and enabled us to stay focused on the broader framework of student proficiencies at various degree milestones.

Ultimately the D.Q.P. helped us understand the end product of college degrees, regardless of major: citizens who are capable of thinking critically, communicating clearly, deploying specialized knowledge and practicing the difficult soft skills needed for a 21st-century workplace.

While establishing these criteria in general education, we are teaching our students to see their degrees holistically. In our first-year program, called Connections, we engage students in becoming "intentional learners" who understand that a degree is more than a major. This program also gives students a conceptual grasp of how to use their educations to become well prepared for their professional, personal and civic lives. They can explain their proficiencies within and beyond their disciplines and understand they have soft skills that are at a premium.

While by no means a perfect model, what we’ve done at Utah State showcases the power of engaging faculty and staff as leaders to rethink how a quality degree is defined, assessed and explained. Such engagement couldn’t be more critical.

After all, if we are to change the culture of higher learning, we can't do it without the buy-in from those who perform it. Teachers and advisers want their students to succeed, and the D.Q.P. opens a refreshing conversation about success that focuses on the skills and knowledge students truly need.

The D.Q.P. helps give higher education practitioners an opportunity to do things differently. Let’s not waste it.

Norm Jones is a professor of history and chairman of general education at Utah State University. Harrison Kleiner is a lecturer in philosophy at Utah State.
