A new federal report presents a wealth of data about how 2002's 10th graders fared in higher education (or did not) a decade later -- potentially offering researchers and policy makers enormous insight into who attains postsecondary success and why.
The report offers a first look at new data from one of the U.S. Education Department's most important longitudinal research studies, the Education Longitudinal Study of 2002, which followed 10th graders through the 2012-13 academic year. Eighty-four percent of those high school sophomores went on to at least some postsecondary education within that decade, while 16 percent did not, with rates varying, somewhat predictably, by demographic traits (women were more likely to go on than men, students from wealthier socioeconomic backgrounds were more likely than their peers, etc.).
For those who went on to postsecondary education, the study examines what they attained (how many credits earned, whether they earned a credential and, if so, what kind), when and in what kind of institution they enrolled (13 percent of students attended a two-year institution first and then a four-year college, and 12 percent did the reverse), how they performed in terms of grade point average and other outcomes, and how many times they stopped their studies.
The report also includes data on the proportion of undergraduate credits that students actually earned versus those they attempted, and provides a slew of information on the characteristics of students who took at least one remedial course.
The American Enterprise Institute's Center on Higher Education Reform today released two new reports on competency-based education, which follow a report the center released in January. The first paper uses results from a survey of hiring managers at companies around the country to learn about employers' perceptions of the emerging form of higher education. The survey found that while employers' overall awareness of competency-based education is low, those that do know about it have a favorable view.
The center's second paper seeks to describe best practices for the assessments that competency-based programs use. The report argues that the credibility of this form of higher education hinges on the quality of those assessments.
The federal government has made significant investments over the last several decades toward reducing socioeconomic inequalities in college access and success: hundreds of billions of dollars in financial aid; a host of informational tools to provide students and families with better information about college quality and costs; prominent attention from the White House, including not one but two presidential summits in 2014 on expanding college opportunity for economically disadvantaged students.
Yet as we all know, gaps in college completion by family income have actually widened over time. How can this be?
Simply creating a financial aid system and college search tool kit isn’t enough. We have to make sure students and families know about these resources and can easily access them.
Think about Apple products for a minute. Why do we buy iPhones, iPads and MacBook Pros? First and foremost, they are durable, high-quality devices. We imagine some readers may be able to rattle off the technical specifications that make these devices superior to their competitors.
But there are four characteristics that really hooked us on Apple products: (1) They are easy and intuitive to use. It’s unlikely the iPhone would have gone viral if the average user needed to spend an hour reading through an instruction manual before getting started; (2) The devices have sheer stylistic appeal -- sleek metallic casings, glossy touch screens; (3) Apple has been incredibly savvy in its marketing and advertisements -- Steve Jobs’s legendary launch events, crisp Apple television and print ads; (4) The social atmosphere of the Apple stores and the skill of the Genius Bar technicians make getting help when problems arise an enjoyable experience.
Now think about the Pell Grant, or the College Navigator search tool, or federal loan entrance counseling. The maximum Pell Grant is worth over $5,000 a year, so one could argue that the product quality is in place. But the Pell Grant falls short on other dimensions that make Apple products so successful. The grant is not nearly as well marketed, so some students and families aren’t aware that it exists or that the money doesn’t need to be paid back.
And to access the money, students and families need to complete a cumbersome and confusing financial aid application. Imagine if, to get your iPhone, you had to first fill out a complicated rebate form, send it in and wait for a few months for the device to arrive.
College Navigator has literally hundreds of data points on every college and university in the country, but this is as much a problem as a benefit -- too much information overwhelms the average user. There are also design limitations -- the site is set up for people who know what to look for and how to interpret all the information they see.
There’s little guidance for students about how to structure a college search, or which data points to prioritize over others. And College Navigator has an even bigger name-recognition problem than the Pell Grant. Our guess is that only a small fraction of first-generation college students in the country has even heard of the tool.
We can learn a great deal from companies like Apple. We’ve also learned a lot from the burgeoning science of decision making over the last several years. In the face of complex decisions and complicated choices -- like deciding where to apply to college or navigating the financial aid process -- people have a common set of responses.
One common behavior is to put off making any decision at all. Another is to use a simplifying strategy, like choosing which college to attend based on where your friends have gone to school or on a connection with a particularly charismatic tour guide. And in some cases, people never actively decide -- they just follow the path of least resistance. For middle-income students, this might mean enrolling in the nearby public university. For students whose parents did not go to college, it might mean looking for a local job after high school graduation.
Research from fields like behavioral economics, psychology and neuroscience has helped us recognize what private sector companies like Apple have known and exploited for decades: just having a good product or policy isn’t enough. For policies to achieve their desired aims, we need to do what Apple does -- develop high-quality products, and then devote just as much attention to publicity, consumer engagement and customer service as we do to policy development.
What does this mean in practice?
Nudging students about important tasks. Especially in this day and age, adolescents often balance a multitude of academic, social, work and family commitments. These responsibilities, on top of the fact that adolescents frequently struggle with organization and long-term planning, mean that even students with clear intentions to start or stay in college may miss important deadlines. Simple strategies like sending students text message reminders to renew their federal financial aid can help students translate their intentions into concrete actions.
Improving the design of publicity materials. There’s a reason Apple print ads rely on striking visuals and minimal content: people tend to glaze over dense text. Yet much student-facing communication is incredibly text heavy. By simplifying the content of letters, emails and websites and incorporating behavioral cues for students to take action, we can more effectively help students take advantage of the opportunities and resources that are available to them.
Simplifying enrollment processes. Researchers and advocates have devoted considerable attention over the last decade to how complexities in the federal financial aid application process can deter college-ready and financially eligible students from receiving aid. Reducing hassles associated with completing the Free Application for Federal Student Aid (FAFSA) -- either by simplifying the form itself or by making it easier for students to get assistance -- can lead to substantial increases in the share of students who receive aid and enroll in college.
The student loan origination process is another important decision-making bottleneck where highly complex information may inhibit students from making informed choices about how much to borrow. Simplifying information about borrowing and increasing access to loan counseling can help students make more informed choices about borrowing levels that are a good fit for their personal circumstances.
Changing default options. Several important stages in the college-going process -- taking college entrance exams, choosing courses once in college -- require active steps on the student’s part. Failure to take action can lead students to miss important opportunities, like getting into key prerequisite courses for their intended majors.
We can change the default option so that, for example, the curriculum is laid out for students unless they actively make different choices. Several states have shifted to mandatory college entrance exam testing to increase the number of students who take the S.A.T. or A.C.T. Several colleges have employed active course mapping that provides students with a scripted set of courses to take that will help them complete their intended majors in the least amount of time possible. Students can opt out of these course maps, but only by taking active steps to meet with an adviser to discuss alternative course options.
The greatest appeals of these approaches include their relative ease of implementation, low cost and scalability. A rapidly growing number of academic, public and private-sector ventures are applying behavioral insights to improve postsecondary access and success. Some of these initiatives have been rigorously evaluated through randomized controlled trials and have generated substantial improvements in students’ outcomes. We bring together insightful essays on many of these innovative approaches in our forthcoming volume, Decision Making for Student Success.
Behavioral solutions alone won’t eliminate socioeconomic inequalities in postsecondary access and success. But for a relatively small investment in these strategies, we can meaningfully improve the efficacy of existing programs and policies and expand college opportunity for hardworking but economically disadvantaged students.
Ben Castleman is an assistant professor of education and public policy at the University of Virginia. Saul Schwartz is a professor in Carleton University's School of Public Policy and Administration. Sandy Baum is a senior fellow at the Urban Institute.
Inside Higher Ed is pleased to release today "New Debates About Accountability," our latest compilation of articles. As with other such print-on-demand booklets, the compilation groups together news articles and opinion essays representing a range of views. The booklet is free and you may download a copy here. And you may sign up here for a free webinar on Wednesday, April 29, at 2 p.m. Eastern, about the themes of the booklet.
The student services company Chegg on Monday announced its users will soon be able to subscribe to career counseling from InsideTrack. The counseling service, currently in beta, will launch next month.
Chegg, which made a name for itself as a used textbook provider, has moved quickly to invest in student services. Among its recent moves, the company has launched a college counseling platform, acquired Internships.com and embedded its services in Blackboard's learning management system.
InsideTrack has provided career counseling to students (in addition to its much larger service providing academic and life coaching) through their colleges for about three years. But the partnership with Chegg represents its first major effort to provide services directly to students, using a multimedia platform it has built in the wake of its purchase last year of Logrado.
For all the extensive consultation about the Postsecondary Institutions Ratings System during the past 18 months, all the meetings and the many conversations, we know almost nothing about its likely impact on accreditation, the all-important effort by colleges, universities and accrediting organizations working together to define, judge and improve academic quality.
All that the U.S. Department of Education has officially said to date is that the system will “help inform” accreditation -- and we do not know what this means.
This is worrisome. Ratings create, in essence, a federal system of quality review of higher education, with the potential to upend the longstanding tradition of nongovernmental accreditation that has carried out this role for more than 100 years. And establishing the system may mean the end of more than 60 years of accreditation as a partner with government, the reliable authority on educational quality to which Congress and the Education Department have turned.
Accreditation is about judgment of academic quality in the hands of faculty members and academic administrators. It is about the commitment to peer review -- academics reviewing academics yet accountable to the public -- as the preferred, most effective mode of determining quality. It is about leadership for academic judgment when it comes to such factors as curriculum, programs, standards and strategic direction remaining in the hands of the academic community.
In contrast, a ratings system is a path to a government model of quality review in place of the current model of academics as the primary judges of quality.
First introduced by President Obama in August 2013 and turned over to the Education Department for development, the ratings system is on track for implementation in 2015-16. Based on the still incomplete information the department has released to the public, the system is intended to rate (read: judge) colleges and universities based on three indicators: access, affordability and student outcomes. Institutions will be considered “high performing,” “low performing” or “in the middle.” Ultimately, the amount of federal student aid funding a college or university receives is intended to be linked to its rating.
A federal ratings system is both an existential and political challenge to accreditation.
First, there is the challenge of a potential shift of ownership of quality. Second, new key actors in judging quality may be emerging. Finally, the relationship between accreditation and the federal government when it comes to quality may be shifting, raising questions about both the gatekeeping role of accreditation in eligibility for federal funds and the agreement about distribution of responsibilities among the parties in the triad -- the federal government, the states and accreditation.
A ratings system means that government owns quality through its indicators and its decisions about what counts as success in meeting the indicators. The indicators replace peer review.
It means that government officials are key actors in judging quality. Officials replace academics. With all respect to the talent and commitment of these officials, they are not hired for their expertise in teaching and learning, developing higher education curriculum, setting academic standards, or conducting academic research. Yet using a ratings system calls for just these skills.
A ratings system means that the relationship between accreditors and the federal government, with the accreditors as dominant with regard to quality judgments, may give way to a lesser role for accreditation, perhaps using performance on the ratings system as a key determinant of eligibility for federal funds -- in addition to accreditation. Or, it is not difficult to envision a scenario in which ratings replace accreditation entirely with regard to institutional eligibility for access to federal financial aid.
We need to acknowledge how much we do not know about the ratings system. Going forward, we will benefit from keeping the following questions in mind as the system -- and its impact on accreditation -- continues to develop.
First, there are questions about the big picture of the ratings system:
Has a decision been made that the United States, with its distinctive government-private sector partnership that maximizes the responsible independence of higher education, is now shifting to the model of government dominance of higher education that typifies most of the rest of the world?
What reliable information will be available to students and the public through the ratings system that they do not currently have? Will this information be about academic quality, including effective teaching and learning? What is the added value?
Second, there are questions about the impact of the ratings on accredited institutions:
Are the indicators to serve as the future quality profile of a college or university? Will the three indicators that the system uses -- access, affordability and outcomes -- become the baseline for judging academic quality in the future?
Will it be up to government to decide what counts as success with regard to the outcomes indicators for a college or university -- graduation, transfer of credit, entry to graduate school and earnings?
To claim quality, will colleges and universities have to provide information not only about their accredited status but also about their ratings, whether “high performing,” “low performing” or “in the middle”?
Will institutions be pushed to diminish their investment in accreditation if, ultimately, it is the ratings that matter -- in place of accreditation?
Finally, there are questions about how ratings will affect the day-to-day operation of accrediting organizations and their relationship to the federal government:
Will accreditors be required to collect/use/take into account the information generated by the ratings system? If so, how is this to influence their decisions about institutions and programs that are currently based on peer review, not ratings?
Will performance on the ratings system be joined with formal actions of accrediting organizations, with both required for accredited status and thus eligibility of institutions for federal funds -- in contrast to the current system of reliance on the formal actions of accrediting organizations?
How, if at all, will the ratings system affect the periodic federal review of the 52 accrediting organizations that are currently federally recognized? Will the government review now include the ratings of institutions as part of examination and judgment of an accreditor’s effectiveness?
While we cannot answer many of these questions at this time, we can use them to anticipate what may take place in the approaching reauthorization of the Higher Education Act, with bills expected in spring or summer.
We can use them to identify key developments in the ratings that have the potential to interfere with our efforts to retain peer review and nongovernmental quality review in preference to the ratings system.
Judith S. Eaton is president of the Council for Higher Education Accreditation.
This revised framework, the Degree Qualifications Profile (D.Q.P.), marks a significant step in the conversation about measuring students’ preparedness for the workforce and for life success based on how much they've learned rather than how much time they’ve spent in the classroom. It also provides a rare opportunity for faculty members at colleges and universities to take the lead in driving long-overdue change in how we define student success.
The need for such change has never been stronger. As the economy evolves and the cost of college rises, the value of a college degree is under constant scrutiny. No longer can we rely on piled-up credit hours to prove whether students are prepared for careers after graduation. We need a more robust -- and relevant -- way of showing that our work in the classroom yields results.
Stakeholders ranging from university donors to policy makers have pushed for redefining readiness, and colleges and universities have responded to their calls for action. But too often the changes have been driven by the need to placate those demanding reform and produce quick results. That means faculty input has been neglected.
If we’re to set up assessment reform for long-term success, we need to empower faculty members to be the true orchestrators.
The D.Q.P. provides an opportunity to do that, gelling conversations that have been going on among faculty and advisers for years. Lumina Foundation developed the tool in consultation with faculty and other experts from across the globe and released a beta version to be piloted by colleges and universities in 2011. The latest version reflects feedback from institutions in the field, based on their experience with the beta version -- and captures the iterative, developmental processes of education understood by people who work with students daily.
Many of the professionals teaching in today’s college classrooms understand the need for change. They’re used to adapting to ever-changing technologies, as well as evolving knowledge. And they want to measure students’ preparedness in a way that gives them the professional freedom to own the changes and do what they know, as committed professionals, works best for students.
As a tool, the D.Q.P. encourages this kind of faculty-driven change. Rather than a set of mandates, the D.Q.P. is a framework that invites faculty members to be change agents. It allows them to assess students in ways that are truly beneficial to student growth. Faculty members aren't interested in teaching to the assessment; they want to use what they glean from assessments to improve student learning.
We’ve experienced the value of using the D.Q.P. in this fashion at Utah State University. In 2011, when the document was still in its beta version, we adopted it as a guide to help us rethink general education and its connection to our degrees and the majors within them.
We began the process by convening disciplinary groups of faculty to engage them in a discussion about a fundamental question: “What do you think your students need to know, understand and be able to do?” This led to conversations about how students learn and what intellectual skills they need to develop.
We began reverse engineering the curriculum, which forced us to look at how general education and the majors work together to produce proficient graduates. This process also forced us to ask where degrees started, as well as ended, and taught us how important advisers, librarians and other colleagues are to strong degrees.
The proficiencies and competencies outlined in the D.Q.P. provided us with a common institutional language to use in navigating these questions. The D.Q.P.’s guideposts also helped us to avoid reducing our definition of learning to course content and enabled us to stay focused on the broader framework of student proficiencies at various degree milestones.
Ultimately the D.Q.P. helped us understand the end product of college degrees, regardless of major: citizens who are capable of thinking critically, communicating clearly, deploying specialized knowledge and practicing the difficult soft skills needed for a 21st-century workplace.
While establishing these criteria in general education, we are teaching our students to see their degrees holistically. In our first-year program, called Connections, we engage students in becoming "intentional learners" who understand that a degree is more than a major. This program also gives students a conceptual grasp of how to use their educations to become well prepared for their professional, personal and civic lives. They can explain their proficiencies within and beyond their disciplines and understand they have soft skills that are at a premium.
While by no means a perfect model, what we’ve done at Utah State showcases the power of engaging faculty and staff as leaders to rethink how a quality degree is defined, assessed and explained. Such engagement couldn’t be more critical.
After all, if we are to change the culture of higher learning, we can't do it without the buy-in from those who perform it. Teachers and advisers want their students to succeed, and the D.Q.P. opens a refreshing conversation about success that focuses on the skills and knowledge students truly need.
The D.Q.P. helps give higher education practitioners an opportunity to do things differently. Let’s not waste it.
Norm Jones is a professor of history and chairman of general education at Utah State University. Harrison Kleiner is a lecturer in philosophy at Utah State.
Zaytuna College has become the first accredited Muslim college in the United States, after the college commission of the Western Association of Schools and Colleges granted its approval, The Los Angeles Times reported. Zaytuna is based in Berkeley, Calif.
Submitted by Paul Fain on February 24, 2015 - 3:00am
The National Student Clearinghouse Research Center this week released state-level student completion data. The nonprofit center tracked 2.7 million students who first enrolled in college in the fall of 2008, following them for six years. The report builds on the center's previous research, which found more encouraging graduation rates than other studies had identified, in part because the Clearinghouse has huge data sets that can follow students across institutions and state lines.
Nationwide, the report found that one in three community college students earned a credential at an institution other than the one at which they first enrolled. And 13 percent of students who began at a four-year public institution completed at a different one. In five states (Iowa, North Dakota, Virginia, Kansas and Texas), more than 20 percent of students who began at a community college completed at a four-year institution. The report includes state-by-state tables and other breakouts of the data.