Assessment and Accountability

Doubts About Career Readiness From College Seniors

Only 40 percent of college seniors say their experience in college has been very helpful in preparing them for a career, according to the results of a survey by McGraw-Hill Education. Students majoring in arts and humanities are more than three times as likely as other students to say they feel “not at all prepared” for their careers (18 percent compared to less than 6 percent of all other students), according to the survey.

The third annual version of McGraw-Hill's workforce readiness survey found a rise in the perceived importance of preparing for careers in college. While students report that they are increasingly satisfied with their overall college experience (79 percent in 2016 compared to 65 percent in 2014), an increasing percentage said they would have preferred their schools to provide:

  • More internships and professional experiences (67 percent in 2016 compared to 59 percent in 2014).
  • More time to focus on career preparation (59 percent compared to 47 percent).
  • Better access to career preparation tools (47 percent compared to 38 percent).
  • More alumni networking opportunities (34 percent compared to 22 percent).

The survey also asked students whether they would have chosen a different college path if community college were free.

Defining disciplinary learning outcomes and principles for assessment (essay)

The higher education lore is that faculty members cannot agree on anything. Like other myths, this accepted folk wisdom is far from the truth. 

Indeed, over the course of our careers, we have repeatedly observed faculty members coming together collaboratively to address the challenges faced institutionally or in higher education more broadly. More recently, we have been heartened and inspired in particular by those who spent the last several years grappling with a fundamental question: what should students learn in higher education?

Instead of ignoring external pressures to measure and improve college outcomes, faculty members came together under the auspices of the Social Science Research Council's Measuring College Learning Project, which we have helped lead, to address these pedagogical challenges. Faculty members in six disciplines -- biology, business, communication, economics, history and sociology -- engaged in invigorating discussions, lively debates and difficult conversations. Supported by their disciplinary associations and encouraged by their collaborative spirit, they have articulated frameworks for defining learning outcomes in six disciplines and the principles for assessing learning outcomes in the 21st century, as described in the recently released Improving Quality in American Higher Education: Learning Outcomes and Assessments for the 21st Century.

In our work, we have found that faculty members readily agree that higher education is not about efficient acquisition of surface content knowledge and the simple regurgitation of memorized facts. That does not mean that content is unimportant. Content is indeed crucial, but primarily as a building block for more complex forms of thinking. Faculty members are eager to get students to apply, analyze and evaluate from their disciplinary perspectives, to acquire a disciplinary mindset and think like a biologist or an economist.

Faculty members across disciplines in the MCL project rather quickly coalesced around “essential concepts and competencies” for their disciplines, which represent ideas and skills that faculty believe are fundamental to the discipline, valuable to students and worth emphasizing given limited time and resources. There are similarities across disciplines, including an emphasis on analytical writing and problem-solving, but these generic skills take form, are defined and are honed within specific fields of study. They are not abstract ideas, but concepts and competencies that faculty members engage, develop and deploy in their work and value in their disciplines.

Faculty members are also often seen as resisting assessment. But, in fact, what they resist are simplistic assessments of student learning that focus on recollection of knowledge, rely on blunt instruments and are narrow and reductionist. They resist, as would all other professions, externally imposed mandates that fail to reflect the complexity of their jobs or that misrepresent the purpose of higher education. But they also believe that what they are doing makes a difference -- that they are teaching students how to see the world in a new light -- and they would be eager to have the tools to demonstrate their contributions to the development of student cognitive capacities.

Constructive conversations about learning outcomes and assessments require the proper context and frame. That is rarely offered in a world in which we in higher education are on the defensive, trying to argue against externally proposed accountability measures based on distal labor market outcomes, instead of being proactive and making the case on our own terms. There is no shortage of proposals in the public sphere about what higher education should do. But those conversations often lack the voices of faculty members, who are the professionals with responsibility for defining, enabling and assessing what students learn.

The faculty should be at the forefront of the conversations about the purposes of higher education and thus at the center of defining and measuring undergraduate learning outcomes. That is not only a matter of professional duty but also of doing justice to our students. Students from all backgrounds and institutions should have an opportunity to demonstrate their knowledge and skills.

Years of institutionally incentivized grade inflation and proliferation of course titles have all but made transcripts irrelevant. In our research, we found that most employers do not even ask to see them. And while some recent efforts have aimed to add extracurricular activities and other accomplishments to college transcripts, none of those tell us what students actually know or can do. Taking a class is not the same thing as mastering the concepts and competencies presented. Being a member of a club similarly says little about the skills a student has developed.

In addition to placing faculty and student learning at the center of the conversation, the MCL project is committed to recognizing the complexity of what higher education aims to accomplish and ensuring that any measure of learning is part of a larger holistic assessment plan. The project focuses on the disciplines. That does not preclude making sure that students are also civically minded and globally competent. It only means that we need to be clear about which part of the puzzle one hopes to address with a discipline-focused initiative.

The MCL project is committed to ensuring that institutions use assessment tools on a voluntary basis. We have elaborated elsewhere the pitfalls of externally imposed accountability. Only by willingly looking in the mirror will higher education institutions make progress toward improving student learning outcomes.

While assessment should be voluntary, it need not be a solitary endeavor. Collaborating with other institutions makes us not only realize that we all face challenges and struggle with current circumstances but also offers insight into possible ways forward. Measures of learning outcomes must be of high quality and comparable, so they can allow multiple institutions to use them and share their insights. Governed by the principle of continuous improvement, assessments -- albeit limited and imperfect -- are necessary tools on the road toward reaching our goals.

As we look toward the future, we are excited and energized by the commitment and thoughtfulness of the faculty members who participated in the MCL project. They have put forth a bold and forward-thinking vision for the future of learning and assessment in their disciplines: a set of frameworks that will be subject to ongoing iteration and improvement in the years ahead. Instead of waiting for the storm to subside, these faculty members and their disciplinary associations have tackled the challenge head on. They have paved the way for a more promising future.

Josipa Roksa is associate professor of sociology and education at the University of Virginia. Richard Arum is chair of sociology at New York University and incoming dean of the School of Education at the University of California at Irvine. They are the authors of Academically Adrift: Limited Learning on College Campuses (University of Chicago, 2011).


Akron abandons advising experiment with outside start-up after 1 year


University of Akron abandons its partnership with untested start-up company to provide "success coaches" after one year.

Growing number of community colleges use multiple measures to place students


More community colleges are moving away from relying on placement exams alone to figure out whether incoming students need remediation, but establishing a substitute system can be tricky.

Critique of Performance-Based Funding

The Century Foundation on Wednesday published a report that is critical of state policies that link funding of public colleges with measures of their performance, such as graduation rates and degree production numbers. Roughly 35 states are either developing or using some form of performance-based funding for higher education.

The new report's author, Nicholas Hillman, an assistant professor of education at the University of Wisconsin at Madison who has studied such state-based formulas, argues that performance-based funding is rarely effective.

"While pay for performance is a compelling concept in theory, it has consistently failed to bear fruit in actual implementation, whether in the higher education context or in other public services," Hillman wrote. "Performance-based funding regimes are most likely to work in noncomplex situations where performance is easily measured, tasks are simple and routine, goals are unambiguous, employees have direct control over the production process, and there are not multiple people involved in producing the outcome."

Toward Better National Data on Postsecondary Education

The Institute for Higher Education Policy is today releasing a series of papers that, taken together, are designed to point the way toward a more vibrant set of national data on student outcomes.

The papers, which come from a wide range of policy experts, cover an array of topics, such as the possibility of creating a federal student-level data system, how to link existing federal data systems, strategies for protecting privacy of students and the possible role of the National Student Clearinghouse. The release is in conjunction with an event today in Washington, D.C.

44 Colleges Join U.S. Experiment on Dual Enrollment

The U.S. Education Department on Monday announced that it had chosen 44 colleges for an experiment in which they will be able to give Pell Grants to high school students participating in dual enrollment programs. The announcement carries out the department's plan (another in a string of efforts to use its "experimental sites" authority) to allow as many as 10,000 high school students to use federal postsecondary student aid funds to take college-level courses, which is generally prohibited by federal law.

The institutions chosen to participate, about 80 percent of which are community colleges, have agreed to use promising practices for ensuring the students' success, such as creating clear curricular pathways, building linkages to careers and ensuring strong advising.

New Metrics Urged for Performance and Equity

A new report from the Institute for Higher Education Policy identifies the key metrics that would help federal and state data systems provide information on colleges' performance, efficiency and equity.

The report, developed in partnership with the Bill and Melinda Gates Foundation, finds that the information available today leaves unanswered key questions about college access, progression, completion, cost and outcomes. Integrating the three metrics identified in the report into federal and state data systems would make that information available for all students at all types of institutions.

"This report draws on the knowledge and experience of higher education leaders and experts to lay out in detail the metrics we should be collecting and explains why those data will make a difference, for all students, but particularly for those who traditionally have been underserved by higher education," said Michelle Cooper, IHEP's president, in a news release. "The field needs a core set of comprehensive and comparable metrics and should incorporate those metrics into federal and state data systems."

General Assembly on Measuring Student Results

General Assembly, the largest of the skills boot camp providers, today released a public framework for measuring student outcomes. Boot camps are not accredited. And while many claim job-placement rates of more than 90 percent, those numbers typically are not verified by outside groups. But Skills Fund, a student lender for boot camps, and other players are seeking to play that role.

To design its standards for reporting and measuring student success, General Assembly worked with two major accounting firms to craft an approach public companies use to measure nonfinancial metrics such as social impact and environmental sustainability.

"Our goal is to start a conversation about outcomes predicated on the use of consistent definitions and the application of a rigorous framework and methodology," the company said. "Over time, we hope to develop new measures of return on education that consider income or other criteria that can be used by students and other stakeholders to understand student success in even more specific and granular ways."

Essay challenging academic studies on states' performance funding formulas

A recent Inside Higher Ed article about the analysis of state performance funding formulas by Seton Hall University researchers Robert Kelchen and Luke Stedrak might unfairly lead readers to believe that such formulas are driving public colleges and universities to intentionally enroll more students from high-income families, displacing much less well-off students. It would be cause for concern if institutions were intentionally responding to performance-based funding policies by shifting their admissions policies in ways that make it harder for students who are eligible to receive Pell Grants to go to college.

Kelchen and Stedrak’s study raises this possibility, but even they acknowledge the data fall woefully short of supporting such a conclusion. These actions would, in fact, be contrary to the policy intent of more recent and thoughtfully designed outcomes-based funding models pursued in states such as Ohio and Tennessee. These formulas were adopted to signal to colleges and universities that increases in attainment that lead to a better-educated society necessarily come from doing a much better job of serving and graduating all students, especially students of color and students from low-income families.

Unfortunately, Kelchen and Stedrak’s study has significant limitations, as has been the case with previous studies of performance-based funding. Most notably, as the authors acknowledge, these studies lump together a wide variety of approaches to performance-based funding, some adopted decades ago, which address a number of challenges not limited to the country’s dire need to increase educational attainment. Such a one-size-fits-all approach fails to give adequate attention to the fact that how funding policies are designed and implemented actually matters.

For example, the researchers’ assertion that institutions could possibly be changing admissions policies to enroll better-prepared, higher-income students does not account for differential effects among states that provide additional financial incentives in their formulas to ensure low-income and minority students’ needs are addressed vs. those states that do nothing in this area. All states are simply lumped together for purposes of the analysis.

In addition, the claim that a decrease in Pell dollars per full-time-equivalent student could possibly be caused by performance-based funding fails to account for changes over time in federal policy related to Pell Grants, different state (and institutional) tuition policies, other state policies adopted or enacted over time, changes in the economy and national and state economic well-being, and changes in student behavior and preferences. For example, Indiana public research and comprehensive universities have become more selective over time because of a policy change requiring four-year institutions to stop offering remedial and developmental education and associate degrees, instead sending these students to community colleges.

If any of these factors have affected states with newer, well-designed outcomes-based funding systems differently from states with rudimentary performance-based funding or no such systems at all, as I believe they have, then there is strong potential for research bias introduced by failing to account for key variables. For example, in states that are offering incentives for students to enroll in community colleges, such as Tennessee, the average value of Pell Grants at public bachelor’s-granting institutions would drop if more low-income, Pell-eligible students were to choose to go to lower-cost, or free, community colleges.

I agree with Kelchen and Stedrak that more evaluation and discussion are needed on all forms of higher education finance formulas to better understand their effects on institutional behavior and student outcomes. Clearly, there are states that had, and in some cases continue to have, funding models designed in a way that could create perverse incentives for institutions to raise admissions standards or to respond in other ways that run contrary to raising attainment for all students, and for students of color in particular. As the Seton Hall researchers point out, priority should be given to understanding the differential effects of various elements that go into the design and implementation of state funding models.

The HCM Strategists’ report referenced in the study was an attempt by us to inform state funding model design and implementation efforts. There needs to be a better understanding of which design elements matter for which students in which contexts -- as well as the implications of these evidence-based findings for policy design and what finance policy approaches result in the best institutional responses for students. There is clear evidence that performance funding can and does prompt institutions to improve student supports and incentives in ways that benefit students.

Analysis under way by Research for Action, an independent, Philadelphia-based research shop, will attempt to account for several of the existing methodological limitations correctly noted by Kelchen and Stedrak. This quantitative and qualitative analysis focuses on the three most robust and longest-tenured outcomes-based funding systems, in Indiana, Ohio and Tennessee.

Factors examined by Research for Action will include the type of outcomes-based funding being implemented, specifics of each state’s formula as applied to both the two- and four-year sectors, the timing of full implementation, changes in state policies over time, differences in the percentages of funding allocated based on outcomes such as program and degree completion, and differences in overall state allocations to public higher education. And, for the first time, Research for Action will move beyond the limitations of analyses based primarily on federal IPEDS data by incorporating state longitudinal data, which give a more complete picture.

As states continue to implement various approaches to funding higher education, it is essential to understand the effects on institutional behavior and student outcomes. Doing so will require more careful analyses than those seen to date and a more detailed understanding of policy design and implementation factors that are likely to affect institutional responses. Broad-brush analyses such as Kelchen and Stedrak’s can help to inform the questions that need to be asked but should not be used to draw any meaningful conclusions about the most effective ways to ensure colleges and universities develop and maintain a laser focus on graduating more students with meaningful credentials that offer real hope for the future.

Martha Snyder is a director at HCM Strategists, a public policy advocacy and consulting firm.
