
44 Colleges Join U.S. Experiment on Dual Enrollment

The U.S. Education Department on Monday announced that it had chosen 44 colleges for an experiment in which they will be able to give Pell Grants to high school students participating in dual enrollment programs. The announcement carries out the department's plan (another in a string of efforts to use its "experimental sites" authority) to allow as many as 10,000 high school students to use federal postsecondary student aid funds to take college-level courses, which is generally prohibited by federal law.

The institutions chosen to participate, about 80 percent of which are community colleges, have agreed to use promising practices for ensuring the students' success, such as creating clear curricular pathways, building linkages to careers and ensuring strong advising.

New Metrics Urged for Performance and Equity

A new report from the Institute for Higher Education Policy identifies the key metrics that would help federal and state data systems provide information on colleges' performance, efficiency and equity.

The report, developed in partnership with the Bill and Melinda Gates Foundation, argues that the information available today leaves key questions about college access, progression, completion, cost and outcomes unanswered. Integrating the metrics identified in the report into federal and state data systems would make that information available for all students at all types of institutions.

"This report draws on the knowledge and experience of higher education leaders and experts to lay out in detail the metrics we should be collecting and explains why those data will make a difference, for all students, but particularly for those who traditionally have been underserved by higher education," said Michelle Cooper, IHEP's president, in a news release. "The field needs a core set of comprehensive and comparable metrics and should incorporate those metrics into federal and state data systems."

General Assembly on Measuring Student Results

General Assembly, the largest of the skills boot camp providers, today released a public framework for measuring student outcomes. Boot camps are not accredited. And while many claim job-placement rates of more than 90 percent, those numbers typically are not verified by outside groups. But Skills Fund, a student lender for boot camps, and other players are seeking to play that role.

To design its standards for reporting and measuring student success, General Assembly worked with two major accounting firms to craft an approach modeled on how public companies measure nonfinancial metrics such as social impact and environmental sustainability.

"Our goal is to start a conversation about outcomes predicated on the use of consistent definitions and the application of a rigorous framework and methodology," the company said. "Over time, we hope to develop new measures of return on education that consider income or other criteria that can be used by students and other stakeholders to understand student success in even more specific and granular ways."

Essay challenging academic studies on states' performance funding formulas

A recent Inside Higher Ed article about the analysis of state performance funding formulas by Seton Hall University researchers Robert Kelchen and Luke Stedrak might unfairly lead readers to believe that such formulas are driving public colleges and universities to intentionally enroll more students from high-income families, displacing less well-off students. It would be cause for concern if institutions were intentionally responding to performance-based funding policies by shifting their admissions policies in ways that make it harder for students who are eligible to receive Pell Grants to go to college.

Kelchen and Stedrak’s study raises this possibility, but even they acknowledge the data fall woefully short of supporting such a conclusion. These actions would, in fact, be contrary to the policy intent of more recent and thoughtfully designed outcomes-based funding models pursued in states such as Ohio and Tennessee. Those formulas were adopted to signal to colleges and universities that the attainment gains needed for a better-educated society must come from doing a much better job of serving and graduating all students, especially students of color and students from low-income families.

Unfortunately, Kelchen and Stedrak’s study has significant limitations, as has been the case with previous studies of performance-based funding. Most notably, as the authors acknowledge, these studies lump together a wide variety of approaches to performance-based funding, some adopted decades ago, that address a range of challenges beyond the country’s dire need to increase educational attainment. Such a one-size-fits-all approach fails to give adequate attention to the fact that how funding policies are designed and implemented actually matters.

For example, the researchers’ assertion that institutions could possibly be changing admissions policies to enroll better-prepared, higher-income students does not account for differential effects among states that provide additional financial incentives in their formulas to ensure low-income and minority students’ needs are addressed vs. those states that do nothing in this area. All states are simply lumped together for purposes of the analysis.

In addition, the claim that a decrease in Pell dollars per full-time-equivalent student could possibly be caused by performance-based funding fails to account for changes over time in federal policy related to Pell Grants, different state (and institutional) tuition policies, other state policies adopted or enacted over time, changes in the economy and national and state economic well-being, and changes in student behavior and preferences. For example, Indiana public research and comprehensive universities have become more selective over time because of a policy change requiring four-year institutions to stop offering remedial and developmental education and associate degrees, instead sending these students to community colleges.

If any of these factors have affected states with newer, well-designed outcomes-based funding systems differently from states with rudimentary performance-based funding or no such systems at all, as I believe they have, then failing to account for key variables introduces a strong potential for research bias. For example, in states that offer incentives for students to enroll in community colleges, such as Tennessee, the average value of Pell Grants at public bachelor’s-granting institutions would drop if more low-income, Pell-eligible students chose to attend lower-cost, or free, community colleges.

I agree with Kelchen and Stedrak that more evaluation and discussion are needed on all forms of higher education finance formulas to better understand their effects on institutional behavior and student outcomes. Clearly, there are states that had, and in some cases continue to have, funding models designed in a way that could create perverse incentives for institutions to raise admissions standards or to respond in other ways that run contrary to raising attainment for all students, and for students of color in particular. As the Seton Hall researchers point out, priority should be given to understanding the differential effects of various elements that go into the design and implementation of state funding models.

The HCM Strategists’ report referenced in the study was our attempt to inform state funding model design and implementation efforts. There needs to be a better understanding of which design elements matter for which students in which contexts -- as well as the implications of these evidence-based findings for policy design and which finance policy approaches produce the best institutional responses for students. There is clear evidence that performance funding can and does prompt institutions to improve student supports and incentives in ways that benefit students.

Analysis under way by Research for Action, an independent, Philadelphia-based research shop, will attempt to account for several of the existing methodological limitations correctly noted by Kelchen and Stedrak. This quantitative and qualitative analysis focuses on the three most robust and longest-tenured outcomes-based funding systems, in Indiana, Ohio and Tennessee.

Factors examined by Research for Action will include the type of outcomes-based funding being implemented, specifics of each state’s formula as applied to both the two- and four-year sectors, the timing of full implementation, changes in state policies over time, differences in the percentages of funding allocated based on outcomes such as program and degree completion, and differences in overall state allocations to public higher education. And, for the first time, Research for Action will move beyond the limitations of analyses based primarily on federal IPEDS data by incorporating state longitudinal data, which give a more complete picture.

As states continue to implement various approaches to funding higher education, it is essential to understand the effects on institutional behavior and student outcomes. Doing so will require more careful analyses than those seen to date and a more detailed understanding of policy design and implementation factors that are likely to affect institutional responses. Broad-brush analyses such as Kelchen and Stedrak’s can help to inform the questions that need to be asked but should not be used to draw any meaningful conclusions about the most effective ways to ensure colleges and universities develop and maintain a laser focus on graduating more students with meaningful credentials that offer real hope for the future.

Martha Snyder is a director at HCM Strategists, a public policy advocacy and consulting firm.


Accreditor's New Vision for Business Education

The Association to Advance Collegiate Schools of Business, an accreditor, last week released a new "vision" for management education. In a report, the group identified five roles business schools are well positioned to fill: catalysts for innovation, co-creators of knowledge, hubs of lifelong learning, leaders on leadership and enablers of global prosperity.

"Business education has changed dramatically in the past decade, and schools are facing increasing pressure to drive positive economic and social impact," said Santiago Iniguez de Onzono, chair of AACSB's Committee on Issues in Management Education, chair-elect of the AACSB Board and dean of IE Business School, in a written statement. "Currently many business schools across the globe are already rising to the challenge. By identifying a clearer path forward, the collective vision helps to accelerate the transformation already underway, while becoming a key resource for business schools as they innovate and contribute to society in a more meaningful way."

Research Project on Performance-Based Funding

An ongoing study by Research for Action, a Philadelphia-based nonprofit research organization, is examining the effects of performance-based funding policies in higher education in three states: Indiana, Ohio and Tennessee. The group released early results from the work over the weekend at the annual meeting of the American Educational Research Association.

The project takes into account key differences in the types of policies as well as variations in the state funding tied to them. Initial findings showed consistent positive effects on the number of bachelor's degrees awarded under the policies, but the study did not find evidence of a positive effect on graduation rates.

Series of studies seeks to gauge higher ed effectiveness, defined broadly


New volume of research examines various aspects of higher education performance, going well beyond labor market outcomes to include academic quality and socioeconomic equity.

Essay on how fixation on 'inane' student learning outcomes fails to ensure academic quality

In a recent Century Foundation essay, I raised a concern that accreditors of traditional colleges are allowing low-quality education to go unaddressed while insisting, in a misguided attempt to prove they care about learning, that colleges engage in inane counting exercises involving meaningless phantom creatures they call student learning outcomes, or SLOs.

The approach to quality assurance I recommend, instead, is to focus not on artificially created measures but on the actual outputs from students -- the papers, tests and presentations professors have deemed adequate for students to deserve a degree.

I got a lot of positive feedback on the essay, especially, as it happens, from people involved in some of the processes I was criticizing. Peter Ewell, for example, acknowledged in an email that “the linear and somewhat mindless implementation of SLOs on the part of many accreditors is not doing anybody any good.”

This story began in the 1990s, when reformers thought they could improve teaching and learning in college if they insisted that colleges declare their specific “learning goals,” with instructors defining “the knowledge, intellectual skills, competencies and attitudes that each student is expected to gain.” The reformers’ theory was that these faculty-enumerated learning objectives would serve as the hooks that would then be used by administrators to initiate reviews of actual student work, the key to improving teaching.

That was the idea. But it hasn’t worked out that way. Not even close. Here is one example of how the mindless implementation of this idea distracts rather than contributes to the goal of improved student learning. When a team from the western accreditor, the WASC Senior College and University Commission, visited San Diego State University in 2005, it raised concerns that the school had shut down its review process of college majors, which was supposed to involve outside experts and the review of student work. Now, 10 years have passed and the most recent review by WASC (the team visit is scheduled for this month) finds there are still major gaps, with “much work to be done to ensure that all programs are fully participating in the assessment process.”

What has San Diego State been doing instead of repairing its program review process? It has been writing the meaningless student learning outcome blurbs that accreditors began requiring largely in response to the Spellings Commission in 2006. San Diego State reported its progress in that regard in a self-review it delivered to WASC last year:

Course Learning Outcomes (CLOs) are required for all syllabi; curricular maps relating Degree Learning Outcomes (DLOs) to major required courses are now a required component for Academic Program Review; programs are being actively encouraged to share their DLOs with students and align DLOs with CLOs to provide a broader programmatic context for students and to identify/facilitate course-embedded program assessment.

All this SLO-CLO-DLO gibberish and the insane curriculum map database (really crazy, take a look) are counterproductive, giving faculty members ample ammunition for dismissing the whole process as idiocy. The insulting reduction of learning to brief blurbs, using a bizarre system of verb-choice rules, prevents rather than leads to the type of quality assurance that has student work at the center.

The benefit of starting instead with student work as the unit of analysis is that it respects the unlimited variety of ways that colleges, instructors and students alike, arriving with different skill levels, engage with the curriculum.

Validating colleges’ own quality-assurance systems should become the core of what accreditors do if they want to serve as a gateway to federal funds. Think of it as an outside audit of the university’s academic accounting system.

With this approach, colleges are responsible for establishing their own systems for the occasional review of their majors and courses by outside experts they identify. Accreditors, meanwhile, have the responsibility of auditing those campus review processes, to make sure that they are comprehensive and valid, involving truly independent outsiders and the examination of student work.

SLO madness has to stop. If accreditors instead focus on the traditional program-review processes, assuring that both program reviews and audits include elements of random selection, no corner of the university can presume to be immune from scrutiny.

Robert Shireman is a senior fellow at the Century Foundation and a former official at the U.S. Department of Education.

Essay on value of student learning outcomes in measuring and ensuring academic quality

Robert Shireman is right. The former official at the U.S. Department of Education correctly wrote recently that there is little evidence that using accreditation to compel institutions to publicly state their desired student learning outcomes (SLOs), coupled with the rigid and frequently ritualistic ways in which many accreditation teams now apply these requirements, has done much to improve the quality of teaching and learning in this country.

But the answer, surely, is not to abolish such statements. It is to use them as they were intended -- as a way to articulate collective faculty intent about the desired impact of curricula and instruction. For example, more than 600 colleges and universities have used the Degree Qualifications Profile (DQP). Based on my firsthand experience with dozens of them as one of the four authors of the DQP, their faculties do not find the DQP proficiency statements to be “brief blurbs” that give them “an excuse to dismiss the process,” as Shireman wrote. Instead, they are using these statements to guide a systematic review of their program offerings, to determine where additional attention is needed to make sure students are achieving the intended skills and dispositions, and to make changes that will help students do so.

As another example, the Accreditation Board for Engineering and Technology (ABET) established a set of expectations for engineering programs that have guided the development of both curricula and accreditation criteria since 2000. Granted, SLOs are easier to establish and use in professional fields than they are in the liberal arts. Nevertheless, a 10-year retrospective study, published about two years ago, provided persuasive empirical evidence that engineering graduates were achieving the intended outcomes and that these outcomes have been supported and used by engineering faculties worldwide.

Shireman also is on point about the most effective way to examine undergraduate quality: looking at actual student work. But what planet has he been living on not to recognize that this method is already in widespread use? Results of multiple studies by the National Institute for Learning Outcomes Assessment (NILOA) and the Association of American Colleges and Universities (AAC&U) indicate that this is how most institutions look at academic quality -- far exceeding the numbers that use standardized tests, surveys or other methods. Indeed, faculty by and large already agree that the best way to judge the quality of student work is to use a common scoring guide or rubric to determine how well students have attained the intended proficiency. Essential to this task is to set forth unambiguous learning outcomes statements. There is simply no other way to do it.

As an example of the efficacy of starting with actual student work, 69 institutions in nine states last year looked at written communications, quantitative fluency and critical thinking based on almost 9,000 pieces of student work scored by faculty using AAC&U’s VALUE rubrics. This was done as part of an ongoing project called the Multi-State Collaborative (MSC) undertaken by AAC&U and the State Higher Education Executive Officers (SHEEO). The project is scaling up this year to 12 states and more than 100 institutions. It’s a good example of how careful multi-institutional efforts to assess learning using student work as evidence can pay considerable dividends. And this is just one of hundreds of individual campus efforts that use student work as the basis for determining academic quality, as documented by NILOA.

One place where the SLO movement did go off the rails, though, was in allowing SLOs to become so closely identified with assessment. When the assessment bandwagon really caught on in the mid-1990s, accreditors required institutions and programs to establish SLOs solely for the purpose of constructing assessments. These statements otherwise weren't connected to anything. So it was no wonder that they were ignored by faculty who saw no link to their everyday tasks in the classroom. The hundreds of DQP projects catalogued by NILOA are quite different in this respect, because all of them are rooted closely in curriculum or course design, implementing new approaches to teaching or creating settings for developing particular proficiencies entirely outside the classroom. This is why real faculty members in actual institutions remain excited about them.

At the same time, accreditors can vastly improve how they communicate and work with institutions about SLOs and assessment processes. To begin with, it would help a lot if they adopted more common language. As it stands, they use different terms to refer to the same things and tend to resist reference to external frameworks like the DQP or AAC&U’s Essential Learning Outcomes. As Shireman maintains, and as I have argued for decades, they also could focus their efforts much more deliberately on auditing actual teaching and learning processes -- a common practice in the quality assurance approaches of other nations. Indeed, starting with examples of what is considered acceptable-quality student work can lead directly to an audit approach.

Most important, accreditors need to carefully monitor what they say to institutions about these matters and the consistency with which visiting teams “walk the talk” about the centrality of teaching and learning. Based on volunteer labor and seriously undercapitalized, U.S. accreditation faces real challenges in this arena. The result is that institutions hear different things from different people and constantly try to second-guess “what the accreditors really want.” This compliance mentality is extremely counterproductive and accreditors themselves are only partially responsible for it. Instead, as my NILOA colleagues and I argue in our recent book, Using Evidence of Student Learning to Improve Higher Education, faculty members and institutional leaders need to engage in assessment primarily for purposes of improving their own teaching and learning practices. If they get that right, success with actors like regional accreditors will automatically follow.

So let’s take a step back and ponder whether we can realistically improve the quality of student learning without first clearly articulating what students should know and be able to do as a result of their postsecondary experience. Such learning outcomes statements are essential to evaluating student attainment and are equally important in aligning curricula and pedagogy.

Can we do better about how we talk about and use SLOs? Absolutely. But abandoning them would be a serious mistake.

Peter Ewell is president of the National Center for Higher Education Management Systems (NCHEMS), a research and development center.


Parents and students pay a high price for college remediation, study finds


An inadequate high school education can get expensive for students when they need to take remedial courses in college, according to a new report.

