
White House Meeting Postponed Until Next Year

The White House on Friday postponed a meeting with an estimated 140 college leaders that had been scheduled for this week, according to notices administration officials distributed to invited participants. The event was slated to be a discussion of strategies to better serve lower-income students. To get in the door, the college presidents, state and local government officials and other invitees were asked to set a specific goal for improvement in areas such as remediation or enrollment of Pell Grant recipients.

The meeting was bumped, however, because of a trip President Obama and Michelle Obama are taking to South Africa this week to attend a memorial for Nelson Mandela. In emails to invitees, White House officials said they remained "100 percent" committed to holding the meeting on higher education, probably in January. In the meantime they encouraged participants to continue to work with the administration to further develop their student-success pledges.

Wisconsin Expands Competency-Based Offerings

Four more institutions will participate in the University of Wisconsin System's competency-based education program, which is dubbed the UW Flexible Option. System officials said the new offerings will be certificate programs aimed at adult and nontraditional students. They will include certificates in sales, geographic information systems and alcohol and drug abuse counseling, among others. Some will be non-credit programs, while others may soon be linked to "stackable" bachelor's degree tracks.

Conference Connoisseurs visit the City of Brotherly Love (and cheesesteaks)

Our conference-going gourmands check out the culinary treats of the City of Brotherly Love.


Lumina's 20 Partner Cities

The Lumina Foundation on Wednesday announced the first 20 cities it will partner with on localized college completion strategies. In January Lumina announced a shift in its approach, with a plan to spend $300 million over the next four years on rethinking financial aid, new delivery models of higher education and mobilizing key constituencies to boost completion rates. The foundation also said it would team up with cities; this group is the first batch. Each local area will receive up to $200,000 from Lumina, and foundation officials said more cities would be selected to participate over the next year.

The risks of assessing only what students know and can do (essay)

A central tenet of the student learning outcomes "movement" is that higher education institutions must articulate a specific set of skills, traits and/or dispositions that all of their students will learn before graduation. Then, through legitimate means of measurement, institutions must assess and publicize the degree to which their students make gains on each of these outcomes.

Although many institutions have yet to implement this concept fully (especially regarding the thorough assessment of institutional outcomes), this idea is more than just a suggestion. Each of the regional accrediting bodies now requires institutions to identify specific learning outcomes and demonstrate evidence of outcomes assessment as a standard of practice.

This approach to educational design seems at the very least reasonable. All students, regardless of major, need a certain set of skills and aptitudes (things like critical thinking, collaborative leadership, intercultural competence) to succeed in life as they take on additional professional responsibilities, embark (by choice or by circumstance) on a new career, or address a daunting civic or personal challenge. In light of the educational mission our institutions espouse, committing ourselves to a set of learning outcomes for all students seems like what we should have been doing all along.

Yet too often the outcomes that institutions select to represent the full scope of their educational mission, and the way that those institutions choose to assess gains on those outcomes, unwittingly limit their ability to fulfill the mission they espouse. For when institutions narrow their educational vision to a discrete set of skills and dispositions that can be presented, performed or produced at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision that would cultivate in students a self-sustaining approach to learning. What we measure dictates the focus of our efforts to improve.

As such, it’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. Instead of redesigning the college learning experience to alter the lifetime trajectory of an individual, we allow the whole to be nothing more than the sum of the parts -- because all we have done is swap one collection of parts for another. Although there may be value in establishing and implementing a threshold of competence for a bachelor’s degree (for which a major serves a legitimate purpose), limiting ourselves to this framework fails to account for the deeply held belief that a college experience should approach learning as a process -- one that is cumulative, iterative, multidimensional and, most importantly, self-sustaining long beyond graduation.

The disconnect between our conception of a college education as a process and our tendency to track learning as a finite set of productions (outcomes) is particularly apparent in the way that we assess our students' development as lifelong learners. Typically, we measure this construct with a pre-test and a post-test that track learning gains between the ages of 18 and 22 -- hardly a lifetime (the fact that a few institutions gather data from alumni 5 and 10 years after graduation doesn't invalidate the larger point).

Under these conditions, trying to claim empirically that (1) an individual has developed and maintained a perpetual interest in learning throughout their life, and that (2) this lifelong approach is directly attributable to their undergraduate education probably borders on the delusional. The complexity of life even under the most mundane of circumstances makes such a hypothesis deeply suspect. Yet we all know of students who experienced college as a process through which they found a direction that excited them and a momentum that carried them down a purposeful path that extended far beyond commencement.

I am by no means suggesting that institutions should abandon assessing learning gains on a given set of outcomes. On the contrary, we should expect no less of ourselves than substantial growth in all of our students as a result of our efforts. Designed appropriately, a well-organized sequence of outcomes assessment snapshots can provide information vital to tracking student learning over time and potentially increasing institutional effectiveness. However, because the very act of learning occurs (as the seminal developmental psychologist Lev Vygotsky would describe it) in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve.

If you think that assessing learning outcomes effectively is difficult, then assessing the quality of the learning process ought to send chills down even the most skilled assessment coordinator's spine. Defining and measuring the nature of process requires a very different conception of assessment -- and, for that matter, a substantially more complex understanding of learning outcomes.

Instead of merely measuring what is already in the rearview mirror (i.e., whatever has already been acquired), assessing the college experience as a process requires a look at the road ahead, emphasizing the connection between what has already occurred and what is yet to come. In other words, assessment of the learning that results from a given experience would include the degree to which a student is prepared or "primed" to make the most of a future learning experience (either one that is intentionally designed to follow immediately, or one that is likely to occur somewhere down the road). Ultimately, this approach would substantially improve our ability to determine the degree to which we are preparing students to approach life in a way that is thoughtful, proactively adaptable, and even nimble in the face of both unforeseen opportunity and sudden disappointment.

Of course, this idea runs counter to the way that we typically organize our students' postsecondary educational experience. For if we are going to track the degree to which a given experience "primes" students for subsequent experiences -- especially subsequent experiences that occur during college -- then the educational experience can't be so loosely constructed that the number of potential variations in the order of a student's experiences virtually equals the number of students enrolled at our institution.

This doesn't mean that we return to the days in which every student took the same courses at the same time in the same order. But it does require an increased level of collective commitment to the intentional design of the student experience -- a commitment to student-centered learning that will likely come at the expense of individual instructors' or administrators' preferences for which courses they teach or programs they lead, and when those might be offered.

The other serious challenge is the act of operationalizing a concept of assessment that attempts to directly measure an individual's preparation to make the most of a subsequent educational experience. But if we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes -- whether these outcomes are somehow connected or entirely independent of each other -- then we have to expand our approach to include process as well as product.

Only then can we actually demonstrate that the whole is greater than the sum of the parts, that in fact the educational process is the glue that fuses those disparate parts into a greater -- and qualitatively distinct -- whole.

Mark Salisbury is director of institutional research and assessment at Augustana College, in Illinois. He blogs at Delicious Ambiguity.


Performance funding isn't perfect, but a recent study shortchanges it (essay)

A recent research paper published by the Wisconsin Center for the Advancement of Postsecondary Education and reported on by Inside Higher Ed criticized states' efforts to fund higher education based in part on outcomes, in addition to enrollment. The authors, David Tandberg and Nicholas Hillman, hoped to provide a "cautionary tale" for those looking to performance funding as a "quick fix."

While we agree that performance-based funding is not the only mechanism for driving change, we certainly do not need impulsive conclusions that ignore positive results and financial context. With serious problems plaguing American higher education, accompanied by equally serious efforts across the country to address them, it is disheartening to see a flawed piece of research mischaracterize the work on finance reform and potentially set back one important effort, among many, to improve student success in postsecondary education.

As two individuals who have studied performance funding in depth, we know that performance funding is a piece of the puzzle that can provide an intuitive, effective incentive for adopting best practices for student success and encourage others to do so. Our perspective is based on the logical belief that tying some funding dollars to results will provide an incentive to pursue those results. This approach should not be dismissed in one fell swoop. 

We are dismayed that the authors were willing to assert an authoritative conclusion from such simplistic research. The study compares outcomes of states "where the policy was in force" to those where it was not -- as if "performance funding" is a monolithic policy everywhere it has been adopted.

The authors failed to differentiate among states in terms of when performance funding was implemented, how much money is at stake, whether performance funds are "add ins" or part of base funding formulas, the metrics used to define and measure "performance," and the extent to which "stop loss" provisions have limited actual change in allocations. These are critical design issues that vary widely and that have evolved dramatically over the 20-year period the authors used to decide if "the policy was in force" or not.

Treating this diverse array of unique approaches as one policy ignores the thoughtful work that educators and policy makers are currently engaged in to learn from past mistakes and to improve the design of performance funding systems. Even a well-designed study would probably fail to reveal positive impacts yet, as states are only now trying out new and better approaches -- certainly not the "rush" to adopting a "quick fix" that the authors assert. It could just as easily be argued that more traditional funding models actually harm institutions trying to make difficult and necessary changes in the best interest of students and their success (see here and here).

The simplistic approach is exacerbated by two other design problems. First, we find errors in the map indicating the status of performance funding. Texas, for example, has only recently implemented (passed in spring 2013) a performance funding model for its community colleges; it has yet to affect any budget allocations. The recommended four-year model was not passed. Washington has a small performance funding program for its two-year colleges but none for its universities. Yet the map shows both states with performance funding operational for both two-year and four-year sectors.

Second, the only outcome the authors examined was degree completions, because it "is the only measure that is common among all states currently using performance funding." While that may be convenient for running a regression analysis, it ignores current thinking about appropriate metrics that honor different institutional missions and provide useful information to drive institutional improvement. The authors make passing reference to different measures at the end of the article but made no effort to incorporate such realism or complexity into their statistical model.

On an apparent mission to discredit performance funding, the authors showed a surprising lack of curiosity about their own findings. They found eight states where performance funding had a positive, significant effect on degree production, but rather than examine why that might be, they found apparent comfort in the finding that there were "far more examples" of performance funding failing the significance tests.

"While it may be worthwhile to examine the program features of those states where performance funding had a positive impact on degree completions," they write, "the overall story of our state results serves as a cautionary tale." Mission accomplished.

In their conclusion they assert that performance funding lacks "a compelling theory of action" to explain how and why it might change institutional behaviors.

We strongly disagree. The theory of action behind performance funding is simple: financial incentives shape behaviors. Anyone doubting the conceptual soundness of performance funding is, in effect, doubting that people respond to fiscal incentives. The indisputable evidence that incentives matter in higher education is the overwhelming priority and attention that postsecondary faculty and staff have placed, over the years, on increasing enrollments and meeting enrollment targets under enrollment-driven budgets.

The logic of performance funding is simply that adding incentives for specified outcomes would encourage individuals to redirect a portion of that priority and attention to achieving those outcomes. To accept this logic is to affirm the potential of performance funding to change institutional behaviors and student outcomes. It is not to defend any and all versions of performance funding that have been implemented, many of which have been poorly done. And it is not to criticize the daily efforts of faculty and staff, who are committed to student success but cannot be faulted for doing what matters to maintain budgets.

Surely there are other means -- and more powerful means -- to achieve state and national goals of improving student success, as the authors assert. But just as surely it makes sense to align state investments with the student success outcomes that we all seek.
 

Nancy Shulock is executive director of the Institute for Higher Education Leadership & Policy at California State University at Sacramento, and Martha Snyder is senior associate at HCM Strategists.


Another Chance for Controversial Accreditor?

Education Department staff members have recommended that the Accrediting Commission for Community and Junior Colleges -- which evaluates community colleges in California -- be permitted to operate for another year, while it works on fixing problems that the department has identified. The recommendation may be accepted or rejected next week at a meeting of the National Advisory Committee on Institutional Quality and Integrity, which advises the education secretary on which accreditors to recognize. (Such recognition is crucial as students are eligible for federal student aid only if they enroll at institutions accredited by recognized accreditors.) The department notified the accreditor in August that it was out of compliance with many rules -- and that action cheered advocates for the City College of San Francisco. In July, the accreditor said that it would strip the college of its accreditation -- a decision that has led to intense scrutiny of the accreditor's review, which has been blasted by faculty unions and others as seriously flawed.

The Education Department's staff report says that there has been enough progress at fixing problems at the accreditation agency to give it another 12 months to improve, but it outlines areas of continued lack of compliance as well. Many of the remaining issues are broad and serious. Among them: "the agency must demonstrate wide acceptance of the agency's standards, policies, procedures, and decisions to grant or deny accreditation by educators" and "the agency must demonstrate that academic personnel, as generally defined by the accrediting agency and wider higher education community, are represented on its evaluation teams" and "the agency must demonstrate that it evaluates the appropriateness of the measures of student achievement chosen by its institutions." While these issues extend beyond the controversy over the City College of San Francisco, they relate to criticisms made of the accreditor's handling of that case.

 

 

2 Democrats plan legislation to promote competency-based ed and rate colleges


Two senators join the increasingly crowded Washington bandwagon pushing for alternative forms of higher education to gain access to federal funding. They also want college aid tied to institutional performance.

Studies question effectiveness of state performance-based funding


Studies at meeting of higher education researchers suggest that state policies that link funds to outcomes don't increase degree completion.

Google's $3.2 Million Grant to Help Student Veterans

Google on Wednesday announced a $3.2 million grant that four organizations will share to produce data-based research on how student veterans are faring in college. The Institute for Veterans and Military Families, Student Veterans of America, the Posse Foundation and the Veterans of Foreign Wars will study which colleges are the most successful at supporting student veterans, which campus programs have the biggest impact and how veterans' fields of study match up with employment opportunities. The resulting report will be made public, Google officials said, and the company will fund the expansion of programs that are found to be the most effective.
