Assessment / Accountability

Impact of Academic Preparation on Dropout Rates

The academic preparation of incoming college students has a strong impact on dropout rates, according to a newly released report from ACT, the nonprofit testing organization. The findings show that students who earn lower scores on college readiness assessments face the greatest risk of dropping out, particularly students with less-educated parents.

New Compilation of Articles on the Completion Agenda

Inside Higher Ed is today releasing a free compilation of articles -- in print-on-demand format -- about the drive to increase the number of Americans with college credentials. The articles reflect the challenges colleges face and some of the key strategies they are adopting. Download the booklet here.

This booklet is part of a series of such compilations that Inside Higher Ed is publishing on a range of topics.

On Monday, April 28, at 2 p.m. Eastern, Inside Higher Ed editors Scott Jaschik and Doug Lederman will conduct a free webinar to talk about the issues raised in the booklet's articles. To register for the webinar, please click here.

Data Standards for Workforce Credentials

A newly formed coalition of 20 states is trying to create joint data standards and data sharing agreements for non-degree credentials, like industry certifications. While demand is high for these credentials, data is scarce on whether students are able to meet industry-specified competencies. The Workforce Credentials Coalition, which held its first meeting at the New America Foundation on Monday, wants to change that by developing a unified data framework between colleges and employers. Community college systems in California and North Carolina are leading the work.

Also this week, the Workforce Data Quality Campaign released a new report that describes states and schools that have worked to broker data-sharing agreements with certification bodies and licensing agencies. The goal of those efforts is to improve non-degree programs and to reduce confusion about the different types of credentials.

Basketball box scores include numerous stats -- so should a federal ratings system (essay)

Dear Secretary of Education Arne Duncan:

Congratulations on your MVP award at the NBA Celebrity All-Star game: 20 points, 8 boards, 3 assists and a steal -- you really filled up that stat sheet. Even the NBA guys were amazed at your ability to play at such a high level -- still. Those hours on the White House court are paying off!

Like you, I spent some time playing overseas after college and have long been a consumer of basketball box scores -- they tell you so much about a game. I especially like the fact that the typical box score counts assists, rebounds and steals — not just points. I have spent many hours happily devouring box scores, mostly in an effort to defend my favorite players (who were rarely the top scorers).

As a coach of young players, my wife Michele and I (she is the real player in the family) expanded the typical box score — we counted everything in the regular box score, then added “good passes,” “defensive stops,” “loose ball dives” and anything else we could figure out a way to measure. This was all part of an effort to describe for our young charges the “right way” to play the game. I think you will agree that “points scored” rarely tells the full story of a player’s worth to the team.

Mr. Secretary, I think the basketball metaphor is instructive when we “measure” higher education, which is a task that has taken up a lot of your time lately. If you look at all the higher education “success” measures as a basketball box score instead of a golf-type scorecard, it helps clarify two central flaws.

First, exclusivity. Almost every single higher education scorecard fails to account for the efforts of more than half of the students actually engaged in “higher” education.

At Mount Aloysius College, we love our Division III brand of Mountie basketball, but we don’t have any illusions about what would happen if we went up against those five freshman phenoms from Division I Kentucky (or UConn/Notre Dame on the women’s side) -- especially if someone decided that half our points wouldn’t even get counted in the box score.

You see, the databases for all the current higher education scorecards focus exclusively on what the evaluators call “first-time four-year bachelor’s-degree-seeking students.” Nothing wrong with these FTFYBDs, Mr. Secretary, except that they represent less than half of all students in college, yet are the only students the scorecards actually “count.”

None of the following “players” show up in the box score when graduation rates are tabulated:

  • Players who are non-starters (that is, they aren’t FTFYBDs) — even if they play every minute of the last three quarters, score the most points and graduate on time. These are students who transfer (usually to save money, sometimes to take care of family), spring enrollees (increasingly popular), part-time students and mature students (who usually work full-time while going to school).
  • Any player on the team, even a starter, who has transferred in from another school. If you didn’t start at the school from which you graduated, then you don’t “count,” even if you graduate first in your class!
  • Any player, even if she is the best player on the team, who switches positions during the game: Think two-year degree students who switch to a four-year program, or four-year degree students who instead complete a two-year degree (usually because they have to start working).
  • Any player who is going to play for only two years. This includes every student at a community college, as well as graduates who earn a registered-nurse degree in two years and go right to work at a hospital (even if they later complete a four-year bachelor’s degree, they still don’t count).
  • Any scoring by any player that occurs in overtime: Think mature and second-career students who never intended to graduate on the typical schedule because they are working full time and raising a family.

The message sent by today’s flawed college scorecards is unavoidable: These hard-working students don’t count.

Mr. Secretary, I know that you understand how essential two-year degrees are to our economy; that students who need to transfer for family, health or economic reasons are just as valuable as FTFYBDs; and that nontraditional students are now the rule, not the exception. But current evaluation methods almost universally lag behind readily available data and are out of sync with the real lives of many students who simply don’t have the economic luxury of a fully financed four-year college degree. All five types of students listed above just don’t show up anywhere in the box score.

“Scorecards” should look more like box scores and include total graduation rates for both two- and four-year graduates (the current IPEDS overall grad rate), all transfer-in students (it looks like IPEDS may begin to track these), as well as transfer-out students who complete degrees (current National Student Clearinghouse numbers). These changes would yield a more accurate student success rate at all institutions.
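
The arithmetic behind this fuller box score is simple. Below is a minimal sketch in Python; all names and counts are hypothetical, and the real inputs would come from IPEDS and National Student Clearinghouse data.

```python
# A hypothetical "box score" completion rate: count every entering student
# (first-time and transfer-in) and every completion, wherever it occurs.

def box_score_rate(first_time_entrants, transfer_in_entrants,
                   completions_here, transfer_out_completions):
    all_students = first_time_entrants + transfer_in_entrants
    all_completions = completions_here + transfer_out_completions
    return all_completions / all_students

# Hypothetical college: 800 first-time entrants, 200 transfer-ins;
# 520 two- or four-year completions here (330 of them by first-time,
# full-time students), plus 130 students who finish degrees elsewhere.
ftfybd_rate = 330 / 800                         # today's scorecard view
full_rate = box_score_rate(800, 200, 520, 130)  # the "box score" view
print(f"FTFYBD-only rate: {ftfybd_rate:.0%}")   # 41%
print(f"Box-score rate:   {full_rate:.0%}")     # 65%
```

The same students, counted two ways: the gap between the two numbers is the half of the roster that today’s scorecards leave out.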

Another relatively easy fix would be to break out cohort comparisons that would allow Scorecard users to see how institutions perform when compared to others with a similar profile (as in the Carnegie Classifications).

The second issue is fairness.

Current measurement systems make no effort to account for the difference between (in basketball terms) Division I and Division III, between “highly selective schools” that “select” from the top echelons of college “recruits” and those schools that work best with students who are the first in their families to go to college, or low-income, or simply less prepared (“You can’t coach height,” we used to say).

As much as you might love the way Wisconsin-Whitewater won this year’s Division III national championship (last-second shot), I don’t think even the most fervent Warhawks fan has any doubt about how they would fare against Coach Bo Ryan’s Division-I Wisconsin Badgers. The Badgers are just taller, faster, stronger — and that’s why they’re in Division I and why they made it to the Final Four.

The bottom line on fairness is that graduation rates track closely with family income, parental education, Pell Grant eligibility and other obvious socioeconomic indicators. These data are consistent over time and truly incontrovertible.

Mr. Secretary, I know that you understand in a personal way how essential it is that any measuring system be fair. And I know you are already working on this problem, on a “degree of difficulty” measure much like the hospital “acuity index” in use in the health care industry.

The classification system that your team is working on right now could assign a coefficient that weighs these measurable mitigating factors when posting outcomes. Such a coefficient would score schools more fairly, and it would help identify those institutions that are doing the best job of serving the students with the fewest advantages.

In the health care industry, patients are assigned “acuity levels” (based on a risk-adjustment methodology), numbers that reflect a patient’s condition upon admission to a facility. The intent of this classification system is to consider all mitigating factors when measuring outcomes and thus to provide consumers accurate information when comparing providers. A similar model could be adopted for measuring higher education outcomes.

This would allow consideration of factors like (1) Pell eligibility rates, (2) income relative to poverty rates, (3) the percentage of students who are first-generation-to-college and (4) SAT scores. Such “degree of difficulty” factors, like “acuity levels,” would provide consumers accurate information for purposes of comparison.
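
To make the idea concrete, here is a minimal sketch of such a risk adjustment in Python. The weights are invented for illustration; a real “degree of difficulty” model would estimate them from national data, the way hospital risk-adjustment models do.

```python
# Hypothetical risk adjustment: predict an expected graduation rate from an
# institution's student profile, then compare actual performance against it.

def expected_grad_rate(pell_share, first_gen_share, median_sat):
    base = 0.80                       # invented rate for a low-challenge profile
    rate = (base
            - 0.30 * pell_share       # more Pell students -> lower expected rate
            - 0.15 * first_gen_share  # more first-generation students, likewise
            + 0.0002 * (median_sat - 1100))
    return max(0.0, min(1.0, rate))

# Two invented institutions with the same 55% raw graduation rate:
for name, pell, first_gen, sat in [("Selective U", 0.15, 0.10, 1350),
                                   ("Access College", 0.60, 0.45, 1000)]:
    actual, expected = 0.55, expected_grad_rate(pell, first_gen, sat)
    print(f"{name}: actual {actual:.0%}, expected {expected:.0%}, "
          f"difference {actual - expected:+.0%}")
```

Run on these invented profiles, the identical 55 percent raw rate looks like underperformance at the selective school (expected 79 percent) and overperformance at the access-oriented one (expected 53 percent) -- exactly the distinction an acuity-style coefficient is meant to surface.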

Absent such a calculation, colleges will continue to have every incentive to “cream” their admissions, and every disincentive against serving the students you have said are central to our economic future, including two-year, low-income and minority students. That’s the “court” that schools like Mount Aloysius and 16 other Mercy colleges play on. We love our FTFYBDs, but we work just as hard on behalf of the more than 50 percent of our students whose circumstances require a less traditional but no less worthy route to graduation. We think they count, too.

Thanks for listening.


Thomas P. Foley is president of Mount Aloysius College.

[Image: Arne Duncan (center) receiving the MVP trophy at the NBA celebrity all-star game. Source: NBA]

Mixed Assessment of Institutional Research Offices

Institutional research offices at public colleges and universities that are part of state systems focus more heavily on data collection and report writing than on analysis and communication, and they spend far more of their time examining student retention and graduation than issues related to campuses' use of money, people and facilities, the National Association of System Heads says in a new report. The study, based on surveys of campus and system IR officials and interviews with campus leaders, finds that IR officials themselves are more confident than their bosses about whether institutional research offices can adapt to the increased demands on their institutions to use data to improve their performance.

"IR offices are running hard and yet many are still falling behind, deluged by demands for data collection and report writing that blot out time and attention for deeper research, analysis and communication," the report states. Institutional leaders "often expressed the need for some ‘outside’ help in this area, drawing from expertise from other complex organizations such as hospitals, where there is a sense that more is being done to use data to drive both accountability and change."


Competency-based learning isn't a panacea, but may be one answer (essay)

Amy Slaton's February 21 essay is a good example of how a well-intentioned effort to defend the value of higher education ends up portraying competency-based education as something it’s not and perpetuating the view that there is only one true approach to higher education.

To understand the recent focus on competency-based education, it’s important to recognize a few critical realities.

First, the cost of higher education rose more than 600 percent from 1980 to 2010 -- a rise more rapid than that of any other major good or service in the United States, including health care.

Second, state support dropped in 2012 to its lowest level in 25 years.

Third, technology has yet to generate the dramatic cost savings we’ve seen in other arenas. For example, in 1900, the average American family spent 50 percent of its income on food and more than half of the American workforce was engaged in farming. Today, food consumes just 8 percent of household income and farming requires only 2 percent of the labor force.

Fourth, the American public has very mixed feelings about higher education. On the one hand, we know that better-educated individuals are happier on average, make better personal financial decisions, suffer fewer spells of unemployment and enjoy better health. On the other hand, there is a widely shared view that higher education is overpriced, inefficient, elitist, and inaccessible.

Fifth, research by Richard Florida (The Rise of the Creative Class) and Thomas Friedman (That Used to Be Us) and others has shown the importance of higher education to the future welfare of this country, just as global competition is mounting and our worldview is being shaken.

This is the reality in which higher education is operating as it tries to solve the problems of access and cost, while protecting quality and rigor.

Slaton’s solution to the cost problem is to typographically shout that higher education should get “MORE MONEY (as in, public funding).”

Unfortunately, shouting and wishing it so seldom works. The fact is that Americans are not willing to spend more money on the public good, let alone agree on what the public good is. The bottom line is that higher education is going to have to help itself; no one is coming to its rescue.

Enter competency-based education. It was introduced in America towards the end of the 1960s, but it applied only to small niche markets. Back then, cost and access were not the acute problems that they have become. The reason that new models are emerging now is that competency-based education is a well-conceived effort to meet at least some of the challenges facing higher education today. It is not the only effort, but it is promising because if done well, it addresses the issues of cost, quality, scaling and individualized learning all at once.

Competency-based education is a team effort. As in traditional higher education, faculty remain center stage; they are the experts and the specialists. They set the standards and the criteria for success. They decide what students must know and how they must be able to demonstrate their knowledge in order to qualify for a degree.

Faculty in competency-based education work collaboratively to determine the structure of curriculum as a whole, the levels of competencies, and assessments that best measure competency. When constructed well, a competency-based curriculum is tight, with little ambiguity about how students must perform to demonstrate mastery, move through the program, and qualify for a degree.

The individualized nature of teaching changes. In their relationships with students, faculty function more like tutors and academic quality guarantors, attending to those students who need their expertise the most. Other staff, including advisers, coaches, professional tutors, instructional designers, and others, all pull in the same direction to make the learning and mastery process for students individualized, comprehensive, effective and efficient.

In his January 30 piece on Inside Higher Ed, Paul LeBlanc wrote that competency-based education "offers a fundamental change at the core of our higher education ‘system’: making learning non-negotiable and the claims for learning clear while making time variable. This is a profound change and stands to reverse the long slow erosion of quality in higher education.” 

Competency-based education is not a panacea that will save higher education, but no one claims that it is. It is one approach to higher education that expands students’ options for learning and most importantly, expands their access while focusing on what they know and are able to do (instead of focusing on how many hours students spend in a classroom or the number of credits they pay for).

Today 40 percent of college students are nontraditional (U.S. Department of Education): they work full time, they have families, they care for aging parents and they attend to myriad responsibilities that make going to college in the traditional time blocks impractical if not impossible. In addition, many adult students have knowledge and experiences that are worthy of academic recognition that’s unavailable through traditional programs.

The view that the status quo is the only correct model of teaching and learning is the kind of hubris that makes higher education appear haughty and conceited, rather than a vehicle for growth and opportunity. Competency-based education is a viable and important approach that provides students with another option for accessing and benefiting from higher education. We should support its development, and we should strongly encourage students to take ownership of their degrees and discover their unique identities.

If not this, what else is higher education for?

David Schejbal is dean of continuing education, outreach and e-learning at University of Wisconsin-Extension.


Deeper Completion Data, State by State

The National Student Clearinghouse Research Center today released state-by-state data on the various pathways students take on their way to earning degrees and certificates. The data builds on a national report from 2012 that painted a more optimistic picture of college completion than previous studies had.

According to the report, 13 percent of students nationwide who first enrolled at a four-year public institution completed their credential at a different college. And 3.6 percent of students who began at a four-year public institution earned their first degree or certificate at a community college. Among other findings, the report also gives state-specific breakdowns of the proportion of students who began at community colleges and eventually completed at four-year institutions.

Audit Criticizes Calif. Agency That Oversees Private Colleges

California's Bureau for Private Postsecondary Education has "consistently failed to meet its responsibility to protect the public's interests," a state audit released Wednesday said. The report from the California State Auditor cited a list of the agency's shortcomings, including long backlogs and delays in processing license applications, failure to "identify proactively and sanction effectively unlicensed institutions," and far too few inspections of institutions. The bureau, which the legislature created in 2009 after the state's previous regulatory body was killed, challenged the audit's negative conclusion but agreed with its recommendations for improving the agency's performance going forward.

We need a new student data system -- but the right kind of one (essay)

The New America Foundation’s recent report on the Student Unit Record System (SURS) is fascinating reading.  It is hard to argue with the writers’ contention that our current systems of data collection are broken, do not serve the public or policy makers very well, and are no better at protecting student privacy than their proposed SURS might be. 

It also lifts the veil on One Dupont Circle and Washington's behind-the-scenes lobbying and politics -- a view that is delicious and also troubling, if not exactly "House of Cards" dramatic. Indeed, it is good wonkish history and analysis and sets the stage for a better-informed debate about any national unit record system.

As president of a nonprofit private institution and a paid-up member of NAICU -- the sector and its representative organization in D.C. that respectively stand as SURS roadblocks in the report's telling -- I find myself both in support of a student unit record system and worried about the things it wants to record. Privacy, the principal argument mounted against such a system, is not my worry, and I tend to agree with the report's argument that it is the canard that masks the real reason for opposition: institutional fear of accountability.

Our industry is a troubled one, after all, that loses too many students (Would we accept a 50 percent success rate among surgeons and bridge builders?) and often saddles them with too much debt, and whose outputs are increasingly questioned by employers.

The lack of a student record system hinders our ability to understand our industry, as New America's Clare McCann and Amy Laitinen point out, and the higher education landscape grows ever more challenging for consumers to navigate. A well-designed SURS would certainly help with the former and might eventually help with the latter, though college choices have so much irrationality built into them that consumer education is only one part of the issue. But what does “well-designed” mean here? This is where I, like everyone, get worried.

For me, three design principles must be in place for an effective SURS:

Hold us accountable for what we can control. This is a cornerstone principle of accountability and data collection. As an institution, we should be held accountable for what students learn, for their readiness for their chosen careers and for giving them all the tools they need to go out there and begin their job search. Fair enough. But don't hold me accountable for what I can't control:

  • The labor market. I can't create jobs where they don't exist, and the struggles of undeniably well-prepared students to find good-paying, meaningful jobs say more about the economy, the ways in which technology is replacing human labor, and the choices that corporations make than about my institution's effectiveness. If the government wants to hold us accountable for earnings post-graduation, can we hold it accountable for making sure that good-paying jobs are out there?
  • Graduate motivation and grit. My institution can do everything in its power to encourage students to start their job search early, to do internships and network, and to be polished and ready for that first interview. But if a student chooses to take that first year to travel, to be a ski bum, or simply to stay in their home area when jobs in their discipline might be in Los Angeles or Washington or Omaha, there is little I can do. Yet those choices have a lot of impact on measured earnings just after graduation.
  • Irrational passion. We should arm prospective students with good information about their majors: job prospects, average salaries, geographic demand, how recent graduates have fared.  However, if a student is convinced that being a poet or an art historian is his or her calling, to recall President Obama’s recent comment, how accountable is my individual institution if that student graduates and then struggles to find work? 

We wrestle with these questions internally. We talk about capping majors that seem to have diminished demand, putting in place differential tuition rates, and more. How should we think about our debt-to-earnings ratio? None of this is an argument against a unit record system, but a plea that it measure things that are more fully in our institutional control. For example, does it make more sense to measure earnings three or five years out, which at least gets us past the transitional period into the labor market and allows for some evening out of the flux that often attends those first years after graduation?

Contextualize the findings. As has been pointed out many times, a 98 percent graduation rate at a place like Harvard is less a testimony to its institutional quality than evidence of its remarkably talented incoming classes of students.  Not only would a 40 percent graduation rate at some institutions be a smashing success, but Harvard would almost certainly fail those very same students. As McCann and Laitinen point out, so much of what we measure and report on is not about students, so let’s make sure that an eventual SURS provides consumer information that makes sense for the individual consumer and institutional sector. 

If the consumer dimension of a student unit record system is to help people make wise choices, it can’t treat all institutions the same and it should be consumer-focused.  For example, can it be “smart” enough to solicit the kind of consumer information that then allows us to answer not only the question the authors pose, “What kinds of students are graduating from specific institutions?” but “What kinds of students like you are graduating from what set of similar institutions and how does my institution perform in that context?”

This idea extends to other items we might and should measure. For example, is a $30,000 salary for an elementary school teacher in a given region below, at or above the average for a newly minted teacher three years after graduation? How, then, are my teaching graduates doing compared to those from similar institutions? Merely reporting the number without context is not very useful. It's all about context.
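
That contextual comparison is easy to express. Here is a minimal sketch in Python; the benchmark figures are invented, and real ones would come from labor-market data for the region and field.

```python
# Hypothetical benchmark: place a graduate's salary relative to the
# regional average for the same field, measured in standard deviations.

def salary_context(salary, regional_mean, regional_sd):
    z = (salary - regional_mean) / regional_sd
    band = "below" if z < -0.5 else "above" if z > 0.5 else "near"
    return z, band

# Invented region: new elementary teachers average $31,500 (sd $3,000).
z, band = salary_context(30_000, 31_500, 3_000)
print(f"$30,000 is {band} the regional average (z = {z:.2f})")
# -> $30,000 is near the regional average (z = -0.50)
```

The raw $30,000 figure tells a consumer little; the same number placed against a regional benchmark tells them whether a program's graduates are keeping pace.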

What we measure will matter. This is obvious, and it speaks both to the power of measurement and to the specter of unintended consequences. A cardiologist friend commented to me that his unit's performance is measured in various ways and that the simplest way for him to improve its mortality metric is to take fewer very sick heart patients. He of course worries that such a decision contradicts his mission and the reason he practices medicine. It continues to bother me that proposed student record systems don't measure learning, the thing that matters most to my institution. More precisely, they don't measure how much we have moved the dial for any given student -- how impactful we have been.

Internally, we have honed our predictive analytics based on student profile data and can measure impact pretty precisely.  Similarly, if we used student profile data as part of the SURS consumer function, we might be able to address more effectively both my first and second design principles. 

Imagine a system that was smart enough to say “Based on your student profile, here is the segment of colleges similar students most commonly attend, what the average performance band is for that segment, and how a particular institution performs within that band across these factors.…”  We would address the thing for which we should be held most accountable, student impact, and we’d provide context. And what matters most -- our ability to move students along to a better education -- would start to matter most to everyone and we’d see dramatic shifts in behaviors in many institutions.
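
A minimal sketch of what that consumer-facing answer might look like follows, with every segment, band and figure invented for illustration.

```python
# Hypothetical profile-matched comparison: map a student profile to a
# segment of similar institutions, then place one college within that
# segment's typical graduation-rate band.

SEGMENTS = {
    "first_gen_working_adult": ("access-oriented colleges", (0.35, 0.55)),
    "traditional_high_sat": ("selective residential colleges", (0.75, 0.95)),
}

INSTITUTION_RATES = {"Hypothetical College": 0.52}

def compare(profile, institution):
    segment, (low, high) = SEGMENTS[profile]
    rate = INSTITUTION_RATES[institution]
    position = "above" if rate > high else "below" if rate < low else "within"
    return (f"Students like you most often attend {segment}, where "
            f"graduation rates run {low:.0%}-{high:.0%}. {institution} "
            f"({rate:.0%}) performs {position} that band.")

print(compare("first_gen_working_adult", "Hypothetical College"))
```

The point is not the particular numbers but the shape of the answer: performance reported within a peer band, for students like the one asking.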

This is the hard one, of course, and I’m not saying that we ought to hold up a SURS until we work it out. We can do a lot of what I’m calling for and find ways to at least let institutions supplement their reports with the claims they make for learning and how they know.  In many disciplines, schools already report passage rates on boards, C.P.A. exams, and more.  Competency-based models are also moving us forward in this regard. 

These concerns are not insurmountable hurdles to a national student unit record system. New America makes a persuasive case for putting such a system in place, and I and many of my colleagues in the private, nonprofit sector would support one.

But we need something better than a blunt instrument that replaces one kind of informational fog with another. That is their goal too, of course, and we should now step back from looking only at what kinds of data we can collect to consider our broader design principles: what kinds of things we should collect and how we can best make sense of that data for students and their families.

Their report gives us a lot of the answer and smart guidance on how a system might work.  It should also be our call to action to further refine the design model to take into account the kinds of challenges outlined above.

Paul LeBlanc is president of Southern New Hampshire University.

Accreditor Places Martin University on Probation

Indiana's Martin University has been placed on probation by the Higher Learning Commission of the North Central Association of Colleges and Schools, which cited concerns about the institution's finances and governance and the adequacy of its faculty and staff. The commission also placed several other institutions -- Arkansas Baptist College, Oglala Lakota College, Southwestern Christian University, and Salem International University -- on notice, which is less severe than probation. Kansas City Art Institute and Morton College were removed from notice.
